Computing Reviews
iMapReduce: a distributed computing framework for iterative computation
Zhang Y., Gao Q., Gao L., Wang C. Journal of Grid Computing 10(1): 47-68, 2012. Type: Article
Date Reviewed: Sep 28 2021

MapReduce is one of the most popular frameworks for distributed processing. This paper aims to improve MapReduce performance on social networking and web data, where iterative processing otherwise makes it perform poorly. The work adds iterative processing features to MapReduce, coining the new term iMapReduce. With MapReduce, users must express an iterative algorithm as a chain of jobs and pay the cost of creating, scheduling, and destroying those jobs in every iteration, which results in performance penalties; iMapReduce, by contrast, eliminates the need for shuffling data between iterations and executes map tasks asynchronously. The new framework is claimed to perform 1.2 to 5 times better than MapReduce.
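
To make that penalty concrete, the job-per-iteration pattern looks roughly like the following Python sketch; the driver, submit_job, and the paths are hypothetical stand-ins for illustration, not the paper's or Hadoop's actual API.

def submit_job(map_fn, reduce_fn, input_path, output_path):
    # Stand-in for launching a full MapReduce job and waiting for it to finish;
    # in a real driver this is where job creation, scheduling, and teardown
    # costs are paid on every iteration.
    print(f"running job: {input_path} -> {output_path}")

def iterative_driver(map_fn, reduce_fn, initial_input, max_iterations=10):
    current = initial_input
    for i in range(max_iterations):                     # one full job per iteration
        output = f"iteration_{i}_output"
        submit_job(map_fn, reduce_fn, current, output)  # start-up/teardown each pass
        current = output                                # next job re-reads from the DFS
    return current

if __name__ == "__main__":
    iterative_driver(None, None, "input_graph")         # map/reduce functions omitted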

Serial jobs in MapReduce require the first job to finish before the next one starts, whereas iMapReduce allows a map cycle to start asynchronously as soon as its input data becomes available, without waiting for the preceding reduce cycle to complete. The map and reduce tasks are persistent, and the reduce output is fed directly to the map. This is accomplished by assigning the input data of the map and reduce cycles to the same slave worker.
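
That feedback structure can be sketched, very loosely, in a single Python process. The code below only illustrates how reduce output can flow straight back into the map's input state instead of through newly submitted jobs; it does not model the asynchronous, per-partition scheduling that iMapReduce actually performs, and all names are illustrative.

from collections import defaultdict

def run_persistent_tasks(map_fn, reduce_fn, state, iterations):
    # state: key -> value held by the (conceptually persistent) tasks
    for _ in range(iterations):
        # map phase over the current state
        intermediate = defaultdict(list)
        for key, value in state.items():
            for out_key, out_val in map_fn(key, value):
                intermediate[out_key].append(out_val)
        # reduce output is written back into the map's input state,
        # not into the DFS for a freshly created job to pick up
        state = {k: reduce_fn(k, vals) for k, vals in intermediate.items()}
    return state

if __name__ == "__main__":
    # toy example: repeatedly halve the value attached to each key
    result = run_persistent_tasks(
        map_fn=lambda k, v: [(k, v / 2.0)],
        reduce_fn=lambda k, vals: sum(vals),
        state={"a": 8.0, "b": 4.0},
        iterations=3,
    )
    print(result)  # {'a': 1.0, 'b': 0.5}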

Further, iMapReduce periodically migrates tasks for load balancing. To keep the corresponding map and reduce tasks on the same processor, they are migrated together, along with their state and static data. For fault tolerance, iteration state is kept in the local file system but made accessible in a distributed file system (DFS) manner, so that after a failure the computation resumes from the last saved iteration instead of starting a fresh one.
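
The recovery idea is essentially checkpoint-and-resume. A minimal sketch, assuming a simple key-value iteration state, local JSON files standing in for DFS-backed storage, and an arbitrary checkpoint interval (all assumptions of mine, not details from the paper), might look like this:

import glob
import json
import os

CHECKPOINT_DIR = "checkpoints"
CHECKPOINT_EVERY = 5  # assumed interval, not from the paper

def save_checkpoint(iteration, state):
    os.makedirs(CHECKPOINT_DIR, exist_ok=True)
    path = os.path.join(CHECKPOINT_DIR, f"iter_{iteration:06d}.json")
    with open(path, "w") as f:
        json.dump({"iteration": iteration, "state": state}, f)

def load_latest_checkpoint():
    files = sorted(glob.glob(os.path.join(CHECKPOINT_DIR, "iter_*.json")))
    if not files:
        return 0, None
    with open(files[-1]) as f:
        data = json.load(f)
    return data["iteration"], data["state"]

def run(step, initial_state, total_iterations):
    # resume from the last saved iteration rather than starting afresh
    iteration, state = load_latest_checkpoint()
    state = state if state is not None else initial_state
    while iteration < total_iterations:
        state = step(state)
        iteration += 1
        if iteration % CHECKPOINT_EVERY == 0:
            save_checkpoint(iteration, state)
    return state

if __name__ == "__main__":
    print(run(lambda s: {k: v + 1 for k, v in s.items()}, {"x": 0}, 12))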

Performance is compared using the single-source shortest path (SSSP) and PageRank algorithms, on a local commodity-hardware cluster and an Amazon Elastic Compute Cloud (EC2) cluster, using the Hadoop DFS (HDFS). For SSSP, a DBLP author cooperation graph is used: “each node represents an author and a link between two nodes represents the cooperation relationship between [them]”; the “link weight is set according to the cooperation frequency of the two linked authors.” Similarly, a Facebook user interaction graph is created and evaluated, where interaction frequency is used to assign weights to user friendship links. Two much larger synthetic graphs are generated using “the power-law parameters on the link weight and the node out-degree ... extracted from the two real graphs.” The data for PageRank is generated similarly: two real graphs, the Google web graph and the Berkeley-Stanford web graph, are used, and much larger synthetic graphs are generated from them.
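
For readers unfamiliar with the PageRank benchmark, it can be expressed as per-iteration map and reduce functions in the MapReduce style. The Python sketch below is a generic textbook formulation for illustration only, not the authors' implementation; the toy graph and damping factor are made up.

from collections import defaultdict

def pagerank_map(node, record):
    # record = (rank, out_links): emit this node's link structure so it
    # survives the iteration, plus its rank share for each neighbor
    rank, out_links = record
    yield node, ("links", out_links)
    for neighbor in out_links:
        yield neighbor, ("share", rank / len(out_links))

def pagerank_reduce(node, values, damping=0.85):
    out_links, incoming = [], 0.0
    for kind, payload in values:
        if kind == "links":
            out_links = payload
        else:
            incoming += payload
    return (1 - damping) + damping * incoming, out_links

def one_iteration(graph):
    # sequential stand-in for one map/reduce pass over the whole graph
    intermediate = defaultdict(list)
    for node, record in graph.items():
        for key, value in pagerank_map(node, record):
            intermediate[key].append(value)
    return {node: pagerank_reduce(node, vals) for node, vals in intermediate.items()}

if __name__ == "__main__":
    graph = {"a": (1.0, ["b", "c"]), "b": (1.0, ["c"]), "c": (1.0, ["a"])}
    for _ in range(10):
        graph = one_iteration(graph)
    print({n: round(r, 3) for n, (r, _) in graph.items()})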

Overall, the paper demonstrates interesting and useful work. Since HDFS and MapReduce can be easily installed on Linux, the framework is potentially useful for computer science (CS) graduate projects.

Reviewer: K R Chowdhary | Review #: CR147364
General (D.1.0)
Iterative Methods (G.1.4 ...)
Other reviews under "General":

Problems in programming
Vitek A., Tvrdy I., Reinhardt R., Mohar B. (ed), Martinec M., Dolenc T., Batagelj V. (ed), John Wiley & Sons, Inc., New York, NY, 1991. Type: Book (9780471930174). Date: Aug 1 1992

KNOs: KNowledge acquisition, dissemination, and manipulation Objects
Tsichritzis D., Fiume E., Gibbs S., Nierstrasz O. ACM Transactions on Information Systems 5(1): 96-112, 1987. Type: Article. Date: Nov 1 1987

Programmer perceptions of productivity and programming tools
Hanson S. (ed), Rosinski R. Communications of the ACM 28(2): 180-189, 1985. Type: Article. Date: Jul 1 1985
