Hadoop MapReduce

Ralph Hoch
Greenhorn

Joined: Jun 04, 2011
Posts: 4
Hi,

I'm new to Hadoop and I'm trying to figure out how it works. As an exercise, I am supposed to implement something similar to the WordCount example. The task is to read in several files, run the word count, and write an output file for each input file.
Hadoop shuffles the map output (optionally pre-aggregated by a combiner) and feeds it to the reducers, which then write one output file per reducer instance. I was wondering whether it is possible to write one output file for each input file instead (i.e., keep the words of inputfile1 together and write the result to outputfile1, and so on). Is it possible to override the Combiner class, or is there another solution for this? (I'm not sure whether this should even be solved within a Hadoop task, but that is the exercise.)

Thanks...
Satyaprakash Joshii
Ranch Hand

Joined: Jun 18, 2012
Posts: 131
You need to process each file separately: run a separate MapReduce job for each input file, and set the number of reducers to at most 1 so that each job produces a single output file for its input file.
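In Hadoop terms that means launching one Job per input path and calling job.setNumReduceTasks(1) on each, so counts from different files are never merged in the shuffle. The per-file semantics that each such job computes can be sketched in plain Java (class and method names here are illustrative, not Hadoop API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PerFileWordCount {

    // Counts word occurrences in one input's text, mirroring what a
    // single map/reduce pass over a single file produces.
    static Map<String, Integer> wordCount(String text) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (token.isEmpty()) {
                continue;
            }
            counts.merge(token, 1, Integer::sum);
        }
        return counts;
    }

    // One result map per input "file". Counts from different inputs are
    // never combined, which is exactly what running a separate job per
    // input file (rather than one job over all files) achieves.
    static Map<String, Map<String, Integer>> countPerFile(Map<String, String> files) {
        Map<String, Map<String, Integer>> results = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : files.entrySet()) {
            results.put(e.getKey(), wordCount(e.getValue()));
        }
        return results;
    }

    public static void main(String[] args) {
        Map<String, String> files = new LinkedHashMap<>();
        files.put("inputfile1", "hadoop map reduce map");
        files.put("inputfile2", "map once");
        System.out.println(countPerFile(files));
    }
}
```

Note that because each job sees only one input file and runs with a single reducer, its lone output file corresponds one-to-one to that input, with no changes needed to the combiner.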