As far as I know, the mapper and the reducer give back their results as key/value pairs; the value can be an object of any type, as long as it respects Hadoop's internal serialization/deserialization (in practice, by implementing the Writable interface).
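As a minimal sketch of that contract, here is a word-count style mapper in Java (class and field names are my own, not from any particular codebase). Text and IntWritable are Hadoop's built-in Writable implementations, which is what lets them travel between mapper and reducer:

```
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (word, 1) pairs. The output types must be Writable so that
// Hadoop can serialize them across the cluster.
public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}
```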
In other cases you can run just the mapper, with 0 reducers, if you don't need a reduce step to produce the result; it depends on the business requirement.
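A rough sketch of such a map-only driver, reusing the mapper above; the key call is setNumReduceTasks(0), and the input/output paths are just placeholders taken from the command line:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only example");
        job.setJarByClass(MapOnlyDriver.class);
        job.setMapperClass(TokenMapper.class); // the mapper from the previous sketch
        job.setNumReduceTasks(0);              // 0 reducers: map output goes straight to HDFS
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

With 0 reduce tasks there is no shuffle/sort phase at all, so the job only pays for the map side.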
If you have a look at Hive or Pig: when you execute a pseudo-SQL query, it fires MapReduce jobs across the datanodes to give back the result dataset.
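For example (the hostname, port, user and table name here are all hypothetical), you can send such a query to HiveServer2 over JDBC, and Hive compiles it into distributed jobs for you:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// The SQL-looking query below is compiled by Hive into jobs
// (classically MapReduce) that run across the cluster.
public class HiveQueryExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://hive-server:10000/default", "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT word, COUNT(*) FROM words GROUP BY word")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
        }
    }
}
```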
But in any case, key/value is the core principle of the MapReduce mechanism.
Hadoop 1 only uses MapReduce.
Hadoop 2 now gives more possibilities: it runs application containers on the nodes through YARN. MapReduce v2 can optionally be used, to keep compatibility with Hadoop 1,
or you can use another application mechanism (like HBase or Spark, which create applications under YARN without MapReduce batch processing).
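For instance, here is a rough sketch of Spark running as a YARN application (in real life you would usually leave the master out of the code and pass --master yarn to spark-submit; the HDFS path is a placeholder):

```
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkOnYarnExample {
    public static void main(String[] args) {
        // "yarn" as the master means Spark asks YARN for containers
        // directly, instead of running as a MapReduce batch job.
        SparkSession spark = SparkSession.builder()
                .appName("spark-on-yarn-sketch")
                .master("yarn")
                .getOrCreate();
        Dataset<Row> df = spark.read().text("hdfs:///data/words.txt");
        System.out.println("lines: " + df.count());
        spark.stop();
    }
}
```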