Hadoop MapReduce works on key-value pair logic, but the data to be processed can be of any kind. So is Hadoop MapReduce suitable for processing any kind of data, or only data that fits naturally into key-value pairs?
From what I know, the mapper/reducer return their results as key-value pairs, and the value can be an object of any kind as long as it respects Hadoop's internal serialization/deserialization (the Writable interface).
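To illustrate the key-value contract described above, here is a minimal pure-Python sketch (not the Hadoop API itself) of the map → shuffle/sort → reduce flow, using word count. The function names are my own; in real Hadoop the keys and values would be Writable types and the shuffle happens across the cluster:

```python
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Emit one (word, 1) pair per word. The value could be any
    # serializable object; here it is just an int.
    for word in line.split():
        yield (word.lower(), 1)

def reducer(key, values):
    # Receives all values grouped under one key after the shuffle.
    yield (key, sum(values))

def run_mapreduce(lines):
    # Map phase: every input record becomes zero or more key-value pairs.
    pairs = [kv for line in lines for kv in mapper(line)]
    # Shuffle/sort phase: group pairs by key, as Hadoop does between phases.
    pairs.sort(key=itemgetter(0))
    # Reduce phase: one reducer call per distinct key.
    result = {}
    for key, group in groupby(pairs, key=itemgetter(0)):
        for k, v in reducer(key, (v for _, v in group)):
            result[k] = v
    return result
```

The point is that any data (text, logs, objects) can be processed, as long as the mapper decides what the key and value are.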
In other cases, you can run the mappers with 0 reducers if you don't need a reduce step; it depends on the business requirement.
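In the Java API a map-only job is configured with `job.setNumReduceTasks(0)`; the mapper output is then written directly with no shuffle/sort. A tiny Python sketch of what such a job does (the record fields are hypothetical, chosen just for illustration):

```python
def mapper(record):
    # Map-only job: filter records, no aggregation needed,
    # so there is no reduce phase at all.
    if record.get("status") == "error":
        yield record

def run_map_only(records):
    # With zero reducers, mapper output goes straight to the output
    # files; no grouping by key ever happens.
    return [out for r in records for out in mapper(r)]
```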
If you have a look at Hive/Pig: when you execute a pseudo-SQL query, it launches MapReduce jobs across the datanodes to give back the result dataset.
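As a rough idea of how Hive translates SQL into MapReduce, a query like `SELECT dept, SUM(salary) FROM emp GROUP BY dept` maps onto the same key-value mechanism: the GROUP BY column becomes the key, and the reducer performs the aggregate. A hedged Python sketch (column names `dept`/`salary` are invented for the example):

```python
from collections import defaultdict

def mapper(row):
    # Emit the GROUP BY column as the key and the aggregated
    # column as the value.
    yield (row["dept"], row["salary"])

def reduce_sum(pairs):
    # The shuffle groups values by key; the reducer sums per key,
    # like SUM(...) GROUP BY in Hive.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

def group_by_sum(rows):
    return reduce_sum(kv for row in rows for kv in mapper(row))
```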
But yes, key-value is the core principle of the MapReduce mechanism.
Hadoop 1 uses only MapReduce.
Hadoop 2 now gives more possibilities: it runs application containers on each node (YARN). MapReduce v2 can optionally be used, which preserves compatibility with Hadoop 1,
or you can use other application frameworks (like HBase or Spark running as applications under YARN, without MapReduce batch processing).