
Is Hadoop MapReduce suitable for processing any kind of data?

 
Ranch Hand
Posts: 1299
Hadoop MapReduce works on key-value pair logic, but the data to be processed can be of any kind. So is Hadoop MapReduce suitable for processing any kind of data, or only data that naturally fits the form of key-value pairs?

thanks
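To make the key-value question concrete, here is a minimal plain-Java sketch (not the Hadoop API, just an illustration) of how arbitrary text becomes key-value pairs through map, shuffle, and reduce — a word count, where the mapper emits (word, 1) and the reducer sums per key:

```java
import java.util.*;

public class WordCountSketch {
    // "Map": turn a raw line into (key, value) pairs -- any input can be
    // expressed as pairs by choosing what the key and value mean.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) pairs.add(Map.entry(word, 1));
        }
        return pairs;
    }

    // "Reduce": sum all values that the shuffle grouped under one key.
    static int reduce(List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        String[] input = {"the quick brown fox", "the lazy dog"};
        // "Shuffle": group values by key, as the framework does between phases.
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (String line : input) {
            for (Map.Entry<String, Integer> p : map(line)) {
                grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>())
                       .add(p.getValue());
            }
        }
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            System.out.println(e.getKey() + "\t" + reduce(e.getValue()));
        }
    }
}
```

The point is that the pairs are whatever you define them to be; the framework only cares that there is a key to group on.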
 
Greenhorn
Posts: 13
As far as I know, the mapper and reducer return their results as key-value pairs, and the value can be an object of any type as long as you respect Hadoop's internal serialization/deserialization.
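Hadoop's serialization contract (the Writable interface) boils down to a write(DataOutput) and a readFields(DataInput) that agree on the byte layout. A plain-Java sketch of a custom value round-tripped through that style of serialization (the PageVisit class here is a made-up example, not a Hadoop type):

```java
import java.io.*;

public class WritableSketch {
    // A custom value type serialized Writable-style: write() and
    // readFields() must agree on the field order and byte layout.
    static class PageVisit {
        String url;
        int hits;

        void write(DataOutput out) throws IOException {
            out.writeUTF(url);
            out.writeInt(hits);
        }

        void readFields(DataInput in) throws IOException {
            url = in.readUTF();
            hits = in.readInt();
        }
    }

    public static void main(String[] args) throws IOException {
        PageVisit original = new PageVisit();
        original.url = "/index.html";
        original.hits = 42;

        // Round-trip through bytes, as Hadoop does when moving records
        // between the map and reduce phases.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));

        PageVisit copy = new PageVisit();
        copy.readFields(new DataInputStream(
                new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(copy.url + " " + copy.hits);
    }
}
```

Any object you can serialize this way can travel through the framework as a value.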

In other cases, you can run the mapper with zero reducers if you don't need to aggregate the results; it depends on the business requirement.
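A map-only job is just a per-record transform with no grouping phase; in the real Hadoop API you would call job.setNumReduceTasks(0). The idea can be sketched in plain Java as:

```java
import java.util.*;
import java.util.stream.*;

public class MapOnlySketch {
    public static void main(String[] args) {
        List<String> input = Arrays.asList(
                "error: disk full", "info: started", "error: timeout");

        // With zero reducers, each mapper's output is written directly to
        // the job's output files -- effectively a filter/transform per
        // record, with no shuffle and no aggregation.
        List<String> mapped = input.stream()
                .filter(line -> line.startsWith("error:"))
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        mapped.forEach(System.out::println);
    }
}
```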

If you have a look at Hive or Pig: when you execute a pseudo-SQL request against the data nodes, it calls MapReduce across all the nodes to give back the result dataset. But key/value remains the core principle of the MapReduce mechanism.

Hadoop 1 uses only MapReduce.

Hadoop 2 now gives more possibilities: it uses application containers per node (YARN). MapReduce v2 can optionally be run to keep compatibility with Hadoop 1, or you can use another application mechanism (like HBase or Spark creating applications under YARN, without MapReduce batch processing).
 