Hadoop typical uses

Ranch Hand
Posts: 462
Scala jQuery Java
To the authors:

Can you explain scenarios where Hadoop really shines and why it is better than the competition?

What is the learning curve for an experienced Java developer?
Posts: 15
Regarding where it shines, it's the classic situation: if you have large volumes of structured or semi-structured data, and analytics that need to touch a lot of that data, then it's likely a good fit.

I suspect I'll make this point multiple times this week -- I view Hadoop as one component of the data processing systems I build, and I use it alongside traditional databases and data warehouses. If your use case requires you to pull specific items from a well-structured data set, then odds are you'll be much better off with a traditional RDBMS. Can you do it in Hadoop? Sure, but pick the best tool for the job. If your queries on the RDBMS turn into table scans because of how much data you need to process to generate your results, then in that case I'd consider Hadoop.

I find the Java APIs in Hadoop very well designed and easy to pick up. The biggest learning curve is more conceptual: learning how to take a particular problem and express it as a series of MapReduce jobs. You can find yourself with a series of MR jobs where the code in each map and reduce method is literally only a few lines. But put together in the MapReduce framework, the processing chain can do extremely sophisticated things. This is where the real experience will need to develop.
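To illustrate the "few lines per map and reduce method" point, here is the shape of the classic word-count job sketched in plain Java. This is not the Hadoop Mapper/Reducer API itself, just the concept under the hood: the map step emits (word, 1) pairs, the framework groups pairs by key, and the reduce step sums each group. The class and method names here are illustrative only.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WordCountSketch {

    // "Map" phase: split one input line into (word, 1) pairs.
    static List<Map.Entry<String, Integer>> map(String line) {
        return Arrays.stream(line.toLowerCase().split("\\s+"))
                .filter(w -> !w.isEmpty())
                .map(w -> (Map.Entry<String, Integer>) new SimpleEntry<>(w, 1))
                .collect(Collectors.toList());
    }

    // "Shuffle + reduce" phase: group pairs by word and sum the counts.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.summingInt(Map.Entry::getValue)));
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("to be or not to be", "be happy");
        List<Map.Entry<String, Integer>> mapped = lines.stream()
                .flatMap(l -> map(l).stream())
                .collect(Collectors.toList());
        Map<String, Integer> counts = reduce(mapped);
        System.out.println(counts.get("be")); // 3
        System.out.println(counts.get("to")); // 2
    }
}
```

The map and reduce bodies really are only a few lines each; in a real Hadoop job the framework supplies the splitting of input, the grouping by key, and the distribution across the cluster, so the hard part is deciding how to decompose your problem into a chain of such jobs.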
