How to test Hadoop job performance

Hi all,
I've implemented a frequent-itemset MapReduce algorithm based on SON for Apache Hadoop. Now I need to test its performance, i.e., study how its execution time varies across different datasets, and compare it with different versions of the algorithm in order to choose the best one.

So, I ran several jobs on a 6-machine cluster and noticed that the execution time varies significantly even with the same dataset and the same algorithm version. I have come to the conclusion that in this type of environment the execution time is unpredictable because the requested data may or may not be available locally on the machine where the computation runs.
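For context, my measurement loop is roughly the sketch below: each configuration is run several times and the median elapsed time is recorded, since the median is less sensitive to one-off outliers (cold caches, data locality misses) than a single run. Here runJob() is just a placeholder for the real job submission (e.g. job.waitForCompletion(true) in the Hadoop API); the timing and median logic is what matters.

```java
import java.util.Arrays;

public class JobTimer {

    // Placeholder for the actual Hadoop job submission; in the real test
    // this would build a Job and call job.waitForCompletion(true).
    static void runJob() throws Exception {
        Thread.sleep(10); // stand-in for real work
    }

    // Median of the samples: robust against a single slow outlier run.
    static long medianMillis(long[] samples) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return n % 2 == 1 ? sorted[n / 2]
                          : (sorted[n / 2 - 1] + sorted[n / 2]) / 2;
    }

    public static void main(String[] args) throws Exception {
        int runs = 5;
        long[] elapsed = new long[runs];
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            runJob();
            elapsed[i] = (System.nanoTime() - start) / 1_000_000;
        }
        System.out.println("median ms: " + medianMillis(elapsed));
    }
}
```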

How can I run this type of test in a reliable way?

Thank you