How is Hadoop related to distributed transactional cache implementations (such as IBM ObjectGrid)?
Also, is Java the only language of choice, or could something like Erlang be more suitable for this purpose?
Of course Java isn't the only choice, but it's *a* choice. I don't think Erlang is necessarily more suitable, especially given the current focus on distributed solutions in a host of other languages, including Java and other JVM-based options.
Hadoop also has its own cache implementation, whose purpose is to bring big data to the right place at the right time, but it is not a transactional cache.
Distributed transactional caches are used in transactional systems to communicate state between the parts of a process that run on different nodes but share the same transactional scope.
The MapReduce framework, by contrast, is unsuitable, even counterproductive, for implementing algorithms that must always share common state data in a transactional manner. For example, a Monte Carlo-based algorithm usually needs such shared state, which is why the MapReduce framework is not well suited to that category of algorithms. If you bolt a distributed transactional cache onto your MapReduce framework, it will of course handle Monte Carlo algorithms, but scalability will suffer, as it does in any distributed transactional system.
The MapReduce framework is best suited for algorithms that do not require shared state between computational nodes; that is why the scalability of the MapReduce programming model is so good. Luckily, there are many areas of large-scale computation where the MapReduce framework as implemented in Hadoop is a natural fit.
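To make the shared-nothing point concrete, here is a minimal sketch in plain Java (no Hadoop dependencies; the class name, `mapTask`, and the sample counts are illustrative choices, not Hadoop APIs). Each "map" task draws its own random samples for a simple Monte Carlo pi estimate and keeps all state local; the "reduce" step is just a sum of the partial counts, so no transactional coordination between tasks is needed:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.LongStream;

// Sketch of the shared-nothing map/reduce style: each map task is
// fully independent, and the reduce step is an associative sum.
public class MonteCarloPi {
    // One independent map task: count random darts that land
    // inside the unit quarter-circle. No state is shared across tasks.
    static long mapTask(long samples) {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        long hits = 0;
        for (long i = 0; i < samples; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                hits++;
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        int tasks = 8;
        long samplesPerTask = 1_000_000;
        // "Map" in parallel, "reduce" by summation.
        long totalHits = LongStream.range(0, tasks)
                .parallel()
                .map(t -> mapTask(samplesPerTask))
                .sum();
        double pi = 4.0 * totalHits / ((double) tasks * samplesPerTask);
        System.out.println("pi ~= " + pi);
    }
}
```

Because the reduce function (addition) is associative and commutative, partial results can be combined in any order on any node, which is exactly the property that lets Hadoop scale this pattern out. A Monte Carlo variant that must update a shared model between samples would lose this independence, which is the case discussed above.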