chris webster wrote:Hadoop is all about distributing your data and your processing across multiple cheap machines. The data is replicated so there are e.g. 3 copies of each block of data, with different copies on different machines. If you have more nodes than replicas, e.g. 3 replicas across 6 nodes, then on average each node only contains half the total original data volume. Hadoop knows where your data is replicated, so it can decide to process different subsets of your data on different nodes at the same time. This is how Hadoop allows you to exploit the power of distributed processing.
If you only have two nodes, and your replication factor is 2 or more, then each node contains all your data anyway, so Hadoop cannot decide how to break up the processing in this way. And if you only have one node, then nothing is distributed at all.
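As I understand it, the block-location information you mention is something a client can actually query through the standard HDFS FileSystem API. Here is a minimal sketch of that (the class name and the /data/input.txt path are just placeholders for illustration):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        // Uses whatever core-site.xml / hdfs-site.xml are on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder path -- substitute a real file on your cluster.
        Path file = new Path("/data/input.txt");
        FileStatus status = fs.getFileStatus(file);

        // The NameNode reports, for each block of the file,
        // which hosts currently hold a replica of that block.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("block at offset " + block.getOffset()
                    + " replicated on: " + String.join(", ", block.getHosts()));
        }
        fs.close();
    }
}
```

With 3 replicas on 6 nodes, each block would list 3 different hosts, so the scheduler has a real choice of where to run the work; with 2 replicas on 2 nodes, every block lists both hosts.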
In the first case you mentioned, i.e. 3 replicas across 6 nodes, you said Hadoop can decide what to process where.
Whereas in your last example, i.e. two nodes with a replication factor of 2 or more, you said Hadoop cannot decide how to break up the processing.
My question: why can't Hadoop decide in the second case? If both nodes are deployed on two separate machines, and one machine is heavily loaded and has fewer free resources than the other, wouldn't YARN select the second machine to process the task?
Thanks.
Viki.