1. Hadoop is already a full stack of subprojects, each of which extends the core. In practice, though, you rarely need to extend the stack itself; most of the time you only write the MapReduce jobs.
Compared to a J2EE stack, I'd say Hadoop is far more extensible; just look at how many interrelated subprojects have grown around it. And honestly, how many people do you know who extended J2EE itself, rather than simply writing beans on top of it?
2. Wikipedia has a list of various MapReduce implementations, but Hadoop is among the most widely adopted, thanks to the long list of large projects built on top of it.
3. Of course, you can also write your jobs in C++, Python, and other languages; there are various language bindings, and Hadoop Streaming lets any executable that reads lines from stdin and writes lines to stdout act as the mapper or reducer.
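To make point 3 concrete, here is a minimal word-count sketch in the Hadoop Streaming style: plain Python, tab-separated key/value lines on stdin/stdout, no Hadoop libraries needed. The script name `wordcount.py` and the `map`/`reduce` argument convention are my own illustrative choices, not anything Hadoop mandates.

```python
import sys
from itertools import groupby

def mapper(lines):
    # Emit "word<TAB>1" for every word, the usual Streaming key/value format.
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    # Hadoop sorts the mapper output by key before the reducer sees it,
    # so all counts for a given word arrive on consecutive lines.
    parsed = (line.split("\t") for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

if __name__ == "__main__" and len(sys.argv) > 1:
    # Role chosen by the first argument, so one file serves as both ends:
    #   -mapper "wordcount.py map" -reducer "wordcount.py reduce"
    step = mapper if sys.argv[1] == "map" else reducer
    for out in step(sys.stdin):
        print(out)
```

You can test the pipeline locally with a shell sort standing in for Hadoop's shuffle phase, e.g. `cat input.txt | python wordcount.py map | sort | python wordcount.py reduce`, which is a handy way to debug Streaming jobs before submitting them to a cluster.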