Spark code is typically developed locally first, often in Eclipse, by setting the master to local. Later, the jar is run on a cluster of multiple nodes.
However, when running locally in Eclipse we are dealing with a single node, whereas the cluster has multiple nodes. Given this, will code that we have tested locally in Eclipse always run fine on a multi-node cluster too? In other words, is local testing sufficient, or are there cases where code that passes local testing still fails on the cluster precisely because the cluster has multiple nodes? Thanks
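One well-known case where local and cluster behavior can diverge is a closure that mutates a driver-side variable, discussed under "Understanding closures" in the Spark programming guide. The sketch below (object and app names are illustrative, not from the original post) shows the pitfall and the accumulator-based fix:

```scala
import org.apache.spark.sql.SparkSession

object ClosurePitfall {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("closure-pitfall")
      .master("local[*]") // set via spark-submit instead when running on a cluster
      .getOrCreate()
    val sc = spark.sparkContext

    val rdd = sc.parallelize(1 to 100)

    // PITFALL: each task receives its own serialized copy of `counter`.
    // In local mode this can appear to work because everything may run
    // in one JVM, but on a cluster the driver's `counter` is never
    // updated by the executors, so it can remain 0.
    var counter = 0
    rdd.foreach(x => counter += x)
    println(s"counter = $counter") // unreliable; do not depend on this

    // Cluster-safe alternative: an accumulator, which Spark merges
    // back to the driver correctly regardless of how many nodes run tasks.
    val acc = sc.longAccumulator("sum")
    rdd.foreach(x => acc.add(x))
    println(s"accumulator = ${acc.value}") // 5050 in both local and cluster mode

    spark.stop()
  }
}
```

Other examples in the same spirit: reading from a file path that exists only on the driver machine, relying on `println` inside tasks (output goes to executor logs, not the driver console), or assuming a non-serializable object can be used inside a transformation. So local testing catches most logic bugs, but not everything.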