and it works: I can start both the DFS and YARN, and make directories in HDFS. However, when I reboot my machine and try to start HDFS again, the namenode will not start. I sifted through the Hadoop logs a bit and found this in the namenode log:
So I believe the problem is that it is using the wrong temp directory: it's trying to use /tmp/hadoop-michael/, but $HADOOP_HOME/etc/hadoop/core-site.xml explicitly sets it to /share/hadoop/hdfs. That directory exists and has the appropriate permissions.
I have run $HADOOP_HOME/bin/hdfs namenode -format in the past. If I run it again, I can start the namenode; however, the entire filesystem is then empty, so formatting after every boot is not really an option.
Can somebody please tell me what I am doing wrong?
posted 5 years ago
I think it's because I was using 'hadoop-tmp-dir' as the property name instead of 'hadoop.tmp.dir'. My 'Pro Hadoop' book used hyphens, but I'm seeing other people use periods. Going to try rebooting now and see if that fixed things.
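For anyone hitting the same thing, here is a sketch of what the property stanza in core-site.xml should look like, assuming the same /share/hadoop/hdfs path as above. Hadoop configuration is a flat key-value store, so a misspelled property name is not an error; it is simply never read, and hadoop.tmp.dir falls back to its default of /tmp/hadoop-${user.name}:

```xml
<!-- $HADOOP_HOME/etc/hadoop/core-site.xml -->
<configuration>
  <!-- The property name uses dots, not hyphens; an unrecognized
       name like hadoop-tmp-dir is silently ignored. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/share/hadoop/hdfs</value>
  </property>
</configuration>
```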