Babu Singh

Ranch Hand
since Aug 17, 2009

Recent posts by Babu Singh

This method is called in a loop that reads the serverurl and ip_sequence from an external source. If there are three server URLs, it is called three times.
Java class:


JSP code:


It shows this result:


I want the result to be:


The problem is that the output is displayed row-wise; I need this data column-wise for each server IP.
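One way to turn row-wise results into column-wise output is to pivot the data in Java before the JSP renders it: group all values under their server IP, then let the JSP print one column per map key. A minimal sketch, assuming a hypothetical row shape (the `Row` record, field names, and sample IPs are made up for illustration):

```java
import java.util.*;

public class PivotExample {
    // Hypothetical row: one result line produced per server call.
    record Row(String serverIp, String field, String value) {}

    public static void main(String[] args) {
        // Row-wise results, one batch per server URL (sample data).
        List<Row> rows = List.of(
                new Row("10.0.0.1", "status", "UP"),
                new Row("10.0.0.1", "load", "0.7"),
                new Row("10.0.0.2", "status", "UP"),
                new Row("10.0.0.2", "load", "0.3"));

        // Group values under each server IP so the JSP can render
        // one column per server instead of one row per result.
        Map<String, List<String>> columns = new LinkedHashMap<>();
        for (Row r : rows) {
            columns.computeIfAbsent(r.serverIp(), k -> new ArrayList<>())
                   .add(r.field() + "=" + r.value());
        }
        System.out.println(columns);
    }
}
```

In the JSP you would then iterate over the map's keys for the header row and over the lists for the cells, instead of printing each result as its own table row.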

Please help.
Thanks.
6 years ago
I have made some changes; now it shows the content length, but the file is not reaching the server.
Here is the code:


Server response:


But when I upload a file using the Chrome REST client, the file is uploaded successfully.
Server response:


Please suggest how to solve the issue.
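When a Content-Length is sent but the server never sees the file, a frequent culprit is a mismatch between the boundary declared in the Content-Type header and the one actually written into the body, or a missing closing `--boundary--` terminator, in which case PHP silently drops the part. A quick self-check sketch (the boundary string, field name, and filename here are made up):

```java
public class BoundaryCheck {
    public static void main(String[] args) {
        String boundary = "----JavaClientBoundary42"; // must be identical in header and body
        String contentType = "multipart/form-data; boundary=" + boundary;

        String body = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"file\"; filename=\"test.txt\"\r\n"
                + "Content-Type: text/plain\r\n\r\n"
                + "hello world\r\n"
                + "--" + boundary + "--\r\n"; // final terminator: two trailing dashes

        // If either of these is false, the server-side parser sees no file part.
        boolean headerMatchesBody = body.startsWith("--" + contentType.split("boundary=")[1]);
        boolean hasTerminator = body.contains("--" + boundary + "--");
        System.out.println("headerMatchesBody=" + headerMatchesBody
                + " hasTerminator=" + hasTerminator);
    }
}
```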
8 years ago
Hi,

I am trying to upload a file to a server. I have written the client code in Java; the server code is written in PHP.
I am getting Content-Length=0 in the server response, but when I upload a file using the Chrome REST client it sends the correct content length. Here is my Java client code:



Here is the server response:


Please suggest how to solve the issue.
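A Content-Length of 0 typically means the request body was never written (or was streamed in a way the server could not measure). One robust pattern is to build the multipart body in memory first, so its exact length is known before sending. A hedged sketch, not the original client code (the boundary, field name, and filename are placeholders):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class MultipartBody {
    // Builds a multipart/form-data body containing one file part.
    static byte[] build(String boundary, String fieldName,
                        String fileName, byte[] fileBytes) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        String head = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"" + fieldName
                + "\"; filename=\"" + fileName + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n";
        out.write(head.getBytes(StandardCharsets.UTF_8));
        out.write(fileBytes);                        // raw file content
        out.write(("\r\n--" + boundary + "--\r\n")   // closing boundary
                .getBytes(StandardCharsets.UTF_8));
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] body = build("XYZ", "file", "a.txt",
                "hello".getBytes(StandardCharsets.UTF_8));
        // With HttpURLConnection you would then send the known length, e.g.:
        //   conn.setDoOutput(true);
        //   conn.setRequestProperty("Content-Type",
        //       "multipart/form-data; boundary=XYZ");
        //   conn.setFixedLengthStreamingMode(body.length);
        //   conn.getOutputStream().write(body);
        System.out.println("Content-Length=" + body.length);
    }
}
```

Calling `setDoOutput(true)` and actually writing (and closing) the output stream before reading the response are the two steps most often missed when Content-Length arrives as 0.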

Thanks.
8 years ago
Hello,
I have deleted vij.txt from HDFS, but the issue still persists.

These are my Java files:
WordCount.java:

WordMapper.java:


SumReducer.java:


Please suggest how to solve the issue.
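The exception quoted below reports the *output* directory as hdfs://.../wordcount/input/vij.txt, which suggests the job's output argument is pointing at the input file rather than at a fresh directory. A common guard is to use a nonexistent output directory and delete any stale one before submitting. A hedged sketch only (it assumes the Hadoop client jars on the classpath, an existing `conf` and `job`, and a hypothetical output path):

```java
// Sketch only: requires the Hadoop client libraries; "wordcount/output" is a made-up path.
Path out = new Path("wordcount/output");
FileSystem fs = FileSystem.get(conf);
if (fs.exists(out)) {
    fs.delete(out, true);              // recursively remove stale job output
}
FileOutputFormat.setOutputPath(job, out);
```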
Thanks.

amit punekar wrote:Hello,

Please note the exception message -

Exception in thread "main"
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory
hdfs://localhost:54310/user/ubuntu/wordcount/input/vij.txt already
exists



You can delete the file and then try running. I cannot recollect, but there is an option to overwrite the files if they exist.

Regards,
Amit

9 years ago
I am using Hadoop 2.2.0.
The following commands run fine on HDFS.


I made a wordcount program in Eclipse, added the jars using Maven, and ran the jar using this command:


It gives the following error:


The jar is on my local system. Both input and output paths are on HDFS, and no output directory exists at the output path on HDFS.

core-site.xml:


hdfs-site.xml:


mapred-site.xml:


yarn-site.xml:


/etc/hosts:


Please advise how to solve the issue.

Thanks.
9 years ago
Now I have set up Hadoop on my machine again.
I have changed the port numbers.
mapred-site.xml:


core-site.xml:


Now when I stop all the nodes:


When I open these URLs in the browser, it shows "Server not found".
http://localhost:55531/jobtracker.jsp

http://localhost:55571/dfshealth.jsp


Log files:

hadoop-kumar-jobtracker-kumar.hadoop.log:


hadoop-kumar-datanode-kumar.hadoop.log:


hadoop-kumar-namenode-kumar.hadoop.log:


hadoop-kumar-tasktracker-kumar.hadoop.log:


hadoop-kumar-secondarynamenode-kumar.hadoop.log:


Please advise how to solve the issue.
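A detail that often explains "Server not found" here: in Hadoop 0.20.x, the ports set in fs.default.name and mapred.job.tracker are RPC ports, while jobtracker.jsp and dfshealth.jsp are served on separate HTTP ports (50030 and 50070 by default). So if 55531 and 55571 are the new RPC ports, the web UIs still live elsewhere, and they respond only while the daemons are running (not after stop-all.sh). A hedged mapred-site.xml fragment to illustrate the distinction (the property names are the real 0.20-era ones; the port values are just examples):

```xml
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:55531</value>  <!-- RPC port; not browsable -->
</property>
<property>
  <name>mapred.job.tracker.http.address</name>
  <value>0.0.0.0:50030</value>    <!-- web UI: http://localhost:50030/jobtracker.jsp -->
</property>
```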

Thanks.

chris webster wrote:If you are just starting out and you just want to explore Hadoop and related tools like Hive, Pig etc, then you might find it easier to use one of the pre-packaged virtual machines from Hortonworks or Cloudera.

For example, I've been using the Hortonworks Sandbox. This gives you an integrated single-node Hadoop installation with tools like Hive, Pig, HCatalog and Hue, plus links to lots of well structured tutorials. The sandbox runs as a virtual machine e.g. inside Virtualbox or VMWare Player, and you can access a lot of the functionality very easily via the browser-based Hue interface. It's a lot easier than installing all these components by hand, and it's a great resource for learning about Hadoop, even if you plan to use a different Hadoop distribution for your project.

9 years ago
Hi Amit,

The log file gives this error:



Please suggest how to resolve this.

Thanks.

amit punekar wrote:Check if this file has any clue why jobtracker would not have started "/home/kumar/hadoop-0.20.2-cdh3u4/logs/hadoop-kumar-jobtracker-kumar.hadoop.out " .

Regards,
Amit

9 years ago
One day my machine ran into a disk-space problem (it did not have enough free space), so I removed some items from the Downloads folder using the command prompt. Some time later, when I tried to run Hadoop, it would not run, and the Hadoop folder no longer appeared in the Home folder where I had extracted it.

Thanks.


chris webster wrote:If you are just starting out and you just want to explore Hadoop and related tools like Hive, Pig etc, then you might find it easier to use one of the pre-packaged virtual machines from Hortonworks or Cloudera.

For example, I've been using the Hortonworks Sandbox. This gives you an integrated single-node Hadoop installation with tools like Hive, Pig, HCatalog and Hue, plus links to lots of well structured tutorials. The sandbox runs as a virtual machine e.g. inside Virtualbox or VMWare Player, and you can access a lot of the functionality very easily via the browser-based Hue interface. It's a lot easier than installing all these components by hand, and it's a great resource for learning about Hadoop, even if you plan to use a different Hadoop distribution for your project.

9 years ago
Hi,

I am not using a VM. I have installed CentOS 6 on my machine and am now trying to set up Hadoop again.

Hadoop had already been running on my machine; I ran it for two months with Hive, Pig, and Sqoop. But unfortunately Hadoop stopped on its own and the Hadoop folder was removed. I don't know what happened.

Anyway, please suggest how to solve the issue.

Thanks for your reply.


chris webster wrote:If you are just starting out and you just want to explore Hadoop and related tools like Hive, Pig etc, then you might find it easier to use one of the pre-packaged virtual machines from Hortonworks or Cloudera.

For example, I've been using the Hortonworks Sandbox. This gives you an integrated single-node Hadoop installation with tools like Hive, Pig, HCatalog and Hue, plus links to lots of well structured tutorials. The sandbox runs as a virtual machine e.g. inside Virtualbox or VMWare Player, and you can access a lot of the functionality very easily via the browser-based Hue interface. It's a lot easier than installing all these components by hand, and it's a great resource for learning about Hadoop, even if you plan to use a different Hadoop distribution for your project.

9 years ago
Hi,

I am not able to find out the exact cause.

Please review it again and suggest how to solve the problem.

Thanks.

amit punekar wrote:Check if this file has any clue why jobtracker would not have started "/home/kumar/hadoop-0.20.2-cdh3u4/logs/hadoop-kumar-jobtracker-kumar.hadoop.out " .

Regards,
Amit

9 years ago
I am new to Hadoop and am facing a problem while installing it.
I have CentOS 6.
I have extracted hadoop-0.20.2-cdh3u4.tar.gz into the Home folder.

I used the following commands:





Then I made the folders pseudo/dfs/data and pseudo/dfs/name in the hadoop-0.20.2-cdh3u4 folder.

Then I modified the XML files:

core-site.xml


hdfs-site.xml:


mapred-site.xml:


.bashrc:


Then I used the following command:


It shows:


Now when I open these two URLs in the browser:

http://localhost:50030/jobtracker.jsp
http://localhost:50070/dfshealth.jsp

it shows the pages in offline mode.


When I use the following command:


It shows:


Please advise how to get Hadoop running on my machine.
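Pages loading in "offline mode" usually mean the daemons are not actually listening, so it can help to check with `jps` that NameNode and JobTracker are running, and to compare against a known-good pseudo-distributed configuration. A hedged core-site.xml sketch (the port and the hadoop.tmp.dir value are examples, chosen to match the pseudo/dfs/data and pseudo/dfs/name folders created above, since dfs.name.dir and dfs.data.dir default to subfolders of hadoop.tmp.dir):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/kumar/hadoop-0.20.2-cdh3u4/pseudo</value>
  </property>
</configuration>
```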

Thank you.
9 years ago



I am getting this exception:
java.lang.ClassCastException: java.lang.String cannot be cast to [Ljava.lang.Object;

on this line:


Please advise.
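`[Ljava.lang.Object;` in the message means the code is casting to `Object[]`. This exception typically shows up when a query selects only a single column: each result element is then already a `String` (the value itself) rather than an `Object[]` row. A hedged illustration of both cases (the variable names and sample values are made up):

```java
import java.util.List;

public class CastDemo {
    public static void main(String[] args) {
        // A multi-column result row arrives as Object[]:
        List<Object> multi = List.of((Object) new Object[] {"alice", 30});
        Object[] row = (Object[]) multi.get(0);      // fine
        System.out.println(row[0]);

        // A single-column result arrives as the value itself:
        List<Object> single = List.of((Object) "alice");
        try {
            Object[] bad = (Object[]) single.get(0); // the same cast now fails
            System.out.println(bad[0]);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: check whether the query returns one column");
        }
    }
}
```

If the query can return either shape, an `instanceof Object[]` check before the cast avoids the exception.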
Thanks.
9 years ago