I want to start my career in Hadoop as soon as possible, and I am ready to give 6-8 hours daily for it.
Reality:
I gather you're a fresher or recently out of college, yes? When I google "hadoop jobs", I don't see anything for freshers. Even the lowest experience requirement I saw was 2+ years.
So you are going up against the reality of how companies hire for these technologies.
The reason is simple.
For companies, these technologies - Hadoop, Spark, Storm, etc. - are mere tools. Just glorified Excel sheets, but on a massive scale, for extracting useful business information out of their data.
If they could use Excel, they'd definitely prefer that. But sometimes Excel is too puny, and they have to resort to these big hammers.
Ultimately, what they want is to derive maximum business value from these tools, and the fastest way to do that is to hire people already proficient and experienced in writing data mining algorithms with them.
Extracting information out of business data is a difficult problem on its own.
They don't want to waste time or money training employees from the ground up for this.
Required skills:
So at any interview, they'll start by testing your fundamentals: data structures, algorithm analysis, Java/Python, databases, SQL, and popular data formats like XML and JSON.
It also helps to understand some basics of statistics, because both the quality of data and the results of data mining are judged statistically.
These are the most fundamental skills necessary to write distributed data mining algorithms.
If you don't do well on them, interviewers are unlikely to proceed to questions on technologies like Hadoop, and they won't believe anything else you say.
Much practical knowledge in these areas can be picked up by simply working as a developer.
So before learning Hadoop or anything else...
Learn your data structures and algorithm fundamentals.
Learn Java better. Learn how to write code for basic data processing tasks like parsing and processing text files, XML, and JSON.
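For example, one such warm-up task is summarizing a file of JSON records. A minimal sketch in Python (the file name and fields are invented for illustration):

```
import json

# Hypothetical input: a file of JSON records, one per line, e.g.
# {"customer": "A42", "city": "Pune", "amount": 1250.0}
totals = {}
with open("orders.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Accumulate order amounts per city.
        city = record["city"]
        totals[city] = totals.get(city, 0.0) + record["amount"]

for city, amount in sorted(totals.items()):
    print(city, amount)
```

If you can write this kind of thing without reaching for a tutorial, you're in decent shape for the fundamentals round.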
Learn to use databases and SQL.
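You don't even need a database server to practice; Python ships with the sqlite3 module. A small sketch with an invented table, showing the kind of GROUP BY aggregation interviews love to ask about:

```
import sqlite3

# In-memory database: nothing to install or configure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (city TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("Pune", 1250.0), ("Delhi", 800.0), ("Pune", 300.0)],
)

# Total order amount per city.
for city, total in conn.execute(
    "SELECT city, SUM(amount) FROM orders GROUP BY city ORDER BY city"
):
    print(city, total)
```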
A lot of data science is actually done using Python, and with higher-level technologies like Pig and Hive that build upon Hadoop.
Learn to write Hadoop jobs in Java, Python, Pig, and Hive.
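To give a taste of what a Hadoop job actually looks like, here's the classic word count written for Hadoop Streaming, which lets you write the mapper and reducer in any language that reads stdin and writes stdout - Python here. This is a minimal sketch; running it for real needs a Hadoop cluster and the hadoop-streaming jar.

```
#!/usr/bin/env python
# mapper.py - emit one tab-separated (word, 1) pair per word.
import sys

for line in sys.stdin:
    for word in line.split():
        print("%s\t%d" % (word.lower(), 1))
```

Hadoop sorts the mapper output by key before it reaches the reducer, so all counts for a given word arrive together:

```
#!/usr/bin/env python
# reducer.py - sum the counts for each run of identical words.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))
```

You can sanity-check the pair locally without any cluster: `cat input.txt | python mapper.py | sort | python reducer.py` approximates what Hadoop does at scale.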
Of course, to write these jobs, first you need problems to solve.
So think up your own data science problems to solve, or use something like Kaggle, and build a portfolio of them on GitHub or your own site.
They could be something very personal - like processing your own electricity bills.
Or you could try one of the many public datasets available (check out data.gov.in for Indian open datasets).
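For instance, an exploratory pass over a downloaded CSV might be no more than this (the file and its columns are hypothetical, but open-data portals serve plenty of CSVs shaped like it):

```
import csv
from collections import Counter

# Hypothetical CSV from an open-data portal,
# with columns: state, year, rainfall_mm
totals = Counter()
counts = Counter()
with open("rainfall.csv") as f:
    for row in csv.DictReader(f):
        totals[row["state"]] += float(row["rainfall_mm"])
        counts[row["state"]] += 1

# Average rainfall per state - a simple "extract information" question.
for state in sorted(totals):
    print(state, totals[state] / counts[state])
```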
Roadmap:
Research some small and mid-sized companies that have teams doing data science and data engineering and are hiring freshers, though not necessarily for those teams.
Try to get hired at one of them. If you know your fundamentals well, small and mid-sized companies won't summarily reject you for not having the required experience.
Later, with some experience, you can try shifting to their data team. Even if that doesn't work out, you'd have built up enough skills to jump to another company's data team.
There are two kinds of data analysis - "exploratory" (where you extract information from available data) and "predictive" (where you predict what will happen based on available data).
The latter falls in the realm of machine learning and would be somewhat too advanced for you at this point. So concentrate on exploratory data analysis for now.
Once you're hired as a data engineer, you'll eventually run into predictive analysis anyway, and you can pick up that skill then.
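If the distinction feels abstract, here's a toy contrast on some invented numbers: the exploratory step summarizes what already happened, while the predictive step extrapolates from it (a hand-rolled least-squares trend line, just to show the idea):

```
# Invented monthly electricity usage, in units.
usage = [120, 135, 128, 150, 162, 158]

# Exploratory: summarize the data you have.
print("average:", sum(usage) / len(usage))
print("highest-usage month:", usage.index(max(usage)) + 1)

# Predictive: fit a trend line and extrapolate to next month.
n = len(usage)
xs = range(n)
x_mean, y_mean = sum(xs) / n, sum(usage) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean
print("predicted next month:", intercept + slope * n)
```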
Good luck!