Nikhil Das Nomula

Greenhorn
since Jun 10, 2011

Recent posts by Nikhil Das Nomula

Well, my thought is that I would generate an XML document which represents the file tree and then use it to generate the tree in the web app. I am planning to make the tree dynamic and allow the user to drill down the file tree by clicking on nodes.

The reason I am not walking the tree with java.nio's Files.walkFileTree on every request is that I want the response to be quick, so I want to generate the XML once when a user logs into the application and use that XML to display the tree structure.
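
Something along these lines is what I have in mind (the /mnt/remote path and the <dir>/<file> element names are just placeholders, and real code would need to XML-escape the file names):

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class FileTreeXml {

    // Recursively render a directory as nested <dir>/<file> elements.
    static String toXml(Path path) throws IOException {
        StringBuilder sb = new StringBuilder();
        String name = String.valueOf(path.getFileName());
        if (Files.isDirectory(path)) {
            sb.append("<dir name=\"").append(name).append("\">");
            try (DirectoryStream<Path> children = Files.newDirectoryStream(path)) {
                for (Path child : children) {
                    sb.append(toXml(child));
                }
            }
            sb.append("</dir>");
        } else {
            sb.append("<file name=\"").append(name).append("\"/>");
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Mount point of the remote file system (placeholder path).
        System.out.println(toXml(Paths.get("/mnt/remote")));
    }
}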
8 years ago
OK, thank you. I agree the question was not directly to the point.

Yes, as you mentioned, I am going to use HTML to display the file system.

So the point where I am stuck is:

Generating the XML out of the file tree of the mounted Linux system.
8 years ago
I am creating a web application that runs on server X (Unix), and it has another Unix system mounted on it.

I want to generate the file tree structure of this mounted Unix file system and show it in the web application, so that users can select a file and move it onto the current Unix machine.

I know this sounds a bit silly, and you may ask why we can't just copy the file directly; I am doing a proof of concept and using this as a basis.
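
For the "move" step itself I am picturing something as simple as java.nio's Files.copy; the paths below are just placeholders:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MountedFileCopy {
    public static void main(String[] args) throws IOException {
        // File the user picked in the web app, on the mounted file system (placeholder path).
        Path source = Paths.get("/mnt/remote/data/input.csv");
        // Destination on the local Unix machine (placeholder path).
        Path target = Paths.get("/home/appuser/incoming/input.csv");
        Files.createDirectories(target.getParent());
        Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
    }
}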

Thanks,
Nikhil
8 years ago
After dealing with enough issues with Cygwin, I installed Ubuntu on my machine and was able to install and run Hadoop without any issues. Here is the tutorial that really helped me: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
9 years ago
Thank you, Pablo. That helps. Another question I have:

I am sure that we can call a map-reduce job from a normal Java application. The map-reduce jobs in my case have to deal with files on HDFS and also with files on another file system. Is it possible in Hadoop to access files from another file system while simultaneously using the files on HDFS?

So basically my intention is this: I have one large file which I want to put into HDFS for parallel computation, and then compare the blocks of this file with some other files (which I do not want to put into HDFS, because they need to be accessed as full-length files all at once).
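
To make the question concrete, this is roughly what I am hoping is possible; the hdfs:// URI and the paths are just placeholders:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MixedFileSystems {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();

        // The large file that lives in HDFS and gets split for parallel processing.
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
        try (BufferedReader big = new BufferedReader(
                new InputStreamReader(hdfs.open(new Path("/data/large-input.csv"))))) {
            System.out.println("first HDFS line: " + big.readLine());
        }

        // The smaller files that stay on the ordinary local file system.
        FileSystem local = FileSystem.getLocal(conf);
        try (BufferedReader ref = new BufferedReader(
                new InputStreamReader(local.open(new Path("/home/appuser/reference.csv"))))) {
            System.out.println("first local line: " + ref.readLine());
        }
    }
}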
9 years ago
I have to process data in very large text files (around 5 TB in size). The processing logic uses Super CSV to parse the data and run some checks on it. Since the size is quite large, we planned on using Hadoop to take advantage of parallel computation. I installed Hadoop on my machine and started writing the mapper and reducer classes, and that is where I am stuck: the map function requires a key-value pair, and I am not sure what the key and value should be when reading this text file. Can someone help me out with that?

My thought process is something like this (let me know if I am correct):

1) Read the file using Super CSV and have Hadoop generate the Super CSV beans for each chunk of the file in HDFS (I am assuming that Hadoop takes care of splitting the file).
2) For each of these Super CSV beans, run my check logic.
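
To make the key/value part concrete, this is roughly how I picture the mapper with the default TextInputFormat: the key Hadoop passes in is the byte offset of the line (LongWritable) and the value is the line itself (Text). The header check and the column layout below are just assumptions:

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CsvCheckMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // The header only appears in the split that starts at offset 0 (assumed header prefix).
        if (offset.get() == 0 && line.toString().startsWith("id,")) {
            return;
        }
        // Parse the line and run the check logic here; for now just emit the
        // first column as the key and the whole line as the value.
        String[] fields = line.toString().split(",", -1);
        context.write(new Text(fields[0]), line);
    }
}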
9 years ago
I am trying to learn Hadoop and I am following the installation steps at http://v-lad.org/Tutorials/Hadoop/12%20-%20format%20the%20namendoe.html. However, I am getting the following error and I have no clue why it is not able to find PlatformName on the classpath. I am also concerned about "cygpath: can't convert empty path". Can anyone let me know what might be the reason I am getting this error?

Thank you

9 years ago
Yeah, JSPs are much simpler and more straightforward.
9 years ago
I want to read a huge CSV file. I am using Super CSV to parse these files in general. In this particular scenario the file is huge, and there is always the problem of running out of memory, for obvious reasons.

The initial idea is to read the file in chunks, but I am not sure this would work with Super CSV: when I chunk the file, only the first chunk has the header values and will be loaded into the CSV bean, while the other chunks do not have header values, and I suspect that it might throw an exception. So are there any other ways to approach this problem?
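
One approach I am considering instead of chunking the file myself is to stream it row by row with CsvBeanReader, so that only one bean is in memory at a time; the Record bean and its columns below are just placeholders:

import java.io.FileReader;
import java.io.IOException;

import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.prefs.CsvPreference;

public class LargeCsvReader {

    // Placeholder bean; its setters must match the CSV header column names.
    public static class Record {
        private String id;
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
    }

    public static void main(String[] args) throws IOException {
        try (ICsvBeanReader reader = new CsvBeanReader(
                new FileReader("huge.csv"), CsvPreference.STANDARD_PREFERENCE)) {
            // Read the header once and reuse it as the name mapping for every row.
            String[] header = reader.getHeader(true);
            Record record;
            while ((record = reader.read(Record.class, header)) != null) {
                // run the per-row checks here instead of collecting all beans in memory
            }
        }
    }
}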
9 years ago
I was going through my profile and realised how naive it was of me to ask this kind of question. However, in case anyone wondered why I asked it at that point in time: we were using Struts as the front end for our web application, and the view components were written in FTLs and JSPs. Struts' support for FTL is well known, but it makes debugging complicated, so we wanted to convert those FTLs into JSPs. To do this in a Struts application you need a thorough understanding of OGNL, because that is how data is moved to and from the view in a Struts application. All I needed to do was change the syntax from FTL to JSP.
9 years ago
Thank you, Jeanne. That was helpful!
10 years ago
I am getting a NoClassDefFoundError when I type in the command below



After researching on the web, it seems that JUnit 3.8 had it, and I suspect that junit.swingui.TestRunner has been removed from the latest JUnit jar. Can someone confirm this, please? And if it has been removed, what should I use instead?
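
From what I have read so far, org.junit.runner.JUnitCore looks like the JUnit 4 way of running tests programmatically, along these lines (the MyTest class below is just a placeholder):

import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunTests {

    // Trivial placeholder test class.
    public static class MyTest {
        @Test
        public void addition() {
            assertEquals(4, 2 + 2);
        }
    }

    public static void main(String[] args) {
        Result result = JUnitCore.runClasses(MyTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure);
        }
        System.out.println("All tests passed: " + result.wasSuccessful());
    }
}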
10 years ago
I think it's been ages since this was posted. Anyway, the issue might be that you need to check your classpath. I think you are using a command like




Instead, if you explicitly specify the classpath as shown below, it solves the NoClassDefFoundError.

10 years ago
Can you add the @Consumes annotation too and try it? Something like the one shown below:
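
For example, a minimal sketch along these lines (the resource path, media type, and method body are just placeholders):

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/orders")
public class OrderResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)   // tells JAX-RS which request body type this method accepts
    @Produces(MediaType.APPLICATION_JSON)
    public String create(String body) {
        // echo the request body back, just for the sake of the sketch
        return body;
    }
}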

10 years ago
Thank you, William, or else I would have lived with that false impression.
10 years ago