Tony Docherty wrote:
kri shan wrote:Which gives better performance for a counter?
What do you mean by better performance: accuracy, speed, or something else altogether?
Pablo Abbate wrote:If you have multiple threads, use AtomicInteger; if not, use a plain increment.
I think the decision is a bit more complex than that.
It may be that the reads and writes are already done inside synchronized blocks for other reasons we are not aware of, or it may be that the hit counter is only indicative, in which case (provided the writes happen inside a synchronized block) a read that is occasionally off by a few hits may not be an issue.
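To make the trade-off concrete, here is a minimal sketch (class and field names are hypothetical) contrasting a plain `int` counter with an `AtomicInteger` under two threads. The plain increment is a read-modify-write, so concurrent updates can be lost; `incrementAndGet()` is atomic.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical hit counter incremented from two threads.
public class CounterDemo {
    static int plain = 0;                               // unsafe under concurrency
    static final AtomicInteger atomic = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plain++;                   // racy: lost updates are possible
                atomic.incrementAndGet();  // atomic increment, no lock needed
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("plain  = " + plain);        // often less than 200000
        System.out.println("atomic = " + atomic.get()); // always 200000
    }
}
```

If the counter only needs to be roughly right, the plain increment may be acceptable; if every hit must be counted, `AtomicInteger` (or a synchronized block) is the safe choice.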
Nikhil Das Nomula wrote:Thank you Pablo. That helps. Another question which I have is
I am sure that we can call a map-reduce job from a normal Java application. Now, the map-reduce jobs in my case have to deal with files on HDFS as well as files on another filesystem. Is it possible in Hadoop to access files from another filesystem while simultaneously using the files on HDFS?
So basically my intention is this: I have one large file that I want to put into HDFS for parallel computing, and I then want to compare blocks of this file against some other files (which I do not want to put in HDFS, because they need to be accessed as full-length files all at once).
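One way this can work: Hadoop's `FileSystem` abstraction resolves paths by URI scheme, so the same API can open both HDFS files and local files. A rough sketch, assuming the Hadoop client libraries are on the classpath; the host, port, and paths below are hypothetical:

```java
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MixedFsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // The large input lives in HDFS and is split across the cluster.
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        try (InputStream big = hdfs.open(new Path("/data/large-input.dat"))) {
            // ... read a block of the large file ...
        }

        // The reference file stays on the local filesystem and is read whole.
        FileSystem local = FileSystem.getLocal(conf);
        try (InputStream ref = local.open(new Path("/data/reference.dat"))) {
            // ... read the full-length comparison file ...
        }
    }
}
```

Note that a mapper runs on some cluster node, so a "local" path must exist on every node; in practice, small side files are often shipped to each task via Hadoop's distributed cache rather than read from a shared local path.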
Paul Clapham wrote:
Pablo Abbate wrote:We are not discussing best practices; he wants to know what he can do with JSP ...
But this is not some game where the task is to identify as many different ways to use a JSP as possible. The poster wants to learn about JSP, presumably so that he or she can use it to write web applications. Given that, the responsible thing is to identify best practices before the poster starts learning the less-than-best practices which are so lamentably common.
Bear Bibeault wrote:
Pablo Abbate wrote:You can think of JSP files as a mixture of Java + HTML + JSP tags.
Not quite. Putting Java in a JSP has been obsolete and discredited for 10 years. No Java code in JSPs!
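For contrast, a small sketch of the same output written both ways: the first fragment uses a scriptlet (the discredited style), the second uses the JSP Expression Language and the JSTL core tag library, which keep the page declarative:

```jsp
<%-- Discouraged: a scriptlet mixes Java code into the page --%>
<% String name = request.getParameter("name"); %>
<p>Hello, <%= name %></p>

<%-- Preferred: EL and JSTL, no Java code in the page --%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<p>Hello, <c:out value="${param.name}"/></p>
```

As a bonus, `<c:out>` escapes HTML by default, which the raw scriptlet version does not.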