Which approach is better?

 
Ranch Hand
We have a large file on disk (~1 GB). We are reading that file into memory using java.nio's MappedByteBuffer class (obtained via FileChannel.map). We have limited the in-memory mapped window to 1 MB. We now need to get the data out of this in-memory window, process it, and pass it to a socket in 10-byte chunks.
Given this information, which approach is better?
1. Copy the entire 1 MB of data into an array and then process it.
2. In a loop, copy 10-byte chunks into an array and process each one.
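For reference, a minimal sketch of approach 1, assuming the class meant is java.nio.MappedByteBuffer obtained from FileChannel.map; the commented-out socket write is hypothetical, and a small temp file stands in for the real 1 GB file:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BulkCopyDemo {
    static final int WINDOW = 1024 * 1024; // 1 MB mapped window
    static final int CHUNK = 10;           // 10-byte chunks for the socket

    // Approach 1: after one bulk copy of the whole window into an array,
    // slice 10-byte chunks out of that array without touching the buffer again.
    static int countChunks(byte[] window) {
        int chunks = 0;
        for (int off = 0; off + CHUNK <= window.length; off += CHUNK) {
            // socket.getOutputStream().write(window, off, CHUNK); // hypothetical socket
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("demo", ".bin");
        try {
            Files.write(file, new byte[WINDOW]); // stand-in for the 1 GB file
            try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, WINDOW);
                byte[] window = new byte[WINDOW];
                map.get(window); // one bulk copy out of the mapping
                System.out.println(countChunks(window)); // prints 104857
            }
        } finally {
            Files.delete(file);
        }
    }
}
```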
 
Bartender
I'd bet on approach 1, because a) in neither scenario can you avoid processing the full 1 GB of data in very small chunks, and b) I presume that I/O on the original 1 GB file is minimized if you copy data into memory in 1 MB blocks.
To be honest, the real bottleneck will be pushing the data through the socket, regardless of how you read the original bulk data.
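For contrast, a sketch of approach 2 (the per-chunk loop); names are illustrative, the socket write is hypothetical, and a heap ByteBuffer stands in for the mapped one, since both expose the same relative bulk get:

```java
import java.nio.ByteBuffer;

public class ChunkLoopDemo {
    static final int CHUNK = 10; // 10-byte chunks for the socket

    // Approach 2: loop, pulling 10 bytes at a time out of the buffer
    // into a small reusable array.
    static int streamChunks(ByteBuffer map) {
        byte[] chunk = new byte[CHUNK];
        int sent = 0;
        while (map.remaining() >= CHUNK) {
            map.get(chunk); // relative bulk get of exactly 10 bytes
            // socket.getOutputStream().write(chunk); // hypothetical socket
            sent++;
        }
        return sent;
    }

    public static void main(String[] args) {
        ByteBuffer fake = ByteBuffer.allocate(1024 * 1024); // stands in for the mapped buffer
        System.out.println(streamChunks(fake)); // prints 104857
    }
}
```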
 
lowercase baba
According to the given information, the best way is to write the code in whatever way is cleanest and easiest to read, understand, and maintain. Your specs have no speed requirement of any kind, so forget about performance entirely. You will almost ALWAYS be better off writing code that makes sense than attempting micro-optimizations that add hours to coding and debugging time.
 
Rancher
My gut feeling is that handling 1 MB chunks gives you a much better ratio of data handling to data processing, and is thus preferable.

I agree with Fred that clean code comes first until you have proven you need to optimize for speed. But the difference in code cleanliness between the two approaches is likely to be small, so you might as well choose whichever gives you better performance.
 