Is it better to write each line to a file or wait and write whole string

 
Rick Reumann
Ranch Hand
Posts: 281
I have a large portion of text that needs to be written to a file. The text is generated dynamically based on the result set of a query and other factors, and the result could be quite large (about 220K). The question I have: is it better to write out each line to the stream as I loop through the dynamic information, or to append all of the information to a StringBuffer and then, when finished, output all of it to the stream in one shot by writing the String to the file (stringBuf.toString())?
Currently I'm appending all the dynamic information to a StringBuffer and then writing this to the file:

Is the above the best/most efficient way to do this? Thanks
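The posted code excerpt didn't survive in this archive. A minimal sketch of the buffer-everything-then-write approach described above, assuming a BufferedWriter over an OutputStreamWriter and FileOutputStream as the later replies suggest (the file name and loop contents are placeholders, not the original code):

```java
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class BufferThenWrite {
    public static void main(String[] args) throws IOException {
        // Accumulate all the dynamically generated text first...
        StringBuffer stringBuf = new StringBuffer();
        for (int i = 0; i < 100; i++) {
            stringBuf.append("<tr><td>row ").append(i).append("</td></tr>\n");
        }
        // ...then write the whole thing to the file in one shot.
        BufferedWriter out = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream("report.html")));
        out.write(stringBuf.toString());
        out.close();
    }
}
```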
 
Rob Ross
Bartender
Posts: 2205
It is definitely more efficient to write to a file in as large a block as possible. The limiting factor in write speed is the latency of the hard drive: it takes time to locate the sector on the disk to write to, and this takes much, much longer than any other part of writing data. You want to take advantage of the current location of the disk head and write as much data as possible while it is there. Writing to a file a little at a time, across many separate loop iterations, is the slowest possible way to do it.
That being said, most modern OSes, including Windows, already buffer their file systems. That is, when you make a platform-specific OS file write call, you're not really writing directly to the disk; you're writing to a block of RAM, which is very fast. When that buffer is full, it is flushed to the drive.
On top of this, you can create your own buffer in Java. You are doing so by writing everything to a StringBuffer first. Another way to do it is to simply use a BufferedOutputStream. It will let you buffer your output while using the same OutputStream API, and you won't have to do anything special like writing to a StringBuffer.
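A minimal sketch of that BufferedOutputStream alternative (the file name and loop body are illustrative): the stream accumulates writes in memory and only hits the disk when its internal buffer fills or is flushed, so writing line-by-line does not mean one disk write per line.

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class StreamPerLine {
    public static void main(String[] args) throws IOException {
        // Writes go into the BufferedOutputStream's internal buffer;
        // the underlying FileOutputStream is only touched when it fills.
        BufferedOutputStream out = new BufferedOutputStream(
                new FileOutputStream("lines.txt"));
        for (int i = 0; i < 100; i++) {
            String line = "line " + i + "\n";
            out.write(line.getBytes());
        }
        out.close(); // flushes whatever is left in the buffer
    }
}
```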
There may be good reasons for doing what you are doing, especially if you need to pre-fill a buffer quickly, but then do some processing on that buffer before you finally send it to the OutputStream. I don't know if you are doing this or not.
You will have a slight performance hit when you copy the StringBuffer data to a String object via the toString() method. If the data in the StringBuffer is small, and you are not writing it that often, this extra step will probably not be noticeable. However, if you're filling your StringBuffer with a lot of data in a loop that gets executed many times, this extra step may become a performance bottleneck. It really depends on your particular programming situation.
I would take a look at using a BufferedOutputStream and compare the speed and code required to do it each way, just to get a sense of what some alternatives are.
 
Rick Reumann
Ranch Hand
Posts: 281
First, thanks Rob. Concerning this:

Originally posted by Rob Ross:
On top of this, you can create your own buffer in Java. You are doing so by writing everything to a StringBuffer first. Another way to do it is to simply use a BufferedOutputStream. It will let you buffer your output while using the same OutputStream API, and you won't have to do anything special like writing to a StringBuffer.
<SNIP>

You will have a slight performance hit when you copy the StringBuffer data to a String object via the toString() method. If the data in the StringBuffer is small, and you are not writing it that often, this extra step will probably not be noticeable. However, if you're filling your StringBuffer with a lot of data in a loop that gets executed many times, this extra step may become a performance bottleneck. It really depends on your particular programming situation.


Well, this is exactly what is happening: the StringBuffer is doing a LOT of appends. What I'm doing is iterating through a large ArrayList of beans and creating an .html file with tables that will then be e-mailed. Every iteration through the loop appends more data. Are you saying it might be better, in that case, to just write with a BufferedOutputStream during each iteration of the loop? That would probably make sense since, as you mentioned, the stream is buffered and will only write when it's full. Could I also improve performance by giving the buffer a larger size in the constructor?
 
Jim Yingst
Wanderer
Posts: 18671
For the code excerpt shown - the BufferedWriter is already taking care of all your buffering needs. There's no need to do additional buffering with StringBuffer. Moreover, it's a needless drain on memory. For really large files, insisting on putting the whole thing in a StringBuffer will have a notable effect on system performance, as you're consuming RAM. (Then again, 220k isn't really a very big file compared to available RAM on most systems, so maybe this doesn't matter.)
For the BufferedWriter, you can experiment with the size of the buffer to see what works best. In theory the bigger the buffer, the better the IO performance, as you can write more chars/bytes to the file at once. But past a certain point, further increases in buffer size have no noticeable effect, and only serve to eat into your memory usage.
Also, to write text to a file, it's both easier and slightly more efficient to use a FileWriter rather than an OutputStreamWriter wrapped around a FileOutputStream. It seems that FileWriter has been optimized for this particular combination, and eliminates some of the overhead of method invocation. Not a big deal. However, if you need to specify the character encoding to be used, then the OutputStreamWriter is absolutely necessary.
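A sketch contrasting the two (the file names are illustrative): FileWriter is the shorter equivalent of wrapping an OutputStreamWriter around a FileOutputStream when the platform default encoding is fine, while OutputStreamWriter is needed when the charset must be named explicitly.

```java
import java.io.BufferedWriter;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;

public class WriterChoices {
    public static void main(String[] args) throws IOException {
        // Platform default encoding: FileWriter is the simpler choice.
        BufferedWriter simple = new BufferedWriter(new FileWriter("default.txt"));
        simple.write("platform default encoding\n");
        simple.close();

        // Explicit encoding: only OutputStreamWriter lets you name a charset.
        BufferedWriter explicit = new BufferedWriter(
                new OutputStreamWriter(new FileOutputStream("utf8.txt"), "UTF-8"));
        explicit.write("explicit UTF-8 encoding\n");
        explicit.close();
    }
}
```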
 
Rick Reumann
Ranch Hand
Posts: 281
Thanks Jim and Rob. This is exactly the kind of information I wanted. With file I/O there seem to be so many different ways things will 'work', but they might not be the best. Good idea about using FileWriter.
Just out of curiosity, what is the ideal setting for the buffer size? I have no clue, since I've never set the size and have just let the default do its thing.
Thanks again for your replies, they've helped tremendously.
[ March 07, 2002: Message edited by: Rick Reumann ]
 
Rick Reumann
Ranch Hand
Posts: 281
The only possible problem I see with using FileWriter as opposed to FileOutputStream is that with FileWriter I can't change the size of the buffer underneath. Should I even concern myself with this?
 
Rob Ross
Bartender
Posts: 2205
You need to experiment to find the best size. Usually 500 bytes to 2K is about right, but your particular file system may do better with a different value. It's usually best to make the buffer size the same as your file system's block size, or a multiple of it.
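A crude way to run that experiment is to time the same workload through buffers of a few different sizes (the file name, chunk, and iteration count here are illustrative, and real timings will vary with the machine and the OS cache):

```java
import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class BufferSizeTiming {
    // Writes n small chunks through a BufferedOutputStream of the given
    // size and returns the elapsed wall-clock time in milliseconds.
    static long timeWrite(String file, int bufferSize, int n) throws IOException {
        long start = System.currentTimeMillis();
        BufferedOutputStream out = new BufferedOutputStream(
                new FileOutputStream(file), bufferSize);
        byte[] chunk = "some dynamically generated line of text\n".getBytes();
        for (int i = 0; i < n; i++) {
            out.write(chunk);
        }
        out.close();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws IOException {
        int[] sizes = {512, 2048, 8192};
        for (int size : sizes) {
            long ms = timeWrite("timing.tmp", size, 50000);
            System.out.println(size + " byte buffer: " + ms + " ms");
        }
    }
}
```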
 
Jim Yingst
Wanderer
Posts: 18671
With a FileWriter, you can still use a BufferedWriter (as you did in your original code - I didn't mention it again because I assumed you'd leave it in place). BufferedWriter allows you to specify a buffer size, the same as BufferedOutputStream does.
My experience is that the default size for a buffer is usually fine. The optimum value will probably vary from system to system anyway - for a cross-platform system you might want to consider making it a user-configurable parameter.
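One way to sketch that combination - a BufferedWriter with an explicit buffer size around a FileWriter, with the size supplied by a hypothetical system property rather than hard-coded:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class ConfigurableBuffer {
    public static void main(String[] args) throws IOException {
        // Hypothetical property name: read the buffer size from
        // -Dreport.bufferSize=... so it can be tuned per platform
        // without recompiling; fall back to 8192 chars.
        int bufferSize = Integer.getInteger("report.bufferSize", 8192).intValue();
        BufferedWriter out = new BufferedWriter(
                new FileWriter("configurable.html"), bufferSize);
        out.write("<html><body>report</body></html>\n");
        out.close();
    }
}
```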
As for general I/O performance issues - by coincidence, I just came across an article here that you might want to check out.
 