fred rosenberger wrote:you can use the -Xmx and -Xms flags to increase your heap size. This may not be a permanent fix, but it might let you limp through your initial struggles.
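For reference, those flags go on the java command line. The sizes below are illustrative only, and com.example.Main stands in for whatever the actual application class is:

```shell
# Illustrative values: start the heap at 256 MB, allow it to grow to 1 GB.
# com.example.Main is a placeholder for the real main class.
java -Xms256m -Xmx1024m com.example.Main
```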
Paul Clapham wrote:It isn't obvious to me why you have to read all 20,000 strings into memory before processing them. Couldn't you just do something like "read a string, process a string, until end of file"?
Corey McGlone wrote:I've considered this, as well. The reason I read all the Strings (which are numeric values 6 to 8 digits long, so they're not horribly large) into memory is so that I can get an accurate count of how many there are. That allows me to provide progress statistics. I read the numbers into a list and then set the original file reference to null, so the net increase in memory usage should be negligible.
Even still, like you said, 20,000 Strings of that size shouldn't really be causing that much of an issue.
John de Michele wrote:It would seem to me that keeping a running count would be just as accurate, and a much better use of resources. What happens when your file gets to 200,000 lines, or 2,000,000?
Steve Fahlbusch wrote:Use adaptive calculations. Bytes processed / bytes in file should act as a good approximation of the percent complete.
Get the size of the file from the OS.
Keep a count of the bytes read. Bytes read / Size of file * 100 = percent of completion.
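The approach above can be sketched as follows. This is a minimal illustration, not the original poster's code: it streams the file line by line instead of loading everything into memory, and approximates progress as bytes consumed over the file size reported by the OS. It assumes single-byte characters and '\n' line endings, which holds for the numeric data described earlier; the class and method names are made up.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ProgressReader {

    // Streams the file line by line, reporting approximate progress as
    // bytes consumed / total file size. Assumes ASCII content with '\n'
    // line endings. Returns the total byte count for verification.
    static long process(Path file) throws IOException {
        long totalBytes = Files.size(file);   // size of the file from the OS
        long bytesRead = 0;
        try (BufferedReader in =
                Files.newBufferedReader(file, StandardCharsets.US_ASCII)) {
            String line;
            while ((line = in.readLine()) != null) {
                bytesRead += line.length() + 1;          // +1 for the stripped '\n'
                long percent = 100 * bytesRead / totalBytes;
                // real per-line processing would go here; the sketch just reports
                System.out.println(line + "  (" + percent + "% done)");
            }
        }
        return bytesRead;
    }
}
```

Note that the percentage is an approximation: multi-byte encodings or '\r\n' line endings would throw the byte count off slightly, but for a progress bar that rarely matters.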
Does anyone have an explanation of how not closing a CallableStatement can cause an out-of-memory error?
Corey McGlone wrote:Just to close up this thread - I found my issue today. As usual, the problem lay in code that I wasn't even looking at. It was inside this method:
Inside that method, I was creating a CallableStatement and using it to invoke a stored procedure on the database I'm connected to. After processing, I was properly closing the ResultSet, but I had forgotten to close the CallableStatement. Adding cStmt.close() fixed all my memory issues.
Thanks for all the help, everyone.