OutOfMemory Error and Statement.close()

 
Karthik Prabu
Ranch Hand
Posts: 48
We have a standalone application that does intensive DB update operations. Pseudo code is below.
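A minimal sketch of the kind of loop being described (the original snippet did not survive, so the class, method, and table names below are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public class MessageUpdater {

    // One Statement per message, roughly 175,000 messages, and no
    // Statement.close() anywhere in the loop.
    public void processAll(List<String> messageIds, String url) throws SQLException {
        Connection con = DriverManager.getConnection(url);
        for (String id : messageIds) {
            runQuery(con, id);          // creates a new Statement on every call
        }
        con.close();                    // the Statements are only released here
    }

    private void runQuery(Connection con, String id) throws SQLException {
        Statement stmt = con.createStatement();
        stmt.executeUpdate("UPDATE messages SET status = 'PROCESSED' WHERE id = '" + id + "'");
        // no stmt.close(): driver-side resources stay allocated for every call
    }
}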


We are getting OutOfMemory errors when there are more than 175,000 messages. I assume this could be because the Statement objects are not closed and so are not being GC'ed. Please let me know if this is correct. Before fixing the code I need to confirm this. Please advise.
 
stu derby
Ranch Hand
Posts: 333

Originally posted by Karthik Prabu:
I assume this could be because the Statement objects are not closed and so are not being GC'ed.



Possibly not. If your driver conforms to the JDBC specs, then when you close the connection, the Statement and the ResultSet should be closed for you by the driver. (It never hurts to explicitly close them though).

More likely you're running out of memory while working on the ResultSet, although it depends somewhat on what database/driver you are using. That might be an easy fix or a hard fix, depending on whether you need the entire ResultSet in memory at once and how you process it. Some drivers (such as Postgres) will by default bring the entire ResultSet into memory when the statement is executed or when the first row is fetched. Other drivers don't work that way by default, and will only retrieve a fixed number of rows at a time into memory, buffering the retrieval. You can suggest to the driver that the default buffering be overridden by using Statement.setFetchSize(). However, as the JavaDocs note, this is only a hint to the driver, and the JDBC spec does not require the driver to honor it. Most drivers do honor the hint when they can, but there are some other options that can make honoring the hint impossible for some databases.
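A minimal sketch of that hint, assuming a Postgres driver (connection details, table, and fetch size are placeholders; the Postgres driver also wants autocommit off and a forward-only cursor before it will stream):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeHint {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "password");
        try {
            con.setAutoCommit(false);   // Postgres only streams with autocommit off
            Statement stmt = con.createStatement();
            stmt.setFetchSize(500);     // hint: buffer roughly 500 rows at a time
            ResultSet rs = stmt.executeQuery("SELECT id, payload FROM messages");
            while (rs.next()) {
                String payload = rs.getString("payload");
                // process one row at a time; earlier rows can be garbage collected
            }
            rs.close();
            stmt.close();
        } finally {
            con.close();
        }
    }
}

If the driver ignores the hint, the code still runs; it simply falls back to the driver's default buffering.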

Of course, the other aspect of making your code work for large numbers of rows is making sure that whatever happens as you process each row doesn't have to all be in memory at once...
 
Marilyn de Queiroz
Sheriff
Posts: 9109
If you run one query and it returns 175,000 rows in the resultset, why would the whole resultset not be in memory at once? Where else would it be?
 
author & internet detective
Posts: 41860
Karthik,
From looking at your code, it looks like there are 175,000 messages rather than 175,000 records in the resultset, which would mean runQuery() gets called thousands of times. In that case, you should definitely close the statement at the end of runQuery(), preferably in a finally block.

While closing the connection should close the statement, this doesn't happen until after the 175,000 database calls - very memory intensive.
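A minimal sketch of that fix (the SQL and method signature are placeholders): create the Statement inside runQuery(), do the update, and close it in a finally block before returning.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class RunQueryFixed {

    // Each of the ~175,000 calls now releases its Statement immediately,
    // instead of waiting for the Connection to be closed at the very end.
    static void runQuery(Connection con, String messageId) throws SQLException {
        Statement stmt = null;
        try {
            stmt = con.createStatement();
            stmt.executeUpdate(
                    "UPDATE messages SET status = 'PROCESSED' WHERE id = '" + messageId + "'");
        } finally {
            if (stmt != null) {
                try {
                    stmt.close();
                } catch (SQLException ignore) {
                    // nothing useful to do if close itself fails
                }
            }
        }
    }
}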
 
stu derby
Ranch Hand
Posts: 333

Originally posted by Marilyn de Queiroz:
If you run one query and it returns 175,000 rows in the resultset, why would the whole resultset not be in memory at once? Where else would it be?



A ResultSet is not a collection; it's a wrapper around a connection to the database. The result of the query is contained in the database, and the ResultSet is required to give you a row at a time when ResultSet.next() is called; otherwise JDBC doesn't impose any requirement on how the driver buffers the transfer of data from the database to your Java program.

Read the javadocs for Statement, especially Statement.setFetchSize()
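One way to see the difference (table and method names are made up): copying every row into a collection forces all 175,000 rows onto the heap at once, while handling each row inside the next() loop only ever needs the current row.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class StreamVsCollect {

    // Risky: materializes every row in the JVM heap before any work is done.
    static List<String> loadAll(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT payload FROM messages");
        List<String> all = new ArrayList<String>();
        while (rs.next()) {
            all.add(rs.getString(1));   // 175,000 strings held at once
        }
        rs.close();
        stmt.close();
        return all;
    }

    // Safer: work on one row at a time as the cursor advances.
    static void processEach(Connection con) throws SQLException {
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT payload FROM messages");
        while (rs.next()) {
            String payload = rs.getString(1);
            // handle this row, then let it go out of scope
        }
        rs.close();
        stmt.close();
    }
}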
 
Ranch Hand
Posts: 1170

Originally posted by Karthik Prabu:
We have a standalone application that does intensive DB update operations. Pseudo code is below.

We are getting OutOfMemory errors when there are more than 175,000 messages. I assume this could be because the Statement objects are not closed and so are not being GC'ed. Please let me know if this is correct. Before fixing the code I need to confirm this. Please advise.




Well, your pseudo code does not contain any pseudo opening and closing of statements, so what was its purpose?

Not closing your statements is VERY bad. The driver will keep the data available until the statement is closed. If you let the GC close the statement, well, that's just terrible programming. It's a safety net that Sun should not have included.

Also note that the GC can collect the statement object without running its finalizer right away, and much of the data is probably sitting in the driver waiting for the statement to tell it that it can be released. So GC'ing the statement is not actually what releases the memory directly; that happens when the GC gets around to running the finalizer, which can explain your OOM error. The objects are guaranteed to be collected before an OOM, but their finalizers are not guaranteed to be run before an OOM.

If you want it done right you gotta do it yourself.