I'm writing an application that needs to process some number of rows from a database (in the thousands) in a multi-threaded application. I do not want to go to the database once and retrieve all the rows; instead I'd like to implement a simple cache of size y to store the data, and make a subsequent call to the database for another y rows once those have all been processed. Is this a good approach, and how would I go about implementing it? I was thinking about using an ArrayList and tailoring it to my needs.
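A minimal sketch of the batched approach described above. Everything here is hypothetical illustration: `BatchedReader` is a made-up class name, and the `loader` callback stands in for the actual database call (e.g. a SELECT with LIMIT/OFFSET), returning at most `batchSize` rows per trip.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

// Hedged sketch: fetch rows in batches of `batchSize` rather than all at once.
// `loader` takes (offset, batchSize) and returns at most batchSize rows;
// in a real application it would run a query against the database.
public class BatchedReader<T> {
    private final BiFunction<Integer, Integer, List<T>> loader;
    private final int batchSize;
    private int offset = 0;
    private List<T> cache = new ArrayList<>();
    private int pos = 0;

    public BatchedReader(BiFunction<Integer, Integer, List<T>> loader, int batchSize) {
        this.loader = loader;
        this.batchSize = batchSize;
    }

    /**
     * Returns the next row, refilling the cache from the loader when the
     * current batch is exhausted, or null when no more rows are available.
     * Synchronized so multiple worker threads can share one reader.
     */
    public synchronized T next() {
        if (pos >= cache.size()) {
            cache = loader.apply(offset, batchSize);
            offset += cache.size();
            pos = 0;
            if (cache.isEmpty()) {
                return null; // loader has no more rows
            }
        }
        return cache.get(pos++);
    }
}
```

Worker threads would simply call `next()` in a loop until it returns null, so at most one batch of y rows sits in memory at a time.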
CachedRowSet looks cool. I'd never seen that before. Thanks!
I'm pretty sure I've read that all JDBC implementations are lazy about transferring data from the database into Java memory. When you select a zillion rows, the driver fetches some subset of them into the ResultSet. As you work through the ResultSet, it pulls more rows into memory. If you're set to scroll forward only, the ResultSet can throw away rows you've already seen.
Anybody know if my recollection is correct? Does that also do just what the poster wants?
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Stan: I'm sure it works that way in at least some implementations, and probably most of them. I'd be surprised if that behavior is actually guaranteed anywhere, though. It seems like the sort of thing they'd leave to implementations to decide. Statement, PreparedStatement and ResultSet are just interfaces, after all, so each implementation can provide completely different code for this. I don't see anything in the API that would prevent an implementor from completely loading the data for a ResultSet before returning from executeQuery(), for example. Most implementors wouldn't do that because it's inefficient, but you never know. I think. Maybe someone else has better info on this?
"I'm not back." - Bill Harding, Twister
CachedRowSet still looks better. The docs have an example that claims to keep no more than 100 rows in memory, which is just what the OP was looking for, I think. But they don't say what's happening with the ResultSet that must be buried inside somewhere.
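For reference, the paging behavior mentioned above can be sketched with the standard `javax.sql.rowset.CachedRowSet` API (`setPageSize` / `nextPage`). The JDBC URL, credentials, and table name below are hypothetical placeholders; this is an illustration of the pattern, not a tested configuration.

```java
import java.sql.SQLException;
import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

// Sketch: process all rows while keeping at most pageSize rows in memory.
// URL, credentials, and table name are hypothetical.
public class PagedRowSetExample {
    public static void processAll(String url, String user, String pass) throws SQLException {
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
        crs.setUrl(url);
        crs.setUsername(user);
        crs.setPassword(pass);
        crs.setCommand("SELECT id, name FROM my_table"); // hypothetical table
        crs.setPageSize(100); // at most 100 rows cached at a time
        crs.execute();        // loads the first page
        do {
            while (crs.next()) {
                // process one row, e.g. crs.getInt("id")
            }
        } while (crs.nextPage()); // fetch the next 100 rows, if any
    }
}
```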
Try the setFetchSize(int rows) on the ResultSet. As far as I know, the JDBC specification talks about giving caching "hints" to the implementation. I have experimented with varying cache size on a performance related problem and have found quite significant benefits. I was using Oracle 9i and the OCI driver.
setFetchSize(int rows) - Gives the JDBC driver a hint as to the number of rows that should be fetched from the database when more rows are needed for this ResultSet object. If the fetch size specified is zero, the JDBC driver ignores the value and is free to make its own best guess as to what the fetch size should be. The default value is set by the Statement object that created the result set. The fetch size may be changed at any time.
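Putting that hint to use might look like the following sketch. It assumes an already-open `Connection` and a hypothetical table name; since the fetch size is only a hint, the actual round-trip behavior depends on the driver.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch: stream rows with a fetch-size hint. Table name is hypothetical.
public class FetchSizeExample {
    public static void streamRows(Connection con) throws SQLException {
        try (Statement stmt = con.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY)) {
            stmt.setFetchSize(100); // hint: fetch ~100 rows per round trip
            try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM my_table")) {
                while (rs.next()) {
                    // process one row; forward-only mode lets the driver
                    // discard rows that have already been read
                }
            }
        }
    }
}
```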
With the suggested solutions of keeping a cache in a ResultSet or a CachedRowSet, I see a problem: what happens if the table is modified by a query from outside the JVM instance? I'm asking about synchronization between the cached data and the database table.
Currently I'm working with a static TreeMap object and storing bean objects in it. This solution works very well, but I have no way to synchronize it.
How can we synchronize the database and the cache?
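One simple approach (a sketch, not a full answer to the consistency question): bound how stale a cached entry can get, and re-read it from the database once it is too old. `RefreshingCache` and its `dbLoader` callback are hypothetical names; the loader stands in for a SELECT by key. This only limits staleness, it does not make the cache transactionally consistent with the table.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hedged sketch: a read-through cache that reloads an entry from the
// database once it is older than maxAgeMillis. Bounds staleness only;
// it does not guarantee consistency with concurrent external updates.
public class RefreshingCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long loadedAt;
        Entry(V value, long loadedAt) { this.value = value; this.loadedAt = loadedAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader; // stands in for a SELECT by key
    private final long maxAgeMillis;

    public RefreshingCache(Function<K, V> dbLoader, long maxAgeMillis) {
        this.dbLoader = dbLoader;
        this.maxAgeMillis = maxAgeMillis;
    }

    public V get(K key) {
        long now = System.currentTimeMillis();
        Entry<V> e = map.get(key);
        if (e == null || now - e.loadedAt > maxAgeMillis) {
            V fresh = dbLoader.apply(key); // go back to the database
            map.put(key, new Entry<>(fresh, now));
            return fresh;
        }
        return e.value;
    }
}
```

The `ConcurrentHashMap` replaces the unsynchronized static TreeMap mentioned above, so concurrent readers are at least thread-safe even if they may briefly see stale data.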
How about using an open-source tool like Hibernate for object/relational persistence, instead of writing your own implementation of all the synchronization between the database and the objects in memory? Correct me if I've misunderstood your question.