
JDBC - Result Set to Array List

Suhas Mohamedali

Joined: Jun 05, 2012
Posts: 29

My requirement is to improve the iteration time on the fetched result set.
The query fetches one million records.

The query itself executes in under a second, but iterating over the result set and converting it to a list of POJOs takes around 90 seconds.

Please help to reduce the execution time.
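For context, a typical conversion loop of this kind might look like the following. This is a reconstructed sketch, not the original snippet (which was not preserved); the POJO name CProcedureFeeRec and the getLong(1)/getString(2) column accesses are inferred from the replies below, and the field names are assumptions.

```java
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class FeeLoader {

    // Assumed POJO shape, inferred from the constructor call shown in a reply below.
    public static class CProcedureFeeRec {
        private long feeId;
        private String feeName;

        public void setFeeId(long feeId) { this.feeId = feeId; }
        public void setFeeName(String feeName) { this.feeName = feeName; }
        public long getFeeId() { return feeId; }
        public String getFeeName() { return feeName; }
    }

    // The slow part described above: walk the ResultSet and build one POJO per row.
    public static List<CProcedureFeeRec> toList(ResultSet result) throws Exception {
        List<CProcedureFeeRec> cprocedureFee = new ArrayList<>();
        while (result.next()) {
            CProcedureFeeRec rec = new CProcedureFeeRec();
            rec.setFeeId(result.getLong(1));
            rec.setFeeName(result.getString(2));
            cprocedureFee.add(rec);
        }
        return cprocedureFee;
    }
}
```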

RatiKanta pal
Ranch Hand

Joined: Nov 13, 2011
Posts: 83

Hi Suhas,

You can use a parameterized constructor instead of setter methods. This reduces the method-call overhead inside the loop.


rec = new CProcedureFeeRec(result.getLong(1),result.getString(2),........);
cprocedureFee.add( rec );

Suhas Mohamedali

Joined: Jun 05, 2012
Posts: 29
Dear Pal,

Thanks for your reply.

I tried that already and it had no effect.

Campbell Ritchie

Joined: Oct 13, 2005
Posts: 46397
How long does the query run on its own, without creating the million objects from the ResultSet?
chris webster

Joined: Mar 01, 2009
Posts: 2292

Do you really need to fetch one million records?

Bill Clar
Ranch Hand

Joined: Sep 21, 2006
Posts: 163

My reply won't be popular but...

For large queries I skip the ArrayList or Collections object and return the ResultSet directly to the JSP page. *ducks for cover*

Paul Clapham

Joined: Oct 14, 2005
Posts: 19973

Displaying one million rows on a JSP page isn't likely to make you popular with the users of the page either.

But then I didn't see anything in the original question about JSP, so I'll take your suggestion with a grain of salt.
Jeanne Boyarsky
author & internet detective

Joined: May 26, 2003
Posts: 33129

It is likely that your biggest time sink is the network transfer of those million objects. Or running out of memory. Do you have enough memory allocated to your JVM to hold all this data?

What are you doing with the data? If writing it to a file, you don't need to hold it all in memory at once. Get a batch, write to file, get another batch, etc. And if showing to the user, you certainly don't need it all in memory. That is what paging is for!
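The batch-then-write pattern suggested above can be sketched as follows. This is a minimal illustration with hypothetical names; it assumes the rows are written out as lines of text.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.List;

public class BatchWriter {

    // Write one fetched batch and let it go out of scope, instead of
    // accumulating all million records in memory before writing.
    public static void writeBatch(List<String> batch, Writer out) throws IOException {
        for (String row : batch) {
            out.write(row);
            out.write('\n');
        }
        out.flush();
    }

    // Caller pattern (the loop structure suggested above, in outline):
    //   while (more rows remain) {
    //       List<String> batch = fetchNextBatch(resultSet, 500);  // hypothetical helper
    //       writeBatch(batch, fileWriter);
    //   }
}
```

Each batch becomes garbage-collectible as soon as it is written, so peak memory is bounded by the batch size rather than the full result set.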

Suhas Mohamedali

Joined: Jun 05, 2012
Posts: 29
Thank you all for your suggestions.

The requirement is:

While starting up the server (our own TCP/IP server), the data from some tables is fetched and stored in a list/vector. This acts as a virtual table throughout the user session, so the application does not hit the DB after server initialization.

I know better implementations are available, but we need to use this approach because the architecture demands it (it is a pretty old application). We don't have time to re-implement this using JBoss and EJB. It is a Java Swing application (coded in 2002).

Please help to solve this.
Martin Vajsar

Joined: Aug 22, 2010
Posts: 3733

I'd assume that the server is started and then runs for days, so the 90 second startup should not matter that much. I mean, it's still under two minutes!

You should also think carefully about storing the data in a list. If you search for records by simply iterating over the list, a DB call that uses an index might actually be faster. I hope you'll at least be using binary search. Perhaps you should use a HashMap instead, though.
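To make the lookup point concrete, here is a sketch contrasting the two approaches. The Fee record type and its id key are hypothetical stand-ins for the real table data.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FeeCache {

    // Hypothetical record type standing in for the real POJO.
    public static class Fee {
        final long id;
        final String name;
        public Fee(long id, String name) { this.id = id; this.name = name; }
    }

    // O(n) per lookup: scan the whole list every time.
    public static Fee findInList(List<Fee> fees, long id) {
        for (Fee f : fees) {
            if (f.id == id) return f;
        }
        return null;
    }

    // O(1) per lookup: index the list by its key once, then use get().
    public static Map<Long, Fee> index(List<Fee> fees) {
        Map<Long, Fee> byId = new HashMap<>();
        for (Fee f : fees) {
            byId.put(f.id, f);
        }
        return byId;
    }
}
```

With a million records, the one-time indexing cost is quickly repaid by constant-time lookups for the rest of the session.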

There is often a low-hanging fruit: the fetch size (see java.sql.Statement#setFetchSize(int)). It is possible you're using a fetch size which is too low. Larger fetch sizes obviously result in fewer database calls, but can have other undesired effects. A fetch size between 100 and 500 might work best. You should experiment with various fetch sizes (even outside that range) to see which gives you the best results.
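A sketch of the setFetchSize call, plus the arithmetic behind why it matters. The query method is illustrative; the round-trip count is only a rough model, since exact behavior is driver-specific.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeDemo {

    // Ask the driver to transfer rows in larger batches per network round trip.
    public static ResultSet query(Connection conn, String sql, int fetchSize)
            throws SQLException {
        Statement stmt = conn.createStatement();
        stmt.setFetchSize(fetchSize);   // a hint; drivers may ignore or cap it
        return stmt.executeQuery(sql);
    }

    // Rough model: about ceil(rows / fetchSize) round trips to the database.
    public static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }
}
```

For one million rows, a fetch size of 10 means roughly 100,000 round trips versus 2,000 at a fetch size of 500, which can easily dominate a 90-second run.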

If this does not help, you need to do the hard work: find the bottleneck. It could be the network, or the query performance itself, or perhaps the memory in the server (hopefully not, as if this were the case, the server wouldn't have much space to do anything other than keep the data). Or it could even be the IO in the database.

If it is a complex query, perhaps it can be tuned for performance. You might gauge this by storing the results of this query in a table and modifying the code to fetch records (as a simple SELECT *) from this table. This could give you some idea of how much time you could save by optimizing the query. (Note: it does not matter that the stmt.executeQuery( sql ); returns in less than a second. The database retrieves the data as it is needed (fetched by the client), and for this amount of data, that is where the bulk of the processing happens.)

You could also try to create a million mock objects to see whether this is the problem; fetch just one representative record from the database and create a million instances from it, without calling next().
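That mock-object experiment could be sketched like this. The Rec type is a hypothetical stand-in; in practice you would copy the fields of one real fetched record into each instance.

```java
import java.util.ArrayList;
import java.util.List;

public class MockObjectTest {

    // Hypothetical stand-in for the real record type.
    public static class Rec {
        final long id;
        final String name;
        public Rec(long id, String name) { this.id = id; this.name = name; }
    }

    // Build a million instances from one representative record, without
    // touching the database, to isolate pure object-creation cost from
    // the cost of fetching.
    public static List<Rec> buildMocks(Rec template, int count) {
        List<Rec> list = new ArrayList<>(count);
        for (int i = 0; i < count; i++) {
            list.add(new Rec(template.id, template.name));
        }
        return list;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        List<Rec> mocks = buildMocks(new Rec(1L, "sample"), 1_000_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(mocks.size() + " mock records built in " + ms + " ms");
    }
}
```

If this runs in a second or two, object creation is not your bottleneck and you can focus on the fetch path instead.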

Some databases/JDBC drivers offer tracing and diagnostic tools; these might reveal whether the problem lies in the network.