As a design suggestion, perhaps you could reorganize the query so that you don't have to fetch 300,000 rows? Depending on the size of each record, that could be a substantial amount of data, and fetching it all at once could seriously strain network traffic and JVM memory. I would suggest a design (if possible) that has you fetching far fewer records at a time.
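As a rough illustration of what "fetching fewer records at a time" could look like, here is a minimal sketch of paging through a large table by key range instead of selecting everything in one go. It assumes the table has a monotonically increasing numeric key; the URL, table, and column names are placeholders, not anything from the original post.

import java.sql.*;

public class PagedFetch {
    public static void main(String[] args) throws SQLException {
        final int PAGE_SIZE = 1000;
        // Connection details are hypothetical; adjust for your own database.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:sybase:Tds:dbhost:5000/mydb", "user", "password")) {
            long lastId = 0;
            boolean more = true;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id")) {
                ps.setFetchSize(PAGE_SIZE);   // hint to the driver to stream rows in chunks
                ps.setMaxRows(PAGE_SIZE);     // cap each query at one page of rows
                while (more) {
                    ps.setLong(1, lastId);
                    more = false;
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            lastId = rs.getLong("id"); // remember where this page ended
                            process(rs.getString("payload"));
                            more = true;
                        }
                    }
                }
            }
        }
    }

    private static void process(String payload) {
        // application-specific work on each row
    }
}

This keeps memory flat regardless of table size, since only one page of rows is ever held at a time.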
>> There are 300000 records in the database table.
>> But when I query this table for all records I get 99999 records only.
> Is it really a jConnect driver limitation, or am I missing some tweaks?

It is most likely a limitation of the jConnect driver.
Originally posted by Raj Murthi: Ya, reorganizing the query is the workaround
I'm not sure this really constitutes a 'workaround'. As I said earlier, even if the query were allowed to execute, the performance impact on most systems would be awful. Customers would be calling you every five minutes saying the system is hung.
The database world is less pure than the Java world in that you always have to take into account the performance impact of every query you write. (You have to do this in Java too, but it's less common since the runtime can organize a lot of it for you.) I'll give you a good example of when I encountered this in real life...
I was using a provided web architecture that performed inserts into a database table and, as input, took an array of records to insert. So, I passed in about 200,000 records. I assumed the architecture would be smart enough to organize this for me, but it was not. It was opening a separate database connection for each record, which resulted in terrible performance: it took *20 minutes* to import all the records.
Next, I tried forcing it to do the entire insert in a single database call. This was better in that it was only one call, but it pushed a ton of data at once. That caused bandwidth problems, since the server had to keep a connection open for a very long time, and the web server I was using often crashed with timeout errors. Even when it succeeded, it still took about *5 minutes*.
In a final attempt, I wrote a chunking mechanism that inserted 1,000 records at a time, making it 200 calls of 1,000 records each. This took *25 seconds* to insert all 200,000 records and never crashed. A sketch of that approach is below.
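For concreteness, here is a minimal sketch of that kind of chunked insert using JDBC batching; the table and column names and the Record type are made up for illustration, and the original architecture may have worked differently.

import java.sql.*;
import java.util.List;

public class ChunkedInsert {
    // Placeholder record type; substitute whatever your application uses.
    static class Record {
        long id;
        String payload;
        Record(long id, String payload) { this.id = id; this.payload = payload; }
    }

    // Insert all records over one connection, flushing a batch every 1,000 rows.
    static void insertInChunks(Connection conn, List<Record> records) throws SQLException {
        final int CHUNK_SIZE = 1000;
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO big_table (id, payload) VALUES (?, ?)")) {
            int pending = 0;
            for (Record r : records) {
                ps.setLong(1, r.id);
                ps.setString(2, r.payload);
                ps.addBatch();
                if (++pending == CHUNK_SIZE) {
                    ps.executeBatch();   // one round trip per 1,000 rows
                    pending = 0;
                }
            }
            if (pending > 0) {
                ps.executeBatch();       // flush the final partial chunk
            }
        }
    }
}

The chunk size is a tuning knob: large enough that per-call overhead is amortized, small enough that no single call holds the connection long enough to hit timeouts.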
subject: Unable to Iterate over large number of records