
Fetching large number of records

 
Arjun Shastry
Ranch Hand
Posts: 1898
Hi,
I am creating a Swing application that connects to a log database table, calculates statistics over the fetched records, and shows them to the user. The number of records fetched depends on a start time and an end time. The query is simple:
"SELECT X,Y,X,W,P FROM Table WHERE Entrytimestamp BETWEEN StartTime AND EndTime"
This query sometimes returns 300,000 records or even more. I am setting the fetch size to 1000 (setFetchSize) and running the jar with -Xms512M -Xmx1024M, but for 300,000 records it still takes more than 15-20 minutes.
Is there any way I can reduce this time?
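[Editor's note: if the rows do end up being fetched into Java, the stats can at least be computed while streaming the result set instead of collecting all 300,000 rows first. A minimal sketch of a running aggregate is below; the class name and the simulated input loop are made up for illustration, and in the real application each value would come from rs.getDouble(...) inside the while (rs.next()) loop.]

```java
import java.util.stream.IntStream;

public class StreamingStats {
    // Running aggregates: the rows themselves are never kept in memory.
    long count = 0;
    double sum = 0;
    double min = Double.POSITIVE_INFINITY;
    double max = Double.NEGATIVE_INFINITY;

    void accept(double x) {
        count++;
        sum += x;
        if (x < min) min = x;
        if (x > max) max = x;
    }

    double mean() {
        return count == 0 ? Double.NaN : sum / count;
    }

    public static void main(String[] args) {
        StreamingStats stats = new StreamingStats();
        // Simulate 300,000 rows; in the real app this loop would be
        // the ResultSet iteration, with setFetchSize controlling how
        // many rows the driver buffers per round trip.
        IntStream.rangeClosed(1, 300000).forEach(stats::accept);
        System.out.println(stats.count + " rows, mean=" + stats.mean());
    }
}
```

This keeps the heap flat regardless of row count, though it does not reduce the time spent moving 300,000 rows over the network.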
 
Ashwin Sridhar
Ranch Hand
Posts: 277
Do your columns have indexes?
 
Martin Vajsar
Sheriff
Posts: 3752
Are you fetching all these records just to calculate the stats? If that is the case, calculate the stats in the database instead. It's the fastest way to go about it.
 
Arjun Shastry
Ranch Hand
Posts: 1898
Two columns have indexes. Currently we compute the stats on the DB side using SQL queries (the AVG function etc.), but for each set of records we have to run 14 queries: the logs are of different types, so we have to run the query once per type. To minimize this, I am planning to do the stats in Java.
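[Editor's note: the 14 per-type queries can likely be collapsed into a single GROUP BY query, letting the database produce one result row per log type in one pass. A sketch follows; the table and column names (log_table, log_type, x) are invented for illustration, and the in-memory grouping only simulates what the database would compute.]

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupByExample {
    // One query with GROUP BY instead of 14 per-type queries.
    // Table and column names here are hypothetical.
    static final String SQL =
        "SELECT log_type, COUNT(*), AVG(x), MIN(x), MAX(x) " +
        "FROM log_table " +
        "WHERE entry_timestamp BETWEEN ? AND ? " +
        "GROUP BY log_type";

    record Row(String type, double x) {}

    // What the database does for us: one scan, one aggregate per type.
    static Map<String, Double> avgByType(List<Row> rows) {
        return rows.stream().collect(
            Collectors.groupingBy(Row::type,
                                  Collectors.averagingDouble(Row::x)));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("ERROR", 10), new Row("ERROR", 20),
            new Row("WARN", 5));
        Map<String, Double> avgs = avgByType(rows);
        System.out.println("ERROR avg: " + avgs.get("ERROR"));
        System.out.println("WARN avg: " + avgs.get("WARN"));
    }
}
```

The result set then contains only one row per log type (14 rows total) instead of 300,000 raw rows, which addresses both the query count and the transfer time.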
 
Martin Vajsar
Sheriff
Posts: 3752
I think this is a step in the wrong direction, performance-wise. Fetching huge amounts of data just to compute some statistics on them puts a big strain on resources (especially the network).

Which database are you using? Some databases have really powerful tools for doing all kinds of aggregations directly in SQL. Even if yours doesn't, you might be better off with a stored procedure.

Fourteen queries to get the statistics doesn't sound that bad to me. Keep in mind that databases usually cache recently accessed data. Anyway, do the source rows of these statistics "overlap" (i.e. are some rows processed by more than one query)? If they don't, it may well be the optimal solution. If they do, perhaps you could compute partial aggregations in the database, fetch them, and combine them in Java to produce the final statistics. It all depends on what exactly you need to compute, of course.
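[Editor's note: a minimal sketch of the partial-aggregation idea, under the assumption that the statistic wanted is an average. The database returns only a count and a sum per partition (a handful of rows), and Java combines them; the class and record names are invented for illustration.]

```java
import java.util.List;

public class PartialAggregates {
    // One partial aggregate per partition, as the database might return
    // from a hypothetical "SELECT type, COUNT(*), SUM(x) ... GROUP BY type".
    record Partial(long count, double sum) {}

    // Combining partials gives the overall average without ever
    // fetching the raw rows into Java.
    static double overallAverage(List<Partial> parts) {
        long n = 0;
        double s = 0;
        for (Partial p : parts) {
            n += p.count();
            s += p.sum();
        }
        return n == 0 ? Double.NaN : s / n;
    }

    public static void main(String[] args) {
        List<Partial> parts = List.of(
            new Partial(100, 1000.0),   // e.g. one log type
            new Partial(300, 6000.0));  // e.g. another log type
        // (1000 + 6000) / (100 + 300) = 17.5
        System.out.println(overallAverage(parts));
    }
}
```

Note that this decomposition works for sums, counts, means, and (with a sum-of-squares column) variances, but not for order statistics like medians, which generally cannot be combined from partials.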
 