
Split ResultSet

 
Ashwin Sridhar
Ranch Hand
Posts: 277
Flex Oracle Spring
Hi,

Suppose I am populating my front-end with 2000 records; the user may not see all 2000 records immediately. So I would prefer to populate my front-end in batches of, say, 1000 records. I am trying to split my resultset into smaller chunks and display them in a paged manner. I tried using CachedRowSet without much success. Is there any way of splitting the resultset?
 
Jeanne Boyarsky
author & internet detective
Marshal
Posts: 34207
341
Eclipse IDE Java VI Editor
Paging doesn't usually split the resultset. It usually controls the query itself so that it returns only X records at a time. For example, in Oracle, we use ROWNUM to control which rows are returned.
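A minimal sketch of the ROWNUM approach Jeanne describes. The table and column names (orders, id, name) are made up for illustration; with a real connection you would run the generated SQL through a PreparedStatement.

```java
// Classic Oracle ROWNUM pagination: the inner query orders and numbers
// the rows, the outer query keeps only the requested window. Table and
// column names here are hypothetical.
public class RownumPager {

    // Builds the paged query for a 1-based page number and a page size.
    static String pagedQuery(int page, int pageSize) {
        int last = page * pageSize;        // highest row to keep
        int first = last - pageSize + 1;   // lowest row to keep
        return "SELECT * FROM ("
             + " SELECT t.*, ROWNUM rn FROM ("
             + "  SELECT id, name FROM orders ORDER BY id"
             + " ) t WHERE ROWNUM <= " + last
             + ") WHERE rn >= " + first;
    }

    public static void main(String[] args) {
        // Page 2 of 500 rows covers rows 501..1000.
        System.out.println(pagedQuery(2, 500));
    }
}
```

Note that ROWNUM is assigned before ORDER BY is applied, which is why the ordering has to happen in the innermost query.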
 
Ashwin Sridhar
Ranch Hand
Posts: 277
Flex Oracle Spring
Hi Jeanne,

Using ROWNUM like that would make me hit the DB more often. I would prefer storing the data on the server and accessing it from there. By the time all of that data has been viewed, I would fetch more data and keep it ready. That reduces the waiting time drastically.
 
Jeanne Boyarsky
author & internet detective
Marshal
Posts: 34207
341
Eclipse IDE Java VI Editor
At the cost of the first hit. The user may not view all 2000 records, but they had to wait for you to retrieve them all up front.
 
Ashwin Sridhar
Ranch Hand
Posts: 277
Flex Oracle Spring
Hi Jeanne,

That is exactly what I am trying to figure out: a way to reduce that wait time. My query would return 2000 records, but as soon as I reach, say, 500 records, I would send them back to my grid while the fetching of the remaining 1500 records happens simultaneously.

I am not sure if such a concept exists. I am trying to brainstorm various possibilities.
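The "flush the first chunk while the rest keeps loading" idea can at least be sketched as a producer-consumer: a fetcher thread pushes rows into a queue, and the consumer can start rendering as soon as the first batch arrives. The row source below is simulated; with JDBC you would iterate the ResultSet inside the fetcher thread instead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch only: a background thread produces rows in fixed-size chunks,
// the caller consumes each chunk as soon as it is ready.
public class ChunkedFetch {

    static List<List<Integer>> fetchInChunks(int totalRows, int chunkSize)
            throws InterruptedException {
        BlockingQueue<List<Integer>> queue = new ArrayBlockingQueue<>(16);
        Thread fetcher = new Thread(() -> {
            List<Integer> chunk = new ArrayList<>();
            for (int row = 1; row <= totalRows; row++) {   // stands in for rs.next()
                chunk.add(row);
                if (chunk.size() == chunkSize) {
                    queue.add(new ArrayList<>(chunk));     // hand a full chunk over
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) queue.add(new ArrayList<>(chunk)); // trailing partial chunk
        });
        fetcher.start();

        List<List<Integer>> received = new ArrayList<>();
        int expected = (totalRows + chunkSize - 1) / chunkSize;
        for (int i = 0; i < expected; i++) {
            received.add(queue.take());                    // consumer "renders" each chunk
        }
        fetcher.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        List<List<Integer>> chunks = fetchInChunks(2000, 500);
        System.out.println(chunks.size() + " chunks, first of size " + chunks.get(0).size());
    }
}
```

Whether this actually helps depends on the point Wendy makes below: the driver still has to pull every row across the wire, so the total work is unchanged; only the perceived latency of the first page improves.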
 
Martin Vajsar
Sheriff
Posts: 3752
62
Chrome Netbeans IDE Oracle
Hmmm, it is usually not done this way. Imagine Google caching the full results of every query from every person who does a search...

When done right (proper indexes in place and a proper pagination technique for your database), pagination is quick. If the query itself were inherently costly (though this probably means the design was not done right), I'd think of materializing the search results in a temporary table in the database, and then paginating records from that table. Keeping it in the database scales better in my opinion, as you don't risk blowing up your server's memory.

Under very specific conditions (a hard upper bound on the number of concurrent requests and on the size of every possible request), something like what you envisage might be possible. Just might be. However, if the users don't always browse the complete results, your only achievement could easily be network and database load inflated several times over, for no real gain whatsoever.
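The materialize-then-paginate fallback might look roughly like the following. Everything here is hypothetical: the cache table (search_cache), the expensive base query, and the key column names are invented to show the shape of the two statements.

```java
// Sketch of materializing an expensive query once, then paginating the
// stored result. All table and column names are made up.
public class MaterializedPaging {

    // Run once per search: store the numbered result rows.
    static String materialize(long searchId) {
        return "INSERT INTO search_cache (search_id, rn, id, name) "
             + "SELECT " + searchId + ", ROWNUM, id, name "
             + "FROM (SELECT id, name FROM orders ORDER BY id)";
    }

    // Run once per page: a cheap indexed range scan on the cached rows.
    static String page(long searchId, int first, int last) {
        return "SELECT id, name FROM search_cache "
             + "WHERE search_id = " + searchId
             + " AND rn BETWEEN " + first + " AND " + last
             + " ORDER BY rn";
    }

    public static void main(String[] args) {
        System.out.println(materialize(42L));
        System.out.println(page(42L, 501, 1000));   // rows for page 2 of 500
    }
}
```

An index on (search_id, rn) is what makes each page fetch cheap; the expensive query runs exactly once.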

Remember: More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason...including blind stupidity. — W.A. Wulf (shamelessly stolen from Winston's signature).
 
Ashwin Sridhar
Ranch Hand
Posts: 277
Flex Oracle Spring
Hi Martin,

Thanks for your response. I fail to understand how performance would improve by storing the data in a temporary table. In such a scenario, I might have to hit the DB multiple times. I prefer storing the data on my server in smaller chunks, so I make a single DB hit. Any suggestions on this?
 
Martin Vajsar
Sheriff
Posts: 3752
62
Chrome Netbeans IDE Oracle
That remark of mine was introduced by "If the query itself were inherently costly (though this probably means the design was not done right), ..." (The query itself means the query whose results you want to paginate.)

So if your query does not suffer from performance problems, you would not use this technique. If it does, this allows the query to be executed only once, with the pagination going against the stored (precomputed) result of the query. If your only concern is reducing the number of round trips to the database, this does not apply to your situation, but that was not crystal clear from the information you've provided so far (at least to me).

My suggestion is: don't do this. Use the standard pagination technique, as (nearly) everybody else does. If there are performance problems with that approach, investigate where exactly the problems are and address them specifically. Again, try entering a query in Google and browsing a few pages. Is the response time reasonable? Google certainly doesn't cache the results of your search somewhere in a middle tier.
 
Wendy Gibbons
Bartender
Posts: 1110
Eclipse IDE Oracle VI Editor
Ashwin, you CANNOT return the first 500 results from a query and then carry on with the same query later. The database runs the query to the end and returns everything.

Jeanne is helping you find other ways of doing it.

If you want to return the data to the user in pages, you are going to have to do four selects (in the example of 2000 rows displayed in pages of 500).

You could load all 2000 rows into memory, but then, as has been said, the user will have to wait, which you have said you don't want.
So please read Jeanne's responses again, remembering these facts.

Bother, Martin is a faster typist than me.
 
Robert Aryan
Greenhorn
Posts: 21
You can use lambdaj. It lets you query lists and manipulate the results very easily.

http://code.google.com/p/lambdaj/
 