I have billions of records on the database side. On the client side I will access these records and show them through pagination. Which option is better: a) get all DB records, store them in an XML file, do some XML processing, and show those records; b) through RowSet / CachedRowSet; c) any other optimised way...
David's answer might be a little flippant (though I don't see it as such) but it is accurate. Criticising or demanding better answers from people who have been kind enough to offer an opinion will not win you many friends here or on any forum site. That being said:
a) A silly idea. What does it gain you, moving a bunch of data from one persistent store to another? What it will mean is a huge degradation in application performance as you move the data from one place to the other. And this XML file will always be out of date with the DB state. And where did you intend to store this file?
b) A disconnected RowSet/CachedRowSet holding billions of records is another silly idea. Imagine the size of that object: you'll quickly hit an OutOfMemoryError.
c) so this really is your only option.
[ February 24, 2005: Message edited by: Paul Sturrock ]
Originally posted by kri shan: I have billions of records on the database side. On the client side I will access these records and show them through pagination.
Kri, as written, this is impossible to do efficiently. The network transfer of a billion records would take far too long.
You can take advantage of the fact that nobody is paging through all billion records. On the first query, just get the records needed for the first page. Many users will stop at this point. If they click "next", you can get more records, but you still don't need a billion.
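The "first page only" idea boils down to asking the database for a fixed window of rows rather than the whole table. A minimal sketch of the window arithmetic, with illustrative class and method names (none of this comes from the thread itself):

```java
// Sketch: compute the row window for a given page, then query only
// those rows. Pages and row numbers are 1-based, matching Oracle's ROWNUM.
public class PageWindow {

    // First row of a page: page 1 of size 20 starts at row 1, page 3 at row 41.
    public static int firstRow(int page, int pageSize) {
        return (page - 1) * pageSize + 1;
    }

    // Last row of a page: page 1 of size 20 ends at row 20.
    public static int lastRow(int page, int pageSize) {
        return page * pageSize;
    }
}
```

Only the rows between `firstRow` and `lastRow` ever need to travel over the network; the user clicking "next" just moves the window.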
While I didn't provide the answer you wanted, I gave you the information I had on hand at 3 am. The first two options will not work, for the reasons the others gave. With billions of records the following may still suffer performance problems, but with the right indexes it should be sufficient.
This thread from Ask Tom discusses the solution in great detail, but here's the summary.
You need to create a query that contains two nested queries. The innermost query performs the real work, applying whatever where conditions and sort order the user has chosen. The middle query selects from that query and adds a rownum column, with a single where condition: where rownum < <max>. Finally, the outermost query selects from the above and applies where rownum >= <min>. Let me know if you have questions from this or after reading that thread.
[ February 24, 2005: Message edited by: David Harkness ]
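The three-layer query above can be sketched as a string builder. This is only an illustration of the shape of the SQL: the table and column names are invented, and real code should use bind variables through a PreparedStatement rather than concatenating min/max into the string.

```java
// Sketch of the three-layer ROWNUM paging query described in the
// Ask Tom thread. The inner SQL is assumed to contain the user's
// WHERE conditions and ORDER BY; min/max are 1-based row numbers.
public class RownumPaging {

    public static String pageQuery(String innerSql, int min, int max) {
        return "SELECT * FROM ( "
             +   "SELECT q.*, ROWNUM rn FROM ( " + innerSql + " ) q "
             +   "WHERE ROWNUM <= " + max
             + " ) WHERE rn >= " + min;
    }
}
```

The middle layer's `ROWNUM <= max` condition lets Oracle stop fetching early, which is what makes this faster than filtering a full result set on the client.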
I have a similar situation, although not with billions of records (that sounds like an overestimate to me anyway). But assume it's a large number of records nonetheless. My database is MySQL and I do need to show thousands of lines of information in page form. The only solution I've found that doesn't kill the server and database is a two-step process. One, count the total number of rows in the database:
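The code this post refers to appears to have been lost from the page. Step one is just a COUNT(*) query plus the page-count arithmetic; a minimal sketch, with an invented table name:

```java
// Sketch of step one: count the rows once, store the result in the
// session, and derive the number of pages. "orders" is a made-up table.
public class CountStep {

    public static final String COUNT_SQL = "SELECT COUNT(*) FROM orders";

    // Ceiling division: 101 rows at 20 per page is 6 pages.
    public static int totalPages(int totalRows, int itemsPerPage) {
        return (totalRows + itemsPerPage - 1) / itemsPerPage;
    }
}
```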
Store this number in a session. Then get the current viewable page and calculate the limits based on items per page. If it's page one, the start record will be 0 and the end record will be the number of items per page. Then retrieve this from the database:
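The retrieval code itself seems to have been lost from the post. Step two relies on MySQL's `LIMIT offset, count` syntax to fetch only the current page; a sketch with invented column names (real code should run this through a PreparedStatement):

```java
// Sketch of step two: build a MySQL query that fetches only the
// current page. Pages are 1-based, so page 1 starts at record 0,
// matching the post's description.
public class LimitStep {

    public static String pageSql(int page, int itemsPerPage) {
        int start = (page - 1) * itemsPerPage;
        return "SELECT id, name FROM orders ORDER BY id"
             + " LIMIT " + start + ", " + itemsPerPage;
    }
}
```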
After that you simply loop through the ResultSet. To optimize (and since the data doesn't change much), I cache the last 3 pages in memory. But that's because of my specific requirements.
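One common way to implement the "keep the last 3 pages" idea is a LinkedHashMap in access-order mode, which evicts the least recently used entry once the cap is exceeded. A sketch, not the poster's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: an LRU cache of recently viewed pages, keyed by page number.
// Beyond maxPages entries, the least recently accessed page is dropped.
public class PageCache<V> extends LinkedHashMap<Integer, V> {

    private final int maxPages;

    public PageCache(int maxPages) {
        super(16, 0.75f, true);   // true = iterate in access order
        this.maxPages = maxPages;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, V> eldest) {
        return size() > maxPages;
    }
}
```

With `new PageCache<List<Row>>(3)`, putting a fourth page silently evicts whichever of the first three was touched least recently, so a "back" click usually hits the cache without a database round trip.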
As everyone says, use the best available optimizations you have.