Hi All, if I have to implement pagination, say I want to retrieve records 1-10 or 11-20 from the database, do I have to fire two queries: one to find the total number of rows, and another to get the required rows? Any suggestions? Thanks, kundan
Failures are practice shoots for success.
Solutions would depend on the database that you are using and whether or not you can maintain the DB session across pages. Some SQL dialects allow you to specify "SELECT TOP n" or "SET ROWCOUNT". If you can maintain a DB session then you could simply use a server-side cursor, or a stored procedure that maintains a cursor/temporary table of the results of the query. Ideally you would go with stored procedures that simply need the starting position and maximum result set size - that way you could encapsulate the DB details in there and you would only need a relatively thin application wrapper.
I know that MySQL and PostgreSQL have LIMIT and OFFSET keywords in their SELECT statements, and Oracle has ROWNUM. SQL Server, however, at least in version 2000, provided only SELECT TOP n, which by itself is not sufficient to implement this paging unless there is a way to restrict your query so it doesn't bring back the records already returned in the last page.
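To illustrate the LIMIT/OFFSET approach (and the original two-query question), here is a minimal sketch using Python's built-in sqlite3 module as a stand-in database - SQLite supports the same LIMIT/OFFSET syntax as MySQL and PostgreSQL. The table and column names are made up for the demo:

```python
import sqlite3

# Hypothetical table populated with 25 rows for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO records (name) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 26)])

page, page_size = 2, 10  # second page: rows 11-20

# Query 1: total row count, so the UI can show "page X of Y".
(total,) = conn.execute("SELECT COUNT(*) FROM records").fetchone()

# Query 2: only the rows for the requested page.
rows = conn.execute(
    "SELECT id, name FROM records ORDER BY id LIMIT ? OFFSET ?",
    (page_size, (page - 1) * page_size),
).fetchall()

print(total)                     # 25
print(rows[0][0], rows[-1][0])   # 11 20
```

So yes, with this style of paging you typically do issue two queries: a COUNT(*) for the total and a LIMIT/OFFSET query per page.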
Repeat the same query every time and pull out the right rows for the page.
Execute a query that gets exactly the right rows for any page - start at the first key > the last one you got before, use row numbers, etc.
Get all the primary keys in the first query, stash them in a giant array in session or some temporary storage, use them to retrieve the proper rows for each page.
Get all the real rows and stash them in storage - probably much bigger! I did this a time or two in my mainframe days - selected all rows from DB2 and stored page-size chunks in fast VSAM files.
Note that some of these techniques guarantee that page forward followed by page backward will show you exactly the same rows you had before while others can be modified by concurrent inserts or deletes. Some use lots of memory while some use lots of database and CPU time. Some may be faster than others. Now you have to pick one!
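The second technique above (start at the first key greater than the last one you got before, often called keyset pagination) can be sketched like this, again using sqlite3 and fabricated table names as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO records (name) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 26)])

def fetch_page(conn, last_id, page_size=10):
    """Return the next page: rows whose key is greater than last_id."""
    return conn.execute(
        "SELECT id, name FROM records WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()

page1 = fetch_page(conn, 0)                 # ids 1-10
page2 = fetch_page(conn, page1[-1][0])      # resume after the last key seen
```

Unlike OFFSET, this stays fast on large tables (the index seek skips straight to the key), but it only supports next/previous navigation, not jumping to an arbitrary page number.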
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
If you have some control over what goes in the database, you may be able to store some sort of sequence numbers in the table data, and use that in your select.
If you are looking at "most recent" pages (like this forum software does for the thread lists) you might be able to select based on timestamp (say just the last two hours or two days etc.) then do the entries-per-page counting from the loaded data.
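A rough sketch of that timestamp idea, with sqlite3 and invented table names and timestamps: pull everything newer than the cutoff in one query, then do the per-page slicing in memory.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created INTEGER)")
now = int(time.time())
# 5 old posts (3 days back) and 15 recent ones; timestamps are fabricated.
stamps = [(now - 3 * 86400,)] * 5 + [(now - i * 3600,) for i in range(15)]
conn.executemany("INSERT INTO posts (created) VALUES (?)", stamps)

# One query for everything newer than two days...
cutoff = now - 2 * 86400
recent = conn.execute(
    "SELECT id, created FROM posts WHERE created >= ? ORDER BY created DESC",
    (cutoff,),
).fetchall()

# ...then slice into pages from the loaded data.
page_size = 10
pages = [recent[i:i + page_size] for i in range(0, len(recent), page_size)]
```

This trades one larger query for zero additional database hits while the user pages through the recent entries.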
If you are worried about the problem of a database hit for every page, you might want to look carefully at your usage model. It's very common in paged systems that the first page gets something like 90% of the visits, the second page 90% of the remainder, and so on. (Honestly, how often have you ever looked at the fifth page of results on Google?) Selecting (and caching) the first three or four pages each time might then answer 99.99% of your page views.
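That cache-the-hot-pages idea could look something like the following sketch, which caches only the first few pages and queries the database directly for the rare deep ones. The table, page size, and cache depth are all assumptions for the demo:

```python
import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO records (name) VALUES (?)",
                 [(f"row{i}",) for i in range(1, 101)])

PAGE_SIZE = 10
CACHED_PAGES = 3   # only the first few, heavily-visited pages are cached

query_count = 0    # instrumentation: how many times we actually hit the DB

def query_page(page):
    global query_count
    query_count += 1
    offset = (page - 1) * PAGE_SIZE
    return tuple(conn.execute(
        "SELECT id, name FROM records ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, offset),
    ).fetchall())

@lru_cache(maxsize=CACHED_PAGES)
def cached_page(page):
    return query_page(page)

def get_page(page):
    # Hot pages come from the cache; deep pages go straight to the DB.
    return cached_page(page) if page <= CACHED_PAGES else query_page(page)

get_page(1)   # DB hit, then cached
get_page(1)   # served from cache, no DB hit
get_page(5)   # deep page, DB hit
```

With the usage distribution described above, almost all traffic lands on the cached pages, so the database sees only a tiny fraction of the page views. A real system would also need to invalidate the cache when rows change.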
BTW: in the system I work on, our solution was to do no such pagination. We give users up to n rows (usually 100) and no more; they shouldn't be spending time browsing through the database. Of course, that last statement might only be true of our user population.