There is a JSP from which a user submits a request for some data using a Submit button. Internally we fire a SQL query; the data is so large that the query takes almost 12 minutes to execute.
The problem is that in the meantime we get a Network Error (tcp_error) indicating the server is busy, or an internal server error.
How can this issue be resolved?
- Does your user really need all those rows in one go? Consider paginating the results and using LIMIT-style clauses to restrict how much data is fetched per request.
- Are you sending "SELECT *" queries? A query should ask only for the columns the application actually needs for the use case.
- If the query has joins and where clauses, restructure the query so that the most restrictive subsets are selected first.
- Your DB will have an explain plan command. Go through the explain-plan documentation for your DB. The plan describes how the DB will process your query, and approximately how many rows it will hit at each stage.
You can use it to find the clauses or joins that hit too many rows. Iteratively add table indexes and re-examine the explain plan until it can't be optimized further. This step usually improves query times dramatically.
- Increase your DB's memory cache sizes in its configuration. The more memory you give it, the more it can cache and return results from memory without hitting the slower storage drive.
- Finally, as a last resort you can increase the web server's socket timeouts, but that's a bad idea. The focus should be on reducing those 12 minutes to 30 seconds or a minute, not on a hack to accommodate 12 minutes.
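To make the pagination suggestion above concrete: since the thread later mentions Oracle 11g, which has no LIMIT clause, the classic ROWNUM wrapper does the same job. This is only a sketch; the base query and column names are hypothetical stand-ins for your own.

```java
// Sketch: build an Oracle 11g-compatible paginated query using the ROWNUM
// pseudocolumn, so each request fetches only one page instead of all rows.
// The table/column names in main() are hypothetical examples.
public class PaginatedQuery {

    // Wraps baseQuery so only rows (offset, offset + pageSize] are returned.
    static String paginate(String baseQuery, int offset, int pageSize) {
        return "SELECT * FROM (SELECT q.*, ROWNUM rn FROM (" + baseQuery
             + ") q WHERE ROWNUM <= " + (offset + pageSize)
             + ") WHERE rn > " + offset;
    }

    public static void main(String[] args) {
        String base = "SELECT ORDER_ID, CUSTOMER_ID, TOTAL FROM ORDERS ORDER BY ORDER_ID";
        // First page of 50 rows; the JSP would pass the offset per request.
        System.out.println(paginate(base, 0, 50));
    }
}
```

Note that the inner query needs a stable ORDER BY, otherwise pages can overlap or skip rows between requests.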
After all that, you may find the query is still too slow, perhaps under load when many users are hitting it at once. Then it's time to solve it at the architectural level, using something like memcached. The idea is the same: cache as much as possible in memory if you want quick response times.
Joined: Jul 04, 2011
I have tried indexes and other logical optimizations, and the query time is reduced to 7 minutes. But it is still not sufficient. Any other solutions?
Hopefully you're using connection pooling and prepared statements everywhere? That's usually the case, but just checking. They won't improve things much, but if you're in a situation where every little bit helps, they would help.
What DB are you using? Have you gone through its manuals on tuning its memory caches?
And what data is being requested? Does the query have joins or a "LIKE" operator or something similar? I can't suggest anything more without knowing the schema and the application's use cases well.
We are using Oracle 11g with connection pooling and prepared statements. The query consists of complex join operations and CASE statements with functions.