I've inherited an application to work on and one of the issues I'm trying to address is the performance of database retrievals. There's a lot of business logic tied up in stored procedures and in some cases, these stored procedures are taking 30 seconds or more to execute. I was hoping that I could add some indexes to the tables (most are formed poorly and many don't even have primary keys) to help me improve performance without modifying the existing source code.
I have one table, for example, that lists a bunch of customers and in each record is the ID of the service provider that they use. So I might see records that look like this:
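Something along these lines (names and IDs made up for illustration):

```
CustomerID   CustomerName   ProviderID
----------   ------------   ----------
1            Customer A     5
2            Customer B     5
3            Customer C     12
4            Customer D     19
```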
In this table, CustomerID is a primary key and indexed.
In one of these stored procedures there's a query that looks something like this:
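The real procedure is messier, but the relevant part boils down to this pattern (table and column names simplified; the comma-delimited parameter gets spliced into the statement as dynamic SQL):

```sql
DECLARE @sql VARCHAR(2000)
-- @providers is a comma-delimited list passed in as a parameter
SET @sql = 'SELECT * FROM CUSTOMERS WHERE ProviderID IN (' + @providers + ')'
EXEC (@sql)
```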
The variable @providers is being passed in as a stored procedure parameter and might look like this:
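For example (made-up IDs):

```sql
'5, 12, 19'
```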
So, when the query is evaluated, it looks like this:
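Continuing the made-up example above:

```sql
SELECT * FROM CUSTOMERS WHERE ProviderID IN (5, 12, 19)
```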
What I've been finding in my testing is that the more provider IDs I query on at the same time, the longer this stored procedure takes to return a result set. If I query on just one provider, I can get a result back in about 19 seconds. If I query on 24 providers, it takes about 30 seconds.
So, in hopes of improving these times, I added an index to the CUSTOMERS table: a clustered index on the ProviderID column. To do this, I had to remove the clustered index from the primary key field, CustomerID.
At first the results were promising: when I queried on just one provider, I got a result back in about 12 seconds, an improvement of 7 seconds over what I had previously. However, as I added providers to the query, performance got worse, and not just a little worse like before, but much, much worse. Before I added the index, a 24-provider query returned in 30 seconds. With the index, it takes over 6 minutes.
I know indexing can hurt performance when you're doing inserts, updates, and the like, but these are straight select statements, so I thought that in this case indexing could only make things faster.
Indexes can lead to poor performance because database optimizers use 'best-guess' heuristics to choose a query path. They don't have time to evaluate every possible plan and still return the result in a reasonable time.
What the indexes are probably doing is encouraging the optimizer to take a path that is actually slower than a brute-force scan. That doesn't mean your indexes are a bad idea, but it may mean you need more of them, or need to tune them differently.
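For instance, one thing worth trying (assuming SQL Server; the constraint and index names here are just placeholders): put the clustered index back on the primary key and add a nonclustered index on ProviderID instead, so provider lookups can seek on the index without reorganizing the whole table around ProviderID:

```sql
-- Restore the clustered index on the primary key...
ALTER TABLE CUSTOMERS
    ADD CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID)

-- ...and add a nonclustered index for provider lookups
CREATE NONCLUSTERED INDEX IX_Customers_ProviderID
    ON CUSTOMERS (ProviderID)
```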
Could you give a little more information about the table: how it's used and how it's organized? Are there other columns that might benefit from partitioning (I'm assuming this is an especially large table)? That is, is this customer transactions by date and provider, or just the customers of each provider? Could you run an explain plan on your query? And have the table's statistics been updated for the optimizer? If the statistics are stale, your performance will suffer dramatically.
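In SQL Server, for example, you can refresh the statistics and then look at the plan the optimizer actually chooses:

```sql
-- Rebuild potentially stale statistics for the table
UPDATE STATISTICS CUSTOMERS WITH FULLSCAN

-- Show the estimated plan for subsequent statements instead of executing them
SET SHOWPLAN_TEXT ON
```

If the plan shows a clustered index scan where you expected an index seek, that's a strong hint the new index is steering the optimizer wrong.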