Welcome to the Ranch, Rahul!
7 databases in a single application is alarming. My own record is 3, and that's because I had to cross-correlate a SQL Server database, an Oracle Financials database, and a general-purpose Oracle database. And actually Oracle could have presented all 3 as a single virtual database, had we been in a different position at the time.
But even more alarming is the number of concurrent web requests you're running against them. This doesn't sound like an ordinary webapp. In fact, stuff like this is usually the domain of a mainframe-style batch process. Or in modern times, perhaps something using noSQL technologies.
In short, it sounds like a brute-force solution, where the more common approach would be to abstract and reduce. Or at a minimum run a cluster of Tomcats to distribute the workload.
As it is, you're describing a process that's probably two orders of magnitude beyond what a typical Tomcat webapp would handle.
So much for my concerns. I do hope that you're not really using the "sa" userid for SQL Server, though.
You asked about the removeAbandoned features. These are designed to counteract defective applications. An ideal webapp will pull a Connection from the pool, do work, and promptly release the Connection (return it to the pool). If the app fails to release the Connection - which often happens when an Exception is thrown and the usual close() logic is bypassed - then Connections leak and eventually the pool will be exhausted, meaning that future requests will fail.
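The standard defense against that leak is try-with-resources, which guarantees close() runs (returning the Connection to the pool) even when the body throws. Here's a minimal, stdlib-only sketch: FakeConnection is a hypothetical stand-in for a pooled java.sql.Connection, so the demo runs without a database.

```java
// Demonstrates why try-with-resources prevents Connection leaks:
// close() runs even when an exception is thrown mid-work.
public class LeakDemo {
    // Hypothetical stand-in for a pooled java.sql.Connection
    static class FakeConnection implements AutoCloseable {
        boolean closed = false;
        @Override
        public void close() { closed = true; } // "return to pool"
    }

    public static void main(String[] args) {
        FakeConnection conn = new FakeConnection();
        try (FakeConnection c = conn) {
            // Simulate a failing database operation
            throw new RuntimeException("query failed");
        } catch (RuntimeException e) {
            // The exception propagated, but close() had already run
        }
        System.out.println("closed=" + conn.closed); // prints closed=true
    }
}
```

With a real pool you'd write `try (Connection c = dataSource.getConnection()) { ... }` in each request and never need the abandoned-connection machinery at all.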
The Abandoned Connection features are intended to keep this from happening, but there is a price for it. Each Connection is tagged with a timestamp when it's pulled from the pool. When the pool detects that the Connection has not been returned to the pool (closed) within the timeout, the Abandoned Connection logic assumes that the Connection has leaked and forcibly closes it. So if you have a short timeout and you're doing a long database operation, the Connection can potentially be ripped out from under the database operation.

Additionally, you can set up the ability to record the stack trace for abandoned connections, to help you find out where they're getting orphaned. As I said, this is an expensive thing to do, but it can be very useful in debugging Connection problems. The suspectTimeout feature works in conjunction with JMX and supports the ability to determine whether an apparently abandoned Connection is really abandoned or not.
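For reference, these knobs live on the pool's Resource declaration in context.xml. This is a sketch only - the resource name, URL, and credentials are placeholders, and the timeout values are illustrative, not recommendations:

```xml
<!-- Sketch of a Tomcat JDBC pool Resource with the abandoned-connection
     features enabled. removeAbandonedTimeout is in seconds and must exceed
     your longest legitimate query, or live Connections will be ripped away.
     logAbandoned records a stack trace at every checkout (expensive).
     suspectTimeout reports a suspect Connection (via JMX/logging) instead
     of force-closing it. -->
<Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          url="jdbc:sqlserver://dbhost:1433;databaseName=mydb"
          username="appuser" password="changeit"
          maxActive="20" maxWait="10000"
          removeAbandoned="true"
          removeAbandonedTimeout="60"
          logAbandoned="true"
          suspectTimeout="60"/>
```

Leave all four of them off in production unless you're actively hunting a leak.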
All of these features - repeating myself - have a cost. A well-written webapp will not use them because a well-written webapp will not be leaking Connections. You'd use these features only if you need to debug a webapp or for systems where recoverability is so critical that you're willing to pay the price in resource usage and performance.
The Tomcat docs don't say what happens when you set maxWait to -1. It would depend on the pool provider. However, if you're hitting the maxActive limit, realize that your choices have been reduced to having the web request "hang" waiting for a connection (which can get really ugly when you're dealing with 7 of them) or of failing because you couldn't get a connection at all.
The web request worker thread count and the JDBC connection pool limits are independent. You need a pool that can concurrently provide at least as many Connections as there are going to be webapp requests using connections simultaneously. If your pool limits are too small, then you'll run into the maxWait constraints. Also don't forget that just because you set your pool size high, that doesn't mean that the pool can reach that limit. The OS sets the ultimate limit on concurrent network connections to database servers.
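The maxActive/maxWait interplay is easy to see with a Semaphore standing in for the pool (names are illustrative, not Tomcat API): a request either gets a permit immediately, waits up to maxWait, or gives up.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Sketch of maxActive/maxWait semantics: the Semaphore's permits play
// the role of maxActive, and the tryAcquire timeout plays maxWait.
public class PoolWaitDemo {
    public static void main(String[] args) throws InterruptedException {
        Semaphore pool = new Semaphore(2); // "maxActive" = 2

        boolean first  = pool.tryAcquire(100, TimeUnit.MILLISECONDS);
        boolean second = pool.tryAcquire(100, TimeUnit.MILLISECONDS);
        // Pool exhausted: the third "request" blocks for up to the
        // "maxWait" interval, then fails to get a Connection.
        boolean third  = pool.tryAcquire(100, TimeUnit.MILLISECONDS);

        System.out.println(first + " " + second + " " + third);
        // prints: true true false
    }
}
```

With maxWait set to wait forever, that third request would simply hang until a permit was released - which is exactly the "hang vs. fail" choice described above, multiplied by 7 pools.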
And while I'm at it, I should repeat the standard warning:
Never, no NEVER hold a JDBC Connection between web requests.
Data can only be held between web requests (servlet or JSP calls) if it is stored in session- or application-scope objects. Session-scope objects must be Serializable (Tomcat 7 will object if they are not). But Connection is an interface, not a class, and therefore cannot be serialized. Plus, holding Connections between requests violates the tenet that you should release Connections as soon as possible.
Returning to my original concerns, the whole idea of connecting to 7 databases at once is a matter of concern. It's fertile ground for all sorts of problems. Plus, it probably indicates a problem with your infrastructure. The 3-database program I mentioned required 3 databases because we had 2 product-specific databases plus the "universal" database that our general applications used. Had things been more complicated, we probably would have had a process that made everything the webapp needed appear in only one working database, with data being migrated to/from the master databases according to some sort of schedule or mechanism that was insulated from the webapp. That, in fact, was one of the driving forces behind noSQL, along with the ability to boost performance by flattening the record structure.