posted 9 years ago
Actually, the stock server.xml has two elements that illustrate maxThreads (the attribute is named maxThreads, not maxThreadCount).
One is the named thread pool (the <Executor> element), which lets several Tomcat Connectors share a single thread pool instead of each constructing and using its own.
The other is the maxThreads attribute on the sample HTTPS Connector (port 8443).
The default maxThreads is 200, so with non-shared thread pools and the HTTPS Connector uncommented, you'd be able to handle 200 threads serving requests coming in on port 8080 (the default) and 150 serving traffic coming in on port 8443 (its Connector element specifies maxThreads="150").
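Here's a sketch of the relevant pieces of the stock server.xml to make that concrete. Values follow the stock file as described above; exact attributes vary a bit between Tomcat versions, so treat this as illustrative rather than copy-paste config:

```xml
<!-- Shared thread pool: ships commented out. Uncomment it and point a
     Connector at it with executor="tomcatThreadPool" to share threads. -->
<!--
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="150" minSpareThreads="4"/>
-->

<!-- HTTP Connector: no maxThreads attribute, so the default of 200 applies -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>

<!-- Sample HTTPS Connector (also ships commented out): explicit maxThreads="150" -->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"/>
```

Note that when a Connector references an Executor, its own maxThreads is ignored and the pool's settings win.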
You can actually have even more concurrent incoming requests: once the processor thread pool is exhausted, subsequent requests queue up waiting for a free thread until the queue defined by the Connector's "acceptCount" attribute is full. The default for acceptCount is 100. So on port 8080, by default, once you have 300 or more simultaneous requests (pool size + acceptCount queue size), the later requests are bounced back to the client with a "connection refused" error.
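The "pool fills, then the queue fills, then requests are refused" behavior can be simulated with a plain JDK ThreadPoolExecutor backed by a bounded queue. This is an analogy, not Tomcat's actual connector code, and the tiny pool/queue sizes here are made up for the demo (2 threads standing in for maxThreads, 3 queue slots standing in for acceptCount):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AcceptQueueDemo {
    public static void main(String[] args) {
        int maxThreads = 2;   // stands in for the Connector's maxThreads
        int acceptCount = 3;  // stands in for the Connector's acceptCount

        // Bounded queue + fixed-size pool: tasks beyond (pool + queue) are rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                maxThreads, maxThreads, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(acceptCount));

        CountDownLatch hold = new CountDownLatch(1); // keeps workers busy
        int accepted = 0, rejected = 0;

        // Submit one more task than the pool plus queue can hold.
        for (int i = 0; i < maxThreads + acceptCount + 1; i++) {
            try {
                pool.execute(() -> {
                    try { hold.await(); } catch (InterruptedException ignored) {}
                });
                accepted++;
            } catch (RejectedExecutionException e) {
                rejected++; // analogous to the client seeing "connection refused"
            }
        }

        System.out.println("accepted=" + accepted + " rejected=" + rejected);
        hold.countDown();
        pool.shutdown();
    }
}
```

With 2 threads and 3 queue slots, the first 5 submissions are accepted and the 6th is rejected, mirroring how request 301 fares against the default 200 + 100 on port 8080.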
This is a lot of requests, so if you think you're even in danger of exceeding that many, it's probably a good idea to set up a multi-server cluster with some sort of load balancing. And, of course, make sure the apps spend as little time processing each request as possible. Definitely no long-running SQL operations in the request-handling logic, for example. Not that that's a good idea regardless.
The secret of how to be miserable is to constantly expect things are going to happen the way that they are "supposed" to happen.
You can have faith, which carries the understanding that you may be disappointed. Then there's being a willfully-blind idiot, which virtually guarantees it.