I am running a JAX-WS web service on Tomcat (using Apache CXF). Whenever a service method throws a business exception, CXF (JAX-WS) sets a 500 HTTP status code, which triggers Tomcat to insert a Connection: close header into the HTTP response.
This behaviour is undesirable, as I want to re-use the connection in the client.
I'm afraid I don't understand. A "500" error is a standard HTTP response meaning that the server encountered an error in the application and could not complete your request. It has nothing to do with network connections per se.
HTTP is not a client/server protocol. It's a series of requests paired with responses on a strict 1-to-1 basis. In primitive days, that meant that the client would open an outbound connection and set up a reply socket, send the request, then receive the response via the reply socket which it would then process as appropriate (generally formatting the output as a web page in a browser window). Once all this was done, the whole set of network resources would be discarded, to be re-created again and again on subsequent requests.
Modern day implementations reduce the overhead by implementing a "keep-alive" mechanism designed to retain and re-use resources where it would be beneficial, but this is all done transparently without any special coding. The default settings in the client and server are usually good enough.
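To illustrate how transparent this is: assuming the client uses the JDK's own HttpURLConnection stack, connection pooling is on by default and is tuned through documented networking system properties rather than per-request code. A minimal sketch:

```java
// Sketch: the JDK's HttpURLConnection reuses persistent connections
// transparently. These documented system properties tune that behaviour;
// no special coding is needed in the request/response logic itself.
public class KeepAliveDefaults {
    public static void main(String[] args) {
        // Both values shown here are the JDK defaults anyway.
        System.setProperty("http.keepAlive", "true");   // enable connection reuse
        System.setProperty("http.maxConnections", "5"); // idle connections kept per host
        System.out.println(System.getProperty("http.keepAlive"));
    }
}
```

The point is that these are the only knobs most clients ever need; the pooling itself happens below the application code.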
Thanks for your help,
Please let me clarify.
It is evident that the server sets the Connection: Close header when issuing a 500 response (I checked the response headers with WFetch).
Although the 500 status has nothing to do with connections, Tomcat still sets that header. When the client receives it, it closes the connection, so the connection cannot be reused.
Looking at Coyote's code, particularly Http11Processor, it is obvious that Coyote closes the connection for these statuses:

status == 400 /* SC_BAD_REQUEST */ ||
status == 408 /* SC_REQUEST_TIMEOUT */ ||
status == 411 /* SC_LENGTH_REQUIRED */ ||
status == 413 /* SC_REQUEST_ENTITY_TOO_LARGE */ ||
status == 414 /* SC_REQUEST_URI_TOO_LARGE */ ||
status == 500 /* SC_INTERNAL_SERVER_ERROR */ ||
status == 503 /* SC_SERVICE_UNAVAILABLE */ ||
status == 501 /* SC_NOT_IMPLEMENTED */
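The whole list boils down to a single predicate (in Tomcat this lives in Http11Processor as a method called statusDropsConnection). A standalone sketch of that check:

```java
// Sketch of Tomcat/Coyote's status check: for these statuses the server
// drops the connection instead of honouring keep-alive.
public class StatusDropsConnection {
    public static boolean statusDropsConnection(int status) {
        return status == 400 /* SC_BAD_REQUEST */ ||
               status == 408 /* SC_REQUEST_TIMEOUT */ ||
               status == 411 /* SC_LENGTH_REQUIRED */ ||
               status == 413 /* SC_REQUEST_ENTITY_TOO_LARGE */ ||
               status == 414 /* SC_REQUEST_URI_TOO_LARGE */ ||
               status == 500 /* SC_INTERNAL_SERVER_ERROR */ ||
               status == 503 /* SC_SERVICE_UNAVAILABLE */ ||
               status == 501 /* SC_NOT_IMPLEMENTED */;
    }
}
```

Note that ordinary error statuses such as 404 are not in the list; only statuses where the response (or the remaining request body) may be in an indeterminate state force a close.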
Actually, from a (very) superficial reading of that code, I think Tomcat is aborting the in-transit response data and replacing it with the standard "500" error response stream.
Which would mean that doing it any other way would result in garbage (incomplete response data) being returned to the client.
I think you're worrying about this too much. A properly-functioning application should not be returning "500" response codes to begin with, because the app should be handling any business exceptions as application errors, not as something for the server to come along and try and pick up after. It's an imperfect world, however, so even the best of apps has the potential to throw a 500.
However, since this should be the exception rather than a common occurrence, I wouldn't obsess about it. The next good request/response should re-establish keep-alive.
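One common way to keep business errors off the 500 path entirely is to report them inside a normal response payload, so the HTTP exchange itself succeeds with a 200 and remains eligible for keep-alive. A sketch under that assumption (the service and field names here are hypothetical, not from the original post):

```java
// Hypothetical service: business failures become data in the response
// object instead of thrown exceptions, so JAX-WS never maps them to a
// fault/500 and Tomcat has no reason to drop the connection.
public class AccountService {

    public static class TransferResult {
        public final boolean ok;
        public final String error; // null on success
        public TransferResult(boolean ok, String error) {
            this.ok = ok;
            this.error = error;
        }
    }

    public TransferResult transfer(double amount) {
        try {
            if (amount <= 0) {
                throw new IllegalArgumentException("amount must be positive");
            }
            // ... perform the actual transfer here ...
            return new TransferResult(true, null);
        } catch (IllegalArgumentException e) {
            // Caught and reported as an application-level error, not a 500.
            return new TransferResult(false, e.getMessage());
        }
    }
}
```

The trade-off is that clients must inspect the result object instead of relying on SOAP faults for error handling.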
Of course, keep-alive is merely a streamlining process to begin with. If the users sit around and drool and stare at the monitor, keep-alive will expire anyway. As it should, since there's no point in tying up resources overlong. It's not that much overhead to re-establish the connection.