I recently ran into this same problem for a video conversion use case, and finally solved it with help from an expert right here on coderanch.
Video conversion is a long-running operation, and my problem was that the client side wouldn't receive any response at all because the session had timed out.
Worse, the client wouldn't even get an error condition -
Tomcat silently dropped the connection on socket timeout because there was no response from the server.
I tried all the usual configuration methods - increasing timeout values, session timeouts, and so on - but they were never reliable solutions: there could always be a video that takes more time to convert.
I then shifted my approach to handling this explicitly in code, and came up with these possible solutions:
1. While processing is going on, keep sending some dummy response text until the processing is complete, just to keep the connection alive. This was OK in my case because it was an Ajax request and the expected response was text. So, I could send dummy text, then a delimiter, then the actual response, and the client would strip off the dummy text and use the actual response. (There's a rough sketch of this idea at the end of this post.)
While this did solve the problem, it felt "wrong", so I got rid of this solution... Seeing your processing time ranges (2 hrs, 8 hrs, etc!), it feels even more wrong.
2. I thought about using the reverse Ajax (Comet) capability of Tomcat. This is one possibility for you, and I've read that even some high-traffic sites like Quora use this approach (though not on Tomcat - they have a complex architecture involving nginx and the like for scalability). In my case this meant a steep learning curve because I'd never used Comet, so I dropped it. In hindsight, dropping it seems right for a second reason too: there was going to be just one response - done or not done - and it seemed a waste to keep connections open for that single response, NIO connection or not. The user is intelligent, understands it may take time, and may simply have to log in later to check whether it's done.
3. The solution I finally used was good old polling. On the server side, I implemented a kind of job manager that hands the client a token (just a number) and then starts the job off in a job queue. The client then polls every minute with that token and receives a simple one-byte status. The client keeps the token in a cookie, and since my app also involved user logins, I could keep job statuses in the DB in case the client logged in later. Though it involved quite a bit of redesign work, I like this solution because it's a clean design, does not reduce availability for other users, and scales without any fuss. (The first sketch at the end of this post shows roughly what I mean.)
Since yours is a web client app (desktop or mobile), you don't even have the problem of cookies and things like that.
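
In case it helps, here's a minimal sketch of what I mean by the job manager + polling approach. All the names here (JobManager, StartJobServlet, JobStatusServlet, the status characters, the /convert and /status URLs) are placeholders I've made up for illustration, and an in-memory map stands in for the DB my real app used - it's just to show the shape of the design, not production code.

```java
// Rough sketch only - JobManager, StartJobServlet and JobStatusServlet are
// made-up names; an in-memory map stands in for the DB I actually used.
// (In a real project each class would of course go in its own file.)
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

class JobManager {
    static final JobManager INSTANCE = new JobManager();

    // '0' = running, '1' = done, '2' = failed, '?' = unknown token
    private final ConcurrentMap<Long, Character> statuses = new ConcurrentHashMap<>();
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final AtomicLong nextToken = new AtomicLong(1);

    long submit(Runnable longRunningJob) {
        long token = nextToken.getAndIncrement();
        statuses.put(token, '0');
        pool.submit(() -> {
            try {
                longRunningJob.run();          // e.g. the video conversion
                statuses.put(token, '1');
            } catch (Exception e) {
                statuses.put(token, '2');
            }
        });
        return token;                          // this is what the client holds on to
    }

    char statusOf(long token) {
        return statuses.getOrDefault(token, '?');
    }
}

// POST /convert : kick off the job and return the token immediately
public class StartJobServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String file = req.getParameter("file");   // read params before going async
        long token = JobManager.INSTANCE.submit(() -> convertVideo(file));
        resp.getWriter().print(token);
    }

    private void convertVideo(String file) { /* the actual long-running work */ }
}

// GET /status?token=N : the one-byte status the client polls every minute
public class JobStatusServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        long token = Long.parseLong(req.getParameter("token"));
        resp.getWriter().print(JobManager.INSTANCE.statusOf(token));
    }
}
```

The client just POSTs to start the job, stores the token it gets back, and then GETs the status URL every minute until it sees something other than '0'.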
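
And for completeness, a very rough sketch of the dummy-text keep-alive trick from option 1 (the one I abandoned). Again, the servlet name, the padding interval and the '|' delimiter are just placeholders I picked for illustration:

```java
// Option 1 sketch: stream filler text while the conversion runs, then a
// delimiter and the real payload. Names and the '|' delimiter are placeholders.
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class KeepAliveConvertServlet extends HttpServlet {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();

        Callable<String> task = this::convertVideo;
        Future<String> result = pool.submit(task);
        try {
            while (!result.isDone()) {
                out.print(' ');          // dummy text, only there to keep the connection alive
                out.flush();
                Thread.sleep(5000);      // pad every few seconds
            }
            out.print('|');              // delimiter the client splits on
            out.print(result.get());     // the actual response
        } catch (InterruptedException | ExecutionException e) {
            out.print("|ERROR");
        }
    }

    private String convertVideo() {
        /* the long-running work */
        return "DONE";
    }
}
```

You can see why it feels wrong - the request thread stays occupied for the whole conversion, which is exactly what the polling design avoids.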
After this exercise came the realization: HTTP is simply not the right protocol for long-duration communication.
William Brogden's advice is really true: "What you are describing - a very long term connection - is completely contrary to the spirit and design of HTTP so it is time to back off and redesign." I wish I'd gotten this advice before starting on my application. Hope you won't repeat my mistake.