NIO with Servlet 3.0 for Streaming Comet

Hi everyone,

I am working on a project that reduces (in some sense) to a chat room. My prototype uses Tomcat 7 with the NIO connector and Servlet 3.0 async support.

To receive updates from the server, the client opens a connection to the servlet. The servlet does not close the connection and streams data (when appropriate) to the client. Periodic keep-alive bytes are sent to keep the connection open.

I based the server (in some sense) on this article:

But the general idea is that when a new client joins, I create an AsyncContext and add it to a List. When a new event occurs on the server, I loop through the list and write data to each client via its AsyncContext.
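To make the setup concrete, here is a minimal sketch of the register-and-broadcast pattern in plain Java. The names here (`ChatBroadcaster`, `Client`, `register`, `broadcast`) are mine, not from the article: `Client` is a hypothetical stand-in for an AsyncContext plus its response writer, and the thread-safe `CopyOnWriteArrayList` is an assumption, since new registrations and the broadcast loop run on different threads.

```java
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class ChatBroadcaster {
    // Hypothetical stand-in for an AsyncContext and its response writer.
    interface Client {
        void send(String message) throws IOException;
    }

    // Thread-safe list: servlet threads register clients while the
    // broadcast loop iterates over a snapshot of the same list.
    private final List<Client> clients = new CopyOnWriteArrayList<>();

    // Called when a new client opens its long-lived connection.
    public void register(Client client) {
        clients.add(client);
    }

    // Called when a new chat event occurs: push it to every client,
    // dropping any client whose connection turns out to be broken.
    public void broadcast(String message) {
        for (Client client : clients) {
            try {
                client.send(message);
            } catch (IOException e) {
                clients.remove(client); // assume the client has left
            }
        }
    }

    public int clientCount() {
        return clients.size();
    }
}
```

In a real servlet, `register` would be called from the async request handler and `send` would write to the AsyncContext's response stream and flush it.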

My questions are the following:

1) I can't find a clean way of detecting whether a client has left (i.e., moved to another page). I currently handle this by catching an exception when writing to that client's AsyncContext; if an exception is caught, I assume the client left and remove its AsyncContext from the List of clients that need to receive data. Is there a better way of doing this?
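For what it's worth, Servlet 3.0 also provides `AsyncListener`, which may be a cleaner way to learn about dead connections than waiting for a write to fail, since the container reports completion, timeout, and error events directly. A sketch, assuming `clients` is the List of AsyncContexts described above:

```java
// Register a listener when the client's AsyncContext is created.
// The container invokes these callbacks when the request completes,
// times out, or errors out, so the context can be dropped eagerly
// instead of waiting for the next broadcast to throw.
final AsyncContext ctx = request.startAsync();
ctx.addListener(new AsyncListener() {
    public void onComplete(AsyncEvent event) { clients.remove(ctx); }
    public void onTimeout(AsyncEvent event)  { clients.remove(ctx); }
    public void onError(AsyncEvent event)    { clients.remove(ctx); }
    public void onStartAsync(AsyncEvent event) { }
});
```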

2) I am wondering how well the solution at the above link will really scale. Async seems to work best when there is processing to do, which frees the thread to service other requests... but in this case lots of clients would be listening and likely receiving data every few seconds. Does the above link present a scenario that would scale well under those conditions?

3) If one could expect the client to receive *new* data every few seconds, would it be better to simply poll using a standard blocking-IO HTTP connector? Since Keep-Alive should keep the connection persistent (in some sense), could this be better? It seems the above link's approach works well when new data is rarely needed, but would traditional polling work better when new data is needed often (i.e., fresh data is likely at every poll)?

Any other thoughts or insights would be great. Thanks.
