I have an application that creates a ServerSocket in a thread to read data sent to the server. I use the basic server scheme: I block in a while loop on input.readLine() and place the received data in a StringBuffer. When the end of transmission is reached I break out of the loop, place the StringBuffer in a queue for another worker thread to chew on, and then return to the while loop to block for more incoming data.

This works as long as the load of incoming messages is low (fewer than 30 messages per second, each about 1K in size). If I bump the client up to send messages at a rate higher than that, I begin to receive incomplete messages at the ServerSocket. If I display the data being sent at the client, the message is complete on that end, so I am not sure what the issue is.

Some C and VB colleagues are telling me that the Java TCP API should be throttling back at the port to avoid an overrun/flood of data by telling the client's TCP stack to slow down on sending. They say they do not have these issues in those languages and are blaming Java for my problems. I have no idea why the messages are getting lost/truncated; any clues as to why this is happening would be greatly appreciated.

Note that I based my design on the client/server tutorial at Sun, so I am not doing anything out of the ordinary to receive the incoming messages. I am using Java 1.4.2 for my SDK and JRE.

Lon Allen
Your colleagues are spouting nonsense. TCP already has flow control built in: when the receiver's window fills up, the sender blocks (or buffers) rather than silently dropping data, regardless of the language on either end. How do you determine when a message is complete? Is there a specific marker for end of message? If there is, then I'd suspect an issue with synchronization between your multiple threads. If not, then your server thread may just be giving up too soon, or it could still be a synchronization issue.
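On the "end of message" point: readLine() only returns null when the peer closes the connection, not at the end of a logical message, so some explicit framing is needed. A minimal sketch of sentinel-based framing, where the `<END>` tag is my own made-up marker (use whatever both sides agree on):

```java
import java.io.BufferedReader;
import java.io.IOException;

public class MessageFramer {
    // Hypothetical end-of-message sentinel; substitute your agreed-upon tag.
    static final String END_TAG = "<END>";

    /**
     * Reads lines until the sentinel line or EOF. Returns the complete
     * message, or null if the stream ended before a full message arrived.
     */
    public static String readMessage(BufferedReader in) throws IOException {
        StringBuffer msg = new StringBuffer();
        String line;
        while ((line = in.readLine()) != null) {
            if (END_TAG.equals(line)) {
                return msg.toString(); // complete message
            }
            msg.append(line).append('\n');
        }
        return null; // connection closed mid-message
    }
}
```

A null return here cleanly distinguishes "peer hung up" from "got a whole message", so truncation becomes detectable instead of silent.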
The only thing I see that I need to synchronize on is the queue object, which I am synchronizing; the queue object also has synchronized put and get methods. But to remove that from the equation, I stripped out all my code except reading the data, appending each line to the StringBuffer, and writing the data to a file for inspection. I still lose messages at several points in the transfer.

Because the messages are assembled at the client, in client code, and a true EOF is never sent at the end of the message (readLine() would return null for that), we settled on adding an end-of-message tag of our own that we agreed upon. If I don't get that tag I never break out of the readLine() loop, which is not a problem. The parts of the message that I am missing are at the beginning; I am usually getting only the last 1/4 or less of the message.

BTW, the messages are XML with a linefeed ("\n") at the end of each line, so I just do a readLine() for each line of data. That way I don't have to worry about the size of each data packet getting out of hand.

I have also set the priority of the socket-reading thread (10) higher than the other threads in my app (5). I have the other threads wait(timespan) after doing their work, and the reading thread calls notify() after each object is placed in the queue. But, like I said, even if I don't do the other work and just write the messages to System.out or a file, I get the same message loss problem.

Thanks, Lon
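Aside on the wait/notify scheme: thread priorities and timed waits shouldn't be needed for the queue to work correctly. A plain wait()/notifyAll() queue blocks consumers until data arrives and wakes them exactly when it does. A minimal sketch, assuming Java 1.4 (so no java.util.concurrent); the class and method names are mine:

```java
import java.util.LinkedList;

// A minimal blocking FIFO for producer/consumer hand-off.
// Consumers block in get() until a producer put()s something;
// no thread priorities or timed waits are needed for correctness.
public class BlockingQueue {
    private final LinkedList items = new LinkedList();

    public synchronized void put(Object o) {
        items.addLast(o);
        notifyAll(); // wake any consumer blocked in get()
    }

    public synchronized Object get() throws InterruptedException {
        while (items.isEmpty()) {
            wait(); // releases the lock while waiting
        }
        return items.removeFirst();
    }
}
```

Note the wait() sits in a while loop, not an if: a woken thread must re-check the condition before proceeding, since another consumer may have emptied the queue first.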
How many threads are you running? Just one with mySocket = myServerSocket.accept()? Or do you start a new one (or take one from a pool) every time you accept? You mentioned working from the Sun tutorials, so I'd expect the latter, but ya never know. If you're doing the former - a single thread - it would make sense that it becomes a bottleneck at volume. The "new thread for each request" approach can become a bottleneck too, as thread creation and destruction is not free. Would it be overkill to look into a thread pool?
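To make the latter concrete, here is a rough sketch of a thread-per-connection accept loop (all names are hypothetical, not from Lon's code). The accept() thread does nothing but accept; each connection gets its own reader thread, so one slow client can't stall the others. A pool would reuse threads instead of creating one per accept():

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of a thread-per-connection server loop.
public class MultiThreadedServer {

    public interface ConnectionHandler {
        void handle(Socket client) throws IOException;
    }

    public static void serve(ServerSocket server, final ConnectionHandler handler)
            throws IOException {
        while (true) {
            final Socket client = server.accept(); // blocks until a client connects
            new Thread(new Runnable() {
                public void run() {
                    try {
                        handler.handle(client); // per-connection work happens here
                    } catch (IOException e) {
                        e.printStackTrace();
                    } finally {
                        try { client.close(); } catch (IOException ignored) {}
                    }
                }
            }).start();
        }
    }
}
```

If the single accept() thread is also the one doing all the readLine() work, a burst of traffic on one connection delays every other one, which could look exactly like "messages missing their beginnings" from the outside.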
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi