How to time socket throughput - Resolved

Alvin Watkins
Ranch Hand

Joined: May 25, 2011
Posts: 53
I created a client/server messaging system in Java that takes subscribers and passes messages to the subscriber client sockets. The subscribers are Flex Mobile clients; the server is Java. When I ran the app on my desktop simulating an Android or iOS device, everything was fine until the messages got too large. At that point I discovered that the bytes do not come down all at once. For instance, I had a stream of 31,500 bytes, and after my simulation client failed I found that 31,500 bytes were being sent, but my client socket was reporting 3,762 bytes, then 16,060, and so on until it reached 31,500.

To fix this, I did the following in the Flex code:


This code is in a Flex ProgressEvent listener. Once the ByteArray is fully constructed, I cast it to whatever it needs to be (such as an Array), then I reset the stream and set bytesIn = 0. This works fine in my desktop simulation; however, on my Android and iOS test devices I see the following happen:
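As a rough Java analogue of that accumulate-until-total loop (the real code is ActionScript; names like readFully and bytesIn here are just illustrative, not taken from the thread):

import java.io.IOException;
import java.io.InputStream;

public class ChunkedReader {

    // Reads exactly 'total' bytes, looping because a single read() may return
    // only part of the payload (e.g. 3,762 of 31,500 bytes on the first read).
    static byte[] readFully(InputStream in, int total) throws IOException {
        byte[] buffer = new byte[total];
        int bytesIn = 0;                  // analogue of the bytesIn counter in the Flex listener
        while (bytesIn < total) {
            int n = in.read(buffer, bytesIn, total - bytesIn);
            if (n == -1) {
                throw new IOException("Stream closed after " + bytesIn + " of " + total + " bytes");
            }
            bytesIn += n;                 // keep accumulating until the whole message has arrived
        }
        return buffer;                    // only now is it safe to parse/cast the message
    }
}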

1. The server sends the header message and the client sees that the total byte count is 192. The client reports ByteArray.length == 0.
2. The server sends the actual message, but the client does not read it at all, so ByteArray.length remains 0. I believe the follow-up message is sent too quickly for the mobile device, whereas my desktop is able to keep up.
3. The server sends a new message header stating that the total bytes are 61 for the next message it is about to send (the server does not yet know that the client never fully processed the first message).
4. The client now reports ByteArray.length == 2, because it missed the actual message in step 2 and thinks it is reading the actual message now, when it is really reading the header bytes.
5. The client now reports ByteArray.length == 63, which is the 2 header bytes plus the 61 message bytes, read incorrectly because the client thinks the bytes it is receiving are the 192 it lost.

I believe my client is still processing the header message when the actual message is sent, and so it loses the actual message. What's the best strategy to solve this problem? Here is the server code.
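For reference, a minimal, hypothetical Java sketch of the pattern described above — a header announcing the byte count, followed by the payload — might look like the following. This is not the actual server code from the thread; the stream layout, class, and method names are assumptions.

import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class HeaderThenPayloadSender {

    // Writes a small header announcing the payload size, then the payload itself,
    // mirroring the header/message pairs described above (e.g. 192 bytes announced, then sent).
    static void send(OutputStream out, String message) throws IOException {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        DataOutputStream data = new DataOutputStream(out);
        data.writeInt(payload.length);   // header: total bytes the client should expect
        data.write(payload);             // the actual message bytes
        data.flush();                    // push both pieces out on the socket
    }
}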

Alvin Watkins
Ranch Hand

Joined: May 25, 2011
Posts: 53
Since no one offered any ideas, here's what I did...

I created a MessageManager object that takes all client messages and stores them in Maps. The MessageManager (MM) sends the client a header message if MM.clientRecievedLastMessage is true. The client responds with the size of the message it is expecting (based on the header message). MM then sends the client the message that corresponds to that header. Once the client has received all of the bytes, it notifies MM. MM then removes that message and, if there are more unsent messages, sends the next header, and the process repeats.
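A minimal Java sketch of that stop-and-wait flow might look like the following. Only the MessageManager name and the clientRecievedLastMessage flag come from the description above; the single queue, the Client interface, and the method names are my own assumptions (the real implementation stores messages in Maps).

import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the stop-and-wait flow: queue outgoing messages, send a header
// only when the client has confirmed the previous message, and send the
// payload only after the client asks for it by size.
public class MessageManager {

    private final Queue<byte[]> pending = new ArrayDeque<>();
    private boolean clientRecievedLastMessage = true;    // flag named in the post
    private byte[] inFlight;                             // message the client is currently reading

    // Queue a message; send its header right away if the client is idle.
    public synchronized void enqueue(byte[] message, Client client) {
        pending.add(message);
        maybeSendHeader(client);
    }

    // The client echoes back the size it read from the header; send that message.
    public synchronized void onClientRequestedSize(int size, Client client) {
        if (inFlight != null && inFlight.length == size) {
            client.sendPayload(inFlight);
        }
    }

    // The client confirms it received every byte; move on to the next message.
    public synchronized void onClientAcknowledged(Client client) {
        inFlight = null;
        clientRecievedLastMessage = true;
        maybeSendHeader(client);
    }

    private void maybeSendHeader(Client client) {
        if (clientRecievedLastMessage && !pending.isEmpty()) {
            inFlight = pending.poll();
            clientRecievedLastMessage = false;
            client.sendHeader(inFlight.length);   // header announces the payload size
        }
    }

    // Minimal client-side interface assumed for this sketch.
    public interface Client {
        void sendHeader(int totalBytes);
        void sendPayload(byte[] bytes);
    }
}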

Thanks.
 
With a little knowledge, a cast iron skillet is non-stick and lasts a lifetime.
 