I'm working on a multicast UDP reader in Java and noticed that it tends to drop packets more often than I would like. For testing purposes, I replay a capture file across my network using tcpreplay and read the traffic with both my Java client and tcpdump. Here's what I have so far:
- Multiple runs show me that tcpdump picks up all the packets all the time
- The Java client running on the same machine as tcpdump picks up a different number of packets each time I run it, dropping roughly 0.03% of the total
- The only time the Java client was able to pick up all the packets was when I removed all my packet-processing code and had it do nothing but count the packets that arrived (roughly the test sketched below)
So I seem to have hit on two things. One is that every time I add post-processing to the UDP data I receive, I start losing packets. The other is that the loss is not uniform across tests: I ran two instances of the Java client on 2-3 machines, and every one of them received a different number of packets (differing by roughly 0.01%).
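For reference, the stripped-down counting test looks roughly like this (the group address and port are placeholders for whatever tcpreplay is actually targeting on my network):

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

public class CountingReader {
    public static void main(String[] args) throws Exception {
        // Placeholder group/port -- substitute the real multicast destination.
        InetAddress group = InetAddress.getByName("239.1.1.1");
        MulticastSocket socket = new MulticastSocket(5000);
        socket.joinGroup(group);

        byte[] buf = new byte[65535];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);

        long count = 0;
        while (true) {
            socket.receive(packet); // blocks until a datagram arrives
            count++;                // no per-packet processing at all
            if (count % 10000 == 0) {
                System.out.println("received: " + count);
            }
        }
    }
}
```

With this version the count always matches what tcpdump reports; it's only once real processing goes inside (or after) that loop that packets start going missing.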
How can I ensure that the Java program receives all the packets sent across the network? Is the standard DatagramSocket class not powerful enough to handle this? My hunch is that the packet loss is caused by two things:
- Overflow of the kernel receive buffer behind the UDP socket
- Java garbage collection triggered by my post-processing code pausing the reader thread, which in turn lets that buffer overflow
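If those hunches are right, the fix I was considering is something like the sketch below: ask for a larger SO_RCVBUF via setReceiveBufferSize (the OS may clamp the request, so I read back what was actually granted) and move all post-processing onto a separate thread so the receive loop never blocks. The group address, port, queue size, and process() method are placeholders:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DecoupledReader {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.1.1"); // placeholder group
        MulticastSocket socket = new MulticastSocket(5000);     // placeholder port
        socket.joinGroup(group);

        // Request a larger kernel receive buffer; the OS may clamp this,
        // so check the value actually granted.
        socket.setReceiveBufferSize(4 * 1024 * 1024);
        System.out.println("SO_RCVBUF granted: " + socket.getReceiveBufferSize());

        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(100_000);

        // Consumer thread: runs the slow post-processing off the hot path.
        Thread processor = new Thread(() -> {
            try {
                while (true) {
                    process(queue.take()); // stand-in for my packet-processing code
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        processor.start();

        // Receive loop: copy the payload and hand it off immediately.
        byte[] buf = new byte[65535];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        while (true) {
            socket.receive(packet);
            byte[] copy = new byte[packet.getLength()];
            System.arraycopy(buf, 0, copy, 0, packet.getLength());
            queue.offer(copy); // drops on queue overflow rather than block the reader
        }
    }

    private static void process(byte[] data) {
        // placeholder for the actual post-processing
    }
}
```

Would something along those lines be enough, or is this a losing battle with plain DatagramSocket/MulticastSocket?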
Thoughts?
Thanks