Subject: wrong broker address with multicast lookup

I have 3 other machines on the same subnet, running the clients that should connect to the broker on JMSBROKER. All clients run in exactly the same way: 3 instances of the same application, one per machine, each hosted by a Tomcat container. The clients use the following discovery URI to find a broker: discovery:(multicast://zNet)?initialReconnectDelay=100
Now to the problem itself: while 2 of the machines connect to the broker successfully, the third one never does. The QueueConnection.start() method never returns, and no exceptions or errors are thrown.
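For context, this is roughly what each client does to connect. It's a simplified sketch: the class name is made up, and I'm assuming ActiveMQ's ActiveMQConnectionFactory here (the discovery: URI syntax and the default broker port 61616 suggest ActiveMQ); only the discovery URI itself is taken from the real configuration.

    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class BrokerLookupTest {
        public static void main(String[] args) throws Exception {
            // The same discovery URI all three Tomcat-hosted clients use.
            QueueConnectionFactory factory = new ActiveMQConnectionFactory(
                    "discovery:(multicast://zNet)?initialReconnectDelay=100");

            QueueConnection connection = factory.createQueueConnection();
            // On two machines this returns promptly; on the third machine
            // it blocks forever, with netstat showing SYN_SENT (see below).
            connection.start();
            System.out.println("connected to the broker");
            connection.close();
        }
    }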
When I run netstat on each of the client machines, the ones that connect show the following:

    Proto  Local Address             Foreign Address      State
    TCP    client_machine's_ip:port  JMSBROKER_ip:61616   ESTABLISHED

but on the problematic machine I see this instead:

    TCP    client_machine's_ip:port  192.168.2.222:61616  SYN_SENT
I can't figure out where that 192.168.2.222 (which sometimes changes to 192.168.2.221) is coming from. When Tomcat is down there is no connection to port 61616 at all, so no other process on this machine is using that port. My guess is that the multicast discovery is somehow returning a wrong broker IP.
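To test that guess, something like the sketch below could be run on the problematic machine to print the broker URIs actually being announced on the discovery group. Note the assumption: it listens on what I understand to be ActiveMQ's default discovery endpoint, 239.255.2.3:6155; if the zNet group maps to a different multicast address or port, those two values need to be adjusted.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    public class DiscoverySniffer {
        public static void main(String[] args) throws Exception {
            // Assumed ActiveMQ default discovery endpoint; adjust for the zNet group.
            InetAddress group = InetAddress.getByName("239.255.2.3");
            try (MulticastSocket socket = new MulticastSocket(6155)) {
                socket.joinGroup(group);
                byte[] buffer = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);
                    // Each announcement should carry the broker URI that clients
                    // will dial, e.g. "...tcp://<advertised-ip>:61616".
                    System.out.println(packet.getAddress().getHostAddress() + " -> "
                            + new String(packet.getData(), 0, packet.getLength()));
                }
            }
        }
    }

If the announcements seen on that machine contain tcp://192.168.2.222:61616, the wrong address is coming over the wire (the advertising side, or perhaps another broker on the network); if they contain the correct JMSBROKER address, the problem is more likely local to that machine.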
Has anyone seen this behavior before, and do you have an idea where I should look for a solution?