I'm looking for some advice on structuring my program, which I'm trying to write to communicate with a popular-ish game's dedicated server so I can implement my own automated administration features. The specification requires sending and receiving packets, which I have implemented as byte arrays using a socket's input and output streams, encapsulated in a class I've called ConnectionService.
In practice, once the connection has been established and packets are flowing back and forth, there should be many more packets coming from the server than being sent to it (though this is not guaranteed). I therefore decided to use one thread each for the socket's input and output streams, with the reader placing received packets into a BlockingQueue and the writer taking outbound packets from another. I had initially thought about having a single thread both write to and read from the streams, but I discarded this on the basis that a packet might not get sent while the thread was blocked waiting for data from the input stream.
Here is my code so far (using Brian Goetz's excellent book, Java Concurrency in Practice, as help):
There are some issues I need to tidy up with this, so please forgive any sins (such as not doing anything with caught exceptions).
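A minimal sketch of the two-thread design described above. This assumes length-prefixed packets; the class name matches my ConnectionService, but the method names and framing are illustrative, not the real protocol:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One thread drains the socket's input stream into an inbound queue;
// another drains an outbound queue into the socket's output stream.
public class ConnectionService {
    private final BlockingQueue<byte[]> inbound = new LinkedBlockingQueue<>();
    private final BlockingQueue<byte[]> outbound = new LinkedBlockingQueue<>();

    public void start(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());

        Thread reader = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    int length = in.readInt();       // assumes length-prefixed framing
                    byte[] packet = new byte[length];
                    in.readFully(packet);
                    inbound.put(packet);
                }
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();  // stop cleanly on close/interrupt
            }
        }, "packet-reader");

        Thread writer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] packet = outbound.take(); // parks until a packet is queued
                    out.writeInt(packet.length);
                    out.write(packet);
                    out.flush();
                }
            } catch (IOException | InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "packet-writer");

        reader.start();
        writer.start();
    }

    public void send(byte[] packet) throws InterruptedException { outbound.put(packet); }
    public byte[] receive() throws InterruptedException { return inbound.take(); }
}
```

The point of the two queues is that the writer thread never has to wait on the input stream: queueing a packet via send() is enough, and the writer wakes up and transmits it regardless of what the reader is blocked on.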
This seems to work fine, but I'm thinking about how best to retrieve from the two BlockingQueues later on. Ideally I would like to implement a class called RequestResponsePair, as each request should have a response which can be matched by the source of the packet and the packet's sequence number, so it made sense to have a single thread manage this. However, this means I'm back to the problem described earlier: my thread might be blocked waiting for a packet to arrive when it should be sending one.
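One hypothetical way to do the pairing without a dedicated blocking loop: key pending requests by sequence number and complete a future when the matching response arrives. The class and method names here are my own invention, not from the game's spec:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: the sender registers a future keyed by the request's sequence
// number; whichever thread drains the inbound queue completes it.
public class ResponseMatcher {
    private final Map<Integer, CompletableFuture<byte[]>> pending = new ConcurrentHashMap<>();

    // Called just before a request packet is queued for sending.
    public CompletableFuture<byte[]> expectResponse(int sequenceNumber) {
        CompletableFuture<byte[]> future = new CompletableFuture<>();
        pending.put(sequenceNumber, future);
        return future;
    }

    // Called by the thread processing inbound packets.
    public void onResponse(int sequenceNumber, byte[] packet) {
        CompletableFuture<byte[]> future = pending.remove(sequenceNumber);
        if (future != null) {
            future.complete(packet);
        }
        // else: an unsolicited server packet; route it elsewhere
    }
}
```

With this shape, no thread has to sit blocked waiting for a specific response; the sender can block on its own future (or attach a callback) while the inbound-processing thread keeps draining the queue.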
One solution I did think of was not to block my 'pairing' thread (which could be the main thread), but this would mean repeatedly polling both the inbound BlockingQueue and wherever the client-initiated requests come from (user input, or more likely the result of analysing the inbound packets). In this scenario, wouldn't that thread be in a busy-waiting loop, which I assume is undesirable?
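For what it's worth, BlockingQueue has a timed poll() that sits between the two extremes: the thread parks (no busy spin) for up to the timeout, then returns null so the loop can check its other sources. A small sketch, with illustrative names:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: a pairing loop waits on the inbound queue with a timeout
// instead of spinning. While waiting, the thread is parked by the JVM.
public class PairingLoop {
    private final BlockingQueue<byte[]> inbound = new LinkedBlockingQueue<>();

    public void deliver(byte[] packet) throws InterruptedException {
        inbound.put(packet);
    }

    // Returns the next inbound packet, or null if none arrived in time;
    // the caller can then service outbound work and loop again.
    public byte[] awaitPacket(long timeoutMillis) throws InterruptedException {
        return inbound.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```

Because poll(timeout, unit) blocks on an internal condition rather than looping, the thread consumes essentially no CPU while it waits, yet still wakes up regularly to check for pending outbound requests.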
Apologies for the long-winded post.
All help, and any other guidance/advice is greatly appreciated.
I've investigated a little, and implemented a ScheduledExecutorService which initiates my Runnable every half a second or so and sends any packets before receiving any pending packets. This works fine, keeping a single thread busy some of the time, but my inexperience still leaves me with a question:
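The scheduled approach above looks roughly like this (a sketch; the two Runnables stand in for my real send/receive logic):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: one task runs every 500 ms, draining the outbound queue before
// processing pending inbound packets. scheduleWithFixedDelay measures the
// delay from the END of each run, so slow runs never overlap.
public class PollingScheduler {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start(Runnable sendPending, Runnable processInbound) {
        scheduler.scheduleWithFixedDelay(() -> {
            sendPending.run();      // send any queued outbound packets
            processInbound.run();   // then handle/pair pending inbound packets
        }, 0, 500, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```

scheduleWithFixedDelay (rather than scheduleAtFixedRate) is the safer default here, since a burst of packets that makes one run slow just pushes the next run back instead of stacking executions.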
Is it better to have more threads blocked waiting for I/O, or fewer threads polling every ½ to 1 second and acting if anything needs doing? I'm unable to judge when the overhead of more threads would consume more resources than polling regularly.
Some context: I would like this program to run on a virtualised Linux instance which also hosts a fairly low-traffic web site. If the game server is not populated, then I guess regular polling could be wasteful.