I have to design a mini database server. The machine that holds the database will be called the server, and a machine that sends requests to it will be a client. When the server machine boots, a server socket is opened on the machine to listen for client requests to the database. The server.java class has the main method that opens the server socket; the main method then spawns threads for read and write operations.

The application is designed so that when no operation, or only read operation(s), are in progress on a file (the flat-file database), further read operations are allowed; if a write request is made, the write thread must wait until all the read threads exit. Similarly, if a write operation is in progress, any other read or write request thread must wait. A single client request may spawn multiple read and write threads on various files, and multiple clients may make simultaneous requests to the server.

The problem is that whenever a client has a blocked read or write thread, the next calling client is not granted a network connection to the database, i.e. the accept() method does not get called. This happens only because control is not going back to the main thread. Whenever a thread has to wait, I make it sleep. That should automatically yield control to the main thread and let other clients connect, but it does not happen. In fact, if a client performs only read operations, so that no thread is blocked, the next calling clients are accepted. But as soon as even a single thread of any one client has to wait, no further client is accepted, although when the waiting thread regains control in due time, the next client is connected.
I tried the same approach in a test program where the main thread does not make network connections but instead runs a loop printing 50 numbers. In that case the main thread does regain control when a child thread gets blocked.
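For the accept() problem, the usual structure is to have the main thread do nothing but accept() and hand each socket off to a fresh handler thread, so a worker that sleeps or waits can never stall the listener. Here is a minimal sketch, assuming that structure — the class name, port handling, and handler body are mine, not taken from the original server.java:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MiniDbServer {
    private final ServerSocket serverSocket;

    public MiniDbServer(int port) throws IOException {
        serverSocket = new ServerSocket(port);
    }

    public int getPort() {
        return serverSocket.getLocalPort();
    }

    // The accept loop blocks only on accept(). Every accepted client is
    // handed to its own thread, so a handler that sleeps or waits on a
    // file lock never stops the next client from connecting.
    public void serve() throws IOException {
        while (true) {
            final Socket client = serverSocket.accept();
            new Thread(new Runnable() {
                public void run() {
                    handle(client);
                }
            }).start();
        }
    }

    // Placeholder per-client handler: in the real server this would parse
    // the request and spawn the read/write threads before closing.
    private void handle(Socket client) {
        try {
            client.close();
        } catch (IOException ignored) {
        }
    }
}
```

If accept() still stalls with this shape, the cause is almost certainly something the main thread itself blocks on (e.g. joining worker threads, or synchronizing on a monitor a sleeping worker holds — Thread.sleep() does not release any locks the thread owns).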
If I understand your algorithm correctly, for any file I can get read access if there are no writers (never mind other readers), and I can get write access if there are no readers and no writers. I think you'll find that if readers come along fast enough to start a new one before all the old ones have finished, they'll lock the writers out completely! You might just run all requests on a given file FIFO to give the writers a fighting chance.
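That policy — readers share, a writer waits for quiet — maps directly onto a wait/notify monitor. A minimal per-file sketch (class and method names are mine), which also exhibits exactly the writer-starvation problem described above, since nothing stops new readers from jumping the queue:

```java
// One instance of this guards one flat file. Readers may overlap; a
// writer waits until there are no readers and no active writer.
// NOTE: as written, a steady stream of readers starves writers --
// a FIFO queue per file would fix that.
public class FileLock {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void acquireRead() throws InterruptedException {
        while (writing) {
            wait(); // a writer is active; wait for releaseWrite()
        }
        readers++;
    }

    public synchronized void releaseRead() {
        readers--;
        if (readers == 0) {
            notifyAll(); // last reader out; a waiting writer may proceed
        }
    }

    public synchronized void acquireWrite() throws InterruptedException {
        while (writing || readers > 0) {
            wait(); // wait for all readers and any active writer to finish
        }
        writing = true;
    }

    public synchronized void releaseWrite() {
        writing = false;
        notifyAll(); // wake all waiting readers and writers
    }
}
```

The key point for the original poster: a blocked thread should wait() on the lock object, not sleep() — wait() releases the monitor so other threads (and the main accept loop) keep running, and notifyAll() wakes it when the state changes.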
The first algorithm sounded like something I could learn from, so I took a shot at it. See my toy implementation here: http://www.surfscranton.com/files/Locker.zip Play with the pause time before reading, which simulates the frequency of requests, and the pause time that simulates reading or writing the file, to see readers and writers mix it up more or less. Any thread gurus, lemme know if this is a sane approach.
Oh, rats. I ignored the networking part altogether. Maybe you can adapt the getReadAccess and getWriteAccess to networked APIs? [ May 14, 2004: Message edited by: Stan James ]
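Since a single client request may touch several files, one way to bridge the gap to the networked server is to keep one lock object per file, so requests on different files never contend while all requests on the same file share a lock. A minimal sketch, assuming that design — LockRegistry and lockFor are hypothetical names, and the stored value would be whatever per-file lock object (e.g. the Locker from the zip) you settle on:

```java
import java.util.HashMap;
import java.util.Map;

// Maps each flat-file name to a single shared lock object, created lazily.
// Handler threads ask the registry for the lock before reading or writing,
// so contention is per-file rather than server-wide.
public class LockRegistry {
    private final Map locks = new HashMap(); // filename -> lock object

    public synchronized Object lockFor(String filename) {
        Object lock = locks.get(filename);
        if (lock == null) {
            lock = new Object(); // stand-in for a readers-writers monitor
            locks.put(filename, lock);
        }
        return lock;
    }
}
```

Making lockFor synchronized matters: two handler threads asking for the same file at the same moment must get the same lock object, not two different ones.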
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi