Would anyone be able to suggest a way that I can test if a file is done being written to?
1. Server 1 writes an XML file to a DFS location. This file can potentially be large. 2. When we are 'done' writing the file (or so I thought), a JMS message is sent to server 2 to read that XML and process the request. However, it seems that many times server 2 tries to read the file before it has completely been written, and therefore throws a premature EOF exception.
If I force server 2 to retry processing the same file, just seconds later, it is successful.
I thought about just putting in a thread wait to make the server 2 process wait 15 seconds after it receives the JMS message and before it tries to parse the file. But I'm afraid that if one day the file is extremely large, writing it will still exceed the 15 seconds, and server 2 will still fail. So, I want the process on server 2 to look at the file and determine whether it is complete before trying to parse it.
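One common workaround along those lines (a sketch only, and not necessarily the right fix) is to poll the file and treat it as complete once its size has stopped changing for a full poll interval. The `waitUntilStable` helper below is hypothetical, not part of any library:

```java
import java.io.File;
import java.io.FileWriter;

public class FileStabilityCheck {

    // Wait until the file's size has been stable for one full poll interval,
    // or give up after maxWaitMillis. Returns true if the file looks complete.
    static boolean waitUntilStable(File file, long pollMillis, long maxWaitMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMillis;
        long lastSize = -1;
        while (System.currentTimeMillis() < deadline) {
            long size = file.length();      // 0 if the file does not exist yet
            if (size > 0 && size == lastSize) {
                return true;                // size unchanged across one interval
            }
            lastSize = size;
            Thread.sleep(pollMillis);
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".xml");
        f.deleteOnExit();
        try (FileWriter w = new FileWriter(f)) {
            w.write("<request>done</request>");
        }
        System.out.println(waitUntilStable(f, 100, 2000)); // prints true
    }
}
```

Note that this is only a heuristic: a stalled writer looks the same as a finished one, which is why closing the stream before sending the JMS message (or moving the file into place when complete) is the more reliable answer.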
First, a simple check is to test whether the file is writable before opening it for reading; you can use the File class's canWrite() method for that (bearing in mind it really tests write permission rather than locking). On some platforms, writing to a file holds an exclusive lock on it, so perhaps the first process is taking longer to release all the resources associated with the file. However, I don't understand how this situation is possible. How can the first process send a JMS message saying the file has been written when it hasn't? Is the close() method on the output stream called before sending out the message?
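To make the point concrete, here is a minimal sketch of the ordering being asked about: the stream is flushed and closed before the notification goes out. The `sendJmsMessage()` helper is hypothetical, standing in for whatever JMS publish call the application already uses:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class WriteThenNotify {

    public static void writeXml(String path, String xml) throws IOException {
        // try-with-resources guarantees flush() and close() have completed
        // before we fall out of the block, so every byte has been handed to
        // the OS before any notification is sent
        try (OutputStreamWriter out = new OutputStreamWriter(
                new FileOutputStream(path), StandardCharsets.UTF_8)) {
            out.write(xml);
        }
        // only now is it safe to tell server 2 to go read the file
        sendJmsMessage(path); // hypothetical stand-in for the real JMS publish
    }

    private static void sendJmsMessage(String path) {
        System.out.println("JMS notify: " + path);
    }
}
```

If the close() happens after (or concurrently with) the message send, the reader can easily win the race and hit a premature EOF, which matches the symptom described above.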
I'm a newbie at file I/O stuff. Thank you for opening my eyes. The FileOutputStream instance is not being closed. That is probably my problem.
But I have one more question. This premature EOF error only occurs in our QA environment where we have multiple servers (1 & 2). This error never occurs in our dev environment where there is ONLY one server that is doing both the reading & writing.
Does it always work in our dev environment because the server can tell when one of its own threads has the resource locked, and therefore it waits for it to be released, then processes it?
I once wrote a system to send and receive files via a socket connection. There I had the same kind of problem - other systems on my side shouldn't pick up received files before they were completely written to disk.
I solved it by writing the file that is being received to another directory first.
For example, my software would write the file to C:\Data\In\Temp while it was busy receiving the file. When the file was received completely, I'd move it to C:\Data\In, where the other system would pick it up.
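The staging-directory approach above can be sketched with the java.nio.file API: write into a temp directory, then move the finished file into the pickup directory. The class and method names here are illustrative, not from the original system:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveWhenComplete {

    // Write the file into a staging directory, then move it into the pickup
    // directory once it is complete. A move within the same file system is
    // effectively instantaneous, so the reader never sees a half-written file
    // in the pickup directory.
    public static Path publish(Path tempDir, Path pickupDir, String name, byte[] data)
            throws Exception {
        Files.createDirectories(tempDir);
        Files.createDirectories(pickupDir);

        Path staged = tempDir.resolve(name);
        Files.write(staged, data);          // may take a while for large files

        Path target = pickupDir.resolve(name);
        return Files.move(staged, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

One caveat: ATOMIC_MOVE requires both directories to live on the same file system (otherwise Files.move throws AtomicMoveNotSupportedException), which is exactly why the temp directory in the example sits next to the pickup directory rather than on a different volume.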
Sorry, I should have been more clear. Actually, the issue occurs ONLY when server 1 is writing the file and server 2 prematurely tries to pick up the file from the location it's being written to. In QA, the 2 app server processes are in a clustered environment, 1 app on each app server. So, sometimes server 1 will write & server 2 will read/parse. Other times, one server is doing both. Approximately 30% of process attempts would fail due to the opposite server trying to read/parse the file before it was completely written by the first server.
As a test in QA yesterday, I actually shut down server 2 to force server 1 to do both the reading and writing, and 10 of 10 process attempts were successful, the same as in our Dev environment.
Thank you for your suggestions.
Jesper, I am currently writing directly to a file system. The JMS message is just text informing the 2nd process to go get the file. So in your situation did you check for completion before you moved the file from temp to the final destination directory where your 2nd process could pick it up? In other words, how did your app know when it was ok to move the file to the 2nd directory?