I'm reading in a binary file and searching for key markers to determine what action to take with the data that follows the marker. I'm using a FileInputStream with a buffer to read the file.
My search algorithm reads a buffer-sized chunk (from 64 KB to 1 MB), searches that buffer for my pattern, records how many matches occurred in that buffer, and repeats the read, accumulating the total match count for the whole file. Changing the buffer size gave me different numbers of pattern matches, so I knew I was doing something wrong. My guess was that my pattern (which is 32 bits) was being split across buffer reads, so changing the buffer size would change the number of matches. I wrote a quick fix that appends the last (pattern_length - 1) bytes of the previous buffer read to the beginning of the current buffer before searching for the pattern. That just made things much worse...
I finally just started looking at where the different buffer sizes were finding my pattern, and it turns out that all the differences were at the very end of the file, where, it appears, the last buffer read goes past the end of the file and picks up garbage data. Discarding the bad matches at the end gives the correct number of pattern matches. I don't understand that...
Don't I have to worry about searching for a pattern when reading a file in buffered chunks like this? Isn't having the pattern split across reads a possibility?
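In outline, the overlap approach I attempted looks something like this (a simplified sketch with placeholder names, not my actual code, which is attached below). I believe the key detail is searching only the bytes actually returned by read(), since read() can return fewer bytes than requested and the tail of the buffer then still holds stale data from the previous read:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class PatternCount {
    /**
     * Counts occurrences of pattern in the stream, reading chunks of
     * bufSize bytes and carrying the last (pattern.length - 1) bytes of
     * each chunk forward so a match split across a read boundary is found.
     */
    static int countMatches(InputStream in, byte[] pattern, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize + pattern.length - 1];
        int carry = 0;    // bytes kept from the previous chunk, at the front of buf
        int count = 0;
        int bytesRead;
        while ((bytesRead = in.read(buf, carry, bufSize)) != -1) {
            // Only carry + bytesRead bytes are real data; read() can return
            // fewer than bufSize bytes, and anything past that point is stale.
            int valid = carry + bytesRead;
            for (int i = 0; i + pattern.length <= valid; i++) {
                boolean match = true;
                for (int j = 0; j < pattern.length; j++) {
                    if (buf[i + j] != pattern[j]) { match = false; break; }
                }
                if (match) count++;
            }
            // Keep the last (pattern.length - 1) valid bytes for the next pass.
            // A full match can't fit inside them alone, so nothing is double-counted.
            carry = Math.min(pattern.length - 1, valid);
            System.arraycopy(buf, valid - carry, buf, 0, carry);
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] pattern = {1, 2, 3, 4};                  // stand-in for my 32-bit marker
        byte[] data = {0, 1, 2, 3, 4, 9, 1, 2, 3, 4};  // two occurrences
        // bufSize of 4 forces both matches to straddle a read boundary
        int n = countMatches(new ByteArrayInputStream(data), pattern, 4);
        System.out.println(n);                          // prints 2
    }
}
```

Using a ByteArrayInputStream here just makes the sketch self-contained; the same loop would run against a FileInputStream.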
I've attached the code. It's just something I threw together for testing, so please don't be too critical...