NX: (HTL) Dynamic file reading using schema

james render
Ranch Hand

Joined: May 08, 2003
Posts: 72
Hi guys, I'm finding it tough to get going on my assignment. I'm looking at the file I/O problem.
My question is:
Should I be reading data dynamically from the db file using the information in the header and the schema, or is it okay to hardcode file positions?
I know in my heart of hearts it should be done dynamically, but how far should I carry this through the system?
Plus, reading dynamically still means I've assumed the file will be in the same format.
There aren't really any requirements mentioned, so am I over-engineering?
I'm tying myself in knots about this!
james


[SCJP][SCWCD][SCJD]
S. Ganapathy
Ranch Hand

Joined: Mar 26, 2003
Posts: 194
If you are reading the file for the first time, read the file format and all of the header information. Then use that header information, for example the position at which the data starts, when you read records. That is enough to say you are reading dynamically. Who else is going to delete the data file or corrupt the file header while your application is running?
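For illustration, reading the schema from the header might look something like this. It is just a sketch: the layout assumed here (an int cookie, a short field count, then a per-field name length, name, and field length) is an assumption, so use the exact layout from your own instructions.

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

// Sketch only: the header layout (int cookie, short field count,
// unsigned-byte name length, short field length) is assumed, not taken
// from any particular assignment's instructions.
public class SchemaReader {

    public static String[] readSchema(String dbPath) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(dbPath));
        try {
            int magicCookie = in.readInt();      // identifies the file format
            int fieldCount = in.readShort();     // number of fields per record

            String[] fieldNames = new String[fieldCount];
            int[] fieldLengths = new int[fieldCount];
            for (int i = 0; i < fieldCount; i++) {
                byte[] nameBytes = new byte[in.readUnsignedByte()];
                in.readFully(nameBytes);
                fieldNames[i] = new String(nameBytes, "US-ASCII");
                fieldLengths[i] = in.readShort();
            }
            // The stream is now positioned at the first record, so record
            // offsets can be computed from fieldLengths instead of hardcoding.
            return fieldNames;
        } finally {
            in.close();
        }
    }
}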
james render
Ranch Hand

Joined: May 08, 2003
Posts: 72
So you don't have to deal with the possibility that the file format might change in the future, e.g. if someone adds another field?
S. Ganapathy
Ranch Hand

Joined: Mar 26, 2003
Posts: 194
That means your database schema has changed, so you have to modify your database implementation accordingly to take care of the new field, though the interface won't change.
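To illustrate that point, here is a simplified stand-in for the supplied interface and its implementation; the interface below is only a placeholder, not the real one shipped with the assignment.

import java.io.IOException;

// Placeholder for the supplied interface; the real one must not be modified.
public interface DB {
    String[] readRecord(int recNo) throws IOException;
}

// The implementation derives field positions from the header at construction
// time, so an extra field only touches this class, never the interface.
class Data implements DB {

    private final int[] fieldLengths;   // read from the file header, not hardcoded

    Data(int[] fieldLengthsFromHeader) {
        this.fieldLengths = fieldLengthsFromHeader;
    }

    public String[] readRecord(int recNo) throws IOException {
        // offset = dataStart + recNo * (sum of fieldLengths) ... omitted in this sketch
        return new String[fieldLengths.length];
    }
}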
Ta Ri Ki Sun
Ranch Hand

Joined: Mar 26, 2002
Posts: 442
Originally posted by james render:
So you don't have to deal with the possibility that the file format might change in the future, e.g. if someone adds another field?

Adding another field, or a similar schema change, is likely to break a few things. So at startup, check that everything you depend on is still the way it was when you wrote the code, and if it isn't, throw an exception describing why your integrity checks failed. If those enhancements do happen, the coders doing the enhancing should have had fair warning from your documentation, and they should update your data integrity check as well.
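For example, such a startup check might look roughly like this; the expected cookie and field names below are invented, purely for illustration.

import java.util.Arrays;

// Sketch of a startup integrity check; the expected values are made up.
public class IntegrityChecker {

    private static final int EXPECTED_MAGIC_COOKIE = 0x0202;                  // hypothetical
    private static final String[] EXPECTED_FIELDS = { "name", "location" };   // hypothetical

    /** Called once when the data file is opened, not on every read. */
    public static void checkIntegrity(int magicCookie, String[] fieldNames) {
        if (magicCookie != EXPECTED_MAGIC_COOKIE) {
            throw new IllegalStateException(
                "Unrecognised magic cookie: " + magicCookie);
        }
        if (!Arrays.equals(fieldNames, EXPECTED_FIELDS)) {
            throw new IllegalStateException(
                "Schema no longer matches what this code expects: "
                + Arrays.toString(fieldNames));
        }
    }
}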
Ta Ri Ki Sun
Ranch Hand

Joined: Mar 26, 2002
Posts: 442
Oh yes, having said that, I haven't actually implemented that yet, since I still haven't found a good enough reason to, and I haven't scrutinised my code enough to find one either. But on the surface it seems as though I can handle these schema changes, provided the header and schema description format remain the same.
I'll know soon enough, I guess, when I change the database myself as soon as I have some time.
S. Ganapathy
Ranch Hand

Joined: Mar 26, 2003
Posts: 194
Checking the data integrity on every read may be expensive, and performance will be poor. It is better to assume that there won't be any concurrent modifications to the data file and that nothing changes the integrity of the database schema. That may be enough.
S. Ganapathy
Ranch Hand

Joined: Mar 26, 2003
Posts: 194
Hi Ta Ri Ki Sun,
I have already made a few attempts at this and arrived at the following implementation. Integrity checks on every read are expensive and hurt performance, so I introduced private static final constants to check the database integrity: if the sum of the field lengths matches the length of the record, the format is considered fine enough to proceed.
What do you say?
Ganapathy
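For illustration, a check of that kind might look something like this; the record and field lengths below are invented, not from any real schema.

import java.util.Arrays;

// Sketch of a constants-based schema check; the expected lengths are made up.
public class SchemaCheck {

    private static final int EXPECTED_RECORD_LENGTH = 149;                      // hypothetical
    private static final int[] EXPECTED_FIELD_LENGTHS = { 64, 64, 4, 1, 8, 8 }; // hypothetical

    /** True if the header's field lengths match the constants and sum to the record length. */
    public static boolean schemaLooksValid(int recordLength, int[] fieldLengths) {
        if (!Arrays.equals(fieldLengths, EXPECTED_FIELD_LENGTHS)) {
            return false;
        }
        int sum = 0;
        for (int i = 0; i < fieldLengths.length; i++) {
            sum += fieldLengths[i];
        }
        return sum == recordLength && recordLength == EXPECTED_RECORD_LENGTH;
    }
}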
Ta Ri Ki Sun
Ranch Hand

Joined: Mar 26, 2002
Posts: 442
Originally posted by S. Ganapathy:
Hi Ta Ri Ki Sun,
I have already made a few attempts at this and arrived at the following implementation. Integrity checks on every read are expensive and hurt performance, so I introduced private static final constants to check the database integrity: if the sum of the field lengths matches the length of the record, the format is considered fine enough to proceed.
What do you say?
Ganapathy

Hi Ganapathy, when I suggested the check to James I meant on startup, definitely not for every read. If all is well at startup, and only your system can make changes, it should be safe to assume all is well until the next startup.
 