
NX: (HTL) Dynamic file reading using schema

 
james render
Ranch Hand
Posts: 72
Hi guys, I'm finding it tough to get going on my assignment. I'm looking at the file I/O problem.
My question is: should I be dynamically reading data from the db file using the information in the header and the schema, or is it okay to hardcode file positions?
I know in my heart of hearts it should be done dynamically, but how far should I carry this through the system? Reading dynamically still means you've made assumptions that the file will stay in the same format.
There aren't really any requirements mentioned, so am I overengineering?
Tying myself in knots about this!
james
 
S. Ganapathy
Ranch Hand
Posts: 194
If you are reading the file for the first time, read the file format and all the header information, then use that header information, such as the position where the data starts. That is enough to say you are reading dynamically. Who else is going to delete the data file or corrupt the file header while your process is running?
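A minimal sketch of reading the schema from the header rather than hardcoding positions. The layout here (a field count, then a name-length / name / field-length triple per field) is a guess at a typical assignment format, not the real one — check your own instructions. The sketch builds a fake header in memory so it runs on its own:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaReader {

    // Parses the schema section of a (hypothetical) db file header:
    // a field count, then for each field a name length, the name bytes,
    // and the field's byte length. Insertion order is preserved so the
    // map also gives you each field's offset within a record.
    static Map<String, Integer> readSchema(DataInputStream in) throws IOException {
        Map<String, Integer> fields = new LinkedHashMap<>();
        short fieldCount = in.readShort();
        for (int i = 0; i < fieldCount; i++) {
            short nameLen = in.readShort();
            byte[] name = new byte[nameLen];
            in.readFully(name);
            short fieldLen = in.readShort();
            fields.put(new String(name, "US-ASCII"), (int) fieldLen);
        }
        return fields;
    }

    public static void main(String[] args) throws IOException {
        // Build a fake header in memory so the sketch is self-contained.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeShort(2);                                       // two fields
        out.writeShort(4); out.writeBytes("name"); out.writeShort(64);
        out.writeShort(4); out.writeBytes("size"); out.writeShort(4);

        Map<String, Integer> schema = readSchema(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(schema);                              // field name -> field length
    }
}
```

With this in place, record offsets fall out of the schema map instead of being scattered through the code as magic numbers.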
 
james render
Ranch Hand
Posts: 72
So you don't have to deal with the fact that the file format might change in the future, e.g. if someone adds another field?
 
S. Ganapathy
Ranch Hand
Posts: 194
That means your database schema has changed, so you have to modify your database implementation accordingly to take care of the new field, even though the interface won't change.
 
Ta Ri Ki Sun
Ranch Hand
Posts: 442
Originally posted by james render:
So you don't have to deal with the fact that the file format might change in the future, e.g. if someone adds another field?

Adding another field, or a similar schema change, is likely to break a few things. So at startup, check that everything you depend on is still the same as it was when you wrote the code; otherwise, throw an exception describing why your integrity checks failed. If these enhancements do happen, the coders doing the enhancing should have had fair warning from your documents, and they should update your data integrity check as well.
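The startup check described above could look something like this. The magic cookie and record length values are invented for illustration; your assignment's documented values go in their place:

```java
import java.io.IOException;

public class IntegrityCheck {

    // Values the rest of the code was written against.
    // Both numbers are hypothetical -- use your assignment's documented values.
    private static final int EXPECTED_MAGIC_COOKIE = 0x00000101;
    private static final int EXPECTED_RECORD_LENGTH = 183;

    /**
     * Called once at startup with values read from the file header.
     * Fails fast with a descriptive exception if the file no longer
     * matches the layout this code depends on.
     */
    static void verify(int magicCookie, int recordLength) throws IOException {
        if (magicCookie != EXPECTED_MAGIC_COOKIE) {
            throw new IOException("Unrecognised magic cookie: " + magicCookie
                    + " (expected " + EXPECTED_MAGIC_COOKIE + ")");
        }
        if (recordLength != EXPECTED_RECORD_LENGTH) {
            throw new IOException("Record length changed: expected "
                    + EXPECTED_RECORD_LENGTH + " but header says " + recordLength);
        }
    }
}
```

Because the check runs only once, it costs nothing per read, and a future maintainer who changes the schema gets an immediate, explicit failure instead of silently corrupted reads.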
 
Ta Ri Ki Sun
Ranch Hand
Posts: 442
Oh yes, having said that, I haven't implemented that at all yet, since I still haven't found a good enough reason to. I also haven't scrutinised my code enough for such a reason yet, but on the surface it seems as though I can handle these schema changes, provided the header and schema description format remain the same.
I'll know soon enough, I guess, when I change the database myself as soon as I have some time.
 
S. Ganapathy
Ranch Hand
Posts: 194
Checking data integrity on every read may be expensive, and performance will suffer. So it's better to assume that there won't be any concurrent modifications to the data file and that the integrity of the database schema doesn't change. That should be enough.
 
S. Ganapathy
Ranch Hand
Posts: 194
Hi Ta Ri Ki Sun,
I already made a few attempts at this and came to this implementation. Integrity checks on each read are expensive and hurt performance, so I introduced private static final constants to check the database integrity: if the sum of the field lengths matches the record length, the format is sound enough to proceed further.
What do you say?
Ganapathy
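Ganapathy's consistency check can be sketched in a few lines. Whether the record length includes extras such as a deleted-record flag depends on the particular file format, so that part is an assumption to adjust:

```java
import java.util.Map;

public class RecordLengthCheck {

    /**
     * The record length stored in the header should equal the sum of the
     * individual field lengths read from the schema. (If your format also
     * stores a per-record flag byte, add it to the sum -- that detail is
     * format-specific and assumed away here.)
     */
    static boolean schemaIsConsistent(Map<String, Integer> schema, int recordLength) {
        int sum = 0;
        for (int len : schema.values()) {
            sum += len;
        }
        return sum == recordLength;
    }
}
```

Run once at startup, this catches a schema edit that forgot to update the record length, at no per-read cost.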
 
Ta Ri Ki Sun
Ranch Hand
Posts: 442
Originally posted by S. Ganapathy:
Hi Ta Ri Ki Sun,
I already made a few attempts at this and came to this implementation. Integrity checks on each read are expensive and hurt performance, so I introduced private static final constants to check the database integrity: if the sum of the field lengths matches the record length, the format is sound enough to proceed further.
What do you say?
Ganapathy

Hi Ganapathy, when I suggested the check to James, I meant on startup, definitely not for every read. If all is well at startup, and only your system can make changes, it should be safe to assume all is well until the next startup.
 