NX: Reading and Writing to DB file - totally lost!!!!!

 
Bill Robertson
Ranch Hand
Posts: 234
My start of file data schema reads:
Start of file
4 byte numeric, magic cookie value identifies this as a data file
4 byte numeric, offset to start of record zero
2 byte numeric, number of fields in each record
But when I read in these bytes they come out either blank or as special characters that I cannot interpret (it looks like a bunch of junk; in fact, two of the bytes look like smiley faces). What am I doing wrong?
try {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    byte[] bArray = new byte[10];
    raf.readFully(bArray); // read the first 10 bytes of the header
    String str = new String(bArray, 0, 10, "US-ASCII"); // decode them as ASCII text
    System.out.println(str);
} catch (IOException e) {
    System.out.println(e.getMessage());
}
 
Niall ORiordan
Greenhorn
Posts: 15
I have implemented the reading and writing of the DB with DataInputStream and DataOutputStream, as alluded to in the assignment (Bodgitt and Scarper). It works really well and is much easier than RandomAccessFile.
/Niall
 
Bill Robertson
Ranch Hand
Posts: 234
Let me try them and I will get back to you. Thanks!!
 
Wanderer
Posts: 18671
I have implemented the reading and writing of the DB with DataInputStream and DataOutputStream, as alluded to in the assignment (Bodgitt and Scarper). It works really well and is much easier than RandomAccessFile.

Really? Check out the DataInput and DataOutput interfaces. Note that these interfaces are shared by DataInputStream, DataOutputStream, and RandomAccessFile: if you can use DataInputStream, you can use RandomAccessFile much the same way. RAF has the convenience of having input and output in one class, and allows random access as well. To understand the latter, consider: how would you update record 1000 with a DataOutputStream? Do you write the previous 999 records first? (Or 1000, if you start at 0.) This might make sense if you're keeping all records in memory and updating the whole file at once. Many people prefer to write their updates to the file immediately, however, and many do not want to keep everything in memory. That's why you see a lot of people using RAF and FileChannel rather than streams.
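For example, with RAF an in-place update is just a seek and a write. A rough sketch (the header and record lengths here are made-up numbers, not from any particular assignment):

import java.io.IOException;
import java.io.RandomAccessFile;

public class InPlaceUpdate {
    static final long HEADER_LENGTH = 70;  // made-up: whatever your header occupies
    static final int RECORD_LENGTH = 183;  // made-up: flag byte plus all field bytes

    // Overwrites record recNo without touching any other record.
    static void updateRecord(RandomAccessFile raf, int recNo, byte[] record)
            throws IOException {
        raf.seek(HEADER_LENGTH + (long) recNo * RECORD_LENGTH);
        raf.write(record);
    }
}

Try doing that with a DataOutputStream and you'll see the difference.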
 
Bharat Ruparel
Ranch Hand
Posts: 493
Hello Bill,
Look up the RandomAccessFile API in JDK 1.4+. It is not all that complicated once you get the hang of it. I suspect that you are reading the file header information with the wrong methods. For example, to read the magic cookie you need to use the following method:
magicCookie = in.readInt();
Don't worry about the "US-ASCII" character encoding here, since it only applies to the record data and not the file header. Moreover, all record data is stored as fixed-length strings.
After reading the magic cookie, you can read the next 4-byte value (in your schema, the offset to the start of record zero) by using the readInt method again, as follows:
recordZeroOffset = in.readInt();
Similarly, to read the 2-byte "short" value, you need to use the readShort() method, as follows:
numOfFieldsInRec = in.readShort();
Note that I have used these methods in the order specified by the data file schema.
Write a small test program which reads data in small chunks and you will begin to gain confidence. If you get stuck, post it here and we will help you out.
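For instance, something along these lines (an untested sketch; the file name is just a placeholder, and the variable names follow the schema you posted):

import java.io.IOException;
import java.io.RandomAccessFile;

public class HeaderTest {
    public static void main(String[] args) throws IOException {
        RandomAccessFile in = new RandomAccessFile("db.db", "r"); // placeholder path
        int magicCookie = in.readInt();          // 4 byte numeric, magic cookie
        int recordZeroOffset = in.readInt();     // 4 byte numeric, offset to record zero
        short numOfFieldsInRec = in.readShort(); // 2 byte numeric, fields per record
        System.out.println("cookie=" + magicCookie + ", offset=" + recordZeroOffset
                + ", fields=" + numOfFieldsInRec);
        in.close();
    }
}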
Regards.
Bharat
 
Bill Robertson
Ranch Hand
Posts: 234
Thanks for all the help. I have the reading of the file complete.
Now, when it comes to writing to the file, how do I locate, let's say, record 3? Say I only want to update/delete record 3 in the file. Do I do something like seek(3 * recordLength) to get to the beginning of my record? For example, record 3 starts at the 201st byte of my file. How do I determine this other than hardcoding the lengths of the header and data fields (32, 64, 64, 6, 8, 8) in my seek?
Am I making any sense?
 
Ranch Hand
Posts: 555
Hi Bill,
You don't have to hard-code it. I have the URLyBird assignment, and these values, the lengths of the fields, are saved in the header of the database file.
I assume you have something similar?
If so, you read the header once and store the values in your class.
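Roughly like this (only a sketch; the exact per-field layout of the header differs between assignments, so treat the readShort() and skipBytes() calls as placeholders and check your own instructions):

import java.io.IOException;
import java.io.RandomAccessFile;

public class SchemaReader {
    // Reads the per-field schema section that follows the fixed header,
    // assuming numOfFields was already read with readShort().
    static int[] readFieldLengths(RandomAccessFile in, int numOfFields)
            throws IOException {
        int[] fieldLengths = new int[numOfFields];
        for (int i = 0; i < numOfFields; i++) {
            short nameLength = in.readShort(); // placeholder: field-name length
            in.skipBytes(nameLength);          // skip the field name itself
            fieldLengths[i] = in.readShort();  // placeholder: field-data length
        }
        return fieldLengths;
    }
}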
Vlad
 
Bharat Ruparel
Ranch Hand
Posts: 493
Hello Bill,
Nice going!
To read records, you need to compute the record offset and store it in a long. I have a singleton class called DataSchema; this is the class that stores the record offset in a private long instance variable. You then expose a method such as getRecordOffset() so that you can read this value whenever you need it.
Next, make sure that you read each record and its deleted flag as a unit. That is, position your file pointer, using the seek method and the computed record offset, right at the point where you would read the record status byte. Then, using the header information that you read into the DataSchema class for the fields (mainly the field lengths), read in the fixed-length String fields one by one in the order defined in the header. Try it; if you still have problems, let us know.
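In rough code, the idea is something like this (a sketch only; in my solution the offset and field lengths come from the DataSchema singleton, but here they are plain parameters):

import java.io.IOException;
import java.io.RandomAccessFile;

public class RecordReader {
    // Reads the status byte plus the fixed-length fields of one record as a unit.
    static String[] readRecord(RandomAccessFile raf, long recordZeroOffset,
            int recordLength, int[] fieldLengths, int recNo) throws IOException {
        raf.seek(recordZeroOffset + (long) recNo * recordLength);
        byte statusFlag = raf.readByte();  // the valid/deleted indicator comes first
        String[] fields = new String[fieldLengths.length];
        for (int i = 0; i < fieldLengths.length; i++) {
            byte[] buf = new byte[fieldLengths[i]];
            raf.readFully(buf);
            fields[i] = new String(buf, "US-ASCII"); // fixed-length string data
        }
        return fields; // check statusFlag first if you only want live records
    }
}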
Regards.
Bharat
 
Bill Robertson
Ranch Hand
Posts: 234
But I don't understand how this applies to updating (or reading) a single record within the file?
 
Bill Robertson
Ranch Hand
Posts: 234
OK, let me get this straight before I create the method. For writing, not reading, I do have to keep track of the offsets and lengths, take into account the valid (or deleted) record indicator, and then use seek to position myself in the file to perform the write. Is this correct?
I am assuming that for each delete/update/insert I only want to be working with one record, so obviously I do not want to write the entire file again. Just the record at hand?
 
Bharat Ruparel
Ranch Hand
Posts: 493
Hello Bill,
You wrote:


OK, let me get this straight before I create the method. For writing, not reading, I do have to keep track of the offsets and lengths, take into account the valid (or deleted) record indicator, and then use seek to position myself in the file to perform the write. Is this correct?


If you read up on the JDK documentation supplied by Sun for RandomAccessFile, you will see that it has the concept of a file pointer, which advances as you read data from the file and as you write data to it. The file pointer can also be positioned pretty much anywhere in the file. Therefore, in order to read or write data, you need to be positioned correctly: specifically, at the first byte you want to read or write. Now, you will not want to overwrite the header data. That is where the concept of the record offset comes in.
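In code, the write side of that looks something like this (a sketch; the flag value and parameter names are illustrative, not from the assignment):

import java.io.IOException;
import java.io.RandomAccessFile;

public class RecordWriter {
    // Positions the file pointer past the header and overwrites one record in place.
    static void writeRecord(RandomAccessFile raf, long recordZeroOffset,
            int recordLength, int recNo, byte statusFlag, byte[] fieldData)
            throws IOException {
        raf.seek(recordZeroOffset + (long) recNo * recordLength); // never touches the header
        raf.writeByte(statusFlag); // valid/deleted indicator first
        raf.write(fieldData);      // then the fixed-length field bytes
    }
}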


I am assuming that for each delete/update/insert I only want to be working with one record, so obviously I do not want to write the entire file again. Just the record at hand?


That is what I am doing. Others might differ.
Regards.
Bharat
 
Niall ORiordan
Greenhorn
Posts: 15
Hi all,
I can see why you guys prefer RandomAccessFile to the DataOutputStream and DataInputStream classes for file I/O. However, if you are using RandomAccessFile, I reckon you are making the assumption that you are changing the actual database file in real time; in other words, you are reading from and writing to the database with EVERY operation on it. I looked at it from a different angle and read the database into an internal representation, so there is no need to touch the file every time you want to perform an operation on the database....you just do everything against the internal representation. I then commit the database to file after every update/delete operation in order to provide a kind of transaction system....
What do you think?
Do you think the RandomAccessFile solution is a 'cleaner' solution?
 
author and jackaroo
Posts: 12200
Hi Niall
There are quite a few people here who are using some form of cache mechanism for their assignments.
As has been said many times before: "there is no one right answer for the issues in this assignment". So if you feel that DataOutputStream works well for you and is easy enough to understand and maintain, then you should be fine.
Regards, Andrew
 
Ranch Hand
Posts: 81

How do I determine this other than hardcoding the lengths of the header and data fields (32, 64, 64, 6, 8, 8) in my seek?

This is the way I did my read. Will this get me in trouble when it comes to passing the exam?
 
Ranch Hand
Posts: 308
Niall,
I did compare DataInputStream with RandomAccessFile. This is what I concluded.

RandomAccessFile provides a seek method, so you can traverse the file easily. With DataInputStream, on the other hand, you need to load everything into a cache, as you are doing. That is OK for small files and for the assignment.

I was a bit concerned about the synchronization between the cache and the file itself. What if the JVM goes down unexpectedly? Should we lose data? To avoid that, we would try to write to the file before updating the cache, which makes the persistence mechanism less portable. For my assignment, the requirements said that the application will soon change.

My selection was the random access file. Best of luck.
 
Anton Golovin
Ranch Hand
Posts: 531
I use RandomAccessFile, cache the records at startup, and have code in place to guarantee that a record is updated in the data file before it is updated in the cache. The caching results in quicker finds and reads; it also gives contiguous writes to the data file on every update, and contiguous reads from the data file at startup, when the cache is built. So the benefits are numerous... The drawback is the footprint in RAM. However, if you would rather keep it all I/O, performance does not seem to be a grading issue, as some forum participants have stated.
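As a sketch of that file-first ordering (names are made up, not from the assignment):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;
import java.util.Map;

public class RecordCache {
    private final RandomAccessFile raf;
    private final Map<Integer, byte[]> cache = new HashMap<Integer, byte[]>();
    private final long recordZeroOffset; // from the header
    private final int recordLength;      // flag byte plus field bytes

    public RecordCache(RandomAccessFile raf, long recordZeroOffset, int recordLength) {
        this.raf = raf;
        this.recordZeroOffset = recordZeroOffset;
        this.recordLength = recordLength;
    }

    // File first, cache second: if the JVM dies between the two calls, the file
    // already holds the new data and the stale cache dies with the JVM.
    public synchronized void update(int recNo, byte[] record) throws IOException {
        raf.seek(recordZeroOffset + (long) recNo * recordLength);
        raf.write(record);
        cache.put(recNo, record);
    }

    public synchronized byte[] read(int recNo) {
        return cache.get(recNo); // all records were cached at startup
    }
}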
[ October 26, 2004: Message edited by: Anton Golovin ]
 