RandomAccessFile problem during modification

 
Ranch Hand
Posts: 335
I have created a file:

RandomAccessFile f = new RandomAccessFile("hello.maptxt", "rw");
f.writeBytes("A1?;D123\n");
f.writeBytes("DEF;ABC\n");
f.writeBytes("B*;EST\n");
f.writeBytes("C?;D13512332\n");
f.close();


Now I want to replace the record DEF;ABC with ABC;DEF:

RandomAccessFile f = new RandomAccessFile("hello.maptxt", "rw");

while (true) {
    long begin = f.getFilePointer();
    String str = f.readLine();
    if (str == null) {          // must check for end-of-file before using str
        break;
    }
    if (str.equals("DEF;ABC")) {
        f.seek(begin);
        f.writeBytes("ABC;DEF\n");
        break;
    }
    System.out.println(str);
}
f.close();
This works fine, but whenever the number of characters differs, the next record gets overwritten.

For example, if I want to change ABC EF to ABC EFG, the following record gets affected.

What should I do?
 
Ranch Hand
Posts: 1970
1
There's no operation for replacing different-sized data in a RandomAccessFile, so you have to do it yourself. If the data you are replacing is a different length from the new data, you must move the rest of the file contents appropriately. If the file is known to be small, this is pretty easy. If it could be big, you need to take plenty of care.

In general, it is bad to have a data file structure that requires you to do this type of thing. If you have the opportunity to change the file format, you probably should do so. You could consider allowing enough space for any likely value, and filling unused spaces with some special byte. Or you could go for a more advanced format, using indirection within the file.

On a minor point, you have a bug in your code regarding line endings. You are writing the line ending as '\n', and reading using readLine(). These two are not guaranteed to use the same line endings (I guess they match on the platform you currently use, if your program "works"). In addition, be aware that writeBytes() and readLine() convert between chars and bytes by simply truncating each character to its low eight bits, which is only safe for ASCII data.
[ September 22, 2006: Message edited by: Peter Chase ]
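For a file as small as the four-record example above, the simplest fix is not to patch in place at all: read every line, replace the one you want, and rewrite the whole file, so the following records shift naturally. A minimal sketch (the class and method names are my own, not from the thread):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ReplaceLine {

    // Replace the first record equal to oldLine with newLine by
    // rewriting the whole file. The entire content is held in memory,
    // so this only suits small files.
    static void replaceLine(Path file, String oldLine, String newLine) throws IOException {
        List<String> lines = new ArrayList<>(
                Files.readAllLines(file, StandardCharsets.US_ASCII));
        int i = lines.indexOf(oldLine);
        if (i >= 0) {
            lines.set(i, newLine);   // the record may shrink or grow freely
            Files.write(file, lines, StandardCharsets.US_ASCII);
        }
    }
}
```

Because the file is written end to end, no record boundary can be corrupted when the replacement is a different length.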
 
Bartender
Posts: 9626
16
Mac OS X Linux Windows
JavaRanch IO FAQ: Edit An Existing File
 
Santana Iyer
Ranch Hand
Posts: 335
Thanks Peter and Joe.

My requirement is that I should be able to write, read and modify file contents, and the file will contain around 100,000 records.

Creating a temp file does not seem like a good solution to me.

What do you suggest?
[ September 22, 2006: Message edited by: Santana Iyer ]
 
Author and all-around good cowpoke
Posts: 13078
6

Creating temp file does not seem to be good solution to me.


I guarantee it will be easier than coming up with an "in-place" editing scheme that can handle unequal record lengths. Furthermore, note that if in-place editing fails for any reason, the original data file will be corrupted.

If you absolutely have to pursue in-place editing of files too big to hold in memory, look at the way word processors handle this sort of thing.
Bill
 
Santana Iyer
Ranch Hand
Posts: 335
Thank you all for the suggestions.
 
Santana Iyer
Ranch Hand
Posts: 335
If I fix the length of every record at, say, 30 bytes (padding with whitespace so that every record is exactly 30 bytes), can I then modify the same file in place?

I ask because a file can hold up to 2 million records.

What is your suggestion?
 
Joe Ess
Bartender
Posts: 9626
16
Mac OS X Linux Windows
If you can guarantee that records will not exceed a fixed size, RandomAccessFile will work fine. Of course, you have to consider what to do about inserting, deleting and even finding records in a file that big. At some point you are doing a lot of work building a low-level database and less work on your actual problem. At that point it's easier to use an in-process database like Berkeley DB or, for larger applications, a full-blown RDBMS.
[ September 27, 2006: Message edited by: Joe Ess ]
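The fixed-size-record idea can be sketched like this; the 30-byte record length follows the earlier post, while the class, method names and space-padding scheme are my own assumptions:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

public class FixedRecords {

    static final int RECORD_SIZE = 30;   // assumed: 29 data bytes + '\n'

    // Pad a record out to exactly RECORD_SIZE bytes: space-filled,
    // newline-terminated. Rejects records that would not fit.
    static byte[] pad(String record) {
        if (record.length() > RECORD_SIZE - 1) {
            throw new IllegalArgumentException("record too long: " + record);
        }
        StringBuilder sb = new StringBuilder(record);
        while (sb.length() < RECORD_SIZE - 1) {
            sb.append(' ');
        }
        sb.append('\n');
        return sb.toString().getBytes(StandardCharsets.US_ASCII);
    }

    // Overwrite record number n in place. Safe because every record
    // occupies exactly RECORD_SIZE bytes, so neighbours never move.
    static void writeRecord(RandomAccessFile f, int n, String record) throws IOException {
        f.seek((long) n * RECORD_SIZE);
        f.write(pad(record));
    }

    // Read record number n and strip the padding again.
    static String readRecord(RandomAccessFile f, int n) throws IOException {
        byte[] buf = new byte[RECORD_SIZE];
        f.seek((long) n * RECORD_SIZE);
        f.readFully(buf);
        return new String(buf, StandardCharsets.US_ASCII).trim();
    }
}
```

With fixed-length records, seek((long) n * RECORD_SIZE) jumps straight to record n without scanning the file, which is what makes this workable for millions of records.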
 
Santana Iyer
Ranch Hand
Posts: 335
Thanks, yes, you are right about using a database; we suggested that as well, but the requirement from the other side is to use a file and no database.

Regarding deletion, I am thinking of writing to a temp file and renaming it.
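The temp-file-and-rename approach can stream the file one record at a time, so memory use stays flat even for two million records, and the original file is untouched until the final rename succeeds. A sketch (the class, method name and ASCII-records assumption are mine):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TempRewrite {

    // Copy the file record by record into a temp file in the same
    // directory, dropping every record equal to keyToDelete, then
    // rename the temp file over the original.
    static void deleteMatching(Path file, String keyToDelete) throws IOException {
        Path dir = file.toAbsolutePath().getParent();
        Path tmp = Files.createTempFile(dir, "rewrite", ".tmp");
        try (BufferedReader in = Files.newBufferedReader(file, StandardCharsets.US_ASCII);
             BufferedWriter out = Files.newBufferedWriter(tmp, StandardCharsets.US_ASCII)) {
            String line;
            while ((line = in.readLine()) != null) {
                if (!line.equals(keyToDelete)) {
                    out.write(line);
                    out.newLine();
                }
            }
        }
        // The original stays intact until this point, so a crash
        // mid-copy leaves the data file uncorrupted.
        Files.move(tmp, file, StandardCopyOption.REPLACE_EXISTING);
    }
}
```

Creating the temp file in the same directory as the original matters: a rename within one filesystem is cheap, whereas moving across filesystems degrades to another full copy.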
 
Bartender
Posts: 612
7
Mac OS X Python
I guess the question to ask is why the requirement is for a file and not a DB; Berkeley DB is quite good, and supports transactions, rollbacks and multiple types of indexing.

What about packages that handle indexed files, such as B-trees or ISAM/VSAM (I guess this could be called JSAM)?

And note: for such large files, one usually uses a delete flag; you can then compact the file after it is backed up.
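The delete-flag idea can be sketched under the fixed-length-record assumption from earlier in the thread; the 30-byte record size and the '*' flag byte are my own choices, not from the thread. Deleting becomes a one-byte overwrite, and a later compaction pass skips flagged records:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class TombstoneDelete {

    static final int RECORD_SIZE = 30;      // assumed fixed record length
    static final byte DELETED = (byte) '*'; // assumed flag byte in column 0

    // "Delete" record n by stamping its first byte. No bytes move,
    // so this is O(1) regardless of file size.
    static void markDeleted(RandomAccessFile f, int n) throws IOException {
        f.seek((long) n * RECORD_SIZE);
        f.write(DELETED);
    }

    // Readers check the flag and skip tombstoned records.
    static boolean isDeleted(RandomAccessFile f, int n) throws IOException {
        f.seek((long) n * RECORD_SIZE);
        return f.read() == DELETED;
    }
}
```

The flagged records still occupy space, which is why a periodic compaction (for example, the temp-file rewrite discussed above, run after a backup) is the usual companion to this scheme.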
 