
Managing concurrent access to the DB file

 
Itapajé Takeguma
Ranch Hand
Posts: 41
I remember someone saying that, to support clients' access to the database, they were opening a RandomAccessFile for every client (I couldn't find that topic, so I'm opening this one).

I thought that to handle concurrent access I just need to create a kind of pool that wraps the RandomAccessFiles. When a client needs one, he asks the pool for it; when he no longer needs it, he puts it back.
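One way to sketch that pool is with a blocking queue of open handles; the `RafPool` name, the pool size, and the `BlockingQueue` choice are my own assumptions, not anything from the assignment:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical pool: clients borrow a RandomAccessFile, use it, and return it.
public class RafPool {
    private final BlockingQueue<RandomAccessFile> pool;

    public RafPool(File dbFile, int size) throws IOException {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new RandomAccessFile(dbFile, "rw"));
        }
    }

    // Blocks until a file handle is available.
    public RandomAccessFile acquire() throws InterruptedException {
        return pool.take();
    }

    // The client returns the handle when done with it.
    public void release(RandomAccessFile raf) {
        pool.add(raf);
    }
}
```

Each client then holds its own handle (and file pointer) for the duration of one operation, so clients never trip over each other's seek position.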

The biggest problem with this solution is creating a new record when there are no empty slots: the database file has to grow, and space has to be reserved in the file for that record.
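A common way around that is to serialize record creation, so reserving the new slot at the end of the file and writing the record happen as one atomic step. A minimal sketch, assuming fixed-length records of a made-up RECORD_LENGTH (the class name is illustrative):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: appends are serialized so two clients can't claim the same new slot.
public class RecordAppender {
    private static final int RECORD_LENGTH = 100; // assumed fixed record size
    private final RandomAccessFile raf;

    public RecordAppender(RandomAccessFile raf) {
        this.raf = raf;
    }

    // Reserves the next slot and writes the record in one synchronized step.
    // Assumes record.length == RECORD_LENGTH.
    public synchronized long createRecord(byte[] record) throws IOException {
        long offset = raf.length();          // next free slot is at end of file
        raf.seek(offset);
        raf.write(record, 0, RECORD_LENGTH); // file grows by exactly one record
        return offset / RECORD_LENGTH;       // record number of the new record
    }
}
```

Because the length check and the write are inside the same synchronized method, two concurrent creates can never be handed the same offset.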

I'd like some comments from you again,
thanks,
Itapajé.
 
Anton Golovin
Ranch Hand
Posts: 476
Originally posted by Itapajé Takeguma:
I remember someone saying that, to support clients' access to the database, they were opening a RandomAccessFile for every client (I couldn't find that topic, so I'm opening this one).

I thought that to handle concurrent access I just need to create a kind of pool that wraps the RandomAccessFiles. When a client needs one, he asks the pool for it; when he no longer needs it, he puts it back.

The biggest problem with this solution is creating a new record when there are no empty slots: the database file has to grow, and space has to be reserved in the file for that record.

I'd like some comments from you again,
thanks,
Itapajé.



My project will have only one instance of the Data class and of all the classes below Data. My DBIOManager opens one RandomAccessFile per method call (read, write, readAll). I am caching the database for speed: all records are read at startup, so at runtime I only need to read and write individual records.
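For what it's worth, the "one RandomAccessFile per method call" idea might look roughly like this; the class and method names are illustrative, and fixed-length records are assumed:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch: each read opens its own short-lived handle, so concurrent calls
// never share a file pointer.
public class DBIOManagerSketch {
    private static final int RECORD_LENGTH = 100; // assumed fixed record size
    private final File dbFile;

    public DBIOManagerSketch(File dbFile) {
        this.dbFile = dbFile;
    }

    public byte[] read(long recNo) throws IOException {
        byte[] record = new byte[RECORD_LENGTH];
        try (RandomAccessFile raf = new RandomAccessFile(dbFile, "r")) {
            raf.seek(recNo * RECORD_LENGTH); // jump to the requested record
            raf.readFully(record);           // read exactly one record
        }
        return record;
    }
}
```

The trade-off is the cost of opening and closing a handle on every call, which is one reason caching the records at startup is attractive.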

I don't see that they would be expecting concurrent access to the file, because they are asking for record locking. If you allow concurrent access to the file, it will get corrupted, so you would have to lock the file while writing to it. In that sense it's like record locking: when you lock the whole file, you don't need to implement higher-level record locking. Therefore I think they expect you to cache the database.
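Locking the whole file for writing could be as simple as funnelling every write through one monitor. A sketch under the same fixed-length-record assumption (the LockedWriter name is mine):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of file-level locking: all writes go through one monitor, so two
// clients can never interleave a seek and a write on the shared handle.
public class LockedWriter {
    private static final int RECORD_LENGTH = 100; // assumed fixed record size
    private final RandomAccessFile raf;

    public LockedWriter(File dbFile) throws IOException {
        this.raf = new RandomAccessFile(dbFile, "rw");
    }

    // Assumes record.length == RECORD_LENGTH.
    public void write(long recNo, byte[] record) throws IOException {
        synchronized (raf) { // whole-file lock for the duration of the write
            raf.seek(recNo * RECORD_LENGTH);
            raf.write(record, 0, RECORD_LENGTH);
        }
    }
}
```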
 
Itapajé Takeguma
Ranch Hand
Posts: 41
Some people say it isn't a good idea to load your whole database into memory (at least when you have a very big database). Even though caching is faster, I chose to implement the operations directly against the database file.



I don't see that they would be expecting concurrent access to the file because they are asking for record locking.


I think there's no problem with concurrent access. When a client asks to lock a record, the other records shouldn't be locked (I mean, they should remain accessible).
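Record-level locking like that is often done with a set of locked record numbers plus wait/notify, so holding a lock on one record leaves all the others accessible. A rough sketch (the LockManager name is my own):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: lock one record without blocking access to the others.
public class LockManager {
    private final Set<Long> lockedRecords = new HashSet<>();

    public synchronized void lock(long recNo) throws InterruptedException {
        while (lockedRecords.contains(recNo)) {
            wait(); // this record is held by another client; others stay free
        }
        lockedRecords.add(recNo);
    }

    public synchronized void unlock(long recNo) {
        lockedRecords.remove(recNo);
        notifyAll(); // wake any client waiting on a record
    }
}
```

A client waiting on record 5 sleeps in `wait()`, while clients working on other records lock and unlock without blocking.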

If you have concurrent access to the file, it will corrupt the file


I ran some tests, and when you open the same file more than once and write through one of the instances, everything works fine. I think the problem only appears when two writers target the same position in the file.
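A quick test along those lines: two handles on the same file, each with its own file pointer, writing to different offsets. Both writes survive, because they don't overlap; it's overlapping writes to the same position that are the hazard.

```java
import java.io.File;
import java.io.RandomAccessFile;

public class TwoHandlesDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("demo", ".dat");

        // Two independent handles on the same file.
        RandomAccessFile a = new RandomAccessFile(f, "rw");
        RandomAccessFile b = new RandomAccessFile(f, "rw");

        a.seek(0);
        a.write('A');  // handle a writes at offset 0
        b.seek(10);
        b.write('B');  // handle b writes at offset 10

        // Read back through a third handle: both bytes are there.
        RandomAccessFile check = new RandomAccessFile(f, "r");
        check.seek(0);
        System.out.println((char) check.read()); // A
        check.seek(10);
        System.out.println((char) check.read()); // B
    }
}
```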

thanks for your reply,
Itapajé Takeguma

[Andrew: I have edited your post to put the code between [quote] and [/quote] UBB tags rather than [code] and [/code] tags.]
[ August 19, 2004: Message edited by: Andrew Monkhouse ]
 