Hi Seid -
Let's say your airline has 15 planes, and they each make 5 flights a day. There are 200 seats on each plane. For any given day, assuming you don't overbook, that's a maximum of (15 * 5 * 200) or 15,000 possible files.
Now let's say like most airlines you take reservations 180 days in advance. You also decide that a record (file) should exist for each seat whether it's reserved or not, using a Null object strategy so a seat that's empty isn't confused with a seat that doesn't exist.
So that's 15,000 * 180, or 2,700,000 records, or individual files you'd have to maintain that are "fresh" at any given time. You'd also archive 15,000 files each day that passes and generate 15,000 more for each new day you're taking reservations.
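If you want to sanity-check the arithmetic, here's a quick back-of-envelope script using the hypothetical figures from the example:

```python
# Back-of-envelope numbers for the one-file-per-seat scheme.
# All figures are the hypothetical ones from the example above.
PLANES = 15
FLIGHTS_PER_PLANE = 5
SEATS_PER_PLANE = 200
BOOKING_WINDOW_DAYS = 180

seats_per_day = PLANES * FLIGHTS_PER_PLANE * SEATS_PER_PLANE
live_records = seats_per_day * BOOKING_WINDOW_DAYS

print(seats_per_day)   # 15000 seat records per flying day
print(live_records)    # 2700000 "fresh" records at any given time
```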
Consider what happens when your customer wants to change seats, or wants a dozen seats, all together for a group. Won't be hard to do, necessarily, but you're going to have to open one file every time you want to look at one seat.
In short, there's a darn good reason for using flat files or tables to store records, and leaving issues of granularity to the memory model of your data. A file system in which every little thing is its own object would bring most systems to their knees. That's why it's worth the effort for most programmers to use a much smaller set of files, or even one file in some cases, and only treat the contents as objects while they're in memory.
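To make that concrete, here's a minimal sketch of the flat-file approach: one file holds many seat records, and they only become objects once read into memory. The record layout and names are invented for illustration.

```python
# One flat file holds every seat record for a flight; records are
# only turned into objects while in memory. Layout is hypothetical:
# each line is "flight,seat,passenger".
from dataclasses import dataclass

@dataclass
class Seat:
    flight: str
    number: str
    passenger: str  # empty string = Null object: seat exists, unreserved

def parse_seats(flat_text: str) -> list[Seat]:
    """Parse one flat file's worth of text into in-memory Seat objects."""
    seats = []
    for line in flat_text.strip().splitlines():
        flight, number, passenger = line.split(",")
        seats.append(Seat(flight, number, passenger))
    return seats

data = "UA101,12A,Smith\nUA101,12B,\nUA101,12C,Jones\n"
seats = parse_seats(data)
empty = [s for s in seats if not s.passenger]
print(len(seats), len(empty))  # 3 seats on file, 1 unreserved
```

Note the empty-passenger record: the Null object strategy from above, so an unreserved seat is still distinguishable from a seat that doesn't exist.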
Locking files is slow and expensive on system resources. You're better off locking in-memory objects that represent file records.
Also, in general: the more locks you use for a given number of objects, the more memory you need and the more time you spend initializing and checking locks. The fewer locks you use for that data, the longer a thread might wait for access to an object.
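Here's a toy sketch of that granularity trade-off, coarse lock vs one lock per record (purely illustrative, names invented):

```python
# The lock-granularity trade-off in miniature: one coarse lock for the
# whole table vs one fine-grained lock per record.
import threading

records = [{"seat": i, "held": False} for i in range(1000)]

# Coarse: a single lock. Minimal memory and setup, but every thread
# updating any record queues behind the same lock.
table_lock = threading.Lock()

def reserve_coarse(i):
    with table_lock:            # whole table blocked for one update
        records[i]["held"] = True

# Fine: one lock per record. More memory and lock bookkeeping, but
# threads touching different records never contend.
row_locks = [threading.Lock() for _ in records]

def reserve_fine(i):
    with row_locks[i]:          # only this one record is blocked
        records[i]["held"] = True

reserve_coarse(0)
reserve_fine(1)
print(records[0]["held"], records[1]["held"])  # True True
```

Either way, the locks live on in-memory objects, not on files, which is the cheap side of the trade discussed above.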