Alexander Duenisch

Greenhorn
since May 29, 2006

Recent posts by Alexander Duenisch

@Edwin


I mean, you could still implement it as they require, but if you already guarantee a database level lock, then the record level lock is completely useless. It would be just a waste of time. Do you not think this may affect your score?


Writing code that doesn't handle I/O in a safe and correct manner might affect my score just as well. It seems like I'm caught between the devil and the deep blue sea :-(


@Frederic


We should use the lock mechanism for both update and delete (create is not really an issue). That is a good idea, but we would then end up with dirty reads.


In your READ method, you could check whether the record is locked and, if so, throw an exception to prevent dirty reads.
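A rough sketch of that check, assuming a set of locked record numbers is kept somewhere in the data class (all names below are illustrative, not part of the assignment's interface):

import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

// Sketch only: lockedRecords and readFromFile are assumed helpers.
public class Data {

    private final Set<Integer> lockedRecords = new HashSet<Integer>();

    public String[] read(int recNo) throws IOException {
        synchronized (lockedRecords) {
            if (lockedRecords.contains(Integer.valueOf(recNo))) {
                // refuse to hand out data that another client is currently modifying
                throw new IllegalStateException("Record " + recNo + " is locked");
            }
        }
        return readFromFile(recNo);
    }

    private String[] readFromFile(int recNo) throws IOException {
        // the actual RandomAccessFile handling is omitted in this sketch
        return new String[0];
    }
}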
[ May 03, 2008: Message edited by: Alexander Dünisch ]


Basically, I have also tried to come up with a solution where reading and writing take place at the record level, but as Alexander stated, this currently does not seem to be possible.


I didn't state that this was not possible. I just said I didn't have a clue and I would really appreciate it if someone could enlighten me.
@Edwin


Alexander, the way this lock approach works is not at record level, though.


Correct, this approach works at file level: the whole file is locked when the write lock is engaged (at least if all access to the file goes through the class that encapsulates the lock).


This means that if 5 clients asked to read records 1, 2, 3, 4 and 5, and a 6th client asked to write record 6 at that time, then this 6th client, wanting to write its record, would have to wait until the other 5 read operations have finished. Right?


Right.


Also, if three clients want to write different records, let's say 8, 9 and 10, the third client will have to wait until the other two have finished before it gets the write lock.


Also correct.


I hope I have not misunderstood you. At any rate, I have the impression that, unless you implement this at record level, this could be a performance issue. What do you think, Alexander?


Maybe so. That's what my initial question was all about: "Is it technically possible to have multiple threads write to different locations of the same file concurrently (meaning there is no risk of something being overwritten) without getting errors/exceptions?" Because if this weren't possible, then there would be no way around locking the entire file. This issue has to do with file system technology (reading/writing files) and the internal implementation of RandomAccessFile (which I do not know), not with Java programming in general. I still do not have a conclusive answer to that.

Aside from that, I'm not so sure that there actually is a considerable performance loss in the implementation above (using the ReadWriteLock).
Just consider the following:

In a single-CPU multithreaded environment, threads are given timeslices by the thread scheduler of the OS. A thread is running when it has been chosen by the scheduler, and it goes into a paused state when its timeslice has elapsed. So there actually is no real parallelism here. The work to be done by the threads is just chopped up into tiny pieces, and all the threads take turns handling those pieces. With locking you just make sure that a task (i.e. the code to execute) isn't split up but is run en bloc (you make the code in a locked section atomic).

One example to illustrate this. Let's assume I have two tasks to accomplish:
1. Cook a meal
2. Mow the lawn
With regard to our threading issue, there are two ways of handling this:
First, I could cook the meal from start to finish and do the mowing only after the cooking is done. Second, I could shuttle back and forth between the kitchen and the backyard and do a bit of mowing or a bit of cooking each time. The overall time it takes me to complete both tasks should be the same, regardless of which of the two alternatives I choose (let's assume I don't lose time on the way).

In addition, a thread that is waiting on a lock doesn't consume any timeslices (or CPU cycles, as the assignment calls them) but yields them to the running thread.

If I put all this together, I come to the conclusion that the total performance loss is not so great after all. The only overhead involved is for acquiring/releasing the lock.

Or am I completely mistaken?
[ May 02, 2008: Message edited by: Alexander Dünisch ]


With this design, you are simply preventing two threads from reading a record simultaneously, which is unnecessary.



No, with this implementation I'm preventing any other threads from reading or writing while one thread holds the write lock. An arbitrary number of threads can read at the same time; unlike the write lock, the read lock is not exclusive. For further reading, see the ReadWriteLock Javadoc (Java Platform SE 6).
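A tiny sketch of those shared/exclusive semantics (class and method names are illustrative):

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: illustrates the shared read lock and the exclusive write lock.
public class ReadWriteLockDemo {

    private static final ReadWriteLock LOCK = new ReentrantReadWriteLock();

    static void read(String client) {
        LOCK.readLock().lock();   // shared: any number of readers may hold it at once
        try {
            System.out.println(client + " is reading");
        } finally {
            LOCK.readLock().unlock();
        }
    }

    static void write(String client) {
        LOCK.writeLock().lock();  // exclusive: waits until all readers and writers are done
        try {
            System.out.println(client + " is writing");
        } finally {
            LOCK.writeLock().unlock();
        }
    }
}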


Now if the Data class or FileAccess is a singleton, then you will have a single RAF object, whether it is static or not.



Sure, but if this singleton instance is accessed by multiple threads concurrently and the RAF instances are created in the singleton's methods (like in the example above), then you will end up with multiple RAF instances. Just think about it for a minute.
To use the Observer/Observable pattern in this context, you would need to open a server socket on the client, so the server can propagate data changes back to all clients. Maybe this is not as bad an idea as it appears at first glance.

Another approach would be to use datagram sockets (UDP) for broadcasting changes to the clients. However, I'm not sure whether this is permitted by the rules of the assignment. My assignment paper doesn't explicitly ban ("must not use ...") the use of datagram sockets. But in his book, Andrew Monkhouse discourages the use of datagram sockets (2nd edition, page 202), arguing that "the requirements of the exam tend to be vague regarding this point".

However, the requirements of the exam are vague about almost everything, so I've taken the position that everything is permitted unless it's explicitly banned.
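For what it's worth, a minimal sketch of what such UDP broadcasting could look like (the port, broadcast address and message format are pure assumptions):

import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Sketch only: port, broadcast address and message format are assumptions.
public class ChangeBroadcaster {

    private static final int PORT = 4711;

    // Server side: broadcast a short "record changed" message to the local subnet.
    public void broadcastChange(int recNo) throws IOException {
        DatagramSocket socket = new DatagramSocket();
        try {
            socket.setBroadcast(true);
            byte[] payload = ("CHANGED:" + recNo).getBytes("UTF-8");
            DatagramPacket packet = new DatagramPacket(
                    payload, payload.length,
                    InetAddress.getByName("255.255.255.255"), PORT);
            socket.send(packet);
        } finally {
            socket.close();
        }
    }

    // Client side: block until a notification arrives and return its text.
    public String receiveChange() throws IOException {
        DatagramSocket socket = new DatagramSocket(PORT);
        try {
            byte[] buffer = new byte[256];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);
            return new String(packet.getData(), 0, packet.getLength(), "UTF-8");
        } finally {
            socket.close();
        }
    }
}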
[ May 02, 2008: Message edited by: Alexander Dünisch ]
I even thought about transferring the dynamically obtained schema information (including the field names) to the client, so the client would also be capable of handling added/changed columns/fields correctly.
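A minimal sketch of what such transferable schema information could look like (class and field names are illustrative):

import java.io.Serializable;

// Sketch only: a simple value object the server could send to the client
// so the client can render whatever columns the database file defines.
public class FieldInfo implements Serializable {

    private static final long serialVersionUID = 1L;

    private final String name;  // field name as read from the file header
    private final int length;   // field length in bytes

    public FieldInfo(String name, int length) {
        this.name = name;
        this.length = length;
    }

    public String getName() {
        return name;
    }

    public int getLength() {
        return length;
    }
}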


I didn't try copying the data to every GUI and updating it there. I created the Data object as a singleton and use that to update the database. The server is the only object allowed to update the database.



I did it exactly the same way.


The GUI just displays the most recent view of the records from the last communication with the server.



That's exactly the problem here.
Clients don't get notified when one client changes a record on the server. This means that some of them might have obsolete data in their GUI, depending on whether their last communication with the server happened before or after the change. In the end, a client might book a record (i.e. a hotel room) without knowing that it has already been booked by another client.

The assignment paper doesn't require me to implement any form of client authentication, so one could deduce that it is also acceptable to have clients overwrite each other's bookings. But somehow that doesn't seem right.

For that reason I followed Edwin's recommendation to throw a ConcurrentModificationException if the described scenario occurs. To me, this seems like the best and simplest solution.
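Roughly, the check could look like this (all names below are illustrative helpers, not the assignment's interface; the owner-field index is an assumption):

import java.io.IOException;
import java.util.Arrays;
import java.util.ConcurrentModificationException;

// Sketch only: bookRoom and its helpers stand in for the real locking and file access code.
public class BookingService {

    private static final int OWNER_FIELD = 5; // assumed index of the "owner" column

    public void bookRoom(int recNo, String customerId) throws IOException {
        lock(recNo);
        try {
            String[] record = readRecord(recNo);
            if (record[OWNER_FIELD].trim().length() > 0) {
                // someone booked this record since the client last refreshed its view
                throw new ConcurrentModificationException(
                        "Record " + recNo + " has already been booked");
            }
            record[OWNER_FIELD] = customerId;
            updateRecord(recNo, record);
        } finally {
            unlock(recNo);
        }
    }

    // Stand-ins for the real implementation.
    private void lock(int recNo) { }
    private void unlock(int recNo) { }
    private void updateRecord(int recNo, String[] record) throws IOException { }

    private String[] readRecord(int recNo) throws IOException {
        String[] record = new String[OWNER_FIELD + 1];
        Arrays.fill(record, ""); // returns an unbooked record in this sketch
        return record;
    }
}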
[ May 01, 2008: Message edited by: Alexander Dünisch ]
No, what I meant is whether it is technically possible to have multiple parallel write operations on the same file, or whether this will lead to IOExceptions.
I simply do not know how a RandomAccessFile (or many RandomAccessFiles in this case) behaves if used concurrently. To be on the safe side, I have locked up everything, because, as you have said yourself, a thread reading a record while another thread is simultaneously updating it would be a dirty read. I fully agree with that. I have done it roughly like the following:
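A minimal sketch of this kind of implementation, assuming a ReentrantReadWriteLock guarding all file access and a RandomAccessFile created locally in each method (class, field and method names are illustrative):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch only: names are illustrative, not the assignment's API.
public class FileAccess {

    private final ReadWriteLock fileLock = new ReentrantReadWriteLock();
    private final String databasePath;

    public FileAccess(String databasePath) {
        this.databasePath = databasePath;
    }

    public byte[] read(long offset, int length) throws IOException {
        fileLock.readLock().lock();   // shared: many readers may hold this at once
        try {
            RandomAccessFile raf = new RandomAccessFile(databasePath, "r");
            try {
                raf.seek(offset);
                byte[] buffer = new byte[length];
                raf.readFully(buffer);
                return buffer;
            } finally {
                raf.close();
            }
        } finally {
            fileLock.readLock().unlock();
        }
    }

    public void write(long offset, byte[] data) throws IOException {
        fileLock.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            RandomAccessFile raf = new RandomAccessFile(databasePath, "rw");
            try {
                raf.seek(offset);
                raf.write(data);
            } finally {
                raf.close();
            }
        } finally {
            fileLock.writeLock().unlock();
        }
    }
}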



This certainly looks a bit awkward, but unless I'm missing something, the lock is necessary. What do you think?
[ May 01, 2008: Message edited by: Alexander Dünisch ]
The question remains whether it is possible to have multiple parallel I/O operations on the same file. Let's assume Thread 1 uses RAF A and Thread 2 uses RAF B to write to different locations of the same file at the same time (we can guarantee that Thread 1 doesn't overwrite Thread 2's data). If this weren't possible, then we would have to lock the whole file anyway. I haven't found a conclusive answer to this issue yet.
[ April 30, 2008: Message edited by: Alexander Dünisch ]
Hi ranchers,

I have written a piece of code like the following:
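Roughly, a method along these lines (the surrounding class and the database file name are illustrative):

import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: class name and file name are illustrative.
public class RecordWriter {

    private final String databasePath = "database.db"; // assumed file name

    public void write(long location, byte[] stuff) throws IOException {
        // the RandomAccessFile is local to the method, so each calling thread
        // gets its own instance and its own file pointer
        RandomAccessFile raf = new RandomAccessFile(databasePath, "rw");
        try {
            raf.seek(location); // position this thread's own file pointer
            raf.write(stuff);   // write this thread's bytes
        } finally {
            raf.close();
        }
    }
}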




This method is called by multiple threads. Let's assume it can be guaranteed, by the choice of value for location and the length of the array 'stuff', that one thread doesn't overwrite the bytes written by another thread.

Because the RandomAccessFile is local to the method, each thread creates its own instance and uses the associated file pointer.

Can this be done safely, or do I have to put the call to write in a synchronized block? In other words: can two or more threads write concurrently to different locations of the same file, using different RandomAccessFile objects and thus different file pointers, if there is no risk of one thread overwriting another's data?

Thanks in advance for your answers.

Alex D.
Hi ranchers

I'm doing URLyBird (v. 1.1.3). Currently I'm having a hard time figuring out how to implement the client's business logic. The assignment paper requires my UI ...
- "to allow the user to search for all the records, or for records where
the name and/or location fields exactly match values specified by the
user"
- "to allow the user to book a selected record, updating the database
accordingly"

Now I'm wondering to what extent I am supposed to deal with the particular issues of concurrent access that are not already covered by record locking. For example:

1. client A searches for all the records
2. client B books record no. 11
3. client A tries to book the same record (not knowing that B already did)

How do I deal with this, if at all? The assignment doesn't specifically make this a requirement. However, I'm not certain whether it isn't a general prerequisite anyway.

One idea I had was to make the client regularly poll the server for recent changes. Another approach I came up with is to have a member variable called 'lastModified' in my Record class, which I can check in order to throw an exception if a client tries to book a record that was modified after that client retrieved it.
Both solutions appear to be a little excessive.
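A tiny sketch of the 'lastModified' idea (the Record class shown here is illustrative, and the exception type is just an example):

// Sketch only: illustrative, not the assignment's interface.
public class Record {

    private long lastModified; // updated to System.currentTimeMillis() on every change

    public void touch() {
        lastModified = System.currentTimeMillis();
    }

    // Throws if the record has changed since the client retrieved it.
    public void checkNotModifiedSince(long retrievedAt) {
        if (lastModified > retrievedAt) {
            throw new IllegalStateException(
                    "Record was modified after the client retrieved it");
        }
    }
}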

What do you think?
[ April 28, 2008: Message edited by: Alexander Dünisch ]
@John Park

I figured the same as you did. Having a single RandomAccessFile and synchronizing on it (and you have to synchronize due to the already mentioned file pointer issue) solves the record locking problem along the way. That's why I decided against a single RAF. Instead, each method that accesses the database creates its own RAF object, and only access to the physical file is synchronized.
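A small sketch of that approach, with a shared monitor guarding the physical file while every call creates its own RandomAccessFile (class, field and method names are illustrative):

import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch only: names are illustrative.
public class Data {

    private static final Object FILE_ACCESS_LOCK = new Object();
    private final String databasePath;

    public Data(String databasePath) {
        this.databasePath = databasePath;
    }

    public byte[] readBytes(long offset, int length) throws IOException {
        synchronized (FILE_ACCESS_LOCK) { // only the physical file access is serialized
            RandomAccessFile raf = new RandomAccessFile(databasePath, "r");
            try {
                raf.seek(offset);
                byte[] buffer = new byte[length];
                raf.readFully(buffer);
                return buffer;
            } finally {
                raf.close();
            }
        }
    }
}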

What do you think?
You either hardcode it or you include it in your suncertify.properties configuration file.
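For example, reading a value from suncertify.properties could look roughly like this (the property key is just an example, not prescribed by the assignment):

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch only: the property key "db.path" is an example.
public class Config {

    public static String getDatabasePath() throws IOException {
        Properties props = new Properties();
        FileInputStream in = new FileInputStream("suncertify.properties");
        try {
            props.load(in);
        } finally {
            in.close();
        }
        return props.getProperty("db.path");
    }
}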
Basically, you're right.
But the interface specified in the assignment papers makes no assumptions about the method calling order. It is a general contract that has to account for the possibility of illegitimate use. Thus, it identifies clients in order to cope with "faulty" applications that do not call in the lock() -> update() -> unlock() order.
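A small sketch of how client identification can guard against calls made out of the lock() -> update() -> unlock() order (the cookie-based scheme and all names here are illustrative, not the assignment's API):

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Sketch only: a lock manager that remembers which client holds each record lock.
public class LockManager {

    private final Map<Integer, Long> lockOwners = new HashMap<Integer, Long>();
    private final Random random = new Random();

    // Blocks until the record is free, then returns a cookie identifying the caller.
    public synchronized long lock(int recNo) throws InterruptedException {
        while (lockOwners.containsKey(Integer.valueOf(recNo))) {
            wait();
        }
        long cookie = random.nextLong();
        lockOwners.put(Integer.valueOf(recNo), Long.valueOf(cookie));
        return cookie;
    }

    // Rejects callers that never locked the record (or present the wrong cookie).
    public synchronized void checkOwner(int recNo, long cookie) {
        Long owner = lockOwners.get(Integer.valueOf(recNo));
        if (owner == null || owner.longValue() != cookie) {
            throw new SecurityException("Record " + recNo + " is not locked by this client");
        }
    }

    public synchronized void unlock(int recNo, long cookie) {
        checkOwner(recNo, cookie);
        lockOwners.remove(Integer.valueOf(recNo));
        notifyAll();
    }
}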