This is from a 10-year-old book. The book says that reading a published object reference doesn't guarantee that the reading thread will see the most recent values of the data that make up the internals of the referenced object, which would mean the code above doesn't work. The reason I ask is that I'm not sure whether this is still accurate in Java 6. An expert here may be familiar with this and be able to give the right explanation. I think this is a good thing to know for anyone working toward Java certification, or for a newly certified person like me. Thanks.
Note: I tried to indent the code above, but it didn't work. I hope it's not too difficult to read.
First off, please UseCodeTags. People will be more willing to help you with your issues.
The code you posted has nothing to do with the internals of the published object: the method simply returns the object reference. The code indeed doesn't guarantee anything about the object's internals, but that doesn't mean it doesn't work. That's simply not in its job description.
A bit more context -- I am going to assume that the book you read talked about a technique called "double-checked locking" and how it doesn't work. So...
Let's define "work". This example is meant to ensure that only one instance of the class ever gets created. Assuming that the constructor is private and there is no other way to create the object, this technique works.
The "double checking" is an optimization. It is used to avoid synchronizing unless it has to -- with the theory being that synchronization is very expensive. So, the method checks to see if the instance is null before grabbing the lock, and then check again before instantiation. The theory being that once the object has been created, it will never need the lock again. Unfortunately, this optimization doesn't work.
Due to race conditions, it is still possible for two threads to acquire the lock in turn. And worse, it is still possible for the Java optimizer to see the double check and use code motion to get rid of the outer one.
There is also an argument that, since it is possible for a thread to return the object without ever grabbing the lock, that object may not be completely initialized before it is used. For this argument to hold, the method has to do extra initialization after the object has been constructed; with everything being initialized in the constructor, this argument does not apply to this example.
So yes, prior to Java 5 the optimization doesn't work, in the sense that there is no guarantee the lock won't still be grabbed after the instance has been created.
Anyway, with the release of Java 5, the memory model was changed. Volatile variables now behave differently, in that the optimizer has more restrictions on how it may move code around them. So yes, if the variable is volatile, it is now possible for the synchronization lock to never be grabbed again once the instance has been created.
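Under the Java 5 memory model, the fix is a one-word change to the sketch above (again, names are illustrative):

```java
// Double-checked locking that is correct on Java 5 and later:
// the volatile keyword makes the write of the reference a
// happens-before point, so a thread that sees a non-null instance
// also sees a fully constructed object.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // fast path: volatile read
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton(); // volatile write publishes safely
                }
            }
        }
        return instance;
    }
}
```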
There were also other changes that made synchronization much cheaper, so having many threads share a lock became much less of a concern. This was further improved in Java 6.
So, yes, as of Java 5 the optimization does work (if you add the volatile keyword), but it is not very useful, as synchronization is no longer much of a concern...
There's a good explanation of lazy initialisation via the double-checked locking idiom in Effective Java ...
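For comparison, the alternative that Effective Java recommends for lazy initialization of a static field, the lazy initialization holder class idiom, avoids volatile and explicit locking entirely by leaning on the JVM's guarantee that class initialization is lazy and thread-safe. A sketch, with illustrative names:

```java
// Lazy initialization holder class idiom: the nested class is not
// initialized until getInstance() first touches it, and the JVM
// guarantees that class initialization happens exactly once, safely.
public class Singleton {
    private Singleton() { }

    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE; // no locks, no volatile needed
    }
}
```

This gets the same lazy, single-instance behaviour with none of the subtlety that makes double-checked locking easy to get wrong.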
(Post JSR 133) volatile can be used to fix the issue, as can better use of static final fields under the new memory model (as covered in various books and forums). The explanation is best phrased in terms of "happens-before" ordering and "visibility", as there are actually several potential problems, e.g. read/write reordering, compiler optimisations, and memory barrier/fence issues. This kind of thing runs from the JVM through the OS all the way down to the CPU level. I recently had to do some work on an open source library where everything was fine until they moved it to a Linux box, and then these problems crept out.
In terms of whether you need to know this stuff: you don't, really. Stick to a few basic rules and you'll be fine (this code doesn't). If you really do want to eke that last CPU cycle out of your code safely, read the JSR 133 cookbook ( http://g.oswego.edu/dl/jmm/cookbook.html ) and subscribe to Doug Lea's Java memory model discussion group ;-).
My advice is to keep it simple unless you really, really need to do otherwise (i.e. prove you need to with a profiler), as even when these optimisations work, they are bad for maintainability unless you do some pretty heavy documentation. Also, the newer the Java, the more effectively it optimises your code for you, e.g. lock elision etc. http://www.ibm.com/developerworks/java/library/j-jtp10185/
"Eagles may soar but weasels don't get sucked into jet engines" SCJP 1.6, SCWCD 1.4, SCJD 1.5,SCBCD 5
posted 8 years ago
Thanks a lot for the answer, and extra thanks for the additional explanation. It's very helpful.