The issue is how to guarantee lazy initialization of object members in a multi-threaded way.
Hibernate uses lazy initialization for value objects so I don't understand if this is an issue. Is it?
My understanding is that lazy initialization should be safe in these cases because they (the value bean objects) are all different objects. I could see this as an issue if you had objects using lazy initialization that were shared, though.
DCL (double-checked locking) was a programming idiom used to try to avoid having to synchronise. It was broken for reasons discussed in the articles already quoted.
The best thing to do is to write your program with the clearest, simplest synchronisation that it needs to achieve thread safety, and without any "clever" attempts to avoid synchronisation.
If, and only if, your program's performance is then poor, and you can prove that synchronisation is the reason, should you consider optimisation. And even then, with great care: multi-threading is hard enough without trying to optimise at method level for performance too.
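A minimal sketch of that advice (the `Helper`/`Lazy` names here are hypothetical, just for illustration): the clearest, simplest thread-safe lazy initialization synchronises the whole getter, with no DCL trickery.

```java
class Helper {
}

class Lazy {
    private Helper helper; // guarded by "this"

    // Every call synchronises. Correct first; optimise later,
    // and only if profiling proves this lock is the bottleneck.
    synchronized Helper getHelper() {
        if (helper == null) {
            helper = new Helper();
        }
        return helper;
    }
}
```

Every thread that calls `getHelper()` sees a fully constructed `Helper`, because the lock establishes the necessary happens-before ordering — the very guarantee broken DCL tried to get for free.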
Betty Rubble? Well, I would go with Betty... but I'd be thinking of Wilma.
Under normal order evaluation (outermost or call-by-name evaluation) an expression is evaluated only when its value is needed in order for the program to return (the next part of) its result. Updating means that if an expression's value is needed more than once (i.e. it is shared), the result of the first evaluation is remembered and subsequent requests for it will return the remembered value immediately, without further evaluation.
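The "updating" described there — evaluate on first demand, remember the result — can be sketched in Java with a memoizing supplier (the `Memo` class below is hypothetical, not from any library):

```java
import java.util.function.Supplier;

// Call-by-need in miniature: the "expression" (a Supplier) runs only when
// its value is first demanded, and the result is remembered thereafter.
class Memo<T> implements Supplier<T> {
    private Supplier<T> expr;   // the unevaluated expression
    private T value;            // the remembered result
    private boolean evaluated;

    Memo(Supplier<T> expr) { this.expr = expr; }

    @Override
    public synchronized T get() {
        if (!evaluated) {
            value = expr.get(); // first demand: evaluate...
            evaluated = true;   // ...and remember the result
            expr = null;        // let the expression be collected
        }
        return value;           // later demands: no re-evaluation
    }
}
```

With `new Memo<>(() -> expensive())`, the `expensive()` call happens at most once — and not at all if `get()` is never invoked.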
That sounds pretty much like what it is generally taken to mean by non-FP-savvy Java programmers (which includes me). So, what's the issue?
Perhaps I'm too stupid to work this out, but could you explain why these documents apply this erroneously? We are all here to learn, after all.
Well, one could argue that it is not erroneous - just that Java does not apply lazy evaluation to its first-class members. So instead of engaging in that debate, how about I explain what it is?
First, Java programmers use a poor man's implementation of lazy evaluation all the time, e.g. they use InputStream instead of byte[] (arrays are strictly evaluated). They use Iterator<T> instead of T[], and you could even argue that List<T> is lazily evaluated. You could even go on to say that an Iterator<Character> is a lazily evaluated String - imagine if it weren't so cumbersome! The problem, though (and this is where it all falls apart), is that Java is an imperative language - Lists are "mutable", etc. (Strings are not, though, as we all know, so certain optimisations can take place).
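The Iterator<Character>-as-lazy-String idea can be sketched like this (the `LazyChars` name is hypothetical; this is only an illustration of the analogy, not a real API):

```java
import java.util.Iterator;

// A "lazy String": characters are handed out one at a time, on demand,
// rather than materialised up front as a char[].
class LazyChars implements Iterator<Character> {
    private final CharSequence source;
    private int index = 0;

    LazyChars(CharSequence source) { this.source = source; }

    @Override
    public boolean hasNext() { return index < source.length(); }

    @Override
    public Character next() { return source.charAt(index++); }
}
```

A consumer only ever sees one character at a time — which is exactly the point about cumbersomeness: compare driving this by hand with simply writing a String literal.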
In fact, an immutable type is one where all of its operations are referentially transparent. In pure functional programming, all functions possess this property, allowing all sorts of cool things to take place, including lazy evaluation.
You can have, for example, an infinite list, and you can even perform operations on it! You could emulate this with Java, but you'd have to rewrite a lot of the core API - mostly so that types possess referential transparency - and you'd run into other problems, for example the lack of tail call elimination.
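A sketch of that emulation (the `From` and `take` names are made up for this example): an "infinite list" as an Iterator whose elements exist only when demanded.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// An "infinite list" of integers counting up from a start value:
// elements are created only as they are demanded, so the whole
// list is never built.
class From implements Iterator<Integer> {
    private int next;

    From(int start) { this.next = start; }

    @Override
    public boolean hasNext() { return true; } // never exhausted

    @Override
    public Integer next() { return next++; }
}

class Lists {
    // Take n elements from a (possibly infinite) iterator.
    static List<Integer> take(int n, Iterator<Integer> it) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < n && it.hasNext(); i++) {
            out.add(it.next());
        }
        return out;
    }
}
```

`Lists.take(5, new From(42))` evaluates exactly five elements and returns [42, 43, 44, 45, 46]; the only thing you cannot ask for is the "whole" list.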
Take a look at the following in the GHC interpreter:

Prelude> take 5 [42..]
[42,43,44,45,46]
This can be read as "take the first 5 elements from the list that is 42 to infinity". Evaluation only begins when the interpreter needs to write its output - and clearly it evaluates only the first 5 elements of the infinite list, since evaluating the entire list would go on forever.
Many imperative programmers get a shock when they realise that the GHC readFile function returns a String, until they learn about lazy evaluation.
Hope this helps.
 Which leads to a proof by contradiction that Java and software are mutually exclusive.
 Glasgow Haskell Compiler (Haskell is a purely functional, lazily evaluated programming language). The GHC interpreter is also known as ghci.
 Actually, an IO String; IO is a "monad" - a concept taken from Category Theory.