For example, consider the following sequence of events:
1. Thread A notices that the value is not initialized, so it obtains the lock and begins to initialize the value.
2. Due to the semantics of some programming languages, the code generated by the compiler is allowed to update the shared variable to point to a partially constructed object before A has finished performing the initialization.
3. Thread B notices that the shared variable has been initialized (or so it appears), and returns its value. Because thread B believes the value is already initialized, it does not acquire the lock. If the variable is used before A finishes initializing it, the program will likely crash.
I don't understand the statement in bold. Could anybody help?
Consider the classic example of a non-thread safe operation:
someStaticInt = someStaticInt + 1;
Without some additional code to perform synchronization, running the above statement concurrently in multiple threads produces a non-deterministic result. The crux of the problem is that the operation at large involves three steps:
a) fetch the current value of someStaticInt
b) compute the value of someStaticInt + 1
c) assign the computed value to someStaticInt
So even though the programmer only writes one line of code, there are multiple steps involved ... and there's no guarantee that a thread won't be interrupted after, say, step b.
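The three steps can be made explicit in code (the class and method names below are invented for illustration). If two threads both perform step (a) before either performs step (c), one of the increments is lost:

```java
public class RaceDemo {
    static int someStaticInt = 0;

    // Equivalent to: someStaticInt = someStaticInt + 1;
    static void increment() {
        int tmp = someStaticInt;  // a) fetch the current value
        tmp = tmp + 1;            // b) compute the new value
        someStaticInt = tmp;      // c) assign it back -- another thread may
                                  //    have updated someStaticInt in between
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Would be 200000 if the increments were atomic; often less in practice.
        System.out.println("someStaticInt = " + someStaticInt);
    }
}
```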
To me, it sounds like
the compiler is allowed to update the shared variable to point to a partially constructed object before A has finished performing the initialization
is just an elaborate (if exact) way of saying that the single line of code:
helper = new Helper();
... really translates to multiple steps. As with the classic example, an operation that translates to multiple (non-atomic) steps is vulnerable to synchronization problems when performed in a multi-threaded environment.
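If it helps, the usual fix in Java (5 and later) is to declare the shared field volatile, which prevents the reference from being published before construction completes. This is a minimal sketch, reusing the Helper name from the example above:

```java
class Helper {
    private final int data;
    Helper() { data = 42; }
    int getData() { return data; }
}

class SafeLazyInit {
    // volatile guarantees that any thread that sees a non-null reference
    // also sees the fully constructed Helper object behind it
    private static volatile Helper helper;

    static Helper getHelper() {
        if (helper == null) {
            synchronized (SafeLazyInit.class) {
                if (helper == null) {
                    helper = new Helper();
                }
            }
        }
        return helper;
    }
}
```

Alternatively, if you don't need lazy initialization at all, an eagerly initialized `static final` field sidesteps the whole problem.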