Apologies if this is the millionth time somebody's asked this question, but I've been unable to find an answer other than the standard "write synchronized code more carefully". And we all know, or SHOULD know, how to do that.
Also, I realize the VM and the compiler have a lot of tricks at this point to improve performance, but let's assume we're protecting a critical section PROPERLY, and that a large part of the overhead is in protecting a SMALL critical section that would otherwise run very fast. We can't resort to a class in java.util.concurrent.atomic because we're protecting multiple statements, so we need some locking mechanism (or equivalent).
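To make the scenario concrete, here's a hypothetical sketch (the class and field names are made up) of why a single java.util.concurrent.atomic class doesn't help: two fields must change together, so the whole multi-statement section needs one lock.

```java
// Hypothetical example: two fields must be updated together, so no
// single AtomicLong can protect the invariant -- a lock is needed.
// Here plain synchronized guards the small critical section.
public class Account {
    private long checking = 100;
    private long savings = 0;

    // The critical section spans two statements; both must run under
    // the same lock so checking + savings stays constant.
    public synchronized void transferToSavings(long amount) {
        checking -= amount;
        savings += amount;
    }

    public synchronized long total() {
        return checking + savings;
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account();
        Thread t = new Thread(() -> a.transferToSavings(30));
        t.start();
        t.join();
        System.out.println(a.total()); // invariant preserved: prints 100
    }
}
```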
2. Is there a concurrency class in the SDK designed for higher thread throughput when protecting a critical section? E.g., is the locking/unlocking overhead for a binary Semaphore smaller than for synchronized?
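For reference, this is the kind of binary-Semaphore-as-mutex arrangement the question is asking about (the class and counter are illustrative only; whether it actually beats synchronized is JVM- and contention-dependent, so measure before switching):

```java
import java.util.concurrent.Semaphore;

// Sketch: a Semaphore with one permit used as a mutual-exclusion lock.
public class SemaphoreGuard {
    private final Semaphore mutex = new Semaphore(1);
    private int counter = 0;

    public void increment() throws InterruptedException {
        mutex.acquire();         // "lock"
        try {
            counter++;           // small critical section
        } finally {
            mutex.release();     // "unlock", even if an exception is thrown
        }
    }

    public int get() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        SemaphoreGuard g = new SemaphoreGuard();
        for (int i = 0; i < 5; i++) g.increment();
        System.out.println(g.get()); // prints 5
    }
}
```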
Ok, let me ask this: Are the concurrent classes built as convenience classes on top of synchronized, .wait, and .notify/.notifyAll?
The concurrency classes offer substantial benefits over the older mechanisms. But there's a *lot* to know about them, so I'd advise perusing one of the preeminent books on the subject ("Java Threads" by Oaks/Wong or "Java Concurrency in Practice" by Goetz et al.).
One class you might consider in particular is a [Reentrant]ReadWriteLock instead of a synchronized block, if not all threads need write access to the shared resource.
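To illustrate that suggestion, here's a minimal read-mostly cache sketch (the class name and map-based design are mine, not from the post): many readers can hold the read lock at once, while writers get exclusive access.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a ReentrantReadWriteLock guarding a shared map.
public class ReadMostlyCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();    // shared: readers don't block each other
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();   // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadMostlyCache cache = new ReadMostlyCache();
        cache.put("answer", "42");
        System.out.println(cache.get("answer")); // prints 42
    }
}
```

Under heavy read traffic this can raise throughput over synchronized, since only writes serialize; with mostly writes it buys nothing.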
The article is indeed very old. Particularly the Java 5 and Java 6 runtimes have made tremendous performance improvements in supporting concurrency, so I'd dismiss anything on the subject that's 5 years old or more.