
performance with synchronized

aruna sydu
Ranch Hand

Joined: Jan 21, 2008
Posts: 98

Hi All,

We had a performance issue with the code below: it was synchronized at the method level, so it blocked all other threads and caused problems.

At a high level, the method returns a unique next sequence number:
1. Get a connection from the connection pool.
2. Query the database table for the existing sequence.
3. Increment the sequence by 1.
4. Update the incremented sequence in the database table.
5. Return the sequence.

Any thoughts on how to optimize this code?

Your input on how and why would be helpful.

Thanks ,

Attitude Determines Altitude

Martin Vajsar

Joined: Aug 22, 2010
Posts: 3718

The performance problem lies in the database call. You need to dramatically decrease the number of database calls to improve performance.

You might do so by caching the sequences in Java. You'll need to cache values for each branch separately, perhaps using a map. The cache would contain the current value and the maximum value. All accesses to the cache would be synchronized. If the current value is less than maximum, just increase the value. If it is equal to maximum, allocate a new batch for that branch from the database. This would be done by increasing the value by the batch size (1000, for example) - that way you can allocate 1000 values in one DB call.

All requests that arrive while the database call is in progress will have to wait, but if you allocate the numbers 1000 at a time, you'll decrease the frequency of database calls a thousandfold (on average). You might lower the collision rate further still by using as many synchronization objects as there are branches. Concurrent calls for different branches would then not block each other. Perhaps a ConcurrentHashMap could be utilized this way.
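The batch-caching scheme described above can be sketched roughly as follows. This is only an illustration, not production code: the database access is simulated by an in-memory map, and a real implementation would instead issue something like UPDATE seq_table SET seq = seq + ? WHERE branch = ? and read back the previous value. All class and method names here are made up for the example.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SequenceCache {
    private static final int BATCH_SIZE = 1000;

    // One allocator per branch; computeIfAbsent keeps creation thread-safe.
    private final Map<String, BranchAllocator> allocators = new ConcurrentHashMap<>();

    // In-memory stand-in for the database table, for illustration only.
    private final Map<String, Long> dbTable = new ConcurrentHashMap<>();

    public long nextSequence(String branch) {
        return allocators.computeIfAbsent(branch, b -> new BranchAllocator(b)).next();
    }

    private final class BranchAllocator {
        private final String branch;
        private long current; // last value handed out
        private long max;     // upper bound of the allocated batch

        BranchAllocator(String branch) {
            this.branch = branch;
        }

        // Synchronized on the per-branch allocator, so only callers
        // requesting the same branch contend with each other.
        synchronized long next() {
            if (current >= max) {
                // One "DB" round trip allocates BATCH_SIZE values at once.
                current = allocateBatchFromDb(branch, BATCH_SIZE);
                max = current + BATCH_SIZE;
            }
            return ++current;
        }
    }

    // Simulates: UPDATE seq_table SET seq = seq + ? WHERE branch = ?,
    // returning the value the row held before the update.
    private long allocateBatchFromDb(String branch, int batchSize) {
        return dbTable.merge(branch, (long) batchSize, Long::sum) - batchSize;
    }
}
```

Note that only one database round trip happens per 1000 allocations for a given branch, and calls for different branches never wait on each other.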

There are downsides to this approach: there is a possibility of gaps in the sequence, and if there are several independent processes using this logic, the numbers will generally be allocated out of order. (You might also need to review your current processing in the database if several such processes are running. Depending on your database, explicit locking might be needed to prevent race conditions there.)

Perhaps you could also employ application key generators (the solution I propose is a crude app key generator, actually). I haven't used these key generators myself, but there is an article about them here, for example.
aruna sydu
Ranch Hand

Joined: Jan 21, 2008
Posts: 98

Hi Martin Vajsar,

Thanks for the elaborate reply.

It really helped me look at another dimension of the problem: the multiple database calls. I was stuck on the synchronization.

Considering the above, do you think it would help to move the logic from Java into a stored procedure, restricting it to a single database call?

We cannot afford the downside of gaps in the sequence, as the system has been in production for many years.

Lately I have read about reentrant locks. Would they help in this scenario?

Please advise.

Martin Vajsar

Joined: Aug 22, 2010
Posts: 3718

Reducing the number of DB calls might help a bit, but that's about it. With one DB call per number allocation, your solution will be inherently unscalable. I don't see how reentrant locks could change that.

I don't think that being in production precludes allowing gaps. For one thing, gaps can never be fully avoided. Consider a situation where you allocate a number from the database, but then an error in the processing - a network error or an application bug, for example - results in the record for which the number was allocated never being created. This results in a gap, and there is no way to definitively preclude such a scenario. All production systems work this way.

Secondly, with the described solution, a gap (apart from the errors described above) will only occur when you restart the application or throw away the cached values for some reason. That should not happen very often. And even if it did - if you store the generated numbers in a long, then at an allocation rate of 1000 numbers per second the range of a long will last for hundreds of millions of years.
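As a quick sanity check on that last claim, here is the arithmetic: Long.MAX_VALUE divided by the allocation rate gives the time until the range is exhausted.

```java
public class LongRangeCheck {
    public static void main(String[] args) {
        long perSecond = 1_000L;                   // allocations per second
        long secondsPerYear = 365L * 24 * 60 * 60; // 31,536,000
        long years = Long.MAX_VALUE / perSecond / secondsPerYear;
        // Roughly 292 million years before the range of long is exhausted.
        System.out.println("Years until exhaustion: " + years);
    }
}
```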