commit performance

Ranch Hand
Posts: 1970
I have an application that adds a lot of rows to certain tables, as it runs. I have so far coded it in the simplest way, which is to commit after adding each row. This means it's doing tens of commits of this type per second.

If I were able to re-jig the application so that it could add several rows before committing, how likely is this to result in improved (or degraded) performance? It's not easy to make this change(*) so I would like some idea of how worthwhile it might be.

If it makes a difference, I'm using JavaDB.

(*) I don't just have an array or collection of items that I'm looping through. I am adding items in response to call-backs from other processes, and there is lots of infrastructure in the way between the call-back handlers and the code that hits the database.
 
Ranch Hand
Posts: 518
I've not used JavaDB yet, but in general, committing after every 100 or 1,000 rows [or more] will probably result in a major improvement in the number of rows you can insert per second. Setting up and tearing down a transaction carries quite a bit of overhead.

I've seen over 100% improvement in performance by using this technique. You have to balance the performance improvement against the resources (e.g., locks) held for the duration of the transaction.

If you are going to allow the transaction to span several call-backs, things may get a little complex if, in the event of a failure, you need to notify the callers that certain rows you previously processed failed to be inserted (and not just the last/current one).
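Since the call-backs arrive one at a time, the batching above amounts to buffering rows and committing only when the buffer fills. Here's a minimal sketch of that idea in plain Java; the `BatchingInserter` class and its names are hypothetical, and the actual JDBC work (e.g. `PreparedStatement.addBatch()` followed by `Connection.commit()`) is stood in for by a callback so the buffering logic is self-contained:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical helper: buffers rows arriving from call-backs and
// "commits" a batch every batchSize rows. The Consumer stands in for
// the real JDBC work (addBatch/executeBatch/commit on a Connection).
class BatchingInserter {
    private final int batchSize;
    private final Consumer<List<String>> commitAction;
    private final List<String> pending = new ArrayList<>();

    BatchingInserter(int batchSize, Consumer<List<String>> commitAction) {
        this.batchSize = batchSize;
        this.commitAction = commitAction;
    }

    // Called from each call-back handler; only commits when the buffer fills.
    void add(String row) {
        pending.add(row);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    // Commit whatever is buffered; call this on shutdown (or a timer)
    // so the tail of rows isn't left uncommitted.
    void flush() {
        if (!pending.isEmpty()) {
            commitAction.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

With a real `java.sql.Connection` you would call `setAutoCommit(false)` once up front and do the inserts plus `conn.commit()` inside `commitAction`; the important part is remembering to `flush()` at shutdown (or on a timer) so rows that arrived after the last full batch aren't lost.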
 