
Best solution for multiple inserts across two tables

 
Ranch Hand
Posts: 108
Hi,

I am busy working on a system that needs to persist data into two tables as a single transaction. We currently use direct JDBC, but I am open to looking at other solutions (Hibernate, EJB CMPs, etc.).

The data in Table1 is inserted by an outside system, and my application must update a row in that table if it exists (only one update to this table occurs per transaction).

The data in Table2 is inserted by my application for each field that is updated in Table1 - this is the audit trail of changes to Table1.

To prove the concept I currently build up all the SQL statements, add them to a java.sql.Statement object as a batch and then execute the batch.
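For context, the proof of concept is essentially the following sketch. The table and column names are invented for illustration, and the string concatenation mirrors the build-up-the-SQL approach described above (which is injection-prone; bind variables would be safer):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class BatchProofOfConcept {

    // Pure SQL building, so the statements can be inspected before execution.
    // Table and column names are invented for illustration.
    public static List<String> buildStatements(int id, String field,
                                               String oldValue, String newValue) {
        List<String> sql = new ArrayList<>();
        sql.add("UPDATE Table1 SET " + field + " = '" + newValue + "' WHERE id = " + id);
        sql.add("INSERT INTO Table2 (table1_id, field_name, old_value, new_value) VALUES ("
                + id + ", '" + field + "', '" + oldValue + "', '" + newValue + "')");
        return sql;
    }

    // Execute all statements as one batch on a live connection.
    public static void applyChange(Connection con, int id, String field,
                                   String oldValue, String newValue) throws SQLException {
        try (Statement stmt = con.createStatement()) {
            for (String s : buildStatements(id, field, oldValue, newValue)) {
                stmt.addBatch(s);
            }
            stmt.executeBatch(); // commit behaviour still depends on the autocommit setting
        }
    }
}
```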

This is working fine during development, but I don't want to get surprised in three months when we deliver to the client.

Thanks
Chris

[ March 09, 2006: Message edited by: Chris Brat ]

 
Ranch Hand
Posts: 333
Well, that's certainly one way to do it, if a bit unusual...

More typically, this is done via:
1. set autocommit off
2. execute two separate PreparedStatements
3. commit
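In JDBC terms, that pattern looks roughly like this. The connection details, table names, and columns are illustrative, and the rollback on failure is the part that's easy to forget (this needs a live Connection to actually run):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class TwoTableTransaction {

    // Update Table1 and write the audit row to Table2 as one unit of work.
    public static void updateWithAudit(Connection con, int id,
                                       String oldValue, String newValue) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false);                          // 1. set autocommit off
        try (PreparedStatement update = con.prepareStatement(
                 "UPDATE Table1 SET status = ? WHERE id = ?");
             PreparedStatement audit = con.prepareStatement(
                 "INSERT INTO Table2 (table1_id, field_name, old_value, new_value) "
                 + "VALUES (?, ?, ?, ?)")) {

            update.setString(1, newValue);                 // 2. two separate statements
            update.setInt(2, id);
            update.executeUpdate();

            audit.setInt(1, id);
            audit.setString(2, "status");
            audit.setString(3, oldValue);
            audit.setString(4, newValue);
            audit.executeUpdate();

            con.commit();                                  // 3. commit: both rows or neither
        } catch (SQLException e) {
            con.rollback();                                // undo the partial work on failure
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```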

"real" databases all support transactionality, so this works just fine (MS Access is not a "real" database in this regard and shouldn't be used for multi-user access...).

Another approach would be to use triggers, if your database supports them. A trigger is a stored procedure that the database executes upon detecting a change to a row in a specified table. They're often particularly appropriate for audit trails because they're harder to bypass.
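As a rough sketch of what an audit trigger can look like (syntax varies quite a bit by vendor; this is close to Oracle PL/SQL, and the table and column names are invented):

```sql
-- Illustrative only: trigger syntax differs between databases.
CREATE OR REPLACE TRIGGER table1_audit
AFTER UPDATE OF status ON Table1
FOR EACH ROW
BEGIN
    INSERT INTO Table2 (table1_id, field_name, old_value, new_value)
    VALUES (:OLD.id, 'status', :OLD.status, :NEW.status);
END;
```

With this in place the application only updates Table1; the database writes the audit row itself, so code that bypasses your application still gets audited.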

Finally, batching... First, it doesn't affect transactionality at all; it's not intended to. If you're using the wrong transactionality (on other operations), batching might shorten the window of vulnerability to transaction errors, but it doesn't eliminate it.

Second, batching is intended as a performance improvement, and it generally achieves that in two ways: (a) reducing "network time" by transferring statements to the database in groups, and (b) reusing a parsed PreparedStatement with different bind variables. (b) doesn't apply to you, since you're using two different SQL statements. Different drivers/databases may or may not implement (a); some drivers simply simulate batching and in fact transfer the two statements separately. (The better DBs do it right, as far as I know...)

In other words, batching doesn't really change the logical effect of your statements on the database, just the timing/performance, maybe.
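For completeness, case (b) looks like this when it does apply: one parsed statement, many sets of bind variables. Names are illustrative, and this needs a live Connection:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

public class AuditBatchExample {

    // One PreparedStatement, parsed once, executed with many bind sets --
    // the case batching is actually designed for. Each map entry is
    // fieldName -> {oldValue, newValue}.
    public static void insertAuditRows(Connection con, int table1Id,
                                       Map<String, String[]> changes) throws SQLException {
        try (PreparedStatement audit = con.prepareStatement(
                "INSERT INTO Table2 (table1_id, field_name, old_value, new_value) "
                + "VALUES (?, ?, ?, ?)")) {
            for (Map.Entry<String, String[]> e : changes.entrySet()) {
                audit.setInt(1, table1Id);
                audit.setString(2, e.getKey());
                audit.setString(3, e.getValue()[0]);   // old value
                audit.setString(4, e.getValue()[1]);   // new value
                audit.addBatch();
            }
            audit.executeBatch();   // one round trip, driver permitting
        }
    }
}
```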
 