Simultaneously Inserting Data

sorabh jaiswal
Greenhorn

Joined: May 11, 2007
Posts: 10
Hi All,

I am using basic Java 1.4 and DB2 version 9.

I have 4 Java classes, all having main methods. These 4 classes are run from Unix. They can be scheduled to execute either at the same time or at different times.
The 4 classes read 4 different text files. The data in the 4 text files is different in each file.
The 4 classes insert the values into the same table (for example, an Employee table).

The problem is: if they are all called at the same time, how can we avoid table locking? The data these classes insert into the Employee table is very large, more than 50,000 records. How can we maintain performance?

1. I was thinking of using the synchronized keyword on the method where I insert the data, but that would degrade performance.
2. Is there any way stored procedures or batch updates can help?

Any suggestion or solution someone might have used?
Nitesh Kant
Bartender

Joined: Feb 25, 2007
Posts: 1638

Originally posted by sorabh jaiswal:

The problem is: if they are all called at the same time, how can we avoid table locking? The data these classes insert into the Employee table is very large, more than 50,000 records. How can we maintain performance?

Are you inserting new records or updating existing records?
If you are inserting new records, then you do not have to do anything; the database will handle that for you.
If you are updating, can you use SELECT ... FOR UPDATE? That will lock only the rows that are to be updated.
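Something along these lines (an untested sketch, assuming the usual java.sql imports; the EMPLOYEE table and the EMP_ID, SALARY and DEPT_ID columns are made-up names for illustration):

// Untested sketch: lock only the rows you are about to update.
// EMPLOYEE, EMP_ID, SALARY and DEPT_ID are made-up names.
static void updateDepartment(Connection con, int deptId) throws SQLException {
    con.setAutoCommit(false);
    PreparedStatement ps = con.prepareStatement(
            "SELECT EMP_ID, SALARY FROM EMPLOYEE WHERE DEPT_ID = ? FOR UPDATE");
    ps.setInt(1, deptId);
    ResultSet rs = ps.executeQuery();   // only the selected rows are locked
    while (rs.next()) {
        // ... change each row here, e.g. with an UPDATE keyed on EMP_ID ...
    }
    con.commit();                       // the commit releases the row locks
    rs.close();
    ps.close();
}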

Originally posted by sorabh jaiswal:


1. I was thinking of using the synchronized keyword on the method where I insert the data, but that would degrade performance.
2. Is there any way stored procedures or batch updates can help?

Any suggestion or solution someone might have used?


Synchronization will not help, as the programs inserting records run in different JVMs.
Batch updates will definitely make it perform better, but they are in no way related to table locking and concurrent inserts.
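A batched insert looks roughly like this (an untested sketch; the JDBC URL, driver class, EMPLOYEE table, column names and comma-separated file layout are all made up for illustration, and the batch size of 1000 is just a starting point to tune):

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Driver class and URL are illustrative; use whatever your DB2 setup requires.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "user", "password");
        con.setAutoCommit(false);

        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO EMPLOYEE (EMP_ID, NAME, SALARY) VALUES (?, ?, ?)");
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        String line;
        int count = 0;
        while ((line = in.readLine()) != null) {
            String[] f = line.split(",");        // assumes a comma-separated file
            ps.setInt(1, Integer.parseInt(f[0]));
            ps.setString(2, f[1]);
            ps.setDouble(3, Double.parseDouble(f[2]));
            ps.addBatch();
            if (++count % 1000 == 0) {
                ps.executeBatch();               // one round trip per 1000 rows
            }
        }
        ps.executeBatch();                       // flush the remaining rows
        con.commit();
        in.close();
        ps.close();
        con.close();
    }
}

This cuts down the number of network round trips to DB2, but as said above it does not change how the rows get locked.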


Yelamuri Chandu
Greenhorn

Joined: Jan 08, 2008
Posts: 15
The reply posted above by Nitesh Kant is perfect. Please follow it.
Gabriel Claramunt
Ranch Hand

Joined: May 26, 2007
Posts: 375
I would serialize the inserts (i.e. run one program after the other), because after N rows are locked, DB2 escalates the lock and locks the entire table, which could lead to deadlock errors from the database. You could probably get a more specific answer in a DB2 forum.


Gabriel
Software Surgeon
Nitesh Kant
Bartender

Joined: Feb 25, 2007
Posts: 1638

Originally posted by Gabriel Claramunt:
I would serialize the inserts (i.e. run one program after the other), because after N rows are locked, DB2 escalates the lock and locks the entire table, which could lead to deadlock errors from the database. You could probably get a more specific answer in a DB2 forum.


I have not worked on DB2, so I cannot really comment, but the above looks a bit dirty. Escalating the lock to table level is done by the DB2 server, not explicitly by the user. Deadlocks will occur if two programs take nested locks in mutually reverse order, and I don't see such a condition here (I may be missing something).
Jeanne Boyarsky
author & internet detective
Marshal

Joined: May 26, 2003
Posts: 30580
    
If you don't need the possibility of a rollback for the whole program, you could commit more often rather than waiting for the end. This clears the locks more often, which reduces contention. If you do need the possibility of a rollback, I second the recommendation to run them serially. The contention would be extremely high, so you are better off avoiding it.
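Building on a batched insert loop like the one sketched earlier in the thread, that could look roughly like this (untested sketch; the 5,000-row commit interval is a made-up number to tune against your own contention and rollback requirements):

// Untested sketch: commit every COMMIT_INTERVAL rows so DB2 can release the
// row locks early; the trade-off is that you can no longer roll back the whole file.
private static final int COMMIT_INTERVAL = 5000;    // made-up value, tune it

static void insertWithPeriodicCommits(Connection con, PreparedStatement ps,
                                      BufferedReader in) throws Exception {
    con.setAutoCommit(false);
    String line;
    int count = 0;
    while ((line = in.readLine()) != null) {
        String[] f = line.split(",");               // same made-up file layout as before
        ps.setInt(1, Integer.parseInt(f[0]));
        ps.setString(2, f[1]);
        ps.setDouble(3, Double.parseDouble(f[2]));
        ps.addBatch();
        if (++count % COMMIT_INTERVAL == 0) {
            ps.executeBatch();
            con.commit();                           // releases the locks taken so far
        }
    }
    ps.executeBatch();                              // flush and commit the remainder
    con.commit();
}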


[Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos]
Blogging on Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, OCAJP, OCPJP beta, TOGAF part 1 and part 2
Raees Uzhunnan
Ranch Hand

Joined: Aug 15, 2002
Posts: 126
sorabh,

Yes, batch updates and stored procedures will help, but what will make a big difference is:
1. If you can partition the table at the back end. This will open up more insert points and inserts will be faster.

2. Check whether you have a table-level locking scheme set for the table. If so, change it to a row-level locking scheme; this will reduce contention in a big way (see the sketch after this list).

3. Also, since you said the data is different in all the files, I don't see a possibility of the same data being updated, and hence no need for an index to avoid table scans.
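For point 2, the lock granularity is plain DDL at the DB2 end; something like this, run through JDBC (untested sketch; ALTER TABLE ... LOCKSIZE ROW is DB2 syntax, so please verify it against your DB2 9 documentation, and EMPLOYEE is the made-up table name used earlier in the thread):

// Untested sketch: ask DB2 to lock the (made-up) EMPLOYEE table row by row.
// Verify the LOCKSIZE clause against your DB2 9 documentation before using it.
static void useRowLocks(Connection con) throws SQLException {
    Statement st = con.createStatement();
    st.executeUpdate("ALTER TABLE EMPLOYEE LOCKSIZE ROW");
    st.close();
}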

Thanks
raees
[ January 20, 2008: Message edited by: Raees Uzhunnan ]

Sun Certified Enterprise Architect
Java Technology Blog
 