Hello, I am trying to do a batch insert into a database table. There are no errors, but the total number of records is much less than expected (I expect 100,000 records to be inserted in one batch). Printing the update counts returned by pStmt.executeBatch() shows 100,000 rows inserted, but the database doesn't show that many records.
Here is the sample code:
The table is a simple three-field table.
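The original code isn't shown, so here is a minimal sketch of the pattern described, assuming a three-column table named `sample` and an already-open JDBC `Connection` (both names are assumptions). Note that `executeBatch()` returns an `int[]` of per-statement update counts, not a single `int`; and if auto-commit was turned off, a missing `conn.commit()` is the most common reason rows never appear in the table:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchInsertSketch {

    // executeBatch() returns one update count per batched statement;
    // summing them gives the number of rows the driver reports as inserted.
    static int totalInserted(int[] updateCounts) {
        int total = 0;
        for (int count : updateCounts) {
            // Some drivers return SUCCESS_NO_INFO (-2) when the row count
            // is unknown; treat that as one affected row for this sketch.
            total += (count == Statement.SUCCESS_NO_INFO) ? 1 : count;
        }
        return total;
    }

    static void insertAll(Connection conn, int rows) throws SQLException {
        // Hypothetical table and columns - adjust to the real schema.
        String sql = "INSERT INTO sample (id, name, value) VALUES (?, ?, ?)";
        try (PreparedStatement pStmt = conn.prepareStatement(sql)) {
            for (int i = 0; i < rows; i++) {
                pStmt.setInt(1, i);
                pStmt.setString(2, "name" + i);
                pStmt.setInt(3, i * 10);
                pStmt.addBatch();
            }
            int[] counts = pStmt.executeBatch();
            System.out.println("Driver reports inserted: " + totalInserted(counts));
            // If auto-commit was disabled, the rows are NOT durable until
            // this commit - without it the driver may roll everything back
            // when the connection closes.
            if (!conn.getAutoCommit()) {
                conn.commit();
            }
        }
    }
}
```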
My queries are: 1) Is there a limit to the number of records that can be added in a batch? 2) Is there a limit from the DB side as to the number of records it can commit?
I don't think there's a limit to the batch size, but I wouldn't run transactions with this many statements. When I need to perform batch inserts, I generally commit every 1,000 items or so. This keeps the transaction from becoming too big (a large transaction eats up memory on the client, and takes a toll on the DB as well, since it needs to keep track of the rollback information).
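The chunked-commit pattern described above can be sketched like this; the table and column names are assumptions, and `CHUNK_SIZE` is the tunable part:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChunkedBatchInsert {

    static final int CHUNK_SIZE = 1000;

    // True when a full chunk has just been queued and should be flushed.
    static boolean isCommitPoint(int rowsQueued, int chunkSize) {
        return rowsQueued > 0 && rowsQueued % chunkSize == 0;
    }

    static void insertAll(Connection conn, int totalRows) throws SQLException {
        conn.setAutoCommit(false);
        // Hypothetical table and columns - adjust to the real schema.
        String sql = "INSERT INTO sample (id, name, value) VALUES (?, ?, ?)";
        try (PreparedStatement pStmt = conn.prepareStatement(sql)) {
            for (int i = 1; i <= totalRows; i++) {
                pStmt.setInt(1, i);
                pStmt.setString(2, "name" + i);
                pStmt.setInt(3, i * 10);
                pStmt.addBatch();
                if (isCommitPoint(i, CHUNK_SIZE)) {
                    pStmt.executeBatch();  // flush this chunk to the DB
                    conn.commit();         // keep the transaction small
                }
            }
            pStmt.executeBatch();  // flush any final partial chunk
            conn.commit();
        }
    }
}
```

Each transaction then holds at most 1,000 rows of rollback information, whatever the total size of the load.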
Thanks, Ulf, for your response. Committing after every 1,000 records raises a question:
If my entire transaction (100,000 records) must be atomic, committing every 1,000 records won't satisfy that condition.
Queries: 1) How do I handle such a situation? 2) Could you elaborate on why keeping a batch size of 1,000 is better?
Thanks once again for your response.
Yes, rollback of all inserted rows isn't so easy any more. You need to be able to identify all the previously inserted (and committed) rows. How you might do that (and it may not actually be possible) depends on the nature of the data you work with, and the table columns it ends up in.
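One way to make previously committed rows identifiable, if the schema allows it, is to tag every inserted row with a load id and delete by that id when a later chunk fails. This is only a sketch under the assumption that a `load_id` column can be added to the table; the table and column names are invented:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CompensatingDelete {

    // Build the cleanup statement for a failed load; deleting by load_id
    // undoes the chunks that were already committed.
    static String cleanupSql(String table) {
        return "DELETE FROM " + table + " WHERE load_id = ?";
    }

    static void removeFailedLoad(Connection conn, String table, long loadId)
            throws SQLException {
        try (PreparedStatement pStmt = conn.prepareStatement(cleanupSql(table))) {
            pStmt.setLong(1, loadId);
            int removed = pStmt.executeUpdate();
            conn.commit();
            System.out.println("Removed " + removed + " rows from failed load " + loadId);
        }
    }
}
```

This gives "atomic enough" behavior for many loads, at the cost of an extra column and a compensating delete instead of a true rollback.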
It's advantageous for the reasons I mentioned: memory usage on the client, and resource usage in the DB. Compare inserting 1,000 rows 1,000 times with inserting a million rows at once and you'll see the difference.