JDBC manually commit

 
Greenhorn
Posts: 15
Now I have many, many statements to execute. Some of them return a ResultSet and some do not.

I am not sure how to commit to the database manually. I found some topics when I searched the forum, but they are very simple, so I am still not sure how.

Assume that you have a sequence of SQL statements in this order:

delete from x table

select from x table

insert x table

select x table

insert y table

update y table

select y table

update z table

and on...

I made it somewhat long for this example, but my real code is similar to this.

I want to know: do I have to commit after the insert, delete, and update statements, before the select statements?
Or should I do it only once, in a try/catch block at the end of the process?

for example:

Connection conn = DriverManager.getConnection(...);

conn.setAutoCommit(false);

delete from x table
conn.commit();

select from x table

insert x table
conn.commit();

select x table

insert y table

update y table
conn.commit();

select y table

update z table
conn.commit();

And I want to know: where do I roll back if an exception is thrown?

[ June 30, 2005: Message edited by: rasit fidanov ]

 
Ranch Hand
Posts: 1683
conn.setAutoCommit(false); // start of JDBC transaction
delete from x table
conn.commit(); // end of JDBC transaction

You use transactions for units of work which are atomic: they must either succeed or fail as a unit. You commit in the try block and roll back in the catch block.
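As a rough sketch of that pattern (the helper name and the placeholder SQL are made up for illustration, not tied to any real schema):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TransactionSketch {
    /** Runs the given SQL statements as one atomic unit of work. */
    static void runAsTransaction(Connection conn, String... sqls) throws SQLException {
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);             // start of the JDBC transaction
        try (Statement stmt = conn.createStatement()) {
            for (String sql : sqls) {
                stmt.execute(sql);
            }
            conn.commit();                     // every statement succeeded: end the transaction
        } catch (SQLException e) {
            conn.rollback();                   // any failure: undo the whole unit
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit); // restore the connection's previous mode
        }
    }
}
```

Applied to the delete/insert/update sequence from the question, every statement that belongs to one unit of work goes into a single call, so one commit() or rollback() covers them all.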
 
Ranch Hand
Posts: 169
Your question is not clear.

You have a set of SQL statements; do you want to execute them in a batch or separately?

If you want to execute them in a batch, you can add each statement with addBatch() on a Statement and then execute them all with executeBatch().

Or, if you are asking about committing manually, as you mentioned: con.setAutoCommit(false) turns auto-commit off, and you then need to commit explicitly with the commit() method on the Connection.
If you don't call setAutoCommit(false), the Connection by default commits to the database after executing each statement.
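In code, that batch pattern might look like this minimal sketch (the SQL strings are whatever updates you queue; nothing here is tied to a particular schema):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BatchSketch {
    /** Queues several updates on one Statement and sends them in a single round trip. */
    static int[] runBatch(Connection conn, String... updates) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            for (String sql : updates) {
                stmt.addBatch(sql);      // addBatch() lives on Statement, not on Connection
            }
            return stmt.executeBatch();  // one update count per queued statement
        }
    }
}
```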

Srilakshmi
 
rasit fidanov
Greenhorn
Posts: 15
Thanks for the answers.

@Roger Chung-Wee:
Do you mean that every commit has to be in its own try/catch block, or that there should be only one try/catch block at the end of all the queries?

I mean:

try {
    conn.setAutoCommit(false); // start of JDBC transaction
    // delete from x table
    conn.commit(); // end of JDBC transaction
} catch (Exception e) {
    conn.rollback();
}
...
// another commit
try {
    conn.setAutoCommit(false); // start of JDBC transaction
    // update x table
    conn.commit(); // end of JDBC transaction
} catch (Exception e) {
    conn.rollback();
}
...


Or should I roll back only once, at the end of the page, in the catch of a single try/catch?

@Srilakshmi Vara :

Your question is not clear

Sorry, I am a beginner.

Batch? Umm, I think a batch works like this: it adds all the queries, in order, to the statement and sends them to the database in one go when the executeBatch() method is invoked.

So I wonder: how can I roll back if a problem occurs in an SQL query in the middle of the batch?

Thanks for the answers again.

@Roger Chung-Wee: I have heard that word several times: atomic.
I wonder how I can make my queries atomic, and what atomic actually means.

Can you recommend an ebook? Or, if you don't mind, may I ask you for a document if you have one?
[ July 01, 2005: Message edited by: rasit fidanov ]
 
Roger Chung-Wee
Ranch Hand
Posts: 1683


Do you mean that every commit has to be in its own try/catch block, or that there should be only one try/catch block at the end of all the queries?

I mean:

try {
    conn.setAutoCommit(false); // start of JDBC transaction
    // delete from x table
    conn.commit(); // end of JDBC transaction
} catch (Exception e) {
    conn.rollback();
}
...
// another commit
try {
    conn.setAutoCommit(false); // start of JDBC transaction
    // update x table
    conn.commit(); // end of JDBC transaction
} catch (Exception e) {
    conn.rollback();
}


Each transaction must either commit or rollback in one try/catch block. Remember, it is the commit or rollback which ends the transaction. It is up to you to determine where the transaction boundaries are.


Batch? Umm, I think a batch works like this: it adds all the queries, in order, to the statement and sends them to the database in one go when the executeBatch() method is invoked.

So I wonder: how can I roll back if a problem occurs in an SQL query in the middle of the batch?


If an exception is thrown in the middle of the batch, a BatchUpdateException is raised, and the remaining statements in the batch may or may not still be executed. (This depends on the JDBC driver.) If your batch is being executed inside a transaction, the rollback() method will be invoked in your catch block, so none of the updates will be saved.
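A sketch combining the two ideas, batch plus transaction, might look like this (BatchUpdateException is caught separately only to show where getUpdateCounts() could be inspected; the SQL strings are whatever updates you queue):

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AtomicBatchSketch {
    /** Wraps a batch in a transaction so a mid-batch failure leaves nothing behind. */
    static void runBatchAtomically(Connection conn, String... updates) throws SQLException {
        conn.setAutoCommit(false);
        try (Statement stmt = conn.createStatement()) {
            for (String sql : updates) {
                stmt.addBatch(sql);
            }
            stmt.executeBatch();
            conn.commit();
        } catch (BatchUpdateException e) {
            // e.getUpdateCounts() reports how far the driver got before it failed
            conn.rollback();                 // discard any statements that did run
            throw e;
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}
```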

Can you recommend an ebook? Or may I ask you for a document if you have one?


Transactions
 
rasit fidanov
Greenhorn
Posts: 15
Ummm, I wrote a reply an hour or two ago, but some connection failures prevented me from posting it.

In the meantime I read some more about transactions. A transaction is an indivisible unit of work.

Hmmm, I think I got it.

If an SQL statement depends on others, or you can't do it without them, they must all be in the same transaction.
(Atomic: the atom was once thought to be indivisible, as you know. I got it.)

Hmmm.
But another question occurred to me, and I think it is important too.

The question is:

Assume that we have a transaction, a group of several SQL statements. Some of them are selects and some are inserts, deletes, and so on.

We know that before the commit, nothing is physically written to the database. What if one of the SQL queries affects millions of rows? And

what if, while we are inside the transaction and not yet committed, another user runs a query and commits to the database before we do?

What will happen then? I think we will lose some data, because our transaction has not finished and has not been committed.

Thank you all for your help.
[ July 01, 2005: Message edited by: rasit fidanov ]
 
Roger Chung-Wee
Ranch Hand
Posts: 1683
You won't lose data, thanks to DBMS row locking. Locks are held for a statement until the transaction is committed or rolled back. If there is a conflict (such as two transactions trying to acquire the same lock), the first transaction will typically succeed, and the second will be blocked until the first releases the lock (or the attempt to acquire the lock times out and the operation fails).

How locks are set is determined by what is called a transaction isolation level. There are four levels, but not every database necessarily supports all of them. These levels are:

TRANSACTION_READ_UNCOMMITTED
This is the fastest properly valid transaction level. It lets you read changes made by other concurrent transactions that have not yet been committed to the database. Dirty reads (the reading of a value which has not been committed and may be rolled back), non-repeatable reads (different results are seen from multiple reads), and phantom reads (the reading of rows which have been added) can all occur with this transaction level.

TRANSACTION_READ_COMMITTED
You can read changes made by other concurrent transactions that have been committed to the database. Dirty reads are prevented with this transaction level, but non-repeatable reads and phantom reads can both occur. This level is the default for databases such as Oracle9i.

TRANSACTION_REPEATABLE_READ
Reads are repeatable, meaning that reading the same field twice should always result in the same value being read except when the transaction itself has changed the value. Dirty reads and non-repeatable reads are prevented with this transaction level, but phantom reads can occur.

TRANSACTION_SERIALIZABLE
This is the slowest transaction level, but it is fully ACID-compliant (so every "proper" database should support this level). Serializable refers to ACID compliance, in which your transaction is deemed to have taken place in its entirety as if all other committed transactions have taken place in their entirety either before or after the transaction. In other words, the transactions are serialized.

You can get problems with dirty reads or non-repeatable reads if decisions are made which are based on data which subsequently turns out to be wrong. So, some applications must use TRANSACTION_SERIALIZABLE even though it is slow.
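In JDBC these four levels are the TRANSACTION_* constants on java.sql.Connection, and DatabaseMetaData can tell you which ones the driver supports. A small sketch that picks the strictest supported level (the helper name is made up for illustration):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    // The four JDBC isolation levels, strictest first.
    static final int[] PREFERRED = {
        Connection.TRANSACTION_SERIALIZABLE,
        Connection.TRANSACTION_REPEATABLE_READ,
        Connection.TRANSACTION_READ_COMMITTED,
        Connection.TRANSACTION_READ_UNCOMMITTED
    };

    /** Selects the strictest isolation level the driver reports as supported. */
    static void useStrictestSupported(Connection conn) throws SQLException {
        for (int level : PREFERRED) {
            if (conn.getMetaData().supportsTransactionIsolationLevel(level)) {
                conn.setTransactionIsolation(level);
                return;
            }
        }
    }
}
```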
 
rasit fidanov
Greenhorn
Posts: 15
OK, I think I got it.
Now I must practise.

Thank you very much.

Special thanks to Roger.
Thanks, master...
 
author & internet detective
Posts: 41860
Roger,
That's a good description. Did it come from a website that you can link to?
 
Roger Chung-Wee
Ranch Hand
Posts: 1683
I didn't get this info from any one website; I did some research and used a bit of my own knowledge. Sorry, I haven't got any URLs as I didn't bookmark any, but there are some buried deep in oracle.com. I'm sure that you'll find them easily with Google (easier than using Oracle's own search facility).
 
Jeanne Boyarsky
author & internet detective
Posts: 41860
Roger,
No problem. It looked like something that was cut and pasted. You write really well!
 
Ranch Hand
Posts: 1209
I'm still a little confused. I guess the Oracle database must support optimistic locking, right? I mean, there are no locks actually; everybody is free to read and write, except that the row timestamp/version is checked before doing an update. Now suppose trans-1 and trans-2 both read the rows, and trans-1 gets to update before trans-2. Say I have READ_COMMITTED (the Oracle default, as you mentioned) as the isolation level. So trans-2 will get to see the updated row before it runs its update. The timestamp it read up front wouldn't match the current timestamp, and the update will not go through?
 
Karthik Guru
Ranch Hand
Posts: 1209

We know that before the commit, nothing is physically written to the database. What if one of the SQL queries affects millions of rows?

Uhhm, I'm not too sure. I think it does write to the database, except that it stores the previous version (a snapshot) in the rollback segment.
If there is a problem, it can restore the data from the rollback segment; otherwise the data is already in the DB, and the rollback segment can hopefully be cleaned up for that part of the data. Maybe somebody can verify.
 
Roger Chung-Wee
Ranch Hand
Posts: 1683

I'm still a little confused. I guess the Oracle database must support optimistic locking, right? I mean, there are no locks actually; everybody is free to read and write, except that the row timestamp/version is checked before doing an update. Now suppose trans-1 and trans-2 both read the rows, and trans-1 gets to update before trans-2. Say I have READ_COMMITTED (the Oracle default, as you mentioned) as the isolation level. So trans-2 will get to see the updated row before it runs its update. The timestamp it read up front wouldn't match the current timestamp, and the update will not go through?


Indeed, the database will not commit if the timestamps differ. But that isn't the end of it. Remember, optimistic locking is not just provided by the DBMS: the developer has to code for this as well. So, you would expect the transaction to be immediately rolled back and, probably, the DB to be reread and a new transaction to be started.

I would also expect the developer to do some more work to ensure data consistency.

1. Enforce concurrency by using database triggers, otherwise you cannot guarantee database integrity.

2. Use a suitable integer for the concurrency key as the traditional date/time stamp only offers a one-second resolution for the lock.

3. Use an offset when writing back the key.
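A version-column variant of that idea might be sketched as follows. The accounts table, its balance and version columns, and the method name are all invented for illustration; the integer version plays the role of the concurrency key suggested in point 2 above:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdateSketch {
    /**
     * Updates a row only if its version column still matches what we read earlier.
     * Returns false when another transaction changed the row first.
     */
    static boolean updateBalance(Connection conn, long id, long expectedVersion,
                                 double newBalance) throws SQLException {
        String sql = "UPDATE accounts SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, newBalance);
            ps.setLong(2, id);
            ps.setLong(3, expectedVersion);
            // 0 rows updated means we lost the race: reread the row and retry
            return ps.executeUpdate() == 1;
        }
    }
}
```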
 
Karthik Guru
Ranch Hand
Posts: 1209

Originally posted by Roger Chung-Wee:

Indeed, the database will not commit if the timestamps differ. But that isn't the end of it. Remember, optimistic locking is not just provided by the DBMS: the developer has to code for this as well. So, you would expect the transaction to be immediately rolled back and, probably, the DB to be reread and a new transaction to be started.

I would also expect the developer to do some more work to ensure data consistency.

1. Enforce concurrency by using database triggers, otherwise you cannot guarantee database integrity.

2. Use a suitable integer for the concurrency key as the traditional date/time stamp only offers a one-second resolution for the lock.

3. Use an offset when writing back the key.



OK, I am not very clear yet. If I write a trigger and make sure that I raise an exception by detecting the timestamp, is that like taking care of optimistic locking?
The only thing the DB has probably done is that, due to READ_COMMITTED isolation, it allowed me to look at the updated committed data, and maybe it rolled back the transaction because I raised an exception in the trigger?
Would it be correct to say that the developer provides optimistic locking, and not the database?
 
rasit fidanov
Greenhorn
Posts: 15
I have some more questions.
Suppose that we have a prepared statement and we update some values with it in a loop.

The loop has a big number of iterations, for example 100000.

With auto-commit false, we commit manually, and if something goes wrong we can roll back in the catch block.
So will everything be OK if an exception occurs at iteration 99000 of the loop?

Can it roll back everything it did?

Is there a size limit on what can be rolled back?

How does it work, in fact? The rollback mechanism?
[ July 12, 2005: Message edited by: rasit fidanov ]
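A sketch of the loop this question describes (the accounts table and its columns are invented for illustration). With auto-commit off, nothing becomes permanent until the final commit(), so a failure at iteration 99000 followed by rollback() in the catch block undoes all of the earlier updates; what bounds how much can be rolled back is typically the database's undo/rollback storage, not the JDBC driver:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class BigLoopSketch {
    /** How many executeBatch() calls a loop of 'total' updates needs at a given batch size. */
    static int flushCount(int total, int batchSize) {
        return (total + batchSize - 1) / batchSize;   // ceiling division
    }

    /** Runs many parameterised updates in one transaction, flushing in chunks of 1000. */
    static void updateMany(Connection conn, long[] ids, double[] balances) throws SQLException {
        conn.setAutoCommit(false);
        String sql = "UPDATE accounts SET balance = ? WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < ids.length; i++) {
                ps.setDouble(1, balances[i]);
                ps.setLong(2, ids[i]);
                ps.addBatch();
                if ((i + 1) % 1000 == 0) {
                    ps.executeBatch();       // send a chunk; still uncommitted
                }
            }
            ps.executeBatch();               // send the remainder
            conn.commit();                   // one commit at the very end
        } catch (SQLException e) {
            conn.rollback();                 // undoes every update in the loop
            throw e;
        }
    }
}
```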
 
rasit fidanov
Greenhorn
Posts: 15
Another question: can I batch as many statements as I want in a Statement or PreparedStatement?

Is there a limit for it?
And, as in my previous question, is there a limit on rolling back?