
data locking

parag dharmadhikari
Greenhorn

Joined: Dec 01, 2000
Posts: 8
Hi all,
In a web application it is common for one user to be accessing a particular record for one operation while another user is accessing the same record for a different operation, which often causes unpredictable results. I know the solution is data locking, but can anybody please tell me how it can be implemented?
Will this solution be database dependent or independent?
Regards
Parag
Jamie Robertson
Ranch Hand

Joined: Jul 09, 2001
Posts: 1879

Have a look at the Connection.setTransactionIsolation( int isolation_level ) method.
The different isolation levels (taken from the API) are listed below; a small usage sketch follows the list:
TRANSACTION_NONE
public static final int TRANSACTION_NONE - Indicates that transactions are not supported.
--------------------------------------------------------------------------------
TRANSACTION_READ_UNCOMMITTED
public static final int TRANSACTION_READ_UNCOMMITTED - Dirty reads, non-repeatable reads and phantom reads can occur. This level allows a row changed by one transaction to be read by another transaction before any changes in that row have been committed (a "dirty read"). If any of the changes are rolled back, the second transaction will have retrieved an invalid row.
--------------------------------------------------------------------------------
TRANSACTION_READ_COMMITTED
public static final int TRANSACTION_READ_COMMITTED - Dirty reads are prevented; non-repeatable reads and phantom reads can occur. This level only prohibits a transaction from reading a row with uncommitted changes in it.
--------------------------------------------------------------------------------
TRANSACTION_REPEATABLE_READ
public static final int TRANSACTION_REPEATABLE_READ - Dirty reads and non-repeatable reads are prevented; phantom reads can occur. This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction rereads the row, getting different values the second time (a "non-repeatable read").
--------------------------------------------------------------------------------
TRANSACTION_SERIALIZABLE
public static final int TRANSACTION_SERIALIZABLE - Dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in TRANSACTION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional "phantom" row in the second read.
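A minimal sketch of setting the isolation level before running a transaction; the connection URL, the credentials, and the accounts table are placeholders for illustration, not something from the original post:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class IsolationDemo {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials -- substitute your own driver/database.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:orcl", "scott", "tiger")) {

            // Not every driver supports every level, so check first.
            if (con.getMetaData().supportsTransactionIsolationLevel(
                    Connection.TRANSACTION_SERIALIZABLE)) {
                con.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            }

            con.setAutoCommit(false); // group the statements into one transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
                ps.setBigDecimal(1, new BigDecimal("100.00"));
                ps.setInt(2, 42);
                ps.executeUpdate();
                con.commit();   // make the change visible and release any locks held
            } catch (SQLException e) {
                con.rollback(); // undo on failure so other users never see a half-done change
                throw e;
            }
        }
    }
}

The stricter the level, the fewer anomalies other users can observe, but the more the database has to lock or block, so TRANSACTION_SERIALIZABLE trades throughput for safety. How the locking is actually done underneath is up to the database, but the JDBC call itself is database independent (provided the driver supports the requested level).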
Jamie
 