Hi, I have a problem that I don't think can be solved, given the limitations set by my superiors, but I thought it would be good to see what other people think before I tell them it can't be done.

I was told to write an app that calls a DB2 database and retrieves a large result set. The app then iterates over the set and does some work (sends MQ messages). Every time it completes an iteration, it calls the database and updates a flag in the row to say "I have been fulfilled". Now it seems that every process in this company has to be doubled up, and my argument is that this cannot happen without a major enhancement. The problem is that if one application grabs the result set, another can grab the same set at the same time. Even if I put on read locks (which I am not allowed to do), once one process grabs the result set and moves on to the work cycle it drops the lock, and a second process can take the same result set. Basically, I cannot create a lock before the read and hold it until after the writing phase.

My proposed solution was a connection manager process beside the database that the apps would look up, but that is in essence a single process, which they don't want. I am told it would be near impossible to get a major database enhancement done. I also looked at semaphores, but they have to be read before they are written, so they don't really cure anything; it is just the same as the larger problem in miniature.

Does anybody know of a possible solution? Thanks in advance for any help.

David
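To make the race concrete, here is a minimal simulation (plain Java, no database) of the situation described above: two workers each read the full set of unflagged rows before either writes its flag back, so every row gets processed twice. The `table` map, the `selectUnfulfilled` helper, and the latch that forces both reads to happen first are all hypothetical stand-ins for illustration, not the poster's actual code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class DuplicateReadDemo {
    // Stand-in for the DB2 table: row id -> "I have been fulfilled" flag.
    static final ConcurrentHashMap<Integer, Boolean> table = new ConcurrentHashMap<>();
    // Records every (worker, row) processing event so duplicates are visible.
    static final List<String> processed = Collections.synchronizedList(new ArrayList<>());

    // Equivalent of "SELECT id FROM work WHERE fulfilled = 'N'".
    static List<Integer> selectUnfulfilled() {
        List<Integer> rows = new ArrayList<>();
        table.forEach((id, done) -> { if (!done) rows.add(id); });
        return rows;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) table.put(i, false);

        // Force both workers to finish their read before either starts writing,
        // mimicking the window between the SELECT and the flag UPDATE.
        CountDownLatch bothHaveRead = new CountDownLatch(2);
        Runnable worker = () -> {
            List<Integer> mine = selectUnfulfilled();   // both see the same rows...
            bothHaveRead.countDown();
            try { bothHaveRead.await(); } catch (InterruptedException e) { return; }
            for (int id : mine) {
                processed.add(Thread.currentThread().getName() + ":" + id); // "send MQ message"
                table.put(id, true);                    // ...before either sets the flag
            }
        };

        Thread a = new Thread(worker, "A"), b = new Thread(worker, "B");
        a.start(); b.start();
        a.join(); b.join();

        // 5 rows, but 10 processing events: every MQ message went out twice.
        System.out.println(processed.size());   // prints 10
    }
}
```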
David, I am intrigued... I can't say I have an answer directly, but I have been thinking on this for a while. It has all the bearings of a catch-22 situation. First thoughts:

- Since we are dealing with large record sets, would something of this nature help: loop through the result set and first build a Vector, then close the result set. No updates are performed here. Now loop through the Vector, perform the ancillary activities (send the MQ message, etc.), and then, based on the success of that job, issue an update to the table: a simple single SQL statement, or a single update to an entity bean. The advantage is that locking can be minimised, since you are only reading first, not doing any updates at all. The disadvantage is that you need a large footprint of the result set in memory, and this might lead to complications.

- Process a small part of the result set at a time. Even if the condition is satisfied by 10,000 records, pick, say, only the first 100. Finish processing them, then issue the same query and pick the next batch, and so on, i.e. divide and conquer.

- Check whether the database isolation levels can be modified to allow uncommitted reads, etc. (Not very certain on this point.)

As I said earlier, not a sure-shot black and white answer, but some thoughts. Let me know what your thoughts are; perhaps we might end up getting a solution!
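The divide-and-conquer idea above can be sketched as follows. This is a hedged, in-memory stand-in for the DB2 table (no JDBC involved); `selectUnfulfilled` plays the role of a `SELECT ... WHERE fulfilled = 'N' FETCH FIRST n ROWS ONLY`, and `batchSize` is an assumed tuning parameter. Because each batch is re-queried against the current flag values, only a small slice is ever held in memory at once.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchProcessor {
    // Stand-in for the DB2 table: row id -> fulfilled flag.
    static final Map<Integer, Boolean> table = new LinkedHashMap<>();
    static int queriesIssued = 0;

    // Plays the role of "SELECT id FROM work WHERE fulfilled = 'N'
    // FETCH FIRST :limit ROWS ONLY".
    static List<Integer> selectUnfulfilled(int limit) {
        queriesIssued++;
        List<Integer> rows = new ArrayList<>();
        for (Map.Entry<Integer, Boolean> e : table.entrySet()) {
            if (!e.getValue()) {
                rows.add(e.getKey());
                if (rows.size() == limit) break;
            }
        }
        return rows;
    }

    static void doWork(int id) { /* send the MQ message for this row here */ }

    public static void processAll(int batchSize) {
        List<Integer> batch;
        // Re-query after each batch; stop when nothing unflagged remains.
        while (!(batch = selectUnfulfilled(batchSize)).isEmpty()) {
            for (int id : batch) {
                doWork(id);
                table.put(id, true);   // "UPDATE work SET fulfilled = 'Y' WHERE id = ?"
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) table.put(i, false);
        processAll(100);
        System.out.println(queriesIssued);  // 10 full batches + 1 final empty check = 11
    }
}
```

Note that batching shrinks the memory footprint but does not by itself close the race between two processes; the window between a batch's read and its flag updates is smaller, not gone.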
I think you need to play with the transaction isolation level on your Connection object (what Nagendra suggested). I know that DB2 drivers can be a little finicky (throw Exceptions) when you increase to higher isolation levels, depending on your DB2 settings, though. Jamie
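For reference, the isolation level is set on the JDBC `Connection` before the query runs. This is only a sketch with a placeholder connection; which levels DB2 actually accepts depends on the driver and database settings, as Jamie notes, and an unsupported level is exactly where the Exceptions he mentions tend to appear.

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationExample {
    // The connection itself would come from DriverManager or a DataSource;
    // a real DB2 URL looks roughly like jdbc:db2://host:port/db (placeholder).
    static void configure(Connection conn) throws SQLException {
        // Raise toward TRANSACTION_REPEATABLE_READ / TRANSACTION_SERIALIZABLE
        // for stricter locking, or lower to TRANSACTION_READ_UNCOMMITTED
        // (DB2's "UR") to allow dirty reads.
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        // Group the read and the flag update into one transaction so the
        // lock is not dropped between the SELECT and the UPDATE.
        conn.setAutoCommit(false);
    }
}
```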