I have a scenario where, while iterating over a ResultSet, partitions in the underlying table may get deleted by a
scheduled purging activity. ORA-01410: invalid ROWID is thrown if a row in the ResultSet gets dropped due to partition deletion.
I would like to handle this and continue to the next row of the ResultSet.
Will calling next() on a ResultSet, for which a previous next() call has raised an exception, work or not?
I am still in the process of unit testing it. This is how I try to handle it:
moveNext() method definition
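Roughly, the idea is a helper like the following sketch (a minimal illustration, not the real method; the error-code check and the bounded retry are my own choices, and whether the driver leaves the cursor usable after the exception is exactly what I am trying to verify):

```java
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeCursor {
    // Oracle vendor error code for ORA-01410
    private static final int ORA_INVALID_ROWID = 1410;

    /**
     * Advances the ResultSet, skipping rows whose fetch fails with
     * ORA-01410 (row gone because its partition was dropped).
     * Returns false when the cursor is exhausted. Any other
     * SQLException is rethrown. Gives up after maxSkips consecutive
     * failures so a genuinely broken cursor does not loop forever.
     */
    public static boolean moveNext(ResultSet rs, int maxSkips) throws SQLException {
        int skips = 0;
        while (true) {
            try {
                return rs.next();
            } catch (SQLException e) {
                if (e.getErrorCode() != ORA_INVALID_ROWID || ++skips > maxSkips) {
                    throw e; // not the error we handle, or cursor looks broken
                }
                // row vanished under us -- try the next one
            }
        }
    }
}
```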
I am using Oracle 10 and JDK 1.6.
The exception stack trace is:
java.sql.SQLException: ORA-01410: invalid ROWID
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
Just to clarify: does the "partition deletion" mean dropping a partition from the table? Are there any global indexes on the table? How are they maintained during the drop?
Anyway, you're probably treading in muddy waters. I'm afraid the exact behavior may pretty well be undefined in this situation, as it depends both on the database and the JDBC driver. I personally would not rely on the subsequent operations to be 100% correct after the ORA-01410.
I'd also suggest reviewing your processes. Partitions are usually expected to be dropped when they are no longer in any use. You should either modify the criteria of your query to skip rows from partitions that are eligible for purging (this might actually speed up your query, as partition pruning might kick in), or modify the purging to postpone dropping partitions so that your regular processing has enough time to process all the records.
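For illustration only - assuming, hypothetically, that the table is range-partitioned on a date column named CREATED_DATE and that partitions older than some retention window are the ones being purged - the query-side filter I have in mind would look something like:

```sql
-- hypothetical sketch: restrict the scan to partitions that are not
-- yet eligible for purging, so partition pruning can kick in
SELECT *
  FROM TARGETTABLE
 WHERE KEYCOLUMN = :value
   AND CREATED_DATE >= SYSDATE - :retention_days
```

The exact predicate depends on how your purge decides which partitions to drop, of course.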
As things currently stand, even if your code is made to handle the exception reliably, whether a row from an about-to-be-dropped partition gets processed depends on the timing of the scheduled purge. At first blush, it does not look like a good design.
If the partition gets dropped in the middle of your processing, then you have some serious problems.
1. You are processing data which probably is not supposed to be processed (since the purge does not wait for the processing to be over).
2. Or, you are prone to losing some data in the midst of your processing: the purge runs while you are still reading, and thus the unread rows are "lost".
As Martin has suggested, this does not look like a very good design, and you probably need to revisit it.
Are there any global indexes on the table?
No, there are no global indexes on the table, only a local index on a KEYCOLUMN.
Partitions are usually expected to be dropped when they are no longer in any use. You should either modify the criteria of your query to skip rows from partitions that are eligible for purging (this might actually speed up your query, as partition pruning might kick in), or modify the purging to postpone dropping partitions so that your regular processing has enough time to process all the records
The partitions being dropped are indeed no longer required. Filtering (skipping rows) is done in the application; if the criteria of the query were modified to skip those rows, the query would become expensive. The current query is as simple as
select * from TARGETTABLE WHERE KEYCOLUMN='VALUE' - which returns rows from multiple partitions.
Partition purging has to be done online; any downtime is to be avoided.
I have yet to reproduce this issue in testing.
Sure, I will revisit the design.
I believe that the query with filtering can be tuned not to be expensive. If the filtering can be done efficiently in the application, it must be even more efficiently doable in the database (even if you had to join in a table or two), because you'll save LIO and network round trips for the filtered-out rows. The plan of the query seems to be an index access, and since the index is locally partitioned, partition pruning should certainly work in your case.
Maybe you need to add partition key columns into that local index, so that the row filtering criteria can be answered from the index - that way the cost of the query should not increase due to additional table access. You might even be able to add them to the front of the index (depending on your other queries) and use index compression - that way the index size might not increase at all. Just guessing, it all depends on your schema, of course.
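Again just guessing at names - with CREATED_DATE standing in for your partition key column - the index change I mean would be along these lines:

```sql
-- hypothetical: put the partition key at the front of the local index
-- and compress the repetitive leading column
CREATE INDEX TARGETTABLE_LIX
    ON TARGETTABLE (CREATED_DATE, KEYCOLUMN)
    LOCAL COMPRESS 1
```

Whether the leading column order and the compression actually pay off depends on your other queries and data distribution.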
Also remember that dropping a partition is a DDL operation, and as such it is not protected by Oracle's multiversioning. If your partitions age out fast, it is possible that even with the filtering criteria in the query, a partition that was there when the processing started will already be gone by the time you get to process its rows. If that is the case, you should protect yourself against this possibility somehow (for example, by delaying the drop by the maximum time it takes to process the rows).