How to handle java.sql.SQLException: ORA-01410: invalid ROWID

chandrakant karale
Ranch Hand

Joined: Nov 21, 2007
Posts: 41
I have a scenario where, while iterating over a result set, partitions in the underlying table may get dropped by a scheduled purging activity. ORA-01410: invalid ROWID is thrown when a row in the result set disappears because its partition was dropped.
I would like to handle this and continue with the next row of the result set.

Will calling next() again on a result set work after a previous next() call has raised this exception?
I am still in the process of unit testing this.
This is how I try to handle it:



moveNext() method definition
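For illustration, a minimal sketch of the kind of moveNext() wrapper described here; the class name, the error-code check and the bounded retry are assumptions, not the actual code from this post:

import java.sql.ResultSet;
import java.sql.SQLException;

public class ResultSetCursor {

    // ORA-01410 is reported by the Oracle driver as vendor error code 1410.
    private static final int ORA_INVALID_ROWID = 1410;

    private final ResultSet rs;

    public ResultSetCursor(ResultSet rs) {
        this.rs = rs;
    }

    // Advances the cursor; if next() fails with ORA-01410 (the row's partition
    // was dropped), it retries a few times instead of aborting the whole loop.
    // Whether the retried next() really moves past the vanished row depends on
    // the driver, so this is only a sketch of the attempted handling.
    public boolean moveNext() throws SQLException {
        for (int attempts = 0; attempts < 3; attempts++) {
            try {
                return rs.next();
            } catch (SQLException e) {
                if (e.getErrorCode() != ORA_INVALID_ROWID) {
                    throw e; // some other problem - do not swallow it
                }
                // otherwise loop and try to advance again
            }
        }
        return false; // give up after repeated ORA-01410 failures
    }
}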



I am using Oracle 10g and JDK 1.6.
The exception stack trace is:

java.sql.SQLException: ORA-01410: invalid ROWID
    at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:133)
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:206)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:455)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:413)
    at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:1034)
    at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
    at oracle.jdbc.driver.T4CStatement.fetch(T4CStatement.java:1006)
    at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
    at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
    at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:39)
    at java.lang.reflect.Method.invoke(Method.java:612)
    at oracle.ucp.jdbc.proxy.ResultSetProxyFactory.invoke(ResultSetProxyFactory.java:190)
    at $Proxy2.next(Unknown Source)


Martin Vajsar
Sheriff

Joined: Aug 22, 2010
Posts: 3610

Just to clarify: does "partition deletion" mean dropping a partition from the table? Are there any global indexes on the table? How are they maintained during the drop?

Anyway, you're probably wading in muddy waters. I'm afraid the exact behavior may well be undefined in this situation, as it depends on both the database and the JDBC driver. I personally would not rely on subsequent operations being 100% correct after an ORA-01410.

I'd also suggest reviewing your processes. Partitions are usually expected to be dropped when they are no longer in any use. You should either modify the criteria of your query to skip rows from partitions that are eligible for purging (this might actually speed up your query, as partition pruning might kick in), or modify the purging to postpone dropping partitions so that your regular processing has enough time to process all the records.
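For illustration only, a minimal JDBC sketch of such a filter; the partition key column PART_DATE and the 7-day retention window are assumptions, not details from this thread:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Timestamp;

public class PurgeAwareQuery {

    // Restrict the query to partitions that are not yet eligible for purging,
    // so partition pruning can skip the ones the purge job is about to drop.
    private static final String SQL =
        "SELECT * FROM TARGETTABLE WHERE KEYCOLUMN = ? AND PART_DATE >= ?";

    private static final long SEVEN_DAYS_MS = 7L * 24 * 60 * 60 * 1000;

    public static void process(Connection conn, String keyValue) throws SQLException {
        // Assumed 7-day retention window; it must match the real purge schedule.
        Timestamp cutoff = new Timestamp(System.currentTimeMillis() - SEVEN_DAYS_MS);
        PreparedStatement ps = conn.prepareStatement(SQL);
        try {
            ps.setString(1, keyValue);
            ps.setTimestamp(2, cutoff);
            ResultSet rs = ps.executeQuery();
            try {
                while (rs.next()) {
                    // process the row ...
                }
            } finally {
                rs.close();
            }
        } finally {
            ps.close();
        }
    }
}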

In the current state of affairs, even if the exception were handled reliably, whether a row from an about-to-be-dropped partition gets processed would depend on the timing of the scheduled purge. At first blush, it does not look like a good design.

Sudheer Bhat
Ranch Hand

Joined: Feb 22, 2011
Posts: 75
I completely agree with Martin.

If a partition gets dropped in the middle of your processing, then you have some serious problems:
1. You are processing data which probably is not supposed to be processed (since the purging does not wait for the processing to be over).
2. Or, you are prone to losing some data in the midst of your processing: the purge runs while you are still reading rows, so the unread rows are "lost".

As Martin has suggested, this does not look like a very good design, and you probably need to revisit it.

chandrakant karale
Ranch Hand

Joined: Nov 21, 2007
Posts: 41
Are there any global indexes on the table?


No, there are no global indexes on the table, only a local index on KEYCOLUMN.

Partitions are usually expected to be dropped when they are no longer in any use. You should either modify the criteria of your query to skip rows from partitions that are eligible for purging (this might actually speed up your query, as partition pruning might kick in), or modify the purging to postpone dropping partitions so that your regular processing has enough time to process all the records


The partitions that are being dropped are indeed no longer required. Filtering (skipping rows) is done in the application. If the criteria of the query are modified to skip the rows, it becomes expensive. The current query is as simple as
select * from TARGETTABLE WHERE KEYCOLUMN = 'VALUE'
which returns rows from multiple partitions.

Partition purging has to be done online; any downtime is to be avoided.

I have yet to reproduce this issue in a test environment.
Sure, I will revisit the design.

Thanks a lot.



Martin Vajsar
Sheriff

Joined: Aug 22, 2010
Posts: 3610

I believe that the query with filtering can be tuned not to be expensive. If the filtering can be done efficiently in the application, it must be doable even more efficiently in the database (even if you had to join in a table or two), because you'll save logical I/O and network round trips for the filtered-out rows. The plan of your query appears to be an index access, and since the index is locally partitioned, partition pruning should certainly work in your case.

Maybe you need to add the partition key columns to that local index, so that the row-filtering criteria can be answered from the index - that way the cost of the query should not increase due to the additional table access. You might even be able to add them to the front of the index (depending on your other queries) and use index compression - that way the index size might not increase at all. Just guessing; it all depends on your schema, of course.
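A hypothetical example of such an index; the index name, the PART_DATE partition key column and the COMPRESS 1 setting are assumptions, not the actual schema:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class IndexSetup {

    // Partition key first, so the partition filter can be answered from the
    // index; COMPRESS 1 compresses the repeating leading column; LOCAL keeps
    // the index equipartitioned with the table, so its partition is dropped
    // together with the table partition.
    private static final String CREATE_INDEX =
        "CREATE INDEX TARGETTABLE_LIX ON TARGETTABLE (PART_DATE, KEYCOLUMN) "
      + "COMPRESS 1 LOCAL";

    public static void createIndex(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        try {
            stmt.execute(CREATE_INDEX);
        } finally {
            stmt.close();
        }
    }
}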

Also remember that dropping a partition is a DDL operation, and as such it is not protected by Oracle's multiversioning. If your partitions age out quickly, it is possible that, even with the filtering criteria in the query, a partition that was there when the processing started will already be gone by the time you get to process its rows. If that is the case, you should protect yourself against this possibility somehow (for example, by delaying the drop by the maximum time it takes to process the rows).
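A rough sketch of that kind of delay, with made-up figures for the retention period and the processing grace period:

import java.util.Calendar;
import java.util.Date;

public class PurgeSchedule {

    // Assumed values - they must reflect the real retention policy and the
    // longest observed processing run (rounded up to whole days here).
    private static final int RETENTION_DAYS = 30;
    private static final int PROCESSING_GRACE_DAYS = 1;

    // Partitions holding only data older than the returned date may be dropped;
    // the grace period gives a run started just before the purge time to read
    // all of its rows first.
    public static Date dropCutoff() {
        Calendar cal = Calendar.getInstance();
        cal.add(Calendar.DAY_OF_MONTH, -(RETENTION_DAYS + PROCESSING_GRACE_DAYS));
        return cal.getTime();
    }
}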
 