I am working on a Java e-commerce application with an Oracle 9i database. All transactions are logged in Table-A on Server-A. The requirement is that all transactions that are more than 60 days old should be moved to Table-B on Server-B. The solution should run every day at a fixed time.
1: The structures of Table-A and Table-B are identical, with no partitions.
2: Server-A and Server-B are two different machines running identical Oracle databases.
3: The data moved from A to B will be in the thousands of records.
This does not seem to be that complicated. Thousands of records should not pose any significant problems; that's a tiny amount of data by today's standards.
If you do the obvious and use a Java application to select data from Server-A and insert it on Server-B, make sure that you're using Oracle's own update batching mechanism, or use the Oracle 11g driver, which provides an efficient mechanism even for standard JDBC update batching. Oracle JDBC drivers older than 11g do not deliver the best performance with standard JDBC update batching, and the difference can be significant. We've got a few notes on driver compatibility and update batching in our Oracle FAQ.
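To make the batching idea concrete, here is a minimal sketch of the Java approach using standard JDBC update batching. The table and column names (table_a, table_b, txn_id, txn_date, amount) and the batch size are made-up illustrations; adjust them to your schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Sketch: copy rows older than 60 days from Table-A (Server-A) to
 * Table-B (Server-B) using standard JDBC update batching.
 * Schema names are hypothetical.
 */
public class TransactionArchiver {

    static final int BATCH_SIZE = 500; // flush every 500 rows

    /** True when a batch should be flushed after adding the n-th row (1-based). */
    static boolean shouldFlush(int rowsAdded) {
        return rowsAdded % BATCH_SIZE == 0;
    }

    /** srcConn points at Server-A, destConn at Server-B. */
    static void archive(Connection srcConn, Connection destConn) throws SQLException {
        destConn.setAutoCommit(false);
        String select = "SELECT txn_id, txn_date, amount FROM table_a "
                      + "WHERE txn_date < SYSDATE - 60";
        String insert = "INSERT INTO table_b (txn_id, txn_date, amount) VALUES (?, ?, ?)";
        try (Statement sel = srcConn.createStatement();
             ResultSet rs = sel.executeQuery(select);
             PreparedStatement ins = destConn.prepareStatement(insert)) {
            int rows = 0;
            while (rs.next()) {
                ins.setLong(1, rs.getLong("txn_id"));
                ins.setTimestamp(2, rs.getTimestamp("txn_date"));
                ins.setBigDecimal(3, rs.getBigDecimal("amount"));
                ins.addBatch();
                if (shouldFlush(++rows)) {
                    ins.executeBatch(); // one round trip for the whole batch
                }
            }
            ins.executeBatch();         // flush the remainder
            destConn.commit();
        }
    }
}
```

Once the copy has committed on Server-B, the corresponding rows can be deleted from Table-A in a separate statement; without batching, each row would cost its own network round trip.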
If you want to improve performance further, I can see two clear possibilities:
1) Create a database link between the two databases and copy the data with a single INSERT INTO ... SELECT statement (possibly using the /*+ APPEND */ hint, but learn the ramifications first: if you delete data from Table-B, this approach would never reuse the space freed by deleting the old data). This makes the data travel over the network just once; if you copy it using Java on an application server, the data travels over the network twice. Check the documentation for limits and restrictions associated with database links; I believe either LOBs or LONGs could pose some problems.
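A sketch of what that single-statement copy might look like, driven from a daily Java job connected to Server-A. The database link name (serverb_link) and table/column names are assumptions for illustration, and whether the /*+ APPEND */ hint actually takes effect over a link should be verified against your Oracle version:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Sketch: copy old rows to Server-B over a database link in one
 * statement, then delete them locally. Link and table names are
 * hypothetical.
 */
public class DbLinkArchiver {

    /** Builds the INSERT ... SELECT copying rows older than cutoffDays. */
    static String buildCopySql(String dbLink, int cutoffDays) {
        return "INSERT /*+ APPEND */ INTO table_b@" + dbLink
             + " SELECT * FROM table_a WHERE txn_date < SYSDATE - " + cutoffDays;
    }

    /** Builds the matching local DELETE. */
    static String buildDeleteSql(int cutoffDays) {
        return "DELETE FROM table_a WHERE txn_date < SYSDATE - " + cutoffDays;
    }

    /** Runs against Server-A only; the link carries the data to Server-B. */
    static void archive(Connection serverA) throws SQLException {
        serverA.setAutoCommit(false);
        try (Statement stmt = serverA.createStatement()) {
            stmt.executeUpdate(buildCopySql("serverb_link", 60));
            stmt.executeUpdate(buildDeleteSql(60));
            serverA.commit(); // one distributed transaction over the link
        } catch (SQLException e) {
            serverA.rollback();
            throw e;
        }
    }
}
```

Note that the application server never touches the rows here; the data flows directly from Server-A to Server-B over the link.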
You could probably also code a Java stored procedure on one of the databases, which would connect to the other database and copy the data by SELECTing and then INSERTing it. This would obviously eliminate one of the two network journeys for the data. I believe this (that is, connecting to another server from a Java stored procedure) should be possible, but I have no experience with it. Also, I moved away from 9i long ago.
2) Partition Table-A by transaction date and erase old data using ALTER TABLE ... DROP PARTITION, which is instantaneous and does not generate undo/redo for the data. You'll have to set up a process to add new partitions on the fly, but that could all be part of the process that moves the data. This may significantly decrease the load and undo/redo space utilization on Server-A.
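As a rough illustration of the partition-maintenance part, the daily job could generate the ADD/DROP PARTITION statements from the calendar. This assumes Table-A were range-partitioned by month on a txn_date column; the partition naming scheme (P_YYYYMM) is invented for the example:

```java
import java.time.YearMonth;
import java.time.format.DateTimeFormatter;

/**
 * Sketch: generate the DDL a daily job would issue for a table
 * range-partitioned by month. Table name, column, and the P_YYYYMM
 * naming convention are hypothetical.
 */
public class PartitionMaintenance {

    static final DateTimeFormatter TAG = DateTimeFormatter.ofPattern("yyyyMM");

    /** Name of the partition holding a given month's transactions. */
    static String partitionName(YearMonth month) {
        return "P_" + month.format(TAG);
    }

    /** Adds the partition for a month, with an exclusive upper bound. */
    static String addPartitionSql(YearMonth month) {
        YearMonth next = month.plusMonths(1);
        return "ALTER TABLE table_a ADD PARTITION " + partitionName(month)
             + " VALUES LESS THAN (TO_DATE('" + next.atDay(1) + "', 'YYYY-MM-DD'))";
    }

    /** Drops a whole month's rows at once -- no per-row undo/redo. */
    static String dropPartitionSql(YearMonth month) {
        return "ALTER TABLE table_a DROP PARTITION " + partitionName(month);
    }
}
```

The move itself would still copy the expiring partition's rows to Server-B first (e.g. over the database link), and only then drop the partition.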
Thank you for the response.
The application uses EJB 3.0 and JPA for data access and persistence. I had thought of implementing a solution using JPA; however, on further study I realized it may not be an efficient way to archive and then purge large amounts of data.
I will first try to implement a solution using JDBC update batching, and then try out the database-side solutions.