I have a requirement to delete 10 years of data from a database. The deletion is triggered through a Linux cron job scheduler that calls a PL/SQL procedure. There are around one lakh (100,000) records to be deleted each time the scheduler triggers the procedure, and the scheduler runs once every hour. With that many records, there will definitely be a performance hit on the database side.
Is there a way to break this data into smaller chunks so that the deletion does not cause any issues? Can this be handled on the database side within the procedure, or should it be done through the scheduler script?
Can anyone throw some light on this? Excuse me if this is not the right section to post in.
Are all the records to be deleted in the same table? If so, you can use a WHERE clause to introduce some criteria rather than deleting all 10 years of data at once; see the sketch below. You can do the same with multiple tables, though that means writing a WHERE clause for each one.
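For example, a minimal sketch of restricting each run to one month's worth of rows; the table and column names (audit_log, created_date) are placeholders, not from the original post:

-- Purge one month per run instead of all 10 years at once
DELETE FROM audit_log
 WHERE created_date >= DATE '2013-01-01'
   AND created_date <  DATE '2013-02-01';

Each scheduled run could then advance the date window until the whole 10-year range has been worked through.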
I don't see why there would be a pull on the scheduler, though. Once a job is kicked off, it runs asynchronously from the scheduler. Assuming the job is scheduled for the middle of the night, what would a large database load be impacting?
Since a huge amount of data would be deleted through the procedure, I wanted to know whether there would be a hit to database resources, as many concurrent applications use the database. Will this activity pull DB resources?
Can breaking into chunks be handled at the query level by looping over a certain number of records and committing at the end of each loop iteration? Does this help, or do I need to edit the scheduler script instead? Please let me know which would be the best solution.
You can always loop in the PL/SQL procedure itself. But if you delete from the system this often, you may want to consider partitioning the data: if an entire partition is to be deleted, you can simply run a TRUNCATE statement against that partition, which is far faster than DELETE.
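A minimal sketch of such a loop, again assuming a placeholder table audit_log with a created_date column (the 5,000 batch size is arbitrary):

CREATE OR REPLACE PROCEDURE purge_old_rows IS
BEGIN
  LOOP
    -- Delete at most 5,000 rows older than 10 years per iteration
    DELETE FROM audit_log
     WHERE created_date < ADD_MONTHS(SYSDATE, -120)
       AND ROWNUM <= 5000;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- stop once nothing is left to delete
    COMMIT;                      -- commit per batch to keep undo and lock time small
  END LOOP;
  COMMIT;
END purge_old_rows;
/

For the partition approach, assuming the table is range-partitioned by created_date, dropping a month at a time would look something like:

ALTER TABLE audit_log TRUNCATE PARTITION p_2013_01;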