I had to take over someone else's code..
He made a stored procedure using Oracle 10g, but there is a little problem.
In this proc, there are two UPDATEs on the same row.
The first sets the fields to NULL, to make sure that if there is no update, the fields are reset.
The second one updates the fields if there are any values.
The problem is: the order of execution seems random. Sometimes the row is first updated to the good values, then reset..
Even weirder, it depends on who executes the stored proc :|
When I look at the timestamps to see when each update was done, both show exactly the same time.
I even tried adding a COMMIT after the first update, but it doesn't change anything..
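Simplified, the proc does something like this (table and column names are made up here, the real proc is more complex and fills TableA from TableB):

```sql
CREATE OR REPLACE PROCEDURE refresh_tablea IS
BEGIN
  -- step 1: reset the fields on every row
  UPDATE TableA SET col1 = NULL, col2 = NULL;

  -- step 2: fill in the fields for rows that have a match in TableB
  UPDATE TableA a
     SET (col1, col2) = (SELECT b.col1, b.col2
                           FROM TableB b
                          WHERE b.id = a.id)
   WHERE EXISTS (SELECT 1 FROM TableB b WHERE b.id = a.id);
END;
```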
You don't need to use PL/SQL; it can be done in a single MERGE operation.
If something can be done in pure SQL, do it in SQL, don't use PL/SQL; it will almost always be more efficient and less error-prone.
The database is much better and faster at selecting/joining/crunching rows than at fetching, looping over and updating them manually in procedural code.
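For example, something along these lines (table and column names are invented for illustration, since the original proc isn't shown; the LEFT JOIN in the source makes unmatched rows come back as NULL, which replaces the separate "reset" UPDATE):

```sql
MERGE INTO TableA a
USING (SELECT a2.id, b.col1, b.col2
         FROM TableA a2
         LEFT JOIN TableB b ON b.id = a2.id) s
   ON (a.id = s.id)
 WHEN MATCHED THEN
   UPDATE SET a.col1 = s.col1,
              a.col2 = s.col2;
```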
This MERGE statement is executed by the database as a single atomic operation, even if it applies to hundreds of millions of rows.
Of course you can wrap it into a PL/SQL procedure:
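For instance (again with invented names, as a sketch):

```sql
CREATE OR REPLACE PROCEDURE refresh_tablea IS
BEGIN
  -- single atomic statement: matched rows get values from TableB,
  -- unmatched rows get NULL via the LEFT JOIN
  MERGE INTO TableA a
  USING (SELECT a2.id, b.col1, b.col2
           FROM TableA a2
           LEFT JOIN TableB b ON b.id = a2.id) s
     ON (a.id = s.id)
   WHEN MATCHED THEN
     UPDATE SET a.col1 = s.col1,
                a.col2 = s.col2;
  COMMIT;
END;
```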
The basic question is: what is the reason to do it this way? Why always set all rows in this table to NULL first?
And how many rows do both tables have, how frequently could this code be executed, and how many users could try to execute it simultaneously?
The reason to do it this way is that it was written a long time ago by someone else ^^
I didn't post the complete procedure. In the real one, TableA is partly filled in from TableB.
It's something weird, I agree.. We already replaced the code and it's now working better. But still, I was so curious that I needed to understand
why the values were filled in before the NULL.. :/
It's not logical; it looks impossible, since we first set the fields to NULL and then enter the values.
They are first set to NULL because if there are no entries, we don't want to keep the old values.