I have a table that contains my application's transaction entries, and I created a history table for it. I created a sequence for this history table, and a trigger that inserts a row into the history table (using the sequence and SYSDATE as the insert date) on every insert or update on my transaction table. But my history table has wrong sequence entries.
For example, one record in my history table has sequence number 10 and insert date 22.11.2012 11:00 PM,
while another record has sequence number 12 and insert date 22.11.2012 10:49 PM. How is this possible?
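For reference, a setup like the one you describe typically looks something like this — note that the table, column, and trigger names here are my guesses, since you haven't posted your actual DDL:

```sql
-- Hypothetical schema; substitute your real table and column names.
CREATE SEQUENCE his_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER trx_history_trg
  AFTER INSERT OR UPDATE ON transaction_table
  FOR EACH ROW
BEGIN
  INSERT INTO transaction_history (his_id, trx_id, amount, create_date)
  VALUES (his_seq.NEXTVAL, :NEW.trx_id, :NEW.amount, SYSDATE);
END;
/
```

If your trigger differs from this (for example, if it fetches NEXTVAL separately, or runs in a different timing), please post it — that detail matters for diagnosing the ordering.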
Welcome to Java Ranch. I have split this post into a new topic in our Oracle forum. You may want to post additional details about the trigger and sequence you have defined so our Oracle experts will have a little more to work with.
As Martin says, don't worry about the actual value of the sequence, but you can rely on it being unique.
Just FYI, you may also find that sequence values jump and leave gaps, e.g. if you have defined the sequence to cache values. This is often useful if you are creating a lot of records with a big SQL INSERT where you use the sequence to generate new key values. Caching speeds things up because Oracle can fetch, say, 1000 sequence values in one call and use them for the inserts, instead of having to fetch each value from the sequence individually. The next call to the sequence takes the next 1000 values, and so on. But the cached values have been "used up", even if you don't actually insert 1000 records, so the sequence values may appear to jump between one batch of inserted records and the next.
Caching sequence values can make a big difference in the performance of your SQL, because each call to the sequence is treated like a "SELECT" statement and is relatively slow. For example, on a recent project we were writing millions of records, so telling the sequence to cache 500,000 values at a time made things run much faster. Of course, caching the sequence values takes up memory, but it's a trade-off, and numbers don't take up much space anyway.
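As a sketch, caching is just an option on the sequence definition (sequence names here are illustrative):

```sql
-- Each fetch from the sequence grabs 1000 values into the cache.
CREATE SEQUENCE big_insert_seq CACHE 1000;

-- NOCACHE forces a fetch per call: safer against gaps, slower under load.
CREATE SEQUENCE audit_seq NOCACHE;
```

Values left unused in the cache are lost on instance restart or when the cache entry is aged out of the shared pool, which is one of the reasons gaps appear between batches.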
Also, don't worry about running out of sequence numbers even if you skip some values, as the maximum value a sequence can return is 10^27 (http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_6015.htm), which is a really big number e.g. if you'd started consuming 2 billion sequence values per second just after the Big Bang, you still wouldn't have run out of values yet.
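The back-of-the-envelope arithmetic behind that Big Bang claim, taking the age of the universe as roughly 13.8 billion years:

    13.8e9 years x ~3.15e7 seconds/year ≈ 4.3e17 seconds
    4.3e17 seconds x 2e9 values/second  ≈ 8.7e26 values, which is < 10^27

So even at two billion values per second for the whole age of the universe, the sequence wouldn't be exhausted.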
Just trust the sequence to do its thing and don't worry about it.
Thanks for your replies. Martin, I can use the date/timestamp for ordering my table, but in some cases the order is still incorrect even when we order by date.
for example --
if I have two tasks, say Task 1 and Task 2:
Task 1 changes the reserve amount in my table.
Task 2 also changes the reserve amount.
By my application flow you can't perform Task 2 without completing Task 1, i.e. the Task 1 amount is used to calculate the Task 2 amount.
But when I order by date, my table shows the Task 2 entry first and the Task 1 entry second.
If I use the date for ordering, the DWBI report can't be generated for this kind of scenario, and we can't assure DWBI that this won't happen again.
I have also attached my table export to this post. Kindly help me resolve this sequence issue permanently.
There are two attachments: one uses his_id for ordering and the other uses create_date.
Well, this is expected behavior in Oracle. Sequence values can end up being assigned in what looks like an "incorrect" order.
However, there is another issue: how do you know that Task 1 has to be processed before Task 2? I assume these numbers come from a sequence, and therefore, as you cannot rely on them being assigned in increasing order, they do not reflect the order in which tasks should be processed. I'd expect the tasks to be processed in the order in which they were created, and that is what the timestamp column is for. Simply put: for all purposes, including task processing, order the tasks by their creation time.
If the order of the tasks is determined by some external process or property, you should add a column (say, processing_order), have the process that creates the tasks populate this column correctly, and then process the tasks ordered by this column.
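For example (table and column names assumed here, since I haven't seen your schema):

```sql
-- Add an explicit ordering column, assigned by the task-creating process.
ALTER TABLE task_table ADD (processing_order NUMBER);

-- Consumers then read tasks in that explicit order, not by his_id or date:
SELECT *
  FROM task_table
 WHERE processed = 'N'
 ORDER BY processing_order;
```

The key design point is that the ordering is assigned deliberately by whatever knows the true dependency between tasks, rather than inferred from a sequence or a timestamp.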
There is another potential issue, which arises independently of the mechanism you use to order the tasks. I'll try to demonstrate it with an example:
Say that you have a process that executes the tasks. This process looks into the database periodically (say, once per second), and if it finds unprocessed tasks there, it picks up the oldest unprocessed task, marks it as being executed, and starts the processing. When finished, it marks the task as done and looks for another task.
Now, say that two independent processes create two tasks (let's call them 'A' and 'B') at roughly the same time. Task A was created earlier (and has an older timestamp), but for some reason the session that created this task commits after the session that created Task B. By coincidence, the executing process looks into the database right after Task B was committed, but before Task A was committed. In this case, the executor will see only unprocessed Task B and will start processing it. When done, it will look again, see unprocessed Task A, and process it. Thus, the tasks end up being processed in an order different from their creation times!
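A rough timeline of that race, written out step by step (schema and statuses invented for illustration):

```sql
-- t0  Session 1: INSERT task A (create_date = t0)  -- not yet committed
-- t1  Session 2: INSERT task B (create_date = t1, t1 > t0)
-- t2  Session 2: COMMIT                            -- B now visible to others
-- t3  Executor polls: sees only B (A is still uncommitted), starts B
-- t4  Session 1: COMMIT                            -- A finally visible
-- t5  Executor polls again: sees A, processes it second
```

Because Oracle's read consistency only shows committed data, no amount of clever ordering in the executor's query can see Task A before its session commits.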
There isn't an easy remedy for this situation. If there is some other mechanism that defines the order in which tasks should be processed, you need to incorporate that mechanism into your data model and task-processing logic. If there isn't — well, in that case you cannot even decide whether one particular ordering of tasks is wrong or right.