Hello,
I have an Oracle table from which rows are retrieved every 10 minutes. Another application inserts rows into the same table. There is a timestamp field, and I am planning to use it to retrieve new rows. I am concerned about transactions: there might be a point where the other application inserts a row just as I am retrieving, so I would miss that row even though I check the timestamp. Can somebody shed some light on this?
Also, when I retrieve rows and process them, the processing might be faulty, so a row may have to be fetched again later.
I am thinking of using a new table. Please suggest some ideas regarding my design.
P.S.
I am very interested in such scenarios, and I would like to know how to deal with them at the enterprise level (I wonder what the design would be in time-critical applications where multiple applications insert and retrieve rows on a single table using timestamps).|||Don't add a new table to your schema. It will only make things more complicated than they should be.
I suggest the following:
1. Add a column to your table, e.g.: process_timestamp DATE DEFAULT TO_DATE('01.01.1901','DD.MM.YYYY') NOT NULL. (Note that in Oracle the DEFAULT clause must come before NOT NULL.) You can pick any default value you want, but don't use NULL.
2. The program that retrieves rows should read only those with process_timestamp = TO_DATE('01.01.1901','DD.MM.YYYY').
3. If processing succeeds, update process_timestamp to SYSDATE. If it fails, leave it unchanged.
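The three steps above can be sketched end to end. This is a minimal illustration using Python's standard sqlite3 module in place of Oracle; the table name `messages`, the `payload` column, and the `poll_and_process` helper are assumptions for the demo, and since SQLite has no DATE type the sentinel is stored as text:

```python
# Sketch of the sentinel-timestamp polling pattern (sqlite3 stands in
# for Oracle; names below are illustrative, not from the original post).
import sqlite3

SENTINEL = "1901-01-01 00:00:00"  # stands in for TO_DATE('01.01.1901', ...)

conn = sqlite3.connect(":memory:")
conn.execute(f"""
    CREATE TABLE messages (
        id INTEGER PRIMARY KEY,
        payload TEXT NOT NULL,
        process_timestamp TEXT NOT NULL DEFAULT '{SENTINEL}'
    )
""")

# The producer inserts rows without mentioning process_timestamp;
# the default sentinel is assigned automatically (point 1 above).
conn.executemany("INSERT INTO messages (payload) VALUES (?)",
                 [("row 1",), ("row 2",), ("row 3",)])

def poll_and_process(conn, process):
    """Fetch rows still carrying the sentinel; mark each one only if
    processing succeeds, so failed rows are retried on the next poll."""
    rows = conn.execute(
        "SELECT id, payload FROM messages WHERE process_timestamp = ?",
        (SENTINEL,)).fetchall()
    for row_id, payload in rows:
        try:
            process(payload)
        except Exception:
            continue  # leave the sentinel unchanged (point 3 above)
        conn.execute(
            "UPDATE messages SET process_timestamp = datetime('now') "
            "WHERE id = ?", (row_id,))
    conn.commit()

# Simulate one poll in which processing 'row 2' fails.
def flaky(payload):
    if payload == "row 2":
        raise ValueError("simulated processing failure")

poll_and_process(conn, flaky)
remaining = [r[0] for r in conn.execute(
    "SELECT payload FROM messages WHERE process_timestamp = ?",
    (SENTINEL,)).fetchall()]
print(remaining)  # only the failed row is still pending
```

Rows inserted after the `SELECT` simply keep the sentinel and are picked up on the next poll, which is why no row can be missed.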
Doing so:
1. You don't have to change anything in the programs that insert rows into your table; the default value of process_timestamp is assigned automatically.
2. You don't have to worry about rows inserted while you are processing; you will pick them up on a later poll.
3. On failure, you can reprocess the rows as many times as necessary.|||Many thanks for the reply!