Now you've made me think (and your math is correct, BTW). This might be 
unrelated and if so, just ignore, but let me explain a simplified version of 
our architecture and ask for some advice regarding transaction handling.

We have an application that almost continuously runs through a set of "rules" 
and processes them. Let's say there are 6000 rules stored in a table. In order 
to process each "rule" the application has to perform a set of selects and 
updates against many other tables (mostly storing warehouse inventory). 
Depending on the outcome of the process, the rule's status, lastexecutetime, 
etc. are updated. These rules are then continuously displayed in a summarized 
view in a dashboard application.

We currently process each rule in its own transaction, so that we can commit 
or roll back depending on the values returned by the other statements. We 
cannot process all the rules in one transaction (unless we use savepoints?). 
Also, the inventory changes constantly, so a snapshot taken at start time no 
longer makes sense by the time we reach rule 5000+.
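For what it's worth, here is a minimal sketch of the savepoint idea: one 
enclosing transaction for the whole batch, with a savepoint per rule so a 
failing rule can be rolled back individually without losing the rest. It uses 
SQLite purely for illustration (the table layout and the `process` function 
are made up, not our real schema), but the SAVEPOINT / ROLLBACK TO / RELEASE 
mechanics are the same syntax PostgreSQL uses:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # take manual control of transactions
cur = conn.cursor()
cur.execute("CREATE TABLE rules (id INTEGER PRIMARY KEY, status TEXT)")
cur.executemany("INSERT INTO rules VALUES (?, 'pending')",
                [(i,) for i in range(1, 7)])

def process(rule_id):
    # Hypothetical stand-in for the real selects/updates:
    # pretend odd-numbered rules fail.
    return rule_id % 2 == 0

cur.execute("BEGIN")  # one transaction for the whole batch
rule_ids = [r[0] for r in cur.execute("SELECT id FROM rules ORDER BY id")]
for rule_id in rule_ids:
    cur.execute("SAVEPOINT rule_sp")
    try:
        if not process(rule_id):
            raise RuntimeError("rule failed")
        cur.execute("UPDATE rules SET status = 'done' WHERE id = ?",
                    (rule_id,))
        cur.execute("RELEASE SAVEPOINT rule_sp")
    except RuntimeError:
        # Undo only this rule's work; the rest of the batch survives.
        cur.execute("ROLLBACK TO SAVEPOINT rule_sp")
        cur.execute("RELEASE SAVEPOINT rule_sp")
        cur.execute("UPDATE rules SET status = 'failed' WHERE id = ?",
                    (rule_id,))
cur.execute("COMMIT")
```

Whether this actually helps with transaction ID consumption is a separate 
question, since (in PostgreSQL at least) a savepoint that performs writes 
opens a subtransaction with an ID of its own.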

Assuming the above makes sense (to someone other than me :-), I do not see 
another way to handle this without continuously consuming transaction IDs. 
Am I missing something?
