> Are the data sensible after, say, 3500 rules are run? If each rule is a
> separate unit of work and you do not require all rules to be run for the data
> to be ready for further use, then it may well make sense to use many
> transactions.
Yes, this is the case - as each rule finishes the ou
Thank you for the advice. Currently our "read-only" transactions are not truly
read-only - they contain the "write" parameter. I will try to change them
tonight (should be possible).
More importantly, then, are you saying that "truly read-only" transactions are
not counted as "interesting" active transactions?
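For what it's worth, here is a minimal sketch (not our actual code) of starting a
genuinely read-only, READ COMMITTED transaction through JDBC/Jaybird; the URL,
user and password are just placeholders. As far as I understand, such a
transaction does not hold back garbage collection the way a read-write one does.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadOnlyExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details - adjust for your environment.
        String url = "jdbc:firebirdsql://localhost:3050/employee";
        try (Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey")) {
            con.setReadOnly(true);   // ask the driver for a truly read-only transaction
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            con.setAutoCommit(false);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM RDB$RELATIONS")) {
                rs.next();
                System.out.println("Tables: " + rs.getInt(1));
            }
            con.commit();            // end even read-only transactions promptly
        }
    }
}

The important bit is setReadOnly(true) combined with READ COMMITTED; the query
and class names above are only for illustration.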
> 'Rule' and shipping it back
> to the database. If anything fails in
> the SQL procedure, or a data error is
> detected by some 'Rule', an exception
> rolls everything back.
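Just to make the "one rule = one transaction" idea concrete, here is a rough
sketch in Java/JDBC. The Rule interface is hypothetical - it stands in for
whatever reads the data, calls the SQL procedure and ships the results back.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.List;

interface Rule {
    // Hypothetical: reads its data, evaluates, writes results back via SQL.
    void apply(Connection con) throws SQLException;
}

public class RuleRunner {
    public static void runAll(String url, String user, String password, List<Rule> rules)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, password)) {
            con.setAutoCommit(false);
            for (Rule rule : rules) {
                try {
                    rule.apply(con);  // one rule = one unit of work
                    con.commit();     // data stays usable after each committed rule
                } catch (SQLException | RuntimeException e) {
                    con.rollback();   // a failing rule undoes only its own changes
                    // log and carry on with the next rule (or rethrow, as policy dictates)
                }
            }
        }
    }
}

Committing per rule is what produces the many short transactions, but each one
ends cleanly, so the OAT can keep moving.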
Now you've made me think (and your math is correct, BTW). This might be
unrelated, and if so just ignore it, but let me explain a simplified version of
our architecture and ask for some advice regarding transaction handling.
We have an application that almost continuously runs through a set of "rule
Again, thank you guys - I've found the issue in an application that started a
"read-only" transaction to update some labels on a form and kept the OAT
stuck.
With my luck, I closed the app and froze the poor server, which is now trying
to sweep 7 days' worth of transactions from all the other tables... ouch.
Thank you very much for your responses - this makes sense. We have a couple of
server-side applications that run constantly, although they should all be
closing their connections. We will investigate further.
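One way to investigate is to ask the monitoring tables (Firebird 2.1 and later)
which attachments own the oldest active transactions. Something along these
lines - connection details are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OldestActive {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:firebirdsql://localhost:3050/mydb";  // placeholder
        String sql =
            "SELECT FIRST 5 t.MON$TRANSACTION_ID, t.MON$TIMESTAMP, " +
            "       a.MON$REMOTE_PROCESS, a.MON$REMOTE_ADDRESS " +
            "FROM MON$TRANSACTIONS t " +
            "JOIN MON$ATTACHMENTS a ON a.MON$ATTACHMENT_ID = t.MON$ATTACHMENT_ID " +
            "WHERE t.MON$STATE = 1 " +            // 1 = active
            "ORDER BY t.MON$TRANSACTION_ID";
        try (Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("txn %d started %s by %s (%s)%n",
                        rs.getLong(1), rs.getTimestamp(2),
                        rs.getString(3), rs.getString(4));
            }
        }
    }
}

MON$REMOTE_PROCESS and MON$REMOTE_ADDRESS usually point straight at the
application holding the transaction open.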
Could this somehow be related to the .NET driver with connection pooling? The
one ap
I have 20+ databases in production, most of them between 3 and 8 GB with no
BLOBs. My problem seems to be that sweeping is not executing when I either
perform a full backup or do full table scans. Below is the gstat header info,
and as you can clearly see, it should have kicked in long ago.
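As far as I know, the automatic sweep is triggered when Oldest Snapshot minus
Oldest (Interesting) Transaction exceeds the sweep interval (20000 by default).
A quick way to check the live counters without parsing gstat output is
MON$DATABASE - a sketch, with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SweepCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:firebirdsql://localhost:3050/mydb";  // placeholder
        String sql = "SELECT MON$OLDEST_TRANSACTION, MON$OLDEST_ACTIVE, " +
                     "MON$OLDEST_SNAPSHOT, MON$NEXT_TRANSACTION, MON$SWEEP_INTERVAL " +
                     "FROM MON$DATABASE";
        try (Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            long oit = rs.getLong(1), oat = rs.getLong(2);
            long ost = rs.getLong(3), next = rs.getLong(4), interval = rs.getLong(5);
            System.out.printf("OIT=%d OAT=%d OST=%d Next=%d SweepInterval=%d%n",
                    oit, oat, ost, next, interval);
            System.out.println("Auto-sweep condition (OST - OIT > interval): "
                    + (ost - oit > interval));
        }
    }
}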