RE: [firebird-support] Transactions

2019-04-05 Thread 'Paul Beach' pbe...@mail.ibphoenix.com [firebird-support]
Kevin,

You might want to read this...
https://www.ibphoenix.com/resources/documents/search/doc_67

Regards
Paul


-Original Message-
From: firebird-support@yahoogroups.com 
[mailto:firebird-supp...@yahoogroups.com]
Sent: 03 April 2019 06:37
To: Firebird Support
Subject: [firebird-support] Transactions


  
Greetings,

I’m sure this has been asked before - is there a knowledge base somewhere?

I’m trying to get a handle on transactions from gstat and I just want to make 
sure I’m reading this correctly. One of my installs is having some issues: 
Firebird 1.5.6, Classic Server. I have been reading Helen’s book, Chapter 25 - 
Overview of Transactions.

My app (a large one) is written in Delphi, using Devart’s IBDAC components.

Also, my end users tend to End Task on my app if a report doesn’t pop up 
immediately. I’ve tried to educate them. So with Classic (or Superserver I’m 
guessing) when the user kills a client app, I’m assuming it leaves not only a 
connection to the database active (as seen in Task Manager on the server) but 
also potentially one or more transactions active.

It appears from the book that I should be concerned with the gap between the 
OIT and the OAT and making sure they both move ahead. So from the gstat figures 
below, my app needs a bunch of improvements in transaction handling which I’m 
diving into as we speak.

4/1/2019 - before backup / restore
Oldest transaction 1388 (OIT)
Oldest active 812976 (OAT)
Oldest snapshot 811580
Next transaction 905141
Note: Performance was horrible on this day, as it usually is on the first day of 
the month.

4/1/2019 - after backup / restore:
Oldest transaction 1294
Oldest active 1295
Oldest snapshot 1295
Next transaction 1298
Note: This looks perfect to me.

4/2/2019 - around mid-day:
Oldest transaction 1297
Oldest active 1298
Oldest snapshot 1295
Next transaction 56071
Note: Not good. OIT and OAT not moving forward.

4/2/2019 - around end of day:
Oldest transaction 1297
Oldest active 1298
Oldest snapshot 1295
Next transaction 91583
Note: Even worse. OIT and OAT not moving forward.

4/2/2019 - after backup and sweep:
Oldest transaction 1626
Oldest active 91623
Oldest snapshot 91623
Next transaction 91631
Question: I would’ve thought performing a sweep would not only do garbage 
collection but also deal with rolled-back transactions and remove the “stumps” of 
deleted records (per the book), advancing the OIT.

4/2/2019 - after a 2nd sweep:
Oldest transaction 91631
Oldest active 91623
Oldest snapshot 91623
Next transaction 91632
Note: Now this is interesting. This looks pretty perfect to me. 
Questions: Why didn’t the first sweep give me this kind of result? And should I 
schedule two sweeps every night? 
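The counters above can also be checked mechanically. Here is a minimal sketch (Python; it assumes the field labels appear exactly as `gstat -h` prints them in the figures above) that parses the header and computes the gaps that matter:

```python
import re

def txn_gaps(gstat_header: str) -> dict:
    """Parse the transaction counters from `gstat -h` output and
    compute the gaps that matter for garbage collection."""
    fields = {
        "oit": r"Oldest transaction\s+(\d+)",
        "oat": r"Oldest active\s+(\d+)",
        "ost": r"Oldest snapshot\s+(\d+)",
        "next": r"Next transaction\s+(\d+)",
    }
    vals = {}
    for name, pattern in fields.items():
        m = re.search(pattern, gstat_header)
        if m is None:
            raise ValueError("counter not found: " + name)
        vals[name] = int(m.group(1))
    # A large Next - OAT gap means a long-running (or abandoned)
    # transaction is holding back garbage collection.
    vals["oat_gap"] = vals["next"] - vals["oat"]
    vals["oit_gap"] = vals["next"] - vals["oit"]
    return vals

# The 4/1/2019 "before backup / restore" figures from above:
sample = """\
Oldest transaction 1388
Oldest active 812976
Oldest snapshot 811580
Next transaction 905141
"""
print(txn_gaps(sample)["oat_gap"])  # 92165 - clearly a stuck transaction
```

Scheduling something like this after the nightly backup would flag a growing Next-minus-OAT gap before performance degrades.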

Thanks in advance for any help here!

Best Regards,
Kevin

Some more background:

Batch file that does backup / sweep every night:

del "e:\fb databases\lumber\lumber_bu.log"
gbak -b -user SYSDBA -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb" "e:\fb databases\lumber\lumber.fbk" -v -y "e:\fb 
databases\lumber\lumber_bu.log"
gfix -sweep -user sysdba -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb"

Sweep batch file I run interactively:
e:
cd "e:\program files (x86)\firebird15\bin\"
gfix -sweep -user sysdba -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb"
pause


[firebird-support] Transactions

2019-04-02 Thread Kevin Stanton kevin.stan...@rdb-solutions.com [firebird-support]
Greetings,

I’m sure this has been asked before - is there a knowledge base somewhere?

I’m trying to get a handle on transactions from gstat and I just want to make 
sure I’m reading this correctly.  One of my installs is having some issues: 
Firebird 1.5.6, Classic Server.  I have been reading Helen’s book, Chapter 25 - 
Overview of Transactions.

My app (a large one) is written in Delphi, using Devart’s IBDAC components.

Also, my end users tend to End Task on my app if a report doesn’t pop up 
immediately.  I’ve tried to educate them.  So with Classic (or Superserver I’m 
guessing) when the user kills a client app, I’m assuming it leaves not only a 
connection to the database active (as seen in Task Manager on the server) but 
also potentially one or more transactions active.


It appears from the book that I should be concerned with the gap between the 
OIT and the OAT and making sure they both move ahead.  So from the gstat 
figures below, my app needs a bunch of improvements in transaction handling 
which I’m diving into as we speak.


4/1/2019 - before backup / restore
Oldest transaction   1388     (OIT)
Oldest active        812976   (OAT)
Oldest snapshot      811580
Next transaction     905141
Note:  Performance was horrible on this day, as it usually is on the first day of 
the month.


4/1/2019 - after backup / restore:
Oldest transaction   1294
Oldest active        1295
Oldest snapshot      1295
Next transaction     1298
Note:  This looks perfect to me.


4/2/2019 - around mid-day:
Oldest transaction   1297
Oldest active        1298
Oldest snapshot      1295
Next transaction     56071
Note:  Not good.  OIT and OAT not moving forward.

4/2/2019 - around end of day:
Oldest transaction   1297
Oldest active        1298
Oldest snapshot      1295
Next transaction     91583
Note:  Even worse.  OIT and OAT not moving forward.

4/2/2019 - after backup and sweep:
Oldest transaction   1626
Oldest active        91623
Oldest snapshot      91623
Next transaction     91631
Question:  I would’ve thought performing a sweep would not only do garbage 
collection but also deal with rolled-back transactions and remove the “stumps” of 
deleted records (per the book), advancing the OIT.

4/2/2019 - after a 2nd sweep:
Oldest transaction   91631
Oldest active        91623
Oldest snapshot      91623
Next transaction     91632
Note:  Now this is interesting.  This looks pretty perfect to me.  
Questions:  Why didn’t the first sweep give me this kind of result?  And should 
I schedule two sweeps every night?  


Thanks in advance for any help here!


Best Regards,
Kevin


Some more background:


Batch file that does backup / sweep every night:

del "e:\fb databases\lumber\lumber_bu.log"
gbak -b -user SYSDBA -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb" "e:\fb databases\lumber\lumber.fbk" -v -y "e:\fb 
databases\lumber\lumber_bu.log"
gfix -sweep -user sysdba -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb"


Sweep batch file I run interactively:
e:
cd "e:\program files (x86)\firebird15\bin\"
gfix -sweep -user sysdba -password masterkey "127.0.0.1:e:\fb 
databases\lumber\lumber.fdb"
pause






Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Aldo Caruso aldo.car...@argencasas.com [firebird-support]

Ann,

thank you very much for your answer.

Aldo


El 01/03/17 a las 11:44, Ann Harrison aharri...@ibphoenix.com 
[firebird-support] escribió:


On Mar 1, 2017, at 9:53 AM, Aldo Caruso wrote:


Ann,

  Thanks for your detailed answer.

But this leads me to the following question: If snapshot 
transactions have their own copy of the Transaction Inventory Pages ( 
TIP ), taken when the transaction started, transaction A couldn't see 
the state of transaction B or transaction C ( because they are not in 
this copy ).


So I guess that snapshot transactions use its own copy of TIP for 
reading ( selects ), but when it comes to updates or deletes, in 
order to search for collisions, they should also access the global 
shared TIP.





Not exactly.  A snapshot transaction has a copy of the relevant 
section of the TIP as of the time it starts. Transactions started 
later always count as "active" because they could not have committed 
before they started.


When a snapshot transaction tries to update a record and finds that 
the most recent version was created by a transaction that was active 
or unknown according to its copy of the TIP, it attempts to take a 
lock on that transaction's id.


Every transaction holds an exclusive lock on its id until it ends. 
 When the lock is granted to the waiting transaction, it knows that 
the other transaction has ended and asks the transaction management 
module for the final state of the dead transaction.


Good luck,

Ann








Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Mar 1, 2017, at 9:53 AM, Aldo Caruso wrote:
> 
> 
> 
> Ann,
> 
>   Thanks for your detailed answer.
> 
> But this leads me to the following question: If snapshot transactions 
> have their own copy of the Transaction Inventory Pages ( TIP ), taken when 
> the transaction started, transaction A couldn't see the state of transaction 
> B or transaction C ( because they are not in this copy ).
> 
> So I guess that snapshot transactions use its own copy of TIP for reading 
> ( selects ), but when it comes to updates or deletes, in order to search for 
> collisions, they should also access the global shared TIP.
> 


Not exactly.  A snapshot transaction has a copy of the relevant section of the 
TIP as of the time it starts. Transactions started later always count as 
"active" because they could not have committed before they started.  

When a snapshot transaction tries to update a record and finds that the most 
recent version was created by a transaction that was active or unknown 
according to its copy of the TIP, it attempts to take a lock on that 
transaction's id. 

Every transaction holds an exclusive lock on its id until it ends.  When the 
lock is granted to the waiting transaction, it knows that the other transaction 
has ended and asks the transaction management module for the final state of the 
dead transaction.  

Good luck,

Ann
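Ann's visibility rule can be turned into a toy model. This is only an illustration (the class and names below are invented for the sketch, not Firebird internals): a snapshot transaction consults its private TIP copy, and any transaction id missing from that copy is treated as active, hence a conflict.

```python
# Toy model: a snapshot transaction keeps a private copy of transaction
# states as of its start; any id it has never heard of counts as "active".
COMMITTED = "committed"

class SnapshotTxn:
    def __init__(self, txn_id, tip):
        self.txn_id = txn_id
        self.tip_copy = dict(tip)  # private TIP copy, frozen at start time

    def sees_as_committed(self, other_id):
        # Ids started after us are absent from the copy -> treated as active.
        return self.tip_copy.get(other_id) == COMMITTED

    def can_update(self, record_creator_id):
        # Conflict unless the newest record version was made by a
        # transaction that is committed in *our* snapshot.
        return self.sees_as_committed(record_creator_id)

global_tip = {1: COMMITTED, 2: COMMITTED}
a = SnapshotTxn(3, global_tip)   # A starts and snapshots the TIP
global_tip[4] = COMMITTED        # B starts later and commits
print(a.can_update(1))  # True  - committed before A started
print(a.can_update(4))  # False - B is invisible to A: wait, then conflict
```

The lock-on-transaction-id mechanism Ann describes is what turns that second `False` into a wait followed by an error, rather than an immediate failure.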
> 


Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Aldo Caruso aldo.car...@argencasas.com [firebird-support]

Ann,

  Thanks for your detailed answer.

  I agree with you: the only isolation level which each transaction 
takes care of is its own isolation level. So this behavior happens 
because A isolation level is snapshot ( regardless of the isolation 
level of B or C ).


Also I suspected your last conclusion: Firebird only checks the 
most recent record version.


But this leads me to the following question: If snapshot 
transactions have their own copy of the Transaction Inventory Pages ( 
TIP ), taken when the transaction started, transaction A couldn't see 
the state of transaction B or transaction C ( because they are not in 
this copy ).


So I guess that snapshot transactions use its own copy of TIP for 
reading ( selects ), but when it comes to updates or deletes, in order 
to search for collisions, they should also access the global shared TIP.


Is that true ?

Aldo




El 01/03/17 a las 09:29, Ann Harrison aharri...@ibphoenix.com 
[firebird-support] escribió:



> On Feb 28, 2017, at 3:07 PM, Aldo Caruso wrote:
>
> I'm trying to understand interactions between transactions with
> different isolation levels.

The problem is not mixed isolation levels. You would get the same 
behavior if all transactions were Snapshot


The scenario (described below in detail) is

Start transaction A

Start transaction B

B updates some record

B commits

Start transaction C

C updates the same record

A attempts to update that record and waits for C to end

However C ends, A will fail: because of a conflict with C if C commits, 
or a conflict with B if C rolls back. Why doesn't A just fail immediately?


The answer is that Firebird checks only the most recent record version 
for conflicts. Checking the next back version would avoid having A 
wait in this case but would require more reading in general.


Good luck,

Ann

>
> My environment is Firebird Super Server 2.5 (v. 2.5.2) on Linux
> Ubuntu 14.04
>
> I'm testing by opening two consoles and running isql-fb on both 
consoles.

>
> On the first one I start a SNAPSHOT transaction ( lets call it
> transaction A ).
>
> Afterwards, on the second console, I start another transaction with a
> READ COMMITTED RECORD_VERSION isolation level ( lets call it 
transaction

> B ).
>
> In transaction B, I update a record X and commit the transaction.
>
> In the same second console I start a third transaction, also READ
> COMMITTED RECORD_VERSION ( lets call it transaction C ).
>
> In transaction C, I update record X, but neither commit it nor roll it
> back ( transaction C remains active ).
>
> On the first console, in transaction A, I update record X and it
> obviously waits for a commit or a rollback of transaction C.
>
> Back on the second console, in transaction C, I roll it back.
>
> Not surprisingly, on the first console, transaction A ends waiting with
> a collision error, because although transaction C rolled back,
> transaction B, that had started after transaction A, had made an update
> and committed it. As the isolation level of transaction A was SNAPSHOT,
> it finds a collision and so an error is risen.
>
> So we have the following situation:
>
> If transaction C ends rolling back (as in the example), transaction A
> raises an error because of the collision with the previous transaction
> that touched and committed the record ( transaction B ).
>
> If transaction C ends committing, transaction A also raises an error
> because of the collision with transaction C.
>
> Whichever way transaction C ends ( committing or rolling back ), the
> waiting update in transaction A is aborted with an error message 
because

> of a collision.
>
> The question is which was the point of waiting in transaction A for the
> end of transaction C ? Why didn't it raise an error in the same 
moment I

> tried to do an update in transaction A ? Couldn't it foresee that the
> record was previously updated by a committed transaction B, which
> started after the SNAPSHOT transaction A, and so whichever the outcome
> of C were there would be a collision ?
>
> Note also that if in the example above, we wouldn't have started
> transaction C, transaction A would raise an error as soon as it 
tried to

> update the record that was updated and committed by B ( I tested it ).
>
> Thanks for any answer.
>
> Aldo Caruso
>
>
>
>






Re: [firebird-support] Transactions isolation levels and collisions

2017-03-01 Thread Ann Harrison aharri...@ibphoenix.com [firebird-support]

> On Feb 28, 2017, at 3:07 PM, Aldo Caruso wrote:
> 
>   I'm trying to understand interactions between transactions with 
> different isolation levels.

The problem is not mixed isolation levels. You would get the same behavior if 
all transactions were Snapshot

The scenario (described below in detail) is

Start transaction A

Start transaction B

B updates some record

B commits

Start transaction C

C updates the same record

A attempts to update that record and waits for C to end

However C ends, A will fail: because of a conflict with C if C commits, or a 
conflict with B if C rolls back.  Why doesn't A just fail immediately?

The answer is that Firebird checks only the most recent record version for 
conflicts.  Checking the next back version would avoid having A wait in this 
case but would require more reading in general. 

Good luck,

Ann


> 
>   My environment is Firebird Super Server 2.5 (v. 2.5.2) on Linux 
> Ubuntu 14.04
> 
>   I'm testing by opening two consoles and running isql-fb on both consoles.
> 
> On the first one I start a SNAPSHOT transaction ( lets call it 
> transaction A ).
> 
> Afterwards, on the second console, I start another transaction with a 
> READ COMMITTED RECORD_VERSION isolation level ( lets call it transaction 
> B ).
> 
> In transaction B, I update a record X and commit the transaction.
> 
> In the same second console I start a third transaction, also READ 
> COMMITTED RECORD_VERSION ( lets call it transaction C ).
> 
> In transaction C, I update record X, but neither commit it nor roll it 
> back ( transaction C remains active ).
> 
> On the first console, in transaction A, I update record X and it 
> obviously waits for a commit or a rollback of transaction C.
> 
> Back on the second console, in transaction C, I roll it back.
> 
> Not surprisingly, on the first console, transaction A ends waiting with 
> a collision error, because although transaction C rolled back, 
> transaction B, that had started after transaction A, had made an update 
> and committed it. As the isolation level of transaction A was SNAPSHOT, 
> it finds a collision and so an error is risen.
> 
> So we have the following situation:
> 
> If transaction C ends rolling back (as in the example), transaction A 
> raises an error because of the collision with the previous transaction 
> that touched and committed the record ( transaction B ).
> 
> If transaction C ends committing, transaction A also raises an error 
> because of the collision with transaction C.
> 
> Whichever way transaction C ends ( committing or rolling back ), the 
> waiting update in transaction A is aborted with an error message because 
> of a collision.
> 
> The question is which was the point of waiting in transaction A for the 
> end of transaction C ? Why didn't it raise an error in the same moment I 
> tried to do an update in transaction A ? Couldn't it foresee that the 
> record was previously updated by a committed transaction B, which 
> started after the SNAPSHOT transaction A, and so whichever the outcome 
> of C were there would be a collision ?
> 
> Note also that if in the example above, we wouldn't have started 
> transaction C, transaction A would raise an error as soon as it tried to 
> update the record that was updated and committed by B ( I tested it ).
> 
> Thanks for any answer.
> 
> Aldo Caruso
> 
> 
> 
> 
> 
> 


[firebird-support] Transactions isolation levels and collisions

2017-02-28 Thread Aldo Caruso aldo.car...@argencasas.com [firebird-support]
Hello,

   I'm trying to understand interactions between transactions with 
different isolation levels.

   My environment is Firebird Super Server 2.5 (v. 2.5.2) on Linux 
Ubuntu 14.04

   I'm testing by opening two consoles and running isql-fb on both consoles.

On the first one I start a SNAPSHOT transaction ( lets call it 
transaction A ).

Afterwards, on the second console, I start another transaction with a 
READ COMMITTED RECORD_VERSION isolation level ( lets call it transaction 
B ).

In transaction B, I update a record X and commit the transaction.

In the same second console I start a third transaction, also READ 
COMMITTED RECORD_VERSION ( lets call it transaction C ).

In transaction C, I update record X, but neither commit it nor roll it 
back ( transaction C remains active ).

On the first console, in transaction A, I update record X and it 
obviously waits for a commit or a rollback of transaction C.

Back on the second console, in transaction C, I roll it back.

Not surprisingly, on the first console, transaction A ends waiting with 
a collision error, because although transaction C rolled back, 
transaction B, that had started after transaction A, had made an update 
and committed it. As the isolation level of transaction A was SNAPSHOT, 
it finds a collision and so an error is risen.

So we have the following situation:

If transaction C ends rolling back (as in the example), transaction A 
raises an error because of the collision with the previous transaction 
that touched and committed the record ( transaction B ).

If transaction C ends committing, transaction A also raises an error 
because of the collision with transaction C.

Whichever way transaction C ends ( committing or rolling back ), the 
waiting update in transaction A is aborted with an error message because 
of a collision.

The question is which was the point of waiting in transaction A for the 
end of transaction C ? Why didn't it raise an error in the same moment I 
tried to do an update in transaction A ? Couldn't it foresee that the 
record was previously updated by a committed transaction B, which 
started after the SNAPSHOT transaction A, and so whichever the outcome 
of C were there would be a collision ?

Note also that if in the example above, we wouldn't have started 
transaction C, transaction A would raise an error as soon as it tried to 
update the record that was updated and committed by B ( I tested it ).

Thanks for any answer.

Aldo Caruso




[firebird-support] Transactions for read-only selects

2016-07-12 Thread 'Daniel Miller' dmil...@amfes.com [firebird-support]
When closing a read-only transaction for a read operation - does it 
matter if COMMIT vs ROLLBACK is used?


--
Daniel

[firebird-support] Transactions : about OIT, OAT, OST, NEXT

2015-03-16 Thread sdnetw...@gmail.com [firebird-support]
Hello,
 

I have noticed a strange thing: if you run only one transaction per connection, 
these counters are not updated. Is that normal?

I have done the test with the .NET provider and node-firebird (they use the XDR 
remote protocol).
 

It is very easy to reproduce:
 

 1) Open Connection
 2) Open transaction
 3) do a select query
 4) hard commit on transact
 5) Close connection
 

PS: if I run a second transaction on the same connection, the counters are 
updated.

So if you run a little script that launches this little program N times, you 
get the following from gstat -h:
 

 IF N = 100
 

 BEFORE :
 

 OIT 1, OAT 2, OST 2, Next 3

 

 AFTER
 

 OIT 1, OAT 2, OST 2,  NEXT 103
 

 [CODE = DOTNET]
 using FirebirdSql.Data.FirebirdClient;
 using System;

 namespace dotnet_firebird
 {
     class Program
     {
         static void Main(string[] args)
         {
             var scnx = new FbConnectionStringBuilder();
             scnx.UserID = "SYSDBA";
             scnx.Password = "masterkey";
             scnx.DataSource = "127.0.0.1";
             scnx.Database = "c:\\test.fdb";

             FbConnection cnx = new FbConnection(scnx.ToString());
             cnx.Open();
             FbTransaction tr = cnx.BeginTransaction();
             FbCommand cmd = new FbCommand("SELECT COUNT(1) FROM TRANSACT", cnx, tr);
             FbDataReader _reader = cmd.ExecuteReader();
             while (_reader.Read())
                 Console.WriteLine(_reader.GetString(0));
             _reader.Close();
             tr.Commit();
             cnx.Close();
             cnx.Dispose();
         }
     }
 }
 [/CODE]
 

 [CODE = node]
 var fb = require('node-firebird');

 var _connection = {
     user : 'SYSDBA',
     password : 'masterkey',
     host : '127.0.0.1',
     port : 3050,
     database : 'test.fdb'
 };

 fb.attach(_connection, function (err, cnx) {
     if (err) {
         console.log("can't connect to db");
         return;
     }
     cnx.query("SELECT count(1) FROM TRANSACT", function (err, data) {
         console.log(data);

         cnx.detach(function (err) {
             if (err)
                 console.log(err);
         });
     });
 });
 [/CODE]


RE: [firebird-support] Transactions

2014-04-12 Thread Svein Erling Tysvær
Please help me understand what will happen in the following cases involving 
transactions.
System: Delphi XE2, Firebird 2.5.x, using DBX components.  Isolation level: 
ReadCommitted.

The actual statements are much more complex and involves various statements in 
the transaction and/or triggers that updates/insert into up to 8 tables.

Case 1
App 1 starts a transaction (T1) and in an After Insert trigger for MasterTable 
do something like:
Update TableSub set X = X + MasterTable.X where Y=Y.MasterTable
Before App1 commits, App2 starts a transaction (T2) and does the same 
statement.  Then App1 commits (T1) and then App2 commits (T2).
Will this result in a deadlock, or will T1 run and then T2, or will X.old in 
both transactions have the same value?

You will get a lock conflict on T2 (a deadlock involves several tables where T1 
waits for T2 and T2 waits for T1, your situation is just T2 waiting for T1 and 
hence just a lock conflict).

Case 2
Same case as above but the trigger use X = GEN_ID(Generator1,1).
Will X be, for example 10, in T1 and 11 in T2, or will it result in a 
deadlock, or will X be 10 in both transactions?

Generators are not transaction bound and you would get different numbers for T1 
and T2.

Any information will be appreciated, and even more so hints on how to handle 
this.  I would like T1 to start and run to completion, and then T2 taking
into account what happened in T1.

The only way to make T2 take into account what has happened in T1 is for T2 
reading the data after T1 has committed and only then do the changes. Sometimes 
concurrency issues can be avoided by using INSERT rather than UPDATE and then 
use SUM whenever you have to read the value.

HTH,
Set
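Set's last suggestion (INSERT a delta row per writer instead of UPDATE-ing one shared row, then SUM on read) can be sketched like this. SQLite stands in for Firebird purely for illustration; the point is the schema pattern, not the engine:

```python
import sqlite3

# Instead of many transactions doing "UPDATE totals SET x = x + delta"
# (which serializes them all on one row and invites lock conflicts),
# each writer INSERTs its own delta row and readers SUM on demand.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE deltas (key TEXT, delta INTEGER)")

def add(key, delta):
    # Writers never touch each other's rows, so they cannot conflict.
    con.execute("INSERT INTO deltas VALUES (?, ?)", (key, delta))

def total(key):
    row = con.execute(
        "SELECT COALESCE(SUM(delta), 0) FROM deltas WHERE key = ?", (key,)
    ).fetchone()
    return row[0]

add("stock", 10)
add("stock", 5)
add("stock", -3)
print(total("stock"))  # 12
```

A periodic job can still collapse old delta rows into a single row to keep the SUM cheap.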


Re: [firebird-support] Transactions

2014-04-12 Thread Gordon Hamm
BTW, I took the code out and made a simple query. I stripped it down 
to nothing, and it gives me an error of:

No current record for fetch operation

I'm lost as to why this simple query won't work:


Select P.product,SI.Rec_ID
FROM stores S,Products P
 LEFT JOIN Stores_Inventory SI ON (S.Rec_ID =12)








On 4/12/14, 4:38 AM, Svein Erling Tysvær wrote:


Please help me understand what will happen in the following cases 
involving transactions.


System: Delphi XE2, Firebird 2.5.x, using DBX components. Isolation 
level: ReadCommitted.




The actual statements are much more complex and involves various 
statements in the transaction and/or triggers that updates/insert into 
up to 8 tables.




Case 1

App 1 starts a transaction (T1) and in an After Insert trigger for 
MasterTable do something like:


Update TableSub set X = X + MasterTable.X where Y=Y.MasterTable

Before App1 commits, App2 starts a transaction (T2) and does the 
same statement. Then App1 commits (T1) and then App2 commits (T2).


Will this result in a deadlock or will T1 run and then T2 or will 
X.old in both transaction have the same value?


You will get a lock conflict on T2 (a deadlock involves several tables 
where T1 waits for T2 and T2 waits for T1, your situation is just T2 
waiting for T1 and hence just a lock conflict).


Case 2

Same case as above but the trigger use X = GEN_ID(Generator1,1).

Will X be, for example 10, in T1 and 11 in T2, or will it result in a 
deadlock, or will X be 10 in both transactions?


Generators are not transaction bound and you would get different 
numbers for T1 and T2.


Any information will be appreciated and even more so hints as on how 
to handle this. I would like T1 to start and run to completion and 
then T2 taking


into account what happen in T1.

The only way to make T2 take into account what has happened in T1 is 
for T2 reading the data after T1 has committed and only then do the 
changes. Sometimes concurrency issues can be avoided by using INSERT 
rather than UPDATE and then use SUM whenever you have to read the value.


HTH,

Set






Re: [firebird-support] Transactions

2014-04-12 Thread Helen Borrie
At 07:36 a.m. 13/04/2014, Gordon Hamm wrote:


BTW, I took the code out, and made a simple query.. I stripped it down to 
nothing, and gives me an error of..
No current record for fetch operation

I'm lost as to why this simple query won't work:


Select P.product,SI.Rec_ID
FROM stores S,Products P
LEFT JOIN Stores_Inventory SI ON (S.Rec_ID =12)

No JOIN criteria.  JOIN has to join two sets (surprise!!).  While it is 
possible to provide WHERE criteria in the ON sub-clause (sometimes with 
unexpected results), a search criterion is not a JOIN criterion.

At a minimum you need 

Select P.product,SI.Rec_ID
FROM stores S,Products P
LEFT JOIN Stores_Inventory SI ON (S.Rec_ID = P.something)

and if you want the join value to be limited to Rec_ID =12 then provide the 
WHERE clause:

WHERE S.Rec_ID = 12
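For illustration, the same shape of query against hypothetical minimal tables (SQLite here; the columns are invented, not Gordon's real schema) showing the join criterion in ON and the search criterion in WHERE:

```python
import sqlite3

# Helen's point: ON must relate the two sets being joined; the constant
# filter belongs in WHERE. Tables and columns below are illustrative only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stores (rec_id INTEGER, name TEXT);
CREATE TABLE stores_inventory (rec_id INTEGER, store_id INTEGER, product TEXT);
INSERT INTO stores VALUES (12, 'Main'), (13, 'Branch');
INSERT INTO stores_inventory VALUES (1, 12, 'Plywood'), (2, 13, 'Nails');
""")

rows = con.execute("""
    SELECT s.name, si.product
    FROM stores s
    LEFT JOIN stores_inventory si ON si.store_id = s.rec_id  -- join criterion
    WHERE s.rec_id = 12                                      -- search criterion
""").fetchall()
print(rows)  # [('Main', 'Plywood')]
```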


Helen Borrie, Support Consultant, IBPhoenix (Pacific)
Author of The Firebird Book and The Firebird Book Second Edition
http://www.firebird-books.net
__ 



Re: [firebird-support] Transactions

2014-04-12 Thread Saunders, Rich
On 2014-04-12 18:56, Helen Borrie wrote:
 At a minimum you need
 
 Select P.product,SI.Rec_ID
 FROM stores S,Products P
 LEFT JOIN Stores_Inventory SI ON (S.Rec_ID = P.something)
 
 and if you want the join value to be limited to Rec_ID =12 then
 provide the WHERE clause:
 
 WHERE S.Rec_ID = 12

I think you need a join between S and P and another between S and SI.
Then you can add the WHERE clause from above.

-- 

Cheers!
Rich Saunders


[firebird-support] Transactions

2014-04-11 Thread cornievs
Please help me understand what will happen in the following cases involving 
transactions. System: Delphi XE2, Firebird 2.5.x, using DBX components.  
Isolation level: ReadCommitted.

 

 The actual statements are much more complex and involves various statements in 
the transaction and/or triggers that updates/insert into up to 8 tables.
 

 Case 1
 App 1 starts a transaction (T1) and in an After Insert trigger for MasterTable 
do something like:

 Update TableSub set X = X + MasterTable.X where Y=Y.MasterTable
 Before App1 commits, App2 starts a transaction (T2) and does the same 
statement.  Then App1 commits (T1) and then App2 commits (T2).  
 Will this result in a deadlock or will T1 run and then T2 or will X.old in 
both transaction have the same value?
 

 Case 2
 Same case as above but the trigger use X = GEN_ID(Generator1,1).  
 Will X be, for example 10, in T1 and 11 in T2, or will it result in a 
deadlock, or will X be 10 in both transactions?
 

 Any information will be appreciated, and even more so hints on how to handle 
this.  I would like T1 to start and run to completion, and then T2 taking into 
account what happened in T1.
 

 Kind Regards
 

 

 Cornie van Schoor
 InfoStar Software
 South Africa
 

 

 

 



RE: [firebird-support] Transactions & exception trapping

2012-08-16 Thread Rick Debay
 The handler is not embedded in the loop that it handles: it follows
directly after

What I originally posted is exactly what our code looks like.  The WHEN
handler is the last statement INSIDE the loop.
If an insert statement throws an exception, we see the loop continue and
the exception logged.
The issue is that it appears that sometimes the insert succeeds and the
update fails and the loop continues.

-Original Message-
From: firebird-support@yahoogroups.com
[mailto:firebird-support@yahoogroups.com] On Behalf Of Helen Borrie
Sent: Friday, August 10, 2012 4:25 PM
To: firebird-support@yahoogroups.com
Subject: RE: [firebird-support] Transactions & exception trapping

At 05:00 AM 11/08/2012, Rick Debay wrote:
FOR ... BEGIN
  INSERT_PROCEDURE RETURNING VALUE
  UPDATE TABLE WITH VALUE
  WHEN ... BEGIN
LOG
  END
END

If the insert procedure directly or indirectly causes an exception the 
insert will be abandoned, the update will be skipped, and all data from

previous or subsequent non-erroneous loop iterations will be preserved.

Now if the update fails, would the inserted data within the same loop 
iteration be preserved?  I've been assuming that everything within the 
BEGIN...END was abandoned.

The handler is not embedded in the loop that it handles: it follows
directly after. The flow should be:

FOR ... do
  BEGIN
INSERT_PROCEDURE RETURNING VALUE
UPDATE TABLE WITH VALUE
  end
WHEN ... do
  BEGIN
LOG
  END

If an exception occurs anywhere in an iteration of that loop, everything
within that iteration is abandoned, control drops into the WHEN block
and executes that before returning to the top of the loop.  

If a handler block is not found directly after the looping block,
control moves out through the layers until one is found that fits the
exception.   Everything enclosed by the block whose handler is found is
abandoned.  

Ultimately, if no explicit handler is found in the enclosing layers that
fits the exception, control drops to the EXIT statement and the entire
work of the procedure is abandoned.

./heLen
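Rick's pattern, with the WHEN handler as the last statement inside the loop's BEGIN ... END, can be written out as a hedged PSQL sketch; SRC, DEST, INSERT_PROC and ERROR_LOG are invented names:

```sql
SET TERM ^ ;
CREATE PROCEDURE COPY_ROWS
AS
  DECLARE VARIABLE V_ID INTEGER;
  DECLARE VARIABLE V_VALUE INTEGER;
BEGIN
  FOR SELECT ID FROM SRC INTO :V_ID DO
  BEGIN
    EXECUTE PROCEDURE INSERT_PROC(:V_ID) RETURNING_VALUES :V_VALUE;
    UPDATE DEST SET VAL = :V_VALUE WHERE ID = :V_ID;

    -- WHEN must be the last statement of its BEGIN..END block.  Placed
    -- here it catches an exception from either statement above: that
    -- iteration's work is abandoned, the error is logged, and the FOR
    -- loop continues with the next row.  (Rick's real code logs to an
    -- external table, which a later rollback cannot undo.)
    WHEN ANY DO
      INSERT INTO ERROR_LOG (FAILED_ID) VALUES (:V_ID);
  END
END^
SET TERM ; ^
```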








RE: [firebird-support] Transactions & exception trapping

2012-08-10 Thread Helen Borrie
At 05:00 AM 11/08/2012, Rick Debay wrote:
FOR ... BEGIN
  INSERT_PROCEDURE RETURNING VALUE
  UPDATE TABLE WITH VALUE
  WHEN ... BEGIN
LOG
  END
END

If the insert procedure directly or indirectly causes an exception the
insert will be abandoned, the update will be skipped, and all data from
previous or subsequent non-erroneous loop iterations will be preserved.

Now if the update fails, would the inserted data within the same loop
iteration be preserved?  I've been assuming that everything within the
BEGIN...END was abandoned.

The handler is not embedded in the loop that it handles: it follows directly 
after. The flow should be:

FOR ... do
  BEGIN
INSERT_PROCEDURE RETURNING VALUE
UPDATE TABLE WITH VALUE
  end
WHEN ... do
  BEGIN
LOG
  END

If an exception occurs anywhere in an iteration of that loop, everything within 
that iteration is abandoned, control drops into the WHEN block and executes 
that before returning to the top of the loop.  

If a handler block is not found directly after the looping block, control moves 
out through the layers until one is found that fits the exception.   Everything 
enclosed by the block whose handler is found is abandoned.  

Ultimately, if no explicit handler is found in the enclosing layers that fits 
the exception, control drops to the EXIT statement and the entire work of the 
procedure is abandoned.

./heLen




[firebird-support] Transactions & exception trapping

2012-08-09 Thread Rick Debay
I have a table with before and after insert triggers.  I also have a
stored procedure that loops, inserting rows into the table.  The last
statement in the loop is WHEN ANY to catch and log errors to an external
table and continue looping.

If an insert trigger throws an exception, will the insert be rolled back
even though I'm catching the exception and continuing?  Or will this
subvert the triggers and allow inconsistent data?


Re: [firebird-support] Transactions & exception trapping

2012-08-09 Thread Helen Borrie
At 04:49 AM 10/08/2012, Rick Debay wrote:
I have a table with before and after insert triggers.  I also have a
stored procedure that loops, inserting rows into the table.  The last
statement in the loop is WHEN ANY to catch and log errors to an external
table and continue looping.

If an insert trigger throws an exception, will the insert be rolled back
even though I'm catching the exception and continuing?  Or will this
subvert the triggers and allow inconsistent data?

Remember that an uncommitted transaction does not change database state.  
Whatever is happening inside a transaction is known only to that transaction.  

When an exception occurs, control will pass down to the first WHEN... block it 
can find.  If that WHEN block is located directly at the end of the block where 
the exception occurred, that block is executed (writes out your log record) 
and, provided it has no direction to raise the exception to an outer level, 
control passes back to the top of the same loop. The abandoned operation, 
including anything it did or was expected to do in triggers, is simply excluded 
from the list of stuff the transaction has to do when it finally commits. ACID.

Previous or subsequent operations will not be affected by the abandonment of 
the offending operation, unless something in the procedure logic makes the 
iterations interdependent.  The database engine cannot undo changes made to 
local objects, such as variables and parameters, so when swallowing exceptions 
there's more edge on the imperative to be explicit and correct with the 
initialisation and timing logic.

./hb
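Helen's last point, that local variables are not rolled back when an exception is swallowed, argues for re-initialising them at the top of every iteration. A sketch of the loop body only, with invented names:

```sql
FOR SELECT ID FROM SRC INTO :V_ID DO
BEGIN
  -- Re-initialise per iteration: if the previous iteration's exception
  -- was swallowed, V_VALUE still holds its old, now stale, value.
  V_VALUE = NULL;

  EXECUTE PROCEDURE INSERT_PROC(:V_ID) RETURNING_VALUES :V_VALUE;
  UPDATE DEST SET VAL = :V_VALUE WHERE ID = :V_ID;

  WHEN ANY DO
    INSERT INTO ERROR_LOG (FAILED_ID) VALUES (:V_ID);
END
```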






[firebird-support] Transactions and cleaning up of rows after a massive deletion

2011-10-25 Thread fernnaj
Hello,

I have the following situation. A table is used to store values that are 
generated quite frequently. A second process will read the values and do 
something with them. After that processing the read values are no longer needed 
and they can (should!) be deleted from the database. However, the second 
process might fail (if the network is not available for example) and in this 
case the values pile up in the table (real situation: 2 GB of data in that 
single table after a network outage of 2 days). Eventually the process will 
work again and those values will be processed - and deleted.

And that is where my problem is. Firebird has a thing with bulk deletions - the 
next time you do a select from the table the deleted records seem to be 
cleaned up (don't know the technical expression), and if you have deleted a 
lot of rows, this can take forever.

The question is: if I have a select in one transaction that is suffering from 
the cleaning up after a deletion of say 1 million rows (and it's taking forever 
to return results), will another transaction that just writes a new row in the 
same table be also delayed? Writing speed in that table is critical, reading is 
important but not critical.

Of course I am going to do tests inhouse, but I would really appreciate an 
answer from the point of view of the inside work of Firebird.

Thanks a lot in advance,

Fernando Nájera



Re: [firebird-support] Transactions and cleaning up of rows after a massive deletion

2011-10-25 Thread Ann Harrison
On Tue, Oct 25, 2011 at 11:24 AM, fernnaj y...@fernandonajera.com wrote:



 The question is: if I have a select in one transaction that is suffering
 from the cleaning up after a deletion of say 1 million rows (and it's taking
 forever to return results), will another transaction that just writes a new
 row in the same table be also delayed? Writing speed in that table is
 critical, reading is important but not critical.


We call that process garbage collection.   The answer depends slightly on
the architecture of Firebird you're using.  With Classic, the single client
doing the select will be very slow, and others will be slower than normal
because of the overall system load, but will be scheduled by the operating
system according to its priorities.  In SuperServer,  all queries are
threads in the server process which schedules them on a round-robin basis,
giving each a quantum of time.  So again, the inserts will be slower than
normal, but will not run while the garbage is being collected.   More or
less the same answer for both, but in Classic you can blame the OS for not
scheduling things when you want, and in SuperServer, blame the server.

Please don't try to speed up the inserts in Classic  by increasing their
priority.  They will be contending with the garbage collecting query for
pages and if it can't run to release them, the whole system will slow down.

Good luck,

Ann


[Non-text portions of this message have been removed]



Re: [firebird-support] Transactions and cleaning up of rows after a massive deletion

2011-10-25 Thread Thomas Steinmaurer
 I have the following situation. A table is used to store values that are 
 generated quite frequently. A second process will read the values and do 
 something with them. After that processing the read values are no longer 
 needed and they can (should!) be deleted from the database. However, the 
 second process might fail (if the network is not available for example) and 
 in this case the values pile up in the table (real situation: 2 GB of data in 
 that single table after a network outage of 2 days). Eventually the process 
 will work again and those values will be processed - and deleted.

 And that is where my problem is. Firebird has a thing with bulk deletions - 
 the next time you do a select from the table the deleted records seem to be 
 cleaned up (don't know the technical expression), and if you have deleted a 
 lot of rows, this can take forever.

 The question is: if I have a select in one transaction that is suffering from 
 the cleaning up after a deletion of say 1 million rows (and it's taking 
 forever to return results), will another transaction that just writes a new 
 row in the same table be also delayed? Writing speed in that table is 
 critical, reading is important but not critical.

 Of course I am going to do tests inhouse, but I would really appreciate an 
 answer from the point of view of the inside work of Firebird.

Perhaps your read process could avoid deleting the records and instead mark 
them with a flag; a cleanup process scheduled e.g. at night then deletes the 
marked records, followed by a manual SELECT COUNT(*) on that table to trigger 
co-operative garbage collection.
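The flag-and-sweep approach can be sketched in plain SQL; VALUES_TABLE and its PROCESSED column are invented names:

```sql
-- Reader process: mark rows instead of deleting them.
UPDATE VALUES_TABLE SET PROCESSED = 1 WHERE ID = :CURRENT_ID;

-- Nightly cleanup job, in its own short-lived transaction:
DELETE FROM VALUES_TABLE WHERE PROCESSED = 1;
COMMIT;

-- Then touch every remaining record version so that co-operative
-- garbage collection happens now, during the quiet hours:
SELECT COUNT(*) FROM VALUES_TABLE;
```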

Btw, with respect to client transaction management: are you using hard 
commits or soft commits, aka commit retaining?


-- 
With regards,
Thomas Steinmaurer

* Firebird Foundation Committee Member
http://www.firebirdsql.org/en/firebird-foundation/

* Upscene Productions - Database Tools for Developers
http://www.upscene.com/

* My Blog
http://blog.upscene.com/thomas/index.php