Re: [SQL] transaction isolation level - SERIALIZABLE

2013-05-13 Thread Marcin Krawczyk
That's what I thought. Thank you.


regards
mk


2013/5/13 Adrian Klaver 

> On 05/13/2013 02:22 AM, Marcin Krawczyk wrote:
>
>> Hi list,
>>
>> I have some problems with the SERIALIZABLE isolation level, namely my users
>> are plagued with concurrency errors. As of Postgres 9.1 (which I'm
>> running) there has been a change to the SERIALIZABLE logic, and unfortunately
>> my application has not been updated to work with the new logic. I don't
>> have access to its code and the only thing I can do is to report the
>> issue to the authors. But before I do, since I don't actually
>> need SERIALIZABLE for my use case, is it possible to have transactions always
>> run in the default READ COMMITTED mode, regardless of an application-level SET
>> SESSION CHARACTERISTICS AS TRANSACTION ... command? (like e.g. in
>> postgres 8.1 where SERIALIZABLE = READ COMMITTED)
>>
>
> I don't think so:
>
> http://www.postgresql.org/docs/9.1/interactive/config-setting.html
>
> "Furthermore, it is possible to assign a set of parameter settings to a
> user or a database. Whenever a session is started, the default settings for
> the user and database involved are loaded. The commands ALTER ROLE and
> ALTER DATABASE, respectively, are used to configure these settings.
> Per-database settings override anything received from the postgres
> command-line or the configuration file, and in turn are overridden by
> per-user settings; both are overridden by per-session settings."
>
>
>>
>> regards
>> mk
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [SQL] transaction isolation level - SERIALIZABLE

2013-05-13 Thread Adrian Klaver

On 05/13/2013 02:22 AM, Marcin Krawczyk wrote:

Hi list,

I have some problems with the SERIALIZABLE isolation level, namely my users
are plagued with concurrency errors. As of Postgres 9.1 (which I'm
running) there has been a change to the SERIALIZABLE logic, and unfortunately
my application has not been updated to work with the new logic. I don't
have access to its code and the only thing I can do is to report the
issue to the authors. But before I do, since I don't actually
need SERIALIZABLE for my use case, is it possible to have transactions always
run in the default READ COMMITTED mode, regardless of an application-level SET
SESSION CHARACTERISTICS AS TRANSACTION ... command? (like e.g. in
postgres 8.1 where SERIALIZABLE = READ COMMITTED)


I don't think so:

http://www.postgresql.org/docs/9.1/interactive/config-setting.html

"Furthermore, it is possible to assign a set of parameter settings to a 
user or a database. Whenever a session is started, the default settings 
for the user and database involved are loaded. The commands ALTER ROLE 
and ALTER DATABASE, respectively, are used to configure these settings. 
Per-database settings override anything received from the postgres 
command-line or the configuration file, and in turn are overridden by 
per-user settings; both are overridden by per-session settings."
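For reference, setting those defaults could look like this (a sketch; the role and database names are made up, and a session-level SET issued by the application would still override them):

```sql
-- Default isolation for one role (still overridden by per-session SET):
ALTER ROLE appuser SET default_transaction_isolation = 'read committed';

-- Or for a whole database:
ALTER DATABASE appdb SET default_transaction_isolation = 'read committed';
```

Since the application issues SET SESSION CHARACTERISTICS itself, these defaults change only the starting value, not what the application sets afterwards.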





regards
mk



--
Adrian Klaver
adrian.kla...@gmail.com


--
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql


Re: [SQL] Transaction-specific global variable

2011-02-03 Thread Tom Lane
Florian Weimer  writes:
> hstore greatly simplifies creating triggers for logging table changes,
> which is great.  However, when creating a log record, I would like to
> include information about the party who made this change.  We
> generally do not allow direct database access for application code, so
> the PostgreSQL user does not provide sufficient information on its
> own.  Instead, I'd like to create a transaction-specific variable
> which stores context information to be included in the log table.  I
> suppose I could create a stored procedures in C which provides this
> functionality, but I wonder if there is already something similar I
> could reuse.  For instance, I could reuse the application_name
> configuration variable, but this seems a bit gross.

You could abuse the custom-GUC-variable mechanism: just set
custom_variable_classes and then define some variable that doesn't
actually have any underlying loadable module.
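On 9.1 and earlier that abuse could look roughly like this (a sketch; `myapp` and the variable name are illustrative):

```sql
-- In postgresql.conf (required on 9.1 and earlier; the setting was
-- removed in 9.2, where any dotted name works without declaration):
--   custom_variable_classes = 'myapp'

-- At the start of each transaction the application records the
-- acting party; SET LOCAL lasts only until transaction end:
SET LOCAL myapp.username = 'some_app_user';

-- A logging trigger can then read it back:
SELECT current_setting('myapp.username');
```

With SET LOCAL the value vanishes at COMMIT/ROLLBACK, which matches the "transaction-specific" requirement.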

regards, tom lane



Re: [SQL] transaction and triggers

2008-01-21 Thread Gerardo Herzig

D'Arcy J.M. Cain wrote:


On Fri, 18 Jan 2008 12:16:04 -0300
Gerardo Herzig <[EMAIL PROTECTED]> wrote:
 

Right.  But today, that trigger does some other work, which includes
writing some files to disk, so there is my problem. Crap, I guess I will
have to review the main logic.



I built a replication system that syncs up dozens of systems in a
multi-master environment spanning multiple continents in almost
real-time and it works flawlessly so don't give up hope.


And I'm trying with 3 virtual machines... this is embarrassing :)


 It is
doable.  I can't give you the code because it was written under
contract and it was based heavily on our specific business requirements
but I can give you a few pointers.

You have discovered the basic problem of trying to replicate in full
real time.  You'll probably have to give up on that.  Instead, focus on
making updates to the local database.  Create a replication table or
tables that you update with triggers.  Basically this needs to be a log
of every change to the database in a structured way.
 

Crap. That was my first approach! I later chose the immediate file
writing, trying to minimize the changes that would be lost in case of a
primary system crash. I guess I will come back to it.



Once you have the replication table(s) you can create external programs
that connect to the master and update the slave.  In the slave you can
track the last ID that completed.  Do the insert/update/delete in a
transaction so that you have a guarantee that your database is up to
date to a very specific point.  Note that you can have multiple slaves
in this scenario and, in fact, the slaves can have slaves using the
exact same scheme giving you a hierarchy.

If you need multi-master you just need to have another process to feed
your local changes up to the master.  This is not just a matter of
making the master a slave though.  If you do that you get into a
feedback loop.

Also, if you need multi-master, you have to think about your
sequencing.  If you need unique IDs on some tables you will have to
think about setting up ranges of sequences based on server or have a
central sequence server.  We used a combination of both as well as
specifying that certain tables could only be inserted to on one
system.  Of course, this system doesn't need to be the same as the top
of the hierarchy and, in fact, different tables can have different
generator systems.

 


What I want to do is something like:
If the master fails, there will be a piece of software that changes the
conf files (which indicate who's the master, the slaves, and so on), so one
of the slaves takes the master's place. Since these are common PCs, when
the real master comes back to life, it has to be re-synced and take its
place as the master again. I'm thinking of something as simple as possible
(since I'm not a senior programmer); something like an IP address change
could do the trick.



Hope this gets you started.  There's still lots of gotchas on the way.
 


Oh yes, I'm expecting so much fun.
Thanks for sharing your knowledge with us!!

Mammoth Replicator, Slony-I, feel the fear! :)

Thanks again, D'Arcy!

Gerardo


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [SQL] transaction and triggers

2008-01-18 Thread D'Arcy J.M. Cain
On Fri, 18 Jan 2008 12:16:04 -0300
Gerardo Herzig <[EMAIL PROTECTED]> wrote:
> Right.  But today, that trigger does some other work, which includes
> writing some files to disk, so there is my problem. Crap, I guess I will
> have to review the main logic.

I built a replication system that syncs up dozens of systems in a
multi-master environment spanning multiple continents in almost
real-time and it works flawlessly so don't give up hope.  It is
doable.  I can't give you the code because it was written under
contract and it was based heavily on our specific business requirements
but I can give you a few pointers.

You have discovered the basic problem of trying to replicate in full
real time.  You'll probably have to give up on that.  Instead, focus on
making updates to the local database.  Create a replication table or
tables that you update with triggers.  Basically this needs to be a log
of every change to the database in a structured way.
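The log-table idea described above might be sketched like this (an illustrative schema; table and trigger names are invented):

```sql
-- One row per change, in commit order, for external replay:
CREATE TABLE repl_log (
    id        serial PRIMARY KEY,   -- slaves track the last id completed
    tablename text    NOT NULL,
    op        char(1) NOT NULL,     -- 'I', 'U' or 'D'
    row_id    integer NOT NULL,
    logged_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO repl_log (tablename, op, row_id)
        VALUES (TG_TABLE_NAME, 'D', OLD.id);
        RETURN OLD;
    ELSE
        INSERT INTO repl_log (tablename, op, row_id)
        VALUES (TG_TABLE_NAME, substr(TG_OP, 1, 1), NEW.id);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER test_log AFTER INSERT OR UPDATE OR DELETE ON test
    FOR EACH ROW EXECUTE PROCEDURE log_change();
```

Because the log rows are written in the same transaction as the data change, a rollback discards them too, which is exactly what writing files from a trigger cannot guarantee.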

Once you have the replication table(s) you can create external programs
that connect to the master and update the slave.  In the slave you can
track the last ID that completed.  Do the insert/update/delete in a
transaction so that you have a guarantee that your database is up to
date to a very specific point.  Note that you can have multiple slaves
in this scenario and, in fact, the slaves can have slaves using the
exact same scheme giving you a hierarchy.

If you need multi-master you just need to have another process to feed
your local changes up to the master.  This is not just a matter of
making the master a slave though.  If you do that you get into a
feedback loop.

Also, if you need multi-master, you have to think about your
sequencing.  If you need unique IDs on some tables you will have to
think about setting up ranges of sequences based on server or have a
central sequence server.  We used a combination of both as well as
specifying that certain tables could only be inserted to on one
system.  Of course, this system doesn't need to be the same as the top
of the hierarchy and, in fact, different tables can have different
generator systems.
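The range-of-sequences idea can be sketched like this (assuming a two-server setup; names are illustrative):

```sql
-- Same INCREMENT, different START, so the ID ranges never collide:
-- on server A (odd IDs):
CREATE SEQUENCE orders_id_seq START 1 INCREMENT 2;
-- on server B (even IDs):
CREATE SEQUENCE orders_id_seq START 2 INCREMENT 2;
```

Adding a server later means rebuilding the sequences with a larger increment, which is one reason a central sequence server, or large per-server ranges, may scale better.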

Hope this gets you started.  There's still lots of gotchas on the way.

-- 
D'Arcy J.M. Cain <[EMAIL PROTECTED]> |  Democracy is three wolves
http://www.druid.net/darcy/|  and a sheep voting on
+1 416 425 1212 (DoD#0082)(eNTP)   |  what's for dinner.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] transaction and triggers

2008-01-18 Thread Gerardo Herzig

Alvaro Herrera wrote:


Gerardo Herzig escribió:

 

Right.  But today, that trigger does some other work, which includes writing
some files to disk, so there is my problem. Crap, I guess I will have to
review the main logic.



Probably it's better to move the actual file writing to a listener
external process -- the transaction only does a NOTIFY, which is certain
to be delivered only when the transaction commits.  So if it aborts, no
spurious write occurs.

 


Mmmhmm, sounds good... I will give it a try on Monday. Now it's beer time :)

Thanks all.

Gerardo

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [SQL] transaction and triggers

2008-01-18 Thread Alvaro Herrera
Gerardo Herzig escribió:

> Right.  But today, that trigger does some other work, which includes writing
> some files to disk, so there is my problem. Crap, I guess I will have to
> review the main logic.

Probably it's better to move the actual file writing to a listener
external process -- the transaction only does a NOTIFY, which is certain
to be delivered only when the transaction commits.  So if it aborts, no
spurious write occurs.
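A sketch of that split (the condition name is invented; on 8.x, NOTIFY takes a bare condition name with no payload):

```sql
-- In the trigger function, instead of writing the file directly:
NOTIFY file_export;

-- The external writer process registers once:
LISTEN file_export;
-- ...and on each notification re-reads the relevant tables and writes
-- the files.  Because NOTIFY is only delivered on COMMIT, an aborted
-- transaction produces no notification and therefore no file.
```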

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [SQL] transaction and triggers

2008-01-18 Thread Gerardo Herzig

Filip Rembiałkowski wrote:


2008/1/18, Gerardo Herzig <[EMAIL PROTECTED]>:
 


Hi all. I'm puzzled again. Just thinking:

As I'm having fun trying to make my own replication system, I'm stuck in
this situation:
Consider a simple table with a unique index on the `id' field, and a
function that will fail, such as

insert into test (id) values (1);
insert into test (id) values (1);

This will fail and the transaction will be rolled back, but as the basis
of my replication system is row-level triggers, the first time the
insert is called, the trigger will be executed, and I would like to be
able to stack the triggers in some way, so that they are fired only after
a successful execution of the whole function.



If the transaction is rolled back, changes made by your trigger to
local database will be also canceled.

Unless you make any manipulation on remote databases, you have no problem.

Any changes made to remote databases, for example if you call some
dblink functions, are not transactional, and will not be rolled back.

In this case you have to rethink your design, as there is no "ON
COMMIT" trigger (yet?)
 

Right.  But today, that trigger does some other work, which includes
writing some files to disk, so there is my problem. Crap, I guess I will
have to review the main logic.


Thanks!
Gerardo

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] transaction and triggers

2008-01-18 Thread Filip Rembiałkowski
2008/1/18, Gerardo Herzig <[EMAIL PROTECTED]>:
> Hi all. I'm puzzled again. Just thinking:
>
> As I'm having fun trying to make my own replication system, I'm stuck in
> this situation:
> Consider a simple table with a unique index on the `id' field, and a
> function that will fail, such as
>
> insert into test (id) values (1);
> insert into test (id) values (1);
>
> This will fail and the transaction will be rolled back, but as the basis
> of my replication system is row-level triggers, the first time the
> insert is called, the trigger will be executed, and I would like to be
> able to stack the triggers in some way, so that they are fired only after
> a successful execution of the whole function.

If the transaction is rolled back, changes made by your trigger to
local database will be also canceled.

Unless you make any manipulation on remote databases, you have no problem.

Any changes made to remote databases, for example if you call some
dblink functions, are not transactional, and will not be rolled back.

In this case you have to rethink your design, as there is no "ON
COMMIT" trigger (yet?)

-- 
Filip Rembiałkowski

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [SQL] transaction in function

2006-12-05 Thread Din Adrian
Any function runs within a single transaction, so you don't need to worry
about rolling back on exceptions - this is the standard behavior. If there
is an exception while running any statement inside the function, it
rolls back automatically.



On Tue, 05 Dec 2006 17:25:31 +0200, Marian POPESCU  
<[EMAIL PROTECTED]> wrote:



Hi,

I want to write a function that updates several tables; if there is an
exception while updating, it should rollback the transaction and raise an
error.

How can this be done in PL/pgSQL in PostgreSQL 8.1?

Thank you for helping me out on this one !



---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match




--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [SQL] transaction in function

2006-12-05 Thread Tom Lane
Marian POPESCU <[EMAIL PROTECTED]> writes:
> I want to write a function that updates several tables; if there is an
> exception while updating, it should rollback the transaction and raise an
> error.

> How can this be done in PL/pgSQL in PostgreSQL 8.1?

You just do it.  You would have to work at it to *not* have that behavior.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] transaction in function

2006-12-05 Thread A. Kretschmer
On Tue, 05.12.2006, at 16:25:31 +0100, Marian POPESCU wrote the following:
> Hi,
> 
> I want to write a function that updates several tables; if there is an
> exception while updating, it should rollback the transaction and raise an
> error.
> 
> How can this be done in PL/pgSQL in PostgreSQL 8.1?

A function runs within a transaction. You can trap errors within a
function; read this:

http://www.postgresql.org/docs/8.1/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
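Following that documentation page, the trap-and-re-raise pattern might look like this (table and column names are invented for illustration):

```sql
CREATE FUNCTION update_both() RETURNS void AS $$
BEGIN
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE journal  SET posted  = true          WHERE acct = 1;
EXCEPTION
    WHEN OTHERS THEN
        -- Changes made inside this block are rolled back automatically;
        -- RAISE EXCEPTION then aborts the surrounding transaction too.
        RAISE EXCEPTION 'update failed: %', SQLERRM;
END;
$$ LANGUAGE plpgsql;
```

Note that simply letting the error propagate (no EXCEPTION block at all) has the same rollback effect; the block is only needed if you want to customize or partially handle the error.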


Andreas
-- 
Andreas Kretschmer
Contact:  Heynitz: 035242/47215,   D1: 0160/7141639 (more: -> header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [SQL] Transaction in plpgslq

2005-05-24 Thread Andrew Hammond

Rafa Couto wrote:
> I understand that the "FOR UPDATE" clause locks rows only while it is
> selecting them. It does not secure atomic execution from the 'SELECT' to
> the 'UPDATE' in the next statement. Is that not so?

Locks are held until the transaction ends. I included links to relevant
documentation in my original post. If you read up on locking, you'll
find the answer to your problem. Please post any further questions you
have after reading the documentation.

--
Andrew Hammond  416-673-4138  [EMAIL PROTECTED]
Database Administrator, Afilias Canada Corp.
CB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [SQL] Transaction in plpgslq

2005-05-24 Thread Richard Huxton

Rafa Couto wrote:

I have got a plpgsql function:



  -- BEGIN;

SELECT min(id) INTO _contacto_id FROM contactos 
  WHERE contactos.operadora_id IS NULL AND contactos.actividad_id

= _actividad_id;

UPDATE contactos SET operadora_id = _operadora_id WHERE id = _contacto_id;

  -- COMMIT;



and it works right, but I need atomic execution from --BEGIN to
--COMMIT, and the manual says it is not possible to have transactions in
PL/pgSQL procedures :-(


OK - the WHOLE FUNCTION takes place within one transaction. So, in that 
sense all changes are atomic.


However, if you want to prevent any changes to "contactos" in-between 
those two statements you'll want additional locks. Read the chapter on 
"concurrency control" for details. You might well want SELECT FOR UPDATE 
 (and also just ORDER BY id LIMIT 1 rather than using min(id)).


--
  Richard Huxton
  Archonet Ltd

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [SQL] Transaction in plpgslq

2005-05-24 Thread Jan B.

Rafa Couto wrote:

2005/5/20, Andrew Hammond <[EMAIL PROTECTED]>:



The solution to your problem is locking (or concurrency control if you
prefer). While we're at it, we might as well optimize your statement a
little too using ORDER BY with LIMIT instead of min().

SELECT id INTO _contacto_id
FROM contactos
WHERE contactos.operadora_id IS NULL
 AND contactos.actividad_id = _actividad_id
ORDER BY id LIMIT 1
FOR UPDATE;

Take a look at the "FOR UPDATE" section of the SELECT description for an
explanation of how this works.




I understand that the "FOR UPDATE" clause locks rows only while it is
selecting them. It does not secure atomic execution from the 'SELECT' to
the 'UPDATE' in the next statement. Is that not so?




After the SELECT FOR UPDATE command has locked some rows, other
concurrent changes to the database can still be made, but changes which
require locking those rows will be deferred.

The lock will be held until the end of the transaction (that means at
least until the function returns).


---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [SQL] Transaction in plpgslq

2005-05-24 Thread Rafa Couto
2005/5/20, Andrew Hammond <[EMAIL PROTECTED]>:

> The solution to your problem is locking (or concurrency control if you
> prefer). While we're at it, we might as well optimize your statement a
> little too using ORDER BY with LIMIT instead of min().
> 
> SELECT id INTO _contacto_id
> FROM contactos
> WHERE contactos.operadora_id IS NULL
>   AND contactos.actividad_id = _actividad_id
> ORDER BY id LIMIT 1
> FOR UPDATE;
> 
> Take a look at the "FOR UPDATE" section of the SELECT description for an
> explanation of how this works.


I understand that the "FOR UPDATE" clause locks rows only while it is
selecting them. It does not secure atomic execution from the 'SELECT' to
the 'UPDATE' in the next statement. Is that not so?


-- 
Rafa Couto (caligari)

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [SQL] Transaction in plpgslq

2005-05-20 Thread Andrew Hammond

The manual is correct: you cannot do transactions within a procedure,
since the procedure must be called within a transaction. If you're
working with Postgres 8, you can achieve similar functionality using
savepoints. But that won't solve the problem you have below.

The solution to your problem is locking (or concurrency control if you
prefer). While we're at it, we might as well optimize your statement a
little too using ORDER BY with LIMIT instead of min().

SELECT id INTO _contacto_id
FROM contactos
WHERE contactos.operadora_id IS NULL
  AND contactos.actividad_id = _actividad_id
ORDER BY id LIMIT 1
FOR UPDATE;

Take a look at the "FOR UPDATE" section of the SELECT description for an
explanation of how this works.

http://www.postgresql.org/docs/8.0/static/sql-select.html

If you still have questions, then you might want to take a look at the
concurrency control section of the manual.

http://www.postgresql.org/docs/8.0/static/mvcc.html

--
Andrew Hammond  416-673-4138  [EMAIL PROTECTED]
Database Administrator, Afilias Canada Corp.
CB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A


Rafa Couto wrote:
> I have got a plpgsql function:
> 
> CREATE FUNCTION nueva_llamada(integer, integer) RETURNS integer
> 
> as
> 
> DECLARE
>   _operadora_id ALIAS FOR $1;
>   _actividad_id ALIAS FOR $2;
>   _contacto_id integer;
> 
> BEGIN
> 
>   -- BEGIN;
> 
> SELECT min(id) INTO _contacto_id FROM contactos 
>   WHERE contactos.operadora_id IS NULL AND contactos.actividad_id
> = _actividad_id;
> 
> UPDATE contactos SET operadora_id = _operadora_id WHERE id = _contacto_id;
> 
>   -- COMMIT;
> 
>   INSERT INTO llamadas (contacto_id, operadora_id, fecha) 
> VALUES (_contacto_id, _operadora_id, now());
> 
>   RETURN _contacto_id;
> END
> 
> and it works right, but I need atomic execution from --BEGIN and
> --COMMIT, and manual says it is not possible to have transactions in
> PL/pgSQL procedures :-(
> 
> May be with LOCK TABLE?
> 
> 
> 

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [SQL] transaction

2004-04-26 Thread denis

Hi,

You can achieve this by:

1. Create a new table.
2. Insert the data into it.
3. Write a trigger on this table.
4. In the trigger, issue an UPDATE and check whether it updated any
records. If not, fire an INSERT.
(Here I am updating first and then inserting... just the reverse.)

The code looks like:

UPDATE tempxitag SET qty = qty + nqty WHERE ccod = cccod;
GET DIAGNOSTICS nFound = ROW_COUNT;
IF nFound = 0 THEN
    INSERT INTO tempxitag (ccod, qty)
    VALUES (cccod, nqty);
END IF;

HTH

Denis


- Original Message -
From: Bruno Wolff III <[EMAIL PROTECTED]>
To: H.J. Sanders <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, April 21, 2004 7:30 AM
Subject: Re: [SQL] transaction


> On Tue, Apr 20, 2004 at 21:14:48 +0200,
>   "H.J. Sanders" <[EMAIL PROTECTED]> wrote:
> >
> > Hello list.
> >
> > We are migrating from Informix to PSQL.
> >
> > In Informix we used to do:
> >
> > - BEGIN WORK
> >
> > - INSERT ROW
> >
> > - IF FAILED THEN UPDATE ROW
> >
> > - COMMIT WORK
> >
> >
> > In PSQL  this does not seem to work because after the first error (the
> > insert)
> > everything is omitted.
> >
> > Has someone found a solution for this (we have 1000's of this type).
>
> This question has come up a few times over the last year and there isn't
> a great answer. Locking the table is a simple solution, but can lead to
> problems because of contention.
>
> ---(end of broadcast)---
> TIP 7: don't forget to increase your free space map settings



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] transaction

2004-04-21 Thread Bruno Wolff III
On Wed, Apr 21, 2004 at 12:58:56 +0530,
  [EMAIL PROTECTED] wrote:
> 
> The code looks like:
> 
> UPDATE tempxitag SET qty = qty + nqty WHERE ccod = cccod;
> GET DIAGNOSTICS nFound = ROW_COUNT;
> IF nFound = 0 THEN
>     INSERT INTO tempxitag (ccod, qty)
>     VALUES (cccod, nqty);
> END IF;

You can still get errors if two transactions try to refer to the same
nonexistent record at the same time. Postgres doesn't do predicate
locking, so the update won't lock the to-be-inserted row, and both
transactions may see the record as not existing and both try to do
an insert.

Updating, checking the count, and then trying an insert if the count was
0 (retrying if the insert fails) may be a better approach than locking
the table. However, since this is an existing application, it may be hard
to make this complicated a change.

If there is flexibility in how the task gets done, switching to something
based on sequences is probably the way to go.
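The update-then-insert-with-retry approach Bruno describes is essentially the merge loop later shown in the PostgreSQL documentation; a sketch, using an illustrative wrapper function around the table from the earlier post:

```sql
CREATE FUNCTION merge_qty(k text, n integer) RETURNS void AS $$
BEGIN
    LOOP
        UPDATE tempxitag SET qty = qty + n WHERE ccod = k;
        IF found THEN
            RETURN;  -- row existed and was updated
        END IF;
        -- Not there: try to insert.  A concurrent insert of the same
        -- key raises unique_violation; loop and try the UPDATE again.
        BEGIN
            INSERT INTO tempxitag (ccod, qty) VALUES (k, n);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- fall through and retry
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

This relies on a unique constraint on ccod to make the race detectable, and on server-side exception blocks (PostgreSQL 8.0 or later).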

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [SQL] transaction

2004-04-21 Thread Andrew Sullivan
On Tue, Apr 20, 2004 at 09:14:48PM +0200, H.J. Sanders wrote:
> 
> - BEGIN WORK
> 
> - INSERT ROW
> 
> - IF FAILED THEN UPDATE ROW
> 
> - COMMIT WORK

You can do it the other way.  Begin, update; if 0 rows are updated
then insert.

A
-- 
Andrew Sullivan  | [EMAIL PROTECTED]

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] transaction

2004-04-20 Thread Bruno Wolff III
On Tue, Apr 20, 2004 at 21:14:48 +0200,
  "H.J. Sanders" <[EMAIL PROTECTED]> wrote:
> 
> Hello list.
> 
> We are migrating from Informix to PSQL.
> 
> In Informix we used to do:
> 
> - BEGIN WORK
> 
> - INSERT ROW
> 
> - IF FAILED THEN UPDATE ROW
> 
> - COMMIT WORK
> 
> 
> In PSQL  this does not seem to work because after the first error (the
> insert)
> everything is omitted.
> 
> Has someone found a solution for this (we have 1000's of this type).

This question has come up a few times over the last year and there isn't
a great answer. Locking the table is a simple solution, but can lead to
problems because of contention.

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [SQL] transaction processing after error in statement

2003-11-12 Thread Jan Wieck
Holger Jakobs wrote:

Calling a procedure is a statement as well, and it includes all other
procedures called from this procedure. So the statement level is always
the statements that were carried out directly in the transaction. If
anything within one statement fails, the statement was not carried out
and must not have any effect. It is not important whether the procedure
was fired by a trigger or called by another procedure.
So you define the smallest unit as being one single statement, as issued
by the client application and receiving one single return code over the
frontend/backend protocol.

That's almost what people think of as subtransactions. I think if we 
ever implement them, we will have some session setting that lets the 
backend behave like that and your needs will be satisfied.

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #
---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
 subscribe-nomail command to [EMAIL PROTECTED] so that your
 message can get through to the mailing list cleanly


Re: [SQL] transaction processing after error in statement

2003-11-11 Thread Holger Jakobs


On 11 Nov, Jan Wieck wrote:
> As long as we talk in an SQL context, can you please stick to SQL
> terms? I don't know exactly what you mean with "operation". If for
> example the statement
> 
>  DELETE FROM order_line WHERE ol_ordernum = 4711;
> 
> has 12 matching rows in order_line, is an operation the removal of one 
> single order line or do the actions performed by the triggers fired
> due to their removal count as separate operations for you? And if
> there is one that cannot be deleted because a row in another table
> with a foreign key references it, do you delete none of them or the
> remaining 11? And if you decide to delete none, how do you magically
> undo the work of the BEFORE triggers if you hit the foreign key after
> successfully processing 5 rows? Is there an SQL return code for
> "partial success"?
OK, let's say "statement" instead of "operation". 

No, there is no partial success. Either a statement delivers an "OK" or
it doesn't. Actually, you will have to undo anything the statement did
before the first error occurs. This may mean that you need some kind of
savepoint. If so, the necessity to implement this shows and it should be
given a high priority. I don't know how the other RDBMSs do it, but they
do. I am talking from the perspective of an RDBMS user, not as an
implementor.

Calling a procedure is a statement as well, and it includes all other
procedures called from this procedure. So the statement level is always
the statements that were carried out directly in the transaction. If
anything within one statement fails, the statement was not carried out
and must not have any effect. It is not important whether the procedure
was fired by a trigger or called by another procedure.


Are there any Open Source RDBMSs which behave like Oracle, Access and
Allbase? If so, one might look into their code to find out how they have
implemented it.


Coming back to Standards, here is a quote from the ANSI document:

   4.28 SQL-transactions

   The execution of a <rollback statement> may be initiated implicitly
   by an implementation when it detects unrecoverable errors. When
   such an error occurs, an exception condition is raised: transaction
   rollback with an implementation-defined subclass code.

This means that a rollback does not have to be initiated if an
unrecoverable error occurs; it only _may_ happen. Since it is
impractical, it should not.

AN EXAMPLE:

Let's have two tables, employees and doctors in a hospital.

create table emp (
  empno  integer primary key,
  name varchar(40)
);

create table doctor (
  empno integer primary key references emp,
  beepernumber integer unique
);

Now let a user enter the data of a doctor. First the data of the
employee part are sent to the database:
  insert into emp values (1, 'Fred');
  --> success
  
Second, the doctor-specific data are sent to the database:
  insert into doctor values (1, 34);
  --> error, beeper number already present, unique key violation

Since there was an error, we let the user key in a different 
beeper number for the doctor and send the data to the
database:
  insert into doctor (1, 45);
  -->should be successful (if 45 is not already there) and it _is_ in
  Allbase, Oracle, Access
  -->Postgres tells you something about an *ABORT* state

We commit the transaction: 
  commit work;
  
Effect in all other databases: a successfully entered doctor
Effect in PostgreSQL: nothing!

To get the same effect in PostgreSQL, we would have to rollback
(or commit, no difference) the transaction after the attempt of
inserting the non-unique beeper number and then re-insert all
data into all tables. WHY? It only makes programming more
complex, thus error-prone.
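[Editor's note: the per-statement recovery Holger asks for later became possible in PostgreSQL 8.0 via savepoints. The retry pattern can be sketched as below; Python's sqlite3 module is used as a stand-in engine purely for illustration (in SQLite a failed statement does not doom the transaction, so the SAVEPOINT / ROLLBACK TO pair here mirrors what PostgreSQL requires rather than what SQLite needs).]

```python
import sqlite3

# Stand-in for the emp/doctor schema from the example above.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, name VARCHAR(40))")
cur.execute("CREATE TABLE doctor (empno INTEGER PRIMARY KEY REFERENCES emp,"
            " beepernumber INTEGER UNIQUE)")
# Beeper number 34 is already taken by another doctor.
cur.execute("INSERT INTO emp VALUES (0, 'Greg')")
cur.execute("INSERT INTO doctor VALUES (0, 34)")

cur.execute("BEGIN")
cur.execute("INSERT INTO emp VALUES (1, 'Fred')")
cur.execute("SAVEPOINT before_doctor")
try:
    cur.execute("INSERT INTO doctor VALUES (1, 34)")   # duplicate beeper number
except sqlite3.IntegrityError:
    # In PostgreSQL 8.0+ this rollback is what clears the failed state;
    # without it the whole transaction would remain aborted.
    cur.execute("ROLLBACK TO SAVEPOINT before_doctor")
    cur.execute("INSERT INTO doctor VALUES (1, 45)")   # user keys in a new number
cur.execute("COMMIT")

print(cur.execute("SELECT empno, beepernumber FROM doctor"
                  " ORDER BY empno").fetchall())  # → [(0, 34), (1, 45)]
```

The earlier successful insert into emp survives, and only the failed doctor insert is undone and retried.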


I would appreciate it if most of the difficult tasks could be done within
the database system - that's what it's there for. The first reason is
that the implementors usually are far more advanced programmers than the
programmers of applications. The second reason is that this isolates the
tricky programming from the already complex logic of the application.

Additionally, whether one might personally think that this behaviour is
important or not, it is what most commercial RDBMS do. So it would make
porting of applications from e.g. Oracle to PostgreSQL dramatically
easier. Until now, this has been one of the pitfalls. That's why I
brought this up in the first place.


Sincerely,

Holger

-- 
[EMAIL PROTECTED], Bergisch Gladbach, Germany
Telefon (0 22 02) 5 99 91 oder (01 77) 7 92 24 66


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faqs/FAQ.html


Re: [SQL] transaction processing after error in statement

2003-11-11 Thread Holger Jakobs
> 
> Why is that "funny behaviour" for you? By putting the statements into
> a transaction block you told the data management system "I want this
> group of statements to be atomic". Atomic means all or nothing. It
> might not be exactly what you intended to say, and you have a point
> if you conclude that PostgreSQL is limited because it doesn't let you
> say anything in between all or nothing. But after all, thus far it is
> only doing what you asked for.
> 

It is "funny behaviour", because I expect those operations of the
transaction, which executed successfully, to be performed in an atomic
way. It is obvious that I cannot expect an operation which reported an
error to have any effect.

"Atomic" means that all operations (whether successful or not) will be
carried out all together or none of them - but only the successful ones
will have had an effect.

Again: Why not make a difference between "commit" and "rollback" in a
transaction in this case? Why not let the user decide which parts should
be committed? The practical reason is that programming would become a lot
more convenient. (if there is a practical reason it does not necessarily
need a technical reason, I believe.)

Sincerely,

Holger

-- 
[EMAIL PROTECTED], Bergisch Gladbach, Germany
Telefon (0 22 02) 5 99 91 oder (01 77) 7 92 24 66




Re: [SQL] transaction processing after error in statement

2003-11-11 Thread Jan Wieck
Holger Jakobs wrote:

Why is that "funny behaviour" for you? By putting the statements into
a transaction block you told the data management system "I want this
group of statements to be atomic". Atomic means all or nothing. It
might not be exactly what you intended to say, and you have a point
if you conclude that PostgreSQL is limited because it doesn't let you
say anything in between all or nothing. But after all, thus far it is
only doing what you asked for.
It is "funny behaviour", because I expect those operations of the
transaction, which executed successfully, to be performed in an atomic
way. It is obvious that I cannot expect an operation which reported an
error to have any effect.
"Atomic" means that all operations (whether successful or not) will be
carried out all together or none of them - but only the successful ones
will have had an effect.
As long as we talk in an SQL context, can you please stick to SQL terms? 
I don't know exactly what you mean by "operation". If, for example, the 
statement

DELETE FROM order_line WHERE ol_ordernum = 4711;

has 12 matching rows in order_line, is an operation the removal of one 
single order line or do the actions performed by the triggers fired due 
to their removal count as separate operations for you? And if there is 
one that cannot be deleted because a row in another table with a foreign 
key references it, do you delete none of them or the remaining 11? And 
if you decide to delete none, how do you magically undo the work of the 
BEFORE triggers if you hit the foreign key after successfully processing 
5 rows? Is there an SQL return code for "partial success"?

The question about "partial success" is the important part here. Imagine 
a stored procedure in PL/pgSQL consisting of two INSERTs. One fails with 
a duplicate key error, the other one succeeds. For technical reasons the 
language lacks an exception handling mechanism, so you have to define 
whether the other statement succeeds or nothing of the procedure does, 
because you have no way to report "partial success"; there is no return 
code defined for that.

Again: Why not make a difference between "commit" and "rollback" in a
transaction in this case? Why not let the user decide which parts should
be committed? The practical reason is that programming would become a lot
more convenient. (if there is a practical reason it does not necessarily
need a technical reason, I believe.)
Nobody said that it should not be possible. But you have to dig a little 
deeper and make a bit more of a complete proposal for this, covering the 
different possible failure reasons, definitions how exactly to react in 
case of statements affecting multiple rows, related triggers and so on 
and so forth. "Make a difference between commit and rollback" is way too 
fuzzy here.

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #


Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Rod Taylor
> Although I am not aware of the roots of this discussion, I would like
> to comment at this point.
> 
> When we work with sequences, an aborted transaction does have
> a permanent effect on the last value of a sequence. Is this behaviour
> not a violation of the above definition of a transaction?

I believe you are correct, which is probably why Oracle offers
serialized sequences for those who want them to be. Sequences have been
explicitly documented as functioning the way they do as a concession for
performance.




Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Rajesh Kumar Mallah




Rod Taylor wrote:

>> be recovered either. When committing a transaction the effects of all
>> operations that did not fail will be made permanent. This is how
>> transaction processing is described in the literature.
>
> I would be interested in reading that (URLs please) as I didn't see
> anything in the spec that was interesting on this topic.
>
> 4.8.5 from Framework (part 01)
> An SQL-transaction (transaction) is a sequence of executions of
> SQL-statements that is atomic with respect to recovery. That is
> to say: either the execution result is completely successful, or
> it has no effect on any SQL-schemas or SQL-data.

Although I am not aware of the roots of this discussion, I would like
to comment at this point.

When we work with sequences, an aborted transaction does have
a permanent effect on the last value of a sequence. Is this behaviour
not a violation of the above definition of a transaction?


Regds
Mallah.

Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Rod Taylor
> be recovered either. When committing a transaction the effects of all
> operations that did not fail will be made permanent. This is how
> transaction processing is described in the literature.

I would be interested in reading that (URLs please) as I didn't see
anything in the spec that was interesting on this topic.

4.8.5 from Framework (part 01)
An SQL-transaction (transaction) is a sequence of executions of
SQL-statements that is atomic with respect to recovery. That is
to say: either the execution result is completely successful, or
it has no effect on any SQL-schemas or SQL-data.

The "execution result is completely successful" could certainly be used
to back up PostgreSQLs choice to force a rollback. However, it doesn't
differentiate between execution of what the user requested, and
execution of recovery procedures on the successful user elements.

Regardless, I wish a commit on a failed transaction would throw an
error -- END is good enough for Rollback or Commit.

For PostgreSQL to implement this we need Savepoints or nested
transactions internally since in many cases data is physically written
in order to perform things like Foreign Key constraint checks.




Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Jan Wieck
Holger Jakobs wrote:

Hi Stephan,

On  9 Nov, Stephan Szabo wrote:
On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:

Whenever an error occurs within the transaction, PostgreSQL puts the
whole transaction in an *ABORT* state, so that there is no difference
at all between COMMITing or ROLLBACKing it. Even commands
successfully carried out before the error occurred are rolled back,
even if I COMMIT the transaction, where no error message whatsoever
is shown.
In PostgreSQL all errors are currently considered unrecoverable, and
all statements in a transaction must commit or rollback together as a
single unit. In the future an implementation of nested transactions or
savepoints would presumably relax this limitation to only the
successfully committed subtransactions or statements that were not
separately rolled back to a previous savepoint.
 
What I meant was not subtransactions or savepoints, but the funny
behaviour that operations already successfully carried out never will be
committed, just because some other operation later within the same
transaction fails. This is far different from the behaviour of all other
DBMSs I know. Why not:
Why is that "funny behaviour" for you? By putting the statements into a 
transaction block you told the data management system "I want this group 
of statements to be atomic". Atomic means all or nothing. It might not 
be exactly what you intended to say, and you have a point if you 
conclude that PostgreSQL is limited because it doesn't let you say 
anything in between all or nothing. But after all, thus far it is only 
doing what you asked for.

Jan

   begin work;
   insert into x values (1, 'hi');
   --> success
   insert into x values (1, 'there');
   --> failure due to primary key violation
   insert into x values (2, 'foo');
   --> success
   commit work;
and have two new tuples in the table? Why do _all_ of these operations
have to be rolled back? I just don't get it that this has anything to do
with savepoints or so. I don't see any problem with an error being
recoverable, because the second insert failed and does not have to be
recovered while the first and the third worked fine and does not have to
be recovered either. When committing a transaction the effects of all
operations that did not fail will be made permanent. This is how
transaction processing is described in the literature.
If a programmer wants the whole transaction to fail because one part
failed, (s)he can always program a rollback in case of at least one
error. But there should always be a difference between a rollback and a
commit, after at least one statement changing data has reported a
success.
Hopefully this can be clarified and perhaps improved within PostgreSQL.
After all, PostgreSQL claims to be close to the standards.
Sincerely,

Holger


--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #


Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Holger Jakobs
Hi Stephan,

On  9 Nov, Stephan Szabo wrote:
> On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:
> 
>> Whenever an error occurs within the transaction, PostgreSQL puts the
>> whole transaction in an *ABORT* state, so that there is no difference
>> at all between COMMITing or ROLLBACKing it. Even commands
>> successfully carried out before the error occurred are rolled back,
>> even if I COMMIT the transaction, where no error message whatsoever
>> is shown.
> 
> In PostgreSQL all errors are currently considered unrecoverable, and
> all statements in a transaction must commit or rollback together as a
> single unit. In the future an implementation of nested transactions or
> savepoints would presumably relax this limitation to only the
> successfully committed subtransactions or statements that were not
> separately rolled back to a previous savepoint.
 
What I meant was not subtransactions or savepoints, but the funny
behaviour that operations already successfully carried out never will be
committed, just because some other operation later within the same
transaction fails. This is far different from the behaviour of all other
DBMSs I know. Why not:

   begin work;
   insert into x values (1, 'hi');
   --> success
   insert into x values (1, 'there');
   --> failure due to primary key violation
   insert into x values (2, 'foo');
   --> success
   commit work;

and have two new tuples in the table? Why do _all_ of these operations
have to be rolled back? I just don't get it that this has anything to do
with savepoints or so. I don't see any problem with an error being
recoverable, because the second insert failed and does not have to be
recovered while the first and the third worked fine and does not have to
be recovered either. When committing a transaction the effects of all
operations that did not fail will be made permanent. This is how
transaction processing is described in the literature.

If a programmer wants the whole transaction to fail because one part
failed, (s)he can always program a rollback in case of at least one
error. But there should always be a difference between a rollback and a
commit, after at least one statement changing data has reported a
success.

Hopefully this can be clarified and perhaps improved within PostgreSQL.
After all, PostgreSQL claims to be close to the standards.

Sincerely,

Holger
-- 
[EMAIL PROTECTED], Bergisch Gladbach, Germany
Telefon (0 22 02) 5 99 91 oder (01 77) 7 92 24 66




Re: [SQL] transaction processing after error in statement

2003-11-10 Thread Stephan Szabo

On Mon, 10 Nov 2003, Holger Jakobs wrote:

> Hi Stephan,
>
> On  9 Nov, Stephan Szabo wrote:
> > On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:
> >
> >> Whenever an error occurs within the transaction, PostgreSQL puts the
> >> whole transaction in an *ABORT* state, so that there is no difference
> >> at all between COMMITing or ROLLBACKing it. Even commands
> >> successfully carried out before the error occurred are rolled back,
> >> even if I COMMIT the transaction, where no error message whatsoever
> >> is shown.
> >
> > In PostgreSQL all errors are currently considered unrecoverable, and
> > all statements in a transaction must commit or rollback together as a
> > single unit. In the future an implementation of nested transactions or
> > savepoints would presumably relax this limitation to only the
> > successfully committed subtransactions or statements that were not
> > separately rolled back to a previous savepoint.
>
> What I meant was not subtransactions or savepoints, but the funny
> behaviour that operations already successfully carried out never will be
> committed, just because some other operation later within the same
> transaction fails. This is far different from the behaviour of all other
> DBMSs I know. Why not:
>
>begin work;
>insert into x values (1, 'hi');
>--> success
>insert into x values (1, 'there');
>--> failure due to primary key violation
>insert into x values (2, 'foo');
>--> success
>commit work;
>
> and have two new tuples in the table? Why do _all_ of these operations
> have to be rolled back? I just don't get it that this has anything to do

Right now there's a technical reason (the system won't handle partially
completed statements reasonably - but nested transactions will give a way
for that to work presumably hence their mention) and a logical reason
(many of the developers seem to believe that partial commit isn't a good
behavior and that transactions should be all commit or all fail).
Personally, I think it'd be nice to have some way to deal with errors
other than retrying the whole sequence, but in the meantime we have a
system which basically meets the words if not the intent of the spec.

> Hopefully this can be cleared and perhaps improved within PostgreSQL.
> Otherwise, PostgreSQL always claims to be close to standards.

This is actually fairly standards compliant. Note my message about errors
being unrecoverable. The SQL spec allows an entire transaction to be
rolled back upon unrecoverable errors. Our failed state behavior may not
be (if we sent an error on commit, I believe it would be), but forcing the
entire transaction to roll back is.
That may not be what the spec intended, but it does seem to be allowed.



Re: [SQL] transaction processing after error in statement

2003-11-09 Thread Stephan Szabo
On Fri, 7 Nov 2003 [EMAIL PROTECTED] wrote:

> Whenever an error occurs within the transaction, PostgreSQL puts the
> whole transaction in an *ABORT* state, so that there is no difference at
> all between COMMITing or ROLLBACKing it. Even commands successfully
> carried out before the error occurred are rolled back, even if I COMMIT
> the transaction, where no error message whatsoever is shown.

In PostgreSQL all errors are currently considered unrecoverable, and all
statements in a transaction must commit or rollback together as a single
unit.
In the future an implementation of nested transactions or savepoints would
presumably relax this limitation to only the successfully committed
subtransactions or statements that were not separately rolled back to a
previous savepoint.

> Additionally, I have discovered that phantom reads occur in PostgreSQL
> even if isolation mode serializable is used. Also not so nice!

You're going to have to give more information for anyone to even begin to
understand the case you're seeing.



Re: [SQL] transaction management in plpgsql functions

2003-11-09 Thread Chester Kustarz
http://www.postgresql.org/docs/7.2/interactive/plpgsql-structure.html

"It is important not to confuse the use of BEGIN/END for grouping statements in 
PL/pgSQL with the database commands for transaction control. PL/pgSQL's BEGIN/END are 
only for grouping; they do not start or end a transaction. Functions and trigger 
procedures are always executed within a transaction established by an outer query --- 
they cannot start or commit transactions, since PostgreSQL does not have nested 
transactions."

Makes sense if you think about it.

On Thu, 6 Nov 2003, Cris Carampa wrote:
> It seems that transaction management statements (begin work...commit
> work) are not allowed into plpgsql functions. Is it true? If true, what
> happens if I put a DML statement into a function? Is it automatically
> commited every time the function executes? Is there no way to rollback
> the changes?




Re: [SQL] transaction management in plpgsql functions

2003-11-09 Thread Christopher Browne
After takin a swig o' Arrakan spice grog, Cris Carampa <[EMAIL PROTECTED]> belched 
out...:
> It seems that transaction management statements (begin work...commit
> work) are not allowed into plpgsql functions. Is it true? If true,
> what happens if I put a DML statement into a function? Is it
> automatically commited every time the function executes? Is there no
> way to rollback the changes?

The "problem" with using BEGIN/COMMIT in plpgsql is fundamentally that
those functions have to be _started_ in the context of a transaction,
so by the time they get started, there is already a transaction in
progress.

If-and-when support for nested transactions gets into place, you would
presumably be able to have nested transactions inside functions.

What happens may be a little different from what you think; things are
not COMMITted when the function executes, but rather when the COMMIT
takes place /on the transaction in which the function runs/.

Thus...

BEGIN;
INSERT INTO T1 VALUES (4, 5);
INSERT INTO T2 VALUES (6, 7, NOW());
SELECT FUNNY_FUNCTION(4,5,6,7, NOW());
DELETE FROM T1;
DELETE FROM T2;
COMMIT;

All of the changes commit as of the COMMIT statement at the end, and
not before.

If you had DDL creating table T3 in FUNNY_FUNCTION, then T3 would not
become visible to other users until the COMMIT, although the current
transaction could readily add/modify records before the COMMIT.
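[Editor's note: the visibility rule Christopher describes (changes made inside the function are invisible to other sessions until the outer COMMIT) can be sketched with two connections. Python's sqlite3 is used as a stand-in engine with hypothetical table names; the same principle holds in PostgreSQL with READ COMMITTED isolation.]

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)   # a second, independent session

writer.execute("CREATE TABLE t1 (a INTEGER, b INTEGER)")
writer.commit()

writer.execute("INSERT INTO t1 VALUES (4, 5)")  # implicitly opens writer's transaction
# The uncommitted row is invisible to the other session ...
before = reader.execute("SELECT count(*) FROM t1").fetchone()[0]
writer.commit()
# ... and only becomes visible once the outer COMMIT happens.
after = reader.execute("SELECT count(*) FROM t1").fetchone()[0]
print(before, after)  # → 0 1
```

Everything a function does, including its inserts, rides along in the caller's transaction exactly like the direct INSERTs in this trace.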
-- 
(format nil "[EMAIL PROTECTED]" "cbbrowne" "cbbrowne.com")
http://cbbrowne.com/info/sap.html
If you're sending someone some Styrofoam, what do you pack it in?



Re: [SQL] transaction locking

2003-09-18 Thread tom baker
On Thursday 18 September 2003 11:24, Stephan Szabo pronounced:
> On Thu, 18 Sep 2003, tom baker wrote:
> > ah, yes, i DID not see the NOT. that is fixed. here is the code that is
> > having problems (pared down; all error checking has been removed!)
> >
> > $res=pg_query( "BEGIN TRANSACTION" ) ;
> >
> > $sql = "LOCK TABLE parts IN SHARE ROW EXCLUSIVE MODE" ;
> > $res = pg_query( $sql ) ;
> >
> > $sql = "SET CONSTRAINTS ALL DEFERRED" ;
> > $res = pg_query( $sql ) ;
> >
> > if ( $amc_part_no == "" )
> > {
> > $sql = "SELECT max(amc_part_no) FROM parts WHERE amc_part_no like 'G%'"
> > ; $res = pg_query( $sql ) ;
> > $amc_part_no = pg_fetch_result( $res , 0 , 0 ) ;
> > # and generate the next part number...
> > }
> >
> > $res = pg_query( $sql ) ;  <<=
> >
> > $sql = "INSERT INTO parts VALUES ( '".$make."', '".$amc_part_no."',
> > '".$group_no."', '".$subgroup_no."', '".$part_name."',
> > '".$description."', '".$prim_grp."', '".$prim_sbg."', '".$no_req."',
> > '".$weight."',
> > '".$graphic."' )" ;
> > $res = pg_query( $sql ) ;
> >
> > if ( ( $alt_group > "" ) )
> > {
> > $sql = "INSERT INTO part_group ( make, amc_part_no, group_no ) VALUES (
> > '$make' , '$amc_part_no' , '$alt_group' ) " ;
> > $res = pg_query( $sql ) ;
> > }
> >
> > $sql = "INSERT INTO application VALUES (
> > '$make','$amc_part_no','$tyears', '$Amodel' )" ;
> > $res = pg_query( $sql ) ;
>
> Can you print out the queries you think you're sending and turn on query
> logging and see what the database thinks?  Inserting only the first three
> columns into parts and then a matching application row doesn't seem to
> fail for me in 7.3.4 or 7.4 beta 1.

stephan:

i want to give you a great big thank you. THANK YOU!
i was shooting myself in the foot vigorously. one sql statement was being 
executed twice, without the appropriate error checking (do i feel foolish or 
what :(( ). see where i put "<<=" above!


-- 
regards,
tom baker
former ingres programmer...
Magary's Principle:
When there is a public outcry to cut deadwood and fat from any
government bureaucracy, it is the deadwood and the fat that do
the cutting, and the public's services are cut.




Re: [SQL] transaction locking

2003-09-18 Thread Stephan Szabo
On Thu, 18 Sep 2003, tom baker wrote:

> ah, yes, i DID not see the NOT. that is fixed. here is the code that is having
> problems (pared down; all error checking has been removed!)
>
> $res=pg_query( "BEGIN TRANSACTION" ) ;
>
> $sql = "LOCK TABLE parts IN SHARE ROW EXCLUSIVE MODE" ;
> $res = pg_query( $sql ) ;
>
> $sql = "SET CONSTRAINTS ALL DEFERRED" ;
> $res = pg_query( $sql ) ;
>
> if ( $amc_part_no == "" )
> {
>   $sql = "SELECT max(amc_part_no) FROM parts WHERE amc_part_no like 'G%'" ;
>   $res = pg_query( $sql ) ;
>   $amc_part_no = pg_fetch_result( $res , 0 , 0 ) ;
>   # and generate the next part number...
> }
>
> $res = pg_query( $sql ) ;
>
> $sql = "INSERT INTO parts VALUES ( '".$make."', '".$amc_part_no."',
> '".$group_no."', '".$subgroup_no."', '".$part_name."', '".$description."',
> '".$prim_grp."', '".$prim_sbg."', '".$no_req."', '".$weight."',
> '".$graphic."' )" ;
> $res = pg_query( $sql ) ;
>
> if ( ( $alt_group > "" ) )
> {
>   $sql = "INSERT INTO part_group ( make, amc_part_no, group_no ) VALUES (
> '$make' , '$amc_part_no' , '$alt_group' ) " ;
>   $res = pg_query( $sql ) ;
> }
>
>   $sql = "INSERT INTO application VALUES ( '$make','$amc_part_no','$tyears',
> '$Amodel' )" ;
>   $res = pg_query( $sql ) ;

Can you print out the queries you think you're sending and turn on query
logging and see what the database thinks?  Inserting only the first three
columns into parts and then a matching application row doesn't seem to
fail for me in 7.3.4 or 7.4 beta 1.



Re: [SQL] transaction locking

2003-09-18 Thread tom baker
On Wednesday 17 September 2003 22:33, Stephan Szabo pronounced:
> On Wed, 17 Sep 2003, tom baker wrote:
> > i am (probably) shooting myself in the foot, but here goes the question.
> >
> > inside of a begin transaction/commit block, i am attempting to insert a
> > record into a parts table, and then insert a record that references the
> > part into an application table. and the transaction fails with this error
> > message:
> >
> > ERROR: app_part_ins_trig referential integrity violation - key referenced
> > from application not found in parts
> >
> > i understand that since the record was inserted into the parts table
> > *after* the BEGIN TRANSACTION statement, the insert into the application
> > table cannot see that a record exists until a commit.
>
> Assuming that they are both in the same transaction, the second insert
> should be able to see the results of the first insert, can you send an
> example sequence of inserts as well? (Also see the note below)
>
> > and the constraint that is causing problems is:
> > CREATE CONSTRAINT TRIGGER app_part_ins_trig AFTER INSERT OR UPDATE ON
> > application  FROM parts NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW
> > EXECUTE PROCEDURE "RI_FKey_check_ins" ('app_part_ins_trig',
> > 'application', 'parts', 'UNSPECIFIED', 'make', 'make', 'amc_part_no',
> > 'amc_part_no');
> >
> > $sql = "SET CONSTRAINTS ALL DEFERRED" ;
>
> I'm not sure if you know, but this is not going to defer the constraint
> above because it was created with NOT DEFERRABLE.
>

ah, yes, i DID not see the NOT. that is fixed. here is the code that is having 
problems (pared down; all error checking has been removed!)

$res=pg_query( "BEGIN TRANSACTION" ) ;

$sql = "LOCK TABLE parts IN SHARE ROW EXCLUSIVE MODE" ;
$res = pg_query( $sql ) ;

$sql = "SET CONSTRAINTS ALL DEFERRED" ;
$res = pg_query( $sql ) ;

if ( $amc_part_no == "" )
{
$sql = "SELECT max(amc_part_no) FROM parts WHERE amc_part_no like 'G%'" ;
$res = pg_query( $sql ) ;
$amc_part_no = pg_fetch_result( $res , 0 , 0 ) ;
# and generate the next part number...
}

$res = pg_query( $sql ) ;

$sql = "INSERT INTO parts VALUES ( '".$make."', '".$amc_part_no."', 
'".$group_no."', '".$subgroup_no."', '".$part_name."', '".$description."', 
'".$prim_grp."', '".$prim_sbg."', '".$no_req."', '".$weight."', 
'".$graphic."' )" ;
$res = pg_query( $sql ) ;

if ( ( $alt_group > "" ) )
{
$sql = "INSERT INTO part_group ( make, amc_part_no, group_no ) VALUES ( 
'$make' , '$amc_part_no' , '$alt_group' ) " ;
$res = pg_query( $sql ) ;
}

$sql = "INSERT INTO application VALUES ( '$make','$amc_part_no','$tyears', 
'$Amodel' )" ;
$res = pg_query( $sql ) ;

if ( $cmplt == 0 || $failed )
{
pg_query( "ROLLBACK TRANSACTION" ) ;
}
else
{
pg_query( "COMMIT TRANSACTION" ) ;
}

-- 
regards,
tom baker
former ingres programmer...
You'll never be the man your mother was!




Re: [SQL] transaction locking

2003-09-17 Thread Stephan Szabo

On Wed, 17 Sep 2003, tom baker wrote:

> i am (probably) shooting myself in the foot, but here goes the question.
>
> inside of a begin transaction/commit block, i am attempting to insert a record
> into a parts table, and then insert a record that references the part into an
> application table. and the transaction fails with this error message:
>
> ERROR: app_part_ins_trig referential integrity violation - key referenced from
> application not found in parts
>
> i understand that since the record was inserted into the parts table *after*
> the BEGIN TRANSACTION statement, the insert into the application table cannot
> see that a record exists until a commit.

Assuming that they are both in the same transaction, the second insert
should be able to see the results of the first insert, can you send an
example sequence of inserts as well? (Also see the note below)

> and the constraint that is causing problems is:
> CREATE CONSTRAINT TRIGGER app_part_ins_trig AFTER INSERT OR UPDATE ON
> application  FROM parts NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW
> EXECUTE PROCEDURE "RI_FKey_check_ins" ('app_part_ins_trig', 'application',
> 'parts', 'UNSPECIFIED', 'make', 'make', 'amc_part_no', 'amc_part_no');
>
> $sql = "SET CONSTRAINTS ALL DEFERRED" ;

I'm not sure if you know, but this is not going to defer the constraint
above because it was created with NOT DEFERRABLE.



Re: [SQL] Transaction Newbie

2002-09-09 Thread Christopher Kings-Lynne

> if the primary key is table_id, with default value
> "nextval('table_seq') - then these two statements:
>
> insert into table (field1,field2,field3) values (value1,value2,value3)
> select currval('table_seq')
>
> work to get me the value I need. Except, of course if someone else
> has inserted a row inbetween these two statements.

Hmmm - I'm not sure currval has that problem: it returns the value most
recently generated by nextval() in your own session, so a concurrent
insert by someone else can't change it. Have you actually tried it with
two psql windows?
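A sketch of that two-window experiment (table and sequence names are hypothetical, modeled on the quoted schema):

```sql
-- currval() is session-local: it reports the last value nextval()
-- produced in THIS session, regardless of what other sessions do.
CREATE SEQUENCE table_seq;
CREATE TABLE t (
    table_id integer DEFAULT nextval('table_seq') PRIMARY KEY,
    field1   text
);

-- Session A:
INSERT INTO t (field1) VALUES ('a');   -- draws a value from the sequence
-- Session B, in between:
INSERT INTO t (field1) VALUES ('b');   -- draws the next value
-- Session A again:
SELECT currval('table_seq');           -- still returns A's value, not B's
```

So the insert-then-currval pattern is safe against concurrent inserts, as long as both statements run in the same session.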

> I tried a transaction test, and this is what I got:
>
> pew=# begin work;

You can just go 'begin;'

> BEGIN
> pew=# insert into categories values
> ('23423423','test','testing','3','today','today','mpm','test
> category');
> INSERT 83910 1
> pew=# select currval('category_id');
> NOTICE:  current transaction is aborted, queries ignored until end of
> transaction block
> *ABORT STATE*

As soon as you see this, it means an earlier statement in the
transaction failed (a syntax error or some other error), which
automatically aborts the rest of the transaction.

> pew=# commit work
> pew-# ;

You can't commit once the transaction is aborted, you need to ROLLBACK;

> COMMIT
> pew=# select * from categories;
>
> And the insert didn't happen.

It didn't happen because something caused the whole transaction to be
aborted.

> Am I thinking about this right? Is there a better way to get the
> value of a newly inserted record?

Chris


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



Re: [SQL] transaction in plpgsql

2002-06-27 Thread Nils Höglund


> how can i implement transactions inside my plpgsql functions 

I would guess that this is not possible, since Postgres doesn't support
nested transactions(?).


-- 
/Nils Höglund, Naqua KB

E-mail: [EMAIL PROTECTED]
Web:http://www.naqua.se/
Home Phone: +46 (0)18  30 09 19
Cellular Phone: +46 (0)709 91 97 65
Address:Nya Valsätrav. 26 B
SE-756 46
Uppsala, Sweden



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org





Re: [SQL] transaction in plpgsql

2002-06-27 Thread Josh Berkus

Joe,

> how can i implement transactions inside my plpgsql functions 

You can't.

Each PL/pgSQL function is automatically treated as a single transaction
by Postgres. As such, the entire function either succeeds or fails.

To manage a transaction *inside* the PL/pgSQL function, Postgres would
have to support nested transactions ... which we currently don't.
Nested transactions and MVCC are somewhat of a tangle to reconcile,
and we don't expect a solution until Postgres 8.0 (if then).
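A small sketch of that all-or-nothing behavior (table and function names are made up for illustration):

```sql
-- A PL/pgSQL function is atomic: if any statement in it raises an
-- error, every change the function made is rolled back together.
CREATE TABLE log (msg text);

CREATE FUNCTION atomic_demo() RETURNS integer AS '
BEGIN
    INSERT INTO log VALUES (''first'');
    RAISE EXCEPTION ''boom'';             -- aborts the whole function
    INSERT INTO log VALUES (''second'');  -- never reached
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

SELECT atomic_demo();        -- fails with ERROR: boom
SELECT count(*) FROM log;    -- 0: the first INSERT was rolled back too
```

There is no way, from inside the function, to commit the first INSERT and then continue: that would require a nested transaction.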

-Josh Berkus



---(end of broadcast)---





Re: [SQL] Transaction isolation level for plpgsql

2000-09-30 Thread Peter Eisentraut

Keith Wong writes:

> Does anybody know the transaction isolation level default when a plpgsql 
> stored procedure is called?
> Is it possible to set it? Or is this controlled by SPI?

It's whatever the transaction that called the function was set to.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [SQL] Transaction isolation level for plpgsql

2000-09-30 Thread Poul L. Christiansen

Use the command "\h set" and you'll see the parameters.

test=# \h set
Command: SET
Description: Set run-time parameters for session
Syntax:
SET variable { TO | = } { value | 'value' | DEFAULT }
SET CONSTRAINTS {ALL | constraintlist} mode
SET TIME ZONE { 'timezone' | LOCAL | DEFAULT }
SET TRANSACTION ISOLATION LEVEL { READ COMMITTED | SERIALIZABLE }

test=# show transaction isolation level;
NOTICE:  TRANSACTION ISOLATION LEVEL is READ COMMITTED
SHOW VARIABLE
test=#
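Note that the transaction form of the command only sticks if it is issued inside a transaction block, before any query in it. A sketch (the accounts table is hypothetical):

```sql
-- SET TRANSACTION ISOLATION LEVEL applies to the current transaction
-- and must come before the first query in it.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM accounts;   -- runs under SERIALIZABLE
COMMIT;

-- Issued outside a transaction block, the setting does not persist;
-- to change the default for the whole session, use:
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```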

HTH,
Poul L. Christiansen


Keith Wong wrote:
> 
> Hi all,
> 
> Does anybody know the transaction isolation level default when a plpgsql
> stored procedure is called?
> Is it possible to set it? Or is this controlled by SPI?
> 
> Cheers,
> Keith.