On Dec 29, 2011, at 23:25, Jan Bakuwel <jan.baku...@greenpeace.org> wrote:

> Hi,
> 
> Maybe there is a simple solution for PostgreSQL behaviour that is
> annoying me...
> 
> I've got users making updates to a master table and a number of detail
> tables. All changes to the master record and related detail records are
> encapsulated in a transaction so everything can be rolled back if
> necessary and also to lock those master and related records for the user
> making the changes.
> 
> When they do something that violates a constraint (for example adding a
> duplicate detail record where that is not allowed), PostgreSQL aborts
> the transaction. What I would much rather have is that PostgreSQL
> returns an error but does not cancel the transaction as it's perfectly
> OK (from a user's point of view) to try to do something that violates a
> constraint.
> 
> What annoys me is that I don't think that a constraint violation made by
> a user should result in an aborted transaction. There is probably a very
> good reason to do that however the logic escapes me...
> 
> Of course I can start testing existing values in the database before
> accepting them in the user interface but that's putting the cart before
> the horse. I'd much rather use the constraints at the database level to
> tell me a particular update can't be done, and do that without losing
> everything else I happened to have done in that transaction until that
> point.
> 
> Any suggestions?
> 
> Jan
> 
> 

Start a "savepoint" before each sub-update and roll back to the savepoint if the 
update fails, then try again with different data.  If it succeeds, you 
release the savepoint and move on.
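A minimal sketch of this pattern, using Python's built-in sqlite3 module only so the example is self-contained and runnable; the SAVEPOINT, ROLLBACK TO SAVEPOINT, and RELEASE SAVEPOINT commands are standard SQL and work the same way inside a PostgreSQL transaction (the table and savepoint names here are made up for illustration):

```python
import sqlite3

# In-memory database; isolation_level=None lets us manage the
# transaction explicitly with BEGIN/COMMIT.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None
cur = conn.cursor()
cur.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY, val TEXT)")

cur.execute("BEGIN")
cur.execute("INSERT INTO detail VALUES (1, 'first')")

# Guard the risky sub-update with a savepoint.
cur.execute("SAVEPOINT sub_update")
try:
    cur.execute("INSERT INTO detail VALUES (1, 'dup')")  # violates the PK
    cur.execute("RELEASE SAVEPOINT sub_update")
except sqlite3.IntegrityError:
    # Only the failed sub-update is undone; the enclosing
    # transaction (and the earlier insert) stays open.
    cur.execute("ROLLBACK TO SAVEPOINT sub_update")
    # Retry with different data, then release the savepoint.
    cur.execute("INSERT INTO detail VALUES (2, 'retry')")
    cur.execute("RELEASE SAVEPOINT sub_update")

cur.execute("COMMIT")
rows = cur.execute("SELECT id, val FROM detail ORDER BY id").fetchall()
print(rows)  # [(1, 'first'), (2, 'retry')]
```

The first insert survives the constraint violation because only the work done since the savepoint is rolled back, which is exactly the behaviour asked for above.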

David J.
-- 
Sent via pgsql-sql mailing list (pgsql-sql@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-sql