I'm using Postgres 8.4 on RHEL 6 and I need to support inserting data into
a table from multiple processes, with the different processes occasionally
trying to insert duplicate rows. Here's a simple example table:
CREATE TABLE test (
    tutc  TIMESTAMP WITHOUT TIME ZONE,
    id    INTEGER,
    value INTEGER,
    PRIMARY KEY (tutc, id)
);
If I run the following statement from 2 processes, then it's fine:

INSERT INTO test (tutc, id, value)
    SELECT '2015-01-01 01:02:03', 4, 5
    WHERE NOT EXISTS (SELECT 1 FROM test
                      WHERE tutc = '2015-01-01 01:02:03' AND id = 4);

But if I put the operation in a transaction, then the second process will
block until the first process's transaction is committed (which is fine),
but then its insert fails with a "duplicate key value violation". I'm
guessing this is because, inside a transaction, the SELECT only sees rows
committed before the second process's transaction began, so it doesn't see
the row the first process just inserted.
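
Here's roughly what each process runs in that case (a simplified sketch of
what I'm actually doing):

BEGIN;
INSERT INTO test (tutc, id, value)
    SELECT '2015-01-01 01:02:03', 4, 5
    WHERE NOT EXISTS (SELECT 1 FROM test
                      WHERE tutc = '2015-01-01 01:02:03' AND id = 4);
COMMIT;

Whichever process runs the INSERT second blocks on the first process's
uncommitted row and then hits the duplicate key error as soon as the first
process commits.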

Using an "upsert" type of function, like the one shown in the documentation
( see
http://www.postgresql.org/docs/8.4/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE
) seems like it might work, but I don't need to support updating and was
hoping to not have to use a custom function. So is there some way to catch
the unique_violation exception without creating a function? Or some other
solution to this?
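
For reference, the insert-only variant of that documented function that I'd
otherwise write would look something like this (an untested sketch; the
function name is just for illustration):

CREATE FUNCTION insert_test(p_tutc TIMESTAMP WITHOUT TIME ZONE,
                            p_id INTEGER,
                            p_value INTEGER) RETURNS VOID AS
$$
BEGIN
    INSERT INTO test (tutc, id, value) VALUES (p_tutc, p_id, p_value);
EXCEPTION WHEN unique_violation THEN
    -- another process inserted the same (tutc, id) first; ignore it
    NULL;
END;
$$
LANGUAGE plpgsql;

I'd just rather not have to create and maintain a function like this if
there's a simpler way.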

Thanks,
Dave
