* Lincoln Yeoh:
If you use serializable transactions in PostgreSQL 9.1, you can
implement such constraints in the application without additional
locking. However, with concurrent writes and without an index, the rate
of detected serialization violations and resulting transaction aborts
will be high.
At 04:27 PM 1/20/2012, Florian Weimer wrote:
* Lincoln Yeoh:
Is there a simple way to get PostgreSQL to retry a transaction, or
does the application have to actually reissue all the necessary
statements again?
The application has to re-run the transaction, which might result in the
execution of different statements. In the
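Florian's point that the application must re-run the whole transaction is usually implemented as a retry loop around the transaction function. A minimal sketch, with a `SerializationFailure` class standing in for the driver's real error (with psycopg2 against PostgreSQL you would catch `psycopg2.errors.SerializationFailure`, SQLSTATE 40001); `run_with_retry` and `flaky_txn` are illustrative names, not anything from the thread:

```python
# Sketch of an application-side retry loop for serializable transactions.
# SerializationFailure stands in for the driver's error class
# (psycopg2.errors.SerializationFailure, SQLSTATE 40001, in real code).

class SerializationFailure(Exception):
    pass

def run_with_retry(txn, max_attempts=5):
    """Re-run the whole transaction callable until it commits.

    txn performs BEGIN ... COMMIT itself; on a serialization failure
    the database has already rolled the transaction back, so simply
    calling txn again is safe.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            # Re-running may execute *different* statements, because the
            # transaction sees a fresh snapshot on each attempt.

# Illustrative use: a transaction that fails twice, then succeeds.
attempts = []
def flaky_txn():
    attempts.append(1)
    if len(attempts) < 3:
        raise SerializationFailure("could not serialize access")
    return "committed"

result = run_with_retry(flaky_txn)
```

The point of wrapping the *whole* transaction function, rather than individual statements, is exactly what Florian describes: a retry starts from scratch and may take a different code path.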
* Gnanakumar:
Just create a unique index on EMAIL column and handle error if it comes
Thanks for your suggestion. Of course, I do understand that this could be
enforced/imposed at the database level at any time. But I'm trying to find
out whether this could be solved at the application layer itself.
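The "unique index plus handle the error" suggestion looks like this in application code. A self-contained sketch using Python's bundled SQLite driver purely for illustration; against PostgreSQL the shape is the same, but you would catch the driver's unique-violation error (e.g. `psycopg2.errors.UniqueViolation`, SQLSTATE 23505) instead of `sqlite3.IntegrityError`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emaillist (email TEXT)")
# The unique index is what actually enforces the constraint,
# even under concurrent writers.
conn.execute("CREATE UNIQUE INDEX emaillist_email_ux ON emaillist (email)")

def insert_ignore(conn, email):
    """Insert the address; treat a duplicate as a no-op."""
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute("INSERT INTO emaillist (email) VALUES (?)", (email,))
        return True          # inserted
    except sqlite3.IntegrityError:
        return False         # duplicate, ignored

first = insert_ignore(conn, "someone@example.com")
second = insert_ignore(conn, "someone@example.com")
count = conn.execute("SELECT count(*) FROM emaillist").fetchone()[0]
```

This is still "solved at the application layer" in the sense Gnanakumar asks about: the application decides what to do with the duplicate, while the index makes the decision race-free.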
On Thu, Jan 19, 2012 at 7:54 AM, Florian Weimer fwei...@bfk.de wrote:
On Thu, Jan 19, 2012 at 9:49 AM, Scott Marlowe scott.marl...@gmail.com wrote:
At 10:54 PM 1/19/2012, Florian Weimer wrote:
Hi,
Ours is a web-based application. We're trying to implement ON DUPLICATE
IGNORE for one of our application tables, named EMAILLIST. After a quick
Google search, I found the following easy, convenient single-SQL-statement
syntax to follow:
INSERT INTO EMAILLIST (EMAIL)
SELECT
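The statement is truncated in the archive, but it is presumably the widely circulated `INSERT ... SELECT ... WHERE NOT EXISTS` pattern. A runnable sketch of that pattern (SQLite via Python's standard library, used only so the example is self-contained; the SQL has the same shape in PostgreSQL). Note the caveat raised later in the thread: without a unique index, two concurrent sessions can both pass the `NOT EXISTS` check and insert the same address:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emaillist (email TEXT)")

def insert_if_absent(conn, email):
    # Inserts only when the address is not already present.
    # NOT a substitute for a unique index: two concurrent
    # transactions can both see "not exists" and both insert.
    with conn:
        conn.execute(
            """INSERT INTO emaillist (email)
               SELECT ?
               WHERE NOT EXISTS (SELECT 1 FROM emaillist WHERE email = ?)""",
            (email, email),
        )

insert_if_absent(conn, "someone@example.com")
insert_if_absent(conn, "someone@example.com")  # second call inserts nothing
count = conn.execute("SELECT count(*) FROM emaillist").fetchone()[0]
```

In a single session this does what the original poster wants; under concurrent writers it needs either the unique index suggested in the replies or serializable transactions as Florian describes.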
Subject: [GENERAL] On duplicate ignore
Hey Gnanakumar,
2012/1/18 Gnanakumar gna...@zoniac.com
13 matches