P.S. Is this use case (do not retry a transaction after a serialization or deadlock failure) the most interesting one, or should failed transactions be retried (and, if so, how many times, when there seems to be no hope of success...)?

I can't quite parse that sentence; could you restate it?

The way I read it was that the most interesting solution would retry
a transaction from the beginning on a serialization failure or
deadlock failure.  Most people who use serializable transactions (at
least in my experience) run through a framework that does that
automatically, regardless of what client code initiated the
transaction.  These retries are generally hidden from the client
code -- it just looks like the transaction took a bit longer.
Sometimes people will have a limit on the number of retries.  I
never used such a limit and never had a problem, because our
implementation of serializable transactions will not throw a
serialization failure error until one of the transactions involved
in causing it has successfully committed -- meaning that the retry
can only hit this again on a *new* set of transactions.
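
To make that concrete, here is a minimal sketch of such a retry
wrapper in Python with psycopg2 (the name run_with_retry and the
work() callback are mine, for illustration; psycopg2 raises
TransactionRollbackError for both serialization failures, SQLSTATE
40001, and deadlocks, SQLSTATE 40P01):

    import psycopg2
    from psycopg2.extensions import TransactionRollbackError

    def run_with_retry(conn, work, max_retries=None):
        """Re-run the whole transaction on serialization/deadlock failure."""
        attempt = 0
        while True:
            try:
                with conn.cursor() as cur:
                    work(cur)        # the entire transaction body
                conn.commit()
                return               # success: no serialization failure
            except TransactionRollbackError:
                conn.rollback()      # discard the failed attempt
                attempt += 1
                if max_retries is not None and attempt > max_retries:
                    raise            # give up after the optional limit

    # Usage, assuming an open connection:
    #   conn = psycopg2.connect("dbname=test")
    #   run_with_retry(conn, lambda cur: cur.execute("UPDATE ..."))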

Essentially, the transaction should only count toward the TPS rate
when it eventually completes without a serialization failure.
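
Building on the sketch above (and assuming the same conn, work, and
run_with_retry), the accounting would look roughly like this: only
the eventual successful completion is counted, so retries inflate
latency rather than the TPS numerator.

    import time

    N = 1000                             # placeholder transaction count
    start = time.monotonic()
    committed = 0
    for _ in range(N):
        run_with_retry(conn, work)       # retries are hidden in here
        committed += 1                   # count only the final success
    tps = committed / (time.monotonic() - start)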

Marina, did I understand you correctly?

Álvaro Herrera, in the next message of this thread, understood my text correctly:

As far as I understand her proposal, it is exactly the opposite -- if a
transaction fails, it is discarded.  And this P.S. note is asking
whether this is a good idea, or would we prefer that failing
transactions are retried.

And thank you very much for your explanation of how and why failed transactions should be retried! I'll try to implement all of it.

--
Marina Polyakova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company

