Hi all,

I’d like to pitch the idea of supporting “dead letter” savepoints, similar to 
the “dead letter” exchanges you have in message-queue systems.  The idea is 
basically that a client can publish a message, but in such a way that it only 
ever actually gets published if the client dies, goes away suddenly, etc.  That 
allows for some nice logic, from simply announcing “Daemon X just died” to more 
advanced things like simplifying self-healing clusters.

A different name for this would be “last will”.

The use cases for PostgreSQL would be a bit different, so I’d like to draw up 
an example of where this could (imho) be very useful.


A very common use case for PostgreSQL goes something like this:

1. Worker picks up a row.
2. Worker does something non-idempotent with an external service, where it’s 
important that it gets done at most once.  Charge a credit card, send an 
email/SMS, launch a rocket, and so on.
3. Worker marks the row as successful, or failed.

But only one worker should pick up any given task, so you expand the first 
point to lock the row, implementing it as something like:

SELECT * FROM targets WHERE status = 'scheduled' FOR UPDATE SKIP LOCKED LIMIT 1;

That’ll give you a nice lock on the row, yet allow other workers to pick up 
other targets.

But what if there’s a bug in the call to the external service?  Most of the 
time you’ll trap the error and set status to something sane, but what if 
there’s a crash-bug in the SDK implementing it, or some other situation where 
things go very bad?  The rocket might be fired, then the client dies, the lock 
is released, another worker picks up the task, and the process repeats ad 
nauseam.

Okay, so you’ll just update the row to say you’re trying to send it.  You set 
status = 'in-flight' or some other status that’ll prevent the SELECT in other 
workers from picking up the task, and you commit that, so other workers won’t 
pick up the row if you die.  In the process, though, you also lose the lock on 
the row.  You still want the row to be tied to you specifically, so you add a 
unique tag to the row, which later needs to be removed, so there’s more 
housekeeping.
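
To make that housekeeping concrete, the claim-and-release dance might look 
something like the sketch below (claimed_by is a hypothetical column holding a 
per-worker identifier, and the %(...) placeholders stand in for driver-level 
parameters):

BEGIN;
SELECT id FROM targets WHERE status = 'scheduled' FOR UPDATE SKIP LOCKED LIMIT 1;
-- Claim the row for this worker, visible to everyone else once committed.
UPDATE targets SET status = 'in-flight', claimed_by = %(worker_id) WHERE id = %(id);
COMMIT;
-- Do the work.
BEGIN;
-- Release the claim along with the final status.
UPDATE targets SET status = 'completed', claimed_by = NULL
  WHERE id = %(id) AND claimed_by = %(worker_id);
COMMIT;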

This is pretty basic, it works, and it works well.  However, it could (imho) be 
improved, both in terms of developer comfort and efficiency.  The need for that 
extra commit before doing the actual work could also be avoided, potentially 
cutting the number of transactions in half.

Typically the flow would be something like:

BEGIN;
SELECT id FROM targets WHERE status = 'scheduled' FOR UPDATE SKIP LOCKED LIMIT 1;
UPDATE targets SET status = 'in-flight' WHERE id = %(id);
COMMIT;
-- Do the work.
BEGIN;
UPDATE targets SET status = 'completed' WHERE id = %(id);  -- or status = 'failed-foo', if it fails for reason foo
COMMIT;


What I’m suggesting would be something along the lines of:

BEGIN;
SELECT id FROM targets WHERE status = 'scheduled' FOR UPDATE SKIP LOCKED LIMIT 1;
UPDATE targets SET status = 'failed-unknown' WHERE id = %(id);
SAVEPOINT deadletter ON FAILURE COMMIT;
-- Do the work.
UPDATE targets SET status = 'completed' WHERE id = %(id);  -- or status = 'failed-foo'
COMMIT;

Or, rather than re-setting the columns changed before the savepoint, roll back 
to one taken just prior to it.
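
As an illustration, one reading of that variant could look like the sketch 
below; the ON FAILURE COMMIT clause is of course hypothetical syntax, and the 
extra ordinary savepoint is only there to have something earlier to roll back 
to on the success path:

BEGIN;
SELECT id FROM targets WHERE status = 'scheduled' FOR UPDATE SKIP LOCKED LIMIT 1;
SAVEPOINT before_failure_state;  -- ordinary savepoint
UPDATE targets SET status = 'failed-unknown' WHERE id = %(id);
SAVEPOINT deadletter ON FAILURE COMMIT;  -- hypothetical syntax
-- Do the work.
ROLLBACK TO SAVEPOINT before_failure_state;  -- discard the pre-written failure state
UPDATE targets SET status = 'completed' WHERE id = %(id);
COMMIT;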


The basic idea is to be able to say “If I go belly up, I want this stuff to 
happen”.  Depending on needs, it could be made persistent once the savepoint is 
taken, but for a lot of cases that wouldn’t really be needed.  There’s some 
room for variation as well, such as having it apply only to dropped 
connections, or also turning errors and/or rollbacks into a rollback to, and 
commit of, the savepoint.  Ideally this would be configurable at the point the 
savepoint is taken, to easily support per-savepoint variation.
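
As a sketch of what that per-savepoint configuration could look like (again, 
purely hypothetical syntax to illustrate the variations):

-- Only act if the client connection is lost:
SAVEPOINT deadletter ON DISCONNECT COMMIT;
-- Also turn unhandled errors into a rollback to, and commit of, the savepoint:
SAVEPOINT deadletter ON DISCONNECT, ERROR COMMIT;
-- Also catch an explicit ROLLBACK of the surrounding transaction:
SAVEPOINT deadletter ON DISCONNECT, ROLLBACK COMMIT;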


I did for a few moments wonder if prepared transactions would be a better place 
for something like this.  They could allow for named, independent transactions, 
but there’s a fairly big mismatch between the two concepts.  It also wouldn’t 
be too hard to use multiple named savepoints for effectively the same logic in 
most cases.  One advantage of prepared transactions is that they could perhaps 
also cover the case of a PostgreSQL backend dying, but that’s not exactly a 
common problem.  A huge dealbreaker, though, is that the prepared transaction 
would very likely keep holding locks that conflict with the work to be done.
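
To illustrate the lock problem: once prepared, the transaction keeps holding 
its row lock even though the session itself can move on, so the follow-up 
update of the same row blocks on it.  Sketch only, with a made-up transaction 
name:

BEGIN;
UPDATE targets SET status = 'failed-unknown' WHERE id = %(id);
PREPARE TRANSACTION 'deadletter-42';
-- The prepared transaction still holds the row lock on targets, so this
-- follow-up update (from this session or any other) blocks until
-- 'deadletter-42' is committed or rolled back:
UPDATE targets SET status = 'completed' WHERE id = %(id);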




I have to admit I’m not sure whether this is a big ask or not, but I’m hopeful 
that it’s not, in part because so much of the machinery is already there.  My 
hope is that it wouldn’t take a lot to turn an error into what is effectively 
pretty close to “ROLLBACK TO deadletter; COMMIT;”, combined with extending 
savepoints to include information about which failures they should “catch”, and 
the plumbing to route failures to them.  There could be a host of issues I’m 
not aware of though.

Terje Elde


