Gregory Stark wrote:
"Tom Allison" <[EMAIL PROTECTED]> writes:
The other approach would be to use an external file to queue these updates and
run them from a crontab. Something like:
....
and then run a job daily to read all of these into a hash (to make the values
unique) and then run one SQL statement at the end of the day.
Well, it's probably better to keep it in the database. The database knows how
to use hashes to get distinct values, too.
So if you have a "history" table which records ids with dates and then do a
transaction like:
BEGIN;
DELETE FROM tokens WHERE id NOT IN (SELECT id FROM history);
DELETE FROM history WHERE seen < now() - '3 days'::interval;
END;
This could still deadlock, so it may make sense to add LOCK TABLE statements
at the start of the transaction to lock the tables that refer to the tokens
table.
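A sketch of that transaction with explicit locks. This is an assumption, not
code from the thread: "messages" is a hypothetical table standing in for
whatever tables have foreign keys into tokens. Taking the locks in a fixed
order up front is what prevents two runs from deadlocking each other:

BEGIN;
-- Lock referencing tables first, then tokens, always in the same order.
LOCK TABLE messages IN SHARE ROW EXCLUSIVE MODE;
LOCK TABLE tokens IN SHARE ROW EXCLUSIVE MODE;
DELETE FROM tokens WHERE id NOT IN (SELECT id FROM history);
DELETE FROM history WHERE seen < now() - '3 days'::interval;
COMMIT;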
I ended up with two steps to the solution.
First, I handle the deadlock errors with a sleep/retry loop. I add a bit more
time with each sleep, so eventually everything slows down so much that it
can't deadlock.
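That sleep/retry loop can be sketched like this. This is a minimal sketch,
not the poster's actual code: the DeadlockError class, the attempt limit, and
the backoff step are all made-up stand-ins (a real driver would raise its own
exception for SQLSTATE 40P01):

```python
import time


class DeadlockError(Exception):
    """Stand-in for the database driver's deadlock exception."""


def run_with_retry(do_update, max_attempts=10, base_delay=0.5):
    """Run do_update(), retrying after deadlocks with a growing sleep."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return do_update()
        except DeadlockError:
            if attempt == max_attempts:
                raise
            # Each retry sleeps a little longer, so competing jobs
            # slow down until they stop colliding.
            time.sleep(delay)
            delay += base_delay
```

The linearly growing delay is the point: after a few collisions the jobs are
spaced far enough apart that they no longer touch the same rows at once.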
Second, the offending SQL was an UPDATE on the table that is the target of a
foreign key constraint. I modified the SQL from:
update tokens set last_seen = now() where token_idx in (...)
to:
update tokens set last_seen = now() where
token_idx in (...) and last_seen < current_date;
Since this only happens when things are running at full...
Previously I could deadlock on 60 emails.
Now I can't deadlock on 8000.
I would venture to say the problem is effectively fixed.
I have a question though.
I noticed a particular format for specifying dates, like:
now()-'3 days'::interval;
What's '::interval' and why should I use it?