Hi All,

I’m having performance trouble with a particular set of queries. It goes a bit
like this:

1) queue table is initially empty, and very narrow (1 bigint column)
2) we insert ~30 million rows into queue table
3) we do a join with the queue table to delete from another table (delete from a
using queue where a.id = queue.id, sketched below), but postgres stats say the
queue table is empty, so it uses a nested loop over all 30 million rows, taking
forever
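
In SQL terms the sequence is roughly the following (table names a and queue as
above; the generate_series insert is just a stand-in for our real data load):

    -- step 1: narrow queue table, initially empty
    create table queue (id bigint);

    -- step 2: bulk insert ~30 million ids
    insert into queue select generate_series(1, 30000000);

    -- step 3: delete by join; straight after the insert the stats still
    -- say queue is empty, so the planner picks a nested loop
    explain
    delete from a using queue where a.id = queue.id;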

If I kill the query in step 3 and let it run again after autoanalyze has done
its thing, then it is very quick.

This queue table is empty 99% of the time, and the query in step 3 runs
immediately after step 2. Is there any likelihood that tweaking the autoanalyze
params would help in this case? I don’t want to explicitly analyze the table
between steps 2 and 3 either, as there are other patterns of use where, for
example, 0 rows are inserted in step 2 and the delete is expected to run very
quickly. Do I have any other options?
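
For concreteness, the kind of per-table tweak I have in mind is something like
this (the threshold values are made up, not something we’ve tested):

    -- fire autoanalyze on queue after a fixed number of row changes,
    -- rather than a fraction of the (usually tiny) table size
    alter table queue set (
        autovacuum_analyze_scale_factor = 0,
        autovacuum_analyze_threshold = 1000
    );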

Postgres 9.5 ATM, but an upgrade is being planned.


Thanks in advance

David Wheeler
Software developer



E dwhee...@dgitsystems.com
D +61 3 9663 3554  W http://dgitsystems.com
Level 8, 620 Bourke St, Melbourne VIC 3000.

