On Wed, Jun 27, 2018 at 03:45:26AM +0000, David Wheeler wrote:
> Hi All,
> 
> I’m having performance trouble with a particular set of queries. It goes a 
> bit like this
> 
> 1) queue table is initially empty, and very narrow (1 bigint column)
> 2) we insert ~30 million rows into queue table
> 3) we do a join with queue table to delete from another table (delete from a 
> using queue where a.id<http://a.id> = queue.id<http://queue.id>), but 
> postgres stats say that queue table is empty, so it uses a nested loop over 
> all 30 million rows, taking forever

If it's within a transaction, then autovacuum couldn't begin to help until it
commits.  (And if it's not, then it'll be slow on its own).
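For illustration, a minimal sketch of the in-transaction case (the table names come from the description above; the INSERT's source is hypothetical):

```sql
BEGIN;
-- queue starts empty, so its stats (pg_class.reltuples) say 0 rows
INSERT INTO queue SELECT id FROM staging;          -- staging is hypothetical; ~30 million rows
-- Autovacuum/autoanalyze cannot see these rows until COMMIT, so the
-- planner still estimates queue as empty and picks a nested loop here:
DELETE FROM a USING queue WHERE a.id = queue.id;
COMMIT;
```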

It seems to me that you can't rely on autoanalyze to finish between committing
step 2 and beginning step 3.  So you're left with options like: SET
enable_nestloop=off; or manual ANALYZE (or I guess VACUUM would be adequate to
set reltuples).  Maybe you can conditionalize that: if inserted>9: ANALYZE 
queue.
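A sketch of those two workarounds (table names assumed from the thread):

```sql
-- Option 1: refresh stats manually after the bulk insert, before the delete.
-- ANALYZE updates reltuples so the planner sees the ~30M rows.
ANALYZE queue;
DELETE FROM a USING queue WHERE a.id = queue.id;

-- Option 2: steer the planner away from nested loops for this session only.
SET enable_nestloop = off;
DELETE FROM a USING queue WHERE a.id = queue.id;
RESET enable_nestloop;
```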

Justin
