Bendik Rognlien Johansen <[EMAIL PROTECTED]> writes:
> Has anyone got any tips for speeding up this query? It currently  
> takes hours to start.

Are the rowcount estimates close to reality?  The plan doesn't look
unreasonable to me if they are.  It might help to increase work_mem
to ensure that the hash tables don't spill to disk.
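
As a quick, hedged illustration of that experiment (the 256MB figure is
only a placeholder, not a tuned recommendation; size it to the RAM you
can spare per sort/hash operation), you can raise the setting for the
current session only:

-- raise the per-operation memory budget just for this session
SET work_mem = '256MB';   -- older releases want plain kilobytes, e.g. 262144
-- ... rerun EXPLAIN ANALYZE on the slow query here ...
RESET work_mem;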

Indexes:
     "people_original_is_null" btree (original) WHERE original IS NULL

This index seems poorly designed: the actual index entries are dead
weight since all of them are necessarily NULL.  You might as well make
the index carry something that you frequently test in conjunction with
"original IS NULL".  For instance, if this particular query is a common
case, you could replace this index with

CREATE INDEX people_deleted_original_is_null ON people(deleted)
  WHERE original IS NULL;

This index is still perfectly usable for queries that only say "original
IS NULL", but it can also filter out rows with the wrong value of
deleted.  Now, if there are hardly any rows with deleted = true, maybe
this won't help much for your problem.  But in any case you ought to
consider whether you can make the index entries do something useful.
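
As a sketch of how that pays off (the SELECT list and the exact query
shape here are hypothetical, just to illustrate the pattern), a query like

SELECT *
FROM people
WHERE original IS NULL
  AND deleted = false;

can use people_deleted_original_is_null with "deleted = false" as an
index condition, instead of fetching every original-IS-NULL row and
throwing away the deleted ones afterwards.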

                        regards, tom lane
