>
> [skipped]
>
>
>>>     But remember that if you update or delete a row, removing it from an
>>>     index, the data will stay in that index until vacuum comes along.
>>>
>>>     Also, there's no point in doing a REINDEX after a VACUUM FULL;
>>>     vacuum full rebuilds all the indexes for you.
>>>
>>>
>>> I was being desperate :)
>>>
>>> I still think there is something very wrong with this particular table.
>>> First, I have production systems that employ this function on way larger
>>> data sets, and there is no problem (so far, anyway). This machine is
>>> part of a test deployment; there is no constant load, and the only data
>>> being written now is from these tests. Vacuuming should prune all that
>>> dead stuff, and once it's gone, it's unclear where the time is spent
>>> navigating/updating a table with 24 rows :)
>>>
>>
>> I think you definitely have a problem with dead rows, as evidenced by the
>> huge improvement VACUUM FULL made.
>>
>
> But it's not clear why it wouldn't improve past the current point (and
> it's not reasonable, IMHO, that it doesn't).
>

What I should've done is 'VACUUM FULL VERBOSE'. Once I did, it told me
there were 800k dead rows that couldn't be removed. After digging around I
found some dangling prepared transactions going back months. Once I threw
those away and re-vacuumed, things got back to normal.
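
For the archives, the cleanup amounted to something like this (the gid and
table name below are placeholders, not the actual values from this system):

    -- list prepared transactions still hanging around; 'prepared' shows when
    SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

    -- discard a stale one by its gid
    ROLLBACK PREPARED 'some_gid';

    -- with the old transactions gone, vacuum can finally reclaim the dead rows
    VACUUM FULL VERBOSE some_table;

In hindsight it makes sense: a prepared transaction holds the xmin horizon
back just like a long-running open transaction, so no vacuum could remove
tuples still visible to the oldest one.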

Thanks for all your help and advice.
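
P.S. For anyone who finds this thread later: you don't need a vacuum to spot
the pileup; the per-table counters show it too (table name is an example):

    SELECT relname, n_live_tup, n_dead_tup
      FROM pg_stat_user_tables
     WHERE relname = 'some_table';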
