"Heikki Linnakangas" <[EMAIL PROTECTED]> writes:
> When I suggested that we get rid of the LP_DELETE flag for heap tuples,
> the tuple-level fragmentation and all that, and just take the vacuum
> lock and call PageRepairFragmentation, I was thinking that we'd do it in
> heap_update and only when we run out of space on the page. But as Greg
> said, it doesn't work because you're already holding a reference to at
> least one tuple on the page, the one you're updating, by the time you
> get to heap_update. That's why I put the pruning code in heap_fetch
> instead. Yes, though the amortized cost is the same, it does push the
> pruning work into the foreground query path.
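
(For anyone following along, here is a toy, self-contained model of what
the prune-then-repair step being discussed amounts to.  All names below
are invented for illustration; the real code works on ItemIdData line
pointers in src/backend/storage/page/bufpage.c, and it needs the vacuum
lock precisely because tuples physically move.)

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 64            /* toy page; the real one is BLCKSZ */

typedef struct
{
    int off;                    /* offset of tuple data, 0 = unused */
    int len;                    /* tuple length */
    int dead;                   /* pruning marked this removable */
} ToyLinePointer;

static char page[PAGE_SIZE];

/*
 * Pack live tuple data back against the end of the page, reclaiming
 * the holes left by pruned tuples.  No tuple contents change, only
 * their offsets -- which is exactly why no other backend may hold a
 * pointer into the page while this runs.
 */
static void
toy_repair_fragmentation(ToyLinePointer *lp, int nslots)
{
    char scratch[PAGE_SIZE];
    int  upper = PAGE_SIZE;
    int  i;

    for (i = 0; i < nslots; i++)
    {
        if (lp[i].dead || lp[i].off == 0)
        {
            lp[i].off = lp[i].len = lp[i].dead = 0;  /* slot reusable */
            continue;
        }
        upper -= lp[i].len;
        memcpy(scratch + upper, page + lp[i].off, lp[i].len);
        lp[i].off = upper;
    }
    memcpy(page + upper, scratch + upper, PAGE_SIZE - upper);
    printf("contiguous free space now ends at offset %d\n", upper);
}

int
main(void)
{
    ToyLinePointer lp[3] = {
        { 40, 8, 0 },           /* live */
        { 24, 16, 1 },          /* dead: pruned HOT chain member */
        { 48, 16, 0 },          /* live */
    };

    memset(page, 'x', sizeof(page));
    toy_repair_fragmentation(lp, 3);
    return 0;
}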

The amortized cost is only "the same" if every heap_fetch is associated
with a heap update.  I feel pretty urgently unhappy about this choice.
Have you tested the impact of the patch on read-mostly workloads?
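
To make the worry concrete, here is a back-of-the-envelope model (the
workload numbers are invented; only the ratio matters).  With pruning
attempted in heap_fetch, every read pays for an attempt; with pruning
attempted only in the update path, only writers pay:

#include <stdio.h>

int
main(void)
{
    long   fetches = 1000000;   /* reads in some interval (invented) */
    long   updates = 10000;     /* updates in the same interval */
    double attempt = 1.0;       /* arbitrary units per prune attempt */

    printf("prune-on-fetch:  %.0f units across %ld fetches\n",
           fetches * attempt, fetches);
    printf("prune-on-update: %.0f units across %ld updates\n",
           updates * attempt, updates);
    return 0;
}

On a 100:1 read-mostly workload that is a hundredfold difference in
where the attempts land, whatever each attempt costs.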

>> Another real problem with doing pruning only in the UPDATE path is
>> that we may end up with long HOT chains if the page does not receive
>> an UPDATE after many consecutive HOT updates.

How is that, if the same number of prune attempts would occur?
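
For reference, what's at stake if chains do grow long is the per-fetch
chain walk.  A schematic model (types invented; in the patch the real
walk follows t_ctid links from chain member to chain member within the
page):

#include <stdio.h>
#include <stdlib.h>

typedef struct ToyTuple
{
    int              dead;      /* 1 = prunable dead version */
    struct ToyTuple *next;      /* t_ctid-style link to successor */
} ToyTuple;

/*
 * Every fetch through the chain head pays one hop per unpruned dead
 * version before it reaches the live tuple.
 */
static int
hops_to_live_version(ToyTuple *t)
{
    int hops = 0;

    while (t->dead)
    {
        t = t->next;
        hops++;
    }
    return hops;
}

int
main(void)
{
    int       n = 8;            /* consecutive HOT updates, never pruned */
    ToyTuple *chain = calloc(n + 1, sizeof(ToyTuple));
    int       i;

    for (i = 0; i < n; i++)
    {
        chain[i].dead = 1;
        chain[i].next = &chain[i + 1];
    }
    printf("fetch walks %d dead versions to reach the live tuple\n",
           hops_to_live_version(chain));
    free(chain);
    return 0;
}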

                        regards, tom lane
