Tom Lane <[EMAIL PROTECTED]> wrote:

> ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> I've applied this but I'm now having some second thoughts about it,
> because I'm seeing an actual *decrease* in pgbench numbers from the
> immediately prior CVS HEAD code.

> Had you done any performance testing on this patch, and if so what
> tests did you use?  I'm a bit hesitant to try to fix it on the basis
> of pgbench results alone.

Thank you for applying the patch and adding documentation.
But hmm... I tested on both DBT-2 and pgbench, and the patch performed well
in both cases. I used battery-backed disks for WAL and enabled the writeback
cache, so the extra WAL records may have been insignificant in my tests.
On DBT-2, the bottleneck was entirely the data disks. Cache usage was
important there, so avoiding page splits helped save memory.
On pgbench, the full dataset was cached in memory, so the bottleneck was the
CPUs. Cleanup of branches_pkey was especially effective because that index was
frequently updated and accumulated many LP_DELETE'd tuples. Any degradation
was less noticeable on my machine.


> The problem is that we've traded splitting a page every few hundred
> inserts for doing a PageIndexMultiDelete, and emitting an extra WAL
> record, on *every* insert.  This is not good.

I suspect PageIndexMultiDelete() consumes CPU. If there are only one or two
dead tuples, PageIndexTupleDelete() is called, and each call performs a
memmove (4KB on average) and an adjustment of all the line-pointer offsets.
I think this is a heavy operation. But if the uppermost index tuple is the
same size as the dead tuple, we could move just that one tuple into the hole
and avoid touching all the other tuples. Would this change be acceptable?

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center


