Tom, should I apply this patch now?  Are you still considering other
options for this?

---------------------------------------------------------------------------

Bruce Momjian wrote:
> 
> Tom, I ran your tests with fsync off (as you did), and saw numbers
> bouncing between 400 and 700 tps without my patch, and holding steady
> at 700 tps with my patch.
> 
> ---------------------------------------------------------------------------
> 
> Bruce Momjian wrote:
> > 
> > The attached patch requires both that the new row fit and that 10%
> > of the page remain free.  Would someone test that?
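> >
> > As a minimal standalone sketch of that condition (BLCKSZ and
> > PageGetFreeSpace here are stand-ins for the real PostgreSQL
> > definitions, and the function name is mine, not the patch's):
> >
> >     #include <stdbool.h>
> >     #include <stddef.h>
> >
> >     #define BLCKSZ 8192         /* default PostgreSQL block size */
> >
> >     /* Stand-in for the real PageGetFreeSpace() */
> >     extern size_t PageGetFreeSpace(void *page);
> >
> >     /*
> >      * Clean up instead of splitting only if the new row fits and
> >      * at least 10% of the page is still free once it is inserted.
> >      */
> >     static bool
> >     room_after_cleanup(void *page, size_t new_row_size)
> >     {
> >         size_t free_space = PageGetFreeSpace(page);
> >
> >         return free_space >= new_row_size + BLCKSZ / 10;
> >     }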
> > 
> > ---------------------------------------------------------------------------
> > 
> > Tom Lane wrote:
> > > ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> > > > This is a revised patch originated by Junji TERAMOTO for HEAD.
> > > >   [BTree vacuum before page splitting]
> > > >   http://archives.postgresql.org/pgsql-patches/2006-01/msg00301.php
> > > > I think we can resurrect his idea because we now scan btree pages
> > > > one page at a time; the missing-restarting-point problem went away.
> > > 
> > > I've applied this but I'm now having some second thoughts about it,
> > > because I'm seeing an actual *decrease* in pgbench numbers from the
> > > immediately prior CVS HEAD code.  Using
> > >   pgbench -i -s 10 bench
> > >   pgbench -c 10 -t 1000 bench     (repeat this half a dozen times)
> > > with fsync off but all other settings factory-stock, what I'm seeing
> > > is that the first run looks really good but subsequent runs tail off in
> > > spectacular fashion :-(  Pre-patch there was only minor degradation in
> > > successive runs.
> > > 
> > > What I think is happening is that because pgbench depends so heavily on
> > > updating existing records, we get into a state where an index page is
> > > about full and there's one dead tuple on it, and then for each insertion
> > > we have
> > > 
> > >   * the uniqueness check marks one more tuple dead (the
> > >     next-to-last version of the row)
> > >   * the newly added code removes that one dead tuple and does a write
> > >   * now there's enough room to insert exactly one tuple
> > >   * lather, rinse, repeat, never splitting the page.
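> > >
> > > To make that cycle concrete, here's a toy standalone simulation
> > > (illustrative constants only, nothing like the real btree code):
> > >
> > >     #include <stdio.h>
> > >
> > >     #define CAP 100     /* tuples that fit on our toy page */
> > >
> > >     int main(void)
> > >     {
> > >         int used = CAP;             /* page starts full */
> > >         int dead = 0;
> > >         int cleanups = 0, splits = 0;
> > >
> > >         for (int i = 0; i < 1000; i++)
> > >         {
> > >             dead++;                 /* uniqueness check marks the
> > >                                      * next-to-last version dead */
> > >             if (used >= CAP)        /* no room for the new tuple */
> > >             {
> > >                 if (dead > 0)
> > >                 {
> > >                     used -= dead;   /* cleanup: one extra write
> > >                                      * and one extra WAL record */
> > >                     dead = 0;
> > >                     cleanups++;
> > >                 }
> > >                 else
> > >                     splits++;
> > >             }
> > >             used++;                 /* insert the new tuple */
> > >         }
> > >         printf("cleanups: %d, splits: %d\n", cleanups, splits);
> > >         return 0;                   /* prints 1000 and 0 */
> > >     }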
> > > 
> > > The problem is that we've traded splitting a page every few hundred
> > > inserts for doing a PageIndexMultiDelete, and emitting an extra WAL
> > > record, on *every* insert.  This is not good.
> > > 
> > > Had you done any performance testing on this patch, and if so what
> > > tests did you use?  I'm a bit hesitant to try to fix it on the basis
> > > of pgbench results alone.
> > > 
> > > One possible fix that comes to mind is to only perform the cleanup
> > > if we are able to remove more than one dead tuple (perhaps about 10
> > > would be good).  Or do the deletion anyway, but then go ahead and
> > > split the page unless X amount of space has been freed (where X is
> > > more than just barely enough for the incoming tuple).
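> > >
> > > In rough C (the threshold and helper names are invented here
> > > purely for illustration; PageIndexMultiDelete and
> > > PageGetFreeSpace are the real routines this would sit on top of):
> > >
> > >     #include <stdbool.h>
> > >     #include <stddef.h>
> > >
> > >     #define BLCKSZ          8192    /* default block size */
> > >     #define MIN_DEAD_TUPLES 10      /* made-up threshold */
> > >
> > >     /* Stand-ins for the real page routines */
> > >     extern int    count_dead_tuples(void *page);
> > >     extern void   remove_dead_tuples(void *page);  /* multi-delete
> > >                                                     * plus WAL */
> > >     extern size_t PageGetFreeSpace(void *page);
> > >
> > >     /* Returns true if cleanup freed enough that we can skip the
> > >      * split; false means split the page after all. */
> > >     static bool
> > >     cleanup_avoids_split(void *page, size_t new_tuple_size)
> > >     {
> > >         if (count_dead_tuples(page) < MIN_DEAD_TUPLES)
> > >             return false;           /* not worth an extra write */
> > >
> > >         remove_dead_tuples(page);
> > >
> > >         /* Split anyway unless we freed comfortably more than
> > >          * barely enough for the incoming tuple. */
> > >         return PageGetFreeSpace(page) >= new_tuple_size + BLCKSZ / 10;
> > >     }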
> > > 
> > > After all the thought we've put into this, it seems a shame to
> > > just abandon it :-(.  But it definitely needs more tweaking.
> > > 
> > >                   regards, tom lane

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
