On 2016-08-22 13:49:47 -0400, Robert Haas wrote:
> On Mon, Aug 22, 2016 at 1:46 PM, Andres Freund <and...@anarazel.de> wrote:
> > I don't think the runtime overhead is likely to be all that high - if
> > you look at valgrind.supp the performance-critical parts basically are:
> > - pgstat_send - the context switching is going to drown out some zeroing
> > - xlog insertions - making the crc computation more predictable would
> >   actually be nice
> > - reorderbuffer serialization - zeroing won't be a material part of the
> >   cost
> >
> > The rest is mostly bootstrap or python related.
> >
> > There might be cases where we *don't* unconditionally do the zeroing -
> > e.g. I'm doubtful about the sinval stuff where we currently only
> > conditionally clear - but the stuff in valgrind.supp seems fine.
> 
> Naturally you'll be wanting to conclusively demonstrate this with
> benchmarks on multiple workloads, platforms, and concurrency levels.
> Right?  :-)

Pah ;)

I do think some micro-benchmarks aimed at the individual costs make
sense - we're only talking about ~three places in the code. I don't
think concurrency plays a large role, though ;)
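
Something like the throwaway program below is what I have in mind for
the pgstat case - entirely standalone, not backend code; the 1kB buffer
size (roughly a pgstat message) and the iteration count are just
assumptions, adjust as needed:

/*
 * Throwaway micro-benchmark: rough per-call cost of memset() on a
 * pgstat-message-sized buffer.  BUFSZ and ITERS are guesses.
 */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define BUFSZ   1024        /* assumed message-sized payload */
#define ITERS   10000000L

int
main(void)
{
    static char buf[BUFSZ];
    struct timespec start, end;
    unsigned long sum = 0;
    double elapsed;
    long i;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < ITERS; i++)
    {
        /* varying fill value keeps the compiler from dropping the stores */
        memset(buf, (int) (i & 0xff), BUFSZ);
        sum += (unsigned char) buf[BUFSZ - 1];
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    elapsed = (end.tv_sec - start.tv_sec) +
        (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.1f ns per memset of %d bytes (checksum %lu)\n",
           elapsed / ITERS * 1e9, BUFSZ, sum);
    return 0;
}

Compile with something like cc -O2 memset_bench.c (plus -lrt on older
glibc). The xlog and reorderbuffer cases would need their own variants
with the respective record / change sizes.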

