On Mon, Dec 7, 2015 at 9:01 AM, Jeff Janes <jeff.ja...@gmail.com> wrote:
> So if this patch with this exact workload just happens to land on a
> pre-existing infelicity, how big of a deal is that?  It wouldn't be
> creating a regression, just shoving the region that experiences the
> problem around in such a way that it affects a different group of use
> cases.
>
> And perhaps more importantly, can anyone else reproduce this, or understand 
> it?

That's odd. I've never seen anything like that in the field myself,
but then I've never really been a professional DBA.

If possible, could you try using the ioreplay tool to correlate I/O
with points in the trace_sort timeline, for both master and the
patch, so that we can compare? The tool is available here:

https://code.google.com/p/ioapps/
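
For reference, here is roughly the workflow I have in mind. This is a
sketch from memory -- the strace flags are close to what the ioapps
README recommends, but the ioreplay invocation in particular may
differ, so check ioreplay --help; $BACKEND_PID and the query are
placeholders:

    # 1. Find the backend PID of the session that will run the sort:
    psql -c "SELECT pg_backend_pid();"

    # 2. Record that backend's syscalls with strace while the query
    #    runs (roughly the flags suggested by the ioapps docs):
    strace -q -a1 -s0 -f -tttT -o sort.strace \
           -e trace=file,desc -p "$BACKEND_PID"

    # 3. In the traced session, enable trace_sort (a standard
    #    developer GUC) and run the sort-heavy query under test:
    #      SET trace_sort = on;
    #      SET client_min_messages = log;
    #      <query>

    # 4. Replay the recorded trace with ioreplay (exact invocation
    #    per ioreplay --help):
    ioreplay -f sort.strace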

The same project also provides ioprofiler, a tool for graphing the
recorded I/O requests over time.

This is the only way that I've managed to graph I/O over time
successfully before. Maybe there is a better way, using perf blockio
or something like that, but this is the approach I know works.
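
If someone does want to try the perf route, here is an untested
sketch using the kernel's generic block tracepoints, which I assume
is the kind of thing "perf blockio" would boil down to:

    # Sample block-layer request-issue events system-wide for 60s:
    perf record -e block:block_rq_issue -a -- sleep 60

    # Dump the raw timestamped events for post-processing/graphing:
    perf script > blockio.txt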

While I'm quite willing to believe that there are oddities about our
polyphase merge implementation that can result in what you call
anti-sweetspots (sourspots?), I have a much harder time imagining why
reverting my merge patch could make things better, unless the system
was experiencing some kind of memory pressure. I mean, it doesn't
change the algorithm at all, except to make more memory available to
the merge by avoiding palloc() fragmentation. How could that possibly
hurt?

-- 
Peter Geoghegan

