On 1/27/15 3:46 PM, Stephen Frost wrote:
>> With 0 workers, first run took 883465.352 ms, and second run took 295050.106 ms.
>> With 8 workers, first run took 340302.250 ms, and second run took 307767.758 ms.
>>
>> This is a confusing result, because you expect parallelism to help
>> more when the relation is partly cached, and make little or no
>> difference when it isn't cached.  But that's not what happened.
> These numbers seem to indicate that the oddball is the single-threaded
> uncached run.  If I followed correctly, the uncached 'dd' took 321s,
> which is relatively close to the uncached-lots-of-workers and the two
> cached runs.  What in the world is the uncached single-threaded case
> doing that it takes an extra 543s, or over twice as long?  It's clearly
> not disk I/O that is causing the slowdown, based on your dd tests.
>
> One possibility might be round-trip latency.  The multi-threaded case is
> able to keep the CPUs and the I/O system busy, and the cached runs
> don't have as much latency since things are cached, but the
> single-threaded uncached case, going I/O -> CPU -> I/O -> CPU, ends up
> with a lot of wait time as it switches between being on CPU and waiting
> on the I/O.

This exactly mirrors what I've seen on production systems. With a single SeqScan
I can't get anywhere close to the I/O throughput I can get with dd. Once I got
up to 4-8 SeqScans of different tables running together, I saw iostat numbers
similar to what a single dd bs=8k would do. I've tested this with iSCSI SAN
volumes on both 1Gbit and 10Gbit ethernet.

This is why I think that, when it comes to I/O performance, we should
investigate some kind of async I/O before we start worrying about real
parallelization.

I only have my SSD laptop and a really old server to test on, but I'll try 
Tom's suggestion of adding a PrefetchBuffer call into heapgetpage() unless 
someone beats me to it. I should be able to do it tomorrow.
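For reference, here is roughly what I understand that suggestion to look like.
This is only a sketch against the 9.4-era heapgetpage() in
src/backend/access/heap/heapam.c and won't compile outside the backend; the
one-block-ahead distance is my assumption, and a real patch would probably
want a configurable prefetch depth so the hint lands well before the
synchronous read:

```c
static void
heapgetpage(HeapScanDesc scan, BlockNumber page)
{
    ...
    /* Hypothetical addition: hint the block the scan will need next,
     * so the kernel can read it ahead while we process this one. */
    if (page + 1 < scan->rs_nblocks)
        PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, page + 1);

    /* Existing code: synchronous read of the current block. */
    scan->rs_cbuf = ReadBufferExtended(scan->rs_rd, MAIN_FORKNUM, page,
                                       RBM_NORMAL, scan->rs_strategy);
    ...
}
```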
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)