On Sun, Sep 27, 2009 at 07:22:47PM -0600, Scott Marlowe wrote:
> >> dd if=/dev/zero of=test.txt bs=8192 count=1310720 conv=fdatasync
> >> 10737418240 bytes (11 GB) copied, 169.482 s, 63.4 MB/s
> >>
> >> dd if=test.txt of=/dev/null bs=8192
> >> 10737418240 bytes (11 GB) copied, 86.4457 s, 124 MB/s
> >
> > These look slow.

> They are slow, but not atypical for RAID-5; the slow writes in
> particular are typical of software RAID-5.

Wow, no wonder it's shunned so much here!  I'd not realized before that
it incurred such a hit.

> I'd try a simple test on a 2 or 3 disk RAID-0 for testing purposes
> only to see how much faster a RAID-10 array of n*2 disks could be.
> The increase in random write performance for RAID-10 will be even more
> noticeable.
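
If it's of any use, here's a rough sketch of how I'd knock up a
throwaway test array with mdadm--the device names (/dev/sd[b-e]) and
mount point are just placeholders, and this destroys whatever is on
those disks:

  # scratch 2-disk RAID-0, purely to gauge what the disks can do
  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
  mkfs.ext3 /dev/md0 && mount /dev/md0 /mnt/test

  # same sequential tests as quoted above
  dd if=/dev/zero of=/mnt/test/test.txt bs=8192 count=1310720 conv=fdatasync
  dd if=/mnt/test/test.txt of=/dev/null bs=8192

  # and a 4-disk RAID-10 for comparison
  mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[b-e]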

I was thinking that the more bandwidth the IO subsystem can push, the
more important a larger block size becomes--there's less to-and-fro
between the kernel and userspace.  If the OP is seeing considerably
higher CPU usage than expected then he could try rebuilding PG with a
larger block size to see if it helps.
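
If I remember right, newer versions let you pick the block size at
configure time (value in kB); something along these lines--the prefix
is just an example, and it means a fresh initdb since the on-disk
format changes:

  # build a copy of PG with 32kB blocks instead of the default 8kB
  ./configure --with-blocksize=32 --prefix=/usr/local/pgsql-32k
  make && make install

  # the server reports the compiled-in value
  psql -c 'SHOW block_size;'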

I'm assuming that PG only issues block-sized reads?  How does changing
the block size affect index access performance; does it slow things
down because the whole block has to be pulled in?

-- 
  Sam  http://samason.me.uk/
