> 
> I have a beast of a Dell server with the following specifications:
>       • 4x Xeon E5-4657LV2 (48 cores total)
>       • 196GB RAM
>       • 2x SCSI 900GB in RAID1 (for the OS)
>       • 8x Intel S3500 SSD 240GB in RAID10
>       • H710p RAID controller, 1GB cache
> CentOS 6.6; the RAID10 SSD array uses XFS (mkfs.xfs -i size=512 /dev/sdb).

Things to check

- disk cache settings (EnDskCache - for SSDs this should be on, or you're
going to lose 90% of your performance)
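
In case it's useful, here's roughly how we check and enable that with MegaCli
on LSI-based controllers such as the H710P (a sketch only - the MegaCli64
binary name and the -LAll/-aAll selectors are assumptions; Dell's
perccli/omconfig can do the equivalent):

# show the current physical-disk cache policy for all logical drives
MegaCli64 -LDGetProp -DskCache -LAll -aAll
# enable the disk cache (EnDskCache) on all logical drives
MegaCli64 -LDSetProp -EnDskCache -LAll -aAll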

- OS settings, e.g.:

# use the noop I/O scheduler - SSDs behind a RAID controller gain nothing from elevator reordering
echo noop > /sys/block/sda/queue/scheduler
# deepen the block-device request queue
echo 975 > /sys/block/sda/queue/nr_requests
# read-ahead of 16384 x 512-byte sectors (8MB) on the data array
blockdev --setra 16384 /dev/sdb
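
These settings don't survive a reboot. One way to persist them on CentOS 6 is
sketched below - note the lines above mix sda and sdb, so substitute whichever
device actually holds the SSD array:

# appended to /etc/rc.d/rc.local so the tuning is reapplied at boot
echo noop > /sys/block/sdb/queue/scheduler
echo 975 > /sys/block/sdb/queue/nr_requests
blockdev --setra 16384 /dev/sdb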

- OS kernel version 

We use H710Ps with SSDs as well, and these settings make a measurable 
difference to our performance here (though we measure more than just pgbench 
since it's a poor proxy for our use cases).

Also

- SSDs - is the filesystem aligned and the block size chosen correctly (you
don't want to be forced to read two SSD blocks to fetch every data block)?
Does it match the RAID stripe size? It may make a small difference.
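
For example, XFS can be told the RAID geometry at mkfs time (an illustration
only - the 64k stripe element and 4 data disks are assumptions and have to
match what the H710P virtual disk was actually created with):

# RAID10 over 8 drives = 4 data spindles; su must equal the controller's stripe element size
mkfs.xfs -f -i size=512 -d su=64k,sw=4 /dev/sdb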

- are the SSDs all sitting on different SATA channels? You don't want them to 
be forced to share one channel's worth of bandwidth. The H710P has 8 SATA 
channels I think (?) and you mention 10 devices above. 
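
MegaCli can also show where each physical disk sits and what link speed it
negotiated, which helps spot disks sharing a constrained port (again assuming
an LSI-compatible MegaCli64 works against the H710P):

# list enclosure/slot, negotiated speed and model for every physical disk
MegaCli64 -PDList -aAll | egrep 'Slot|Device Speed|Link Speed|Inquiry'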

Graeme Bell.

On 10 Dec 2014, at 00:28, Strahinja Kustudić <strahin...@nordeus.com> wrote:

> I have a beast of a Dell server with the following specifications:
>       • 4x Xeon E5-4657LV2 (48 cores total)
>       • 196GB RAM
>       • 2x SCSI 900GB in RAID1 (for the OS)
>       • 8x Intel S3500 SSD 240GB in RAID10
>       • H710p RAID controller, 1GB cache
> CentOS 6.6; the RAID10 SSD array uses XFS (mkfs.xfs -i size=512 /dev/sdb).
> 
> Here are some relevant postgresql.conf settings:
> shared_buffers = 8GB
> work_mem = 64MB
> maintenance_work_mem = 1GB
> synchronous_commit = off
> checkpoint_segments = 256
> checkpoint_timeout = 10min
> checkpoint_completion_target = 0.9
> seq_page_cost = 1.0
> effective_cache_size = 100GB
> 
> I ran some "fast" pgbench tests with 4, 6 and 8 drives in RAID10 and here are 
> the results:
> 
> time /usr/pgsql-9.1/bin/pgbench -U postgres -i -s 12000 pgbench # 292GB DB
> 
> 4 drives    6 drives    8 drives
> 105 min     98 min      94 min
> 
> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -N pgbench   # Write test
> 
> 4 drives    6 drives    8 drives
> 6567 TPS    7427 TPS    8073 TPS
> 
> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 pgbench  # Read/Write test
> 
> 4 drives    6 drives    8 drives
> 3651 TPS    5474 TPS    7203 TPS
> 
> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -S pgbench  # Read test
> 
> 4 drives    6 drives    8 drives
> 17628 TPS   25482 TPS   28698 TPS
> 
> 
> A few notes:
>       • I ran these tests only once, so take these numbers with a grain of
> salt. I didn't have time to run them more than once, because I had to test
> how the server works with our app, and running them all takes a considerable
> amount of time.
>       • I wanted to use a bigger scale factor, but there is a bug in pgbench 
> with big scale factors.
>       • Postgres 9.1 was chosen, since the app which will run on this server 
> uses 9.1.
>       • These tests are with the H710p controller set to write-back (WB) and 
> with adaptive read ahead (ADRA). I ran a few tests with write-through (WT) 
> and no read ahead (NORA), but the results were worse.
>       • All tests were run using 96 clients, as recommended on the pgbench
> wiki page, but I'm sure I would get better results with 48 clients (one per
> core): with 48 clients the R/W test gave 7986 TPS on 8 drives, which is
> almost 800 TPS better than with 96 clients.
> 
> Since our app depends heavily on Postgres performance, I'm currently trying
> to optimize it. Do you have any suggestions for Postgres/system settings I
> could tweak to increase performance? I have a feeling I could get more
> performance out of this system.
> 
> 
> Regards,
> Strahinja


