On 08/09/2011 07:17 PM, Ondrej Ivanič wrote:
> I'm about to evaluate this SSD card: FusionIO ioDrive Duo [1]. The main
> reason for this experiment is to see if SSD can significantly improve
> query performance. Database size is around 1.4TB. The main tables occupy
> around 1/3 of that (450GB, ~220 million rows) and aggregated data occupies
> the other 2/3 (900GB). All indexes are in a separate tablespace (~550GB).
The ioDrive hardware is fast at executing all sorts of I/O, but where it particularly excels compared to normal drives is on really random workloads. For example, I recently tested them in two different systems, both head to head against regular 20-disk RAID10 arrays (Dell MD units). At sequential reads and writes, all the drives were basically the same: >1.2GB/s reads, >600MB/s writes. The regular drive array was actually a bit faster at sequential writes, which is common in SSD showdowns.
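If you want to sanity-check sequential numbers like those on your own hardware, a crude dd test gets you into the right ballpark. This is just a sketch; the /pgdata path, the 16GB test size, and the cache-flush step are assumptions to adapt to your system:

    # 16GB sequential write; fdatasync makes dd wait for data to hit disk
    dd if=/dev/zero of=/pgdata/seqtest bs=1M count=16384 conv=fdatasync
    # drop the OS cache so the read test actually touches the disks (needs root)
    sync; echo 3 > /proc/sys/vm/drop_caches
    # 16GB sequential read
    dd if=/pgdata/seqtest of=/dev/null bs=1M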
Your tables are pretty big; not much of them will fit in memory. If your aggregated queries end up executing a lot of sequential scans of the data set, either to compute the aggregates or to read them back, you will probably discover this is barely faster on FusionIO. And you could certainly speed that up for far less money spent on other hardware.
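The statistics collector will give you a rough read on how scan-heavy your workload is. A minimal sketch, assuming the stats haven't been reset recently; "yourdb" is just a placeholder:

    psql -d yourdb -c "
      SELECT relname, seq_scan, seq_tup_read, idx_scan
      FROM pg_stat_user_tables
      ORDER BY seq_tup_read DESC LIMIT 10;"

Tables where seq_tup_read dwarfs the index activity are being read sequentially, which is exactly the case where the ioDrive buys you little.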
Is there a component to your workload that does a lot of random read or write requests? If so, is that chunk of the data set bigger than RAM, but small enough to fit on the FusionIO drive? Only when all those conditions are true does that hardware really make sense. For example, running a 300GB pgbench test on a system with 128GB of RAM, the FusionIO drive was almost 10X as fast as the 20 disk array. And its raw seek rate was 20X as fast at all concurrency levels.
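For scale, a test like that 300GB one is something you can reproduce yourself with pgbench. A sketch only; the scale factor is an assumption based on roughly 15MB per pgbench scale unit, and the client count is arbitrary:

    # initialize a ~300GB pgbench database (scale 20000 is approximate)
    pgbench -i -s 20000 pgbench
    # mixed read/write run: 32 clients for 10 minutes
    pgbench -c 32 -T 600 pgbench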
But at the same time, tests on database sizes that fit into RAM were slower on FusionIO than on the regular disk array. When there's no random I/O to worry about, the slower sequential throughput of the SSD meant it lost the small database tests.
You really need to measure your current system carefully to figure out just what it's doing as far as I/O goes before making this sort of decision. Given what ioDrives cost, if you're not sure how to do that yourself, it's surely worth hiring a storage-oriented database consultant for a few days to help figure it out.
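OS-level and database-level counters together usually tell that story. A starting sketch; the sampling interval is arbitrary:

    # per-device throughput, wait times, and read/write mix, sampled every 5s
    iostat -mx 5
    # checkpoint and background writer activity inside PostgreSQL
    psql -c "SELECT * FROM pg_stat_bgwriter;"

High %util with small average request sizes points at random I/O, the case where an ioDrive pays off; large requests at full throughput point at sequential scans, where it won't.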
> XEN host with 16 CPUs (Intel(R) Xeon(R) CPU L5520 @ 2.27GHz)
> CentOS 5.6
> 80GB RAM
> Storage: some Hitachi Fibre Channel SAN with two LUNs:
>   1st LUN has *everything* under $PG_DATA (used 850GB)
>   2nd LUN has *all* indexes (index tablespace) (used 550GB)
Make sure you do basic benchmarks of all this hardware before you add even more pieces to the mix. Both Xen hosts and SANs can cause all sorts of performance bottlenecks. It's possible you won't even be able to fully utilize the hardware you've already got if it's running with a virtual machine layer in there. I have no idea how a FusionIO drive will work in that environment, but I wouldn't expect it to be great. They need a fast CPU to run well, and some of the processing is done in the driver rather than on the card.
> checkpoint_segments  | 48
> maintenance_work_mem | 256MB
> shared_buffers       | 9GB
> wal_buffers          | 50MB
> work_mem             | 256MB
checkpoint_segments should be higher, at least 64 and probably 128 to 256. shared_buffers should be lower (at most 8GB, maybe even less). maintenance_work_mem should be 1 to 2GB on a server with 80GB of RAM. There's no proven benefit to increasing wal_buffers over 16MB.
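Putting those suggestions together, a starting point might look like the below. These exact numbers are my assumptions for an 80GB server, not measured optimums:

    checkpoint_segments  | 128
    maintenance_work_mem | 1GB
    shared_buffers       | 8GB
    wal_buffers          | 16MB
    work_mem             | 64MB   (see the note below before settling on this)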
This setting for work_mem can easily allow your server to allocate over 25GB of RAM for query working memory if all 100 connections do something, and several times that if queries run multiple sorts or hashes at once. Either reduce it a lot, or decrease max_connections, if you want this server to run safely.
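The arithmetic behind that warning, assuming max_connections=100:

    256MB work_mem x 100 connections         = 25.6GB (one allocation each)
    256MB x 100 connections x 3 plan nodes   = 76.8GB (most of your RAM)

Each sort, hash join, or hash aggregate node in a query plan can claim its own work_mem allocation, which is why the multiplier matters.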
--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support   www.2ndQuadrant.us