Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-02 22:08, Merlin Moncure wrote:

On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga <yebhavi...@gmail.com> wrote:

Intel latency graph at http://imgur.com/Hh3xI
Ocz latency graph at http://imgur.com/T09LG

curious: what were the pgbench results in terms of tps?

merlin


Both comparable near 10K tps.

-- Yeb




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Merlin Moncure
On Thu, Nov 3, 2011 at 4:38 AM, Yeb Havinga yebhavi...@gmail.com wrote:
 On 2011-11-02 22:08, Merlin Moncure wrote:

 On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga <yebhavi...@gmail.com> wrote:

 Intel latency graph at http://imgur.com/Hh3xI
 Ocz latency graph at http://imgur.com/T09LG

 curious: what were the pgbench results in terms of tps?

 merlin

 Both comparable near 10K tps.

Well, and this is just me, I'd probably stick with the 710, but that's
based on my understanding of things on paper, not real world
experience with that drive.  The Vertex 2 is definitely a more
reliable performer, but it looks like the results in your graph are
mostly skewed by a few outlying data points.  If the 710 has the
write durability that Intel is advertising, then ISTM that is one less
thing to think about.  My one experience with the Vertex 2 Pro was
that it was certainly fast but burned out just shy of the 10k write
cycle point after all the numbers were crunched.  This is just too
close for comfort on databases that are doing a lot of writing.

Note that either drive is giving you the performance of somewhere
between a 40- and 60-drive tray of 15k drives configured in a raid 10
(once you overflow the write cache on the raid controller(s)).  It
would take a pretty impressive workload indeed to become I/O bound
with either one of these drives... high-scale pgbench is fairly
pathological.
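
As a rough sanity check on that spindle comparison - a sketch with assumed, not 
measured, figures: say one of these SSDs sustains roughly 4-5K random write IOPS, 
a 15k SAS spindle does about 180, and RAID 10 costs two physical writes per 
logical write:

# hypothetical figures: ~180 write IOPS per 15k spindle, RAID 10 write penalty of 2
for ssd_iops in 4000 5500; do
    echo "$ssd_iops SSD write IOPS ~= $(( ssd_iops * 2 / 180 )) spindles in RAID 10"
done
# prints roughly 44 and 61 spindles, in line with the 40-60 drive estimate above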

merlin



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Shaun Thomas

On 11/03/2011 04:38 AM, Yeb Havinga wrote:


Both comparable near 10K tps.


That's another thing I was wondering about. Why are we talking about 
Vertex 2 Pros, anyway? The Vertex 3 Pros post much better results and 
are still capacitor-backed.


--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
stho...@peak6.com




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-03 15:31, Shaun Thomas wrote:

On 11/03/2011 04:38 AM, Yeb Havinga wrote:


Both comparable near 10K tps.


That's another thing I was wondering about. Why are we talking about 
Vertex 2 Pros, anyway? The Vertex 3 Pros post much better results and 
are still capacitor-backed.




Not for sale yet...

-- Yeb




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-03 Thread Yeb Havinga

On 2011-11-02 16:06, Magnus Hagander wrote:

On Wed, Nov 2, 2011 at 16:04, Yeb Havinga <yebhavi...@gmail.com> wrote:

On 2011-11-02 15:06, Kevin Grittner wrote:

Yeb Havinga <yebhavi...@gmail.com> wrote:


I'm now contemplating not using the 710 at all. Why should I not
buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex
3 Max IOPS) with an IO controller+BBU?

Wouldn't the data be subject to loss between the time the IO
controller writes to the SSD and the time it makes it from buffers
to flash RAM?

Good question. My guess would be no, if the raid controller does
'write-throughs' on the attached disks, and the SSDs don't lie about when
they've written to RAM.

Don't most SSDs without supercaps lie about the writes, though?



I happened to have a Vertex 3, no supercap, available to test this with 
diskchecker. On an ext4 filesystem (just mounted with noatime, not with 
barriers off), this happened:


# /root/diskchecker.pl -s 192.168.73.1 verify testfile
 verifying: 0.00%
 verifying: 30.67%
 verifying: 78.97%
 verifying: 100.00%
Total errors: 0

So I guess that's about as much as I can test without actually hooking 
it up behind a hardware controller and testing that. I will soon test the 
3ware 9750 with the Vertex 3 and Intel 510 - both are on the 3ware's SSD 
compatibility list.
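
For reference, the full diskchecker.pl procedure is roughly as follows; the exact 
arguments below are from memory of the tool's usage, so treat them as an 
assumption and check the script's built-in help:

# on another machine that stays powered on (the "server" side):
./diskchecker.pl -l

# on the machine with the SSD under test: write test data, then pull the power plug
./diskchecker.pl -s 192.168.73.1 create testfile 500    # size assumed to be in MB

# after the test machine comes back up, check what actually reached stable storage
./diskchecker.pl -s 192.168.73.1 verify testfile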


More info from testing software raid 1:
- with lvm mirroring, discards / trim go through to the disks. Here the 
Intel is fast enough, but the Vertex 2 Pro stays busy for ~10 
seconds.
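
A minimal sketch of that LVM mirror setup, assuming partition names /dev/sda3 and 
/dev/sdb3 and a volume group name picked for illustration; whether online TRIM 
really reaches the disks still depends on the kernel's device-mapper discard 
support, which is what the observation above is about:

# build an LVM mirror across the two SSDs (device and VG names are assumptions)
pvcreate /dev/sda3 /dev/sdb3
vgcreate vg_ssd /dev/sda3 /dev/sdb3
lvcreate -m1 -L 50G -n pgdata vg_ssd

# mount with online discard so ext4 issues TRIM for freed blocks
mkfs.ext4 /dev/vg_ssd/pgdata
mount -o noatime,discard /dev/vg_ssd/pgdata /data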


-- Yeb




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Kevin Grittner
Yeb Havinga yebhavi...@gmail.com wrote:
 
 I'm now contemplating not using the 710 at all. Why should I not
 buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex
 3 Max IOPS) with an IO controller+BBU?
 
Wouldn't the data be subject to loss between the time the IO
controller writes to the SSD and the time it makes it from buffers
to flash RAM?
 
-Kevin



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Merlin Moncure
On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga yebhavi...@gmail.com wrote:
 Hello list,

 An OCZ Vertex 2 PRO and an Intel 710 SSD, both 100GB, in a software raid 1
 setup. I was pretty convinced this was the perfect solution to run
 PostgreSQL on SSDs without an IO controller with BBU. No worries about strange
 firmware bugs because of two different drives, good write endurance of the
 710. Access to the smart attributes. Complete control over the disks:
 nothing hidden by a hardware raid IO layer.

 Then I did a pgbench test:
 - bigger than RAM test (~30GB database with 24GB ram)
 - during the test I removed the Intel 710, and 10 minutes later inserted it
 again and added it back to the array.

 The pgbench transaction latency graph is here: http://imgur.com/JSdQd

 With only the OCZ, latencies are acceptable but with two drives, there are
 latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due
 to software raid, or is it the Intel 710? To figure that out I repeated the
 test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59
 (The 12 seconds maximum was at disk remove time.)

 So the Intel 710 kind of sucks latency wise. Is it because it is also
 heavily reading, and maybe WAL should not be put on it?

 I did another test, same as before but
 * with 5GB database completely fitting in RAM (24GB)
 * put WAL on a ramdisk
 * started on the mirror
 * during the test mdadm --fail on the Intel SSD

 Latency graph is at: http://imgur.com/dY0Rk

 So still: with Intel 710 participating in writes (beginning of graph), some
 latencies are over 2 seconds, with only the OCZ, max write latencies are
 near 300ms.

 I'm now contemplating not using the 710 at all. Why should I not buy two
 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with
 an IO controller+BBU?

 Benefits: should be faster for all kinds of reads and writes.
 Concerns: TRIM becomes impossible (which was already impossible with md
 raid1; lvm / dm based mirroring could work), but is TRIM important for a
 PostgreSQL io load, without e.g. routine TRUNCATEs? Also the write endurance
 of these drives is probably a lot less than with the previous setup.

software RAID (mdadm) is currently blocking TRIM.  the only way to
get TRIM in a raid-ish environment is through LVM mirroring/striping
or w/btrfs raid (which is not production ready afaik).

Given that, if you do use software raid, it's not a good idea to
partition the entire drive because the very first thing the raid
driver does is write to the entire device.

I would keep at least 20-30% of both drives unpartitioned to leave the
controller room to wear level as well as do other stuff.  I'd try
wiping the drives, repartitioning, and repeating your test.  I would
also compare times through mdadm and directly to the device.
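
A sketch of that re-test, assuming the drives appear as /dev/sda and /dev/sdb and 
that fio is available for the direct-versus-mdadm comparison (partition sizes and 
fio parameters are illustrative only, and the fio runs are destructive):

# leave ~30% of each drive unpartitioned so the controller keeps spare area
parted -s /dev/sda mklabel gpt mkpart primary 1MiB 70%
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 70%

# baseline: random write latency straight against one bare partition
fio --name=raw --filename=/dev/sda1 --rw=randwrite --bs=4k --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based

# then build the raid 1 array and repeat the same test through md
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
fio --name=md --filename=/dev/md0 --rw=randwrite --bs=4k --iodepth=16 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based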

merlin



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Yeb Havinga

On 2011-11-02 15:06, Kevin Grittner wrote:

Yeb Havinga <yebhavi...@gmail.com> wrote:


I'm now contemplating not using the 710 at all. Why should I not
buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex
3 Max IOPS) with an IO controller+BBU?


Wouldn't the data be subject to loss between the time the IO
controller writes to the SSD and the time it makes it from buffers
to flash RAM?


Good question. My guess would be no, if the raid controller does 
'write-throughs' on the attached disks, and the SSDs don't lie about 
when they've written to RAM.


I'll put this on my to test list for the new setup.

-- Yeb




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Magnus Hagander
On Wed, Nov 2, 2011 at 16:04, Yeb Havinga yebhavi...@gmail.com wrote:
 On 2011-11-02 15:06, Kevin Grittner wrote:

 Yeb Havinga <yebhavi...@gmail.com> wrote:

 I'm now contemplating not using the 710 at all. Why should I not
 buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex
 3 Max IOPS) with an IO controller+BBU?

 Wouldn't the data be subject to loss between the time the IO
 controller writes to the SSD and the time it makes it from buffers
 to flash RAM?

 Good question. My guess would be no, if the raid controller does
 'write-throughs' on the attached disks, and the SSDs don't lie about when
 they've written to RAM.

Don't most SSDs without supercaps lie about the writes, though?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Yeb Havinga

On 2011-11-02 15:26, Merlin Moncure wrote:

On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga <yebhavi...@gmail.com> wrote:

Hello list,

An OCZ Vertex 2 PRO and an Intel 710 SSD, both 100GB, in a software raid 1
setup. I was pretty convinced this was the perfect solution to run
PostgreSQL on SSDs without an IO controller with BBU. No worries about strange
firmware bugs because of two different drives, good write endurance of the
710. Access to the smart attributes. Complete control over the disks:
nothing hidden by a hardware raid IO layer.

Then I did a pgbench test:
- bigger than RAM test (~30GB database with 24GB ram)
- during the test I removed the Intel 710, and 10 minutes later inserted it
again and added it back to the array.

The pgbench transaction latency graph is here: http://imgur.com/JSdQd

With only the OCZ, latencies are acceptable but with two drives, there are
latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due
to software raid, or is it the Intel 710? To figure that out I repeated the
test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59
(The 12 seconds maximum was at disk remove time.)

So the Intel 710 kind of sucks latency wise. Is it because it is also
heavily reading, and maybe WAL should not be put on it?

I did another test, same as before but
* with 5GB database completely fitting in RAM (24GB)
* put WAL on a ramdisk
* started on the mirror
* during the test mdadm --fail on the Intel SSD

Latency graph is at: http://imgur.com/dY0Rk

So still: with Intel 710 participating in writes (beginning of graph), some
latencies are over 2 seconds, with only the OCZ, max write latencies are
near 300ms.

I'm now contemplating not using the 710 at all. Why should I not buy two
6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with
an IO controller+BBU?

Benefits: should be faster for all kinds of reads and writes.
Concerns: TRIM becomes impossible (which was already impossible with md
raid1; lvm / dm based mirroring could work), but is TRIM important for a
PostgreSQL io load, without e.g. routine TRUNCATEs? Also the write endurance
of these drives is probably a lot less than with the previous setup.

software RAID (mdadm) is currently blocking TRIM.  the only way to
get TRIM in a raid-ish environment is through LVM mirroring/striping
or w/btrfs raid (which is not production ready afaik).

Given that, if you do use software raid, it's not a good idea to
partition the entire drive because the very first thing the raid
driver does is write to the entire device.


If that is bad because of a decreased lifetime, I don't think this 
number of writes is significant - in a few hours of pgbenching the GBs 
written are more than 10 times the size of the drives. Or do you suggest 
this because then the disk firmware can operate assuming a smaller IDEMA 
capacity, thereby prolonging the drive life? (i.e. the Intel 710 200GB 
has 200GB IDEMA capacity but 320GB raw flash).



I would keep at least 20-30% of both drives unpartitioned to leave the
controller room to wear level as well as do other stuff.  I'd try
wiping the drives, repartitioning, and repeating your test.  I would
also compare times through mdadm and directly to the device.


Good idea.

-- Yeb




Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread David Boreham




So the Intel 710 kind of sucks latency wise. Is it because it is also 
heavily reading, and maybe WAL should not be put on it?


A couple quick thoughts:

1. There are a lot of moving parts in the system besides the SSDs.
It will take some detailed analysis to determine the cause of the
outlying high-latency transactions. The cause may not be as simple
as one SSD processing I/O operations less quickly than another.
For example, the system may be subject to some sort of
starvation issue in PG or the OS that is affected by quite
small differences in underlying storage performance.

2. What are your expectations for maximum transaction latency?
In my experience it is not possible to guarantee sub-second
(or even sub-multi-second) latencies overall in a system
built with a general-purpose OS and database software.
(Put another way: a few outlying 1-second and even
several-second transactions would be pretty much what
I'd expect to see on a database under sustained saturation
load, as experienced under a pgbench test.)






Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Merlin Moncure
On Wed, Nov 2, 2011 at 10:16 AM, Yeb Havinga yebhavi...@gmail.com wrote:
 On 2011-11-02 15:26, Merlin Moncure wrote:

 On Wed, Nov 2, 2011 at 8:05 AM, Yeb Havinga <yebhavi...@gmail.com> wrote:

 Hello list,

 An OCZ Vertex 2 PRO and an Intel 710 SSD, both 100GB, in a software raid 1
 setup. I was pretty convinced this was the perfect solution to run
 PostgreSQL on SSDs without an IO controller with BBU. No worries about
 strange
 firmware bugs because of two different drives, good write endurance of
 the
 710. Access to the smart attributes. Complete control over the disks:
 nothing hidden by a hardware raid IO layer.

 Then I did a pgbench test:
 - bigger than RAM test (~30GB database with 24GB ram)
 - during the test I removed the Intel 710, and 10 minutes later inserted
 it again and added it back to the array.

 The pgbench transaction latency graph is here: http://imgur.com/JSdQd

 With only the OCZ, latencies are acceptable but with two drives, there
 are
 latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this
 due
 to software raid, or is it the Intel 710? To figure that out I repeated
 the
 test, but now removing the OCZ, latency graph at: http://imgur.com/DQa59
 (The 12 seconds maximum was at disk remove time.)

 So the Intel 710 kind of sucks latency wise. Is it because it is also
 heavily reading, and maybe WAL should not be put on it?

 I did another test, same as before but
 * with 5GB database completely fitting in RAM (24GB)
 * put WAL on a ramdisk
 * started on the mirror
 * during the test mdadm --fail on the Intel SSD

 Latency graph is at: http://imgur.com/dY0Rk

 So still: with Intel 710 participating in writes (beginning of graph),
 some
 latencies are over 2 seconds, with only the OCZ, max write latencies are
 near 300ms.

 I'm now contemplating not using the 710 at all. Why should I not buy two
 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS)
 with
 an IO controller+BBU?

 Benefits: should be faster for all kinds of reads and writes.
 Concerns: TRIM becomes impossible (which was already impossible with md
 raid1; lvm / dm based mirroring could work), but is TRIM important for a
 PostgreSQL io load, without e.g. routine TRUNCATEs? Also the write
 endurance of these drives is probably a lot less than with the previous
 setup.

 software RAID (mdadm) is currently blocking TRIM.  the only way to
 get TRIM in a raid-ish environment is through LVM mirroring/striping
 or w/btrfs raid (which is not production ready afaik).

 Given that, if you do use software raid, it's not a good idea to
 partition the entire drive because the very first thing the raid
 driver does is write to the entire device.

 If that is bad because of a decreased lifetime, I don't think these number
 of writes are significant - in a few hours of pgbenching I the GBs written
 are more than 10 times the GB sizes of the drives. Or do you suggest this
 because then the disk firmware can operate assuming a smaller idema
 capacity, thereby proloning the drive life? (i.e. the Intel 710 200GB has
 200GB idema capacity but 320GB raw flash).

It's bad because the controller thinks all the data is 'live' -- that
is, important.  When all the data on the drive is live, the fancy
tricks the controller pulls to do intelligent wear leveling and to get
fast write times become much more difficult, which in turn leads to
more write amplification and early burnout.  Supposedly, the 710 has
extra space anyway, which is probably there specifically to ameliorate
the raid issue as well as extend lifespan, but I'm still curious how
this works out.

merlin



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Yeb Havinga

On 2011-11-02 16:16, Yeb Havinga wrote:

On 2011-11-02 15:26, Merlin Moncure wrote:


I would keep at least 20-30% of both drives unpartitioned to leave the
controller room to wear level as well as do other stuff.  I'd try
wiping the drives, repartitioning, and repeating your test.  I would
also compare times through mdadm and directly to the device.


Good idea.


Reinstalled the system - 50% of the drives left unpartitioned.
/dev/sdb3  19G  5.0G   13G  29% /ocz
/dev/sda3  19G  4.8G   13G  28% /intel
/dev/sdb3 on /ocz type ext4 (rw,noatime,nobarrier,discard)
/dev/sda3 on /intel type ext4 (rw,noatime,nobarrier,discard)

Again WAL was put in a ramdisk.

pgbench -i -s 300 t # fits in ram
pgbench -c 20 -M prepared -T 300 -l  t

Intel latency graph at http://imgur.com/Hh3xI
Ocz latency graph at http://imgur.com/T09LG
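
For anyone reproducing this, the tps and latency figures can be pulled from the 
per-transaction log written by -l roughly as follows (assuming this pgbench 
version logs the transaction latency in microseconds as the third field of each 
pgbench_log.* line - worth double-checking against the docs for your version):

# transactions per second over the 300 s run
wc -l pgbench_log.* | tail -1 | awk '{printf "%.0f tps\n", $1/300}'

# max and approximate 99th percentile latency, in milliseconds
cat pgbench_log.* | awk '{print $3/1000.0}' | sort -n | \
    awk '{a[NR]=$1} END {printf "max %.1f ms, p99 %.1f ms\n", a[NR], a[int(NR*0.99)]}'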





Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Merlin Moncure
On Wed, Nov 2, 2011 at 3:45 PM, Yeb Havinga yebhavi...@gmail.com wrote:
 On 2011-11-02 16:16, Yeb Havinga wrote:

 On 2011-11-02 15:26, Merlin Moncure wrote:

 I would keep at least 20-30% of both drives unpartitioned to leave the
 controller room to wear level as well as do other stuff.  I'd try
 wiping the drives, repartitioning, and repeating your test.  I would
 also compare times through mdadm and directly to the device.

 Good idea.

 Reinstalled system -  50% drives unpartitioned.
 /dev/sdb3              19G  5.0G   13G  29% /ocz
 /dev/sda3              19G  4.8G   13G  28% /intel
 /dev/sdb3 on /ocz type ext4 (rw,noatime,nobarrier,discard)
 /dev/sda3 on /intel type ext4 (rw,noatime,nobarrier,discard)

 Again WAL was put in a ramdisk.

 pgbench -i -s 300 t # fits in ram
 pgbench -c 20 -M prepared -T 300 -l  t

 Intel latency graph at http://imgur.com/Hh3xI
 Ocz latency graph at http://imgur.com/T09LG

curious: what were the pgbench results in terms of tps?

merlin



Re: [PERFORM] Intel 710 pgbench write latencies

2011-11-02 Thread Andy
Your results are consistent with the benchmarks I've seen. Intel SSDs have much 
worse write performance compared to SSDs that use SandForce controllers, which 
the Vertex 2 Pro does.

According to this benchmark, at high queue depth the random write performance 
of Sandforce is more than 5 times that of Intel 710:
http://www.anandtech.com/show/4902/intel-ssd-710-200gb-review/4


Why don't you just use two Vertex 2 Pros in sw RAID1? It should give you good 
write performance.

Why should I not buy two 6Gbps SSDs without supercap (e.g. Intel 510 and OCZ 
Vertex 3 Max IOPS) with an IO controller+BBU?
Because in that case you'll lose data whenever you have a power loss. Without 
capacitors, data written to the SSD is not durable.



From: Yeb Havinga yebhavi...@gmail.com
To: pgsql-performance@postgresql.org
Sent: Wednesday, November 2, 2011 9:05 AM
Subject: [PERFORM] Intel 710 pgbench write latencies

Hello list,

An OCZ Vertex 2 PRO and an Intel 710 SSD, both 100GB, in a software raid 1 setup. I 
was pretty convinced this was the perfect solution to run PostgreSQL on SSDs 
without an IO controller with BBU. No worries about strange firmware bugs because 
of two different drives, good write endurance of the 710. Access to the smart 
attributes. Complete control over the disks: nothing hidden by a hardware raid 
IO layer.

Then I did a pgbench test:
- bigger than RAM test (~30GB database with 24GB ram)
- during the test I removed the Intel 710, and 10 minutes later inserted it again 
and added it back to the array.
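
The hot remove and re-add on a software raid 1 can also be driven from mdadm 
instead of physically pulling the drive; a sketch, with /dev/md0 and /dev/sda1 as 
assumed names:

# mark the Intel as failed and take it out of the mirror
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1

# ...ten minutes later, add it back and let the array resync
mdadm /dev/md0 --add /dev/sda1
cat /proc/mdstat    # watch the rebuild progress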

The pgbench transaction latency graph is here: http://imgur.com/JSdQd

With only the OCZ, latencies are acceptable but with two drives, there are 
latencies up to 3 seconds! (and 11 seconds at disk remove time) Is this due to 
software raid, or is it the Intel 710? To figure that out I repeated the test, 
but now removing the OCZ, latency graph at: http://imgur.com/DQa59 (The 12 
seconds maximum was at disk remove time.)

So the Intel 710 kind of sucks latency wise. Is it because it is also heavily 
reading, and maybe WAL should not be put on it?

I did another test, same as before but
* with 5GB database completely fitting in RAM (24GB)
* put WAL on a ramdisk
* started on the mirror
* during the test mdadm --fail on the Intel SSD
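
Putting WAL on a ramdisk for a test like this can be done roughly as follows 
(paths and the tmpfs size are assumptions; pg_xlog is the WAL directory in 
PostgreSQL of this era, and anything on tmpfs is of course gone after a crash or 
reboot):

# create a ramdisk and move pg_xlog onto it (for testing only - WAL is lost on reboot)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk

pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/ramdisk/pg_xlog
ln -s /mnt/ramdisk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start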

Latency graph is at: http://imgur.com/dY0Rk

So still: with Intel 710 participating in writes (beginning of graph), some 
latencies are over 2 seconds, with only the OCZ, max write latencies are near 
300ms.

I'm now contemplating not using the 710 at all. Why should I not buy two 6Gbps 
SSDs without supercap (e.g. Intel 510 and OCZ Vertex 3 Max IOPS) with an IO 
controller+BBU?

Benefits: should be faster for all kinds of reads and writes.
Concerns: TRIM becomes impossible (which was already impossible with md raid1; 
lvm / dm based mirroring could work), but is TRIM important for a PostgreSQL io 
load, without e.g. routine TRUNCATEs? Also the write endurance of these drives 
is probably a lot less than with the previous setup.

Thoughts, ideas are highly appreciated!
-- Yeb

PS:
I checked for proper alignment of partitions as well as md's data offset; all 
was well.
Ext4 filesystem mounted with barrier=0
/proc/sys/vm/dirty_background_bytes set to 17850
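
For completeness, the alignment check and the writeback tuning from the PS can be 
done roughly like this (device names are assumptions, and parted's align-check 
needs a reasonably recent parted):

# check partition alignment and the md superblock's data offset
parted /dev/sda align-check optimal 1
mdadm --examine /dev/sda1 | grep -i 'data offset'

# lower the background writeback threshold (value taken from the PS above)
sysctl -w vm.dirty_background_bytes=17850
# to persist it, add 'vm.dirty_background_bytes = 17850' to /etc/sysctl.conf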


