On 2011-07-19 09:56, Florian Weimer wrote:
> * Yeb Havinga:

>> The biggest drawback of 2 SSD's with supercap in hardware raid 1, is
>> that if they are both new and of the same model/firmware, they'd
>> probably reach the end of their write cycles at the same time, thereby
>> failing simultaneously.
> I thought so too, but I've got two Intel 320s (I suppose, the reported
> device model is "SSDSA2CT040G3") in a RAID 1 configuration, and after
> about a month of testing, one is down to 89 on the media wearout
> indicator, and the other is still at 96.  Both devices are
> deteriorating, but one at a significantly faster rate.
That's great news if it turns out to be generally true. Is this on mdadm software RAID?

I searched the mdadm manual a bit for possible causes. It isn't the occasional check (echo check > /sys/block/md0/md/sync_action), since that appears to read both halves and compare them, without writing. Another idea was that the on-disk layout of the mirror might be different on the two drives, but the manual says the --layout configuration directive applies only to RAID 5, 6 and 10, not to RAID 1. Then my eye caught --write-behind, which sets the maximum number of outstanding writes and has a non-zero default value; however, write-behind is only used for drives marked write-mostly.
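For reference, the consistency check mentioned above can be triggered and inspected like this (a sketch, assuming the array is /dev/md0 and you are running as root):

```shell
# Trigger a RAID-1 consistency check: md reads both mirror halves
# and compares them, so it should not add write wear by itself.
echo check > /sys/block/md0/md/sync_action

# Watch progress, then see how many sectors differed (if any).
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```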

Maybe it is caused by the initial build of the array? But then a 7% difference seems like an awful lot.

It would be interesting to see whether the drives also report some kind of total-data-written counter, and whether that differs by a similarly large amount.
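For what it's worth, smartmontools can dump these attributes with `smartctl -A /dev/sdX`. The snippet below is a sketch that extracts the normalized wearout value and the raw writes counter from such output; the attribute IDs used (233 for the media wearout indicator, 241 for total LBAs written) follow Intel's SSD attribute layout, and the sample lines and values here are made up for illustration. Raw-value units vary by vendor and firmware, so the two drives should only be compared against each other.

```shell
# Hypothetical excerpt of `smartctl -A` output for one drive
# (columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW).
sample='233 Media_Wearout_Indicator 0x0032   096   096   000    Old_age   Always       -       0
241 Total_LBAs_Written       0x0032   100   100   000    Old_age   Always       -       190234'

# Field 4 is the normalized value, field 10 the raw value.
wear=$(printf '%s\n' "$sample" | awk '$1 == 233 { print $4 }')
lbas=$(printf '%s\n' "$sample" | awk '$1 == 241 { print $10 }')

echo "wearout indicator (normalized): $wear"
echo "total LBAs written (raw):       $lbas"
```

Running this against both members of the mirror would show whether the drive with the lower wearout value has actually absorbed more writes, or whether it is simply wearing faster for the same write volume.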

regards,
Yeb Havinga


--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
