> ...to change sata disk for /home

Why only for /home?
Since the email data is the biggest part, I would not bother about the rest (system, Dovecot, etc.)
and would put everything on the SSD.

> Even server grade SSD's are prone to sudden failures. Mostly due to exceeded max write count.

This is only true for old SSD drives and bad installations. For modern drives - Intel, for instance - the warranty covers a MINIMUM lifetime of 5 years @ 20GB write volume per day.

Here is some data from Mtron <http://www.storagesearch.com/mtron.html> (one of the few SSD OEMs who quote endurance in a way that non-specialists can understand). In the data sheet for their 32G product <http://www.mtron.net/files/MSD_S_spec.pdf> - which incidentally has 5 million cycles write endurance - they quote the write endurance for the disk as "greater than 85 years assuming 100G / day erase/write cycles" - which involves overwriting the disk 3 times a day.

So - properly used (TRIM, 10% over-provisioning, mount with noatime, tmp on a virtual fs) - a modern SSD is for sure much, much more reliable than any magnetic platter drive.
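A minimal /etc/fstab sketch for those mount options - the UUID and filesystem are placeholders, adjust to your own setup, and some people prefer a periodic fstrim job over the discard option:

  # example only - replace UUID/filesystem with your own
  UUID=xxxxxxxx  /home  ext4   noatime,discard   0  2
  tmpfs          /tmp   tmpfs  defaults,noatime  0  0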

Over-provisioning extends the SSD's life - because not all cells in a chip have the same endurance. There's a distribution curve of endurance within chip blocks - a proprietary secret, but one the SSD controller designer can characterize for the chips they support. Most blocks are significantly better than the floor level in the same memory chip.

SLC  : about 100,000 write cycles / cell
eMLC : about 10,000 write cycles / cell
MLC  : about 3,000 write cycles / cell
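For a rough sense of scale (my own back-of-envelope figure, ignoring write amplification and wear levelling): a 256GB MLC drive at ~3,000 cycles per cell gives on the order of

  256GB x 3,000 cycles = ~750TB of raw write endurance

which is why drive vendors can afford to warrant figures like the 150TB written per drive mentioned below.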

> In mirror raid both drives will fail almost at the same time, because of identical workloads.

Au contraire: not only can an SSD RAID array offer a multiple of a single SSD's throughput and IOPS, just as with hard disks, but depending on the array configuration the operating life can be multiplied as well - because not all the disks will operate at 100% duty cycle. That means that MTBF, and not write endurance, will be the limiting factor. And although OEM-published MTBF data for hard disks has been discredited recently <http://www.storagesearch.com/news2007-feb4.html>, the MTBF data for flash SSDs has been verified for over a decade in more discriminating applications in high-reliability embedded systems.

Therefore I use for my very heavily loaded servers:

LSI SAS9270i or similar RAID controller - LSI service is really good, and the controllers perform very well
6 x Samsung Pro Series SSD drives, using RAID 1+0 (never use RAID 5 ...) - 10 years warranty @ 150TB written for each drive. With RAID 1+0 every write lands on just one of the three mirrored pairs, so I end up with (a minimum of) 3 x 150TB = 450TB written - well, that's a lot, isn't it? And >10% over-provisioning will extend that value a lot.

1 magnetic drive for nightly backup with rsync (a sketch of such a job is below)
PC backup server to make a backup of the nightly backup during the day ...

use mdbox format
use xz compression (uses LOTS of RAM, but reduces data volume (and therefore write volume); compression is faster than the data write rates ..., and the cache is used more efficiently) - a config sketch follows below
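As a rough sketch of what that looks like in a Dovecot 2.2 config (assuming the zlib plugin was built with xz/liblzma support; the mailbox path is just an example):

  mail_location = mdbox:~/mdbox
  mail_plugins = $mail_plugins zlib
  plugin {
    zlib_save = xz
    zlib_save_level = 6
  }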
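And roughly what the nightly rsync backup mentioned above could look like (paths and schedule are examples, not my actual setup):

  # /etc/crontab entry - copy /home to the magnetic backup drive at 02:30
  # -a archive, -H hard links, -A ACLs, -X xattrs; --delete keeps the copy exact
  30 2 * * * root  rsync -aHAX --delete /home/ /mnt/backup/home/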

Never had ANY problems so far, and the speed is amazing.

> The other factor is the price.

My time - and uptime - is priceless. Hardware is cheap.
I happily throw $3000 in the ring to sleep well.
One day of data recovery and no mail for 40 users is for sure much, much more expensive ...

You can find much more information here:

http://www.storagesearch.com/ssdmyths-endurance.html


On 2014-10-16 at 19:45, Przemysław Orzechowski wrote:

> On 16.10.2014 18:24, Luciano Gabriel Andino wrote:
>
>> Hi, I am thinking to change the sata disk for /home and I want to know if
>> changing to an SSD hd is a good option. I have 30-40 accounts with 30-50K
>> emails in boxes.
>
> Hi
>
> SSDs give you fast read and (degrading with time) fast write performance, but at a cost.
>
> Even server-grade SSDs are prone to sudden failures, mostly due to an exceeded max write count. And when they fail you lose all the data stored on them (this has happened a few times at my work). So we are using SSDs as fast storage, i.e. as cache, but we always have a persistent copy somewhere else, or store data that can easily be reconstructed in case an SSD fails.
>
> You can read SSD disks all the time, but writing to them causes fast wear. Most SSDs have a specific limit on the number of writes (more specifically, erase cycles for memory blocks) and when they reach that limit they just stop working. In a mirror RAID both drives will fail at almost the same time, because of identical workloads.
>
> That's what my experience with SSD storage is.
> The other factor is the price.
>
> In my laptop (not so heavily used - a few VMs and an Ubuntu desktop, with 30% of the drive left unpartitioned) an Intel consumer-grade SSD died within a year (no files recoverable).
>
> So my advice is to either store the Dovecot indexes on SSD (should improve performance) or have a mirror (dsync?) of your /home on some other (magnetic) storage, unless you are OK with losing the /home contents.
