On 11/07/2013 11:47 AM, Gruher, Joseph R wrote:
-----Original Message-----
From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-
boun...@lists.ceph.com] On Behalf Of Dinu Vlad
Sent: Thursday, November 07, 2013 3:30 AM
To: ja...@peacon.co.uk; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph cluster performance

In this case, however, the SSDs were only used for journals, and I don't know whether
ceph-osd sends TRIM to the drive in the process of journaling over a block
device. They were also under-subscribed, with just 3 x 10G partitions out of
240 GB raw capacity. I did a manual trim, but it didn't change anything.
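
For what it's worth, a quick way to confirm whether a journal SSD even advertises
discard/TRIM support to the kernel is to read the block queue attributes in sysfs.
A minimal Python sketch (the device name "sdb" is just an example, not from this setup):

    #!/usr/bin/env python
    # Sketch: check whether a drive advertises TRIM/discard support by reading
    # the kernel's sysfs queue attributes. "sdb" is a placeholder device name.
    import os

    def discard_info(dev="sdb"):
        base = "/sys/block/%s/queue" % dev
        gran = int(open(os.path.join(base, "discard_granularity")).read())
        maxb = int(open(os.path.join(base, "discard_max_bytes")).read())
        if gran == 0 or maxb == 0:
            print("%s: discard/TRIM not supported (or not passed through)" % dev)
        else:
            print("%s: discard granularity %d bytes, max %d bytes per request"
                  % (dev, gran, maxb))

    if __name__ == "__main__":
        discard_info("sdb")

If discard_granularity reads back as 0, a manual trim of those partitions isn't
going to do anything useful on that device anyway.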

If your SSD capacity is well in excess of your journal capacity requirements, you could
consider overprovisioning the SSD.  Overprovisioning should increase SSD performance and
lifetime.  It achieves much the same effect as trim (it lets the SSD know which cells hold
real data and which can be treated as free).  I wonder how effective trim would be on a
Ceph journal area.  If the journal empties and is then trimmed, the next write cycle should
be faster; but if the journal is active all the time, the benefit would be lost almost
immediately, since those cells receive new data right away and go back to an "untrimmed"
state until the next trim occurs.
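
To put rough numbers on the "well in excess" part: the usual rule of thumb sizes a
filestore journal at 2 * expected throughput * filestore max sync interval, so three
10G partitions on a 240 GB SSD leave most of the drive as spare area. A small sketch of
that arithmetic (the throughput and sync interval values are illustrative assumptions,
not measurements from this cluster):

    # Rough journal sizing and overprovisioning arithmetic.
    # Rule of thumb: 2 * expected_throughput * filestore_max_sync_interval.
    # 3 journals, 10 GB each, 240 GB SSD come from the setup described above;
    # throughput and sync interval below are illustrative assumptions.

    def journal_size_gb(throughput_mb_s=400, sync_interval_s=5):
        return 2 * throughput_mb_s * sync_interval_s / 1024.0  # MB -> GB

    journals = 3
    journal_gb = 10
    ssd_gb = 240

    used = journals * journal_gb
    print("suggested journal size: %.1f GB" % journal_size_gb())
    print("journal partitions use %d of %d GB; %.0f%% of the drive is spare"
          % (used, ssd_gb, 100.0 * (ssd_gb - used) / ssd_gb))
    # Leaving the spare area unpartitioned (or reserved via vendor tools)
    # is what gives the controller room for wear levelling.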

Over-provisioning is definitely something to consider, especially if you aren't buying SSDs with high write endurance. The more cells you can spread the load over, the better. We've had some interesting conversations on here in the past about whether it's more cost-effective to buy large-capacity consumer-grade SSDs with more cells or to shell out for smaller-capacity enterprise-grade drives. My personal opinion is that it's worth paying a bit extra for a drive that employs something like MLC-HET, but there are a lot of "enterprise" grade drives out there with low write endurance that you really have to watch out for. If you are going to pay extra, at least get something with high write endurance and reasonable write speeds.
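
One way to frame the consumer-vs-enterprise question is cost per terabyte actually
written, using the TBW (or DWPD) rating from the datasheet. A toy comparison; all
prices and endurance figures below are placeholders, not real product numbers:

    # Back-of-the-envelope comparison of large consumer drives vs smaller
    # high-endurance drives. Prices and TBW ratings are made-up placeholders;
    # substitute real figures from the drive datasheets.

    def cost_per_tb_written(price_usd, endurance_tbw):
        return price_usd / float(endurance_tbw)

    drives = {
        "large consumer MLC (example)":   {"price": 450.0, "tbw": 500.0},
        "small enterprise HET (example)": {"price": 700.0, "tbw": 7000.0},
    }

    for name, d in drives.items():
        print("%-32s $%.3f per TB written"
              % (name, cost_per_tb_written(d["price"], d["tbw"])))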

Mark


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

