We started our cluster with consumer (Samsung EVO) disks and the write
performance was pitiful: latency averaged 8 ms with periodic spikes far
above that, nowhere near what we were expecting.

When we replaced them with SM863-based devices the difference was night and
day. The DC-grade disks held a nearly constant low latency (consistently
sub-ms) with no spiking, and performance was massively better. For a period
I ran both disk types in the cluster and was able to graph them side by side
under the same workload. This was not even a moderately loaded cluster, so I
am glad we discovered this before we went to full scale.
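
If you want to check a drive before committing to it, the number that
separates these two classes is single-threaded sync-write latency: DC drives
with power-loss protection can safely acknowledge sync writes from their
cache, while consumer drives cannot. A minimal sketch of such a probe in
Python (the target path is a placeholder for a file on the disk under test):

import os, statistics, time

TARGET = "/mnt/testdisk/latency_probe.bin"  # placeholder: file on the disk under test
BLOCK = b"\x00" * 4096
ITERATIONS = 2000

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT, 0o600)
latencies_ms = []
try:
    for _ in range(ITERATIONS):
        t0 = time.perf_counter()
        os.pwrite(fd, BLOCK, 0)  # 4 KiB write at offset 0
        os.fsync(fd)             # force it through the drive's write cache
        latencies_ms.append((time.perf_counter() - t0) * 1000)
finally:
    os.close(fd)

latencies_ms.sort()
print("avg: %.2f ms" % statistics.mean(latencies_ms))
print("p99: %.2f ms" % latencies_ms[int(0.99 * len(latencies_ms)) - 1])
print("max: %.2f ms" % latencies_ms[-1])

The average alone hides the spikes we saw; it's the p99/max numbers that
give consumer drives away.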

So while you certainly can go cheap and cheerful and let data availability
be handled by Ceph, don't expect the performance to keep up.



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Satish 
Patel
Sent: Wednesday, 11 July 2018 10:50 PM
To: Paul Emmerich <paul.emmer...@croit.io>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: Re: [ceph-users] SSDs for data drives

Prices go way up if I pick the Samsung SM863a for all data drives.

We have many servers running on consumer-grade SSD drives and have never
noticed any performance problems or faults so far (but we have never used
Ceph before).

I thought that was the whole point of Ceph: to provide high availability if
a drive goes down, plus parallel reads from multiple OSD nodes.


On Jul 11, 2018, at 6:57 AM, Paul Emmerich <paul.emmer...@croit.io> wrote:
Hi,

we've no long-term data for the SM variant.
Performance is fine as far as we can tell, but the main difference between 
these two models should be endurance.


Also, I forgot to mention that my experiences are only for the 1, 2, and 4 TB 
variants. Smaller SSDs are often proportionally slower (especially below 500GB).

Paul

Robert Stanford <rstanford8...@gmail.com>:
Paul -

That's extremely helpful, thanks. I do have another cluster that uses the
Samsung SM863a just for the journal (spinning disks for data). Do you happen
to have an opinion on those as well?

On Wed, Jul 11, 2018 at 4:03 AM, Paul Emmerich <paul.emmer...@croit.io> wrote:
PM/SM863a are usually great disks and should be the default go-to option;
they outperform even the more expensive PM1633 in our experience. (But that
really doesn't matter if it's for the full OSD and not a dedicated
WAL/journal device.)
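
For reference, here is a rough sketch of what the dedicated-DB/WAL variant
looks like at OSD-creation time with ceph-volume. The device paths are
placeholders, and with only --block.db given the WAL is co-located on the
DB device:

import subprocess

data_dev = "/dev/sdb"      # placeholder: the SSD holding the OSD data
db_dev = "/dev/nvme0n1p1"  # placeholder: faster partition for RocksDB + WAL

# BlueStore OSD with data and DB/WAL split across two devices.
subprocess.run(
    ["ceph-volume", "lvm", "create",
     "--bluestore",
     "--data", data_dev,
     "--block.db", db_dev],
    check=True,
)

For an all-SSD OSD as discussed here, you would simply drop --block.db and
give --data alone.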

We got a cluster with a few hundred SanDisk Ultra II (discontinued, I
believe) that was built on a budget. Not the best disk, but great value.
They have been running for ~3 years now with very few failures and okayish
overall performance.

We also got a few clusters with a few hundred SanDisk Extreme Pro, but we
are not yet sure about their long-term durability as they are only ~9 months
old (averaging ~1000 write IOPS per disk over that time). Some of them
already report only 50-60% lifetime left.
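
We read those lifetime figures from SMART. A rough sketch of how one might
poll them across a fleet, using smartctl's JSON output (available since
smartctl 7.0); the device path is a placeholder, and since the wear
attribute name varies by vendor this just scans the attribute table:

import json, subprocess

dev = "/dev/sda"  # placeholder device path
# smartctl's exit code is a bitmask and can be non-zero even on success,
# so we deliberately don't use check=True here.
raw = subprocess.run(["smartctl", "--json", "-A", dev],
                     capture_output=True, text=True).stdout
data = json.loads(raw)

# SATA drives: scan the attribute table for vendor-specific wear counters.
for attr in data.get("ata_smart_attributes", {}).get("table", []):
    if "wear" in attr["name"].lower() or "life" in attr["name"].lower():
        print(dev, attr["id"], attr["name"], "normalized:", attr["value"])

# NVMe drives expose a generic wear counter instead.
nvme = data.get("nvme_smart_health_information_log")
if nvme is not None:
    print(dev, "percentage_used:", nvme.get("percentage_used"), "%")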

For NVMe, the Intel NVMe 750 is still a great disk.

Be careful to get these exact models. Seemingly similar disks might be
completely bad; for example, the Samsung PM961 is just unusable for Ceph in
our experience.

Paul

2018-07-11 10:14 GMT+02:00 Wido den Hollander <w...@42on.com>:


On 07/11/2018 10:10 AM, Robert Stanford wrote:
>
>  In a recent thread the Samsung SM863a was recommended as a journal
> SSD.  Are there any recommendations for data SSDs, for people who want
> to use just SSDs in a new Ceph cluster?
>

Depends on what you are looking for: SATA, SAS3, or NVMe?

I have had very good experiences with these drives running BlueStore in
SuperMicro machines:

- SATA: Samsung PM863a
- SATA: Intel S4500
- SAS: Samsung PM1633
- NVMe: Samsung PM963

Running WAL+DB+DATA with BlueStore on the same drives.

Wido

>  Thank you


--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
