Hi Ken,

Thank you for your hint - any input is appreciated. Please note that Ceph does 
highly random IO (especially with small object sizes). AnandTech also 
states:

"Some of our other tests have shown a few signs that the 870 EVO's write 
performance can drop when the SLC cache runs out, but this straightforward 
sequential write pass over the entire drive doesn't reveal any such behavior. 
The 870 EVO's sequential write performance is extremely consistent, even on the 
second write pass." [3]

So this kind of cache handling is very interesting under the hood: the Samsung 
SSD 870 EVO seems able to handle sequential IO at nearly SATA line speed, while 
the random write behaviour appears inconsistent. Maybe I can run a big fio job 
on the SSD once I have one of these on my desk (a sketch of what I have in mind 
follows below), but I can't promise I'll have time to.

Even more interesting: we also have some Kingston SEDC450 drives. Kingston 
actually promises consistent write speed [4], albeit at lower specified 
performance. At least those drives have not caught my attention negatively so 
far (though I have not examined them specifically).

[4] https://www.kingston.com/en/ssd/dc450-data-center-solid-state-drive

Best regards,
Michael

-----Original Message-----
From: mailing-lists <mailing-li...@indane.de> 
Sent: Tuesday, 21 February 2023 10:21
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Do not use SSDs with (small) SLC cache

Dear Michael,

I don't have an explanation for your problem, unfortunately, but I was 
surprised that you are seeing a drop in performance that this SSD should not 
exhibit. Your SSDs (Samsung 870 EVO) should not get slower on large writes; you 
can verify this in the post you attached [1] or here [3].

I am curious if replacing them with other disks will improve it.


[3] https://www.anandtech.com/show/16480/the-samsung-870-evo-ssd-1tb-4tb-review/4


Best

Ken


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
