[ceph-users] Ceph performance paper

2019-08-20 Thread Marc Roos
 
Hi Vitaliy, just saw you recommend SSDs to someone, and wanted to take 
the opportunity to thank you for composing this text[0], I enjoyed 
reading it. 

- What do you mean by "bad-SSD-only"?
- Is this patch[1] in a Nautilus release?


[0]
https://yourcmc.ru/wiki/Ceph_performance

[1]
https://github.com/ceph/ceph/pull/26909


Re: [ceph-users] Ceph performance paper

2019-08-20 Thread vitalif

Hi Marc,

Hi Vitaliy, just saw you recommend SSDs to someone, and wanted to take
the opportunity to thank you for composing this text[0], I enjoyed
reading it.

- What do you mean by "bad-SSD-only"?


A cluster consisting only of bad SSDs, like desktop ones :) their 
latency with fsync is almost as bad as an HDD's. For example, a 7200rpm 
HDD may give ~120 write iops (~9ms latency), while a Samsung 960 EVO 
gives only ~580 iops (~1.72ms latency), even though it's an NVMe drive 
and does great without fsync. In the best (worst?) case the patch cuts 
disk I/O latency in half, so it's only noticeable if your disks are 
slow.
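If you want to check what a given drive does under fsync, the usual 
tool is fio with a 4k sync-write job; below is a minimal self-contained 
sketch in Python (my own illustration, not from the wiki) that measures 
single-threaded 4 KiB write+fsync latency. The file path and iteration 
count are just placeholders.

#!/usr/bin/env python3
# Rough QD=1 fsync write test: 4 KiB writes, fsync after each one.
# WARNING: point it at a scratch file on the device under test.
import os, sys, time

path = sys.argv[1] if len(sys.argv) > 1 else "fsync-test.bin"
iterations = 1000
block = os.urandom(4096)

fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.monotonic()
for _ in range(iterations):
    os.write(fd, block)
    os.fsync(fd)  # force the write through the drive's cache
elapsed = time.monotonic() - start
os.close(fd)

print("%.0f iops, %.2f ms avg latency"
      % (iterations / elapsed, elapsed / iterations * 1000))

Roughly speaking, a desktop SSD tends to land in the hundreds of iops 
here, while a datacenter SSD with power-loss protection can reach tens 
of thousands.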



- Is this patch[1] in a Nautilus release?


As I understand it, it's already backported to Nautilus, Mimic and 
maybe even Luminous, but not yet released in any of them.



[0]
https://yourcmc.ru/wiki/Ceph_performance

[1]
https://github.com/ceph/ceph/pull/26909
