But I guess that in 'ceph osd tree' the SSDs were then also displayed
as hdd?
-----Original Message-----
From: Stolte, Felix [mailto:f.sto...@fz-juelich.de]
Sent: Wednesday, December 4, 2019 9:12
To: ceph-users
Subject: [ceph-users] SSDs behind Hardware Raid
Hi guys,
maybe this is common knowledge for most of you; for me it was not:
if you are using SSDs behind a RAID controller in RAID mode (not JBOD),
make sure your operating system treats them correctly as SSDs. I am an
Ubuntu user, but I think the following applies to all Linux operating
systems:
/sys/block/<device>/queue/rotational determines whether a device is
treated as rotational or not: 0 stands for SSD, 1 for rotational.
In my case, Ubuntu treated my SSDs (RAID 0, single disk) as rotational.
Changing the parameter above to 0 for my SSDs and restarting the
corresponding OSD daemons increased 4K write IOPS drastically:
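As a sketch of that change (the device name sdb is only an example, and the write to sysfs needs root; adjust for your own system):

```shell
DEV=sdb  # example device name -- substitute your SSD behind the RAID controller

if [ -e "/sys/block/$DEV/queue/rotational" ]; then
    # Show how the kernel currently classifies the device
    # (1 = rotational/HDD, 0 = non-rotational/SSD)
    cat "/sys/block/$DEV/queue/rotational"

    # Mark the device as non-rotational (requires root)
    echo 0 > "/sys/block/$DEV/queue/rotational"
fi
```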
rados -p ssd bench 60 write -b 4K (6 Nodes, 3 SSDs each)
Before: ~5200 IOPS
After: ~11500 IOPS
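One caveat: a value written to /sys/block/<device>/queue/rotational does not survive a reboot. A udev rule is one way to make it persistent; this is only a sketch (the file name and device match are assumptions, adjust to your hardware):

```
# /etc/udev/rules.d/99-ssd-rotational.rules (example path)
# Force the rotational flag to 0 for a misdetected SSD, here sdb (example device)
ACTION=="add|change", KERNEL=="sdb", ATTR{queue/rotational}="0"
```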
@Developers: I am aware that this is not directly a Ceph issue, but
maybe you could consider adding this hint to your documentation. I could
be wrong, but I think I am not the only one using a hardware RAID
controller for OSDs (not willingly, by the way).
On a side note: ceph-volume lvm batch also uses the rotational
parameter to identify SSDs (please correct me if I am wrong).
Best regards
Felix
-------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt,
Prof. Dr. Sebastian M. Schmidt
-------------------------------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com