Dear everyone,

Last year I set up an experimental Ceph cluster (still a single node, failure domain = osd; motherboard Asus P10S-M WS, CPU Xeon E3-1235L, 64 GB RAM, WD30EFRX HDDs, Ubuntu 18.04, now running kernel 5.3.0 from the Ubuntu mainline PPA and Ceph 14.2.4 from download.ceph.com/debian-nautilus/dists/bionic). I created a JErasure 2+1 pool, created some RBD images using it as the data pool, and exported them over iSCSI (using tcmu-runner, gwcli and the associated packages). But with an HDD-only setup their performance was less than stellar, not saturating even 1 Gbit Ethernet on RBD reads.
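For reference, the pool and images were created roughly like this (the pool and image names here are just placeholders, not my actual ones):

    ceph osd erasure-code-profile set ec21 k=2 m=1 crush-failure-domain=osd
    ceph osd pool create rbd-ec-data 64 64 erasure ec21
    ceph osd pool set rbd-ec-data allow_ec_overwrites true
    ceph osd pool create rbd-meta 64 64 replicated
    rbd create rbd-meta/disk01 --size 100G --data-pool rbd-ec-data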

This year my experiment was funded with a Gigabyte 1 TB PCIe NVMe SSD (GP-ASACNE2100TTTDR). It is now plugged into the motherboard and visible as a storage device to lsblk. I can also see its 4 interrupt queues in /proc/interrupts, and its transfer rate measured by hdparm -t is about 2.3 GB/sec.
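For the record, these are the checks I ran (assuming the card enumerates as /dev/nvme0n1; adjust the device name as needed):

    lsblk /dev/nvme0n1            # visible as a 1 TB block device
    grep nvme /proc/interrupts    # shows its 4 interrupt queues
    sudo hdparm -t /dev/nvme0n1   # reports ~2.3 GB/sec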

And now I want to ask your advice on how best to include it in this existing setup. Should I allocate it for the OSD journals and databases? Is there a way to reconfigure an existing OSD like this without destroying and recreating it? Or are there plans to ease this kind of migration? Can I add it as a write-absorbing cache to individual RBD images? Or to individual block devices at the bcache/dm-cache level? What about speeding up RBD reads?
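For the journals/DB question, my reading of the Nautilus docs suggests that ceph-bluestore-tool might be able to attach a new RocksDB device to an existing OSD in place, something like the sketch below, but I haven't tried it and would appreciate confirmation (the OSD id, VG/LV names and sizes are only examples, not my real layout):

    # carve out an LV on the NVMe for the OSD's DB
    vgcreate ceph-db-nvme /dev/nvme0n1
    lvcreate -L 60G -n db-osd0 ceph-db-nvme

    # stop the OSD and attach the new DB device to it
    systemctl stop ceph-osd@0
    ceph-bluestore-tool bluefs-bdev-new-db \
        --path /var/lib/ceph/osd/ceph-0 \
        --dev-target /dev/ceph-db-nvme/db-osd0
    systemctl start ceph-osd@0

I am also not sure whether ceph-volume needs to be told about the new DB device so the OSD still activates correctly after a reboot, so that is part of what I am asking.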

        I would appreciate reading your opinions and recommendations.
(Just a warning: in this situation I don't have the financial option of going full-SSD.)

        Thank you all in advance for your responses.
