That page has mixed info.  

What would we use instead? SATA / SAS drives, which are progressively withering in the
market and offer less performance for the same money? Why pay extra for an HBA just to
use legacy media?

You can use NVMe for WAL+DB, at the cost of added complexity. You'll get faster metadata
operations and lower latency on some small writes, but in the end the bulk drives are
still the bottleneck.
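
As a rough sketch (the device paths here are hypothetical; substitute your own HDD and
NVMe partition), such a hybrid OSD can be created with ceph-volume, which places the WAL
alongside the DB unless --block.wal is given separately:

  # put the data on the HDD and the RocksDB metadata (DB/WAL) on an NVMe partition
  ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p1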

All-NVMe chassis are the way to go; legacy interfaces are a false economy.

Ceph doesn't automatically assign the nvme device class (NVMe drives are detected as
ssd), but one can easily change the device class on an OSD.
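
A minimal sketch, assuming osd.12 and a pool named mypool (both hypothetical names;
adjust to your cluster):

  # drop the auto-detected class (ssd) and assign nvme instead
  ceph osd crush rm-device-class osd.12
  ceph osd crush set-device-class nvme osd.12

  # create a replicated CRUSH rule restricted to the nvme class and point a pool at it
  ceph osd crush rule create-replicated nvme_rule default host nvme
  ceph osd pool set mypool crush_rule nvme_rule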

> On Jun 28, 2023, at 8:29 AM, Marc <m...@f1-outsourcing.eu> wrote:
> 
> 
>> 
>> Hi,
>> is it a problem that the device class for all my disks is SSD even though
>> all of these disks are NVMe disks? If it is just a classification for ceph,
>> so I can have pools on SSDs and NVMes separated, I don't care. But maybe
>> ceph handles NVMe disks differently internally?
>> 
> 
> I am not entirely sure if it still holds with newer versions of ceph, but what I 
> understood from previous discussions is that it does not really pay off to use 
> nvme as an osd; they are mostly used for speeding up writes to slower osds 
> (hdd).
> 
> https://yourcmc.ru/wiki/Ceph_performance