Hi,
do you want to hear the truth from real experience?
Or the myth?
The truth is that:
- HDDs are too slow for Ceph: the first time you need to do a rebalance or a
similar recovery operation you will find out (you can see the gap yourself
with the quick sync-write test below this list).
- If you want to use HDDs, build a RAID with your controller, use the
controller's BBU-protected cache (do not rely on the per-disk HDD cache), and
present the RAID volume to Ceph as a single disk.
- Enabling the write cache of a single HDD (which is not battery protected) is
far worse than enabling the controller cache (which I assume is always
protected by a BBU).
- In any case, the best thing for Ceph is to use NVMe disks.
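
As a quick way to see the effect: a minimal Python sketch (not a Ceph tool;
the file path is just a placeholder) that measures synchronous 4 KiB write
IOPS, which is roughly what the OSD WAL/journal asks of a disk on every
client write. On an HDD without a protected cache expect a few hundred IOPS
at best; a decent SSD/NVMe, or a BBU-backed controller cache in front of
HDDs, will give thousands.

#!/usr/bin/env python3
# Rough probe: synchronous 4 KiB write IOPS on one disk.
import os, time

TEST_FILE = "/mnt/test-disk/iops-probe"  # placeholder: a file on the disk under test
BLOCK = b"\0" * 4096
SECONDS = 10

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
count = 0
start = time.monotonic()
while time.monotonic() - start < SECONDS:
    os.write(fd, BLOCK)            # O_DSYNC: each write waits for a durable ack
    os.lseek(fd, 0, os.SEEK_SET)   # rewrite the same block; only latency matters here
    count += 1
os.close(fd)
print(f"~{count / SECONDS:.0f} sync write IOPS")

(fio with sync=1, direct=1 and a 4k block size gives a more rigorous version
of the same measurement.)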

Mario

On Thu, Apr 6, 2023 at 13:40 Marco Gaiarin <g...@lilliput.linux.it> wrote:

>
> We are testing an experimental Ceph cluster with the server and controller
> named in the subject.
>
> The controller does not have an HBA mode, only a 'NonRAID' mode, which is a
> sort of 'auto RAID0' configuration.
>
> We are using SATA SSD disks (MICRON MTFDDAK480TDT) that perform very well,
> and SAS HDD disks (SEAGATE ST8000NM014A) that perform very badly
> (in particular, very low IOPS).
>
>
> Are there any hints for disk/controller configuration/optimization?
>
>
> Thanks.
>
> --
>   I believe in chemistry as much as Julius Caesar believed in chance...
>   it's fine by me as long as it doesn't concern me :)   (Emanuele Pucciarelli)
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
