[ceph-users] Re: Best practice and expected benefits of using separate WAL and DB devices with Bluestore

2024-04-19 Thread Simon Kepp
others argue for a best practice of a maximum of 3 OSDs per DB/WAL NVMe, and I have myself adopted that as a standard. I run hosts with 12 HDD OSDs and 4 DB/WAL NVMes, with FAILURE_DOMAIN=host. Best Regards, Simon Kepp, Founder, Kepp Technologies. On Fri, Apr 19, 2024 at 2:07 PM Ondřej Kukla wrote: > He
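As an aside (not from the original thread): on a host with that 12 HDD / 4 NVMe layout, a ceph-volume batch call along these lines would split the DB/WAL volumes evenly, ending up with 3 OSD DBs per NVMe. Device names are placeholders and exact behaviour depends on the Ceph release:

  # create 12 HDD OSDs with their DB/WAL on 4 shared NVMes (3 per NVMe)
  ceph-volume lvm batch --bluestore \
      /dev/sd{a,b,c,d,e,f,g,h,i,j,k,l} \
      --db-devices /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1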

[ceph-users] Re: How to use hardware

2023-11-17 Thread Simon Kepp
Going with such "fat nodes" is doable, but will significantly limit performance, reliability and availability, compared to distributing the same OSDs across more, thinner nodes. Best regards, Simon Kepp Founder/CEO Kepp Technologies On Fri, Nov 17, 2023 at 10:59 AM Albert Shih wrote: >
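To put illustrative numbers on that (my own example, not from the original post): with failure_domain=host, 60 OSDs packed into 5 fat hosts means a single host failure takes out 20% of the cluster's OSDs and leaves only 4 hosts to absorb the recovery traffic, whereas the same 60 OSDs spread across 15 thinner hosts lose under 7% per host failure and fan the recovery load out over 14 remaining hosts.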

[ceph-users] Re: How to copy an OSD from one failing disk to another one

2020-12-08 Thread Simon Kepp
near the recommended/default levels of redundancy, it doesn't really matter in which order you do it. Best regards, Simon Kepp, Kepp Technologies. On Tue, Dec 8, 2020 at 8:59 PM Konstantin Shalygin wrote: > Destroy this OSD, replace disk, deploy OSD. > > > k > > Sent from my iPh
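For anyone finding this thread later, a minimal sketch of that "destroy, replace, deploy" sequence, assuming the OSD id is 7 and the new disk shows up as /dev/sdx (both placeholders; exact flags can vary between Ceph releases):

  ceph osd out osd.7                          # optional: drain the failing disk first if it is still readable
  ceph osd destroy 7 --yes-i-really-mean-it   # mark the OSD destroyed but keep its id for reuse
  ceph-volume lvm zap /dev/sdx --destroy      # wipe the replacement disk after swapping it in
  ceph-volume lvm create --osd-id 7 --data /dev/sdx   # redeploy the OSD under its old id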