[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-10 Thread Christian Wuerdig
On Mon, 9 Oct 2023 at 14:24, Anthony D'Atri wrote:
>
> > AFAIK the standing recommendation for all flash setups is to prefer fewer
> > but faster cores
>
> Hrm, I think this might depend on what you’re solving for. This is the
> conventional wisdom for MDS for sure. My sense is that OSDs can

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-10 Thread Gustavo Fahnle
To: Gustavo Fahnle
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Hardware recommendations for a Ceph cluster

> Currently, I have an OpenStack installation with a Ceph cluster consisting of
> 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second,
> independent Ceph clu

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-08 Thread Anthony D'Atri
> AFAIK the standing recommendation for all flash setups is to prefer fewer
> but faster cores

Hrm, I think this might depend on what you’re solving for. This is the
conventional wisdom for MDS for sure. My sense is that OSDs can use multiple
cores fairly well, so I might look at the cores
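
As a rough illustration of the multi-core point: each OSD shards its op queue, and each shard is served by its own worker threads, so a single flash OSD already spreads work across several cores. A minimal way to inspect those knobs on a running cluster (a sketch, assuming a recent release with the ceph CLI available; defaults vary by version and device class):

    # Op-queue sharding for SSD-class OSDs; shards x threads roughly bounds
    # how many cores one busy OSD can keep occupied.
    ceph config get osd osd_op_num_shards_ssd
    ceph config get osd osd_op_num_threads_per_shard_ssd

Multiplying the two gives a rough floor on per-OSD core usage, which is why raw per-core speed alone isn't the whole story.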

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-08 Thread Christian Wuerdig
AFAIK the standing recommendation for all flash setups is to prefer fewer but faster cores, so something like a 75F3 might be yielding better latency. Plus you probably want to experiment with partitioning the NVMEs and running multiple OSDs per drive - either 2 or 4.

On Sat, 7 Oct 2023 at 08:23,
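
If you want to try the multiple-OSDs-per-NVMe layout, both common deployment paths expose it directly. A sketch, with the device paths, service_id, and host_pattern as placeholders rather than anything taken from this thread:

    # ceph-volume: split each listed NVMe into two OSDs
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1 /dev/nvme1n1

    # roughly equivalent cephadm OSD service spec
    # (apply with: ceph orch apply -i osd-spec.yaml)
    service_type: osd
    service_id: nvme-split          # placeholder name
    placement:
      host_pattern: '*'             # narrow to the intended hosts
    spec:
      data_devices:
        rotational: 0               # match flash devices only
      osds_per_device: 2            # benchmark 2 vs 4 before settling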

[ceph-users] Re: Hardware recommendations for a Ceph cluster

2023-10-06 Thread Anthony D'Atri
> Currently, I have an OpenStack installation with a Ceph cluster consisting of
> 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second,
> independent Ceph cluster to provide faster disks for OpenStack VMs.

Indeed, I know from experience that LFF spinners don't cut it
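
For the OpenStack side of that plan, a second all-flash cluster is usually surfaced as an additional Cinder RBD backend so volumes can be placed on the fast tier explicitly. A sketch of the relevant cinder.conf stanzas, with section names, pool names, and the flash-cluster.conf path as illustrative placeholders:

    [DEFAULT]
    enabled_backends = rbd-hdd,rbd-flash

    [rbd-hdd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-hdd
    rbd_ceph_conf = /etc/ceph/ceph.conf            # existing spinner cluster
    rbd_pool = volumes
    rbd_user = cinder

    [rbd-flash]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = rbd-flash
    rbd_ceph_conf = /etc/ceph/flash-cluster.conf   # second, independent cluster
    rbd_pool = volumes
    rbd_user = cinder

A volume type keyed to each volume_backend_name then lets VMs request the fast tier per volume.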