On Mon, 9 Oct 2023 at 14:24, Anthony D'Atri wrote:
>
>
> > AFAIK the standing recommendation for all flash setups is to prefer fewer
> > but faster cores
>
> Hrm, I think this might depend on what you’re solving for. This is the
> conventional wisdom for MDS for sure. My sense is that OSDs can use multiple
> cores fairly well, so I might look at the cores

Subject: Re: [ceph-users] Hardware recommendations for a Ceph cluster
Cc: ceph-users@ceph.io
AFAIK the standing recommendation for all flash setups is to prefer fewer
but faster cores, so something like a 75F3 might yield better latency.
Plus you probably want to experiment with partitioning the NVMe drives and
running multiple OSDs per drive - either 2 or 4.
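For the multiple-OSDs-per-drive suggestion above, the partitioning doesn't have to be done by hand: a cephadm OSD service spec can ask ceph-volume to split each device. A minimal sketch, assuming a cephadm-managed cluster — the service_id, host pattern, and filename are placeholders:

```yaml
# Hypothetical spec file, e.g. nvme-osds.yaml;
# apply with: ceph orch apply -i nvme-osds.yaml
service_type: osd
service_id: nvme_two_per_drive   # placeholder name
placement:
  host_pattern: '*'              # narrow this to the NVMe hosts
spec:
  data_devices:
    rotational: 0                # match only non-rotational (flash) devices
  osds_per_device: 2             # carve each matched drive into 2 OSDs
```

The same split can be done manually on a host with ceph-volume, e.g.
`ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1`.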
On Sat, 7 Oct 2023 at 08:23, … wrote:
> Currently, I have an OpenStack installation with a Ceph cluster consisting of
> 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second,
> independent Ceph cluster to provide faster disks for OpenStack VMs.
Indeed, I know from experience that LFF spinners don't cut it.