Sent too quickly — also note that consumer/client SSDs often don’t have
power-loss protection, so if your whole cluster were to lose power at the wrong
time, you might lose data.
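If you want to see whether a given drive has real power-loss protection, the
usual tell is O_SYNC write performance: a capacitor-backed drive can ack sync
writes from its RAM cache, while a consumer drive has to flush each one to
flash. A rough sketch with fio (destructive if pointed at a raw device, so use
a scratch disk; /dev/sdX is a placeholder):

    fio --name=plp-check --filename=/dev/sdX \
        --ioengine=psync --direct=1 --sync=1 \
        --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=30 --time_based

Thousands of IOPS here usually means PLP; a few hundred or fewer usually means
every write is going to flash, which is also roughly how the drive will behave
under BlueStore's WAL.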
> On Nov 28, 2023, at 8:16 PM, Anthony D'Atri wrote:
>
>
>>
>> 1) They’re client aka desktop SSDs, not “enterprise”
>> 2) They’re a partition of a larger OSD shared with other purposes
>
> Yup. They're a mix of SATA SSDs and NVMes, but everything is
> consumer-grade. They're only 10% full on average and I'm not
> super-concerned with performance.
On Tue, Nov 28, 2023 at 6:25 PM Anthony D'Atri wrote:
> Looks like one 100GB SSD OSD per host? This is AIUI the screaming minimum
> size for an OSD. With WAL, DB, cluster maps, and other overhead there
> doesn’t end up being much space left for payload data. On larger OSDs the
> overhead is proportionally much smaller.
>> Very small and/or non-uniform clusters can be corner cases for many things,
>> especially if they don’t have enough PGs. What is your failure domain —
>> host or OSD?
>
> Failure domain is host, and PG number should be fairly reasonable.
Your host buckets do vary in weight by roughly a factor of two. They naturally
will get PGs in proportion to their CRUSH weights, so the heavier hosts end up
holding roughly twice the data of the lighter ones.
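You can watch both effects in one place; a sketch of what I'd look at (column
names as of recent releases):

    ceph osd df tree
    # WEIGHT - CRUSH weight, by default the device size in TiB
    # PGS    - placement groups on each OSD; tracks weight, so the
    #          heavier host buckets carry proportionally more PGs
    # META   - BlueStore DB/WAL and other overhead; on a 100GB OSD
    #          this is a visible fraction of SIZE, on a large OSD
    #          it's noise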
On Tue, Nov 28, 2023 at 3:52 PM Anthony D'Atri wrote:
>
> Very small and/or non-uniform clusters can be corner cases for many things,
> especially if they don’t have enough PGs. What is your failure domain — host
> or OSD?
Failure domain is host, and PG number should be fairly reasonable.
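For anyone checking their own numbers, the usual places to look (assuming a
release with the PG autoscaler):

    ceph osd pool autoscale-status   # per-pool PG totals and what the autoscaler would pick
    ceph osd df                      # PGS column; ~100 PGs per OSD is the common rule of thumb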
>
It's a complicated topic with no single answer; it varies from cluster to
cluster. You have a good lay of the land.
I just wanted to mention that the correct "foundation" for equally utilized
OSDs within a cluster relies on two important factors:
- Symmetry of disk/OSD quantity and
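Whatever the hardware symmetry, the upmap balancer does the last-mile
evening-out on current releases; worth confirming it's actually running:

    ceph balancer status
    ceph balancer mode upmap   # if not already the default
    ceph balancer on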
>
> I'm fairly new to Ceph and running Rook on a fairly small cluster
> (half a dozen nodes, about 15 OSDs).
Very small and/or non-uniform clusters can be corner cases for many things,
especially if they don’t have enough PGs. What is your failure domain — host
or OSD?
Are your OSDs sized uniformly?
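A quick way to answer that, since CRUSH weight defaults to raw capacity in
TiB:

    ceph osd tree   # WEIGHT column per OSD; uniform drives show uniform weights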