Sent too quickly — also note that consumer/client SSDs often don’t have 
power-loss protection, so if your whole cluster were to lose power at the 
wrong time, you might lose data.
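
If you’re not sure whether a given model has PLP, the datasheet is the
authoritative source, but you can at least check for a volatile write
cache from the OS. A rough sketch, assuming smartmontools and nvme-cli
are installed (device names below are placeholders):

   # SATA: report whether the drive's volatile write cache is enabled
   hdparm -W /dev/sdX

   # NVMe: the VWC field says whether a volatile write cache is present
   nvme id-ctrl /dev/nvme0 -H | grep -i 'volatile write cache'

Drives with PLP usually advertise it explicitly in the spec sheet
(e.g. Intel/Solidigm call it "enhanced power loss data protection").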

> On Nov 28, 2023, at 8:16 PM, Anthony D'Atri <a...@dreamsnake.net> wrote:
> 
> 
>>> 
>>> 1) They’re client aka desktop SSDs, not “enterprise”
>>> 2) They’re a partition of a larger drive shared with other purposes
>> 
>> Yup.  They're a mix of SATA SSDs and NVMes, but everything is
>> consumer-grade.  They're only 10% full on average and I'm not
>> super-concerned with performance.  If they did get full I'd allocate
>> more space for them.  Performance is more than adequate for the very
>> light loads they have.
> 
> Fair enough.  We sometimes see people bringing a toothpick to a gun fight and 
> expecting a different result, so I had to ask.  Just keep an eye on their 
> endurance burn.
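> 
> A quick way to keep an eye on it is Ceph's own device health
> monitoring (the device id below is a placeholder; get real ones from
> the first command):
> 
>    ceph device ls
>    ceph device get-health-metrics <devid>
> 
> Or straight from SMART; NVMe drives report a "Percentage Used"
> counter, while SATA SSDs expose vendor-specific wear attributes:
> 
>    smartctl -A /dev/nvme0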
> 
>> 
>> 
>> It is interesting because Quincy had no issues with the autoscaler
>> with the exact same cluster config.  It might be a Rook issue, or it
>> might just be because so many PGs are remapped.  I'll take another
>> look at that once it reaches more of a steady state.
>> 
>> In any case, if the balancer is designed more for equal-sized OSDs I
>> can always just play with reweights to balance things.
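>> 
>> Something like this, if it comes to that (the OSD id and weights are
>> purely illustrative):
>> 
>>   # override weight between 0 and 1, applied on top of the CRUSH weight
>>   ceph osd reweight 12 0.9
>> 
>>   # or change the CRUSH weight itself (by convention the size in TiB)
>>   ceph osd crush reweight osd.12 1.6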
> 
> Look into the JJ balancer; I’ve read good things about it.
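> 
> I believe that's the one at https://github.com/TheJJ/ceph-balancer.
> The rough flow is to generate upmap commands, review them, then apply
> — something like the below, though check its README for the exact
> flags:
> 
>    ./placementoptimizer.py -v balance | tee /tmp/balance-upmaps
>    bash /tmp/balance-upmaps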
> 
>> 
>> --
>> Rich
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
