Hi Mike,

On 30.05.2017 01:49, Mike Cave wrote:
>
> Greetings All,
>
>  
>
> I recently started working with our ceph cluster here and have been
> reading about weighting.
>
>  
>
> It appears the current best practice is to weight each OSD according
> to its size (3.64 for a 4TB drive, 7.45 for an 8TB drive, etc.).
>
>  
>
> As it turns out, it was not configured this way at all; all of the
> OSDs are weighted at 1.
>
>  
>
> So my questions are:
>
>  
>
> Can we re-weight the entire cluster to 3.64 and then re-weight the 8TB
> drives afterwards at a slow rate which won’t impact performance?
>
> If we do an entire re-weight will we have any issues?
>
I would set osd_max_backfills and osd_recovery_max_active to 1 (with
injectargs) before starting the reweight, to minimize the impact on
running clients.
After setting everything to 3.64 you can raise the weight for the 8TB
drives one by one.
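Roughly like this - the OSD IDs below are only examples, use your own:

  # set a 4TB OSD to its size in TiB
  ceph osd crush reweight osd.0 3.64
  # later, raise one 8TB OSD at a time and wait for HEALTH_OK before the next
  ceph osd crush reweight osd.12 7.45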
Depending on your cluster/OSDs, it's perhaps a good idea to adjust the
primary affinity for the 8TB drives during the reweight - otherwise you
will get more reads from the (slower) 8TB drives.
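Something like this (the value range is 0.0-1.0, and 0.5 is just an example;
on older releases you may need "mon osd allow primary affinity = true"
before the command is accepted):

  # make the 8TB OSD less likely to act as primary, so fewer reads land on it
  ceph osd primary-affinity osd.12 0.5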


> Would it be better to just reweight the 8TB drives to 2 gradually?
>
I would go for 3.64 - then you have the right settings in place when you
init further OSDs with ceph-deploy.

Udo

