[ceph-users] Re: Quincy full osd(s)

2022-07-24 Thread Wesley Dillingham
Can you send along the output of "ceph osd pool ls detail" and "ceph health
detail"?

On Sun, Jul 24, 2022, 1:00 AM Nigel Williams 
wrote:

> With current 17.2.1 (cephadm) I am seeing an unusual HEALTH_ERR.
> While adding files to a new, empty cluster (replica 3, CRUSH rule by host),
> three OSDs became 95% full, and reweighting them to any value does not
> cause backfill to start.
>
> If I reweight the three over-full OSDs to 0.0 I get a large number of
> misplaced objects but no subsequent data movement, and the cluster remains
> at HEALTH_WARN "Low space hindering backfill". The cluster has 1200 OSDs
> (all except those three are close to empty).
>
> Balancer is on, autoscale is on for pool.
>
> I feel I am overlooking something obvious; if anyone can suggest what it
> might be, that would be appreciated. Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Quincy full osd(s)

2022-07-25 Thread Nigel Williams
Hi Wesley, thank you for the follow up.

Anthony D'Atri kindly helped me out with some guidance and advice, and we
believe the problem is now resolved.

This was a brand new install of a Quincy cluster, and I made the mistake of
presuming that autoscale would adjust the PGs as required; however, it never
kicked into action. I then neglected to check that the pools were configured
correctly, and was surprised that after copying 50TB to the cluster it was
full. I went down the wrong rabbit holes trying to diagnose it; Anthony got
me back on track. Thanks Anthony!
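
For anyone hitting the same thing, a rough sketch of how one might check and,
if need be, override the autoscaler (the pool name "mypool" and pg_num 256
are placeholders, not our actual values):

    # should print one row per pool with current and target PG counts;
    # in our case it produced no output at all (see below)
    ceph osd pool autoscale-status

    # confirm the autoscaler is enabled on the data pool
    ceph osd pool set mypool pg_autoscale_mode on

    # or raise pg_num by hand rather than waiting for the autoscaler
    ceph osd pool set mypool pg_num 256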

During this process Anthony also identified a documentation/setup issue and
will be doing some PRs to improve the documentation. The gist of it is that
the device class needs to be set on the .mgr/.nfs pools in order for
autoscale-status to produce any output.
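
A sketch of the kind of change involved, in case it helps anyone else (the
rule name "replicated_hdd" and the hdd device class are placeholders; adjust
for your hardware):

    # create a replicated CRUSH rule restricted to one device class
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # point the built-in pools at the device-class-aware rule
    ceph osd pool set .mgr crush_rule replicated_hdd
    ceph osd pool set .nfs crush_rule replicated_hdd

    # autoscale-status should now report on every pool
    ceph osd pool autoscale-status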
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io