I tried setting this:

ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .92"

but it doesn't seem to be working. Or is the mon busy and the command still queued?
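A possible explanation, sketched below: on the Ceph releases current at the time of this thread (pre-Luminous), the nearfull threshold is also stored in the PG map, so injecting `mon_osd_nearfull_ratio` into the mons alone may not clear the warnings. The mon name `mon.a` is a placeholder; substitute one of your own monitor IDs.

```shell
# Check whether the injected value actually took effect on a monitor
# ("mon.a" is a placeholder -- use one of your own mon names):
ceph daemon mon.a config show | grep mon_osd_nearfull_ratio

# On pre-Luminous releases the nearfull threshold also lives in the
# PG map, so injectargs alone is not enough:
ceph pg set_nearfull_ratio 0.92

# On Luminous and later the equivalent command is:
# ceph osd set-nearfull-ratio 0.92
```

Note this only raises the warning threshold; the safety margin exists because OSDs refuse writes entirely at the full ratio, so 0.92 should be a temporary measure while you rebalance or add capacity.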

---
osd.2 is near full at 85%
osd.4 is near full at 85%
osd.5 is near full at 85%
osd.6 is near full at 85%
osd.7 is near full at 86%
osd.8 is near full at 87%
osd.9 is near full at 85%
osd.11 is near full at 85%
osd.12 is near full at 85%
osd.16 is near full at 85%
osd.17 is near full at 85%
osd.20 is near full at 85%
osd.22 is near full at 85%
---
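For what it's worth, the `ceph df` output quoted below reports 81.71% raw used cluster-wide, while individual OSDs are warning at 85-88%. That gap is the classic symptom of uneven PG distribution, which the pg_num bump is meant to fix. A quick sketch of the arithmetic, using the numbers from the thread:

```python
# Sizes in GiB, taken from the "ceph df" output quoted in this thread.
size_g = 100553
raw_used_g = 82161

raw_used_pct = raw_used_g / size_g * 100
print(f"cluster-wide raw used: {raw_used_pct:.2f}%")  # matches ceph df's 81.71

# Per-OSD utilization from the warnings above: several OSDs sit
# well above the cluster average, which is what triggers nearfull
# even though the cluster as a whole still has headroom.
nearfull_pct = [85, 85, 85, 85, 86, 87, 85, 85, 85, 85, 85, 85, 85]
print(f"worst OSD: {max(nearfull_pct)}% vs cluster average {raw_used_pct:.1f}%")
```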

On Fri, Feb 19, 2016 at 9:30 AM, Vlad Blando <vbla...@morphlabs.com> wrote:

> I changed my volumes pool from 300 to 512 PGs to even out the distribution;
> right now it is backfilling and remapping, and I can see it making progress.
>
> ---
> osd.2 is near full at 85%
> osd.4 is near full at 85%
> osd.5 is near full at 85%
> osd.6 is near full at 85%
> osd.7 is near full at 86%
> osd.8 is near full at 88%
> osd.9 is near full at 85%
> osd.11 is near full at 85%
> osd.12 is near full at 86%
> osd.16 is near full at 86%
> osd.17 is near full at 85%
> osd.20 is near full at 85%
> osd.23 is near full at 86%
> ---
>
> We will be adding a new node to the cluster after this.
>
> Another question: I'd like to temporarily raise the near-full OSD warning
> threshold from 85% to 90%. I can't remember the command.
>
>
> @don
> ceph df
> ---
> [root@controller-node ~]# ceph df
> GLOBAL:
>     SIZE        AVAIL      RAW USED     %RAW USED
>     100553G     18391G     82161G       81.71
> POOLS:
>     NAME        ID     USED       %USED     OBJECTS
>     images      4      8927G      8.88      1143014
>     volumes     5      18374G     18.27     4721934
> [root@controller-node ~]#
> ---
>
>
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
