[ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread SOLTECSIS - Victor Rodriguez Cortes
Hello,

I have upgraded from v12.2.1 to v12.2.2 and now a warning shows when
running "ceph status":

---
# ceph status
  cluster:
    id:
    health: HEALTH_WARN
    too many PGs per OSD (208 > max 200)
---
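
For reference, the number in that warning is simply the total count of PG
replicas divided by the number of OSDs. A minimal sketch in Python, using
made-up pool figures rather than anything from this cluster:

```python
# Sketch of how Ceph derives the "PGs per OSD" figure in the warning.
# The pool numbers below are illustrative, not taken from this cluster.

def pgs_per_osd(pools, num_osds):
    """Each PG counts once per replica: a pool with pg_num PGs and
    replica size s contributes pg_num * s PG instances overall."""
    total_instances = sum(pg_num * size for pg_num, size in pools)
    return total_instances / num_osds

# Three hypothetical replicated pools on 12 OSDs:
pools = [(512, 3), (256, 3), (64, 3)]  # (pg_num, size)
print(pgs_per_osd(pools, 12))  # -> 208.0, which would trip the >200 warning
```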

I'm OK with the number of PGs, so I'm trying to raise the warning
threshold. I've tried adding this to /etc/ceph/ceph.conf and restarting
the services/servers:

---
[global]
mon_pg_warn_max_per_osd = 300
---


I've also tried to inject the config into the running daemons with:

---
ceph tell mon.* injectargs "--mon_pg_warn_max_per_osd 300"
---

But I'm getting "Error EINVAL: injectargs: failed to parse arguments:
--mon_pg_warn_max_per_osd,300" messages and I'm still getting the
HEALTH_WARN message in the status command.

How can I increase mon_pg_warn_max_per_osd?

Thank you!



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread Wido den Hollander

> On 4 December 2017 at 10:59, SOLTECSIS - Victor Rodriguez Cortes wrote:
> 
> 
> Hello,
> 
> I have upgraded from v12.2.1 to v12.2.2 and now a warning shows using
> "ceph status":
> 
> ---
> # ceph status
>   cluster:
> id:
> health: HEALTH_WARN
> too many PGs per OSD (208 > max 200)
> ---
> 
> I'm ok with the amount of PGs, so I'm trying to increase the max PGs.

Why are you OK with this? A high amount of PGs can cause serious peering 
issues. OSDs might eat up a lot of memory and CPU after a reboot or such.

Wido



Re: [ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread SOLTECSIS - Victor Rodriguez Cortes

> Why are you OK with this? A high amount of PGs can cause serious peering 
> issues. OSDs might eat up a lot of memory and CPU after a reboot or such.
>
> Wido

Mainly because there was no warning at all in v12.2.1 and it just
appeared after upgrading to v12.2.2. Besides, it's not a "too high"
number of PGs for this environment, and no CPU/peering issues have been
detected yet.

I'll plan a way to create new OSDs/a new CephFS and move files over, but
in the meantime I would like to just increase that variable, which is
supposed to be supported and easy.

Thanks



Re: [ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread Fabian Grünbichler
On Mon, Dec 04, 2017 at 11:21:42AM +0100, SOLTECSIS - Victor Rodriguez Cortes 
wrote:
> 
> > Why are you OK with this? A high amount of PGs can cause serious peering 
> > issues. OSDs might eat up a lot of memory and CPU after a reboot or such.
> >
> > Wido
> 
> Mainly because there was no warning at all in v12.2.1 and it just
> appeared after upgrading to v12.2.2. Besides, it's not a "too high"
> number of PGs for this environment, and no CPU/peering issues have been
> detected yet.
> 
> I'll plan a way to create new OSDs/a new CephFS and move files over, but
> in the meantime I would like to just increase that variable, which is
> supposed to be supported and easy.
> 
> Thanks

The option is now called 'mon_max_pg_per_osd'.

This was originally slated for v12.2.1, where it was erroneously
mentioned in the release notes[1] despite not being part of that
release (I remember asking for updated/fixed release notes after 12.2.1;
it seems that never happened?). It has now landed as part of v12.2.2,
but is not mentioned at all in the release notes[2]...

1: http://docs.ceph.com/docs/master/release-notes/#v12-2-1-luminous
2: http://docs.ceph.com/docs/master/release-notes/#v12-2-2-luminous
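
Given the rename, something along these lines should do it; this is a
sketch following the same conventions as the snippets above, which I have
not verified on a 12.2.2 cluster:

---
# at runtime, on the monitors:
ceph tell mon.* injectargs '--mon_max_pg_per_osd=300'

# persisted across restarts, in /etc/ceph/ceph.conf:
[global]
mon_max_pg_per_osd = 300
---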



Re: [ceph-users] Increasing mon_pg_warn_max_per_osd in v12.2.2

2017-12-04 Thread SOLTECSIS - Victor Rodriguez Cortes

> The option is now called 'mon_max_pg_per_osd'.
>
> This was originally slated for v12.2.1, where it was erroneously
> mentioned in the release notes[1] despite not being part of that
> release (I remember asking for updated/fixed release notes after 12.2.1;
> it seems that never happened?). It has now landed as part of v12.2.2,
> but is not mentioned at all in the release notes[2]...
>
> 1: http://docs.ceph.com/docs/master/release-notes/#v12-2-1-luminous
> 2: http://docs.ceph.com/docs/master/release-notes/#v12-2-2-luminous
>
That explains why I found nothing in the release notes of 12.2.2 :)
Thanks a lot for pointing that out.

Using mon_max_pg_per_osd = 300 does work, and HEALTH is now OK in this
cluster. Anyway, I will move the data to pools with fewer PGs as soon as
possible, because this cluster was supposed to have a few more OSDs than
it will finally have, and the current PG counts are suboptimal.
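
When sizing those new pools, the usual rule of thumb (roughly 100 PG
replicas per OSD, rounded to a power of two) can be sketched like this.
This is an informal calculation, not Ceph's official pgcalc tool, so
sanity-check any result against that:

```python
import math

# Rule-of-thumb PG sizing: target ~100 PG replicas per OSD and round
# the result to the nearest power of two, as pg_num conventionally is.
# An informal sketch, not the official Ceph pgcalc.
def suggest_pg_num(num_osds, replica_size, target_per_osd=100):
    raw = num_osds * target_per_osd / replica_size
    exponent = max(0, round(math.log2(raw)))
    return 2 ** exponent

print(suggest_pg_num(9, 3))   # 9 OSDs, size 3 -> 256
print(suggest_pg_num(12, 3))  # 12 OSDs, size 3 -> 512
```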

Thanks a lot to everyone.

