Dear all,

we just upgraded our cluster from Octopus to Pacific (16.2.10). The
upgrade introduced errors from the pg_autoscaler module:

2022-10-11T14:47:40.421+0200 7f3ec2d03700  0 [pg_autoscaler ERROR root] pool 17 has overlapping roots: {-4, -1}
2022-10-11T14:47:40.423+0200 7f3ec2d03700  0 [pg_autoscaler ERROR root] pool 22 has overlapping roots: {-4, -1}
2022-10-11T14:47:40.423+0200 7f3ec2d03700  0 [pg_autoscaler ERROR root] pool 23 has overlapping roots: {-4, -1}
2022-10-11T14:47:40.427+0200 7f3ec2d03700  0 [pg_autoscaler ERROR root] pool 27 has overlapping roots: {-6, -4, -1}
2022-10-11T14:47:40.428+0200 7f3ec2d03700  0 [pg_autoscaler ERROR root] pool 28 has overlapping roots: {-6, -4, -1}
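
As far as I can tell, the negative numbers are CRUSH bucket IDs; the
per-device-class shadow roots should show up next to the real root when
dumping the tree with shadow entries (output omitted here):

[cephmon1] /root # ceph osd crush tree --show-shadow

On our cluster -1 is the "default" root; I assume -4 and -6 are the
class-specific shadow roots (i.e. something like "default~ssd" /
"default~hdd").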

The autoscaler status is empty:

[cephmon1] /root # ceph osd pool autoscale-status
[cephmon1] /root # 
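
To double-check which rule each pool actually uses, something like this
should do (output omitted here):

[cephmon1] /root # for p in $(ceph osd pool ls); do
>   printf '%s: ' "$p"; ceph osd pool get "$p" crush_rule
> done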


On https://forum.proxmox.com/threads/ceph-overlapping-roots.104199/ I
found something similar:

---
I assume that you have at least one pool that still has the
"replicated_rule" assigned, which does not make a distinction between the
device class of the OSDs.

This is why you see this error. The autoscaler cannot decide how many PGs
the pools need. Make sure that all pools are assigned a rule that limits
them to a device class, and the errors should stop.
---

Indeed, we have a mixed cluster (HDD + SSD): some pools are on HDD only,
some on SSD only, and some (EC as well as replicated) span both, using
rules that don't care about the storage device class (e.g. the default
"replicated_rule"):

[cephmon1] /root # ceph osd crush rule ls
replicated_rule
ssd_only_replicated_rule
hdd_only_replicated_rule
default.rgw.buckets.data.ec42
test.ec42
[cephmon1] /root #
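
If I understand it correctly, "replicated_rule" takes the whole "default"
root (item -1) without restricting to a device class, so every pool using
it overlaps with the class-specific shadow roots used by our other rules.
The dump looks roughly like this (abridged):

[cephmon1] /root # ceph osd crush rule dump replicated_rule
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    ...
    "steps": [
        { "op": "take", "item": -1, "item_name": "default" },
        { "op": "chooseleaf_firstn", "num": 0, "type": "host" },
        { "op": "emit" }
    ]
}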


That worked flawlessly until Octopus. Any idea how to make the autoscaler
work again with this kind of setup? Do I really have to restrict every
pool to a single device class in Pacific in order to get a functional
autoscaler?
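
If that is really the intended way in Pacific, I assume the fix would be
to move every pool off "replicated_rule" onto one of our per-class rules
(pool name below is just an example):

[cephmon1] /root # ceph osd pool set mypool crush_rule hdd_only_replicated_rule

and, where no suitable per-class rule exists yet, to create one first, e.g.:

[cephmon1] /root # ceph osd crush rule create-replicated replicated_hdd default host hdd

That seems like a lot of data movement just to keep the autoscaler happy,
though.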

Thanks,
Andreas
-- 
| Andreas Haupt            | E-Mail: andreas.ha...@desy.de
|  DESY Zeuthen            | WWW:    http://www-zeuthen.desy.de/~ahaupt
|  Platanenallee 6         | Phone:  +49/33762/7-7359
|  D-15738 Zeuthen         | Fax:    +49/33762/7-7216

