Hi Thomas,
The two commands you're looking for are:
ceph osd pg-upmap-items $pg $source_osd $dest_osd
and to remove them
ceph osd rm-pg-upmap-items $pg
You need to pair this with finding which PGs are on your full OSDs. I use
ceph pg dump and grep for the pool number and OSD.
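For example, something along these lines (a rough sketch only; pool 1, pg 1.2f
and osds 210/57 below are placeholders, not values from this thread):

# list the pgs in pool 1 that currently map to the full osd
ceph pg ls-by-osd osd.210 | grep '^1\.'
# or grep the dump for the pool number and osd, as above
ceph pg dump pgs_brief | grep '^1\.' | grep -w 210
# remap one of those pgs onto a less-full osd
ceph osd pg-upmap-items 1.2f 210 57
# and undo it later if needed
ceph osd rm-pg-upmap-items 1.2f

Keep in mind upmap only works once every client speaks luminous or newer
(ceph osd set-require-min-compat-client luminous).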
With respect to w
That's good news, thanks David. What's my way forward on this? Is there a point
release for Luminous coming? Or will I need to push ahead with my Nautilus
upgrade to get it working again? Or build something custom from the Git code?
I don't think a custom build is an option in this case as this is
Thanks David, I've sent it to you directly.
Rich
___
Also, sorry for misspelling your name Bryan :-/
___
Thanks Brian. My failure domain is host and the hosts are very even: bstor01-06
have 24x6TB drives each and bstor08/09 have 24x8TB.
-3   131.35547   host bstor01
-5   131.32516   host bstor02
-7   131.35547   host bstor03
-9
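For anyone wanting to check the same thing on their own cluster, the per-host
picture (weight and utilisation rolled up by failure domain) can be seen with
something like:

ceph osd df tree          # weight, size and %use per host and per osd
ceph osd crush rule dump  # confirms which failure domain the rule actually uses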
I'm finding the same thing. The balancer used to work flawlessly, giving me a
very even distribution with about 1% variance. Some time between 12.2.7 (maybe)
and 12.2.12 it's stopped working.
Here's a small selection of my OSDs showing a 47%-62% spread.
210 hdd 7.27739 1.0 7.28TiB 3.43TiB
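In case it helps with comparing notes, the balancer's state and its own scoring
of the distribution can be checked with the following (assuming the upmap
balancer; adjust if you're using crush-compat mode):

ceph balancer status   # which mode it's in and whether it's active
ceph balancer eval     # score for the current distribution, lower is better
ceph balancer mode upmap
ceph balancer on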
Unfortunately the SCSI reset on this VM happened again last night, so this
hasn't resolved the issue.
Thanks for the suggestion though.
Rich
___
Thanks Robert, I'm trying those settings to see if they make a difference in
our case. It's usually around the weekend that we have issues, so I should have
some idea by next week.
___