[ceph-users] Re: Forcibly move PGs from full to empty OSD

2020-03-09 Thread Rich Bade
Hi Thomas, The two commands you're looking for are: ceph osd pg-upmap-items $pg $source_osd $dest_osd and, to remove them, ceph osd rm-pg-upmap-items $pg. You need to pair this with finding which PGs are on your full OSDs. I use ceph pg dump and grep for the pool number and OSD. With respect to w
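A minimal sketch of the procedure Rich describes, assuming a hypothetical pool id 2, a full OSD 210 and an emptier OSD 15 (all ids are placeholders, not taken from the thread):

    # list PGs in pool 2 that currently have a replica on OSD 210
    # (loose text match on the OSD id; refine as needed)
    ceph pg dump pgs_brief | awk '$1 ~ /^2\./ && $0 ~ /210/'
    # pin PG 2.1f so its replica on OSD 210 is moved to OSD 15
    ceph osd pg-upmap-items 2.1f 210 15
    # later, drop the explicit mapping and let normal CRUSH placement apply again
    ceph osd rm-pg-upmap-items 2.1f

Note that upmap entries require all clients to support the luminous feature set (ceph osd set-require-min-compat-client luminous).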

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
That's good news, thanks David. What's my way forward on this? Is there a point release for Luminous coming? Or will I need to push ahead with my Nautilus upgrade to get it working again? Or build something custom from the Git code? I don't think a custom build is an option in this case as this is

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Thanks David, I've sent it to you directly. Rich

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Also, sorry for misspelling your name Bryan :-/

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
Thanks Brian. My failure domain is host and the hosts are very even. 01-06 have 24x6TB and 08/09 24x8TB.
-3  131.35547  host bstor01
-5  131.32516  host bstor02
-7  131.35547  host bstor03
-9
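Host weights like these can be compared straight from the CRUSH tree (a sketch, not necessarily the exact command used in the thread):

    # show only host-level buckets and their CRUSH weights
    ceph osd tree | grep host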

[ceph-users] Re: RESEND: Re: PG Balancer Upmap mode not working

2019-12-10 Thread Rich Bade
I'm finding the same thing. The balancer used to work flawlessly, giving me a very even distribution with about 1% variance. Some time between 12.2.7 (maybe) and 12.2.12 it stopped working. Here's a small selection of my OSDs showing a 47%-62% spread.
210 hdd 7.27739 1.0 7.28TiB 3.43TiB
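One way to see that spread at a glance (a sketch; the column index assumes the Luminous-era ceph osd df layout where %USE is the eighth field):

    # print ID and %USE for each OSD, sorted by utilisation;
    # requiring a numeric first field skips the header and summary rows
    ceph osd df | awk '$1 ~ /^[0-9]+$/ {print $1, $8}' | sort -nk2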

[ceph-users] Re: Small HDD cluster, switch from Bluestore to Filestore

2019-08-15 Thread Rich Bade
Unfortunately the SCSI reset on this VM happened again last night, so this hasn't resolved the issue. Thanks for the suggestion though. Rich

[ceph-users] Re: Small HDD cluster, switch from Bluestore to Filestore

2019-08-14 Thread Rich Bade
Thanks Robert, I'm trying those settings to see if they make a difference for our case. It's usually around the weekend that we have issues, so I should have some idea by next week.