[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-15 Thread Wesley Dillingham
I have found that I can only reproduce it on clusters built initially on Pacific. My cluster, which was upgraded from Nautilus to Pacific, does not reproduce the issue. My working theory is that it is related to RocksDB sharding:
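One rough way to check that theory, assuming the OSD can be stopped briefly and uses the default data path (the OSD id is a placeholder): clusters deployed on Pacific create sharded RocksDB column families by default, while upgraded clusters keep the legacy layout unless resharded manually.

    # stop the OSD first; ceph-bluestore-tool needs exclusive access to the store
    systemctl stop ceph-osd@0
    ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-0 show-sharding
    systemctl start ceph-osd@0

A Pacific-built OSD should report a sharding spec; an upgraded OSD that was never resharded should not.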

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-14 Thread Wesley Dillingham
I have created https://tracker.ceph.com/issues/56046 regarding the issue I am observing. Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-14 Thread Eugen Block
I found the thread I was referring to [1]. The report was very similar to yours; apparently the balancer causes the "degraded" messages, but the thread was never concluded. Maybe a tracker ticket should be created if one doesn't already exist; I didn't find a ticket related to that in
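A simple way to rule the balancer in or out, using only standard ceph CLI commands (no cluster-specific names assumed):

    ceph balancer status      # see whether it is on and which mode it uses
    ceph balancer off         # pause it while testing crush weight changes
    ceph -s                   # repeat the reweight and watch whether PGs go remapped or degraded

If PGs still show as degraded with the balancer off, the balancer itself can be excluded as the cause.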

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-13 Thread Wesley Dillingham
Thanks for the reply. Regarding "0" vs "0.0", I believe it makes no difference. I will note that it is not only changes to crush weights that induce this situation. Introducing upmaps manually or via the balancer also causes the PGs to become degraded instead of entering the expected remapped state. Respectfully,
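A minimal sketch of the manual-upmap reproduction described above (the PG id and OSD ids are placeholders, and it assumes require-min-compat-client is luminous or newer so upmaps are accepted):

    ceph pg ls | head                 # pick an active+clean PG and note which OSDs hold it
    ceph osd pg-upmap-items 1.0 3 7   # remap one replica of PG 1.0 from osd.3 to osd.7
    ceph pg ls degraded               # expected state: remapped; observed on affected clusters: degraded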

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-13 Thread Eugen Block
I remember someone reporting the same thing but I can’t find the thread right now. I’ll try again tomorrow. Quoting Wesley Dillingham: I have a brand-new 16.2.9 cluster running BlueStore with zero client activity. I am modifying some crush weights to move PGs off a host for testing
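For reference, the kind of crush weight change being described is the standard reweight command; the OSD id and weight here are placeholders:

    ceph osd crush reweight osd.12 0    # drain osd.12; this normally produces remapped/backfilling PGs
    ceph -s                             # on the affected clusters the PGs instead report as degraded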