[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2022-04-19 Thread Kai Stian Olstad
On 18.04.2022 21:35, Wesley Dillingham wrote: If you mark an osd "out" but not down / you don't stop the daemon, do the PGs go remapped or do they go degraded then as well? First I made sure the balancer was active, then I marked one osd "out" ("ceph osd out 34") and checked status every 2
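
A minimal sketch of the test procedure described above (the 2-second interval is an assumption based on the truncated text; osd.34 is the OSD Kai names):

  # mark the OSD out without stopping its daemon
  ceph osd out 34
  # poll cluster status while the data moves
  watch -n 2 ceph -s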

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2022-04-18 Thread Wesley Dillingham
If you mark an osd "out" but not down / you don't stop the daemon, do the PGs go remapped or do they go degraded then as well? Respectfully, *Wes Dillingham* w...@wesdillingham.com LinkedIn On Thu, Apr 14, 2022 at 5:15 AM Kai Stian Olstad wrote: >
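
The two scenarios Wes contrasts can be reproduced with standard commands; a sketch (the OSD id and systemd unit name are illustrative, and the unit name differs on cephadm deployments):

  # "out" but not down: the daemon keeps serving data, so PGs are
  # expected to show as remapped rather than degraded
  ceph osd out 12
  # down: stopping the daemon marks the OSD down and PGs go degraded
  # until recovery completes
  systemctl stop ceph-osd@12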

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2022-04-14 Thread Kai Stian Olstad
On 29.03.2022 14:56, Sandor Zeestraten wrote: I was wondering if you ever found out anything more about this issue. Unfortunately no, so I turned it off. I am running into similar degradation issues while running rados bench on a new 16.2.6 cluster. In our case it's with a replicated pool,
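
For reference, a minimal rados bench invocation of the kind discussed in the thread (pool name and runtime are placeholders):

  # write for 60 seconds into a test pool, keeping the objects
  rados bench -p testpool 60 write --no-cleanup
  # remove the benchmark objects afterwards
  rados -p testpool cleanup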

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-22 Thread Kai Stian Olstad
On 21.09.2021 09:11, Kobi Ginon wrote: For sure the balancer affects the status. Of course, but setting several PGs to degraded is something else. I doubt that your customers will be writing so many objects at the same rate as the test. I only need 2 hosts running rados bench to get several
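
Reproducing Kai's two-host setup just means starting two writers against the same pool; a sketch (pool and run names are illustrative):

  # on host 1
  rados bench -p testpool 120 write --run-name host1 --no-cleanup
  # on host 2, concurrently
  rados bench -p testpool 120 write --run-name host2 --no-cleanup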

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-20 Thread Kai Stian Olstad
On 17.09.2021 16:10, Eugen Block wrote: Since I'm trying to test different erasure coding plugins and techniques I don't want the balancer active. So I tried setting it to none as Eugen suggested, and to my surprise I did not get any degraded messages at all, and the cluster was in
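
Testing different erasure coding plugins and techniques maps to erasure-code profiles; a hedged sketch (the profile name, pool name, and parameters are illustrative, not from the thread):

  # define a profile with an explicit plugin/technique to test
  ceph osd erasure-code-profile set testprofile \
      k=4 m=2 plugin=jerasure technique=reed_sol_van
  # create an EC pool that uses it
  ceph osd pool create ecpool 32 32 erasure testprofile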

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-17 Thread Eugen Block
Since I'm trying to test different erasure coding plugins and techniques I don't want the balancer active. So I tried setting it to none as Eugen suggested, and to my surprise I did not get any degraded messages at all, and the cluster was in HEALTH_OK the whole time. Interesting, maybe

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-17 Thread Kai Stian Olstad
On 16.09.2021 15:51, Josh Baergen wrote: I assume it's the balancer module. If you write lots of data quickly into the cluster, the distribution can vary and the balancer will try to even out the placement. The balancer won't cause degradation, only misplaced objects. Since I'm trying to test

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Eugen Block
You’re absolutely right, of course, the balancer wouldn't cause degraded PGs. Flapping OSDs seem very likely here. Quoting Josh Baergen: I assume it's the balancer module. If you write lots of data quickly into the cluster, the distribution can vary and the balancer will try to even out
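
Flapping OSDs can be spotted from the monitor side; these are generic status commands, not something given in the thread:

  # watch the cluster log for OSDs repeatedly going down and up
  ceph -w
  # quick count of up/in OSDs
  ceph osd stat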

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Josh Baergen
> I assume it's the balancer module. If you write lots of data quickly
> into the cluster, the distribution can vary and the balancer will try
> to even out the placement.

The balancer won't cause degradation, only misplaced objects.

> Degraded data redundancy: 260/11856050 objects degraded >
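
For scale, the quoted counter works out to 260 / 11856050 ≈ 0.0022 % of objects degraded, a tiny fraction of the cluster's data.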

[ceph-users] Re: Is it normal Ceph reports "Degraded data redundancy" in normal use?

2021-09-16 Thread Eugen Block
Hi, I assume it's the balancer module. If you write lots of data quickly into the cluster, the distribution can vary and the balancer will try to even out the placement. You can check the status with "ceph balancer status" and disable it if necessary: "ceph balancer mode none". Regards, Eugen
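
As a recap of the two commands Eugen gives:

  ceph balancer status     # shows the current mode and whether it is active
  ceph balancer mode none  # stop the balancer from generating new plans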