[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-17 Thread Wesley Dillingham
You may be suffering from the "crush gives up too soon" situation: https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon You have a 5+3 EC profile with only 8 hosts, so you may need to increase your CRUSH tries. See the link for how to fix. Respectfully, *Wes
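A sketch of the remedy the linked troubleshooting page describes; the rule id and the tries value of 100 are illustrative here, not taken from this thread:

  # Export and decompile the current CRUSH map
  ceph osd getcrushmap -o crush.map
  crushtool -d crush.map -o crush.txt

  # In crush.txt, inside the EC pool's rule, raise the retry budget by
  # adding (or increasing) this step before "step take":
  #   step set_choose_tries 100

  # Recompile, test the affected rule for bad mappings at 8 replicas
  # (k+m for a 5+3 profile), then inject the new map
  crushtool -c crush.txt -o crush.new
  crushtool -i crush.new --test --show-bad-mappings --rule 1 --num-rep 8 --min-x 1 --max-x 1024
  ceph osd setcrushmap -i crush.new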

[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-17 Thread Joachim Kraftmayer - ceph ambassador
Also helpful is the output of: ceph pg {poolnum}.{pg-id} query ___ ceph ambassador DACH ceph consultant since 2012 Clyso GmbH - Premier Ceph Foundation Member https://www.clyso.com/ On 16.03.24 at 13:52, Eugen Block wrote: Yeah, the whole story would help to
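For example, taking pg 4.3d from the warning quoted later in this thread:

  # Dumps peering state, up/acting sets, and why the PG is stuck
  ceph pg 4.3d query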

[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-16 Thread Eugen Block
Yeah, the whole story would help to give better advice. With EC the default min_size is k+1; you could reduce min_size to 5 temporarily, which might bring the PGs back online. But the long-term fix is to have all required OSDs up and to have enough OSDs to sustain an outage. Quoting
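A minimal sketch of that temporary workaround, assuming the 5+3 pool from this thread; the pool name is a placeholder:

  # Default min_size for a 5+3 EC pool is k+1 = 6; dropping to k = 5 can let
  # undersized PGs activate, at the cost of no redundancy margin for writes.
  ceph osd pool set <ec-pool-name> min_size 5

  # Once the required OSDs are back and recovery finishes, restore the default:
  ceph osd pool set <ec-pool-name> min_size 6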

[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-16 Thread Wesley Dillingham
Please share "ceph osd tree" and "ceph osd df tree". I suspect you do not have enough hosts to satisfy the EC profile. On Sat, Mar 16, 2024, 8:04 AM Deep Dish wrote: > Hello > > I found myself in the following situation: > > [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive > > pg 4.3d is
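The check behind that suspicion, sketched with a placeholder profile name: a 5+3 profile places k+m = 8 chunks, so a "host" failure domain needs at least 8 hosts up, and more than 8 to re-heal after losing one:

  ceph osd tree                                 # how many hosts exist and which OSDs are up
  ceph osd df tree                              # same tree, with per-OSD utilization
  ceph osd erasure-code-profile get <profile>   # confirm k, m, and crush-failure-domain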