[ceph-users] Re: Num values for 3 DC 4+2 crush rule

2024-03-16 Thread Eugen Block
Hi Torkil, Num is 0 but it's not replicated so how does this translate to picking 3 of 3 datacenters? it doesn't really make a difference if replicated or not, it just defines how many crush buckets to choose, so it applies in the same way as for your replicated pool. I am thinking we

[ceph-users] Re: Fwd: Ceph fs snapshot problem

2024-03-16 Thread Neeraj Pratap Singh
As per the error message you mentioned, Permission denied: it seems that the 'subvolume' flag has been set on the root directory, and we cannot create snapshots in directories under a subvolume directory. Can you please retry creating the directory after unsetting it with: setfattr -n ceph.dir.subvolume -v 0
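
A short sketch of the check-and-unset sequence suggested above (the mount path and snapshot name are placeholders):

    # inspect the flag on the directory in question
    getfattr -n ceph.dir.subvolume /mnt/cephfs/somedir
    # unset the subvolume flag, then retry the snapshot
    setfattr -n ceph.dir.subvolume -v 0 /mnt/cephfs/somedir
    mkdir /mnt/cephfs/somedir/.snap/my-snapshot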

[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-16 Thread Eugen Block
Yeah, the whole story would help to give better advice. With EC the default min_size is k+1; you could reduce min_size to 5 temporarily, which might bring the PGs back online. But the long-term fix is to have all required OSDs up and enough OSDs to sustain an outage. Quoting
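
A hedged sketch of the temporary min_size change described above (the pool name is a placeholder; 5 is the value suggested in the thread):

    # inspect the pool's current min_size
    ceph osd pool get <pool> min_size
    # temporarily reduce it as suggested, then watch whether the PGs activate
    ceph osd pool set <pool> min_size 5
    ceph -s
    # restore the default (k+1 for the pool's EC profile) once the missing OSDs are back
    ceph osd pool set <pool> min_size <k+1>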

[ceph-users] Re: activating+undersized+degraded+remapped

2024-03-16 Thread Wesley Dillingham
Please share "ceph osd tree" and "ceph osd df tree". I suspect you do not have enough hosts to satisfy the EC profile. On Sat, Mar 16, 2024, 8:04 AM Deep Dish wrote: > Hello > > I found myself in the following situation: > > [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive > > pg 4.3d is
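
A sketch of the commands to collect that output, plus a per-PG query for one of the stuck PGs (the query is an addition, not something asked for in the thread):

    ceph osd tree
    ceph osd df tree
    ceph pg 4.3d query | less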

[ceph-users] activating+undersized+degraded+remapped

2024-03-16 Thread Deep Dish
Hello, I found myself in the following situation: [WRN] PG_AVAILABILITY: Reduced data availability: 3 pgs inactive pg 4.3d is stuck inactive for 8d, current state activating+undersized+degraded+remapped, last acting [4,NONE,46,NONE,10,13,NONE,74] pg 4.6e is stuck inactive for 9d,
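
For anyone landing in the same state, a sketch of the usual first diagnostics; the PG id is the one from the report above:

    ceph health detail
    ceph pg dump_stuck inactive
    ceph pg map 4.3d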

[ceph-users] Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

2024-03-16 Thread Frédéric Nass
Hello Van Diep, I read this after you got out of trouble. According to your ceph osd tree, it looks like your problems started when the ceph orchestrator created osd.29 on node 'cephgw03', because it seems very unlikely that you created a 100MB OSD on a node that's named after
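
If the OSD was indeed auto-created by an all-available-devices spec, a sketch of how to stop that and drain the stray OSD (this is an assumption about the cause, not confirmed in the thread):

    # make OSD deployment unmanaged so the orchestrator stops consuming devices on its own
    ceph orch apply osd --all-available-devices --unmanaged=true
    # drain and remove the stray OSD (id taken from the thread), then re-check the tree
    ceph orch osd rm 29
    ceph osd tree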

[ceph-users] Re: Fwd: Ceph fs snapshot problem

2024-03-16 Thread Marcus
Hi, there is no such attribute: /mnt: ceph.dir.subvolume: No such attribute. I did not have getfattr installed, so I needed to install the attr package. Can it be that this package was not installed when the fs was created, so ceph.dir.subvolume could not be set at creation? Did not get any warnings at
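
For reference, a sketch of the check described above (Debian/Ubuntu package name shown; the mount point is the one from the message):

    # install the package providing getfattr/setfattr
    apt install attr
    # query the flag; "No such attribute" only means the vxattr was never set on this directory
    getfattr -n ceph.dir.subvolume /mnt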