[ceph-users] Re: One PG stuck in active+clean+remapped

2022-02-24 Thread Erwin Lubbers
ceph osd setcrushmap -i crush.map2 > > Cheers, Dan > > On Thu, Feb 24, 2022 at 6:29 PM Erwin Lubbers wrote: >> >> Hi all, >> >> I have one active+clean+remapped PG on a 152 OSD Octopus (15.2.15) cluster >> with equally balanced OSDs (around 40% usage). Th
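For reference, the crush map edit cycle mentioned in the reply above usually looks like the following (a sketch; the file names crush.bin, crush.txt and crush.map2 are only placeholders):

    # export the current crush map and decompile it to editable text
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt (rules, buckets), then recompile and inject it
    crushtool -c crush.txt -o crush.map2
    ceph osd setcrushmap -i crush.map2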

[ceph-users] One PG stuck in active+clean+remapped

2022-02-24 Thread Erwin Lubbers
Hi all, I have one active+clean+remapped PG on a 152 OSD Octopus (15.2.15) cluster with equally balanced OSDs (around 40% usage). The cluster has three replicas spread across three datacenters (A+B+C). All PGs are available in each datacenter (as defined in the crush map), but only this one
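A minimal way to inspect a single remapped PG like this (the PG id 1.2f below is only a placeholder) is to compare its up set with its acting set and double-check the OSD fill levels:

    # list PGs currently in a remapped state
    ceph pg dump pgs_brief | grep remapped
    # compare the up set (what CRUSH wants) with the acting set (where the data currently is)
    ceph pg map 1.2f
    ceph pg 1.2f query
    # confirm the OSDs really are evenly filled
    ceph osd df tree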

[ceph-users] Re: virtual machines crashes after upgrade to octopus

2020-05-07 Thread Erwin Lubbers
Hi, Did anyone find a way to resolve the problem? I'm seeing the same on a clean Octopus Ceph installation on Ubuntu 18 with an Octopus-compiled KVM server running on CentOS 7.8. The KVM machine shows: [ 7682.233684] fn-radosclient[6060]: segfault at 2b19 ip 7f8165cc0a50 sp
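One thing worth checking in a setup like this (a sketch, assuming the hypervisor uses the stock librbd1/librados2 packages) is whether the client libraries on the CentOS 7.8 KVM host match the Octopus cluster:

    # on the KVM host: client libraries QEMU/librbd is linked against
    rpm -q librbd1 librados2
    # on the cluster: releases and feature bits reported by daemons and connected clients
    ceph versions
    ceph features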