# ceph osd setcrushmap -i crush.map2
>
> Cheers, dan
>
> On Thu, Feb 24, 2022 at 6:29 PM Erwin Lubbers wrote:
>>
>> Hi all,
>>
>> I have one active+clean+remapped PG on a 152 OSD Octopus (15.2.15) cluster
>> with equal balanced OSD's (around 40% usage). [...]
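The `setcrushmap` one-liner above is the final step of the usual CRUSH map round-trip. A minimal sketch of the full workflow, assuming the edited map is saved as `crush.map2` as in the reply (the other filenames are illustrative, not from the thread):

```shell
# Export the current (binary) CRUSH map from the cluster
ceph osd getcrushmap -o crush.bin

# Decompile it to an editable text form
crushtool -d crush.bin -o crush.txt

# ... edit crush.txt (rules, buckets, weights) ...

# Recompile the edited map and sanity-check the mappings
crushtool -c crush.txt -o crush.map2
crushtool -i crush.map2 --test --show-bad-mappings

# Inject the new map into the cluster (the command from the reply)
ceph osd setcrushmap -i crush.map2
```

Running `crushtool --test` before injecting the map is a cheap way to catch rules that cannot place all replicas.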
Hi all,
I have one active+clean+remapped PG on a 152 OSD Octopus (15.2.15) cluster with
equally balanced OSD's (around 40% usage). The cluster has three replicas
spread across three datacenters (A+B+C).
All PGs are available in each datacenter (as defined in the crush map), but
only this one (
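For a three-replica, three-datacenter layout like the one described, the CRUSH rule would typically look something like this (a sketch in crushtool's decompiled text format; the rule name, id, and root bucket are assumptions, not taken from the thread):

```
rule replicated_3dc {
    id 1
    type replicated
    min_size 3
    max_size 3
    step take default
    # pick one leaf (OSD) in each of three distinct datacenters
    step chooseleaf firstn 0 type datacenter
    step emit
}
```

With `chooseleaf firstn 0 type datacenter`, CRUSH selects as many datacenters as there are replicas and one OSD under each, which is what forces a copy into A, B and C.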
Hi,
Did anyone find a way to resolve the problem? I'm seeing the same on a clean
Octopus Ceph installation on Ubuntu 18 with a KVM server running Octopus-compiled
client code on CentOS 7.8. The KVM machine shows:
[ 7682.233684] fn-radosclient[6060]: segfault at 2b19 ip 7f8165cc0a50 sp
7f81397f64