Hi Nicola,

This might be "CRUSH gives up too soon"
(https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/#crush-gives-up-too-soon)
or https://tracker.ceph.com/issues/57348.
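If it is the former, the workaround on that page is to raise the retry
limit in the affected CRUSH rule. A rough sketch only (rule id 1 and the
test range are placeholders; your pool appears to have 8 shards, hence
--num-rep 8 -- please adapt and test before injecting anything):

# ceph osd getcrushmap -o crush.map
# crushtool -d crush.map -o crush.txt

Edit crush.txt and add "step set_choose_tries 100" as the first step of
the rule used by the pool, then recompile and test the new map:

# crushtool -c crush.txt -o crush.new
# crushtool -i crush.new --test --show-bad-mappings --rule 1 --num-rep 8 --min-x 1 --max-x 1024
# ceph osd setcrushmap -i crush.new

Only inject the new map once the --test run reports no bad mappings.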

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Nicola Mori <m...@fi.infn.it>
Sent: 02 November 2022 18:59:22
To: ceph-users
Subject: [ceph-users] Missing OSD in up set

Dear Ceph users,

I have one PG in my cluster that is constantly in the
active+clean+remapped state. From what I understand, there might be a
problem with the up set:

# ceph pg map 3.5e
osdmap e23638 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647]
acting [38,78,55,49,40,39,64,68]

The last OSD of the up set is NONE (2147483647), and this is the only PG
in my cluster in this state. Since the corresponding OSD in the acting
set is 68, I tried marking it out of the cluster, but the only result is
that the PG is now active+undersized+degraded, the up set is still
missing one OSD, and no recovery operation for it is ongoing:

# ceph pg map 3.5e
osdmap e23640 pg 3.5e (3.5e) -> up [38,78,55,49,40,39,64,2147483647]
acting [38,78,55,49,40,39,64,2147483647]

I found no clue on the web about how to solve this, so I'd appreciate any help.
Thanks,

Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io