Hi,
what does your 'ceph osd df tree' look like?
I've read about these warnings when PGs are incomplete but not when
all are active+clean.
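In case it helps, here are a few read-only commands I would start with to
see the per-PG scrub state and the intervals the warnings are based on
(just a sketch; the exact column layout and option names should be
verified on your Nautilus version):

# hierarchical view of the OSDs per failure domain (host vs. osd)
ceph osd df tree

# per-PG scrub and deep-scrub timestamps (wide output)
ceph pg dump pgs | less -S

# intervals the "not (deep-)scrubbed in time" warnings are derived from
ceph config get osd osd_scrub_max_interval
ceph config get osd osd_deep_scrub_interval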
Quoting Andres Rojas Guerrero <a.ro...@csic.es>:
Hi, recently in a Nautilus cluster (version 14.2.6) I changed the crush
rule failure domain from osd to host. Everything seems OK, but now I
get "PG not deep-scrubbed in time" warnings while all PGs are in
active+clean state:
# ceph status
  cluster:
    id:     c74da5b8-3d1b-483e-8b3a-739134db6cf8
    health: HEALTH_WARN
            8192 pgs not deep-scrubbed in time
            8192 pgs not scrubbed in time

  services:
    mon: 3 daemons, quorum ceph2mon01,ceph2mon02,ceph2mon03 (age 2w)
    mgr: ceph2mon01(active, since 4w), standbys: ceph2mon02, ceph2mon03
    mds: nxtclfs:1 {0=ceph2mon01=up:active} 2 up:standby
    osd: 768 osds: 768 up (since 12d), 768 in (since 12d)

  data:
    pools:   2 pools, 16384 pgs
    objects: 38.01M objects, 43 TiB
    usage:   71 TiB used, 2.7 PiB / 2.7 PiB avail
    pgs:     16384 active+clean
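Regarding the crush rule change described above, it might also be worth
double-checking which rule each pool actually references now. A small
sketch (read-only commands, nothing specific to your setup assumed):

# show the rules and the failure domain (chooseleaf type) they use
ceph osd crush rule dump

# confirm the crush_rule id each pool points at
ceph osd pool ls detail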
If I try to run a deep-scrub on one of the PGs not deep-scrubbed in
time, I get the error "pg .... has no primary osd":
# ceph pg deep-scrub 3.1d5b
Error EAGAIN: pg 3.1d5b has no primary osd
What could be the cause of the error?
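To narrow down the "has no primary osd" message, it could help to look
at how that PG is currently mapped; a sketch reusing the PG id from
above (the first OSD in the acting set should be the primary):

# up and acting set for the PG
ceph pg map 3.1d5b

# full peering / scrub details for the PG
ceph pg 3.1d5b query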
--
*******************************************************
Andrés Rojas Guerrero
Unidad Sistemas Linux
Area Arquitectura Tecnológica
Secretaría General Adjunta de Informática
Consejo Superior de Investigaciones Científicas (CSIC)
Pinar 19
28006 - Madrid
Tel: +34 915680059 -- Ext. 444059
email: a.ro...@csic.es
ID comunicate.csic.es: @50852720l:matrix.csic.es
*******************************************************
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io