Thus spake Brad Hubbard (bhubb...@redhat.com) on Wednesday 30 October 2019 at 12:50:50:
> Maybe you should set nodown and noout while you do these maneuvers?
> That will minimise peering and recovery (data movement).

As the commands don't take too long, I only saw a few slow requests before
the OSD was back online. Thanks for the nodown|noout tip.
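
For next time, the sequence would presumably look something like this (just a
sketch using the standard ceph CLI flags and systemd unit names, not exactly
what I ran this time):

ceph osd set nodown
ceph osd set noout
systemctl stop ceph-osd@29       # stop the OSD before touching it with ceph-objectstore-tool
# ... ceph-objectstore-tool work on the stopped OSD ...
systemctl start ceph-osd@29
ceph osd unset nodown
ceph osd unset noout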

> > snapid 22772 from osd.29 and osd.42:
> > ceph-objectstore-tool --pgid 2.2ba --data-path /var/lib/ceph/osd/ceph-29/ \
> >   '["2.2ba",{"oid":"rbd_data.b4537a2ae8944a.000000000000425f","key":"","snapid":22772,"hash":719609530,"max":0,"pool":2,"namespace":"","max":0}]' \
> >   remove
> > ceph-objectstore-tool --pgid 2.2ba --data-path /var/lib/ceph/osd/ceph-42/ \
> >   '["2.2ba",{"oid":"rbd_data.b4537a2ae8944a.000000000000425f","key":"","snapid":22772,"hash":719609530,"max":0,"pool":2,"namespace":"","max":0}]' \
> >   remove
>
> That looks right.

Done, preceded by a few dump, get-attrs,… commands first. Not sure how useful
those really were, but better to be cautious ^^
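
(Roughly of this form, for the record; I'm quoting from memory so the exact
subcommand names may be slightly off, e.g. list-attrs rather than get-attrs,
and the OSD was stopped while running them:)

ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29/ \
  '["2.2ba",{"oid":"rbd_data.b4537a2ae8944a.000000000000425f","key":"","snapid":22772,"hash":719609530,"max":0,"pool":2,"namespace":"","max":0}]' \
  dump
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-29/ \
  '["2.2ba",{"oid":"rbd_data.b4537a2ae8944a.000000000000425f","key":"","snapid":22772,"hash":719609530,"max":0,"pool":2,"namespace":"","max":0}]' \
  list-attrs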

The PG still looks inconsistent. I asked for a deep-scrub of 2.2ba and am
still waiting. `list-inconsistent-obj` and `list-inconsistent-snapset` both
return "No scrub information available for pg 2.2ba" for the moment.

I also tried to clean up pg 2.371 with:
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-27/ \
  '["2.371",{"oid":"rbd_data.0c16b76b8b4567.00000000000420bb","key":"","snapid":22822,"hash":3394498417,"max":0,"pool":2,"namespace":"","max":0}]' \
  remove

This one doesn't look inconsistent anymore, but I also asked for a
deep-scrub of it to confirm.
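
I'll re-check it the same way once the scrub finishes, something like:

ceph pg deep-scrub 2.371
rados list-inconsistent-snapset 2.371 --format=json-pretty
ceph health detail | grep 2.371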


> You should probably try and work out what caused the issue and take
> steps to minimise the likelihood of a recurrence. This is not expected
> behaviour in a correctly configured and stable environment.

Yes… I'll wait a little to see what happens with these commands first,
and keep an eye on the cluster health and logs…
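
In practice that just means watching the usual things:

ceph -s
ceph health detail
ceph -w

plus the OSD logs under /var/log/ceph/.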


--
Gardais Jérémy
Institut de Physique de Rennes
Université Rennes 1
Phone: 02-23-23-68-60
Mail & good practices: http://fr.wikipedia.org/wiki/Nétiquette
-------------------------------