Dear Ceph Experts,

a few days after we upgraded our Ceph cluster from Hammer
to Jewel, the MDS detected some metadata damage:

   # ceph status
   [...]
   health HEALTH_ERR
         mds0: Metadata damage detected
   [...]

The output of

   # ceph tell mds.0 damage ls

is:

   [
      {
         "ino" : [...],
         "id" : [...],
         "damage_type" : "backtrace"
      },
      [...]
   ]

There are 5 such "damage_type" : "backtrace" entries in total.
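
If it helps: as far as I understand, the "ino" values can be
mapped back to paths by searching the mounted filesystem for
the inode numbers (assuming "ino" is a plain decimal inode
number; /mnt/cephfs below is just a stand-in for our actual
mount point):

   # find /mnt/cephfs -inum <ino>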

I'm not really surprised: there were a few instances in
the past where one or two (mostly empty) directories and
symlinks acted strangely and couldn't be deleted
("rm" failed with "Invalid argument"). Back then, I moved them
all into a "quarantine" directory, but wasn't able to do
anything else about it.

Now that CephFS does more rigorous checks and has spotted
the trouble - how do I go about repairing this?
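
From what I've gathered from the docs so far, backtrace damage
can apparently be repaired online by scrubbing the affected
paths with the repair flag, and the entries can then be removed
from the damage table - is something like the following the
right approach? (The scrub is run via the admin socket on the
MDS host; "mds.0" and the path are placeholders for our daemon
name and the quarantine directory, and <id> is the "id" field
from "damage ls":)

   # ceph daemon mds.0 scrub_path /quarantine recursive repair
   # ceph tell mds.0 damage rm <id>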


Cheers and thanks for any help,

Oliver