I'm on Red Hat Ceph Storage 2.2 (ceph-10.2.7-0.el7.x86_64) and I see this:
# cephfs-data-scan
Usage:
  cephfs-data-scan init [--force-init]
  cephfs-data-scan scan_extents [--force-pool] <data pool name>
  cephfs-data-scan scan_inodes [--force-pool] [--force-corrupt] <data pool name>

    --force-corrupt: overwrite apparently corrupt structures
    --force-init: write root inodes even if they exist
    --force-pool: use data pool even if it is not in FSMap

  cephfs-data-scan scan_frags [--force-corrupt]

  cephfs-data-scan tmap_upgrade <metadata_pool>

  --conf/-c FILE    read configuration from the given configuration file
  --id/-i ID        set ID portion of my name
  --name/-n TYPE.ID set name
  --cluster NAME    set cluster name (default: ceph)
  --setuser USER    set uid to user or uid (and gid to user's gid)
  --setgroup GROUP  set gid to group or gid
  --version         show version and quit


Does anyone know where "cephfs-data-scan pg_files <path> <pg id> [<pg id>...]"
went? The docs at <http://docs.ceph.com/docs/master/cephfs/disaster-recovery/>
still list it.

Thanks,
/Chris Callegari
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com