We are seeing extremely slow recovery whenever remapping is necessary. We didn't do
anything special other than assigning the correct device_class (ssd) to the OSDs. When
watching ceph status (with watch -n 1 -c ceph status), the number of objects recovered
per second hovers around 17-25.
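For reference, the device class change was applied roughly like this (a sketch only;
the OSD id, rule name and pool name below are illustrative, not our actual ones):

  # illustrative: clear any auto-detected class, then pin the OSD to the ssd class
  ceph osd crush rm-device-class osd.0
  ceph osd crush set-device-class ssd osd.0

  # a replicated CRUSH rule limited to the ssd class, applied to a pool
  ceph osd crush rule create-replicated replicated_ssd default host ssd
  ceph osd pool set cephfs_data crush_rule replicated_ssd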

How can we speed up the recovery process?

There isn't any client load yet, because we will only migrate to this cluster in the
future; at the moment just an occasional rsync is run, so we can afford to prioritize
recovery (see the sketch below).
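The kind of tuning we have been considering is roughly the following (a sketch,
assuming a release with the mClock scheduler, i.e. Quincy or later; the values are
guesses on our part, not tested recommendations):

  # with mClock, switch the QoS profile so recovery/backfill gets most of the IOPS
  ceph config set osd osd_mclock_profile high_recovery_ops

  # classic knobs (values illustrative); on mClock releases these are ignored
  # unless osd_mclock_override_recovery_settings is also enabled
  ceph config set osd osd_max_backfills 4
  ceph config set osd osd_recovery_max_active 8

Is this the right direction, or are we missing something?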

[ceph: root@pwsoel12998 /]# ceph status
  cluster:
    id:     da3ca2e4-ee5b-11ed-8096-0050569e8c3b
    health: HEALTH_WARN
            noscrub,nodeep-scrub flag(s) set

  services:
    mon: 5 daemons, quorum pqsoel12997,pqsoel12996,pwsoel12994,pwsoel12998,prghygpl03 (age 3h)
    mgr: pwsoel12998.ylvjcb(active, since 3h), standbys: pqsoel12997.gagpbt
    mds: 4/4 daemons up, 2 standby
    osd: 32 osds: 32 up (since 73m), 32 in (since 6d); 10 remapped pgs
         flags noscrub,nodeep-scrub

  data:
    volumes: 2/2 healthy
    pools:   5 pools, 193 pgs
    objects: 13.97M objects, 853 GiB
    usage:   3.5 TiB used, 12 TiB / 16 TiB avail
    pgs:     755092/55882956 objects misplaced (1.351%)
             183 active+clean
             10  active+remapped+backfilling

  io:
    recovery: 2.3 MiB/s, 20 objects/s
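
For completeness, the currently effective recovery settings on an OSD can be checked
with something like this (osd.0 is just an arbitrary example):

  ceph config show osd.0 | egrep 'osd_max_backfills|osd_recovery_max_active|osd_mclock_profile'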
