Hi, our recovery slowed down significantly towards the end, but it was still about five times faster than the original speed. We suspect this is somehow caused by threading (more objects transferred means more threads used), but that is only an assumption.
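(The recovery rate itself can be watched with the standard status
commands, e.g.:

    # one-shot cluster status, including current recovery MB/s and objects/s
    ceph -s
    # continuously stream cluster events, including recovery/backfill progress
    ceph -w
)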

On 11/01/18 05:02, shadow_lin wrote:
Hi,
I have tried these two methods, and for backfilling it seems only osd-max-backfills works.
How was your recovery speed when it came to the last few PGs or objects?
2018-01-11
------------------------------------------------------------------------
shadow_lin
------------------------------------------------------------------------

    *From:* Josef Zelenka <josef.zele...@cloudevelops.com>
    *Sent:* 2018-01-11 04:53
    *Subject:* Re: [ceph-users] How to speed up backfill
    *To:* "shadow_lin" <shadow_...@163.com>
    *Cc:*

    Hi, I had the same issue a few days back. I tried playing around
    with these two:

    ceph tell 'osd.*' injectargs '--osd-max-backfills <num>'
    ceph tell 'osd.*' injectargs '--osd-recovery-max-active <num>'

    and it helped greatly (it increased our recovery speed 20x), but be
    careful not to overload your systems.
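    For example, a conservative starting point might look like this
    (4 and 8 are purely illustrative values, not recommendations; tune
    for your hardware and watch client latency as you go):

    ceph tell 'osd.*' injectargs '--osd-max-backfills 4'
    ceph tell 'osd.*' injectargs '--osd-recovery-max-active 8'

    You can check that an injected value actually took effect by
    querying an OSD's admin socket on its host (osd.0 is just an
    example id):

    ceph daemon osd.0 config get osd_max_backfills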


    On 10/01/18 17:50, shadow_lin wrote:
    Hi all,
    I am playing with the backfill settings to try to find out how to
    control the speed of backfill.
    So far I have only found that "osd max backfills" has an effect on
    the backfill speed. But once all PGs that need backfilling have
    started, I can't find any way to speed the backfill up.
    Especially when it comes to the last PG to recover, the speed is
    only a few MB/s (when multiple PGs are being backfilled, the speed
    can be more than 600MB/s in my tests).
    I am a little confused about the backfill and recovery settings.
    Though backfilling is a kind of recovery, it seems the recovery
    settings only apply to replaying PG logs to recover a PG.
    Would changing "osd recovery max active" or other recovery settings
    have any effect on backfilling?
    I tried "osd recovery op priority" and "osd recovery max active"
    with no luck.
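    (For reference, these can be changed at runtime with injectargs,
    e.g. with placeholder values:

    ceph tell 'osd.*' injectargs '--osd-recovery-op-priority <num>'
    ceph tell 'osd.*' injectargs '--osd-recovery-max-active <num>')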
    Any advice would be greatly appreciated. Thanks
    2018-01-11
    ------------------------------------------------------------------------
    lin.yunfan




_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
