It requires an OSD restart, unfortunately.
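
In case it helps, the restart dance usually looks something like this (a
sketch, assuming cephadm manages the OSDs; osd.0 is just an example id,
and non-cephadm clusters would use "systemctl restart ceph-osd@0"
instead):

    # avoid data movement while OSDs bounce
    ceph osd set noout
    # restart each OSD so the new queue setting takes effect
    ceph orch daemon restart osd.0
    ceph osd unset noout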

Josh

On Fri, May 24, 2024 at 11:03 AM Mazzystr <mazzy...@gmail.com> wrote:
>
> Is that a setting that can be applied at runtime, or does it require an OSD restart?
>
> On Fri, May 24, 2024 at 9:59 AM Joshua Baergen <jbaer...@digitalocean.com>
> wrote:
>
> > Hey Chris,
> >
> > A number of users have been reporting issues with recovery on Reef
> > with mClock. Most folks have had success reverting to
> > osd_op_queue=wpq. AIUI 18.2.3 should have some mClock improvements but
> > I haven't looked at the list myself yet.
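> >
> > If you want to check what you're running now and flip it over, it's
> > roughly this (a sketch; the option names are real, the rest is just
> > an example):
> >
> >     # see the current scheduler
> >     ceph config get osd osd_op_queue
> >     # switch to wpq (takes effect on OSD restart)
> >     ceph config set osd osd_op_queue wpq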
> >
> > Josh
> >
> > On Fri, May 24, 2024 at 10:55 AM Mazzystr <mazzy...@gmail.com> wrote:
> > >
> > > Hi all,
> > > Goodness, I'd say it's been at least 3 major releases since I had to do
> > > a recovery.  I have disks with 60,000-75,000 power_on_hours.  I just
> > > updated from Octopus to Reef last month, and I've been hit with 3 disk
> > > failures and the mClock ugliness.  My recovery is moving at a wondrous
> > > 21 MB/s after some serious hacking.  It started out at 9 MB/s.
> > >
> > > My hosts show minimal CPU use, normal memory use, and 0-6% disk
> > > busyness.  Load is minimal, so processes aren't blocked on disk I/O.
> > >
> > > I tried changing all the sleeps and recovery_max settings, and setting
> > > osd_mclock_profile to high_recovery_ops, with no change in performance.
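> > >
> > > For reference, the knobs in question look like this (values are just
> > > examples; AIUI on mClock the sleeps and recovery_max settings are
> > > ignored unless the override flag is also set):
> > >
> > >     ceph config set osd osd_mclock_profile high_recovery_ops
> > >     # AIUI needed for the sleep/max settings to apply under mClock
> > >     ceph config set osd osd_mclock_override_recovery_settings true
> > >     ceph config set osd osd_recovery_max_active 8
> > >     ceph config set osd osd_recovery_sleep 0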
> > >
> > > Does anyone have any suggestions to improve performance?
> > >
> > > Thanks,
> > > /Chris C
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
