On 07/14, Jan Kara wrote:
>
>   Hello,
>
> On Mon 13-07-15 23:25:36, Oleg Nesterov wrote:
> > Al, Jan, could you comment? I mean the intent, the patches are
> > obviously not for inclusion yet.
>
> Thanks for the patches! Hum, what is it with freeze protection
> these days? No one cared about it for years and suddenly two patch sets
> appear within a month :) Anyway, have you seen the patch set from Dave Hansen?
>
> It is here: https://lkml.org/lkml/2015/6/24/682
> He modifies the freezing primitives in a different way. AFAICS the
> resulting performance of the fast path should be about the same.

At first glance, yes, patches 2-3 do something similar...

> So unless
> I'm missing something and there is a significant performance advantage to
> Dave's patches, I'm all for using the generic primitive you suggest.

I think percpu_rw_semaphore looks a bit better, and it is even a bit
faster. And it will not block __sb_start_write() entirely while
freeze_super() sleeps in synchronize_rcu().

freeze_super() should be faster too after the rcu_sync changes, but this
is not that important.

But again, to me the main advantage is that we can use the generic
primitives and remove this nontrivial code in fs/super.c.
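
To make that concrete, here is a minimal sketch (not the actual patch) of
how the fs/super.c helpers could map onto the generic primitive. The
sb_writers_sketch structure and the *_sketch helpers are made-up names
for illustration; only percpu_down_read/up_read/down_write and the
SB_FREEZE_* constants are real:

        #include <linux/fs.h>
        #include <linux/percpu-rwsem.h>

        /* one lock per freeze level, like ->counter[] today */
        struct sb_writers_sketch {
                struct percpu_rw_semaphore rwsem[SB_FREEZE_LEVELS];
        };

        static void sb_start_write_sketch(struct sb_writers_sketch *sbw)
        {
                /* fast path: no pending writer, just a per-cpu add */
                percpu_down_read(&sbw->rwsem[SB_FREEZE_WRITE - 1]);
        }

        static void sb_end_write_sketch(struct sb_writers_sketch *sbw)
        {
                percpu_up_read(&sbw->rwsem[SB_FREEZE_WRITE - 1]);
        }

        /* freeze_super() would block new sb_start_write() callers and
           wait for the in-flight ones, per level */
        static void sb_wait_write_sketch(struct sb_writers_sketch *sbw, int level)
        {
                percpu_down_write(&sbw->rwsem[level - 1]);
        }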

> Can you perhaps work with Dave on some common resolution?

Dave, what do you think? Will you agree with percpu_rw_semaphore?

> >     - __sb_start_write() will be a little bit faster, but this
> >       is minor.
>
> Actually Dave could measure the gain achieved by removing the barrier. It
> would be good to verify that your patches achieve a similar gain.

The fast path in percpu_down_read() is really fast: it does a plain
LOAD plus __this_cpu_add() under preempt_disable(). I doubt this can
be improved. The actual code is:

        static bool update_fast_ctr(struct percpu_rw_semaphore *brw, unsigned int val)
        {
                bool success = false;

                preempt_disable();
                if (likely(!atomic_read(&brw->write_ctr))) {
                        __this_cpu_add(*brw->fast_read_ctr, val);
                        success = true;
                }
                preempt_enable();

                return success;
        }

        void percpu_down_read(struct percpu_rw_semaphore *brw)
        {
                might_sleep();
                if (likely(update_fast_ctr(brw, +1))) {
                        rwsem_acquire_read(&brw->rw_sem.dep_map, 0, 0, _RET_IP_);
                        return;
                }

                down_read(&brw->rw_sem);
                atomic_inc(&brw->slow_read_ctr);
                __up_read(&brw->rw_sem);
        }
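
For completeness, the unlock side is symmetric. From memory (so treat
this as a paraphrase of the same-era lib/percpu-rwsem.c, not a quote):

        void percpu_up_read(struct percpu_rw_semaphore *brw)
        {
                rwsem_release(&brw->rw_sem.dep_map, 1, _RET_IP_);

                /* fast path: no writer, just a per-cpu decrement */
                if (likely(update_fast_ctr(brw, -1)))
                        return;

                /* a writer may be waiting for slow_read_ctr to drop */
                if (atomic_dec_and_test(&brw->slow_read_ctr))
                        wake_up_all(&brw->write_waitq);
        }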


> >     - Fix get_super_thawed(), it will spin if MS_RDONLY...
> >
> >       It is not clear to me what exactly we should do, but this
> >       doesn't look hard. Perhaps it can just return if MS_RDONLY.
>
> What's the exact problem here?

Note that freeze_super() does not do sb_wait_write() (which blocks
__sb_start_write()) if MS_RDONLY. This means that after patch 1/4, which
removes ->wait_unfrozen, get_super_thawed() will busy-wait until
SB_UNFROZEN in this case, because sb_start_write() won't block. But
please forget about this, it is not a real problem. I mean, afaics it is
easy to fix, but the initial version should simply keep ->wait_unfrozen
specifically for get_super_thawed(); then we can remove it in a separate
patch.
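
Just to illustrate the "return if MS_RDONLY" idea from above, the fix
could be as simple as the sketch below (untested; the s_flags check is
the only change compared to the current code):

        struct super_block *get_super_thawed(struct block_device *bdev)
        {
                while (1) {
                        struct super_block *s = get_super(bdev);
                        /* a read-only sb skips sb_wait_write(), so
                           there is nothing to wait for */
                        if (!s || (s->s_flags & MS_RDONLY) ||
                            s->s_writers.frozen == SB_UNFROZEN)
                                return s;
                        up_read(&s->s_umount);
                        wait_event(s->s_writers.wait_unfrozen,
                                   s->s_writers.frozen == SB_UNFROZEN);
                        put_super(s);
                }
        }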

Thanks!

Oleg.
