On Thu, May 25, 2017 at 01:19:31PM +0200, Petr Mladek wrote:
> On Wed 2017-05-24 19:27:38, Dmitry Torokhov wrote:
> > On Thu, May 25, 2017 at 03:00:17AM +0200, Luis R. Rodriguez wrote:
> > > On Wed, May 24, 2017 at 05:45:37PM -0700, Dmitry Torokhov wrote:
> > > > On Thu, May 25, 2017 at 02:14:52AM +0200, Luis R. Rodriguez wrote:
> > > > > On Fri, May 19, 2017 at 03:27:12PM -0700, Dmitry Torokhov wrote:
> > > > > > On Thu, May 18, 2017 at 08:24:43PM -0700, Luis R. Rodriguez wrote:
> > > > > > > In theory multiple concurrent threads can call
> > > > > > > kmod_umh_threads_get(), and thus atomic_inc(&kmod_concurrent), at
> > > > > > > the same time, opening a small window during which we have bumped
> > > > > > > kmod_concurrent but have not really enabled work. By disabling
> > > > > > > preemption we mitigate this a bit.
> > > > > > > 
> > > > > > > Preemption is not needed when we kmod_umh_threads_put().
> > > > > > > 
> > > > > > > Signed-off-by: Luis R. Rodriguez <mcg...@kernel.org>
> > > > > > > ---
> > > > > > >  kernel/kmod.c | 24 ++++++++++++++++++++++--
> > > > > > >  1 file changed, 22 insertions(+), 2 deletions(-)
> > > > > > > 
> > > > > > > diff --git a/kernel/kmod.c b/kernel/kmod.c
> > > > > > > index 563600fc9bb1..7ea11dbc7564 100644
> > > > > > > --- a/kernel/kmod.c
> > > > > > > +++ b/kernel/kmod.c
> > > > > > > @@ -113,15 +113,35 @@ static int call_modprobe(char *module_name, int wait)
> > > > > > >  
> > > > > > >  static int kmod_umh_threads_get(void)
> > > > > > >  {
> > > > > > > + int ret = 0;
> > > > > > > +
> > > > > > > + /*
> > > > > > > +  * Disabling preemption makes sure that we are not rescheduled here.
> > > > > > > +  *
> > > > > > > +  * Also, preemption helps ensure kmod_concurrent is not increased
> > > > > > > +  * by mistake for too long, given that in theory two concurrent
> > > > > > > +  * threads could race on atomic_inc() before we atomic_read() --
> > > > > > > +  * we know that's possible but we don't care; this is not used for
> > > > > > > +  * object accounting and is just a subjective threshold. The
> > > > > > > +  * alternative is a lock.
> > > > > > > +  */
> > > > > > > + preempt_disable();
> > > > > > >   atomic_inc(&kmod_concurrent);
> > > > > > >   if (atomic_read(&kmod_concurrent) <= max_modprobes)
> > > > > > 
> > > > > > That is a very "fancy" way of basically saying:
> > > > > > 
> > > > > >     if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
> > > > > 
> > > > > Do you mean to combine the atomic_inc() and atomic_read() into one as
> > > > > you noted (as that is not a change in this patch), *or* that using a
> > > > > memory barrier here with atomic_inc_return() should suffice to address
> > > > > the same and avoid an explicit preemption enable/disable?
> > > > 
> > > > I am saying that atomic_inc_return() will avoid the situation where
> > > > you have more than one thread incrementing the counter and believing
> > > > that they are [not] allowed to start modprobe.
> > > > 
> > > > I have no idea why you think preempt_disable() would help here. It only
> > > > ensures that the current thread will not be preempted between the point
> > > > where you update the counter and the point where you check the result.
> > > > It does not stop interrupts, nor does it affect other threads that
> > > > might be updating the same counter.
> > > 
> > > The preemption was inspired by __module_get() and try_module_get();
> > > was that rather silly?
> > 
> > As far as I can see preempt_disable() was needed in __module_get() when
> > modules used per-cpu refcounts: you did not want to move away from the
> > CPU while manipulating the refcount.
> > 
> > Now that modules use simple atomics for refcounting I think these
> > preempt_disable() and preempt_enable() can be removed.
> 
> preempt_disable() still might be useful because you do the
> atomic_dec() when you reach the limit.

No, not really, because even if you prevent the process from migrating to
another CPU, it will not help with another thread modifying the counter.
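
For illustration, here is a minimal sketch of the atomic_inc_return()
variant (the -EBUSY return value and the put() pairing are assumptions
made for the sketch, not taken from the patch):

	static int kmod_umh_threads_get(void)
	{
		/*
		 * The increment and the value we test come from a single
		 * atomic operation, so each thread decides based on the
		 * value it itself produced -- there is no window between
		 * the bump and the check.
		 */
		if (atomic_inc_return(&kmod_concurrent) <= max_modprobes)
			return 0;
		/* Over the limit: undo our increment and report busy. */
		atomic_dec(&kmod_concurrent);
		return -EBUSY;
	}

	static void kmod_umh_threads_put(void)
	{
		atomic_dec(&kmod_concurrent);
	}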

> 
> In other words, you have three operations that should be atomic:
> inc, read, and dec. atomic_inc_return() covers only two of them.
> 
> Hmm, a solution might be to use atomic_dec_if_positive().
> I would rename kmod_concurrent to something like kmod_concurrent_available
> and initialize it with the maximum allowed number. Then you could do:
> 
> static int kmod_umh_threads_get(void)
> {
>       if (atomic_dec_if_positive(&kmod_concurrent_available) < 0)
>               return -EBUSY;
>       return 0;
> }

Yes, this looks like the optimal solution. And I think we won't need the
kmod_umh_threads_get() wrapper anymore.
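
Completing Petr's sketch with the put side (a sketch only; the
kmod_umh_threads_*() wrapper names are kept just for illustration, and
the compile-time limit of 50 is an assumption, not necessarily the value
kmod.c would actually use):

	#define MAX_KMOD_CONCURRENT 50	/* assumed limit, for illustration */

	/* Number of modprobe slots still available. */
	static atomic_t kmod_concurrent_available =
		ATOMIC_INIT(MAX_KMOD_CONCURRENT);

	static int kmod_umh_threads_get(void)
	{
		/*
		 * atomic_dec_if_positive() decrements only if the result
		 * stays non-negative, so taking a slot and checking the
		 * limit are one atomic operation; no undo is needed on
		 * failure.
		 */
		if (atomic_dec_if_positive(&kmod_concurrent_available) < 0)
			return -EBUSY;
		return 0;
	}

	static void kmod_umh_threads_put(void)
	{
		/* Return the slot taken above. */
		atomic_inc(&kmod_concurrent_available);
	}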

-- 
Dmitry