On Fri, Mar 11, 2016 at 12:19:31PM +0300, Vladimir Davydov wrote:
> On Fri, Mar 11, 2016 at 09:18:25AM +0100, Michal Hocko wrote:
> > On Thu 10-03-16 15:50:14, Johannes Weiner wrote:
> ...
> > > @@ -5037,9 +5040,36 @@ static ssize_t memory_max_write(struct kernfs_open_file *of,
> > >   if (err)
> > >           return err;
> > >  
> > > - err = mem_cgroup_resize_limit(memcg, max);
> > > - if (err)
> > > -         return err;
> > > + xchg(&memcg->memory.limit, max);
> > > +
> > > + for (;;) {
> > > +         unsigned long nr_pages = page_counter_read(&memcg->memory);
> > > +
> > > +         if (nr_pages <= max)
> > > +                 break;
> > > +
> > > +         if (signal_pending(current)) {
> > 
> > Didn't you want fatal_signal_pending here? At least the changelog
> > suggests that.
> 
> I suppose the user might want to interrupt the write by hitting CTRL-C.

Yeah. This is the same thing we do for the current limit setting loop.

> Come to think of it, shouldn't we restore the old limit and return EBUSY
> if we failed to reclaim enough memory?

I suspect it's very rare that it would fail. But even in that case
it's probably better to at least not allow new charges past what the
user requested, even if we can't push the level back far enough.
