On Thu, May 23, 2019 at 11:49:41AM -0700, Shakeel Butt wrote:
> On Thu, May 23, 2019 at 11:37 AM Matthew Wilcox <wi...@infradead.org> wrote:
> >
> > On Thu, May 23, 2019 at 01:43:49PM -0400, Johannes Weiner wrote:
> > > I noticed that recent upstream kernels don't account the xarray nodes
> > > of the page cache to the allocating cgroup, like we used to do for the
> > > radix tree nodes.
> > >
> > > This results in broken isolation for cgrouped apps, allowing them to
> > > escape their containment and harm other cgroups and the system with an
> > > excessive build-up of nonresident information.
> > >
> > > It also breaks thrashing/refault detection because the page cache
> > > lives in a different domain than the xarray nodes, and so the shadow
> > > shrinker can reclaim nonresident information way too early when there
> > > isn't much cache in the root cgroup.
> > >
> > > I'm not quite sure how to fix this, since the xarray code doesn't seem
> > > to have per-tree gfp flags anymore like the radix tree did. We cannot
> > > add SLAB_ACCOUNT to the radix_tree_node_cachep slab cache. And the
> > > xarray api doesn't seem to really support gfp flags, either (xas_nomem
> > > does, but the optimistic internal allocations have fixed gfp flags).
> >
> > Would it be a problem to always add __GFP_ACCOUNT to the fixed flags?
> > I don't really understand cgroups.
> 
> Does the xarray cache allocated nodes, something like the radix tree's:
> 
> static DEFINE_PER_CPU(struct radix_tree_preload, radix_tree_preloads) = { 0, };
> 
> For nodes taken from that cache, there is no __GFP_ACCOUNT flag.

No.  That was the point of the XArray conversion; no cached nodes.
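
To make it concrete, the only point where a caller-supplied gfp enters
the XArray today is the xas_nomem() retry; the optimistic allocation
under the lock uses fixed flags (GFP_NOWAIT | __GFP_NOWARN in
xas_alloc()).  A typical store path looks roughly like this (just a
sketch with assumed names; the real page cache path in
__add_to_page_cache_locked() does more than this):

	/* Sketch only: mapping, index, page and gfp are assumed here. */
	XA_STATE(xas, &mapping->i_pages, index);

	do {
		xas_lock_irq(&xas);
		/* may allocate nodes internally with GFP_NOWAIT | __GFP_NOWARN */
		xas_store(&xas, page);
		xas_unlock_irq(&xas);
		/* on -ENOMEM, allocate a node with the caller's gfp and retry */
	} while (xas_nomem(&xas, gfp));

So forcing __GFP_ACCOUNT into those fixed flags would cover every
XArray user's nodes, not just the page cache's.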

> Also, some users of the xarray may not want __GFP_ACCOUNT. That's the
> reason we had __GFP_ACCOUNT for the page cache instead of hard-coding
> it in the radix tree.

This is what I don't understand -- why would someone not want
__GFP_ACCOUNT?  For a shared resource?  But the page cache is a shared
resource.  So what is a good example of a time when an allocation should
_not_ be accounted to the cgroup?
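
For reference, the per-tree gfp mask was how the page cache opted in
before; from memory, the mapping initialisation went from something
like the first line below to the second with the XArray conversion:

	/*
	 * Roughly what fs/inode.c had before the conversion (from
	 * memory): a per-tree gfp mask, so the page cache could opt in.
	 */
	INIT_RADIX_TREE(&mapping->i_pages, GFP_ATOMIC | __GFP_ACCOUNT);

	/* Now: no per-tree gfp mask left to carry __GFP_ACCOUNT. */
	xa_init_flags(&mapping->i_pages, XA_FLAGS_LOCK_IRQ);

So a fix presumably either brings back some per-tree way to ask for
accounting, or accounts node allocations unconditionally as suggested
above.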
