Re: [RFC] Add mempressure cgroup

2012-12-01 Thread Anton Vorontsov
On Fri, Nov 30, 2012 at 03:47:25PM -0200, Luiz Capitulino wrote: [...] > > Query-and-control scheme looks very attractive, and that actually resembles my "balance" level idea, where userland tells the kernel how much reclaimable memory it has. Except that your scheme works in the reverse

Re: [RFC] Add mempressure cgroup

2012-12-01 Thread Anton Vorontsov
Hi Luiz, Thanks for your email! On Fri, Nov 30, 2012 at 03:47:25PM -0200, Luiz Capitulino wrote: [...] > > But there is one, rather major issue: we're crossing the kernel-userspace boundary. And with the scheme we'll have to cross the boundary four times: query / reply-available / control /

Re: [RFC] Add mempressure cgroup

2012-11-30 Thread Luiz Capitulino
On Wed, 28 Nov 2012 17:27:51 -0800 Anton Vorontsov wrote: > On Wed, Nov 28, 2012 at 03:14:32PM -0800, Andrew Morton wrote: [...] > > Compare this with the shrink_slab() shrinkers. With these, the VM can query and then control the clients. If something goes wrong or is out of balance,

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Anton Vorontsov
On Thu, Nov 29, 2012 at 08:14:13AM +0200, Kirill A. Shutemov wrote:
> On Wed, Nov 28, 2012 at 02:29:08AM -0800, Anton Vorontsov wrote:
> > +static int mpc_pre_destroy(struct cgroup *cg)
> > +{
> > +	struct mpc_state *mpc = cg2mpc(cg);
> > +	int ret = 0;
> > +
> > +	mutex_lock(&mpc->lock);
> > +

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Kirill A. Shutemov
On Wed, Nov 28, 2012 at 02:29:08AM -0800, Anton Vorontsov wrote:
> +static int mpc_pre_destroy(struct cgroup *cg)
> +{
> +	struct mpc_state *mpc = cg2mpc(cg);
> +	int ret = 0;
> +
> +	mutex_lock(&mpc->lock);
> +
> +	if (mpc->eventfd)
> +		ret = -EBUSY;

cgroup_rmdir() will

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Anton Vorontsov
Hello Michal, Thanks a lot for taking a look into this! On Wed, Nov 28, 2012 at 05:29:24PM +0100, Michal Hocko wrote: > On Wed 28-11-12 02:29:08, Anton Vorontsov wrote: > > This is an attempt to implement David Rientjes' idea of mempressure cgroup. The main characteristics are the

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Anton Vorontsov
On Wed, Nov 28, 2012 at 05:27:51PM -0800, Anton Vorontsov wrote: > On Wed, Nov 28, 2012 at 03:14:32PM -0800, Andrew Morton wrote: [...] > > Compare this with the shrink_slab() shrinkers. With these, the VM can query and then control the clients. If something goes wrong or is out of

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Anton Vorontsov
On Wed, Nov 28, 2012 at 03:14:32PM -0800, Andrew Morton wrote: [...] > Compare this with the shrink_slab() shrinkers. With these, the VM can query and then control the clients. If something goes wrong or is out of balance, it's the VM's problem to solve. So I'm thinking that a better

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Andrew Morton
On Wed, 28 Nov 2012 02:29:08 -0800 Anton Vorontsov wrote: > The main characteristics are the same as what I've tried to add to the vmevent API: > Internally, it uses Mel Gorman's idea of scanned/reclaimed ratio for pressure index calculation. But we don't expose the index to the

Re: [RFC] Add mempressure cgroup

2012-11-28 Thread Michal Hocko
On Wed 28-11-12 02:29:08, Anton Vorontsov wrote: > This is an attempt to implement David Rientjes' idea of mempressure cgroup. The main characteristics are the same as what I've tried to add to the vmevent API: Internally, it uses Mel Gorman's idea of scanned/reclaimed ratio for

[RFC] Add mempressure cgroup

2012-11-28 Thread Anton Vorontsov
This is an attempt to implement David Rientjes' idea of mempressure cgroup. The main characteristics are the same as what I've tried to add to the vmevent API: Internally, it uses Mel Gorman's idea of scanned/reclaimed ratio for pressure index calculation. But we don't expose the index to the
