Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Mon 02-10-17 13:24:25, Shakeel Butt wrote:
> On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko  wrote:
> > On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
> >> > I am sorry to cut the rest of your proposal because it simply goes over
> >> > the scope of the proposed solution while the usecase you are mentioning
> >> > is still possible. If we want to compare intermediate nodes (which seems
> >> > to be the case) then we can always provide a knob to opt-in - be it your
> >> > oom_gang or others.
> >>
> >> In the Roman's proposed solution we can already force the comparison
> >> of intermediate nodes using 'oom_group', I am just requesting to
> >> separate the killall semantics from it.
> >
> > oom_group _is_ about killall semantic.  And comparing killable entities
> > is just a natural thing to do. So I am not sure what you mean
> >
> 
> I am saying decouple the notion of comparable entities and killable entities.

There is no strong (bijection) relation there. Right now killable
entities are comparable (which I hope we agree is the right thing to do)
but nothing really prevents even non-killable entities from being
compared in the future.

-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Johannes Weiner
On Mon, Oct 02, 2017 at 01:24:25PM -0700, Shakeel Butt wrote:
> On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko  wrote:
> > On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
> >> > I am sorry to cut the rest of your proposal because it simply goes over
> >> > the scope of the proposed solution while the usecase you are mentioning
> >> > is still possible. If we want to compare intermediate nodes (which seems
> >> > to be the case) then we can always provide a knob to opt-in - be it your
> >> > oom_gang or others.
> >>
> >> In the Roman's proposed solution we can already force the comparison
> >> of intermediate nodes using 'oom_group', I am just requesting to
> >> separate the killall semantics from it.
> >
> > oom_group _is_ about killall semantic.  And comparing killable entities
> > is just a natural thing to do. So I am not sure what you mean
> >
> 
> I am saying decouple the notion of comparable entities and killable entities.

Feel free to send patches in a new thread.

We don't need this level of control for this series to be useful - to
us, and other users. It can easily be added on top of Roman's work.


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko  wrote:
> On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
>> > I am sorry to cut the rest of your proposal because it simply goes over
>> > the scope of the proposed solution while the usecase you are mentioning
>> > is still possible. If we want to compare intermediate nodes (which seems
>> > to be the case) then we can always provide a knob to opt-in - be it your
>> > oom_gang or others.
>>
>> In the Roman's proposed solution we can already force the comparison
>> of intermediate nodes using 'oom_group', I am just requesting to
>> separate the killall semantics from it.
>
> oom_group _is_ about killall semantic.  And comparing killable entities
> is just a natural thing to do. So I am not sure what you mean
>

I am saying we should decouple the notions of comparable entities and killable entities.


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
(Replying again as the format of my previous reply got messed up.)

On Mon, Oct 2, 2017 at 1:00 PM, Tim Hockin  wrote:
> In the example above:
>
>        root
>       /    \
>      A      D
>     / \
>    B   C
>
> Does oom_group allow me to express "compare A and D; if A is chosen
> compare B and C; kill the loser" ?  As I understand the proposal (from
> reading thread, not patch) it does not.

It will let you compare A and D, and if A is chosen, kill A, B and C.


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Mon 02-10-17 13:00:54, Tim Hockin wrote:
> In the example above:
> 
>        root
>       /    \
>      A      D
>     / \
>    B   C
> 
> Does oom_group allow me to express "compare A and D; if A is chosen
> compare B and C; kill the loser" ?  As I understand the proposal (from
> reading thread, not patch) it does not.

No, it doesn't. It allows you to kill A (recursively) as the largest
memory consumer. So, no, it cannot be used for prioritization, but again
that is not yet in the scope of the proposed solution.
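
Purely as an illustration of that semantic (the field and helper names
below are a rough sketch, not the actual patch code):

	/* If the selected victim memcg has oom_group set, kill every task
	 * in its whole subtree instead of a single process. */
	static void oom_kill_victim_memcg(struct mem_cgroup *victim)
	{
		struct mem_cgroup *iter;

		if (!victim->oom_group) {
			oom_kill_one_process_in(victim);	/* sketch helper */
			return;
		}

		for_each_mem_cgroup_tree(iter, victim)	/* victim and all descendants */
			oom_kill_all_tasks_in(iter);	/* sketch helper */
	}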
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Tim Hockin
In the example above:

        root
       /    \
      A      D
     / \
    B   C

Does oom_group allow me to express "compare A and D; if A is chosen
compare B and C; kill the loser" ?  As I understand the proposal (from
reading thread, not patch) it does not.

On Mon, Oct 2, 2017 at 12:56 PM, Michal Hocko  wrote:
> On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
>> > I am sorry to cut the rest of your proposal because it simply goes over
>> > the scope of the proposed solution while the usecase you are mentioning
>> > is still possible. If we want to compare intermediate nodes (which seems
>> > to be the case) then we can always provide a knob to opt-in - be it your
>> > oom_gang or others.
>>
>> In the Roman's proposed solution we can already force the comparison
>> of intermediate nodes using 'oom_group', I am just requesting to
>> separate the killall semantics from it.
>
> oom_group _is_ about killall semantic.  And comparing killable entities
> is just a natural thing to do. So I am not sure what you mean
>
> --
> Michal Hocko
> SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Mon 02-10-17 12:45:18, Shakeel Butt wrote:
> > I am sorry to cut the rest of your proposal because it simply goes over
> > the scope of the proposed solution while the usecase you are mentioning
> > is still possible. If we want to compare intermediate nodes (which seems
> > to be the case) then we can always provide a knob to opt-in - be it your
> > oom_gang or others.
> 
> In the Roman's proposed solution we can already force the comparison
> of intermediate nodes using 'oom_group', I am just requesting to
> separate the killall semantics from it.

oom_group _is_ about the killall semantic. And comparing killable
entities is just a natural thing to do, so I am not sure what you mean.

-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
> I am sorry to cut the rest of your proposal because it simply goes over
> the scope of the proposed solution while the usecase you are mentioning
> is still possible. If we want to compare intermediate nodes (which seems
> to be the case) then we can always provide a knob to opt-in - be it your
> oom_gang or others.

In Roman's proposed solution we can already force the comparison of
intermediate nodes using 'oom_group'; I am just requesting that the
killall semantics be separated from it.


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Mon 02-10-17 12:00:43, Shakeel Butt wrote:
> > Yes and nobody is disputing that, really. I guess the main disconnect
> > here is that different people want to have more detailed control over
> > the victim selection while the patchset tries to handle the most
> > simplistic scenario when a no userspace control over the selection is
> > required. And I would claim that this will be a last majority of setups
> > and we should address it first.
> 
> IMHO the disconnect/disagreement is which memcgs should be compared
> with each other for oom victim selection. Let's forget about oom
> priority and just take size into the account. Should the oom selection
> algorithm, compare the leaves of the hierarchy or should it compare
> siblings? For the single user system, comparing leaves makes sense
> while in a multi user system, siblings should be compared for victim
> selection.

This is simply not true. This is not about single- vs. multi-user
systems. It is about how the memcg hierarchy is organized (please have a
look at the example I've provided previously). I would dare to claim
that comparing siblings is a weaker semantic precisely because it puts
stronger constraints on how the hierarchy is organized, especially since
cgroup v2 is single-hierarchy based (so we cannot create intermediate
cgroup nodes for other controllers without automatically getting
cumulative memory consumption at those nodes).

I am sorry to cut the rest of your proposal, but it simply goes beyond
the scope of the proposed solution, while the usecase you are mentioning
is still possible. If we want to compare intermediate nodes (which seems
to be the case) then we can always provide a knob to opt in - be it your
oom_gang or something else.

I am sorry, but I would really appreciate focusing on getting step 1
done before diverging into details about potential improvements and
better control over the selection. This whole thing is an opt-in, so
there is no risk of a regression.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Shakeel Butt
> Yes and nobody is disputing that, really. I guess the main disconnect
> here is that different people want to have more detailed control over
> the victim selection while the patchset tries to handle the most
> simplistic scenario when a no userspace control over the selection is
> required. And I would claim that this will be a last majority of setups
> and we should address it first.

IMHO the disconnect/disagreement is about which memcgs should be
compared with each other for oom victim selection. Let's forget about
oom priority and just take size into account. Should the oom selection
algorithm compare the leaves of the hierarchy, or should it compare
siblings? For a single-user system, comparing leaves makes sense, while
in a multi-user system, siblings should be compared for victim
selection.

Coming back to the same example:

        root
       /    \
      A      D
     / \
    B   C

Let's view it as a multi-user system where some central job scheduler
has asked a node controller on this system to start two jobs, 'A' and
'D'. 'A' then went on to create sub-containers. Now, on system oom, IMO
the simplest sensible thing to do from the semantic point of view is to
compare 'A' and 'D', and if 'A''s usage is higher, either kill all of
'A' (if oom_group is set) or recursively find a victim memcg taking 'A'
as the root.

I have noted before that for single-user systems, comparing 'B', 'C' &
'D' is the most sensible thing to do.

Now, in the multi-user system, I can kind of force the comparison of 'A'
& 'D' by setting oom_group on 'A'. IMO that is an abuse of 'oom_group',
as it would carry two meanings/semantics: comparison unit and killall. I
would humbly suggest having two separate notions instead, say oom_gang
(if you prefer, keeping the name 'oom_group' is fine too) and killall.

For the single-user system example, 'B', 'C' and 'D' would have
'oom_gang' set, and if the user wants killall semantics too, it can be
set separately.

For the multi-user case, 'A' and 'D' would have 'oom_gang' set. Now,
let's say 'A' was selected on system oom: if 'killall' was set on 'A'
then 'A' itself would be the victim (everything in it killed); otherwise
the oom selection algorithm would recursively take 'A' as the root and
try to find a victim memcg within it.

Another major semantic of 'oom_gang' is that the leaves would always be
treated as 'oom_gang'.
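
To make the decoupling concrete, here is a rough pseudo-code sketch of
the selection loop (illustrative only, using the hypothetical oom_gang
and killall flags from above):

	/* Descend from the root; at each level compare only the children
	 * that are oom_gang (leaves are always treated as oom_gang). */
	victim = root_mem_cgroup;
	while (memcg_has_children(victim)) {
		victim = largest_oom_gang_child(victim); /* hypothetical helper */
		if (victim->killall)
			break;	/* stop here and kill this whole subtree */
	}

	/* killall decides whether the whole group dies or just one process */
	if (victim->killall)
		oom_kill_memcg(victim);
	else
		oom_kill_process_from_memcg(victim);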


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Mon 02-10-17 13:47:12, Roman Gushchin wrote:
> On Mon, Oct 02, 2017 at 02:24:34PM +0200, Michal Hocko wrote:
[...]
> > I believe the latest version (v9) looks sensible from the semantic point
> > of view and we should focus on making it into a mergeable shape.
> 
> The only thing is that after some additional thinking I don't think anymore
> that implicit propagation of oom_group is a good idea.

It would be better to discuss this under the v9 thread. This one is
already quite convoluted IMHO.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Roman Gushchin
On Mon, Oct 02, 2017 at 02:24:34PM +0200, Michal Hocko wrote:
> On Sun 01-10-17 16:29:48, Shakeel Butt wrote:
> > >
> > > Going back to Michal's example, say the user configured the following:
> > >
> > >        root
> > >       /    \
> > >      A      D
> > >     / \
> > >    B   C
> > >
> > > A global OOM event happens and we find this:
> > > - A > D
> > > - B, C, D are oomgroups
> > >
> > > What the user is telling us is that B, C, and D are compound memory
> > > consumers. They cannot be divided into their task parts from a memory
> > > point of view.
> > >
> > > However, the user doesn't say the same for A: the A subtree summarizes
> > > and controls aggregate consumption of B and C, but without groupoom
> > > set on A, the user says that A is in fact divisible into independent
> > > memory consumers B and C.
> > >
> > > If we don't have to kill all of A, but we'd have to kill all of D,
> > > does it make sense to compare the two?
> > >
> > 
> > I think Tim has given very clear explanation why comparing A & D makes
> > perfect sense. However I think the above example, a single user system
> > where a user has designed and created the whole hierarchy and then
> > attaches different jobs/applications to different nodes in this
> > hierarchy, is also a valid scenario.
> 
> Yes and nobody is disputing that, really. I guess the main disconnect
> here is that different people want to have more detailed control over
> the victim selection while the patchset tries to handle the most
> simplistic scenario when a no userspace control over the selection is
> required. And I would claim that this will be a last majority of setups
> and we should address it first.
> 
> A more fine grained control needs some more thinking to come up with a
> sensible and long term sustainable API. Just look back and see at the
> oom_score_adj story and how it ended up unusable in the end (well apart
> from never/always kill corner cases). Let's not repeat that again now.
> 
> I strongly believe that we can come up with something - be it priority
> based, BFP based or module based selection. But let's start simple with
> the most basic scenario first with a most sensible semantic implemented.

Totally agree.

> I believe the latest version (v9) looks sensible from the semantic point
> of view and we should focus on making it into a mergeable shape.

The only thing is that after some additional thinking I no longer think
that implicit propagation of oom_group is a good idea.

Let me explain: assume we have memcg A with memory.max and
memory.oom_group set, and nested memcg A/B with memory.max set. Let's
imagine we have an OOM event in A/B. What is the expected system
behavior? The OOM is scoped to A/B, so any action should also be scoped
to A/B. We really shouldn't touch processes which do not belong to A/B.
That means we should either kill the biggest process in A/B or all
processes in A/B. It's natural to make A/B/memory.oom_group responsible
for this decision; it's strange to make it depend on
A/memory.oom_group, IMO. It really makes no sense, and it makes the
oom_group knob really hard to describe.
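
A minimal sketch of the behavior I'm describing (illustrative only; the
kill helpers below are made-up names, not actual kernel functions):

	/* OOM scoped to oc->memcg (A/B in the example above): consult only
	 * that memcg's own oom_group, never an ancestor's, and never touch
	 * anything outside its subtree. */
	if (oc->memcg && oc->memcg->oom_group)
		kill_all_tasks_in_subtree(oc->memcg);	/* all of A/B */
	else
		kill_biggest_process_in(oc->memcg);	/* one victim from A/B */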

Also, after some off-list discussion, we've realized that the
memory.oom_group knob should be delegatable. The workload should have
control over it to express dependencies between its processes.

Thanks!


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Michal Hocko
On Sun 01-10-17 16:29:48, Shakeel Butt wrote:
> >
> > Going back to Michal's example, say the user configured the following:
> >
> >        root
> >       /    \
> >      A      D
> >     / \
> >    B   C
> >
> > A global OOM event happens and we find this:
> > - A > D
> > - B, C, D are oomgroups
> >
> > What the user is telling us is that B, C, and D are compound memory
> > consumers. They cannot be divided into their task parts from a memory
> > point of view.
> >
> > However, the user doesn't say the same for A: the A subtree summarizes
> > and controls aggregate consumption of B and C, but without groupoom
> > set on A, the user says that A is in fact divisible into independent
> > memory consumers B and C.
> >
> > If we don't have to kill all of A, but we'd have to kill all of D,
> > does it make sense to compare the two?
> >
> 
> I think Tim has given very clear explanation why comparing A & D makes
> perfect sense. However I think the above example, a single user system
> where a user has designed and created the whole hierarchy and then
> attaches different jobs/applications to different nodes in this
> hierarchy, is also a valid scenario.

Yes, and nobody is disputing that, really. I guess the main disconnect
here is that different people want more detailed control over the victim
selection, while the patchset tries to handle the simplest scenario,
where no userspace control over the selection is required. And I would
claim that this covers the vast majority of setups and we should address
it first.

More fine-grained control needs some more thinking to come up with a
sensible and long-term sustainable API. Just look back at the
oom_score_adj story and how it ended up unusable in the end (well, apart
from the never/always-kill corner cases). Let's not repeat that again
now.

I strongly believe that we can come up with something - be it priority
based, BPF based or module based selection. But let's start simple, with
the most basic scenario first and the most sensible semantics
implemented.

I believe the latest version (v9) looks sensible from the semantic point
of view and we should focus on getting it into a mergeable shape.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-02 Thread Tetsuo Handa
Shakeel Butt wrote:
> I think Tim has given very clear explanation why comparing A & D makes
> perfect sense. However I think the above example, a single user system
> where a user has designed and created the whole hierarchy and then
> attaches different jobs/applications to different nodes in this
> hierarchy, is also a valid scenario. One solution I can think of, to
> cater both scenarios, is to introduce a notion of 'bypass oom' or not
> include a memcg for oom comparision and instead include its children
> in the comparison.

I'm not following this thread closely because I don't use memcg. But if
there are multiple scenarios, what about offloading memcg OOM handling
to loadable kernel modules (the way many filesystems are plugged in
behind the VFS interface)? That would let us do trial and error more
casually.


Re: [v8 0/4] cgroup-aware OOM killer

2017-10-01 Thread Shakeel Butt
>
> Going back to Michal's example, say the user configured the following:
>
>        root
>       /    \
>      A      D
>     / \
>    B   C
>
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
>
> What the user is telling us is that B, C, and D are compound memory
> consumers. They cannot be divided into their task parts from a memory
> point of view.
>
> However, the user doesn't say the same for A: the A subtree summarizes
> and controls aggregate consumption of B and C, but without groupoom
> set on A, the user says that A is in fact divisible into independent
> memory consumers B and C.
>
> If we don't have to kill all of A, but we'd have to kill all of D,
> does it make sense to compare the two?
>

I think Tim has given a very clear explanation of why comparing A & D
makes perfect sense. However, I think the above example - a single-user
system where a user has designed and created the whole hierarchy and
then attaches different jobs/applications to different nodes in this
hierarchy - is also a valid scenario. One solution I can think of, to
cater to both scenarios, is to introduce a notion of 'bypass oom': do
not include such a memcg in the oom comparison and instead include its
children in the comparison.

So, in the same above example:
        root
       /    \
     A(b)    D
     /  \
    B    C

A is marked as bypass and thus B and C are to be compared to D. So, for
the single-user scenario, all the internal nodes are marked 'bypass oom
comparison' and the oom_priority of the leaves has to be set to the same
value.

Below is pseudo code for select_victim_memcg() based on this idea and
David's previous pseudo code. The calculation of the size of a memcg is
not very well baked here yet. I am working on it and I plan to have a
patch based on Roman's v9 "mm, oom: cgroup-aware OOM killer" patch.


struct mem_cgroup *memcg = root_mem_cgroup;
struct mem_cgroup *selected_memcg = root_mem_cgroup;
struct mem_cgroup *low_memcg;
unsigned long low_priority;
unsigned long prev_badness = memcg_oom_badness(memcg); // Roman's code
LIST_HEAD(queue);

next_level:
low_memcg = NULL;
low_priority = ULONG_MAX;

next:
for_each_child_of_memcg(it, memcg) {
	unsigned long prio = it->oom_priority;
	unsigned long badness = 0;

	/* 'bypass oom' nodes are not compared themselves; queue them so
	 * that their children are compared instead. */
	if (it->bypass_oom && !it->oom_group &&
	    memcg_has_children(it)) {
		list_add(&it->oom_queue, &queue);
		continue;
	}

	if (prio > low_priority)
		continue;

	if (prio == low_priority) {
		badness = mem_cgroup_usage(it); // for simplicity, needs more thinking
		if (badness < prev_badness)
			continue;
	}

	low_memcg = it;
	low_priority = prio;
	prev_badness = badness ?: mem_cgroup_usage(it); // for simplicity
}
if (!list_empty(&queue)) {
	memcg = list_last_entry(&queue, struct mem_cgroup, oom_queue);
	list_del(&memcg->oom_queue);
	goto next;
}
if (low_memcg) {
	selected_memcg = memcg = low_memcg;
	prev_badness = 0;
	if (!low_memcg->oom_group)
		goto next_level;
}
if (selected_memcg->oom_group)
	oom_kill_memcg(selected_memcg);
else
	oom_kill_process_from_memcg(selected_memcg);


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Tim Hockin
On Wed, Sep 27, 2017 at 9:23 AM, Roman Gushchin  wrote:
> On Wed, Sep 27, 2017 at 08:35:50AM -0700, Tim Hockin wrote:
>> On Wed, Sep 27, 2017 at 12:43 AM, Michal Hocko  wrote:
>> > On Tue 26-09-17 20:37:37, Tim Hockin wrote:
>> > [...]
>> >> I feel like David has offered examples here, and many of us at Google
>> >> have offered examples as long ago as 2013 (if I recall) of cases where
>> >> the proposed heuristic is EXACTLY WRONG.
>> >
>> > I do not think we have discussed anything resembling the current
>> > approach. And I would really appreciate some more examples where
>> > decisions based on leaf nodes would be EXACTLY WRONG.
>> >
>> >> We need OOM behavior to kill in a deterministic order configured by
>> >> policy.
>> >
>> > And nobody is objecting to this usecase. I think we can build a priority
>> > policy on top of leaf-based decision as well. The main point we are
>> > trying to sort out here is a reasonable semantic that would work for
>> > most workloads. Sibling based selection will simply not work on those
>> > that have to use deeper hierarchies for organizational purposes. I
>> > haven't heard a counter argument for that example yet.
>>
>
> Hi, Tim!
>
>> We have a priority-based, multi-user cluster.  That cluster runs a
>> variety of work, including critical things like search and gmail, as
>> well as non-critical things like batch work.  We try to offer our
>> users an SLA around how often they will be killed by factors outside
>> themselves, but we also want to get higher utilization.  We know for a
>> fact (data, lots of data) that most jobs have spare memory capacity,
>> set aside for spikes or simply because accurate sizing is hard.  We
>> can sell "guaranteed" resources to critical jobs, with a high SLA.  We
>> can sell "best effort" resources to non-critical jobs with a low SLA.
>> We achieve much better overall utilization this way.
>
> This is well understood.
>
>>
>> I need to represent the priority of these tasks in a way that gives me
>> a very strong promise that, in case of system OOM, the non-critical
>> jobs will be chosen before the critical jobs.  Regardless of size.
>> Regardless of how many non-critical jobs have to die.  I'd rather kill
>> *all* of the non-critical jobs than a single critical job.  Size of
>> the process or cgroup is simply not a factor, and honestly given 2
>> options of equal priority I'd say age matters more than size.
>>
>> So concretely I have 2 first-level cgroups, one for "guaranteed" and
>> one for "best effort" classes.  I always want to kill from "best
>> effort", even if that means killing 100 small cgroups, before touching
>> "guaranteed".
>>
>> I apologize if this is not as thorough as the rest of the thread - I
>> am somewhat out of touch with the guts of it all these days.  I just
>> feel compelled to indicate that, as a historical user (via Google
>> systems) and current user (via Kubernetes), some of the assertions
>> being made here do not ring true for our very real use cases.  I
>> desperately want cgroup-aware OOM handing, but it has to be
>> policy-based or it is just not useful to us.
>
> A policy-based approach was suggested by Michal at a very beginning of
> this discussion. Although nobody had any strong objections against it,
> we've agreed that this is out of scope of this patchset.
>
> The idea of this patchset is to introduce an ability to select a memcg
> as an OOM victim with the following optional killing of all belonging tasks.
> I believe, it's absolutely mandatory for _any_ further development
> of the OOM killer, which wants to deal with memory cgroups as OOM entities.
>
> If you think that it makes impossible to support some use cases in the future,
> let's discuss it. Otherwise, I'd prefer to finish this part of the work,
> and proceed to the following improvements on top of it.
>
> Thank you!

I am 100% in favor of killing whole groups.  We want that too.  I just
needed to express disagreement with statements that size-based
decisions could not produce bad results.  They can and do.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Roman Gushchin
On Wed, Sep 27, 2017 at 08:35:50AM -0700, Tim Hockin wrote:
> On Wed, Sep 27, 2017 at 12:43 AM, Michal Hocko  wrote:
> > On Tue 26-09-17 20:37:37, Tim Hockin wrote:
> > [...]
> >> I feel like David has offered examples here, and many of us at Google
> >> have offered examples as long ago as 2013 (if I recall) of cases where
> >> the proposed heuristic is EXACTLY WRONG.
> >
> > I do not think we have discussed anything resembling the current
> > approach. And I would really appreciate some more examples where
> > decisions based on leaf nodes would be EXACTLY WRONG.
> >
> >> We need OOM behavior to kill in a deterministic order configured by
> >> policy.
> >
> > And nobody is objecting to this usecase. I think we can build a priority
> > policy on top of leaf-based decision as well. The main point we are
> > trying to sort out here is a reasonable semantic that would work for
> > most workloads. Sibling based selection will simply not work on those
> > that have to use deeper hierarchies for organizational purposes. I
> > haven't heard a counter argument for that example yet.
>

Hi, Tim!

> We have a priority-based, multi-user cluster.  That cluster runs a
> variety of work, including critical things like search and gmail, as
> well as non-critical things like batch work.  We try to offer our
> users an SLA around how often they will be killed by factors outside
> themselves, but we also want to get higher utilization.  We know for a
> fact (data, lots of data) that most jobs have spare memory capacity,
> set aside for spikes or simply because accurate sizing is hard.  We
> can sell "guaranteed" resources to critical jobs, with a high SLA.  We
> can sell "best effort" resources to non-critical jobs with a low SLA.
> We achieve much better overall utilization this way.

This is well understood.

> 
> I need to represent the priority of these tasks in a way that gives me
> a very strong promise that, in case of system OOM, the non-critical
> jobs will be chosen before the critical jobs.  Regardless of size.
> Regardless of how many non-critical jobs have to die.  I'd rather kill
> *all* of the non-critical jobs than a single critical job.  Size of
> the process or cgroup is simply not a factor, and honestly given 2
> options of equal priority I'd say age matters more than size.
> 
> So concretely I have 2 first-level cgroups, one for "guaranteed" and
> one for "best effort" classes.  I always want to kill from "best
> effort", even if that means killing 100 small cgroups, before touching
> "guaranteed".
> 
> I apologize if this is not as thorough as the rest of the thread - I
> am somewhat out of touch with the guts of it all these days.  I just
> feel compelled to indicate that, as a historical user (via Google
> systems) and current user (via Kubernetes), some of the assertions
> being made here do not ring true for our very real use cases.  I
> desperately want cgroup-aware OOM handing, but it has to be
> policy-based or it is just not useful to us.

A policy-based approach was suggested by Michal at the very beginning
of this discussion. Although nobody had any strong objections to it,
we've agreed that it is out of scope for this patchset.

The idea of this patchset is to introduce the ability to select a memcg
as an OOM victim, with optional killing of all of its tasks afterwards.
I believe it's absolutely mandatory for _any_ further development of the
OOM killer that wants to deal with memory cgroups as OOM entities.

If you think that it makes it impossible to support some use cases in
the future, let's discuss it. Otherwise, I'd prefer to finish this part
of the work and proceed to further improvements on top of it.

Thank you!


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Tim Hockin
On Wed, Sep 27, 2017 at 12:43 AM, Michal Hocko  wrote:
> On Tue 26-09-17 20:37:37, Tim Hockin wrote:
> [...]
>> I feel like David has offered examples here, and many of us at Google
>> have offered examples as long ago as 2013 (if I recall) of cases where
>> the proposed heuristic is EXACTLY WRONG.
>
> I do not think we have discussed anything resembling the current
> approach. And I would really appreciate some more examples where
> decisions based on leaf nodes would be EXACTLY WRONG.
>
>> We need OOM behavior to kill in a deterministic order configured by
>> policy.
>
> And nobody is objecting to this usecase. I think we can build a priority
> policy on top of leaf-based decision as well. The main point we are
> trying to sort out here is a reasonable semantic that would work for
> most workloads. Sibling based selection will simply not work on those
> that have to use deeper hierarchies for organizational purposes. I
> haven't heard a counter argument for that example yet.

We have a priority-based, multi-user cluster.  That cluster runs a
variety of work, including critical things like search and gmail, as
well as non-critical things like batch work.  We try to offer our
users an SLA around how often they will be killed by factors outside
themselves, but we also want to get higher utilization.  We know for a
fact (data, lots of data) that most jobs have spare memory capacity,
set aside for spikes or simply because accurate sizing is hard.  We
can sell "guaranteed" resources to critical jobs, with a high SLA.  We
can sell "best effort" resources to non-critical jobs with a low SLA.
We achieve much better overall utilization this way.

I need to represent the priority of these tasks in a way that gives me
a very strong promise that, in case of system OOM, the non-critical
jobs will be chosen before the critical jobs.  Regardless of size.
Regardless of how many non-critical jobs have to die.  I'd rather kill
*all* of the non-critical jobs than a single critical job.  Size of
the process or cgroup is simply not a factor, and honestly given 2
options of equal priority I'd say age matters more than size.

So concretely I have 2 first-level cgroups, one for "guaranteed" and
one for "best effort" classes.  I always want to kill from "best
effort", even if that means killing 100 small cgroups, before touching
"guaranteed".

I apologize if this is not as thorough as the rest of the thread - I
am somewhat out of touch with the guts of it all these days.  I just
feel compelled to indicate that, as a historical user (via Google
systems) and current user (via Kubernetes), some of the assertions
being made here do not ring true for our very real use cases.  I
desperately want cgroup-aware OOM handling, but it has to be
policy-based or it is just not useful to us.

Thanks.

Tim


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Roman Gushchin
On Wed, Sep 27, 2017 at 09:43:19AM +0200, Michal Hocko wrote:
> On Tue 26-09-17 20:37:37, Tim Hockin wrote:
> [...]
> > I feel like David has offered examples here, and many of us at Google
> > have offered examples as long ago as 2013 (if I recall) of cases where
> > the proposed heuristic is EXACTLY WRONG.
> 
> I do not think we have discussed anything resembling the current
> approach. And I would really appreciate some more examples where
> decisions based on leaf nodes would be EXACTLY WRONG.
>

I would agree here.

The two-step approach under discussion (select the biggest leaf or oom_group
memcg, then select the largest process inside it) really does look like the
way to go.
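
Roughly, something like this (an illustrative sketch only, not the actual
code; for_each_oom_entity() and memcg_oom_badness() are made-up placeholders
for the real iteration and accounting):

static struct mem_cgroup *select_victim_memcg(struct mem_cgroup *root)
{
	struct mem_cgroup *memcg, *victim = NULL;
	unsigned long score, max_score = 0;

	/* step 1: compare leaf memcgs and oom_group subtrees by their
	 * cumulative memory footprint */
	for_each_oom_entity(memcg, root) {
		score = memcg_oom_badness(memcg);
		if (score > max_score) {
			max_score = score;
			victim = memcg;
		}
	}

	/* step 2 happens on the result: kill the largest task inside the
	 * victim or, if oom_group is set on it, all of its tasks */
	return victim;
}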

It should work well in practice and it allows further development.
By default it will catch workloads that are leaking child processes,
which is an advantage over the existing algorithm.

Both the strictly hierarchical approach (as in v8) and the purely flat one
(from Johannes) are more limiting. In the first case, deep hierarchies are
penalized (as Michal mentioned) and we are stuck with a tree-traversal
policy (Tejun's point).

In the second case, further development becomes questionable: any new idea
(say, oom_priorities, or a new useful memcg metric) would have to be applied
to processes and memcgs simultaneously.
We also drop any notion of memcg-level fairness and pick up the implementation
issues I mentioned earlier. Mixing tasks and memcgs leads to much hairier
code, and the OOM code is already quite hairy. Comparing killable entities
of different kinds is also a leaky abstraction, as we can't predict how much
memory killing a single process will release (for example, when the process
is the init of a pid namespace).

> > We need OOM behavior to kill in a deterministic order configured by
> > policy.
> 
> And nobody is objecting to this usecase. I think we can build a priority
> policy on top of leaf-based decision as well. The main point we are
> trying to sort out here is a reasonable semantic that would work for
> most workloads. Sibling based selection will simply not work on those
> that have to use deeper hierarchies for organizational purposes. I
> haven't heard a counter argument for that example yet.

Yes, implementing oom_priorities is a ~15-line patch on top of the approach
under discussion. David can use this small out-of-tree patch for now; in any
case it's a step forward compared to the existing state.
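
Roughly, it boils down to a comparison like the one below dropped into the
victim selection loop (illustrative only; the memcg_oom_priority() accessor
and the "higher value dies first" convention are assumptions, not the actual
patch):

static bool oom_prefer(struct mem_cgroup *a, unsigned long badness_a,
		       struct mem_cgroup *b, unsigned long badness_b)
{
	/* assumed convention: a larger oom_priority means "kill me first";
	 * fall back to the size-based badness only on equal priorities */
	if (memcg_oom_priority(a) != memcg_oom_priority(b))
		return memcg_oom_priority(a) > memcg_oom_priority(b);
	return badness_a > badness_b;
}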


Overall, do we have any open questions left? Does anyone have any strong
arguments against the design under discussion?

Thanks!


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Roman Gushchin
On Wed, Sep 27, 2017 at 09:37:44AM +0200, Michal Hocko wrote:
> On Tue 26-09-17 14:04:41, David Rientjes wrote:
> > On Tue, 26 Sep 2017, Michal Hocko wrote:
> > 
> > > > No, I agree that we shouldn't compare sibling memory cgroups based on 
> > > > different criteria depending on whether group_oom is set or not.
> > > > 
> > > > I think it would be better to compare siblings based on the same 
> > > > criteria 
> > > > independent of group_oom if the user has mounted the hierarchy with the 
> > > > new mode (I think we all agree that the mount option is needed).  It's 
> > > > very easy to describe to the user and the selection is simple to 
> > > > understand. 
> > > 
> > > I disagree. Just take the most simplistic example when cgroups reflect
> > > some other higher level organization - e.g. school with teachers,
> > > students and admins as the top level cgroups to control the proper cpu
> > > share load. Now you want to have a fair OOM selection between different
> > > entities. Do you consider selecting students all the time as an expected
> > > behavior just because they are the largest group? This just doesn't
> > > make any sense to me.
> > > 
> > 
> > Are you referring to this?
> > 
> >         root
> >        /    \
> > students    admins
> >  /   \      /   \
> > A     B    C     D
> > 
> > If the cumulative usage of all students exceeds the cumulative usage of 
> > all admins, yes, the choice is to kill from the /students tree.
> 
> Which is wrong IMHO because the number of students is likely much
> larger than the number of admins (or teachers), yet it might be an admin
> workload that runs away. This example simply shows how comparing siblings
> highly depends on the way you organize the hierarchy rather than on the
> actual memory consumer runaways, which are the primary thing the OOM
> killer is supposed to handle.
> 
> > This has been Roman's design from the very beginning.
> 
> I suspect this was the case because deeper hierarchies for
> organizational purposes haven't been considered.
> 
> > If the preference is to kill 
> > the single largest process, which may be attached to either subtree, you 
> > would not have opted-in to the new heuristic.
> 
> I believe you are making a wrong assumption here. The container cleanup
> is sound reason to opt in and deeper hierarchies are simply required in
> the cgroup v2 world where you do not have separate hierarchies.
>  
> > > > Then, once a cgroup has been chosen as the victim cgroup, 
> > > > kill the process with the highest badness, allowing the user to 
> > > > influence 
> > > > that with /proc/pid/oom_score_adj just as today, if group_oom is 
> > > > disabled; 
> > > > otherwise, kill all eligible processes if enabled.
> > > 
> > > And now, what should be the semantic of group_oom on an intermediate
> > > (non-leaf) memcg? Why should we compare it to other killable entities?
> > > Roman was mentioning a setup where a _single_ workload consists of a
> > > deeper hierarchy which has to be shut down at once. It absolutely makes
> > > sense to consider the cumulative memory of that hierarchy when we are
> > > going to kill it all.
> > > 
> > 
> > If group_oom is enabled on an intermediate memcg, I think the intuitive 
> > way to handle it would be that all descendants are also implicitly or 
> > explicitly group_oom.
> 
> This is an interesting point. I would tend to agree here. If somebody
> requires all-in clean up up the hierarchy it feels strange that a
> subtree would disagree (e.g. during memcg oom on the subtree). I can
> hardly see a usecase that would really need a different group_oom policy
> depending on where in the hierarchy the oom happened to be honest.
> Roman?

Yes, at first glance it seems strange to apply a setting from outside the
OOMing cgroup to the subtree, but actually it's not. The oom_group setting
should basically mean that the OOM killer will not kill a random task in
that subtree, and it doesn't matter whether it was a global or a memcg-wide OOM.
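
In other words, roughly: once a victim is picked, walk from its memcg up
towards the OOMing scope and treat the highest ancestor with oom_group set
as the unit to kill. An illustrative sketch (memcg_oom_group_set() is a
made-up accessor, not the actual patch):

static struct mem_cgroup *oom_kill_unit(struct mem_cgroup *victim_memcg,
					struct mem_cgroup *oom_domain)
{
	struct mem_cgroup *memcg, *unit = NULL;

	/* take the highest ancestor within the OOMing scope that has
	 * oom_group set as the unit to kill */
	for (memcg = victim_memcg; memcg; memcg = parent_mem_cgroup(memcg)) {
		if (memcg_oom_group_set(memcg))
			unit = memcg;
		if (memcg == oom_domain)
			break;
	}
	return unit;	/* NULL means: kill just the single task */
}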

Applied to v9. Thanks!


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Michal Hocko
On Tue 26-09-17 20:37:37, Tim Hockin wrote:
[...]
> I feel like David has offered examples here, and many of us at Google
> have offered examples as long ago as 2013 (if I recall) of cases where
> the proposed heuristic is EXACTLY WRONG.

I do not think we have discussed anything resembling the current
approach. And I would really appreciate some more examples where
decisions based on leaf nodes would be EXACTLY WRONG.

> We need OOM behavior to kill in a deterministic order configured by
> policy.

And nobody is objecting to this usecase. I think we can build a priority
policy on top of leaf-based decision as well. The main point we are
trying to sort out here is a reasonable semantic that would work for
most workloads. Sibling based selection will simply not work on those
that have to use deeper hierarchies for organizational purposes. I
haven't heard a counter argument for that example yet.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-27 Thread Michal Hocko
On Tue 26-09-17 14:04:41, David Rientjes wrote:
> On Tue, 26 Sep 2017, Michal Hocko wrote:
> 
> > > No, I agree that we shouldn't compare sibling memory cgroups based on 
> > > different criteria depending on whether group_oom is set or not.
> > > 
> > > I think it would be better to compare siblings based on the same criteria 
> > > independent of group_oom if the user has mounted the hierarchy with the 
> > > new mode (I think we all agree that the mount option is needed).  It's 
> > > very easy to describe to the user and the selection is simple to 
> > > understand. 
> > 
> > I disagree. Just take the most simplistic example when cgroups reflect
> > some other higher level organization - e.g. school with teachers,
> > students and admins as the top level cgroups to control the proper cpu
> > share load. Now you want to have a fair OOM selection between different
> > entities. Do you consider selecting students all the time as an expected
> > behavior just because they are the largest group? This just doesn't
> > make any sense to me.
> > 
> 
> Are you referring to this?
> 
>         root
>        /    \
> students    admins
>  /   \      /   \
> A     B    C     D
> 
> If the cumulative usage of all students exceeds the cumulative usage of 
> all admins, yes, the choice is to kill from the /students tree.

Which is wrong IMHO because the number of students is likely much
larger than the number of admins (or teachers), yet it might be an admin
workload that runs away. This example simply shows how comparing siblings
highly depends on the way you organize the hierarchy rather than on the
actual memory consumer runaways, which are the primary thing the OOM
killer is supposed to handle.
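
To put numbers on it: say /students holds 30 cgroups of ~100M each (~3G in
total) while /admins holds two admins, one of which has leaked its way up to
1.5G (~2G in total). The top-level sibling comparison still picks /students,
because 3G > 2G, and an innocent student gets killed while the actual
runaway keeps growing under /admins.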

> This has been Roman's design from the very beginning.

I suspect this was the case because deeper hierarchies for
organizational purposes haven't been considered.

> If the preference is to kill 
> the single largest process, which may be attached to either subtree, you 
> would not have opted-in to the new heuristic.

I believe you are making a wrong assumption here. The container cleanup
is sound reason to opt in and deeper hierarchies are simply required in
the cgroup v2 world where you do not have separate hierarchies.
 
> > > Then, once a cgroup has been chosen as the victim cgroup, 
> > > kill the process with the highest badness, allowing the user to influence 
> > > that with /proc/pid/oom_score_adj just as today, if group_oom is 
> > > disabled; 
> > > otherwise, kill all eligible processes if enabled.
> > 
> > And now, what should be the semantic of group_oom on an intermediate
> > (non-leaf) memcg? Why should we compare it to other killable entities?
> > Roman was mentioning a setup where a _single_ workload consists of a
> > deeper hierarchy which has to be shut down at once. It absolutely makes
> > sense to consider the cumulative memory of that hierarchy when we are
> > going to kill it all.
> > 
> 
> If group_oom is enabled on an intermediate memcg, I think the intuitive 
> way to handle it would be that all descendants are also implicitly or 
> explicitly group_oom.

This is an interesting point. I would tend to agree here. If somebody
requires all-in clean up up the hierarchy it feels strange that a
subtree would disagree (e.g. during memcg oom on the subtree). I can
hardly see a usecase that would really need a different group_oom policy
depending on where in the hierarchy the oom happened to be honest.
Roman?

> It is compared to sibling cgroups based on 
> cumulative usage at the time of oom and the largest is chosen and 
> iterated.  The point is to separate out the selection heuristic (policy) 
> from group_oom (mechanism) so that we don't bias or prefer subtrees based 
> on group_oom, which makes this much more complex.

I disagree. group_oom determines the killable entity, and making a decision
based on non-killable entities is weird, as already pointed out.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Tim Hockin
I'm excited to see this being discussed again - it's been years since
the last attempt.  I've tried to stay out of the conversation, but I
feel obligated to say something and then go back to lurking.

On Tue, Sep 26, 2017 at 10:26 AM, Johannes Weiner  wrote:
> On Tue, Sep 26, 2017 at 03:30:40PM +0200, Michal Hocko wrote:
>> On Tue 26-09-17 13:13:00, Roman Gushchin wrote:
>> > On Tue, Sep 26, 2017 at 01:21:34PM +0200, Michal Hocko wrote:
>> > > On Tue 26-09-17 11:59:25, Roman Gushchin wrote:
>> > > > On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
>> > > > > On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
>> > > > > [...]
>> > > > > > I'm not against this model, as I've said before. It feels logical,
>> > > > > > and will work fine in most cases.
>> > > > > >
>> > > > > > In this case we can drop any mount/boot options, because it 
>> > > > > > preserves
>> > > > > > the existing behavior in the default configuration. A big 
>> > > > > > advantage.
>> > > > >
>> > > > > I am not sure about this. We still need an opt-in, regardless, 
>> > > > > because
>> > > > > selecting the largest process from the largest memcg != selecting the
>> > > > > largest task (just consider memcgs with many processes example).
>> > > >
>> > > > As I understand Johannes, he suggested to compare individual processes 
>> > > > with
>> > > > group_oom mem cgroups. In other words, always select a killable entity 
>> > > > with
>> > > > the biggest memory footprint.
>> > > >
>> > > > This is slightly different from my v8 approach, where I treat leaf 
>> > > > memcgs
>> > > > as indivisible memory consumers independent on group_oom setting, so
>> > > > by default I'm selecting the biggest task in the biggest memcg.
>> > >
>> > > My reading is that he is actually proposing the same thing I've been
>> > > mentioning. Simply select the biggest killable entity (leaf memcg or
>> > > group_oom hierarchy) and either kill the largest task in that entity
>> > > (for !group_oom) or the whole memcg/hierarchy otherwise.
>> >
>> > He wrote the following:
>> > "So I'm leaning toward the second model: compare all oomgroups and
>> > standalone tasks in the system with each other, independent of the
>> > failed hierarchical control structure. Then kill the biggest of them."
>>
>> I will let Johannes to comment but I believe this is just a
>> misunderstanding. If we compared only the biggest task from each memcg
>> then we are basically losing our fairness objective, aren't we?
>
> Sorry about the confusion.
>
> Yeah I was making the case for what Michal proposed, to kill the
> biggest terminal consumer, which is either a task or an oomgroup.
>
> You'd basically iterate through all the tasks and cgroups in the
> system and pick the biggest task that isn't in an oom group or the
> biggest oom group and then kill that.
>
> Yeah, you'd have to compare the memory footprints of tasks with the
> memory footprints of cgroups. These aren't defined identically, and
> tasks don't get attributed every type of allocation that a cgroup
> would. But it should get us in the ballpark, and I cannot picture a
> scenario where this would lead to a completely undesirable outcome.

That last sentence:

> I cannot picture a scenario where this would lead to a completely undesirable 
> outcome.

I feel like David has offered examples here, and many of us at Google
have offered examples as long ago as 2013 (if I recall) of cases where
the proposed heuristic is EXACTLY WRONG.  We need OOM behavior to kill
in a deterministic order configured by policy.  Sometimes, I would
literally prefer to kill every other cgroup before killing "the big
one".  The policy is *all* that matters for shared clusters of varying
users and priorities.

We did this in Borg, and it works REALLY well.  Has for years.  Now
that the world is adopting Kubernetes we need it again, only it's much
harder to carry a kernel patch in this case.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread David Rientjes
On Tue, 26 Sep 2017, Michal Hocko wrote:

> > No, I agree that we shouldn't compare sibling memory cgroups based on 
> > different criteria depending on whether group_oom is set or not.
> > 
> > I think it would be better to compare siblings based on the same criteria 
> > independent of group_oom if the user has mounted the hierarchy with the 
> > new mode (I think we all agree that the mount option is needed).  It's 
> > very easy to describe to the user and the selection is simple to 
> > understand. 
> 
> I disagree. Just take the most simplistic example when cgroups reflect
> some other higher level organization - e.g. school with teachers,
> students and admins as the top level cgroups to control the proper cpu
> share load. Now you want to have a fair OOM selection between different
> entities. Do you consider selecting students all the time as an expected
> behavior just because they are the largest group? This just doesn't
> make any sense to me.
> 

Are you referring to this?

        root
       /    \
students    admins
 /   \      /   \
A     B    C     D

If the cumulative usage of all students exceeds the cumulative usage of 
all admins, yes, the choice is to kill from the /students tree.  This has 
been Roman's design from the very beginning.  If the preference is to kill 
the single largest process, which may be attached to either subtree, you 
would not have opted-in to the new heuristic.

> > Then, once a cgroup has been chosen as the victim cgroup, 
> > kill the process with the highest badness, allowing the user to influence 
> > that with /proc/pid/oom_score_adj just as today, if group_oom is disabled; 
> > otherwise, kill all eligible processes if enabled.
> 
> And now, what should be the semantic of group_oom on an intermediate
> (non-leaf) memcg? Why should we compare it to other killable entities?
> Roman was mentioning a setup where a _single_ workload consists of a
> deeper hierarchy which has to be shut down at once. It absolutely makes
> sense to consider the cumulative memory of that hierarchy when we are
> going to kill it all.
> 

If group_oom is enabled on an intermediate memcg, I think the intuitive 
way to handle it would be that all descendants are also implicitly or 
explicitly group_oom.  It is compared to sibling cgroups based on 
cumulative usage at the time of oom and the largest is chosen and 
iterated.  The point is to separate out the selection heuristic (policy) 
from group_oom (mechanism) so that we don't bias or prefer subtrees based 
on group_oom, which makes this much more complex.
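
Roughly the kind of iteration I mean (an illustrative sketch only;
memcg_is_leaf() and largest_child_by_usage() stand in for the real code):

static struct mem_cgroup *descend_to_victim(struct mem_cgroup *root)
{
	struct mem_cgroup *memcg = root;

	/* at each level pick the sibling with the largest cumulative
	 * usage and iterate into it */
	while (!memcg_is_leaf(memcg))
		memcg = largest_child_by_usage(memcg);

	/* group_oom (mechanism) then decides whether the whole victim
	 * or just its worst task gets killed */
	return memcg;
}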

> But what you are proposing is something different from oom_score_adj.
> That only sets bias to the killable entities while priorities on
> intermediate non-killable memcgs controls how the whole oom hierarchy
> is traversed. So a non-killable intermediate memcg can hugely influence
> what gets killed in the end.

Why is an intermediate non-killable memcg allowed at all?  Cgroup oom 
priorities should not be allowed to disable oom killing, they should only 
set a priority.  The only reason an intermediate cgroup should be 
non-killable is if there are no processes attached, but I don't think 
anyone is arguing we should just do nothing in that scenario.  The point 
is that the user has influence over the decisionmaking with a per-process 
heuristic via oom_score_adj and should also have influence over the 
decisionmaking with a per-cgroup heuristic.

> This is IMHO a tricky and I would even dare
> to claim a wrong semantic. I can see priorities being very useful on
> killable entities for sure. I am not entirely sure what would be the
> best approach yet and that is why I've suggested that to postpone to
> after we settle with a simple approach first. Bringing priorities back
> to the discussion again will not help to move that forward I am afraid.
> 

I agree with keeping it as simple as possible: some users want specific 
victim selection, it should be easy to document, and it shouldn't be 
influenced by an excessive amount of usage in another subtree the user 
has no control over (/admins vs /students), which would prevent the user 
from declaring that it really wants to be the first oom victim, or the 
admin from declaring that it prefers something else to be killed first.

My suggestion is that Roman's implementation is clear, well defined, and 
has real-world usecases and it should be the direction that this moves in.  
I think victim selection and group_oom are distinct and should not 
influence the decisionmaking.  I think that oom_priority should influence 
the decisionmaking.

When mounted with the new option, as the oom hierarchy is iterated, 
compare all sibling cgroups by cumulative size unless an oom 
priority overrides that (either the user specifying it wants to be oom 
killed or the admin specifying it prefers something else).  When a victim 
memcg is chosen, use group_oom to determine what should be killed, 
otherwise choose by oom_score_adj.  I can't imagine 

Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Johannes Weiner
On Tue, Sep 26, 2017 at 03:30:40PM +0200, Michal Hocko wrote:
> On Tue 26-09-17 13:13:00, Roman Gushchin wrote:
> > On Tue, Sep 26, 2017 at 01:21:34PM +0200, Michal Hocko wrote:
> > > On Tue 26-09-17 11:59:25, Roman Gushchin wrote:
> > > > On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
> > > > > On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
> > > > > [...]
> > > > > > I'm not against this model, as I've said before. It feels logical,
> > > > > > and will work fine in most cases.
> > > > > > 
> > > > > > In this case we can drop any mount/boot options, because it 
> > > > > > preserves
> > > > > > the existing behavior in the default configuration. A big advantage.
> > > > > 
> > > > > I am not sure about this. We still need an opt-in, regardless, because
> > > > > selecting the largest process from the largest memcg != selecting the
> > > > > largest task (just consider memcgs with many processes example).
> > > > 
> > > > As I understand Johannes, he suggested to compare individual processes 
> > > > with
> > > > group_oom mem cgroups. In other words, always select a killable entity 
> > > > with
> > > > the biggest memory footprint.
> > > > 
> > > > This is slightly different from my v8 approach, where I treat leaf 
> > > > memcgs
> > > > as indivisible memory consumers independent of the group_oom setting, so
> > > > by default I'm selecting the biggest task in the biggest memcg.
> > > 
> > > My reading is that he is actually proposing the same thing I've been
> > > mentioning. Simply select the biggest killable entity (leaf memcg or
> > > group_oom hierarchy) and either kill the largest task in that entity
> > > (for !group_oom) or the whole memcg/hierarchy otherwise.
> > 
> > He wrote the following:
> > "So I'm leaning toward the second model: compare all oomgroups and
> > standalone tasks in the system with each other, independent of the
> > failed hierarchical control structure. Then kill the biggest of them."
> 
> I will let Johannes comment, but I believe this is just a
> misunderstanding. If we compared only the biggest task from each memcg
> then we are basically losing our fairness objective, aren't we?

Sorry about the confusion.

Yeah I was making the case for what Michal proposed, to kill the
biggest terminal consumer, which is either a task or an oomgroup.

You'd basically iterate through all the tasks and cgroups in the
system and pick the biggest task that isn't in an oom group or the
biggest oom group and then kill that.

Yeah, you'd have to compare the memory footprints of tasks with the
memory footprints of cgroups. These aren't defined identically, and
tasks don't get attributed every type of allocation that a cgroup
would. But it should get us in the ballpark, and I cannot picture a
scenario where this would lead to a completely undesirable outcome.
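
In rough pseudo-C, just to sketch the idea (for_each_process() is real, the
footprint and oom-group helpers are made up):

struct oom_candidate {
	struct task_struct *task;	/* set for a standalone task */
	struct mem_cgroup *memcg;	/* set for an oom group */
	unsigned long size;
};

static struct oom_candidate pick_flat_victim(void)
{
	struct oom_candidate best = { };
	struct task_struct *p;
	struct mem_cgroup *memcg;

	for_each_process(p) {
		if (task_in_oom_group(p))
			continue;	/* accounted via its group below */
		if (task_footprint(p) > best.size) {
			best.task = p;
			best.memcg = NULL;
			best.size = task_footprint(p);
		}
	}

	for_each_oom_group(memcg) {
		if (oom_group_footprint(memcg) > best.size) {
			best.task = NULL;
			best.memcg = memcg;
			best.size = oom_group_footprint(memcg);
		}
	}

	/* caller kills best.task, or every task in best.memcg */
	return best;
}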


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Michal Hocko
On Tue 26-09-17 13:13:00, Roman Gushchin wrote:
> On Tue, Sep 26, 2017 at 01:21:34PM +0200, Michal Hocko wrote:
> > On Tue 26-09-17 11:59:25, Roman Gushchin wrote:
> > > On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
> > > > On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
> > > > [...]
> > > > > I'm not against this model, as I've said before. It feels logical,
> > > > > and will work fine in most cases.
> > > > > 
> > > > > In this case we can drop any mount/boot options, because it preserves
> > > > > the existing behavior in the default configuration. A big advantage.
> > > > 
> > > > I am not sure about this. We still need an opt-in, regardless, because
> > > > selecting the largest process from the largest memcg != selecting the
> > > > largest task (just consider memcgs with many processes example).
> > > 
> > > As I understand Johannes, he suggested to compare individual processes 
> > > with
> > > group_oom mem cgroups. In other words, always select a killable entity 
> > > with
> > > the biggest memory footprint.
> > > 
> > > This is slightly different from my v8 approach, where I treat leaf memcgs
> > > as indivisible memory consumers independent of the group_oom setting, so
> > > by default I'm selecting the biggest task in the biggest memcg.
> > 
> > My reading is that he is actually proposing the same thing I've been
> > mentioning. Simply select the biggest killable entity (leaf memcg or
> > group_oom hierarchy) and either kill the largest task in that entity
> > (for !group_oom) or the whole memcg/hierarchy otherwise.
> 
> He wrote the following:
> "So I'm leaning toward the second model: compare all oomgroups and
> standalone tasks in the system with each other, independent of the
> failed hierarchical control structure. Then kill the biggest of them."

I will let Johannes comment, but I believe this is just a
misunderstanding. If we compared only the biggest task from each memcg
then we are basically losing our fairness objective, aren't we?
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Roman Gushchin
On Tue, Sep 26, 2017 at 01:21:34PM +0200, Michal Hocko wrote:
> On Tue 26-09-17 11:59:25, Roman Gushchin wrote:
> > On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
> > > On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
> > > [...]
> > > > I'm not against this model, as I've said before. It feels logical,
> > > > and will work fine in most cases.
> > > > 
> > > > In this case we can drop any mount/boot options, because it preserves
> > > > the existing behavior in the default configuration. A big advantage.
> > > 
> > > I am not sure about this. We still need an opt-in, regardless, because
> > > selecting the largest process from the largest memcg != selecting the
> > > largest task (just consider memcgs with many processes example).
> > 
> > As I understand Johannes, he suggested to compare individual processes with
> > group_oom mem cgroups. In other words, always select a killable entity with
> > the biggest memory footprint.
> > 
> > This is slightly different from my v8 approach, where I treat leaf memcgs
> > as indivisible memory consumers independent of the group_oom setting, so
> > by default I'm selecting the biggest task in the biggest memcg.
> 
> My reading is that he is actually proposing the same thing I've been
> mentioning. Simply select the biggest killable entity (leaf memcg or
> group_oom hierarchy) and either kill the largest task in that entity
> (for !group_oom) or the whole memcg/hierarchy otherwise.

He wrote the following:
"So I'm leaning toward the second model: compare all oomgroups and
standalone tasks in the system with each other, independent of the
failed hierarchical control structure. Then kill the biggest of them."

>  
> > While the approach suggested by Johannes looks clear and reasonable,
> > I'm slightly concerned about possible implementation issues,
> > which I've described below:
> > 
> > > 
> > > > The only thing I'm slightly concerned about is that, due to the way we
> > > > calculate the memory footprint for tasks and memory cgroups, we will
> > > > have a number of weird edge cases. For instance, putting a single
> > > > process into a group_oom memcg will alter the oom_score significantly
> > > > and result in significantly different chances to be killed. An obvious
> > > > example will be a task with oom_score_adj set to any non-extreme (other
> > > > than 0 and -1000) value, but it can also happen in case of constrained
> > > > alloc, for instance.
> > > 
> > > I am not sure I understand. Are you talking about root memcg comparing
> > > to other memcgs?
> > 
> > Not only, but root memcg in this case will be another complication. We can
> > also use the same trick for all memcgs (define memcg oom_score as the
> > maximum oom_score of the belonging tasks); it will turn group_oom into a
> > pure container cleanup solution, without changing the victim selection
> > algorithm.
> 
> I fail to see the problem to be honest. Simply evaluate the memcg_score
> you have so far with one minor detail. You only check memcgs which have
> tasks (rather than checking for leaf nodes) or which are group_oom. An
> intermediate memcg will get a cumulative size of the whole subhierarchy
> and then you know you can skip the subtree because no subtree can be larger.
> 
> > But, again, I'm not against the approach suggested by Johannes. I think
> > that overall it's the best possible semantics, if we're not taking some
> > implementation details into account.
> 
> I do not see those implementation details issues and let me repeat do
> not develop a semantic based on implementation details.

There are no problems in the "select the biggest leaf or group_oom memcg, then
kill the biggest task or all tasks depending on group_oom" approach,
which you're describing. Comparing tasks and memcgs (what Johannes is
suggesting) may have some issues.

Thanks!


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Michal Hocko
On Tue 26-09-17 11:59:25, Roman Gushchin wrote:
> On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
> > On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
> > [...]
> > > I'm not against this model, as I've said before. It feels logical,
> > > and will work fine in most cases.
> > > 
> > > In this case we can drop any mount/boot options, because it preserves
> > > the existing behavior in the default configuration. A big advantage.
> > 
> > I am not sure about this. We still need an opt-in, regardless, because
> > selecting the largest process from the largest memcg != selecting the
> > largest task (just consider memcgs with many processes example).
> 
> As I understand Johannes, he suggested to compare individual processes with
> group_oom mem cgroups. In other words, always select a killable entity with
> the biggest memory footprint.
> 
> This is slightly different from my v8 approach, where I treat leaf memcgs
> as indivisible memory consumers independent of the group_oom setting, so
> by default I'm selecting the biggest task in the biggest memcg.

My reading is that he is actually proposing the same thing I've been
mentioning. Simply select the biggest killable entity (leaf memcg or
group_oom hierarchy) and either kill the largest task in that entity
(for !group_oom) or the whole memcg/hierarchy otherwise.
 
> While the approach suggested by Johannes looks clear and reasonable,
> I'm slightly concerned about possible implementation issues,
> which I've described below:
> 
> > 
> > > The only thing I'm slightly concerned about is that, due to the way we
> > > calculate the memory footprint for tasks and memory cgroups, we will
> > > have a number of weird edge cases. For instance, putting a single
> > > process into a group_oom memcg will alter the oom_score significantly
> > > and result in significantly different chances to be killed. An obvious
> > > example will be a task with oom_score_adj set to any non-extreme (other
> > > than 0 and -1000) value, but it can also happen in case of constrained
> > > alloc, for instance.
> > 
> > I am not sure I understand. Are you talking about root memcg comparing
> > to other memcgs?
> 
> Not only, but root memcg in this case will be another complication. We can
> also use the same trick for all memcgs (define memcg oom_score as the
> maximum oom_score of the belonging tasks); it will turn group_oom into a
> pure container cleanup solution, without changing the victim selection
> algorithm.

I fail to see the problem to be honest. Simply evaluate the memcg_score
you have so far with one minor detail. You only check memcgs which have
tasks (rather than checking for leaf nodes) or which are group_oom. An
intermediate memcg will get a cumulative size of the whole subhierarchy
and then you know you can skip the subtree because no subtree can be larger.
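
Roughly, in userspace C and with an invented tree (a sketch of the idea, not
the actual mm/oom_kill.c code): only memcgs that have tasks or are group_oom
get scored, a group_oom memcg counts as one indivisible unit, and the
cumulative usage of an intermediate memcg is the upper bound that lets whole
subtrees be skipped.

#include <stdio.h>

struct memcg {
	const char *name;
	unsigned long usage;	/* cumulative usage of the whole subtree */
	int has_tasks;
	int group_oom;
	struct memcg *child[4];
};

static struct memcg *best;

static void scan(struct memcg *cg)
{
	/* cumulative usage bounds everything below: prune losing subtrees */
	if (best && cg->usage <= best->usage)
		return;

	if (cg->group_oom || cg->has_tasks) {
		best = cg;	/* strictly larger than the old best, see above */
		if (cg->group_oom)
			return;	/* treated as one indivisible unit */
	}
	for (int i = 0; cg->child[i]; i++)
		scan(cg->child[i]);
}

int main(void)
{
	struct memcg b = { "B", 300, 1, 0, { 0 } };
	struct memcg c = { "C", 500, 1, 0, { 0 } };
	struct memcg a = { "A", 800, 0, 0, { &b, &c } };	/* no tasks of its own */
	struct memcg d = { "D", 600, 1, 1, { 0 } };		/* group_oom subtree */
	struct memcg root = { "root", 1400, 0, 0, { &a, &d } };

	for (int i = 0; root.child[i]; i++)
		scan(root.child[i]);
	printf("victim: %s (%lu pages)\n", best->name, best->usage);
	return 0;
}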

> But, again, I'm not against the approach suggested by Johannes. I think
> that overall it's the best possible semantics, if we're not taking some
> implementation details into account.

I do not see those implementation details issues and let me repeat do
not develop a semantic based on implementation details.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Roman Gushchin
On Mon, Sep 25, 2017 at 10:25:21PM +0200, Michal Hocko wrote:
> On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
> [...]
> > I'm not against this model, as I've said before. It feels logical,
> > and will work fine in most cases.
> > 
> > In this case we can drop any mount/boot options, because it preserves
> > the existing behavior in the default configuration. A big advantage.
> 
> I am not sure about this. We still need an opt-in, regardless, because
> selecting the largest process from the largest memcg != selecting the
> largest task (just consider memcgs with many processes example).

As I understand Johannes, he suggested to compare individual processes with
group_oom mem cgroups. In other words, always select a killable entity with
the biggest memory footprint.

This is slightly different from my v8 approach, where I treat leaf memcgs
as indivisible memory consumers independent of the group_oom setting, so
by default I'm selecting the biggest task in the biggest memcg.

While the approach suggested by Johannes looks clear and reasonable,
I'm slightly concerned about possible implementation issues,
which I've described below:

> 
> > The only thing I'm slightly concerned about is that, due to the way we
> > calculate the memory footprint for tasks and memory cgroups, we will have
> > a number of weird edge cases. For instance, putting a single process into
> > a group_oom memcg will alter the oom_score significantly and result in
> > significantly different chances to be killed. An obvious example will be
> > a task with oom_score_adj set to any non-extreme (other than 0 and -1000)
> > value, but it can also happen in case of constrained alloc, for instance.
> 
> I am not sure I understand. Are you talking about root memcg comparing
> to other memcgs?

Not only, but root memcg in this case will be another complication. We can
also use the same trick for all memcgs (define memcg oom_score as the
maximum oom_score of the belonging tasks); it will turn group_oom into a
pure container cleanup solution, without changing the victim selection
algorithm.

But, again, I'm not against the approach suggested by Johannes. I think that
overall it's the best possible semantics, if we're not taking some
implementation details into account.
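
A minimal sketch of that "memcg score = maximum task score" variant (made-up
data, not a patch): with this definition group_oom would only change what gets
killed, never which memcg gets picked.

#include <stdio.h>

struct memcg {
	const char *name;
	int ntasks;
	long task_score[4];
};

/* the memcg score is simply the score of its biggest task */
static long memcg_score(const struct memcg *cg)
{
	long max = 0;

	for (int i = 0; i < cg->ntasks; i++)
		if (cg->task_score[i] > max)
			max = cg->task_score[i];
	return max;
}

int main(void)
{
	struct memcg a = { "A", 3, { 100, 220, 150 } };
	struct memcg b = { "B", 1, { 300 } };
	const struct memcg *victim = memcg_score(&a) >= memcg_score(&b) ? &a : &b;

	/* group_oom would now only decide whether one task or all of them die */
	printf("victim memcg: %s (score %ld)\n", victim->name, memcg_score(victim));
	return 0;
}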


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-26 Thread Michal Hocko
On Mon 25-09-17 15:21:03, David Rientjes wrote:
> On Mon, 25 Sep 2017, Johannes Weiner wrote:
> 
> > > True but we want to have the semantic reasonably understandable. And it
> > > is quite hard to explain that the oom killer hasn't selected the largest
> > > memcg just because it happened to be in a deeper hierarchy which has
> > > been configured to cover a different resource.
> > 
> > Going back to Michal's example, say the user configured the following:
> > 
> >        root
> >       /    \
> >      A      D
> >     / \
> >    B   C
> > 
> > A global OOM event happens and we find this:
> > - A > D
> > - B, C, D are oomgroups
> > 
> > What the user is telling us is that B, C, and D are compound memory
> > consumers. They cannot be divided into their task parts from a memory
> > point of view.
> > 
> > However, the user doesn't say the same for A: the A subtree summarizes
> > and controls aggregate consumption of B and C, but without groupoom
> > set on A, the user says that A is in fact divisible into independent
> > memory consumers B and C.
> > 
> > If we don't have to kill all of A, but we'd have to kill all of D,
> > does it make sense to compare the two?
> > 
> 
> No, I agree that we shouldn't compare sibling memory cgroups based on 
> different criteria depending on whether group_oom is set or not.
> 
> I think it would be better to compare siblings based on the same criteria 
> independent of group_oom if the user has mounted the hierarchy with the 
> new mode (I think we all agree that the mount option is needed).  It's 
> very easy to describe to the user and the selection is simple to 
> understand. 

I disagree. Just take the most simplistic example when cgroups reflect
some other higher level organization - e.g. school with teachers,
students and admins as the top level cgroups to control the proper cpu
share load. Now you want to have a fair OOM selection between different
entities. Do you consider selecting students all the time as an expected
behavior just because they are the largest group? This just doesn't
make any sense to me.

> Then, once a cgroup has been chosen as the victim cgroup, 
> kill the process with the highest badness, allowing the user to influence 
> that with /proc/pid/oom_score_adj just as today, if group_oom is disabled; 
> otherwise, kill all eligible processes if enabled.

And now, what should be the semantic of group_oom on an intermediate
(non-leaf) memcg? Why should we compare it to other killable entities?
Roman was mentioning a setup where a _single_ workload consists of a
deeper hierarchy which has to be shut down at once. It absolutely makes
sense to consider the cumulative memory of that hierarchy when we are
going to kill it all.

> That, to me, is a very clear semantic and I believe it addresses Roman's 
> usecase.  My desire to have oom priorities amongst siblings is so that 
> userspace can influence which cgroup is chosen, just as it can influence 
> which process is chosen.

But what you are proposing is something different from oom_score_adj.
That only sets bias to the killable entities while priorities on
intermediate non-killable memcgs control how the whole oom hierarchy
is traversed. So a non-killable intermediate memcg can hugely influence
what gets killed in the end. This is IMHO tricky and I would even dare
to claim a wrong semantic. I can see priorities being very useful on
killable entities for sure. I am not entirely sure what would be the
best approach yet and that is why I've suggested postponing that until
after we settle on a simple approach first. Bringing priorities back
to the discussion again will not help to move that forward I am afraid.

-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-25 Thread David Rientjes
On Mon, 25 Sep 2017, Johannes Weiner wrote:

> > True but we want to have the semantic reasonably understandable. And it
> > is quite hard to explain that the oom killer hasn't selected the largest
> > memcg just because it happened to be in a deeper hierarchy which has
> > been configured to cover a different resource.
> 
> Going back to Michal's example, say the user configured the following:
> 
>        root
>       /    \
>      A      D
>     / \
>    B   C
> 
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
> 
> What the user is telling us is that B, C, and D are compound memory
> consumers. They cannot be divided into their task parts from a memory
> point of view.
> 
> However, the user doesn't say the same for A: the A subtree summarizes
> and controls aggregate consumption of B and C, but without groupoom
> set on A, the user says that A is in fact divisible into independent
> memory consumers B and C.
> 
> If we don't have to kill all of A, but we'd have to kill all of D,
> does it make sense to compare the two?
> 

No, I agree that we shouldn't compare sibling memory cgroups based on 
different criteria depending on whether group_oom is set or not.

I think it would be better to compare siblings based on the same criteria 
independent of group_oom if the user has mounted the hierarchy with the 
new mode (I think we all agree that the mount option is needed).  It's 
very easy to describe to the user and the selection is simple to 
understand.  Then, once a cgroup has been chosen as the victim cgroup, 
kill the process with the highest badness, allowing the user to influence 
that with /proc/pid/oom_score_adj just as today, if group_oom is disabled; 
otherwise, kill all eligible processes if enabled.

That, to me, is a very clear semantic and I believe it addresses Roman's 
usecase.  My desire to have oom priorities amongst siblings is so that 
userspace can influence which cgroup is chosen, just as it can influence 
which process is chosen.

I see group_oom as a mechanism to be used when victim selection has 
already been done instead of something that should be considered in the 
policy of victim selection.
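
A short sketch of that two-phase flow (invented cgroup names, sizes and
badness values, not kernel code): phase one compares all siblings with the
same size-based criterion regardless of group_oom, phase two lets group_oom
decide whether one task or all of them are killed.

#include <stdio.h>

struct task { const char *comm; long badness; };

struct cgroup {
	const char *path;
	unsigned long usage;
	int group_oom;
	int ntasks;
	struct task tasks[4];
};

int main(void)
{
	struct cgroup siblings[] = {
		{ "/web",   700, 0, 2, { { "httpd", 120 }, { "php",   480 } } },
		{ "/batch", 900, 1, 2, { { "etl-1", 300 }, { "etl-2", 350 } } },
	};
	unsigned long n = sizeof(siblings) / sizeof(siblings[0]);
	struct cgroup *victim = &siblings[0];

	/* phase 1: one size-based criterion for every sibling, group_oom or not */
	for (unsigned long i = 1; i < n; i++)
		if (siblings[i].usage > victim->usage)
			victim = &siblings[i];

	/* phase 2: group_oom only changes the action, not the selection */
	if (victim->group_oom) {
		for (int i = 0; i < victim->ntasks; i++)
			printf("kill %s in %s\n", victim->tasks[i].comm, victim->path);
	} else {
		struct task *t = &victim->tasks[0];

		for (int i = 1; i < victim->ntasks; i++)
			if (victim->tasks[i].badness > t->badness)
				t = &victim->tasks[i];
		printf("kill %s in %s\n", t->comm, victim->path);
	}
	return 0;
}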

> Let's consider an extreme case of this conundrum:
> 
>          root
>          /  \
>         A    B
>        /|\   |
>   A1-A1000   B1
> 
> Again we find:
> - A > B
> - A1 to A1000 and B1 are oomgroups
> But:
> - A1 to A1000 individually are tiny, B1 is huge
> 
> Going level by level, we'd pick A as the bigger hierarchy in the
> system, and then kill off one of the tiny groups A1 to A1000.
> 
> Conversely, going for biggest consumer regardless of hierarchy, we'd
> compare A1 to A1000 and B1, then pick B1 as the biggest single atomic
> memory consumer in the system and kill all its tasks.
> 

If we compare sibling memcgs independent of group_oom, we don't 
necessarily pick A unless it really is larger than B.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-25 Thread Michal Hocko
On Mon 25-09-17 19:15:33, Roman Gushchin wrote:
[...]
> I'm not against this model, as I've said before. It feels logical,
> and will work fine in most cases.
> 
> In this case we can drop any mount/boot options, because it preserves
> the existing behavior in the default configuration. A big advantage.

I am not sure about this. We still need an opt-in, regardless, because
selecting the largest process from the largest memcg != selecting the
largest task (just consider memcgs with many processes example).

> The only thing I'm slightly concerned about is that, due to the way we
> calculate the memory footprint for tasks and memory cgroups, we will have
> a number of weird edge cases. For instance, putting a single process into
> a group_oom memcg will alter the oom_score significantly and result in
> significantly different chances to be killed. An obvious example will be
> a task with oom_score_adj set to any non-extreme (other than 0 and -1000)
> value, but it can also happen in case of constrained alloc, for instance.
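
Rough numbers behind the quoted concern (machine size and oom_score_adj value
invented, and the real oom_badness() accounting differs in detail): a
non-extreme oom_score_adj shifts a task's badness by roughly
adj * totalpages / 1000, which has no obvious counterpart once the same
process sits alone in a group_oom memcg judged by its usage.

#include <stdio.h>

int main(void)
{
	unsigned long totalpages = 4UL * 1024 * 1024;	/* ~16GB machine, 4K pages */
	unsigned long rss = 256UL * 1024;		/* ~1GB resident */
	long adj = 500;					/* non-extreme oom_score_adj */

	/* badness of a standalone task: usage plus roughly adj * totalpages / 1000 */
	unsigned long as_task = rss + adj * (totalpages / 1000);
	/* the same process alone in a group_oom memcg, judged by usage only */
	unsigned long as_memcg = rss;

	printf("scored as a task:  %lu pages (~%lu GB)\n", as_task, as_task >> 18);
	printf("scored as a memcg: %lu pages (~%lu GB)\n", as_memcg, as_memcg >> 18);
	return 0;
}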

I am not sure I understand. Are you talking about root memcg comparing
to other memcgs?
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-25 Thread Roman Gushchin
On Mon, Sep 25, 2017 at 01:00:04PM -0400, Johannes Weiner wrote:
> On Mon, Sep 25, 2017 at 02:24:00PM +0200, Michal Hocko wrote:
> > I would really appreciate some feedback from Tejun, Johannes here.
> > 
> > On Wed 20-09-17 14:53:41, Roman Gushchin wrote:
> > > On Mon, Sep 18, 2017 at 08:14:05AM +0200, Michal Hocko wrote:
> > > > On Fri 15-09-17 08:23:01, Roman Gushchin wrote:
> > > > > On Fri, Sep 15, 2017 at 12:58:26PM +0200, Michal Hocko wrote:
> > [...]
> > > > > > But then you just enforce a structural restriction on your
> > > > > > configuration because
> > > > > >        root
> > > > > >       /    \
> > > > > >      A      D
> > > > > >     / \
> > > > > >    B   C
> > > > > > 
> > > > > > is a different thing than
> > > > > >       root
> > > > > >      / | \
> > > > > >     B  C  D
> > > > > >
> > > > > 
> > > > > I actually don't have a strong argument against an approach to select
> > > > > the largest leaf or kill-all-set memcg. I think, in practice there
> > > > > will be not much difference.
> > > 
> > > I've tried to implement this approach, and it's really arguable.
> > > Although your example looks reasonable, the opposite example is also
> > > valid: you might want to compare whole hierarchies, and it's a quite
> > > typical usecase.
> > > 
> > > Assume you have several containerized workloads on a machine (probably,
> > > each will be contained in a memcg with memory.max set), with some
> > > hierarchy of cgroups inside. Then in case of global memory shortage we
> > > want to reclaim some memory from the biggest workload, and the selection
> > > should not depend on group_oom settings. It would be really strange if
> > > setting group_oom raised the chances of being killed.
> > > 
> > > In other words, let's imagine processes as leaf nodes in the memcg tree.
> > > We decided to select the biggest memcg and kill one or more processes
> > > inside (depending on the group_oom setting), but the memcg selection
> > > doesn't depend on it. We do not compare processes from different cgroups,
> > > nor cgroups with processes. The same should apply to cgroups: why do we
> > > want to compare cgroups from different sub-trees?
> > > 
> > > While size-based comparison can be implemented with this approach,
> > > the priority-based one is really weird (as David mentioned).
> > > If priorities have no hierarchical meaning at all, we lack the very
> > > important ability to enforce oom_priority hierarchically. Otherwise we
> > > have to invent some complex rules of oom_priority propagation (e.g. if
> > > someone raises the oom_priority in a parent, should it be applied to
> > > children immediately, etc).
> > 
> > I would really forget about the priority at this stage. This needs
> > really much more thinking and I consider David's usecase very
> > specialized to use it as a template for a general purpose oom
> > prioritization. I might be wrong here of course...
> 
> No, I agree.
> 
> > > In any case, OOM is a last resort mechanism. The goal is to reclaim some
> > > memory and do not crash the system or do not leave it in totally broken
> > > state. Any really complex mm in userspace should be applied _before_ OOM
> > > happens. So, I don't think we have to support all possible configurations
> > > here, if we're able to achieve the main goal (kill some processes and do
> > > not leave broken systems/containers).
> > 
> > True but we want to have the semantic reasonably understandable. And it
> > is quite hard to explain that the oom killer hasn't selected the largest
> > memcg just because it happened to be in a deeper hierarchy which has
> > been configured to cover a different resource.
> 
> Going back to Michal's example, say the user configured the following:
> 
>        root
>       /    \
>      A      D
>     / \
>    B   C
> 
> A global OOM event happens and we find this:
> - A > D
> - B, C, D are oomgroups
> 
> What the user is telling us is that B, C, and D are compound memory
> consumers. They cannot be divided into their task parts from a memory
> point of view.
> 
> However, the user doesn't say the same for A: the A subtree summarizes
> and controls aggregate consumption of B and C, but without groupoom
> set on A, the user says that A is in fact divisible into independent
> memory consumers B and C.
> 
> If we don't have to kill all of A, but we'd have to kill all of D,
> does it make sense to compare the two?
> 
> Let's consider an extreme case of this conundrum:
> 
>          root
>          /  \
>         A    B
>        /|\   |
>   A1-A1000   B1
> 
> Again we find:
> - A > B
> - A1 to A1000 and B1 are oomgroups
> But:
> - A1 to A1000 individually are tiny, B1 is huge
> 
> Going level by level, we'd pick A as the bigger hierarchy in the
> system, and then kill off one of the tiny groups A1 to A1000.
> 
> Conversely, going for biggest consumer regardless of hierarchy, we'd
> compare 


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-25 Thread Johannes Weiner
On Mon, Sep 25, 2017 at 02:24:00PM +0200, Michal Hocko wrote:
> I would really appreciate some feedback from Tejun, Johannes here.
> 
> On Wed 20-09-17 14:53:41, Roman Gushchin wrote:
> > On Mon, Sep 18, 2017 at 08:14:05AM +0200, Michal Hocko wrote:
> > > On Fri 15-09-17 08:23:01, Roman Gushchin wrote:
> > > > On Fri, Sep 15, 2017 at 12:58:26PM +0200, Michal Hocko wrote:
> [...]
> > > > > But then you just enforce a structural restriction on your
> > > > > configuration because
> > > > >        root
> > > > >       /    \
> > > > >      A      D
> > > > >     / \
> > > > >    B   C
> > > > > 
> > > > > is a different thing than
> > > > >       root
> > > > >      / | \
> > > > >     B  C  D
> > > > >
> > > > 
> > > > I actually don't have a strong argument against an approach to select
> > > > the largest leaf or kill-all-set memcg. I think, in practice there will
> > > > be not much difference.
> > 
> > I've tried to implement this approach, and it's really arguable.
> > Although your example looks reasonable, the opposite example is also valid:
> > you might want to compare whole hierarchies, and it's a quite typical usecase.
> > 
> > Assume you have several containerized workloads on a machine (probably,
> > each will be contained in a memcg with memory.max set), with some hierarchy
> > of cgroups inside. Then in case of global memory shortage we want to reclaim
> > some memory from the biggest workload, and the selection should not depend
> > on group_oom settings. It would be really strange if setting group_oom
> > raised the chances of being killed.
> > 
> > In other words, let's imagine processes as leaf nodes in the memcg tree.
> > We decided to select the biggest memcg and kill one or more processes
> > inside (depending on the group_oom setting), but the memcg selection
> > doesn't depend on it. We do not compare processes from different cgroups,
> > nor cgroups with processes. The same should apply to cgroups: why do we
> > want to compare cgroups from different sub-trees?
> > 
> > While size-based comparison can be implemented with this approach,
> > the priority-based one is really weird (as David mentioned).
> > If priorities have no hierarchical meaning at all, we lack the very
> > important ability to enforce oom_priority hierarchically. Otherwise we
> > have to invent some complex rules of oom_priority propagation (e.g. if
> > someone raises the oom_priority in a parent, should it be applied to
> > children immediately, etc).
> 
> I would really forget about the priority at this stage. This needs
> really much more thinking and I consider David's usecase very
> specialized to use it as a template for a general purpose oom
> prioritization. I might be wrong here of course...

No, I agree.

> > In any case, OOM is a last resort mechanism. The goal is to reclaim some
> > memory and do not crash the system or do not leave it in totally broken
> > state. Any really complex mm in userspace should be applied _before_ OOM
> > happens. So, I don't think we have to support all possible configurations
> > here, if we're able to achieve the main goal (kill some processes and do
> > not leave broken systems/containers).
> 
> True but we want to have the semantic reasonably understandable. And it
> is quite hard to explain that the oom killer hasn't selected the largest
> memcg just because it happened to be in a deeper hierarchy which has
> been configured to cover a different resource.

Going back to Michal's example, say the user configured the following:

       root
      /    \
     A      D
    / \
   B    C

A global OOM event happens and we find this:
- A > D
- B, C, D are oomgroups

What the user is telling us is that B, C, and D are compound memory
consumers. They cannot be divided into their task parts from a memory
point of view.

However, the user doesn't say the same for A: the A subtree summarizes
and controls aggregate consumption of B and C, but without groupoom
set on A, the user says that A is in fact divisible into independent
memory consumers B and C.

If we don't have to kill all of A, but we'd have to kill all of D,
does it make sense to compare the two?

Let's consider an extreme case of this conundrum:

         root
         /  \
        A    B
       /|\   |
  A1-A1000   B1

Again we find:
- A > B
- A1 to A1000 and B1 are oomgroups
But:
- A1 to A1000 individually are tiny, B1 is huge

Going level by level, we'd pick A as the bigger hierarchy in the
system, and then kill off one of the tiny groups A1 to A1000.

Conversely, going for biggest consumer regardless of hierarchy, we'd
compare A1 to A1000 and B1, then pick B1 as the biggest single atomic
memory consumer in the system and kill all its tasks.
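
With invented sizes, the two policies on this shape come out as follows (a toy
calculation, not a benchmark):

#include <stdio.h>

int main(void)
{
	unsigned long n_a = 1000, a_i = 10;	/* A1..A1000: 1000 oomgroups of 10MB */
	unsigned long b1 = 8000;		/* B1: a single 8GB oomgroup */
	unsigned long a_total = n_a * a_i;	/* 10000MB: the A subtree wins level by level */

	printf("level by level: A (%luMB) > B (%luMB) -> kill one tiny A_i (%luMB)\n",
	       a_total, b1, a_i);
	printf("flat oomgroups: max(A_i = %luMB, B1 = %luMB) -> kill B1 (%luMB)\n",
	       a_i, b1, b1);
	return 0;
}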

Which one of these two fits both the purpose and our historic approach
to OOM killing better?

As was noted in this thread, OOM is the last resort to avoid a memory
deadlock. Killing the biggest consumer is most likely to resolve this


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-25 Thread Michal Hocko
I would really appreciate some feedback from Tejun, Johannes here.

On Wed 20-09-17 14:53:41, Roman Gushchin wrote:
> On Mon, Sep 18, 2017 at 08:14:05AM +0200, Michal Hocko wrote:
> > On Fri 15-09-17 08:23:01, Roman Gushchin wrote:
> > > On Fri, Sep 15, 2017 at 12:58:26PM +0200, Michal Hocko wrote:
[...]
> > > > But then you just enforce a structural restriction on your configuration
> > > > because
> > > >       root
> > > >       /  \
> > > >      A    D
> > > >     / \
> > > >    B   C
> > > > 
> > > > is a different thing than
> > > > 
> > > >       root
> > > >      / | \
> > > >     B  C  D
> > > >
> > > 
> > > I actually don't have a strong argument against an approach to select
> > > largest leaf or kill-all-set memcg. I think in practice there will not be
> > > much difference.
> 
> I've tried to implement this approach, and it's really arguable.
> Although your example looks reasonable, the opposite example is also valid:
> you might want to compare whole hierarchies, and it's a quite typical usecase.
> 
> Assume, you have several containerized workloads on a machine (probably,
> each will be contained in a memcg with memory.max set), with some hierarchy
> of cgroups inside. Then in case of global memory shortage we want to reclaim
> some memory from the biggest workload, and the selection should not depend
> on group_oom settings. It would be really strange if setting group_oom
> increased the chances of being killed.
> 
> In other words, let's imagine processes as leaf nodes in memcg tree. We 
> decided
> to select the biggest memcg and kill one or more processes inside (depending
> on group_oom setting), but the memcg selection doesn't depend on it.
> We do not compare processes from different cgroups, as well as cgroups with
> processes. The same should apply to cgroups: why do we want to compare cgroups
> from different sub-trees?
> 
> While size-based comparison can be implemented with this approach,
> the priority-based is really weird (as David mentioned).
> If priorities have no hierarchical meaning at all, we lack the very important
> ability to enforce hierarchy oom_priority. Otherwise we have to invent some
> complex rules of oom_priority propagation (e.g. if someone is raising
> the oom_priority in parent, should it be applied to children immediately, 
> etc).

I would really forget about the priority at this stage. This needs
really much more thinking and I consider David's usecase very
specialized to use it as a template for a general purpose oom
prioritization. I might be wrong here of course...

> The oom_group knob meaning also becomes more complex. It affects both
> the victim selection and OOM action. _ANY_ mechanism which allows to affect
> OOM victim selection (either priorities, either bpf-based approach) should
> not have global system-wide meaning, it breaks everything.
> 
> I do understand your point, but the same is true for other stuff, right?
> E.g. cpu time distribution (and io, etc) depends on hierarchy configuration.
> It's a limitation, but it's ok, as user should create a hierarchy which
> reflects some logical relations between processes and groups of processes.
> Otherwise we're going to the configuration hell.

And that is _exactly_ my concern. We surely do not want to tell people that
they have to consider their cgroup tree structure to control the global
oom behavior. You simply do not have that constraint with leaf-only
semantics, and if kill-all intermediate nodes are used then there is an
explicit opt-in for the hierarchy considerations.

> In any case, OOM is a last resort mechanism. The goal is to reclaim some 
> memory
> and do not crash the system or do not leave it in totally broken state.
> Any really complex mm in userspace should be applied _before_ OOM happens.
> So, I don't think we have to support all possible configurations here,
> if we're able to achieve the main goal (kill some processes and do not leave
> broken systems/containers).

True but we want to have the semantic reasonably understandable. And it
is quite hard to explain that the oom killer hasn't selected the largest
memcg just because it happened to be in a deeper hierarchy which has
been configured to cover a different resource.

I am sorry to repeat myself, and I will not argue if there is a
prevalent agreement that level-by-level comparison is considered
desirable and documented behavior, but, by all means, do not define this
semantic based on priority requirements and/or implementation details.
-- 
Michal Hocko
SUSE Labs


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-23 Thread David Rientjes
On Fri, 22 Sep 2017, Tejun Heo wrote:

> > If you have this low priority maintenance job charging memory to the high 
> > priority hierarchy, you're already misconfigured unless you adjust 
> > /proc/pid/oom_score_adj because it will oom kill any larger process than 
> > itself in today's kernels anyway.
> > 
> > A better configuration would be to attach this hypothetical low priority 
> > maintenance job to its own sibling cgroup with its own memory limit to 
> > avoid exactly that problem: it going berserk and charging too much memory 
> > to the high priority container that results in one of its processes 
> > getting oom killed.
> 
> And how do you guarantee that across delegation boundaries?  The
> points you raise on why the priority should be applied level-by-level
> are exactly the same points why this doesn't really work.  OOM killing
> priority isn't something which can be distributed across cgroup
> hierarchy level-by-level.  The resulting decision tree doesn't make
> any sense.
> 

It works very well in practice with real world usecases, and Roman has 
independently developed the same design that we have used for the past 
four years.  Saying it doesn't make any sense doesn't hold a lot of weight 
when we both independently designed and implemented the same solution to 
address our usecases.

> I'm not against adding something which works but strict level-by-level
> comparison isn't the solution.
> 

Each of the eight versions of Roman's cgroup aware oom killer has done 
comparisons between siblings at each level.  Userspace influence on that 
comparison would thus also need to be done at each level.  It's a very 
powerful combination in practice.

Thanks.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-22 Thread Tejun Heo
Hello,

On Fri, Sep 22, 2017 at 01:39:55PM -0700, David Rientjes wrote:
> Current heuristic based on processes is coupled with per-process
> /proc/pid/oom_score_adj.  The proposed 
> heuristic has no ability to be influenced by userspace, and it needs one.  
> The proposed heuristic based on memory cgroups coupled with Roman's 
> per-memcg memory.oom_priority is appropriate and needed.  It is not 

So, this is where we disagree.  I don't think it's a good design.

> "sophisticated intelligence," it merely allows userspace to protect vital 
> memory cgroups when opting into the new features (cgroups compared based 
> on size and memory.oom_group) that we very much want.

which can't achieve that goal very well for a wide variety of users.

> > We even change the whole scheduling behaviors and try really hard to
> > not get locked into specific implementation details which exclude
> > future improvements.  Guaranteeing OOM killing selection would be
> > crazy.  Why would we prevent ourselves from doing things better in the
> > future?  We aren't talking about the semantics of read(2) here.  This
> > is a kernel emergency mechanism to avoid deadlock at the last moment.
> 
> We merely want to prefer that other memory cgroups are oom killed on system oom 
> conditions before important ones, regardless of whether the important one is using 
> more memory than the others because of the new heuristic this patchset 
> introduces.  This is exactly the same as /proc/pid/oom_score_adj for the 
> current heuristic.

You were arguing that we should lock into a specific heuristic and
guarantee the same behavior.  We shouldn't.

When we introduce a user visible interface, we're making a lot of
promises.  My point is that we need to be really careful when making
those promises.

> If you have this low priority maintenance job charging memory to the high 
> priority hierarchy, you're already misconfigured unless you adjust 
> /proc/pid/oom_score_adj because it will oom kill any larger process than 
> itself in today's kernels anyway.
> 
> A better configuration would be to attach this hypothetical low priority 
> maintenance job to its own sibling cgroup with its own memory limit to 
> avoid exactly that problem: it going berserk and charging too much memory 
> to the high priority container that results in one of its processes 
> getting oom killed.

And how do you guarantee that across delegation boundaries?  The
points you raise on why the priority should be applied level-by-level
are exactly the same points why this doesn't really work.  OOM killing
priority isn't something which can be distributed across cgroup
hierarchy level-by-level.  The resulting decision tree doesn't make
any sense.

I'm not against adding something which works but strict level-by-level
comparison isn't the solution.

Thanks.

-- 
tejun


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-22 Thread David Rientjes
On Thu, 21 Sep 2017, Johannes Weiner wrote:

> > The issue is that if you opt-in to the new feature, then you are forced to 
> > change /proc/pid/oom_score_adj of all processes attached to a cgroup that 
> > you do not want oom killed based on size to be oom disabled.
> 
> You're assuming that most people would want to influence the oom
> behavior in the first place. I think the opposite is the case: most
> people don't care as long as the OOM killer takes the intent the user
> has expressed wrt runtime containerization/grouping into account.
> 

If you do not want to influence the oom behavior, do not change 
memory.oom_priority from its default.  It's that simple.

> > The kernel provides no other remedy without oom priorities since the
> > new feature would otherwise disregard oom_score_adj.
> 
> As of v8, it respects this setting and doesn't kill min score tasks.
> 

That's the issue.  To protect a memory cgroup from being oom killed in a 
system oom condition, you need to change oom_score_adj of *all* processes 
attached to be oom disabled.  Then, you have a huge problem in memory 
cgroup oom conditions because nothing can be killed in that hierarchy 
itself.
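
For concreteness, here is a minimal sketch of the workaround being objected
to here: walking cgroup.procs and writing the minimum score into every
task's oom_score_adj. It assumes a unified hierarchy mounted at
/sys/fs/cgroup, the cgroup name is made up, it needs root, and it is exactly
the all-or-nothing, racy dance described above, not a recommendation.

from pathlib import Path

def oom_disable_cgroup(cgroup="/sys/fs/cgroup/important-job"):
    """Mark every task currently attached to the cgroup as oom-unkillable."""
    for pid in Path(cgroup, "cgroup.procs").read_text().split():
        try:
            # -1000 (OOM_SCORE_ADJ_MIN) makes the task ineligible for oom kill.
            Path("/proc", pid, "oom_score_adj").write_text("-1000")
        except FileNotFoundError:
            pass  # the task exited between listing and writing

if __name__ == "__main__":
    oom_disable_cgroup()

Note that any task attached to the cgroup after this runs is not covered,
which is part of why the approach is criticized.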

> > The patchset compares memory cgroup size relative to sibling cgroups only, 
> > the same comparison for memory.oom_priority.  There is a guarantee 
> > provided on how cgroup size is compared in select_victim_memcg(), it 
> > hierarchically accumulates the "size" from leaf nodes up to the root memcg 
> > and then iterates the tree comparing sizes between sibling cgroups to 
> > choose a victim memcg.  That algorithm could be more elaborately described 
> > in the documentation, but we simply cannot change the implementation of 
> > select_victim_memcg() later even without oom priorities since users cannot 
> > get inconsistent results after opting into a feature between kernel 
> > versions.  I believe the selection criteria should be implemented to be 
> > deterministic, as select_victim_memcg() does, and the documentation should 
> > fully describe what the selection criteria is, and then allow the user to 
> > decide.
> 
> I wholeheartedly disagree. We have changed the behavior multiple times
> in the past. In fact, you have arguably done the most drastic changes
> to the algorithm since the OOM killer was first introduced. E.g.
> 
>   a63d83f427fb oom: badness heuristic rewrite
> 
> And that's completely fine. Because this thing is not a resource
> management tool for userspace, it's the kernel saving itself. At best
> in a manner that's not too surprising to userspace.
> 

When I did that, I had to add /proc/pid/oom_score_adj to allow userspace 
to influence selection.  We came up with /proc/pid/oom_score_adj when 
working with kde, openssh, chromium, and udev because they cared about the 
ability to influence the decisionmaking.  I'm perfectly happy with the new 
heuristic presented in this patchset, I simply want userspace to be able 
to influence it, if it desires.  Requiring userspace to set all processes 
to be oom disabled to protect a hierarchy is totally and completely 
broken.  It livelocks the memory cgroup if it is oom itself.

> To me, your argument behind the NAK still boils down to "this doesn't
> support my highly specialized usecase." But since it doesn't prohibit
> your usecase - which isn't even supported upstream, btw - this really
> doesn't carry much weight.
> 
> I'd say if you want configurability on top of Roman's code, please
> submit patches and push the case for these in a separate effort.
> 

Roman implemented memory.oom_priority himself, it has my Tested-by, and it 
allows users who want to protect high priority memory cgroups to do so while 
keeping the size based comparison, which we very much desire, for all other 
cgroups.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-22 Thread David Rientjes
On Fri, 22 Sep 2017, Tejun Heo wrote:

> > It doesn't have anything to do with my particular usecase, but rather the 
> > ability of userspace to influence the decisions of the kernel.  Previous 
> > to this patchset, when selection is done based on process size, userspace 
> > has full control over selection.  After this patchset, userspace has no 
> > control other than setting all processes to be oom disabled if the largest 
> > memory consumer is to be protected.  Roman's memory.oom_priority provides 
> > a perfect solution for userspace to be able to influence this decision 
> > making and causes no change in behavior for users who choose not to tune 
> > memory.oom_priority.  The nack originates from the general need for 
> > userspace influence over oom victim selection and to avoid userspace 
> > needing to take the rather drastic measure of setting all processes to be 
> > oom disabled to prevent oom kill in kernels before oom priorities are 
> > introduced.
> 
> Overall, I think that OOM killing is the wrong place to implement
> sophisticated intelligence in.  It's too late to be smart - the
> workload already has suffered significantly and there's only very
> limited amount of computing which can be performed.  That said, if
> there's a useful and general enough mechanism to configure OOM killer
> behavior from userland, that can definitely be useful.
> 

What is under discussion is a new way to compare sibling cgroups when 
selecting a victim for oom kill.  It's a new heuristic based on a 
characteristic of the memory cgroup rather than the individual process.  
We want this behavior that the patchset implements.  The only desire is a 
way for userspace to influence that decision making in the same way that 
/proc/pid/oom_score_adj allows userspace to influence the current 
heuristic.

Current heuristic based on processes is coupled with per-process
/proc/pid/oom_score_adj.  The proposed 
heuristic has no ability to be influenced by userspace, and it needs one.  
The proposed heuristic based on memory cgroups coupled with Roman's 
per-memcg memory.oom_priority is appropriate and needed.  It is not 
"sophisticated intelligence," it merely allows userspace to protect vital 
memory cgroups when opting into the new features (cgroups compared based 
on size and memory.oom_group) that we very much want.

> We even change the whole scheduling behaviors and try really hard to
> not get locked into specific implementation details which exclude
> future improvements.  Guaranteeing OOM killing selection would be
> crazy.  Why would we prevent ourselves from doing things better in the
> future?  We aren't talking about the semantics of read(2) here.  This
> is a kernel emergency mechanism to avoid deadlock at the last moment.
> 

We merely want to prefer that other memory cgroups are oom killed on system oom 
conditions before important ones, regardless of whether the important one is using 
more memory than the others because of the new heuristic this patchset 
introduces.  This is exactly the same as /proc/pid/oom_score_adj for the 
current heuristic.

> Here's a really simple use case.  Imagine a system which hosts two
> containers of services and one is somewhat favored over the other and
> wants to set up cgroup hierarchy so that resources are split at the
> top level between the two containers.  oom_priority is set accordingly
> too.  Let's say a low priority maintenance job in higher priority
> container goes berserk, as they often do, and pushes the system into
> OOM.
> 
> With the proposed static oom_priority mechanism, the only
> configuration which can be expressed is "kill all of the lower top
> level subtree before any of the higher one", which is a silly
> restriction leading to silly behavior and a direct result of
> conflating resource distribution network with level-by-level OOM
> killing decision.
> 

The problem you're describing is an issue with the top-level limits after 
this patchset is merged, not memory.oom_priority at all.

If they are truly split evenly, this patchset kills the largest process 
from the hierarchy with the most charged memory.  That's unchanged if the 
two priorities are equal.  By changing the priority to be more preferred 
for a hierarchy, you indeed prefer oom kills from the lower priority 
hierarchy.  You've opted in.  One hierarchy is more important than the 
other, regardless of any hypothetical low priority maintenance job going 
berserk.
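
As a concrete reading of that semantic, here is a toy sibling comparison that
layers a priority check on top of the size comparison. The helper names are
invented, and the numeric convention (a lower oom_priority value means "more
expendable") is only an assumption made for this sketch; the actual knob
semantics are whatever Roman's memory.oom_priority patch defines.

def pick_sibling(siblings, size_of, priority_of):
    """Compare oom_priority between siblings first; break ties by size."""
    most_expendable = min(priority_of(s) for s in siblings)
    candidates = [s for s in siblings if priority_of(s) == most_expendable]
    return max(candidates, key=size_of)

if __name__ == "__main__":
    # Two made-up top-level containers: the protected one is the bigger consumer.
    stats = {"batch": (300, 1), "serving": (900, 10)}   # name: (size, priority)
    victim = pick_sibling(list(stats),
                          size_of=lambda s: stats[s][0],
                          priority_of=lambda s: stats[s][1])
    print(victim)  # "batch": preferred despite being the smaller consumer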

If you have this low priority maintenance job charging memory to the high 
priority hierarchy, you're already misconfigured unless you adjust 
/proc/pid/oom_score_adj because it will oom kill any larger process than 
itself in today's kernels anyway.

A better configuration would be to attach this hypothetical low priority 
maintenance job to its own sibling cgroup with its own memory limit to 
avoid exactly that problem: it going berserk and charging too much memory 
to the high priority container that results in one of its processes 
getting oom killed.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-22 Thread Tejun Heo
Hello, David.

On Thu, Sep 21, 2017 at 02:17:25PM -0700, David Rientjes wrote:
> It doesn't have anything to do with my particular usecase, but rather the 
> ability of userspace to influence the decisions of the kernel.  Previous 
> to this patchset, when selection is done based on process size, userspace 
> has full control over selection.  After this patchset, userspace has no 
> control other than setting all processes to be oom disabled if the largest 
> memory consumer is to be protected.  Roman's memory.oom_priority provides 
> a perfect solution for userspace to be able to influence this decision 
> making and causes no change in behavior for users who choose not to tune 
> memory.oom_priority.  The nack originates from the general need for 
> userspace influence over oom victim selection and to avoid userspace 
> needing to take the rather drastic measure of setting all processes to be 
> oom disabled to prevent oom kill in kernels before oom priorities are 
> introduced.

Overall, I think that OOM killing is the wrong place to implement
sophisticated intelligence in.  It's too late to be smart - the
workload already has suffered significantly and there's only very
limited amount of computing which can be performed.  That said, if
there's a useful and general enough mechanism to configure OOM killer
behavior from userland, that can definitely be useful.

> The patchset compares memory cgroup size relative to sibling cgroups only, 
> the same comparison for memory.oom_priority.  There is a guarantee 
> provided on how cgroup size is compared in select_victim_memcg(), it 
> hierarchically accumulates the "size" from leaf nodes up to the root memcg 
> and then iterates the tree comparing sizes between sibling cgroups to 
> choose a victim memcg.  That algorithm could be more elaborately described 
> in the documentation, but we simply cannot change the implementation of 
> select_victim_memcg() later even without oom priorities since users cannot 
> get inconsistent results after opting into a feature between kernel 
> versions.  I believe the selection criteria should be implemented to be 
> deterministic, as select_victim_memcg() does, and the documentation should 
> fully describe what the selection criteria is, and then allow the user to 
> decide.

We even change the whole scheduling behaviors and try really hard to
not get locked into specific implementation details which exclude
future improvements.  Guaranteeing OOM killing selection would be
crazy.  Why would we prevent ourselves from doing things better in the
future?  We aren't talking about the semantics of read(2) here.  This
is a kernel emergency mechanism to avoid deadlock at the last moment.

> Roman is planning on introducing memory.oom_priority back into the 
> patchset per https://marc.info/?l=linux-kernel=150574701126877 and I 
> agree with the very clear semantic that it introduces: to have the 
> size-based comparison use the same rules as the userspace priority 
> comparison.  It's very powerful and I'm happy to ack the final version 
> that he plans on posting.

To me, the proposed oom_priority mechanism seems too limited and makes
the error of tightly coupling the hierarchical behavior of resource
distribution with OOM victim selection.  They can be related but are
not the same and coupling them together in the kernel interface is
likely a mistake which will lead to long term pains that we can't
easily get out of.

Here's a really simple use case.  Imagine a system which hosts two
containers of services and one is somewhat favored over the other and
wants to set up cgroup hierarchy so that resources are split at the
top level between the two containers.  oom_priority is set accordingly
too.  Let's say a low priority maintenance job in higher priority
container goes berserk, as they often do, and pushes the system into
OOM.

With the proposed static oom_priority mechanism, the only
configuration which can be expressed is "kill all of the lower top
level subtree before any of the higher one", which is a silly
restriction leading to silly behavior and a direct result of
conflating resource distribution network with level-by-level OOM
killing decision.

If we want to allow users to steer OOM killing, I suspect that it
should be aligned at delegation boundaries rather than on cgroup
hierarchy itself.  We can discuss that but it is a separate
discussion.

The mechanism being proposed is fundamentally flawed.  You can't push
that in by nacking other improvements.

Thanks.

-- 
tejun


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-21 Thread Johannes Weiner
On Thu, Sep 21, 2017 at 02:17:25PM -0700, David Rientjes wrote:
> On Thu, 21 Sep 2017, Johannes Weiner wrote:
> 
> > That's a ridiculous nak.
> > 
> > The fact that this patch series doesn't solve your particular problem
> > is not a technical argument to *reject* somebody else's work to solve
> > a different problem. It's not a regression when behavior is completely
> > unchanged unless you explicitly opt into a new functionality.
> > 
> > So let's stay reasonable here.
> > 
> 
> The issue is that if you opt-in to the new feature, then you are forced to 
> change /proc/pid/oom_score_adj of all processes attached to a cgroup that 
> you do not want oom killed based on size to be oom disabled.

You're assuming that most people would want to influence the oom
behavior in the first place. I think the opposite is the case: most
people don't care as long as the OOM killer takes the intent the user
has expressed wrt runtime containerization/grouping into account.

> The kernel provides no other remedy without oom priorities since the
> new feature would otherwise disregard oom_score_adj.

As of v8, it respects this setting and doesn't kill min score tasks.

> The nack originates from the general need for userspace influence
> over oom victim selection and to avoid userspace needing to take the
> rather drastic measure of setting all processes to be oom disabled
> to prevent oom kill in kernels before oom priorities are introduced.

As I said, we can discuss this in a separate context. Because again, I
really don't see how the lack of configurability in an opt-in feature
would diminish its value for many people who don't even care to adjust
and influence this behavior.

> > The patch series has merit as it currently stands. It makes OOM
> > killing in a cgrouped system fairer and less surprising. Whether you
> > have the ability to influence this in a new way is an entirely
> > separate discussion. It's one that involves ABI and user guarantees.
> > 
> > Right now Roman's patches make no guarantees on how the cgroup tree is
> > descended. But once we define an interface for prioritization, it
> > locks the victim algorithm into place to a certain extent.
> > 
> 
> The patchset compares memory cgroup size relative to sibling cgroups only, 
> the same comparison for memory.oom_priority.  There is a guarantee 
> provided on how cgroup size is compared in select_victim_memcg(), it 
> hierarchically accumulates the "size" from leaf nodes up to the root memcg 
> and then iterates the tree comparing sizes between sibling cgroups to 
> choose a victim memcg.  That algorithm could be more elaborately described 
> in the documentation, but we simply cannot change the implementation of 
> select_victim_memcg() later even without oom priorities since users cannot 
> get inconsistent results after opting into a feature between kernel 
> versions.  I believe the selection criteria should be implemented to be 
> deterministic, as select_victim_memcg() does, and the documentation should 
> fully describe what the selection criteria is, and then allow the user to 
> decide.

I wholeheartedly disagree. We have changed the behavior multiple times
in the past. In fact, you have arguably done the most drastic changes
to the algorithm since the OOM killer was first introduced. E.g.

a63d83f427fb oom: badness heuristic rewrite

And that's completely fine. Because this thing is not a resource
management tool for userspace, it's the kernel saving itself. At best
in a manner that's not too surprising to userspace.

To me, your argument behind the NAK still boils down to "this doesn't
support my highly specialized usecase." But since it doesn't prohibit
your usecase - which isn't even supported upstream, btw - this really
doesn't carry much weight.

I'd say if you want configurability on top of Roman's code, please
submit patches and push the case for these in a separate effort.

Thanks


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-21 Thread David Rientjes
On Thu, 21 Sep 2017, Johannes Weiner wrote:

> That's a ridiculous nak.
> 
> The fact that this patch series doesn't solve your particular problem
> is not a technical argument to *reject* somebody else's work to solve
> a different problem. It's not a regression when behavior is completely
> unchanged unless you explicitly opt into a new functionality.
> 
> So let's stay reasonable here.
> 

The issue is that if you opt-in to the new feature, then you are forced to 
change /proc/pid/oom_score_adj of all processes attached to a cgroup that 
you do not want oom killed based on size to be oom disabled.  The kernel 
provides no other remedy without oom priorities since the new feature 
would otherwise disregard oom_score_adj.  In that case, userspace is 
racing in two ways: (1) attaching a process to a memcg you want to protect 
from oom kill (a first class, vital, large memory hog job) and setting it to 
oom disable, and (2) adjusting other cgroups to make them eligible after the 
first oom kill.

It doesn't have anything to do with my particular usecase, but rather the 
ability of userspace to influence the decisions of the kernel.  Previous 
to this patchset, when selection is done based on process size, userspace 
has full control over selection.  After this patchset, userspace has no 
control other than setting all processes to be oom disabled if the largest 
memory consumer is to be protected.  Roman's memory.oom_priority provides 
a perfect solution for userspace to be able to influence this decision 
making and causes no change in behavior for users who choose not to tune 
memory.oom_priority.  The nack originates from the general need for 
userspace influence over oom victim selection and to avoid userspace 
needing to take the rather drastic measure of setting all processes to be 
oom disabled to prevent oom kill in kernels before oom priorities are 
introduced.

> The patch series has merit as it currently stands. It makes OOM
> killing in a cgrouped system fairer and less surprising. Whether you
> have the ability to influence this in a new way is an entirely
> separate discussion. It's one that involves ABI and user guarantees.
> 
> Right now Roman's patches make no guarantees on how the cgroup tree is
> descended. But once we define an interface for prioritization, it
> locks the victim algorithm into place to a certain extent.
> 

The patchset compares memory cgroup size relative to sibling cgroups only, 
the same comparison for memory.oom_priority.  There is a guarantee 
provided on how cgroup size is compared in select_victim_memcg(), it 
hierarchically accumulates the "size" from leaf nodes up to the root memcg 
and then iterates the tree comparing sizes between sibling cgroups to 
choose a victim memcg.  That algorithm could be more elaborately described 
in the documentation, but we simply cannot change the implementation of 
select_victim_memcg() later even without oom priorities since users cannot 
get inconsistent results after opting into a feature between kernel 
versions.  I believe the selection criteria should be implemented to be 
deterministic, as select_victim_memcg() does, and the documentation should 
fully describe what the selection criteria is, and then allow the user to 
decide.
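
For reference, the guarantee described above can be written down in a few
lines of userspace pseudocode: one pass accumulating sizes from the leaves
up, then a walk down the tree comparing siblings. This is only a model of
the description in this mail, using a toy dict-based tree; it is not the
actual select_victim_memcg() implementation.

def accumulate(node, sizes):
    """Bottom-up pass: record the hierarchical size of every cgroup."""
    sizes[node["name"]] = node["usage"] + sum(
        accumulate(child, sizes) for child in node["children"])
    return sizes[node["name"]]

def select_victim_memcg_model(root):
    """Top-down pass: follow the biggest sibling, stop at a leaf or at a
    kill-all (group) cgroup."""
    sizes = {}
    accumulate(root, sizes)
    node = root
    while node["children"] and not node["group"]:
        node = max(node["children"], key=lambda c: sizes[c["name"]])
    return node["name"]

if __name__ == "__main__":
    tree = {"name": "root", "usage": 0, "group": False, "children": [
        {"name": "A", "usage": 0, "group": False, "children": [
            {"name": "B", "usage": 100, "group": False, "children": []},
            {"name": "C", "usage": 300, "group": False, "children": []}]},
        {"name": "D", "usage": 250, "group": True, "children": []}]}
    print(select_victim_memcg_model(tree))  # "C": A (400) beats D (250), then C wins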

> It also involves a discussion about how much control userspace should
> have over OOM killing in the first place. It's a last-minute effort to
> save the kernel from deadlocking on memory. Whether that is the time
> and place to have userspace make clever resource management decisions
> is an entirely different thing than what Roman is doing.
> 
> But this patch series doesn't prevent any such future discussion and
> implementations, and it's not useless without it. So let's not
> conflate these two things, and hold the priority patch for now.
> 

Roman is planning on introducing memory.oom_priority back into the 
patchset per https://marc.info/?l=linux-kernel&m=150574701126877 and I 
agree with the very clear semantic that it introduces: to have the 
size-based comparison use the same rules as the userspace priority 
comparison.  It's very powerful and I'm happy to ack the final version 
that he plans on posting.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-21 Thread Johannes Weiner
On Mon, Sep 11, 2017 at 01:44:39PM -0700, David Rientjes wrote:
> On Mon, 11 Sep 2017, Roman Gushchin wrote:
> 
> > This patchset makes the OOM killer cgroup-aware.
> > 
> > v8:
> >   - Do not kill tasks with OOM_SCORE_ADJ -1000
> >   - Make the whole thing opt-in with cgroup mount option control
> >   - Drop oom_priority for further discussions
> 
> Nack, we specifically require oom_priority for this to function correctly, 
> otherwise we cannot prefer to kill from low priority leaf memcgs as 
> required.  v8 appears to implement new functionality that we want, to 
> compare two memcgs based on usage, but without the ability to influence 
> that decision to protect important userspace, so now I'm in a position 
> where (1) nothing has changed if I don't use the new mount option or (2) I 
> get completely different oom kill selection with the new mount option but 
> not the ability to influence it.  I was much happier with the direction 
> that v7 was taking, but since v8 causes us to regress without the ability 
> to change memcg priority, this has to be nacked.

That's a ridiculous nak.

The fact that this patch series doesn't solve your particular problem
is not a technical argument to *reject* somebody else's work to solve
a different problem. It's not a regression when behavior is completely
unchanged unless you explicitly opt into a new functionality.

So let's stay reasonable here.

The patch series has merit as it currently stands. It makes OOM
killing in a cgrouped system fairer and less surprising. Whether you
have the ability to influence this in a new way is an entirely
separate discussion. It's one that involves ABI and user guarantees.

Right now Roman's patches make no guarantees on how the cgroup tree is
descended. But once we define an interface for prioritization, it
locks the victim algorithm into place to a certain extent.

It also involves a discussion about how much control userspace should
have over OOM killing in the first place. It's a last-minute effort to
save the kernel from deadlocking on memory. Whether that is the time
and place to have userspace make clever resource management decisions
is an entirely different thing than what Roman is doing.

But this patch series doesn't prevent any such future discussion and
implementations, and it's not useless without it. So let's not
conflate these two things, and hold the priority patch for now.

Thanks.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-21 Thread David Rientjes
On Mon, 18 Sep 2017, Roman Gushchin wrote:

> > As said in other email. We can make priorities hierarchical (in the same
> > sense as hard limit or others) so that children cannot override their
> > parent.
> 
> You mean they can set the knob to any value, but parent's value is enforced,
> if it's greater than child's value?
> 
> If so, this sounds logical to me. Then we have size-based comparison and
> priority-based comparison with similar rules, and all use cases are covered.
> 
> Ok, can we stick with this design?
> Then I'll return oom_priorities in place, and post a (hopefully) final 
> version.
> 

I just want to make sure that we are going with your original 
implementation here: that oom_priority is only effective for comparing 
sibling memory cgroups and nothing beyond that.  The value alone has no 
relationship to any ancestor.  We can't set oom_priority based on the 
priorities of any memory cgroups other than our own siblings, because 
we have no control over how those change.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-21 Thread David Rientjes
On Wed, 20 Sep 2017, Roman Gushchin wrote:

> > It's actually much more complex because in our environment we'd need an 
> > "activity manager" with CAP_SYS_RESOURCE to control oom priorities of user 
> > subcontainers when today it need only be concerned with top-level memory 
> > cgroups.  Users can create their own hierarchies with their own oom 
> > priorities at will, it doesn't alter the selection heuristic for another 
> > other user running on the same system and gives them full control over the 
> > selection in their own subtree.  We shouldn't need to have a system-wide 
> > daemon with CAP_SYS_RESOURCE be required to manage subcontainers when 
> > nothing else requires it.  I believe it's also much easier to document: 
> > oom_priority is considered for all sibling cgroups at each level of the 
> > hierarchy and the cgroup with the lowest priority value gets iterated.
> 
> I do agree actually. System-wide OOM priorities make no sense.
> 
> Always compare sibling cgroups, either by priority or size, seems to be
> simple, clear and powerful enough for all reasonable use cases. Am I right,
> that it's exactly what you've used internally? This is a perfect confirmation,
> I believe.
> 

We've used it for at least four years; I added my Tested-by to your patch; 
we would convert to your implementation if it is merged upstream; and I 
would enthusiastically support your patch if you would integrate it back 
into your series.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-20 Thread Roman Gushchin
On Tue, Sep 19, 2017 at 01:54:48PM -0700, David Rientjes wrote:
> On Fri, 15 Sep 2017, Roman Gushchin wrote:
> 
> > > > > But then you just enforce a structural restriction on your 
> > > > > configuration
> > > > > because
> > > > >     root
> > > > >    /    \
> > > > >   A      D
> > > > >  / \
> > > > > B   C
> > > > >
> > > > > is a different thing than
> > > > >     root
> > > > >    / | \
> > > > >   B  C  D
> > > > >
> > > > 
> > > > I actually don't have a strong argument against an approach to select
> > > > largest leaf or kill-all-set memcg. I think, in practice there will be
> > > > no much difference.
> > > > 
> > > > The only real concern I have is that then we have to do the same with
> > > > oom_priorities (select largest priority tree-wide), and this will limit
> > > > an ability to enforce the priority by parent cgroup.
> > > > 
> > > 
> > > Yes, oom_priority cannot select the largest priority tree-wide for 
> > > exactly 
> > > that reason.  We need the ability to control from which subtree the kill 
> > > occurs in ancestor cgroups.  If multiple jobs are allocated their own 
> > > cgroups and they can own memory.oom_priority for their own subcontainers, 
> > > this becomes quite powerful so they can define their own oom priorities.  
> > >  
> > > Otherwise, they can easily override the oom priorities of other cgroups.
> > 
> > I believe, it's a solvable problem: we can require CAP_SYS_RESOURCE to set
> > the oom_priority below parent's value, or something like this.
> > 
> > But it looks more complex, and I'm not sure there are real examples,
> > when we have to compare memcgs, which are on different levels
> > (or in different subtrees).
> > 
> 
> It's actually much more complex because in our environment we'd need an 
> "activity manager" with CAP_SYS_RESOURCE to control oom priorities of user 
> subcontainers when today it need only be concerned with top-level memory 
> cgroups.  Users can create their own hierarchies with their own oom 
> priorities at will, it doesn't alter the selection heuristic for another 
> other user running on the same system and gives them full control over the 
> selection in their own subtree.  We shouldn't need to have a system-wide 
> daemon with CAP_SYS_RESOURCE be required to manage subcontainers when 
> nothing else requires it.  I believe it's also much easier to document: 
> oom_priority is considered for all sibling cgroups at each level of the 
> hierarchy and the cgroup with the lowest priority value gets iterated.

I do agree actually. System-wide OOM priorities make no sense.

Always comparing sibling cgroups, either by priority or by size, seems
simple, clear and powerful enough for all reasonable use cases. Am I right
that this is exactly what you've used internally? That would be a perfect
confirmation, I believe.

Thanks!


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-20 Thread Roman Gushchin
On Mon, Sep 18, 2017 at 08:14:05AM +0200, Michal Hocko wrote:
> On Fri 15-09-17 08:23:01, Roman Gushchin wrote:
> > On Fri, Sep 15, 2017 at 12:58:26PM +0200, Michal Hocko wrote:
> > > On Thu 14-09-17 09:05:48, Roman Gushchin wrote:
> > > > On Thu, Sep 14, 2017 at 03:40:14PM +0200, Michal Hocko wrote:
> > > > > On Wed 13-09-17 14:56:07, Roman Gushchin wrote:
> > > > > > On Wed, Sep 13, 2017 at 02:29:14PM +0200, Michal Hocko wrote:
> > > > > [...]
> > > > > > > I strongly believe that comparing only leaf memcgs
> > > > > > > is more straightforward and it doesn't lead to unexpected results 
> > > > > > > as
> > > > > > > mentioned before (kill a small memcg which is a part of the larger
> > > > > > > sub-hierarchy).
> > > > > > 
> > > > > > One of two main goals of this patchset is to introduce cgroup-level
> > > > > > fairness: bigger cgroups should be affected more than smaller,
> > > > > > despite the size of tasks inside. I believe the same principle
> > > > > > should be used for cgroups.
> > > > > 
> > > > > Yes bigger cgroups should be preferred but I fail to see why bigger
> > > > > hierarchies should be considered as well if they are not kill-all. And
> > > > > whether non-leaf memcgs should allow kill-all is not entirely clear to
> > > > > me. What would be the usecase?
> > > > 
> > > > We definitely want to support kill-all for non-leaf cgroups.
> > > > A workload can consist of several cgroups and we want to clean up
> > > > the whole thing on OOM.
> > > 
> > > Could you be more specific about such a workload? E.g. how can be such a
> > > hierarchy handled consistently when its sub-tree gets killed due to
> > > internal memory pressure?
> > 
> > Or just system-wide OOM.
> > 
> > > Or do you expect that none of the subtree will
> > > have hard limit configured?
> > 
> > And this can also be a case: the whole workload may have hard limit
> > configured, while internal memcgs have only memory.low set for "soft"
> > prioritization.
> > 
> > > 
> > > But then you just enforce a structural restriction on your configuration
> > > because
> > >     root
> > >    /    \
> > >   A      D
> > >  / \
> > > B   C
> > >
> > > is a different thing than
> > >     root
> > >    / | \
> > >   B  C  D
> > >
> > 
> > I actually don't have a strong argument against an approach to select
> > largest leaf or kill-all-set memcg. I think, in practice there will be
> > no much difference.

I've tried to implement this approach, and it's really debatable.
Although your example looks reasonable, the opposite example is also valid:
you might want to compare whole hierarchies, and that is quite a typical usecase.

Assume you have several containerized workloads on a machine (probably each
contained in a memcg with memory.max set), with some hierarchy of cgroups
inside. Then, in case of a global memory shortage, we want to reclaim some
memory from the biggest workload, and the selection should not depend on the
group_oom settings. It would be really strange if setting group_oom raised
the chances of being killed.

In other words, let's imagine processes as leaf nodes in the memcg tree. We
decided to select the biggest memcg and kill one or more processes inside
(depending on the group_oom setting), but the memcg selection doesn't depend
on it. We do not compare processes from different cgroups, nor cgroups with
processes. The same should apply to cgroups: why would we want to compare
cgroups from different sub-trees?

While the size-based comparison can be implemented with this approach,
the priority-based one is really weird (as David mentioned).
If priorities have no hierarchical meaning at all, we lack the very important
ability to enforce oom_priority hierarchically. Otherwise we have to invent
some complex rules for oom_priority propagation (e.g. if someone raises the
oom_priority in a parent, should it be applied to the children immediately,
etc.).

The meaning of the oom_group knob also becomes more complex: it affects both
the victim selection and the OOM action. _ANY_ mechanism which allows
influencing OOM victim selection (be it priorities or a bpf-based approach)
should not have a global, system-wide meaning; that breaks everything.

I do understand your point, but the same is true for other stuff, right?
E.g. cpu time distribution (and io, etc.) depends on the hierarchy
configuration. It's a limitation, but it's ok, as the user should create a
hierarchy which reflects the logical relations between processes and groups
of processes. Otherwise we're headed for configuration hell.

In any case, OOM is a last-resort mechanism. The goal is to reclaim some
memory without crashing the system or leaving it in a totally broken state.
Any really complex memory-management policy in userspace should be applied
_before_ OOM happens. So I don't think we have to support all possible
configurations here, as long as we're able to achieve the main goal (kill
some processes and do not leave broken systems/containers).


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-19 Thread David Rientjes
On Fri, 15 Sep 2017, Roman Gushchin wrote:

> > > > But then you just enforce a structural restriction on your configuration
> > > > because
> > > >     root
> > > >    /    \
> > > >   A      D
> > > >  / \
> > > > B   C
> > > >
> > > > is a different thing than
> > > >     root
> > > >    / | \
> > > >   B  C  D
> > > >
> > > 
> > > I actually don't have a strong argument against an approach to select
> > > largest leaf or kill-all-set memcg. I think, in practice there will be
> > > no much difference.
> > > 
> > > The only real concern I have is that then we have to do the same with
> > > oom_priorities (select largest priority tree-wide), and this will limit
> > > an ability to enforce the priority by parent cgroup.
> > > 
> > 
> > Yes, oom_priority cannot select the largest priority tree-wide for exactly 
> > that reason.  We need the ability to control from which subtree the kill 
> > occurs in ancestor cgroups.  If multiple jobs are allocated their own 
> > cgroups and they can own memory.oom_priority for their own subcontainers, 
> > this becomes quite powerful so they can define their own oom priorities.   
> > Otherwise, they can easily override the oom priorities of other cgroups.
> 
> I believe, it's a solvable problem: we can require CAP_SYS_RESOURCE to set
> the oom_priority below parent's value, or something like this.
> 
> But it looks more complex, and I'm not sure there are real examples,
> when we have to compare memcgs, which are on different levels
> (or in different subtrees).
> 

It's actually much more complex because in our environment we'd need an 
"activity manager" with CAP_SYS_RESOURCE to control oom priorities of user 
subcontainers when today it need only be concerned with top-level memory 
cgroups.  Users can create their own hierarchies with their own oom 
priorities at will; it doesn't alter the selection heuristic for any 
other user running on the same system and gives them full control over the 
selection in their own subtree.  We shouldn't need a system-wide daemon 
with CAP_SYS_RESOURCE just to manage subcontainers when nothing else 
requires it.  I believe it's also much easier to document: 
oom_priority is considered for all sibling cgroups at each level of the 
hierarchy and the cgroup with the lowest priority value gets iterated.
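
As a sketch of that one-line description (again with a made-up struct, not 
kernel code; how ties between equal priorities are broken, e.g. by falling 
back to the size comparison, is an assumption here):

/*
 * At each level of the hierarchy only siblings are compared, and the
 * walk descends into the cgroup with the lowest oom_priority value
 * (lower value == preferred victim, as described above).
 */
#include <stdio.h>

struct memcg {
    const char *name;
    int oom_priority;           /* lower value == preferred victim */
    unsigned long total_size;   /* used here only to break ties    */
    struct memcg *children[4];
    int nr_children;
};

static struct memcg *pick_sibling(struct memcg *parent)
{
    struct memcg *best = parent->children[0];
    int i;

    for (i = 1; i < parent->nr_children; i++) {
        struct memcg *cg = parent->children[i];

        if (cg->oom_priority < best->oom_priority ||
            (cg->oom_priority == best->oom_priority &&
             cg->total_size > best->total_size))
            best = cg;
    }
    return best;
}

static struct memcg *select_victim(struct memcg *root)
{
    struct memcg *victim = root;

    while (victim->nr_children)
        victim = pick_sibling(victim);
    return victim;
}

int main(void)
{
    struct memcg e = { "E", 3, 50 }, f = { "F", 1, 10 }, g = { "G", 2, 40 };
    struct memcg d = { "D", 0, 0, { &e, &f, &g }, 3 };
    struct memcg root = { "root", 0, 0, { &d }, 1 };

    printf("victim: %s\n", select_victim(&root)->name);  /* prints F */
    return 0;
}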


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-19 Thread David Rientjes
On Mon, 18 Sep 2017, Michal Hocko wrote:

> > > > But then you just enforce a structural restriction on your configuration
> > > > because
> > > >     root
> > > >    /    \
> > > >   A      D
> > > >  / \
> > > > B   C
> > > >
> > > > is a different thing than
> > > >     root
> > > >    / | \
> > > >   B  C  D
> > > >
> > > 
> > > I actually don't have a strong argument against an approach to select
> > > largest leaf or kill-all-set memcg. I think, in practice there will be
> > > no much difference.
> > > 
> > > The only real concern I have is that then we have to do the same with
> > > oom_priorities (select largest priority tree-wide), and this will limit
> > > an ability to enforce the priority by parent cgroup.
> > > 
> > 
> > Yes, oom_priority cannot select the largest priority tree-wide for exactly 
> > that reason.  We need the ability to control from which subtree the kill 
> > occurs in ancestor cgroups.  If multiple jobs are allocated their own 
> > cgroups and they can own memory.oom_priority for their own subcontainers, 
> > this becomes quite powerful so they can define their own oom priorities.   
> > Otherwise, they can easily override the oom priorities of other cgroups.
> 
> Could you be more specific about your usecase? What would be the
> problem if we only allow increasing the priority in children (like other
> hierarchical controls)?
> 

For memcg constrained oom conditions, there is only a theoretical issue if 
the subtree is not under the control of a single user and various users 
can alter their priorities without knowledge of the priorities of other 
children in the same subtree that is oom, or those values change without 
knowledge of a child.  I don't know of anybody that configures memory 
cgroup hierarchies that way, though.

The problem is more obvious in system oom conditions.  If we have two 
top-level memory cgroups with the same "job" priority, they get the same 
oom priority.  The user who configures subcontainers is now always 
targeted for oom kill in an "increase priority in children" policy.

The hierarchy becomes this:

        root
       /    \
      A      D
     / \   / | \
    B   C E  F  G

where A/memory.oom_priority == D/memory.oom_priority.

D wants to kill in order of E -> F -> G, but can't configure that if
B = A - 1 and C = B - 1.  It also shouldn't need to adjust its own oom 
priorities based on a hierarchy outside its control and which can change 
at any time at the discretion of the user (with namespaces you may not 
even be able to access it).

But also if A/memory.oom_priority = D/memory.oom_priority - 100, A is 
preferred unless its subcontainers configure themselves in a way where 
they have higher oom priority values than E, F, and G.  That may yield 
very different results when additional jobs get scheduled on the system 
(and H tree) where the user has full control over their own oom 
priorities, even when the value must only increase.


Re: [v8 0/4] cgroup-aware OOM killer

2017-09-18 Thread Roman Gushchin
On Mon, Sep 18, 2017 at 08:20:45AM +0200, Michal Hocko wrote:
> On Fri 15-09-17 14:08:07, Roman Gushchin wrote:
> > On Fri, Sep 15, 2017 at 12:55:55PM -0700, David Rientjes wrote:
> > > On Fri, 15 Sep 2017, Roman Gushchin wrote:
> > > 
> > > > > But then you just enforce a structural restriction on your 
> > > > > configuration
> > > > > because
> > > > >     root
> > > > >    /    \
> > > > >   A      D
> > > > >  / \
> > > > > B   C
> > > > >
> > > > > is a different thing than
> > > > >     root
> > > > >    / | \
> > > > >   B  C  D
> > > > >
> > > > 
> > > > I actually don't have a strong argument against an approach to select
> > > > largest leaf or kill-all-set memcg. I think, in practice there will be
> > > > no much difference.
> > > > 
> > > > The only real concern I have is that then we have to do the same with
> > > > oom_priorities (select largest priority tree-wide), and this will limit
> > > > an ability to enforce the priority by parent cgroup.
> > > > 
> > > 
> > > Yes, oom_priority cannot select the largest priority tree-wide for 
> > > exactly 
> > > that reason.  We need the ability to control from which subtree the kill 
> > > occurs in ancestor cgroups.  If multiple jobs are allocated their own 
> > > cgroups and they can own memory.oom_priority for their own subcontainers, 
> > > this becomes quite powerful so they can define their own oom priorities.  
> > >  
> > > Otherwise, they can easily override the oom priorities of other cgroups.
> > 
> > I believe, it's a solvable problem: we can require CAP_SYS_RESOURCE to set
> > the oom_priority below parent's value, or something like this.
> 
> As said in other email. We can make priorities hierarchical (in the same
> sense as hard limit or others) so that children cannot override their
> parent.

You mean they can set the knob to any value, but the parent's value is
enforced if it's greater than the child's value?

If so, this sounds logical to me. Then we have size-based comparison and
priority-based comparison with similar rules, and all use cases are covered.
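
A minimal sketch of that rule (assuming the parent's value is enforced at 
comparison time; it could equally be clamped when the knob is written, which 
is left open here):

/*
 * "Parent's value is enforced if it's greater than the child's value":
 * a child may set any value, but its effective priority never drops
 * below what any ancestor has set.
 */
#include <stdio.h>

struct memcg {
    int oom_priority;       /* value written by the owner of the cgroup */
    struct memcg *parent;
};

static int effective_priority(const struct memcg *cg)
{
    int prio = cg->oom_priority;

    for (cg = cg->parent; cg; cg = cg->parent)
        if (cg->oom_priority > prio)
            prio = cg->oom_priority;
    return prio;
}

int main(void)
{
    struct memcg a = { 5, NULL };
    struct memcg b = { 1, &a };     /* tries to undercut its parent */
    struct memcg c = { 9, &a };

    printf("B: %d, C: %d\n", effective_priority(&b), effective_priority(&c));
    /* prints "B: 5, C: 9" */
    return 0;
}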

Ok, can we stick with this design?
Then I'll put oom_priorities back in place and post a (hopefully) final version.

Thanks!

