On Fri, 26 Jan 2018, Andrew Morton wrote:
> > -ECONFUSED. We want to have a mount option that has the sole purpose of
> > doing echo cgroup > /mnt/cgroup/memory.oom_policy?
>
> Approximately. Let me put it another way: can we modify your patchset
> so that the mount option remains, and
On Fri, 26 Jan 2018, Michal Hocko wrote:
> > Could you elaborate on why specifying the oom policy for the entire
> > hierarchy as part of the root mem cgroup and also for individual subtrees
> > is incomplete? It allows admins to specify and delegate policy decisions
> > to subtrees owners as
On Thu, 25 Jan 2018, Andrew Morton wrote:
> > Now that each mem cgroup on the system has a memory.oom_policy tunable to
> > specify oom kill selection behavior, remove the needless "groupoom" mount
> > option that requires (1) the entire system to be forced, perhaps
> > unnecessarily, perhaps
This allows administrators, for example, to require users in
their own top-level mem cgroup subtree to be accounted for with
hierarchical usage. In other words, they can no longer evade the oom killer
by using other controllers or subcontainers.
Signed-off-by: David Rientjes <rient...@google.com>
--
differs from the traditional per process selection, and (2) a remount to
change.
Instead of enabling the cgroup aware oom killer with the "groupoom" mount
option, set the mem cgroup subtree's memory.oom_policy to "cgroup".
Signed-off-by: David Rientjes <rient...@google.com>
-
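For concreteness, here is how the interface under discussion would be exercised from a shell. This is a sketch of the proposal only: memory.oom_policy never landed in mainline, and the mount point and cgroup names are illustrative.

```shell
# Mount cgroup v2 (mount point is illustrative).
mount -t cgroup2 none /mnt/cgroup

# Proposed replacement for the "groupoom" mount option: enable
# cgroup-aware victim selection for a subtree by writing "cgroup"
# to its memory.oom_policy file.
echo cgroup > /mnt/cgroup/memory.oom_policy

# A delegated subtree owner could still opt back into traditional
# per-process selection for their own jobs:
mkdir /mnt/cgroup/job
echo none > /mnt/cgroup/job/memory.oom_policy
```

The point of contention in the thread is exactly this delegation: a per-subtree file lets the policy differ across the hierarchy, which a single system-wide mount option cannot express.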
selection should be done per process.
Signed-off-by: David Rientjes <rient...@google.com>
---
Documentation/cgroup-v2.txt | 9 +
include/linux/memcontrol.h | 11 +++
mm/memcontrol.c | 35 +++
3 files changed, 55 insertion
There are three significant concerns about the cgroup aware oom killer as
it is implemented in -mm:
(1) allows users to evade the oom killer by creating subcontainers or
using other controllers since scoring is done per cgroup and not
hierarchically,
(2) does not allow the user to
On Thu, 25 Jan 2018, Michal Hocko wrote:
> > As a result, this would remove patch 3/4 from the series. Do you have any
> > other feedback regarding the remainder of this patch series before I
> > rebase it?
>
> Yes, and I have provided it already. What you are proposing is
> incomplete at
On Wed, 24 Jan 2018, Michal Hocko wrote:
> > The current implementation of memory.oom_group is based on top of a
> > selection implementation that is broken in three ways I have listed for
> > months:
>
> This doesn't lead to anywhere. You are not presenting any new arguments
> and you are
On Tue, 23 Jan 2018, Michal Hocko wrote:
> > It can't, because the current patchset locks the system into a single
> > selection criteria that is unnecessary and the mount option would become a
> > no-op after the policy per subtree becomes configurable by the user as
> > part of the hierarchy
On Sat, 20 Jan 2018, Tejun Heo wrote:
> > Hearing no response, I'll implement this as a separate tunable in a v2
> > series assuming there are no better ideas proposed before next week. One
> > of the nice things about a separate tunable is that an admin can control
> > the overall policy and
On Wed, 17 Jan 2018, David Rientjes wrote:
> Yes, this is a valid point. The policy of "tree" and "all" are identical
> policies and then the mechanism differs wrt to whether one process is
> killed or all eligible processes are killed, respectively. My motiva
On Wed, 17 Jan 2018, Roman Gushchin wrote:
> You're introducing a new oom_policy knob, which has two separate sets
> of possible values for the root and non-root cgroups. I don't think
> it aligns with the existing cgroup v2 design.
>
The root mem cgroup can use "none" or "cgroup" to either
On Wed, 17 Jan 2018, Michal Hocko wrote:
> Absolutely agreed! And moreover, there are not all that many ways what
> to do as an action. You just kill a logical entity - be it a process or
> a logical group of processes. But you have way too many policies how
> to select that entity. Do you want
On Wed, 17 Jan 2018, Tejun Heo wrote:
> Hello, David.
>
Hi Tejun!
> > The behavior of killing an entire indivisible memory consumer, enabled
> > by memory.oom_group, is an oom policy itself. It specifies that all
>
> I thought we discussed this before but maybe I'm misremembering.
> There
w by writing "cgroup" to the root
mem cgroup's memory.oom_policy).
The "all" oom policy cannot be enabled on the root mem cgroup.
Signed-off-by: David Rientjes <rient...@google.com>
---
Documentation/cgroup-v2.txt | 51 ++---
includ
On Mon, 15 Jan 2018, Michal Hocko wrote:
> > No, this isn't how kernel features get introduced. We don't design a new
> > kernel feature with its own API for a highly specialized usecase and then
> > claim we'll fix the problems later. Users will work around the
> > constraints of the new
On Mon, 15 Jan 2018, Johannes Weiner wrote:
> > It's quite trivial to allow the root mem cgroup to be compared exactly the
> > same as another cgroup. Please see
> > https://marc.info/?l=linux-kernel=151579459920305.
>
> This only says "that will be fixed" and doesn't address why I care.
>
On Sat, 13 Jan 2018, Johannes Weiner wrote:
> You don't have any control and no accounting of the stuff situated
> inside the root cgroup, so it doesn't make sense to leave anything in
> there while also using sophisticated containerization mechanisms like
> this group oom setting.
>
> In fact,
in that. We should not fall into a cgroup v1
mentality which became very difficult to make extensible. Let's make a
feature that is generally useful, complete, and empowers the user rather
than push them into a corner with a system wide policy with obvious
downsides.
For these reasons, and the
On Thu, 11 Jan 2018, Michal Hocko wrote:
> > > I find this problem quite minor, because I haven't seen any practical
> > > problems
> > > caused by accounting of the root cgroup memory.
> > > If it's a serious problem for you, it can be solved without switching to
> > > the
> > > hierarchical
On Wed, 10 Jan 2018, Roman Gushchin wrote:
> > 1. The unfair comparison of the root mem cgroup vs leaf mem cgroups
> >
> > The patchset uses two different heuristics to compare root and leaf mem
> > cgroups and scores them based on number of pages. For the root mem
> > cgroup, it totals the
On Thu, 30 Nov 2017, Andrew Morton wrote:
> > This patchset makes the OOM killer cgroup-aware.
>
> Thanks, I'll grab these.
>
> There has been controversy over this patchset, to say the least. I
> can't say that I followed it closely! Could those who still have
> reservations please summarise
comment about invalidate_range() always being called
under the ptl spinlock.
Signed-off-by: David Rientjes <rient...@google.com>
---
include/linux/mmu_notifier.h | 16 +---
1 file changed, 9 insertions(+), 7 deletions(-)
diff --git a/include/linux/mmu_notifier.h b/include/linu
On Fri, 15 Dec 2017, Michal Hocko wrote:
> > This uses the new annotation to determine if an mm has mmu notifiers with
> > blockable invalidate range callbacks to avoid oom reaping. Otherwise, the
> > callbacks are used around unmap_page_range().
>
> Do you have any example where this helped?
This uses the new annotation to determine if an mm has mmu notifiers with
blockable invalidate range callbacks to avoid oom reaping. Otherwise, the
callbacks are used around unmap_page_range().
Signed-off-by: David Rientjes <rient...@google.com>
---
mm/oom_kill.c | 21 +++---
patch adds a "flags" field
to mmu notifier ops that can set a bit to indicate that these callbacks do
not block.
The implementation is steered toward an expensive slowpath, such as after
the oom reaper has grabbed mm->mmap_sem of a still alive oom victim.
Signed-off-by: David Rientjes <ri
On Wed, 13 Dec 2017, Christian König wrote:
> > > > --- a/drivers/misc/sgi-gru/grutlbpurge.c
> > > > +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> > > > @@ -298,6 +298,7 @@ struct gru_mm_struct
> > > > *gru_register_mmu_notifier(void)
> > > > return ERR_PTR(-ENOMEM);
> > > >
On Tue, 12 Dec 2017, Randy Dunlap wrote:
> Sure, but I didn't keep the patch emails.
>
> Acked-by: Randy Dunlap
>
You may have noticed changing functions like is_file_lru() to bool when it
is used to index into an array or as part of an arithmetic operation for
ZVC stats. I'm not sure why
On Tue, 12 Dec 2017, Dimitri Sivanich wrote:
> > --- a/drivers/misc/sgi-gru/grutlbpurge.c
> > +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> > @@ -298,6 +298,7 @@ struct gru_mm_struct *gru_register_mmu_notifier(void)
> > return ERR_PTR(-ENOMEM);
> > STAT(gms_alloc);
>
On Mon, 11 Dec 2017, Yaowei Bai wrote:
> This patchset makes some *_is_* like functions return bool because
> these functions only use true or false as their return values.
>
> No functional changes.
>
I think the concern about this type of patchset in the past is that it is
unnecessary churn
On Mon, 11 Dec 2017, Paolo Bonzini wrote:
> > Commit 4d4bbd8526a8 ("mm, oom_reaper: skip mm structs with mmu notifiers")
> > prevented the oom reaper from unmapping private anonymous memory with the
> > oom reaper when the oom victim mm had mmu notifiers registered.
> >
> > The rationale is that
patch adds a "flags" field
for mmu notifiers that can set a bit to indicate that these callbacks do
block.
The implementation is steered toward an expensive slowpath, such as after
the oom reaper has grabbed mm->mmap_sem of a still alive oom victim.
Signed-off-by: David Rientjes <rien
On Thu, 7 Dec 2017, Suren Baghdasaryan wrote:
> Slab shrinkers can be quite time consuming and when signal
> is pending they can delay handling of the signal. If fatal
> signal is pending there is no point in shrinking that process
> since it will be killed anyway. This change checks for pending
On Thu, 7 Dec 2017, David Rientjes wrote:
> I'm backporting and testing the following patch against Linus's tree. To
> clarify an earlier point, we don't actually have any change from upstream
> code that allows for free_pgtables() before the
> set_bit(MMF_OOM_SKIP);down_writ
On Thu, 7 Dec 2017, Michal Hocko wrote:
> yes. I will fold the following in if this turned out to really address
> David's issue. But I suspect this will be the case considering the NULL
> pmd in the report which would suggest racing with free_pgtable...
>
I'm backporting and testing the
On Thu, 7 Dec 2017, Michal Hocko wrote:
> Very well spotted! It could be any task in fact (e.g. somebody reading
> from /proc/ file which requires mm_struct).
>
> oom_reaper				oom_victim task
> 					mmget_not_zero
>
On Wed, 6 Dec 2017, Tetsuo Handa wrote:
> > > One way to solve the issue is to have two mm flags: one to indicate the
> > > mm
> > > is entering unmap_vmas(): set the flag, do down_write(&mm->mmap_sem);
> > > up_write(&mm->mmap_sem), then unmap_vmas(). The oom reaper needs this
> > > flag clear, not
On Tue, 5 Dec 2017, David Rientjes wrote:
> One way to solve the issue is to have two mm flags: one to indicate the mm
> is entering unmap_vmas(): set the flag, do down_write(&mm->mmap_sem);
> up_write(&mm->mmap_sem), then unmap_vmas(). The oom reaper needs this
> flag clear, not M
Hi,
I'd like to understand the synchronization between the oom_reaper's
unmap_page_range() and exit_mmap(). The latter does not hold
mm->mmap_sem: it's supposed to be the last thread operating on the mm
before it is destroyed.
If unmap_page_range() races with unmap_vmas(), we trivially call
On Fri, 17 Nov 2017, Yisheng Xie wrote:
> We have already checked whether maxnode is a page worth of bits, by:
> maxnode > PAGE_SIZE*BITS_PER_BYTE
>
> So no need to check it once more.
>
> Acked-by: Vlastimil Babka
> Signed-off-by: Yisheng Xie
Acked-by: David Rientjes
inux-foundation.org>
> Cc: Michal Hocko <mho...@suse.com>
> Cc: Johannes Weiner <han...@cmpxchg.org>
> Cc: Mike Kravetz <mike.krav...@oracle.com>
> Cc: "Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com>
> Cc: Andrea Arcangeli <aarca...@redhat.com>
> Hugepagesize: 2048 kB
> Hugetlb: 4194304 kB
> DirectMap4k: 32632 kB
> DirectMap2M: 4161536 kB
> DirectMap1G: 6291456 kB
>
> Also, this patch updates corresponding docs to reflect
> Hugetlb entry meaning and difference between Hugetlb and
> HugePages_Total * Hugepagesize.
>
> Signed-off-by: Roman Gushchin
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Johan
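The difference the snippet above describes, the proposed Hugetlb field versus HugePages_Total * Hugepagesize, can be illustrated from userspace. This sketch parses a captured /proc/meminfo excerpt (using the numbers quoted in the thread) rather than the live file; on a kernel with the patch applied you would read /proc/meminfo directly.

```shell
# HugePages_Total * Hugepagesize covers only the default-size hugetlb
# pool; the proposed Hugetlb field accounts memory of all hugetlb page
# sizes. With a single default size in play, the two agree, as here.
sample='HugePages_Total:    2048
Hugepagesize:       2048 kB
Hugetlb:         4194304 kB'

total=$(printf '%s\n' "$sample" | awk '$1 == "HugePages_Total:" {print $2}')
size_kb=$(printf '%s\n' "$sample" | awk '$1 == "Hugepagesize:" {print $2}')
hugetlb_kb=$(printf '%s\n' "$sample" | awk '$1 == "Hugetlb:" {print $2}')

echo "default pool: $((total * size_kb)) kB"
echo "Hugetlb:      $hugetlb_kb kB"
```

On a system using additional hugetlb sizes (e.g. 1 GB pages on x86), the Hugetlb line would exceed the HugePages_Total * Hugepagesize product, which is the ambiguity the patch's documentation update addresses.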
On Wed, 15 Nov 2017, Michal Hocko wrote:
> > > > if (!hugepages_supported())
> > > > return;
> > > > seq_printf(m,
> > > > @@ -2987,6 +2989,11 @@ void hugetlb_report_meminfo(struct seq_file *m)
> > > > h->resv_huge_pages,
> > > >
that the
compound page is not synchronously split like it was prior to the thp
refcounting patchset, however.
Acked-by: David Rientjes <rient...@google.com>
On Wed, 1 Nov 2017, Michal Hocko wrote:
> > memory.oom_score_adj would never need to be permanently tuned, just as
> > /proc/pid/oom_score_adj need never be permanently tuned. My response was
> > an answer to Roman's concern that "v8 has it's own limitations," but I
> > haven't seen a