On Tue, 6 Jan 2015, Andrew Morton wrote:
> On Fri, 26 Dec 2014 19:56:49 +0800 "Wang, Yalin"
> wrote:
>
> > This patch subtracts sharedram from cached:
> > sharedram can only be swapped into swap partitions,
> > so those pages should be treated as swap pages, not as cached pages.
> >
> > ...
> >
> > ---
On Tue, 6 Jan 2015, Andrew Morton wrote:
On Tue, 6 Jan 2015 17:04:33 -0800 (PST) Hugh Dickins hu...@google.com wrote:
On Tue, 6 Jan 2015, Andrew Morton wrote:
On Fri, 26 Dec 2014 19:56:49 +0800 Wang, Yalin
yalin.w...@sonymobile.com wrote:
This patch subtract sharedram from cached
On Sun, 28 Dec 2014, Joe Perches wrote:
> On Mon, 2014-12-29 at 02:49 +, Al Viro wrote:
> > On Mon, Dec 29, 2014 at 02:39:39AM +, Al Viro wrote:
> > > On Sun, Dec 28, 2014 at 11:56:53AM +0600, Alexander Kuleshov wrote:
> > > > Signed-off-by: Alexander Kuleshov
> > > > ---
> > >
> > > For
On Sat, 13 Dec 2014, Davidlohr Bueso wrote:
On Fri, 2014-12-12 at 16:56 -0800, a...@linux-foundation.org wrote:
From: Hugh Dickins hu...@google.com
Subject: mm: unmapped page migration avoid unmap+remap overhead
Page migration's __unmap_and_move(), and rmap's try_to_unmap(), were
patch in before his - this one is
much less useful after Davidlohr's conversion to rwsem, but still good.
Signed-off-by: Hugh Dickins
---
mm/migrate.c | 28 ++--
1 file changed, 18 insertions(+), 10 deletions(-)
--- 3.18-rc7/mm/migrate.c 2014-10-19 22:12:56.80962506
, swapoff won't be able to
find where to replace them.
There's already a !non_swap_entry() test for stats: move that up
before the swap_duplicate() and the addition to mmlist.
Signed-off-by: Hugh Dickins
Cc: sta...@vger.kernel.org # 2.6.18+
---
mm/memory.c | 24
1 file
On Thu, 13 Nov 2014, Pranith Kumar wrote:
> Recently lockless_dereference() was added which can be used in place of
> hard-coding smp_read_barrier_depends(). The following PATCH makes the change.
>
> Signed-off-by: Pranith Kumar
Sorry, I don't think your patch is buggy, but I do think it
makes
On Mon, 1 Dec 2014, Yasuaki Ishimatsu wrote:
(2014/12/01 13:52), Hugh Dickins wrote:
@@ -798,7 +798,7 @@ static int __unmap_and_move(struct page
int force, enum migrate_mode mode)
{
int rc = -EAGAIN;
- int remap_swapcache = 1;
+ int
pages (ultimately via zap_page_range_single) without
> touching the actual interval tree, thus share the lock.
>
> Signed-off-by: Davidlohr Bueso
Acked-by: Hugh Dickins
Yes, thanks, let's get this 11/10 into mmotm along with the rest,
but put the hugetlb 10/10 on the shelf for now, until we've had
time
On Thu, 30 Oct 2014, Davidlohr Bueso wrote:
> The i_mmap_rwsem protects shared pages against races
> when doing the sharing and unsharing, ultimately
> calling huge_pmd_share/unshare() for PMD pages --
> it also needs it to avoid races when populating the pud
> for pmd allocation when looking for
I'm glad to see this series back, and nicely presented: thank you.
Not worth respinning them, but consider 1,2,3,4,5,6,7 and 9 as
Acked-by: Hugh Dickins
On Thu, 30 Oct 2014, Davidlohr Bueso wrote:
> As per the comment in move_ptes(), we only require taking the
> anon vma and i_mmap
I'm glad to see this series back, and nicely presented: thank you.
Not worth respinning them, but consider 1,2,3,4,5,6,7 and 9 as
Acked-by: Hugh Dickins hu...@google.com
userspace for now") but it should still skip VMAs the
> same way task_numa_work does.
>
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
Acked-by: Hugh Dickins
Yes, this is much the same as the patch I wrote for Linus two days ago,
then discovered that we don't need un
aults
> will not be triggered which is marginal in comparison to the complexity
> in dealing with the corner cases during THP split.
>
> Cc: sta...@vger.kernel.org
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
> Acked-by: Kirill A. Shutemov
Acked-by: Hugh Dickins
except for where you say it should get fixed
double checking the VMA permissions using maybe_mkwrite when migration
> completes.
>
> [torva...@linux-foundation.org: use maybe_mkwrite]
> Cc: sta...@vger.kernel.org
> Signed-off-by: Mel Gorman
> Acked-by: Rik van Riel
Sort-of-Acked-by: Hugh Dickins
Safe patch, but I stand
On Wed, 1 Oct 2014, Sasha Levin wrote:
> On 10/01/2014 05:07 PM, Andrew Morton wrote:
> > On Mon, 29 Sep 2014 21:47:14 -0400 Sasha Levin
> > wrote:
> >
> >> Currently we're seeing a few issues which are unexplainable by looking at
> >> the
> >> data we see and are most likely caused by a
On Wed, 1 Oct 2014, Linus Torvalds wrote:
> On Wed, Oct 1, 2014 at 1:19 AM, Hugh Dickins wrote:
>
> Can we please just get rid of _PAGE_NUMA. There is no excuse for it.
I'm no lover of _PAGE_NUMA, and hope that it can be simplified away
as you outline. What we have in 3.16+3.17 i
ed, and he touched some of this
> code last ("tag, you're it").
>
> Kirill: the thread is on lkml, but basically it boils down to the
> second byte write in fault_in_pages_writeable() faulting forever,
> despite handle_mm_fault() apparently thinking that everythi
adding Hugh Dickins, just because the more people who know this
code that are involved, the better.
I've tried, but failed to explain it.
I think it's likely related to the VM_BUG_ON(!(val & _PAGE_PRESENT))
which linux-next has in pte_mknuma(), which Sasha Levin first reported
hitting in https
On Mon, 15 Sep 2014, Naoya Horiguchi wrote:
> When running the test which causes the race as shown in the previous patch,
> we can hit the BUG "get_page() on refcount 0 page" in hugetlb_fault().
Two minor comments...
> @@ -3192,22 +3208,19 @@ int hugetlb_fault(struct mm_struct *mm, struct
>
On Mon, 15 Sep 2014, Naoya Horiguchi wrote:
> We have a race condition between move_pages() and freeing hugepages,
I've been looking through these 5 today, and they're much better now,
thank you. But a new concern below, and a minor correction to 3/5.
> --- mmotm-2014-09-09-14-42.orig/mm/gup.c
On Mon, 22 Sep 2014, Anton Altaparmakov wrote:
Hi Hugh,
On 22 Sep 2014, at 05:43, Hugh Dickins hu...@google.com wrote:
On Mon, 22 Sep 2014, Anton Altaparmakov wrote:
Any code that uses __getblk() and thus bread(), breadahead(), sb_bread(),
sb_breadahead(), sb_getblk(), and calls
1C1F988 i.e. the
> top 32-bits are missing (in this case the 0x1 at the top).
>
> This is because grow_dev_page() was broken in commit 676ce6d5ca30: "block:
> replace __getblk_slow misfix by grow_dev_page fix" by Hugh Dickins so that
> it now has a 32-bit overflow due to shifting the block value to the right
> so it fits in 32-bits and storing the result in pgoff_t
On Thu, 11 Sep 2014, Chintan Pandya wrote:
> I don't mean to divert the thread too much. But just one suggestion offered
> by Harshad.
>
> Why can't we stop invoking more of a KSM scanner thread when we are
> saturating from savings ? But again, to check whether savings are saturated
> or not,
On Wed, 10 Sep 2014, Peter Zijlstra wrote:
>
> Does it make sense to drive both KSM and khugepage the same way we drive
> the numa scanning? It has the benefit of getting rid of these threads,
> which pushes the work into the right accountable context (the task its
> doing the scanning for) and
xt_switch() anyway.
[PATCH v2] ksm: avoid periodic wakeup while mergeable mms are quiet
Description yet to be written!
Reported-by: Chintan Pandya
Not-Signed-off-by: Hugh Dickins
---
include/linux/ksm.h | 11 +
include/linux/sched.h |1
kernel/sched/core.c |
On Wed, 10 Sep 2014, Sasha Levin wrote:
On 09/10/2014 03:36 PM, Hugh Dickins wrote:
Right, and Sasha reports that that can fire, but he sees the bug
with this patch in and without that firing.
I've changed that WARN_ON_ONCE() to a VM_BUG_ON_VMA() to get some useful
VMA information out
On Wed, 10 Sep 2014, Sasha Levin wrote:
On 09/09/2014 10:45 PM, Hugh Dickins wrote:
Sasha, you say you're getting plenty of these now, but I've only seen
the dump for one of them, on Aug26: please post a few more dumps, so
that we can look for commonality.
I wasn't saving older logs
On Wed, 10 Sep 2014, Mel Gorman wrote:
On Tue, Sep 09, 2014 at 07:45:26PM -0700, Hugh Dickins wrote:
I've been rather assuming that the 9d340902 seen in many of the
registers in that Aug26 dump is the pte val in question: that's
SOFT_DIRTY|PROTNONE|RW.
The 900s in the latest dumps imply
On Wed, 10 Sep 2014, Sasha Levin wrote:
On 09/10/2014 03:09 PM, Hugh Dickins wrote:
Thanks for supplying, but the change in inlining means that
change_protection_range() and change_protection() are no longer
relevant for these traces, we now need to see change_pte_range()
instead
pers grow increasingly sick and
sceptical of such knobs, preferring to make an effort to get things
working well without them. Both attitudes are valid.
>
> > On Mon, Sep 08, 2014 at 01:25:36AM -0700, Hugh Dickins wrote:
> > > Well, yes, but... how do we know when there is no more work to do?
Yeah, I figured that out _after_ I send that email..
Thomas has given reason
On Mon, 8 Sep 2014, Naoya Horiguchi wrote:
On Mon, Sep 08, 2014 at 12:13:16AM -0700, Hugh Dickins wrote:
On Fri, 5 Sep 2014, Naoya Horiguchi wrote:
On Wed, Sep 03, 2014 at 02:17:41PM -0700, Hugh Dickins wrote:
One subtlety to take care over: it's a long time since I've had
On Mon, 8 Sep 2014, Peter Zijlstra wrote:
On Mon, Sep 08, 2014 at 01:25:36AM -0700, Hugh Dickins wrote:
--- 3.17-rc4/include/linux/ksm.h2014-03-30 20:40:15.0 -0700
+++ linux/include/linux/ksm.h 2014-09-07 11:54:41.528003316 -0700
@@ -87,6 +96,11 @@ static inline void
On Tue, 9 Sep 2014, Sasha Levin wrote:
On 09/09/2014 05:33 PM, Mel Gorman wrote:
On Mon, Sep 08, 2014 at 01:56:55PM -0400, Sasha Levin wrote:
On 09/08/2014 01:18 PM, Mel Gorman wrote:
A worse possibility is that somehow the lock is getting corrupted but
that's also a tough sell
On Fri, 5 Sep 2014, Naoya Horiguchi wrote:
On Wed, Sep 03, 2014 at 02:17:41PM -0700, Hugh Dickins wrote:
On Thu, 28 Aug 2014, Naoya Horiguchi wrote:
Reported-by: Hugh Dickins hu...@google.com
Signed-off-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: sta...@vger.kernel.org
On Wed, 3 Sep 2014, Peter Zijlstra wrote:
On Wed, Aug 27, 2014 at 11:02:20PM -0700, Hugh Dickins wrote:
On Wed, 20 Aug 2014, Chintan Pandya wrote:
KSM thread to scan pages is scheduled on definite timeout. That wakes up
CPU from idle state and hence may affect the power consumption
lting in a user's mm */
> > > if (!p->mm)
> > > return;
I don't understand your difficulty with that, I thought the comment
was helpful enough. Does the original commit comment help?
commit 2832bc19f6668fd00116f61f821105040599ef8b
Author: Hugh Dickins hu...@google.com
Date: Wed Dec 19 17:42:16 2012 -0800
sched: numa: ksm: fix oops
On Thu, 28 Aug 2014, Naoya Horiguchi wrote:
> When running the test which causes the race as shown in the previous patch,
> we can hit the BUG "get_page() on refcount 0 page" in hugetlb_fault().
>
> This race happens when pte turns into migration entry just after the first
> check of
On Thu, 28 Aug 2014, Naoya Horiguchi wrote:
> follow_huge_addr()'s parameter write is not used, so let's remove it.
>
> Signed-off-by: Naoya Horiguchi
I think this patch is a waste of time: that it should be replaced
by a patch which replaces the "write" argument by a "flags" argument,
so that
ation code
> to move_pages()"), so is applicable to -stable kernels which includes it.
Just say
Fixes: e632a938d914 ("mm: migrate: add hugepage migration code to move_pages()")
>
> ChangeLog v3:
> - remove unnecessary if (page) check
> - check (pmd|pud)_huge
T is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but
> PTE_ORDER is always 0, so these are identical.
>
> Signed-off-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Acked-by: Hugh Dickins hu...@google.com
> ---
> arch/arm/mm/hugetlbpage.c | 6 --
> arch/arm64/mm/hugetlbpage.c | 6 --
> arch/ia64/mm/hugetlbpage.c | 6 --
> arch/metag/mm/hugetlbpage.c | 6 --
> arch/mips/mm/hugetlbpage.c | 18 --
> arch
in follow_huge_addr()
ChangeLog v2:
- introduce follow_huge_pmd_lock() to do locking in arch-independent code.
ChangeLog vN info belongs below the ---
Reported-by: Hugh Dickins hu...@google.com
Signed-off-by: Naoya Horiguchi n-horigu...@ah.jp.nec.com
Cc: sta...@vger.kernel.org # [3.12+]
No ack
On Thu, 28 Aug 2014, Naoya Horiguchi wrote:
There is a race condition between hugepage migration and change_protection(),
where hugetlb_change_protection() doesn't care about migration entries and
wrongly overwrites them. That causes unexpected results like kernel crash.
This patch adds
On Thu, 28 Aug 2014, Naoya Horiguchi wrote:
If __unmap_hugepage_range() tries to unmap the address range over which
hugepage migration is on the way, we get the wrong page because pte_page()
doesn't work for migration entries. This patch calls pte_to_swp_entry() and
migration_entry_to_page()
I'm rather hoping we can strike a good enough
balance with your deferrable timer, that nobody will need any better.
So, with a few changes here and below, please add my
Acked-by: Hugh Dickins hu...@google.com
to patches 1 and 2, and resend to akpm - thank you!
Here (above), it's restore the text to V3's
To enable deferrable timer,
$ echo 1 > /sys/kernel/mm/ksm
On Tue, 26 Aug 2014, Cyrill Gorcunov wrote:
On Mon, Aug 25, 2014 at 09:45:34PM -0700, Hugh Dickins wrote:
Hmm. For a long time I thought you were fixing another important bug
with down_write, since we always use down_write to modify vm_flags.
But now I'm realizing
On Tue, 26 Aug 2014, Cyrill Gorcunov wrote:
On Tue, Aug 26, 2014 at 06:43:55PM +0300, Kirill A. Shutemov wrote:
On Tue, Aug 26, 2014 at 07:18:13PM +0400, Cyrill Gorcunov wrote:
Basically, it's safe if only soft-dirty is allowed to modify vm_flags
without down_write(). But why is
This means
> that in the likely case "addr > start_stack - size - PAGE_SIZE * 5"
> is simply impossible after find_vma_intersection() == F, or the stack
> can't grow anyway because of RLIMIT_STACK.
>
> Many thanks to Hugh for his explanations.
>
> Signed-off-by: Oleg Nesterov o...@redhat.com
Acked-by: Hugh Dickins hu...@google.com
But you're much too generous to me: I
On Mon, 25 Aug 2014, Oleg Nesterov wrote:
> The ->start_stack check in do_shmat() looks ugly and simply wrong.
>
> 1. ->start_stack is only valid right after exec(), the application
>can switch to another stack and even unmap this area. Or a stack
>can simply grow, ->start_stack won't
On Mon, 25 Aug 2014, Oleg Nesterov wrote:
On 08/24, Hugh Dickins wrote:
I'd say it comes earlier, from Christoph Rohland's 2.4.17-pre7's
Add missing checks on shmat(), though I didn't find more than that.
We can all understand wanting to leave a gap below the growsdown stack
On Mon, 25 Aug 2014, Oleg Nesterov wrote:
On 08/25, Hugh Dickins wrote:
And I think I'll let Linus's guard page justify your 4 (to match comment)
in place of the original's mysterious 5.
Ah, thanks again. Yes, if we want to guarantee 4 pages we should check 5.
Although obviously
On Mon, 25 Aug 2014, Samuel Thibault wrote:
Samuel Thibault, on Mon 25 Aug 2014 23:23:24 +0200, wrote:
We could indeed have a loop if the user was making the VT::* leds use
the vt-* trigger,
Actually, while there can be a loop, it wouldn't be possible to inject
events in it: a VT::*
On Sun, 24 Aug 2014, Peter Feiner wrote:
For VMAs that don't want write notifications, PTEs created for read
faults have their write bit set. If the read fault happens after
VM_SOFTDIRTY is cleared, then the PTE's softdirty bit will remain
clear after subsequent writes.
Good catch. Worrying
tly to
> > > avoid the usage of mm->start_stack) and ignores VM_GROWSUP.
> > >
> > > Signed-off-by: Oleg Nesterov
> > Reviewed-by: Cyrill Gorcunov
Yes, much better to use find_vma than have this strange stray use
of unreliable start_stack.
Acked-by: Hugh Dickins