the
notification when the refcount hits 1 now, the PAGEMAP_OPS Kconfig
symbol can go away and be replaced with a FS_DAX check for this hook
in the put_page fastpath.
Based on an earlier patch from Ralph Campbell.
Signed-off-by: Christoph Hellwig
Thanks for working on this, it's a definite step forward.
Reviewed
a second attempt will succeed,
and the retry adds complexity. So clean this up by removing the retry
and MIGRATE_PFN_LOCKED flag.
Destination pages are also meant to have the MIGRATE_PFN_LOCKED flag
set, but nothing actually checks that.
Signed-off-by: Alistair Popple
You can add:
Reviewed-by: Ralph Campbell
On 10/14/21 11:01 AM, Jason Gunthorpe wrote:
On Thu, Oct 14, 2021 at 10:35:27AM -0700, Ralph Campbell wrote:
I ran xfstests-dev using the kernel boot option to "fake" a pmem device
when I first posted this patch. The tests ran OK (or at least the same
tests passed with and without
On 10/14/21 10:06 AM, Jason Gunthorpe wrote:
On Thu, Oct 14, 2021 at 10:39:28AM -0500, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference
On 9/13/21 9:15 AM, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration
On 8/25/21 4:15 AM, Vlastimil Babka wrote:
On 8/25/21 05:48, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page
On 8/17/21 5:35 PM, Felix Kuehling wrote:
Am 2021-08-17 um 8:01 p.m. schrieb Ralph Campbell:
On 8/12/21 11:31 PM, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that
complicates the
code for put_page() and several places in the kernel that need
On 8/12/21 11:31 PM, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration
On 6/28/21 9:46 AM, Felix Kuehling wrote:
Am 2021-06-17 um 3:16 p.m. schrieb Ralph Campbell:
On 6/17/21 8:16 AM, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that
complicates the
code for put_page() and several places in the kernel that need
On 6/17/21 8:16 AM, Alex Sierra wrote:
From: Ralph Campbell
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration
gt_pfn
nvkm_vmm_iter()
REF_PTES == func == gp100_vmm_pgt_pfn()
dma_map_page()
Acked-by: Felix Kuehling
Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
Signed-off-by: Christoph Hellwig
---
Documentation/vm/hmm.rst
of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two drivers, thanks!
For nouveau you can add:
Tested-by: Ralph Campbell
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
ile Karol Herbst's mesa tree and Jerome's SVM tests to
test this with nouveau so for the series you can add,
Tested-by: Ralph Campbell
pu/drm/amd/amdgpu/amdgpu_ttm.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +-
include/linux/hmm.h | 55 +-
mm/hmm.c | 238 +---
5 files changed, 98 insertions(+), 211 deletions(-)
The series looks good to me.
Reviewed-by: Ralph Campbell
---
include/linux/hmm.h | 50 -
mm/hmm.c | 12 +++
2 files changed, 12 insertions(+), 50 deletions(-)
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index bb6be4428633a8..184a8633260f9d 100644
--- a/include
On 3/20/20 9:48 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
I've had these in my work queue for a bit, nothing profound here, just some
small edits for clarity.
The hmm tester changes are clear enough but I'm having a bit of trouble
figuring out
what this series applies cleanly to
On 3/19/20 5:14 PM, Jason Gunthorpe wrote:
On Tue, Mar 17, 2020 at 04:14:31PM -0700, Ralph Campbell wrote:
+static int dmirror_fault(struct dmirror *dmirror, unsigned long start,
+unsigned long end, bool write)
+{
+ struct mm_struct *mm = dmirror->
From: Ralph Campbell
Date: Tue, 17 Mar 2020 11:10:38 -0700
Subject: [PATCH] mm/hmm/test: add self tests for HMM
Add some basic stand alone self tests for HMM.
Signed-off-by: Ralph Campbell
On 3/17/20 4:56 AM, Jason Gunthorpe wrote:
On Mon, Mar 16, 2020 at 01:24:09PM -0700, Ralph Campbell wrote:
The reason for it being backwards is that "normally" a device doesn't want
the device private page to be faulted back to system memory, it wants to
get the device private stru
On 3/17/20 12:34 AM, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 03:49:51PM -0700, Ralph Campbell wrote:
On 3/16/20 12:32 PM, Christoph Hellwig wrote:
Remove the code to fault device private pages back into system memory
that has never been used by any driver. Also replace the usage
memory. Fix this by
passing in an expected pgmap owner in the hmm_range_fault structure.
Signed-off-by: Christoph Hellwig
Fixes: 4ef589dc9b10 ("mm/hmm/devmem: device memory hotplug using ZONE_DEVICE")
Looks good.
Reviewed-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_d
On 3/16/20 12:32 PM, Christoph Hellwig wrote:
Remove the code to fault device private pages back into system memory
that has never been used by any driver. Also replace the usage of the
HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
is_device_private_page check in nouveau.
t
isn't, then it does make sense to not migrate whatever normal page is there.
nouveau_dmem_migrate_to_ram() sets src_owner so this case looks OK.
Just had to think this through.
Reviewed-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 1 +
drivers/gpu/drm/nouveau/nouveau_dm
This looks like a reasonable approach to take.
Reviewed-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 ++
drivers/gpu/drm/nouveau/nouveau_dmem.c | 1 +
include/linux/memremap.h | 4
mm/memremap.c | 4
4 files changed, 11
On 3/16/20 1:09 PM, Jason Gunthorpe wrote:
On Mon, Mar 16, 2020 at 07:49:35PM +0100, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 11:42:19AM -0700, Ralph Campbell wrote:
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature
On 3/16/20 11:49 AM, Christoph Hellwig wrote:
On Mon, Mar 16, 2020 at 11:42:19AM -0700, Ralph Campbell wrote:
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature.
There is various code related to it in nouveau, but as far as I
On 3/16/20 10:52 AM, Christoph Hellwig wrote:
No driver has actually properly wired up and supported this feature.
There is various code related to it in nouveau, but as far as I can tell
it never actually got turned on, and the only changes since the initial
commit are global cleanups.
m/hmm: change hmm_vma_fault() to allow write fault on page
basis")
Signed-off-by: Jason Gunthorpe
Looks good to me.
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Bonus patch, this one was found after I made the series.
diff --git a/mm/
, then the return should be HMM_PFN_ERROR.
Fixes: a3e0d41c2b1f ("mm/hmm: improve driver API to work and wait over a range")
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 19 ---
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/mm/
n pmd")
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 32dcbfd3908315..5f5ccf13dd1e85 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -394,7 +394,7 @@ static int hmm_vma_walk
ewalk: add p4d_entry() and pgd_entry()")
Cc: Steven Price
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 9e8f68eb83287a..32dcbfd3908315 100644
---
")
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 31 +--
1 file changed, 9 insertions(+), 22 deletions(-)
We talked about just deleting this stuff, but I think it makes a lot of sense for
hmm_range_fault() to trigger fault on de
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 19 ---
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index f61fddf2ef6505..ca33d086bdc190 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -335,16 +335,21 @@ static int hmm_vma_handle_pte(struct mm_
unthorpe
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index bf676cfef3e8ee..f61fddf2ef6505 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -363,8 +363,10 @@ static int hmm_vma_
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 35 +++
1 file changed, 15 insertions(+), 20 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index e10cd0adba7b37..bf676cfef3e8ee 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -282,15 +282,6 @@ static int hmm_vma_handle_
but one issue noted below.
In any case, you can add:
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 72e5a6d9a41756..35f85424176d14 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -325,6 +325,7 @@ static in
On 11/13/19 8:46 AM, Jason Gunthorpe wrote:
On Wed, Nov 13, 2019 at 05:59:52AM -0800, Christoph Hellwig wrote:
+int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
+ struct mm_struct *mm, unsigned long start,
+
my Tested-by for the mm and nouveau changes.
IOW, patches 1-4, 10-11, and 15.
Tested-by: Ralph Campbell
On 9/12/19 1:26 AM, Christoph Hellwig wrote:
+static int hmm_pfns_fill(unsigned long addr,
+unsigned long end,
+struct hmm_range *range,
+enum hmm_pfn_value_e value)
Nit: can we use the space a little more efficiently,
On 9/12/19 1:26 AM, Christoph Hellwig wrote:
On Wed, Sep 11, 2019 at 03:28:27PM -0700, Ralph Campbell wrote:
Allow hmm_range_fault() to return success (0) when the CPU pagetable
entry points to the special shared zero page.
The caller can then handle the zero page by possibly clearing device
Add self tests for HMM.
Signed-off-by: Ralph Campbell
---
MAINTAINERS|3 +
drivers/char/Kconfig | 11 +
drivers/char/Makefile |1 +
drivers/char/hmm_dmirror.c | 1504
include/Kbuild
Allow hmm_range_fault() to return success (0) when the CPU pagetable
entry points to the special shared zero page.
The caller can then handle the zero page by possibly clearing device
private memory instead of DMAing a zero page.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
efore calling hmm_range_fault().
If the call to hmm_range_fault() is not a snapshot, the caller can still
check that pfns have the desired access permissions.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
---
mm/hmm.c | 4 +++-
1 file chan
hmm_range_fault() was not checking
start >= vma->vm_start before checking vma->vm_flags so hmm_range_fault()
could return an error based on the wrong vma for the requested range.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampb...@nvidia.com/
Ralph Campbell (4):
mm/hmm: make full use of walk_page_range()
mm/hmm: allow snapshot of the special zero page
mm/hmm: allow hmm_range_fault() of mmap(PROT_NONE)
mm/hmm/test: add self tests for HMM
MAINTAINERS
On 8/27/19 11:41 AM, Jason Gunthorpe wrote:
On Fri, Aug 23, 2019 at 03:17:53PM -0700, Ralph Campbell wrote:
Signed-off-by: Ralph Campbell
mm/hmm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/hmm.c b/mm/hmm.c
index 29371485fe94..4882b83aeccb 100644
+++ b/mm/hmm.c
@@ -292,6
On 8/26/19 11:09 AM, Jason Gunthorpe wrote:
On Mon, Aug 26, 2019 at 11:02:12AM -0700, Ralph Campbell wrote:
On 8/24/19 3:37 PM, Christoph Hellwig wrote:
On Fri, Aug 23, 2019 at 03:17:52PM -0700, Ralph Campbell wrote:
Although hmm_range_fault() calls find_vma() to make sure that a vma exists
On 8/24/19 3:37 PM, Christoph Hellwig wrote:
On Fri, Aug 23, 2019 at 03:17:52PM -0700, Ralph Campbell wrote:
Although hmm_range_fault() calls find_vma() to make sure that a vma exists
before calling walk_page_range(), hmm_vma_walk_hole() can still be called
with walk->vma == NULL if the st
thought they shouldn't wait.
They should probably have a fixes line but with all the HMM changes,
I wasn't sure exactly which commit to use.
These are based on top of Jason's latest hmm branch.
Ralph Campbell (2):
mm/hmm: hmm_range_fault() NULL pointer bug
mm/hmm: hmm_range_fault() infinite loop
nge check */
walk_page_range()               /* calls find_vma(), sets walk->vma = NULL */
  __walk_page_range()
    walk_pgd_range()
      walk_p4d_range()
        walk_pud_range()
          hmm_vma_walk_hole()
            hmm_vma_walk_hole_()
              hmm_vma_do_fault()
                handle_mm_fault(vma=0)
Signed-off-by:
rns -EBUSY */
/* returns -EBUSY */
/* loops on -EBUSY and range->valid */
Prevent this by checking for vma->vm_flags & VM_WRITE before calling
handle_mm_fault().
Signed-off-by: Ralph Campbell
---
mm/hmm.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/hmm.c b/m
On 8/16/19 10:28 AM, Jason Gunthorpe wrote:
On Fri, Aug 16, 2019 at 10:21:41AM -0700, Dan Williams wrote:
We can do a get_dev_pagemap inside the page_walk and touch the pgmap,
or we can do the 'device mutex && retry' pattern and touch the pgmap
in the driver, under that lock.
However in all
m/hmm.c | 121 +++-
mm/mmu_notifier.c| 230 +--
25 files changed, 408 insertions(+), 559 deletions(-)
For the core MM, HMM, and nouveau changes you can add:
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
ere overly cautious, drivers are
already not permitted to free the mirror while a range exists.
Signed-off-by: Jason Gunthorpe
Looks good.
Reviewed-by: Ralph Campbell
lwig
Signed-off-by: Christoph Hellwig
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/mmu_notifier.h | 35
mm/mmu_notifier.c| 156 +--
2 files changed, 185 insertions(+), 6 deletions(-)
diff --git a/include/linux/
eterministic and we can use that to
decide if the allocation path is required, without speculation.
The actual update to mmu_notifier_mm must still be done under the
mm_take_all_locks() to ensure read-side coherency.
Signed-off-by: Jason Gunthorpe
Looks good to me.
Reviewed-by: Ralph Campbell
a lockdep_assert to check that the callers are holding the lock
as expected.
Suggested-by: Christoph Hellwig
Signed-off-by: Jason Gunthorpe
Nice clean up.
Reviewed-by: Ralph Campbell
On 8/8/19 12:07 AM, Christoph Hellwig wrote:
On Wed, Aug 07, 2019 at 08:02:14AM -0700, Ralph Campbell wrote:
When memory is migrated to the GPU it is likely to be accessed by GPU
code soon afterwards. Instead of waiting for a GPU fault, map the
migrated memory into the GPU page tables
Signed-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Jason Gunthorpe
Cc: "Jérôme Glisse"
Cc: Ben Skeggs
---
This patch is based on top of Christoph Hellwig's 9 patch series
https://lore.kernel.org/linux-mm/20190729234611.gc7...@redhat.com/T/#u
"turn the hmm migrate_vma upside down"
On 7/29/19 10:51 PM, Christoph Hellwig wrote:
The pagewalk code already passes the value as the hmask parameter.
Signed-off-by: Christoph Hellwig
---
mm/hmm.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index f26d6abc4ed2..88b77a4a6a1e
On 7/25/19 11:23 PM, Christoph Hellwig wrote:
Note: it seems like you've only CCed me on patches 2-7, but not on the
cover letter and patch 1. I'll try to find them later, but to make Ccs
useful they should normally cover the whole series.
Otherwise this looks fine to me:
Reviewed-by:
From: Christoph Hellwig
Add a HMM_FAULT_SNAPSHOT flag so that hmm_range_snapshot can be merged
into the almost identical hmm_range_fault function.
Signed-off-by: Christoph Hellwig
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
---
Documentation/vm/hm
hmm_range_fault() calls find_vma() and walk_page_range() in a loop.
This is unnecessary duplication since walk_page_range() calls find_vma()
in a loop already.
Simplify hmm_range_fault() by defining a walk_test() callback function
to filter unhandled vmas.
Signed-off-by: Ralph Campbell
Cc
From: Christoph Hellwig
This allows easier expansion to other flags, and also makes the
callers a little easier to read.
Signed-off-by: Christoph Hellwig
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 2 +-
d
walk_page_range() will only call hmm_vma_walk_hugetlb_entry() for
hugetlbfs pages and doesn't call hmm_vma_walk_pmd() in this case.
Therefore, it is safe to remove the check for vma->vm_flags & VM_HUGETLB
in hmm_vma_walk_pmd().
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse
A few more comments and minor programming style clean ups.
There should be no functional changes.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
---
mm/hmm.c | 39 +--
1 file changed, 17 inserti
m v1 to v2:
Added AMD GPU to hmm_update removal.
Added 2 patches from Christoph.
Added 2 patches as a result of Jason's suggestions.
Christoph Hellwig (2):
mm/hmm: replace the block argument to hmm_range_fault with a flags
value
mm: merge hmm_range_snapshot into hmm_range_fault
Ralph Campbell
The hmm_mirror_ops callback function sync_cpu_device_pagetables() passes
a struct hmm_update which is a simplified version of struct
mmu_notifier_range. This is unnecessary so replace hmm_update with
mmu_notifier_range directly.
Signed-off-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
Cc
On 7/2/19 12:53 PM, Jason Gunthorpe wrote:
On Fri, Jun 07, 2019 at 05:14:52PM -0700, Ralph Campbell wrote:
HMM defines its own struct hmm_update which is passed to the
sync_cpu_device_pagetables() callback function. This is
sufficient when the only action is to invalidate. However,
a device
ing sync_cpu_device_pagetables().
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
Tested-by: Philip Yang
---
include/linux/hmm.h | 2 +-
mm/hmm.c | 72 +++--
2 files changed, 45 insertions(+),
On 6/10/19 9:02 AM, Jason Gunthorpe wrote:
On Fri, Jun 07, 2019 at 02:37:07PM -0700, Ralph Campbell wrote:
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
hmm_release() is called exactly once per hmm. ops->release() cannot
accidentally trigger any action that would recu
ing sync_cpu_device_pagetables().
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
include/linux/hmm.h | 2 +-
mm/hmm.c | 77 +++--
2 files changed, 48 insertions(+), 31 deletions(-)
I almost lost this patch - it is part of the series, hasn't b
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
Trying to misuse a range outside its lifetime is a kernel bug. Use WARN_ON
and poison bytes to detect this condition.
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
Reviewed-by: Ralph Campbell
---
v2
- Keep
On 6/7/19 11:24 AM, Ralph Campbell wrote:
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
Ralph observes that hmm_range_register() can only be called by a driver
while a mirror is registered. Make this clear in the API by passing in
the
mirror structure as a parameter
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
hmm_release() is called exactly once per hmm. ops->release() cannot
accidentally trigger any action that would recurse back onto
hmm->mirrors_sem.
This fixes a use after-free race of the form:
CPU0
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
This list is always read and written while holding hmm->lock so there is
no need for the confusing _rcu annotations.
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
Reviewed-by: Ralph Campbell
---
mm/hm
.
Signed-off-by: Ralph Campbell
---
I'm sending this out now since we are updating many of the HMM APIs
and I think it will be useful.
drivers/gpu/drm/nouveau/nouveau_svm.c | 4 ++--
include/linux/hmm.h | 27 ++-
mm/hmm.c | 13
On 6/7/19 1:44 PM, Jason Gunthorpe wrote:
On Fri, Jun 07, 2019 at 01:21:12PM -0700, Ralph Campbell wrote:
What I want to get to is a pattern like this:
pagefault():
hmm_range_register();
again:
/* On the slow path, if we appear to be live locked then we get
the write side
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
So we can check locking at runtime.
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
Reviewed-by: Ralph Campbell
---
v2
- Fix missing & in lockdeps (Jason)
---
mm/hmm.c | 4 ++--
1 file changed, 2 insert
On 6/7/19 12:13 PM, Jason Gunthorpe wrote:
On Fri, Jun 07, 2019 at 12:01:45PM -0700, Ralph Campbell wrote:
On 6/6/19 11:44 AM, Jason Gunthorpe wrote:
From: Jason Gunthorpe
The wait_event_timeout macro already tests the condition as its first
action, so there is no reason to open code
ove dead and
hmm_mirror_mm_is_alive() entirely.
Signed-off-by: Jason Gunthorpe
Looks good to me.
Reviewed-by: Ralph Campbell
---
v2:
- Use Jerome's idea of just holding the mmget() for the range lifetime,
rework the patch to use that as a simplification to remove dead in
one s
a debugging POISON.
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
Reviewed-by: Ralph Campbell
---
mm/hmm.c | 9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index c702cd72651b53..6802de7080d172 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
-	return range->valid;
+ return READ_ONCE(range->valid);
}
/*
Since we are simplifying things, perhaps we should consider merging
hmm_range_wait_until_valid() into hmm_range_register() and
removing hmm_range_wait_until_valid() since the pattern
is to always call the two together.
In
to understand by only ever reading or
writing mm->hmm while holding the write side of the mmap_sem.
Signed-off-by: Jason Gunthorpe
Reviewed-by: Ralph Campbell
---
v2:
- Fix error unwind of mmgrab (Jerome)
- Use hmm local instead of 2nd container_of (Jerome)
---
mm/hmm.c |
on struct mm) is now impossible with a !NULL
mm->hmm delete the hmm_hmm_destroy().
Signed-off-by: Jason Gunthorpe
Reviewed-by: Jérôme Glisse
Reviewed-by: Ralph Campbell
---
v2:
- Fix error unwind paths in hmm_get_or_create (Jerome/Jason)
---
include/linux/hmm.h | 3 ---
kernel/for
model for struct hmm, as
the hmm pointer must be valid as part of a registered mirror so all we
need in hmm_register_range() is a simple kref_get.
Suggested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
You might CC Ben for the nouveau part.
CC: Ben Skeggs
Reviewed-by: Ralph Campbell
of and directly
check kref_get_unless_zero to lock it against free.
Signed-off-by: Jason Gunthorpe
You can add
Reviewed-by: Ralph Campbell
---
v2:
- Spell 'free' properly (Jerome/Ralph)
---
include/linux/hmm.h | 1 +
mm/hmm.c| 25 +++--
2 files changed, 20 insertions(+),