if (cmd.addr >= (cmd.addr + (cmd.npages << PAGE_SHIFT)))
return -EINVAL;
Looks good to me too. Thanks for sending this.
Reviewed-by: Ralph Campbell
On 3/3/21 10:16 PM, Alistair Popple wrote:
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this intro
combinations of TTU_XXX flags are needed, in which case a careful check of
try_to_migrate() and try_to_unmap() will be needed.
Reviewed-by: Ralph Campbell
() which specifies no other flags. Therefore, rather than
overload try_to_unmap_one() with unrelated behaviour, split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Looks good to me.
Reviewed-by: Ralph Campbell
both read and write entry
creation.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Looks good to me too.
Reviewed-by: Ralph Campbell
: Ralph Campbell
---
v4:
* Added pfn_swap_entry_to_page()
* Reinstated check that migration entries point to locked pages
* Removed #define swapcache_prepare which isn't needed for CONFIG_SWAP=0
builds
---
arch/s390/mm/pgtable.c | 2 +-
fs/proc/task_mmu.c
Popple
One minor nit below, but you can add
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
> +static int dmirror_exclusive(struct dmirror *dmirror,
> + struct hmm_dmirror_cmd *cmd)
> +{
> + unsigned long start, end, addr;
> + unsigned long s
> From: Alistair Popple
> Sent: Thursday, February 25, 2021 11:18 PM
> To: linux...@kvack.org; nouv...@lists.freedesktop.org;
> bske...@redhat.com; a...@linux-foundation.org
> Cc: linux-...@vger.kernel.org; linux-kernel@vger.kernel.org; dri-
> de...@lists.freedesktop.org; Jo
.org; John Hubbard
> ; Ralph Campbell ;
> jgli...@redhat.com; h...@infradead.org; dan...@ffwll.ch
> Subject: Re: [PATCH v3 6/8] mm: Selftests for exclusive device memory
>
> On Fri, Feb 26, 2021 at 06:18:30PM +1100, Alistair Popple wrote:
> > Adds some selftests for exclusi
On 11/11/20 12:40 PM, Zi Yan wrote:
From: Zi Yan
Huge pages in the process with the given pid and virtual address range
are split. It is used to test split huge page function. In addition,
a testing program is added to tools/testing/selftests/vm to utilize the
interface by splitting PMD THPs.
On 11/11/20 12:40 PM, Zi Yan wrote:
From: Zi Yan
To minimize the number of pages after a truncation, when truncating a
THP, we do not need to split it all the way down to order-0. The THP has
at most three parts, the part before offset, the part to be truncated,
the part left at the end. Use
On 11/11/20 12:40 PM, Zi Yan wrote:
From: Zi Yan
To split a THP to any lower order pages, we need to reform THPs on
subpages at given order and add page refcount based on the new page
order. Also we need to reinitialize page_deferred_list after removing
the page from the split_queue, otherwis
On 11/11/20 12:40 PM, Zi Yan wrote:
From: Zi Yan
It reads thp_nr_pages and splits to provided new_nr. It prepares for
upcoming changes to support split huge page to any lower order.
Signed-off-by: Zi Yan
Looks OK to me.
Reviewed-by: Ralph Campbell
---
include/linux/memcontrol.h | 5
On 11/11/20 12:40 PM, Zi Yan wrote:
From: Zi Yan
It adds a new_order parameter to set new page order in page owner.
It prepares for upcoming changes to support split huge page to any lower
order.
Signed-off-by: Zi Yan
Except for a minor fix below, you can add:
Reviewed-by: Ralph Campbell
On 11/9/20 1:14 AM, Christoph Hellwig wrote:
On Fri, Nov 06, 2020 at 01:26:50PM -0800, Ralph Campbell wrote:
On 11/6/20 12:03 AM, Christoph Hellwig wrote:
I hate the extra pin count magic here. IMHO we really need to finish
off the series to get rid of the extra references on the
The external function definitions don't need the "extern" keyword.
Remove them so future changes don't copy the function definition style.
Signed-off-by: Ralph Campbell
---
This applies cleanly to linux-mm 5.10.0-rc2 and is for Andrew's tree.
inc
On 11/6/20 12:03 AM, Christoph Hellwig wrote:
I hate the extra pin count magic here. IMHO we really need to finish
off the series to get rid of the extra references on the ZONE_DEVICE
pages first.
First, thanks for the review comments.
I don't like the extra refcount either, that is why I t
On 11/6/20 12:01 AM, Christoph Hellwig wrote:
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern struct page *alloc_transhugepage(struct vm_area_struct *vma,
+ unsigned long addr);
No need for the extern. And also here: do we actually need the stub,
or can the
On 11/5/20 11:55 PM, Christoph Hellwig wrote:
On Thu, Nov 05, 2020 at 04:51:42PM -0800, Ralph Campbell wrote:
+extern void prep_transhuge_device_private_page(struct page *page);
No need for the extern.
Right, I was just copying the style.
Would you like to see a preparatory patch that
On 11/6/20 4:14 AM, Matthew Wilcox wrote:
On Thu, Nov 05, 2020 at 04:51:42PM -0800, Ralph Campbell wrote:
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
I think you'd be b
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 437 +
lib/test_hmm_uapi.h| 3 +
tools/testing/selftests/vm/hmm-tests.c
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 289 ++---
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
0-rc2.
Changes in v2:
Added splitting a THP midway in the migration process:
i.e., in migrate_vma_pages().
[1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampb...@nvidia.com
[2] https://lore.kernel.org/linux-mm/20200902165830.5367-1-rcampb...@nvidia.com
Ralph Campbell (6):
Move the definition of migrate_vma_collect_skip() to make it callable
by migrate_vma_collect_hole(). This helps make the next patch easier
to read.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 30 +++---
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c| 14
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
Signed-off-by: Ralph Campbell
---
include/linux/huge_mm.h | 5 +
mm/huge_memory.c| 9 +
2 files changed, 14
indicate a huge page can be migrated. If the device driver can allocate
a huge page, it sets the MIGRATE_PFN_COMPOUND flag in the destination PFN
array. migrate_vma_pages() will fall back to PAGE_SIZE pages if
MIGRATE_PFN_COMPOUND is not set in both the source and destination arrays.
Signed-off-by: Ralph
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
I thought I sent thi
: Ralph Campbell
---
I found this by code inspection while working on converting ZONE_DEVICE
struct pages to have zero based reference counts. I don't think there is
an actual problem that this fixes; it's more to future-proof new uses of
release_pages().
This is for Andrew Morton's mm
d the migrate_pgmap_owner field.
Signed-off-by: Ralph Campbell
---
This is for Andrew Morton's mm tree after the merge window.
mm/migrate.c | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..560b57dde960 100644
--- a/mm
ed to
be treated specially for device private pages, leaving DAX as still being
a special case.
Signed-off-by: Ralph Campbell
---
I'm sending this as a separate patch since I think it is ready to
merge. Originally, this was part of an RFC:
https://lore.kernel.org/linux-mm/20201001181715.1741
On 10/12/20 6:28 AM, Johannes Weiner wrote:
On Fri, Oct 09, 2020 at 05:00:37PM -0700, Ralph Campbell wrote:
On 10/9/20 3:50 PM, Andrew Morton wrote:
On Fri, 9 Oct 2020 14:59:52 -0700 Ralph Campbell wrote:
The code in mc_handle_swap_pte() checks for non_swap_entry() and returns
NULL
On 10/9/20 3:50 PM, Andrew Morton wrote:
On Fri, 9 Oct 2020 14:59:52 -0700 Ralph Campbell wrote:
The code in mc_handle_swap_pte() checks for non_swap_entry() and returns
NULL before checking is_device_private_entry() so device private pages
are never handled.
Fix this by checking for
"mm/memcontrol: support MEMORY_DEVICE_PRIVATE")
Signed-off-by: Ralph Campbell
---
I'm not sure exactly how to test this. I ran the HMM self tests but
that is a minimal sanity check. I think moving the self test from one
memory cgroup to another while it is running would exercise
On 10/9/20 9:53 AM, Ira Weiny wrote:
On Thu, Oct 08, 2020 at 10:25:44AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is
ed to
be treated specially for device private pages, leaving DAX as still being
a special case.
Signed-off-by: Ralph Campbell
---
I'm sending this as a separate patch since I think it is ready to
merge. Originally, this was part of an RFC:
https://lore.kernel.org/linux-mm/20201001181715.1741
On 10/7/20 10:17 PM, Ram Pai wrote:
On Thu, Oct 01, 2020 at 11:17:15AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add helper functions to hide this detail.
Signed-off-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
Acked-by: Darrick J. Wong
Acked-by
On 10/7/20 1:25 AM, Jan Kara wrote:
On Tue 06-10-20 16:09:30, Ralph Campbell wrote:
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by: Ralph Campbell
Reviewed-by: Christoph Hellwig
---
I'm resending this as a separate
On 10/1/20 10:59 PM, Christoph Hellwig wrote:
On Thu, Oct 01, 2020 at 11:17:15AM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a
On 10/1/20 10:56 PM, Christoph Hellwig wrote:
On Thu, Oct 01, 2020 at 11:17:14AM -0700, Ralph Campbell wrote:
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this
erence
count is handled.
Other changes in v2:
Rebased to Linux-5.9.0-rc6 to include pmem fixes.
I added patch 1 to introduce a page refcount helper for ext4 and xfs as
suggested by Christoph Hellwig.
I also applied Christoph Hellwig's other suggested changes for removing
the devmap_managed_key,
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by: Ralph Campbell
---
fs/dax.c| 4 ++--
fs/ext4/inode.c | 5 +
fs/xfs
ed to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
fs/dax.c | 4 +-
include/linux/dax.h| 2 +-
include/linux/memre
On 9/25/20 11:41 PM, Christoph Hellwig wrote:
On Fri, Sep 25, 2020 at 01:44:42PM -0700, Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a
On 9/25/20 11:35 PM, Christoph Hellwig wrote:
On Fri, Sep 25, 2020 at 01:44:41PM -0700, Ralph Campbell wrote:
error = ___wait_var_event(&page->_refcount,
- atomic_read(&page->_refcount) == 1,
+ dax_layou
On 9/25/20 1:51 PM, Dan Williams wrote:
On Fri, Sep 25, 2020 at 1:45 PM Ralph Campbell wrote:
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail
also applied Christoph Hellwig's other suggested changes for removing
the devmap_managed_key, etc.
Ralph Campbell (2):
ext4/xfs: add page refcount helper
mm: remove extra ZONE_DEVICE struct page refcount
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/d
ed to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
include/linux/dax.h| 2 +-
include/linux/memremap.h | 7 ++-
include/linux
There are several places where ZONE_DEVICE struct pages assume a reference
count == 1 means the page is idle and free. Instead of open coding this,
add a helper function to hide this detail.
Signed-off-by: Ralph Campbell
---
fs/dax.c| 8
fs/ext4/inode.c | 2 +-
fs/xfs
/hmm/test: add selftest driver for HMM")
Signed-off-by: Dan Carpenter
Looks good.
Reviewed-by: Ralph Campbell
err_release:
release_mem_region(devmem->pagemap.range.start,
range_len(&devmem->pagemap.range));
-err:
- mutex_unlock(&mdevice->devmem_lock);
+err_devmem:
+ kfree(devmem);
+
return false;
}
With the suggested change, you can add
Reviewed-by: Ralph Campbell
think the right solution is to move the call
to compound_head() in release_pages() to a point before calling
is_huge_zero_page().
Signed-off-by: Ralph Campbell
---
I found this by code inspection while working on my patch
("mm: remove extra ZONE_DEVICE struct page refcount").
This app
On 9/15/20 11:09 PM, Christoph Hellwig wrote:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 517751310dd2..5a82037a4b26 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1093,34 +1093,6 @@ static inline bool is_zone_device_page(const struct page
*page)
#ifdef CONFIG_DE
On 9/15/20 11:10 PM, Christoph Hellwig wrote:
On Mon, Sep 14, 2020 at 04:10:38PM -0700, Dan Williams wrote:
You also need to fix up ext4_break_layouts() and
xfs_break_dax_layouts() to expect ->_refcount is 0 instead of 1. This
also needs some fstests exposure.
While we're at it, can we add a
On 9/15/20 10:36 PM, Christoph Hellwig wrote:
On Tue, Sep 15, 2020 at 09:39:47AM -0700, Ralph Campbell wrote:
I don't think any of the three ->page_free instances even cares about
the page refcount.
Not true. The page_free() callback records the page is free by setting
a bit or put
On 9/15/20 9:29 AM, Christoph Hellwig wrote:
On Mon, Sep 14, 2020 at 04:53:25PM -0700, Ralph Campbell wrote:
Since set_page_refcounted() is defined in mm_internal.h I would have to
move the definition to someplace like page_ref.h or have the drivers
call init_page_count() or set_page_count
On 9/14/20 4:10 PM, Dan Williams wrote:
On Mon, Sep 14, 2020 at 3:45 PM Ralph Campbell wrote:
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not
ed to
be treated specially for ZONE_DEVICE.
Signed-off-by: Ralph Campbell
---
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE
struct page reference counting is ugly/broken. This is my attempt to
fix it and it works for the HMM migration self tests.
I'm only sending thi
ve it.
Signed-off-by: Ralph Campbell
---
This applies to linux-mm and is intended for Andrew Morton's git tree.
lib/test_hmm.c | 14 --
1 file changed, 14 deletions(-)
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e3065d6123f0..c8133f50160b 100644
--- a/lib/test_hmm.c
The migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize()
API usage by device drivers is not well documented.
Add a description for how device drivers are expected to use it.
Signed-off-by: Ralph Campbell
---
There shouldn't be any merge conflict with my previous patch
use it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).
Fix these problems by checking for a PMD migration entry.
Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
cc: sta...@vger.kernel.org # 4.14+
Signed-off-by: Ralph Campbell
Reviewed
On 9/3/20 9:56 AM, Catalin Marinas wrote:
On Mon, Aug 17, 2020 at 02:49:43PM +0530, Anshuman Khandual wrote:
pmd_present() and pmd_trans_huge() are expected to behave in the following
manner during various phases of a given PMD. It is derived from a previous
detailed discussion on this topic [
Add Sphinx reference links to HMM and CPUSETS, and numerous small
editorial changes to make the page_migration.rst document more readable.
Signed-off-by: Ralph Campbell
---
The patch applies cleanly to the latest linux or linux-mm tree.
Since this is MM relatated, perhaps Andrew Morton would
On 9/2/20 2:47 PM, Zi Yan wrote:
On 2 Sep 2020, at 12:58, Ralph Campbell wrote:
A migrating transparent huge page has to already be unmapped. Otherwise,
the page could be modified while it is being copied to a new page and
data could be lost. The function __split_huge_pmd() checks for a PMD
On 9/2/20 12:41 PM, Randy Dunlap wrote:
Hey Ralph,
Thanks for the update/corrections. Nice job.
A few nits/comments below:
On 9/2/20 12:06 PM, Ralph Campbell wrote:
Add Sphinx reference links to HMM and CPUSETS, and numerous small
editorial changes to make the page_migration.rst document
Add Sphinx reference links to HMM and CPUSETS, and numerous small
editorial changes to make the page_migration.rst document more readable.
Signed-off-by: Ralph Campbell
---
.../admin-guide/cgroup-v1/cpusets.rst | 2 +
Documentation/vm/hmm.rst | 2
Move the definition of migrate_vma_collect_skip() to make it callable
by migrate_vma_collect_hole(). This helps make the next patch easier
to read.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 30 +++---
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a
indicate a huge page can be migrated. If the device driver can allocate
a huge page, it sets the MIGRATE_PFN_COMPOUND flag in the destination PFN
array. migrate_vma_pages() will fall back to PAGE_SIZE pages if
MIGRATE_PFN_COMPOUND is not set in both the source and destination arrays.
Signed-off-by: Ralph
iners like Ben Skeggs.
[1] https://lore.kernel.org/linux-mm/20200619215649.32297-1-rcampb...@nvidia.com
Ralph Campbell (7):
mm/thp: fix __split_huge_pmd_locked() for migration PMD
mm/migrate: move migrate_vma_collect_skip()
mm: support THP migration to device private memory
mm/thp
Add some basic stand alone self tests for migrating system memory to device
private memory and back.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 439 +
lib/test_hmm_uapi.h| 3 +
tools/testing/selftests/vm/hmm-tests.c
Add support for migrating transparent huge pages to and from device
private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 289 ++---
drivers/gpu/drm/nouveau/nouveau_svm.c | 11 +-
drivers/gpu/drm/nouveau/nouveau_svm.h | 3 +-
3 files
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph Campbell
---
include/linux/gfp.h | 10 ++
mm/huge_memory.c| 14
Add a helper function to allow device drivers to create device private
transparent huge pages. This is intended to help support device private
THP migrations.
Signed-off-by: Ralph Campbell
---
include/linux/huge_mm.h | 5 +
mm/huge_memory.c| 8
2 files changed, 13
use it calls pmd_pfn(pmd) instead
of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).
Fix these problems by checking for a PMD migration entry.
Signed-off-by: Ralph Campbell
---
mm/huge_memory.c | 42 +++---
1 file changed, 23 insertions(+), 19 deletions(-)
d
The check for is_zone_device_page() and is_device_private_page() is
unnecessary since the latter is sufficient to determine if the page
is a device private page. Simplify the code for easier reading.
Signed-off-by: Ralph Campbell
---
mm/migrate.c | 12 +---
1 file changed, 5 insertions
Two small patches for Andrew Morton's mm tree.
I happened to notice this from code inspection after seeing Alistair
Popple's patch ("mm/rmap: Fixup copying of soft dirty and uffd ptes").
Ralph Campbell (2):
mm/migrate: remove unnecessary is_zone_device_page() check
mm/mi
The code to remove a migration PTE and replace it with a device private
PTE was not copying the soft dirty bit from the migration entry.
This could lead to page contents not being marked dirty when faulting
the page back from device private memory.
Signed-off-by: Ralph Campbell
---
mm/migrate.c
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
This is for Ben Sk
On 8/31/20 11:02 AM, Jason Gunthorpe wrote:
On Mon, Aug 31, 2020 at 10:21:41AM -0700, Ralph Campbell wrote:
On 8/31/20 4:51 AM, Jason Gunthorpe wrote:
On Thu, Aug 27, 2020 at 02:37:44PM -0700, Ralph Campbell wrote:
The user level OpenCL code shouldn't have to align start and end
addr
On 8/31/20 4:51 AM, Jason Gunthorpe wrote:
On Thu, Aug 27, 2020 at 02:37:44PM -0700, Ralph Campbell wrote:
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant sin
The user level OpenCL code shouldn't have to align start and end
addresses to a page boundary. That is better handled in the nouveau
driver. The npages field is also redundant since it can be computed
from the start and end addresses.
Signed-off-by: Ralph Campbell
---
This is for Ben Sk
The variable struct migrate_vma->cpages is only used in
migrate_vma_setup(). There is no need to decrement it in
migrate_vma_finalize() since it is never checked.
Signed-off-by: Ralph Campbell
---
This applies to linux-mm and is for Andrew Morton's tree.
mm/migrate.c | 1 -
1 file ch
Device public memory never had an in tree consumer and was removed in
commit 25b2995a35b6 ("mm: remove MEMORY_DEVICE_PUBLIC support").
Delete the obsolete comment.
Signed-off-by: Ralph Campbell
---
This applies to linux-mm and is for Andrew Morton's tree.
mm/migrate.c | 2 +-
Some tests might not be able to be run if resources like huge pages are
not available. Mark these tests as skipped instead of simply passing.
Signed-off-by: Ralph Campbell
---
This applies to linux-mm and is for Andrew Morton's tree.
tools/testing/selftests/vm/hmm-tests.c | 4 ++--
1
p_owner = migrate->pgmap_owner;
^
Fixes: 998427b3ad2c ("mm/notifier: add migration invalidation type")
Signed-off-by: Ralph Campbell
Reported-by: Randy Dunlap
---
This is based on the latest linux and is for Andrew Morton's mm tree.
MMU_NOTIFIER is selected automatically by a numbe
On 8/6/20 7:50 AM, Randy Dunlap wrote:
On 8/5/20 11:21 PM, Stephen Rothwell wrote:
Hi all,
on x86_64:
when CONFIG_MMU_NOTIFIER is not set/enabled:
../mm/migrate.c: In function 'migrate_vma_collect':
../mm/migrate.c:2481:7: error: 'struct mmu_notifier_range' has no member named
'migrate_p
On 7/31/20 12:15 PM, Jason Gunthorpe wrote:
On Tue, Jul 28, 2020 at 03:04:07PM -0700, Ralph Campbell wrote:
On 7/28/20 12:19 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
When migrating the special zero page, migrate_vma_pages() calls
On 7/30/20 5:03 AM, Jason Gunthorpe wrote:
On Thu, Jul 30, 2020 at 07:21:10PM +1000, Stephen Rothwell wrote:
Hi all,
Today's linux-next merge of the hmm tree got a conflict in:
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
between commit:
7763d24f3ba0 ("drm/nouveau/vmm/gp100-: f
On 7/28/20 12:19 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
When migrating the special zero page, migrate_vma_pages() calls
mmu_notifier_invalidate_range_start() before replacing the zero page
PFN in the CPU page tables. This is unnecessary
On 7/28/20 12:15 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:01PM -0700, Ralph Campbell wrote:
static inline int mm_has_notifiers(struct mm_struct *mm)
@@ -513,6 +519,7 @@ static inline void mmu_notifier_range_init(struct
mmu_notifier_range *range,
range->start = st
lue is then passed to the struct
mmu_notifier_range with a new event type which the driver's invalidation
function can use to avoid device MMU invalidations.
Signed-off-by: Ralph Campbell
---
include/linux/migrate.h | 3 +++
include/linux/mmu_notifier.h | 7 +++
mm/migrate.c
:
Rebase to Jason Gunthorpe's HMM tree.
Added reviewed-by from Bharata B Rao.
Rename the mmu_notifier_range::data field to migrate_pgmap_owner as
suggested by Jason Gunthorpe.
Ralph Campbell (6):
nouveau: fix storing invalid ptes
mm/migrate: add a flags parameter to migrate_vma
mm/n
d stores a bad valid GPU page table entry.
Fix this by skipping the invalid input PTEs when updating the GPU page
tables.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c | 13 +
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drive
When migrating the special zero page, migrate_vma_pages() calls
mmu_notifier_invalidate_range_start() before replacing the zero page
PFN in the CPU page tables. This is unnecessary since the range was
invalidated in migrate_vma_setup() and the page table entry is checked
to be sure it hasn't changed
Use the new MMU_NOTIFY_MIGRATE event to skip GPU MMU invalidations of
device private memory and handle the invalidation in the driver as part
of migrating device private memory.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 15 ---
drivers/gpu/drm
device private pages owned by the caller of migrate_vma_setup().
Rename the src_owner field to pgmap_owner to reflect it is now used only
to identify which device private pages to migrate.
Signed-off-by: Ralph Campbell
Reviewed-by: Bharata B Rao
---
arch/powerpc/kvm/book3s_hv_uvmem.c | 4
Use the new MMU_NOTIFY_MIGRATE event to skip MMU invalidations of device
private memory and handle the invalidation in the driver as part of
migrating device private memory.
Signed-off-by: Ralph Campbell
---
lib/test_hmm.c | 30 +++---
tools/testing