programmed I/O instead of HW DMA.
> > This patch allows a verbs device driver to interpose on DMA mapping
> > function calls in order to avoid relying on bus_to_virt() and
> > phys_to_virt() to undo the mappings created by dma_map_single(),
> > dma_map_sg(), etc.
> >
On Wed, 2006-12-13 at 09:37 -0800, Luck, Tony wrote:
> Ralph,
>
> I'm seeing dozens of build warnings and errors on ia64 from
> infiniband. According to git you touched it last, so even
> if you aren't to blame, you are by definition an expert :-)
>
> E.g.
>
> In file included from include/rdma
On Wed, 2006-12-13 at 12:30 -0800, Andrew Morton wrote:
> On Wed, 13 Dec 2006 12:11:27 -0800
> Ralph Campbell <[EMAIL PROTECTED]> wrote:
snip.
> > My preference would be to change the offending uses of dma_addr_t
> > to u64. Do you have a better solution?
>
> We sh
This was fixed by a patch that Arthur Jones sent out to
[EMAIL PROTECTED]
Tue Jun 19 16:42:09 PDT 2007
[PATCH 17/28] IB/ipath - wait for PIO available interrupt
I imagine that it is working its way into Roland's git tree
for Linus.
On Mon, 2007-06-25 at 15:33 -0400, Steven Rostedt wrote:
> As so
Two minor typos inline below:
On 01/24/2018 09:56 AM, Igor Stoppa wrote:
Detailed documentation about the protectable memory allocator.
Signed-off-by: Igor Stoppa
---
Documentation/core-api/pmalloc.txt | 104 +
1 file changed, 104 insertions(+)
create mod
On 04/09/2018 08:18 AM, jgli...@redhat.com wrote:
From: Jérôme Glisse
This fixes typos and syntax errors; thanks to Randy Dunlap for pointing
them out (they were all my fault).
Signed-off-by: Jérôme Glisse
Cc: Randy Dunlap
Cc: Ralph Campbell
Cc: Andrew Morton
---
You can add:
Reviewed-by
On 3/26/19 6:52 AM, William Kucharski wrote:
Does this still happen on 5.1-rc2?
Do you have an idea as to what max_low_pfn() gets set to on your system
at boot time?
From the screen shot I'm guessing it MIGHT be 0x373fe, but it's hard to know
for sure.
On Mar 21, 2019, at 2:22 PM, Meelis
On 7/30/20 5:03 AM, Jason Gunthorpe wrote:
On Thu, Jul 30, 2020 at 07:21:10PM +1000, Stephen Rothwell wrote:
Hi all,
Today's linux-next merge of the hmm tree got a conflict in:
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
between commit:
7763d24f3ba0 ("drm/nouveau/vmm/gp100-: f
zero filling it instead of failing to migrate the page.
Signed-off-by: Ralph Campbell
---
This patch applies cleanly to Jason Gunthorpe's hmm tree plus two
patches I posted earlier. The first is queued in Ben Skeggs' nouveau
tree and the second is still pending review/not queued
On 5/20/20 12:20 PM, Jason Gunthorpe wrote:
On Wed, May 20, 2020 at 11:36:52AM -0700, Ralph Campbell wrote:
When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that
is backed by pte_none() or zero pages, migrate_vma_setup() will fill the
source PFN array with an entry
On 7/31/20 12:15 PM, Jason Gunthorpe wrote:
On Tue, Jul 28, 2020 at 03:04:07PM -0700, Ralph Campbell wrote:
On 7/28/20 12:19 PM, Jason Gunthorpe wrote:
On Thu, Jul 23, 2020 at 03:30:04PM -0700, Ralph Campbell wrote:
When migrating the special zero page, migrate_vma_pages() calls
On 8/6/20 7:50 AM, Randy Dunlap wrote:
On 8/5/20 11:21 PM, Stephen Rothwell wrote:
Hi all,
on x86_64:
when CONFIG_MMU_NOTIFIER is not set/enabled:
../mm/migrate.c: In function 'migrate_vma_collect':
../mm/migrate.c:2481:7: error: 'struct mmu_notifier_range' has no member named
'migrate_p
p_owner = migrate->pgmap_owner;
^
Fixes: 998427b3ad2c ("mm/notifier: add migration invalidation type")
Signed-off-by: Ralph Campbell
Reported-by: Randy Dunlap
---
This is based on the latest linux and is for Andrew Morton's mm tree.
MMU_NOTIFIER is selected automatically by a numbe
AL;
if (cmd.addr >= (cmd.addr + (cmd.npages << PAGE_SHIFT)))
return -EINVAL;
Looks good to me too. Thanks for sending this.
Reviewed-by: Ralph Campbell
.org; John Hubbard
> ; Ralph Campbell ;
> jgli...@redhat.com; h...@infradead.org; dan...@ffwll.ch
> Subject: Re: [PATCH v3 6/8] mm: Selftests for exclusive device memory
>
> On Fri, Feb 26, 2021 at 06:18:30PM +1100, Alistair Popple wrote:
> > Adds some selftests for exclusi
> From: Alistair Popple
> Sent: Thursday, February 25, 2021 11:18 PM
> To: linux...@kvack.org; nouv...@lists.freedesktop.org;
> bske...@redhat.com; a...@linux-foundation.org
> Cc: linux-...@vger.kernel.org; linux-kernel@vger.kernel.org; dri-
> de...@lists.freedesktop.org; Jo
Popple
One minor nit below, but you can add
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
> +static int dmirror_exclusive(struct dmirror *dmirror,
> + struct hmm_dmirror_cmd *cmd)
> +{
> + unsigned long start, end, addr;
> + unsigned long s
: Ralph Campbell
---
v4:
* Added pfn_swap_entry_to_page()
* Reinstated check that migration entries point to locked pages
* Removed #define swapcache_prepare which isn't needed for CONFIG_SWAP=0
builds
---
arch/s390/mm/pgtable.c | 2 +-
fs/proc/task_mmu.c
both read and write entry
creation.
Signed-off-by: Alistair Popple
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason Gunthorpe
Looks good to me too.
Reviewed-by: Ralph Campbell
() which specifies no other flags. Therefore, rather than
overload try_to_unmap_one() with unrelated behaviour, split this out into
its own function and remove the flag.
Signed-off-by: Alistair Popple
Looks good to me.
Reviewed-by: Ralph Campbell
combinations
of TTU_XXX flags are needed, in which case a careful check of try_to_migrate()
and try_to_unmap() will be needed.
Reviewed-by: Ralph Campbell
On 3/3/21 10:16 PM, Alistair Popple wrote:
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this intro
* By getting a reference on the page we pin it and that blocks
Thanks, I was planning to do this too.
Reviewed-by: Ralph Campbell
On 8/15/19 10:19 AM, Jerome Glisse wrote:
On Wed, Aug 07, 2019 at 04:41:12PM +0800, Pingfan Liu wrote:
Clean up useless 'pfn' variable.
NAK there is a bug see below:
Signed-off-by: Pingfan Liu
Cc: "Jérôme Glisse"
Cc: Andrew Morton
Cc: Mel Gorman
Cc: Jan Kara
Cc: "Kirill A. Shutemov"
On 8/26/19 8:11 AM, Vlastimil Babka wrote:
On 7/20/19 1:32 AM, Ralph Campbell wrote:
When CONFIG_MIGRATE_VMA_HELPER is enabled, migrate_vma() calls
migrate_vma_collect(), which initializes a struct mm_walk but
does not initialize mm_walk.pud_entry. (Found by code inspection.)
Use a C stru
The return value from set_huge_zero_page() is never checked so simplify
the code by making it return void.
Signed-off-by: Ralph Campbell
---
mm/huge_memory.c | 6 ++
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c5cb6dcd6c69
On 10/1/19 4:52 AM, Kirill A. Shutemov wrote:
On Mon, Sep 30, 2019 at 12:55:28PM -0700, Ralph Campbell wrote:
The return value from set_huge_zero_page() is never checked so simplify
the code by making it return void.
Signed-off-by: Ralph Campbell
---
mm/huge_memory.c | 6 ++
1 file
On 7/15/19 4:34 PM, John Hubbard wrote:
On 7/15/19 3:00 PM, Andrew Morton wrote:
On Tue, 9 Jul 2019 18:24:57 -0700 Ralph Campbell wrote:
On 7/9/19 5:28 PM, Andrew Morton wrote:
On Tue, 9 Jul 2019 15:35:56 -0700 Ralph Campbell wrote:
When migrating a ZONE device private page from
this is v2 with an updated change log.
http://lkml.kernel.org/r/20190709223556.28908-1-rcampb...@nvidia.com
Ralph Campbell (3):
mm: document zone device struct page reserved fields
mm/hmm: fix ZONE_DEVICE anon page mapping reuse
mm/hmm: Fix bad subpage pointer in try_to_unmap_one
include
clear.
Signed-off-by: Ralph Campbell
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: Christoph Lameter
Cc: Dave Hansen
Cc: Jérôme Glisse
Cc: "Kirill A . Shutemov"
Cc: Lai Jiangshan
Cc: Martin Schwidefsky
Cc: Pekka Enberg
Cc: Randy Dunlap
Cc: Andrey Ryabinin
Cc: Christoph Hellwig
Cc: Jas
ubpage computation is needed and it can be set to "page".
Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page in
migration")
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: "Kirill A. Shutemov"
Cc: Mike Kravetz
Cc: Christoph
: b7a523109fb5c9d2d6dd ("mm: don't clear ->mapping in hmm_devmem_free")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell
Cc: Christoph Hellwig
Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: J
On 7/16/19 9:38 PM, Christoph Hellwig wrote:
On Tue, Jul 16, 2019 at 09:31:33PM -0700, John Hubbard wrote:
OK, so just delete all the _zd_pad_* fields? Works for me. It's misleading to
call something padding if it's actually unavailable because it's used
in the other union, so deleting wou
if (mode == MIGRATE_SYNC_NO_COPY)
return -EINVAL;
Reviewed-by: Ralph Campbell
.pfn = gp100_vmm_pgt_pfn
nvkm_vmm_iter()
REF_PTES == func == gp100_vmm_pgt_pfn()
dma_map_page()
Acked-by: Felix Kuehling
Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
Signed-off-by: Christoph Hellwig
---
Documenta
On 5/22/19 1:12 PM, Jason Gunthorpe wrote:
On Wed, May 22, 2019 at 01:48:52PM -0400, Jerome Glisse wrote:
static void put_per_mm(struct ib_umem_odp *umem_odp)
{
struct ib_ucontext_per_mm *per_mm = umem_odp->per_mm;
@@ -325,9 +283,10 @@ static void put_per_mm(struct ib_umem_odp *um
On 5/22/19 4:36 PM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:35:14PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
The last reference to struct hmm may be released long after the mm_struct
is destroyed because the struct hmm_mirror memory may be part of a
device driver
On 5/23/19 5:51 AM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:29:40PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
In hmm_range_register(), the call to hmm_get_or_create() implies that
hmm_range_register() could be called before hmm_mirror_register() when
in fact, that
On 5/12/19 8:07 AM, Jerome Glisse wrote:
On Tue, May 07, 2019 at 11:12:14AM -0700, Ralph Campbell wrote:
On 5/7/19 6:15 AM, Souptick Joarder wrote:
On Tue, May 7, 2019 at 5:00 AM wrote:
From: Ralph Campbell
The helper function hmm_vma_fault() calls hmm_range_register() but is
missing
On 5/12/19 8:08 AM, Jerome Glisse wrote:
On Mon, May 06, 2019 at 04:29:37PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
I hit a use after free bug in hmm_free() with KASAN and then couldn't
stop myself from cleaning up a bunch of documentation and coding style
changes. S
On 5/7/19 6:15 AM, Souptick Joarder wrote:
On Tue, May 7, 2019 at 5:00 AM wrote:
From: Ralph Campbell
The helper function hmm_vma_fault() calls hmm_range_register() but is
missing a call to hmm_range_unregister() in one of the error paths.
This leads to a reference count leak and
On 7/24/19 4:53 AM, Jason Gunthorpe wrote:
On Wed, Jul 24, 2019 at 08:51:46AM +0200, Christoph Hellwig wrote:
On Tue, Jul 23, 2019 at 04:30:16PM -0700, Ralph Campbell wrote:
hmm_range_snapshot() and hmm_range_fault() both call find_vma() and
walk_page_range() in a loop. This is unnecessary
n has v1 queued in v5.2-mmotm-2019-07-18-16-08.
Ralph Campbell (3):
mm: document zone device struct page reserved fields
mm/hmm: fix ZONE_DEVICE anon page mapping reuse
mm/hmm: Fix bad subpage pointer in try_to_unmap_one
include/linux/mm_types.h | 9 -
kernel/memremap.c
backed fsdax pages also use the page->mapping and
page->index fields when files are mapped into a process address space.
Add comments to struct page and remove the unused "_zd_pad_1" field
to make this more clear.
Signed-off-by: Ralph Campbell
Reviewed-by: John Hubbard
Cc: Matthew
: b7a523109fb5c9d2d6dd ("mm: don't clear ->mapping in hmm_devmem_free")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell
Reviewed-by: John Hubbard
Reviewed-by: Christoph Hellwig
Cc: Dan Williams
Cc: Andrew Morton
Cc: Jason Gunthorpe
Cc: Logan Gunthorpe
Cc: Ira Weiny
Cc:
On 7/24/19 6:22 PM, Jason Gunthorpe wrote:
On Wed, Jul 24, 2019 at 04:26:58PM -0700, Ralph Campbell wrote:
Struct page for ZONE_DEVICE private pages uses the page->mapping and
page->index fields while the source anonymous pages are migrated to
device private memory. This is so rma
On 7/24/19 10:38 PM, Christoph Hellwig wrote:
On Wed, Jul 24, 2019 at 04:26:58PM -0700, Ralph Campbell wrote:
Struct page for ZONE_DEVICE private pages uses the page->mapping and
page->index fields while the source anonymous pages are migrated to
device private memory. This
On 7/29/19 10:51 PM, Christoph Hellwig wrote:
The pagewalk code already passes the value as the hmask parameter.
Signed-off-by: Christoph Hellwig
---
mm/hmm.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index f26d6abc4ed2..88b77a4a6a1e 100
On 7/22/19 4:08 AM, Matthew Wilcox wrote:
On Sun, Jul 21, 2019 at 10:13:45PM -0700, Ira Weiny wrote:
On Sun, Jul 21, 2019 at 09:02:04AM -0700, Matthew Wilcox wrote:
On Fri, Jul 19, 2019 at 12:29:53PM -0700, Ralph Campbell wrote:
Struct page for ZONE_DEVICE private pages uses the page
Here are two more patches for things I found to clean up.
I assume this will go into Jason's tree since there will likely be
more HMM changes in this cycle.
Ralph Campbell (2):
mm/hmm: a few more C style and comment clean ups
mm/hmm: make full use of walk_page_range()
mm/hmm.c
unhandled vmas.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
---
mm/hmm.c | 197 ---
1 file changed, 70 insertions(+), 127 deletions(-)
diff --git a/mm/hmm.c b/mm/hmm.c
index 82
A few more comments and minor programming style clean ups.
There should be no functional changes.
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Jason Gunthorpe
Cc: Christoph Hellwig
---
mm/hmm.c | 34 --
1 file changed, 16 insertions(+), 18
On 6/6/19 7:02 AM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:29:38PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
Update the HMM documentation to reflect the latest API and make a few minor
wording changes.
Signed-off-by: Ralph Campbell
Reviewed-by: Jérôme Glisse
Cc
On 6/6/19 7:50 AM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:29:41PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
The helper function hmm_vma_fault() calls hmm_range_register() but is
missing a call to hmm_range_unregister() in one of the error paths.
This leads to a
On 6/6/19 12:54 PM, Jason Gunthorpe wrote:
On Thu, Jun 06, 2019 at 12:44:36PM -0700, Ralph Campbell wrote:
On 6/6/19 7:50 AM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:29:41PM -0700, rcampb...@nvidia.com wrote:
From: Ralph Campbell
The helper function hmm_vma_fault() calls
On 6/6/19 8:57 AM, Jason Gunthorpe wrote:
On Mon, May 06, 2019 at 04:29:39PM -0700, rcampb...@nvidia.com wrote:
@@ -924,6 +922,7 @@ int hmm_range_register(struct hmm_range *range,
unsigned page_shift)
{
unsigned long mask = ((1UL << page_shift) - 1UL);
+
On 7/4/19 9:42 AM, Jason Gunthorpe wrote:
On Wed, Jul 03, 2019 at 03:02:08PM -0700, Christoph Hellwig wrote:
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which fixes up the mmap_sem
locking in nouveau and while at it also removes leftover legacy HMM APIs
only used by nouve
ued in v5.2-mmotm-2019-07-18-16-08.
Ralph Campbell (3):
mm: document zone device struct page reserved fields
mm/hmm: fix ZONE_DEVICE anon page mapping reuse
mm/hmm: Fix bad subpage pointer in try_to_unmap_one
include/linux/mm_types.h | 9 -
kernel/memremap.c| 4
mm/rma
backed fsdax pages also use the page->mapping and
page->index fields when files are mapped into a process address space.
Restructure struct page and add comments to make this more clear.
Signed-off-by: Ralph Campbell
Reviewed-by: John Hubbard
Cc: Matthew Wilcox
Cc: Vlastimil Babka
Cc: Chris
grate: new memory migration helper for use with
device memory")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell
Cc: "Jérôme Glisse"
Cc: Andrew Morton
---
mm/migrate.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/mm/migrate.c b/mm/mig
before calling page_remove_rmap().
Signed-off-by: Ralph Campbell
Cc: Andrew Morton
Cc: "Jérôme Glisse"
Cc: "Kirill A. Shutemov"
Cc: Mike Kravetz
---
mm/rmap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/rmap.c b/mm/rmap.c
index e5dfe2ae6b0d..ec1af8b60423 100644
-
On 7/9/19 5:28 PM, Andrew Morton wrote:
On Tue, 9 Jul 2019 15:35:56 -0700 Ralph Campbell wrote:
When migrating a ZONE device private page from device memory to system
memory, the subpage pointer is initialized from a swap pte which computes
an invalid page pointer. A kernel panic results
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-r...@vger.kernel.org
Cc: linux-fsde...@vger.kernel.org
Cc: Arnd Bergmann
---
include/linux/mmu_notifier.h | 11 +++
1 f
illiams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-r...@vger.kernel.org
Cc: Arnd Bergmann
---
drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c | 8
dr
: Jan Kara
Cc: Andrea Arcangeli
Cc: Peter Xu
Cc: Felix Kuehling
Cc: Jason Gunthorpe
Cc: Andrew Morton
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de
Glisse
Cc: Christian König
Cc: Joonas Lahtinen
Cc: Jani Nikula
Cc: Rodrigo Vivi
Cc: Jan Kara
Cc: Andrea Arcangeli
Cc: Peter Xu
Cc: Felix Kuehling
Cc: Jason Gunthorpe
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph
e Glisse
Cc: Christian König
Cc: Joonas Lahtinen
Cc: Jani Nikula
Cc: Rodrigo Vivi
Cc: Jan Kara
Cc: Andrea Arcangeli
Cc: Peter Xu
Cc: Felix Kuehling
Cc: Jason Gunthorpe
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph
: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-r...@vger.kernel.org
Cc: Arnd Bergmann
---
fs/proc/task_mmu.c | 4 ++--
kernel/events/uprobes.c | 2 +-
mm/huge_memory.c| 14 ++
mm/hugetlb.c| 8
Gunthorpe
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-r...@vger.kernel.org
Cc: Arnd Bergmann
---
include/linux/mmu_notifier.h
: Christian Koenig
Cc: Ralph Campbell
Cc: John Hubbard
Cc: k...@vger.kernel.org
Cc: dri-de...@lists.freedesktop.org
Cc: linux-r...@vger.kernel.org
Cc: Arnd Bergmann
---
include/linux/mmu_notifier.h | 4
mm/mmu_notifier.c| 10 ++
2 files changed, 14 insertions
Cc: Christian König
Cc: Joonas Lahtinen
Cc: Jani Nikula
Cc: Rodrigo Vivi
Cc: Jan Kara
Cc: Andrea Arcangeli
Cc: Peter Xu
Cc: Felix Kuehling
Cc: Jason Gunthorpe
Cc: Ross Zwisler
Cc: Dan Williams
Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Michal Hocko
Cc: Christian Koenig
Cc: Ralph Campbell
/~glisse/linux/log/?h=odp-hmm
[2] https://cgit.freedesktop.org/~glisse/linux/log/?h=hmm-for-5.1
Cc: Andrew Morton
Cc: Felix Kuehling
Cc: Christian König
Cc: Ralph Campbell
Cc: John Hubbard
Cc: Jason Gunthorpe
Cc: Dan Williams
Jérôme Glisse (10):
mm/hmm: use reference counting for HMM
On 3/23/19 12:02 PM, Thomas Gleixner wrote:
Ralph,
On Mon, 18 Mar 2019, rcampb...@nvidia.com wrote:
From: Ralph Campbell
If CONFIG_DEBUG_VIRTUAL is enabled, a read or write to /dev/mem can
trigger a VIRTUAL_BUG_ON() depending on the value of high_memory.
For example:
read_mem
On 06/07/2018 07:57 AM, Matthew Wilcox wrote:
From: Matthew Wilcox
Need to do a bit of rearranging to make this work.
Signed-off-by: Matthew Wilcox
---
arch/x86/events/intel/uncore.c | 19 ++-
1 file changed, 10 insertions(+), 9 deletions(-)
diff --git a/arch/x86/events
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been
Making sure to include linux-mm and Bharata B Rao for IBM's
use of migrate_vma*().
On 6/24/20 11:10 AM, Ralph Campbell wrote:
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
On Mon, Jun 22, 2020 at 04:38:53PM -0700, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), wi
On 6/25/20 10:31 AM, Jason Gunthorpe wrote:
On Thu, Jun 25, 2020 at 10:25:38AM -0700, Ralph Campbell wrote:
Making sure to include linux-mm and Bharata B Rao for IBM's
use of migrate_vma*().
On 6/24/20 11:10 AM, Ralph Campbell wrote:
On 6/24/20 12:23 AM, Christoph Hellwig wrote:
O
is NULL but dereferenced.
lib/test_hmm.c:524:29-36: ERROR: devmem is NULL but dereferenced.
Fix these by using the local variable 'res' instead of devmem.
Signed-off-by: Randy Dunlap
Cc: Jérôme Glisse
Cc: linux...@kvack.org
Cc: Ralph Campbell
---
lib/test_hmm.c |3 +--
1 file
On 6/22/20 5:39 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:33PM -0700, Ralph Campbell wrote:
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patch 7-8 prepare nouveau for
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page
On 6/22/20 10:22 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:41PM -0700, Ralph Campbell wrote:
The SVM page fault handler groups faults into a range of contiguous
virtual addresses and requests hmm_range_fault() to populate and
return the page frame number of system memory mapped
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added to the input PFN array to
indicate the huge page was fully mapped by the CPU.
Export
On 6/22/20 1:10 PM, Zi Yan wrote:
On 22 Jun 2020, at 15:36, Ralph Campbell wrote:
On 6/21/20 4:20 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Support transparent huge page migration to ZONE_DEVICE private memory.
A new flag (MIGRATE_PFN_COMPOUND) is added to the
On 6/21/20 5:15 PM, Zi Yan wrote:
On 19 Jun 2020, at 17:56, Ralph Campbell wrote:
Transparent huge page allocation policy is controlled by several sysfs
variables. Rather than expose these to each device driver that needs to
allocate THPs, provide a helper function.
Signed-off-by: Ralph
s and allow a range of normal and device private
pages to be migrated.
Fixes: 800bb1c8dc80 ("mm: handle multiple owners of device private pages in
migrate_vma")
Signed-off-by: Ralph Campbell
---
This is based on 5.8.0-rc2 for Andrew Morton's mm tree.
I believe it can be queued for
On 6/22/20 4:18 PM, Jason Gunthorpe wrote:
On Mon, Jun 22, 2020 at 11:10:05AM -0700, Ralph Campbell wrote:
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
hmm_range_fault() returns an array of page frame numbers and flags for
how
The functions nvkm_vmm_ctor() and nvkm_mmu_ptp_get() are not called outside
of the file defining them so make them static.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/base.c | 2 +-
drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c | 2 +-
drivers/gpu/drm/nouveau/nvkm
5649.32297-1-rcampb...@nvidia.com/
Note that in order to exercise/test patch 2 here, you will need a
kernel with patch 1 from the original series (the fix to mm/migrate.c).
It is safe to apply these changes before the fix to mm/migrate.c
though.
Ralph Campbell (3):
nouveau: fix migrate page
page and incorrectly computes the GPU's physical
address of local memory leading to data corruption.
Fix this by checking the source struct page and computing the correct
physical address.
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 8
1 file chang
uveau/nouveau/hmm: fix migrate zero page to GPU")
Signed-off-by: Ralph Campbell
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e5c230d9ae24..cc
On 6/22/20 4:54 PM, Yang Shi wrote:
On Mon, Jun 22, 2020 at 4:02 PM John Hubbard wrote:
On 2020-06-22 15:33, Yang Shi wrote:
On Mon, Jun 22, 2020 at 3:30 PM Yang Shi wrote:
On Mon, Jun 22, 2020 at 2:53 PM Zi Yan wrote:
On 22 Jun 2020, at 17:31, Ralph Campbell wrote:
On 6/22/20 1:10 PM
On 6/22/20 5:30 PM, John Hubbard wrote:
On 2020-06-22 16:38, Ralph Campbell wrote:
The OpenCL function clEnqueueSVMMigrateMem(), without any flags, will
migrate memory in the given address range to device private memory. The
source pages might already have been migrated to device private
In zap_pte_range(), the check for non_swap_entry() and
is_device_private_entry() is redundant since the latter is a subset of the
former. Remove the redundant check to simplify the code and for clarity.
Signed-off-by: Ralph Campbell
---
This is based on the current linux tree and is intended