Re: [Intel-gfx] [PATCH RFC PKS/PMEM 57/58] nvdimm/pmem: Stray access protection for pmem->virt_addr

2020-10-12 Thread John Hubbard

On 10/9/20 12:50 PM, ira.we...@intel.com wrote:

From: Ira Weiny 

The pmem driver uses a cached virtual address to access its memory
directly.  Because the nvdimm driver is well aware of the special
protections it has mapped memory with, we call dev_access_[en|dis]able()
around the direct pmem->virt_addr (pmem_addr) usage instead of the
unnecessary overhead of trying to get a page to kmap.

Signed-off-by: Ira Weiny 
---
  drivers/nvdimm/pmem.c | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index fab29b514372..e4dc1ae990fc 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -148,7 +148,9 @@ static blk_status_t pmem_do_read(struct pmem_device *pmem,
if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
return BLK_STS_IOERR;
  
+	dev_access_enable(false);
 	rc = read_pmem(page, page_off, pmem_addr, len);
+	dev_access_disable(false);


Hi Ira!

The APIs should be tweaked to use a symbol (GLOBAL, PER_THREAD) instead of
true/false. Try reading the above and you'll see that it sounds like it's
doing the opposite of what it actually does ("enable_this(false)" reads like
a clumsy way to spell *disable*, right?). And there is no hint about the scope.

And it *could* be so much more readable like this:

dev_access_enable(DEV_ACCESS_THIS_THREAD);
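
For illustration, here's a minimal sketch of what I mean (the names are
hypothetical, not from the actual PKS series):

enum dev_access_scope {
	DEV_ACCESS_GLOBAL,
	DEV_ACCESS_THIS_THREAD,
};

void dev_access_enable(enum dev_access_scope scope);
void dev_access_disable(enum dev_access_scope scope);

The pmem hunk above would then read unambiguously:

	dev_access_enable(DEV_ACCESS_THIS_THREAD);
	rc = read_pmem(page, page_off, pmem_addr, len);
	dev_access_disable(DEV_ACCESS_THIS_THREAD);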



thanks,
--
John Hubbard
NVIDIA


Re: [Intel-gfx] [PATCH] mm: Skip opportunistic reclaim for dma pinned pages

2020-06-29 Thread John Hubbard

On 2020-06-24 12:21, Jason Gunthorpe wrote:

On Wed, Jun 24, 2020 at 08:14:17PM +0100, Chris Wilson wrote:

A general rule of thumb is that shrinkers should be fast and effective.
They are called from direct reclaim at the most inconvenient of times when
the caller is waiting for a page. If we attempt to reclaim a page being
pinned for active dma [pin_user_pages()], we will incur far greater
latency than a normal anonymous page mapped multiple times. Worse, the
page may be in use indefinitely by the HW and unable to be reclaimed
in a timely manner.


A pinned page can't be migrated, discarded or swapped by definition -
it would cause data corruption.

So, how do things even get here and/or work today at all? I think the
explanation is missing something important.



Well, those activities generally try to unmap the page, and
have to be prepared to deal with failure to unmap. From my reading,
it seemed very clear.

What's less clear is why the comment and the commit description
only talk about reclaim, when there are additional things that call
try_to_unmap(), including:

migrate_vma_unmap()
split_huge_page_to_list() --> unmap_page()

I do like this code change, though. And I *think* it's actually safe to
do this, as it stays away from writeback or other filesystem activity.
But let me double check that, in case I'm forgetting something.

thanks,
--
John Hubbard
NVIDIA


Re: [Intel-gfx] [PATCH] mm: Skip opportunistic reclaim for dma pinned pages

2020-06-29 Thread John Hubbard

On 2020-06-24 16:20, Jason Gunthorpe wrote:
...

I think Yang explained it - the page is removed from the mappings but
freeing it does not happen because page_ref_freeze() does not succeed
due to the pin.

Presumably the mappings can reconnect to the same physical page if
it is re-faulted to avoid any data corruption.

So, the issue here is the mappings are trashed while the page remains
- and trashing the mapping triggers a mmu notifier which upsets i915.


What's less clear is why the comment and the commit description
only talk about reclaim, when there are additional things that call
try_to_unmap(), including:

 migrate_vma_unmap()
 split_huge_page_to_list() --> unmap_page()


It looks like the same "unmap first, then abort if the refcount is still
elevated" design as shrink_page_list()?



Yes. I was just wondering why the documentation here seems to ignore the
other, non-reclaim cases. Anyway, though...





I do like this code change, though. And I *think* it's actually safe to
do this, as it stays away from writeback or other filesystem activity.
But let me double check that, in case I'm forgetting something.


...OK, I've checked, and I like it a little bit less now. Mainly for
structural reasons, though. I think it would work correctly. But
here's a concern: try_to_unmap() should only fail to unmap if there is a
reason to not unmap. Having a page be pinned for dma is a reason to not
*free* a page, and it's also a reason to be careful about writeback and
page buffers for writeback and such. But I'm not sure that it's a reason
to fail to remove mappings.

True, most (all?) of the reasons that we remove mappings are generally
for things that are not allowed while a page is dma-pinned...at least,
today. But still, there's nothing fundamental about a mapping that
should prevent it from coming or going while a page is undergoing
dma.

So, it's merely a convenient, now-misnamed location in the call stack
to fail out. That's not great. It might be better, as Jason hints at
below, to fail out a little earlier instead. That would lead to more
places that call page_maybe_dma_pinned(), but that's not a real problem,
because it's still a small number of places.

After writing all of that...I don't feel strongly about it, because
TTU is kind of synonymous with "I'm about to do a dma-pin-unfriendly
operation".

Maybe some of the more experienced fs or mm people have strong opinions
one way or the other?




It would be nice to have an explanation of why it is OK now to change
it...


Yes. Definitely good to explain that in the commit log. I think
it's triggered by the existence of page_maybe_dma_pinned(). Until
that was added, figuring out if dma was involved required basically
just guesswork. Now we have a way to guess much more accurately. :)



I don't know, but could it be that try_to_unmap() has to be done
before checking the refcount as each mapping is included in the
refcount? ie we couldn't know a DMA pin was active in advance?

Now that we have your pin stuff we can detect a DMA pin without doing
all the unmaps?



Once something calls pin_user_page*(), then the pages will be marked
as dma-pinned, yes. So no, there is no need to wait until try_to_unmap()
to find out.
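
For anyone new to this: page_maybe_dma_pinned() is a heuristic built on
the FOLL_PIN refcount bias. Simplified from the 5.7-era mm.h (the real
helper also special-cases huge pages), it is approximately:

#define GUP_PIN_COUNTING_BIAS (1U << 10)

static inline bool page_maybe_dma_pinned(struct page *page)
{
	/*
	 * pin_user_pages*() adds GUP_PIN_COUNTING_BIAS (1024) to the
	 * refcount per pin, so a count at or above the bias probably
	 * means a FOLL_PIN pin is outstanding. False positives are
	 * possible (1024 ordinary references look the same), hence
	 * the "maybe" in the name.
	 */
	return page_ref_count(compound_head(page)) >= GUP_PIN_COUNTING_BIAS;
}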

A final note: depending on where page_maybe_dma_pinned() ends up
getting called, this might prevent a fair number of the problems that
Jan originally reported [1], and that I also reported separately!

Well, not all of the problems, and only after the filesystems get
converted to call pin_user_pages() (working on that next), but...I think
it would actually avoid the crash our customer reported back in early
2018. Even though we don't have the full file lease + pin_user_pages()
solution in place.

That's because reclaim is what triggers the problems that we saw. And
with this patch, we bail out of reclaim early.
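
In shrink_page_list() terms, the early bail-out amounts to something
like this (a sketch of the idea, not the literal patch):

	/* Skip dma-pinned pages before paying for unmap or writeback: */
	if (page_maybe_dma_pinned(page))
		goto keep_locked;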


[1] https://www.spinics.net/lists/linux-mm/msg142700.html


thanks,
--
John Hubbard
NVIDIA


Re: [Intel-gfx] [PATCH v2 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-26 Thread John Hubbard

On 2020-05-23 02:41, Chris Wilson wrote:

Quoting John Hubbard (2020-05-22 06:19:27)

The purpose of posting this series is to launch a test in the
intel-gfx-ci tree. (The patches have already been merged into Andrew's
linux-mm tree.)

This applies to today's linux.git (note the base-commit tag at the
bottom).

Changes since V1:

* Fixed a bug in the refactoring patch: added FOLL_FAST_ONLY to the
   list of gup_flags *not* to WARN() on. This led to a failure in the
   first intel-gfx-ci test run [1].

[1] 
https://lore.kernel.org/r/159008745422.32320.5724805750977048...@build.alporthouse.com


Ran this through our CI, warn and subsequent lockup were gone. That



Yea! Thanks again for these test runs. I really don't like posting
patches that I can't run-time test, but this CI system mitigates
that pretty well.



lockup is worrying me now, but that doesn't seem to be an issue from
this series.



I do think it's worth following up on. And it seems like it would be
very easy to repro: just hack in a forced failure at the call site of
pin_user_pages_fast_only(), and follow the breadcrumbs.
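
Something like this at the i915 call site would do it (untested sketch):

	pinned = pin_user_pages_fast_only(obj->userptr.ptr, num_pages,
					  gup_flags, pvec);
	/* Hack: force the failure path, to chase the lockup: */
	if (pinned > 0) {
		unpin_user_pages(pvec, pinned);
		pinned = 0;
	}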




The i915 changes were simple enough, I would have computed the pin flags
just once (since the readonly bit is static, that would be interesting
if that was allowed to change mid gup :)
Reviewed-by: Chris Wilson 
-Chris



Thanks for the review! And if lifting that check up higher in the call
stack is desired, I'm all in favor of that being done...in a separate
patch. :)

I'm trying to keep a very light touch when converting these call sites.

thanks,
--
John Hubbard
NVIDIA


[Intel-gfx] [PATCH v2 3/4] mm/gup: introduce pin_user_pages_fast_only()

2020-05-23 Thread John Hubbard
This is the FOLL_PIN equivalent of __get_user_pages_fast(),
except with a more descriptive name, and gup_flags instead of
a boolean "write" in the argument list.

Signed-off-by: John Hubbard 
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 84b601cab699..98be7289d7e9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1820,6 +1820,8 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
  */
 int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
  struct page **pages);
+int pin_user_pages_fast_only(unsigned long start, int nr_pages,
+unsigned int gup_flags, struct page **pages);
 /*
  * per-process(per-mm_struct) statistics.
  */
diff --git a/mm/gup.c b/mm/gup.c
index 4564b0dc7d0b..6fa9b2016a53 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2859,6 +2859,42 @@ int pin_user_pages_fast(unsigned long start, int nr_pages,
 }
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
+/*
+ * This is the FOLL_PIN equivalent of __get_user_pages_fast(). Behavior is the
+ * same, except that this one sets FOLL_PIN instead of FOLL_GET.
+ *
+ * The API rules are the same, too: no negative values may be returned.
+ */
+int pin_user_pages_fast_only(unsigned long start, int nr_pages,
+unsigned int gup_flags, struct page **pages)
+{
+   int nr_pinned;
+
+   /*
+* FOLL_GET and FOLL_PIN are mutually exclusive. Note that the API
+* rules require returning 0, rather than -errno:
+*/
+   if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+   return 0;
+   /*
+* FOLL_FAST_ONLY is required in order to match the API description of
+* this routine: no fall back to regular ("slow") GUP.
+*/
+   gup_flags |= (FOLL_PIN | FOLL_FAST_ONLY);
+   nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
+pages);
+   /*
+* This routine is not allowed to return negative values. However,
+* internal_get_user_pages_fast() *can* return -errno. Therefore,
+* correct for that here:
+*/
+   if (nr_pinned < 0)
+   nr_pinned = 0;
+
+   return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
+
 /**
  * pin_user_pages_remote() - pin pages of a remote process (task != current)
  *
-- 
2.26.2



[Intel-gfx] [PATCH v2 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-23 Thread John Hubbard
The purpose of posting this series is to launch a test in the
intel-gfx-ci tree. (The patches have already been merged into Andrew's
linux-mm tree.)

This applies to today's linux.git (note the base-commit tag at the
bottom).

Changes since V1:

* Fixed a bug in the refactoring patch: added FOLL_FAST_ONLY to the
  list of gup_flags *not* to WARN() on. This led to a failure in the
  first intel-gfx-ci test run [1].

[1] 
https://lore.kernel.org/r/159008745422.32320.5724805750977048...@build.alporthouse.com

Original cover letter:

This needs to go through Andrew's -mm tree, due to adding a new gup.c
routine. However, I would really love to have some testing from the
drm/i915 folks, because I haven't been able to run-time test that part
of it.

Otherwise, though, the series has passed my basic run time testing:
some LTP tests, some xfs and ext4 non-destructive xfstests, and an
assortment of other smaller ones: vm selftests, io_uring_register, a
few more. But that's only on one particular machine. Also, cross-compile
tests for half a dozen arches all pass.

Details:

In order to convert the drm/i915 driver from get_user_pages() to
pin_user_pages(), a FOLL_PIN equivalent of __get_user_pages_fast() was
required. That led to refactoring __get_user_pages_fast(), with the
following goals:

1) As above: provide a pin_user_pages*() routine for drm/i915 to call,
   in place of __get_user_pages_fast(),

2) Get rid of the gup.c duplicate code for walking page tables with
   interrupts disabled. This duplicate code is a minor maintenance
   problem anyway.

3) Make it easy for an upcoming patch from Souptick, which aims to
   convert __get_user_pages_fast() to use a gup_flags argument, instead
   of a bool writeable arg.  Also, if this series looks good, we can
   ask Souptick to change the name as well, to whatever the consensus
   is. My initial recommendation is: get_user_pages_fast_only(), to
   match the new pin_user_pages_fast_only().
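
To make goal (1) concrete, the shape of the drm/i915 call site change is
sketched below (patch 4/4 has the real diff):

	/* Before: bool "write" argument, FOLL_GET semantics: */
	pinned = __get_user_pages_fast(obj->userptr.ptr, num_pages,
				       !i915_gem_object_is_readonly(obj),
				       pvec);

	/* After: gup_flags argument, FOLL_PIN semantics, no fallback: */
	if (!i915_gem_object_is_readonly(obj))
		gup_flags |= FOLL_WRITE;
	pinned = pin_user_pages_fast_only(obj->userptr.ptr, num_pages,
					  gup_flags, pvec);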

John Hubbard (4):
  mm/gup: move __get_user_pages_fast() down a few lines in gup.c
  mm/gup: refactor and de-duplicate gup_fast() code
  mm/gup: introduce pin_user_pages_fast_only()
  drm/i915: convert get_user_pages() --> pin_user_pages()

 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  22 +--
 include/linux/mm.h  |   3 +
 mm/gup.c| 153 
 3 files changed, 109 insertions(+), 69 deletions(-)


base-commit: 051143e1602d90ea71887d92363edd539d411de5
-- 
2.26.2



Re: [Intel-gfx] [PATCH 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-23 Thread John Hubbard

On 2020-05-22 04:40, Souptick Joarder wrote:
...

3) Make it easy for an upcoming patch from Souptick, which aims to
convert __get_user_pages_fast() to use a gup_flags argument, instead
of a bool writeable arg.  Also, if this series looks good, we can
ask Souptick to change the name as well, to whatever the consensus
is. My initial recommendation is: get_user_pages_fast_only(), to
match the new pin_user_pages_fast_only().


Shall I hold my changes till 5.8-rc1, when this series will appear upstream?


I don't really see any problem with your posting something that is based on
the latest linux-next (which has my changes now). Should be fine. And in
fact it would be nice to get that done in this round, so that the pin* and
get* APIs look the same.


thanks,
--
John Hubbard
NVIDIA


[Intel-gfx] [PATCH v2 1/4] mm/gup: move __get_user_pages_fast() down a few lines in gup.c

2020-05-23 Thread John Hubbard
This is in order to avoid a forward declaration of
internal_get_user_pages_fast(), in the next patch.

This is code movement only--all generated code should
be identical.

Signed-off-by: John Hubbard 
---
 mm/gup.c | 112 +++
 1 file changed, 56 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 50cd9323efff..4502846d57f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2666,62 +2666,6 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
- struct page **pages)
-{
-   unsigned long len, end;
-   unsigned long flags;
-   int nr_pinned = 0;
-   /*
-* Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-* because gup fast is always a "pin with a +1 page refcount" request.
-*/
-   unsigned int gup_flags = FOLL_GET;
-
-   if (write)
-   gup_flags |= FOLL_WRITE;
-
-   start = untagged_addr(start) & PAGE_MASK;
-   len = (unsigned long) nr_pages << PAGE_SHIFT;
-   end = start + len;
-
-   if (end <= start)
-   return 0;
-   if (unlikely(!access_ok((void __user *)start, len)))
-   return 0;
-
-   /*
-* Disable interrupts.  We use the nested form as we can already have
-* interrupts disabled by get_futex_key.
-*
-* With interrupts disabled, we block page table pages from being
-* freed from under us. See struct mmu_table_batch comments in
-* include/asm-generic/tlb.h for more details.
-*
-* We do not adopt an rcu_read_lock(.) here as we also want to
-* block IPIs that come from THPs splitting.
-*/
-
-   if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-   gup_fast_permitted(start, end)) {
-   local_irq_save(flags);
-   gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-   local_irq_restore(flags);
-   }
-
-   return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
   unsigned int gup_flags, struct page **pages)
 {
@@ -2794,6 +2738,62 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ struct page **pages)
+{
+   unsigned long len, end;
+   unsigned long flags;
+   int nr_pinned = 0;
+   /*
+* Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+* because gup fast is always a "pin with a +1 page refcount" request.
+*/
+   unsigned int gup_flags = FOLL_GET;
+
+   if (write)
+   gup_flags |= FOLL_WRITE;
+
+   start = untagged_addr(start) & PAGE_MASK;
+   len = (unsigned long) nr_pages << PAGE_SHIFT;
+   end = start + len;
+
+   if (end <= start)
+   return 0;
+   if (unlikely(!access_ok((void __user *)start, len)))
+   return 0;
+
+   /*
+* Disable interrupts.  We use the nested form as we can already have
+* interrupts disabled by get_futex_key.
+*
+* With interrupts disabled, we block page table pages from being
+* freed from under us. See struct mmu_table_batch comments in
+* include/asm-generic/tlb.h for more details.
+*
+* We do not adopt an rcu_read_lock(.) here as we also want to
+* block IPIs that come from THPs splitting.
+*/
+
+   if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+   gup_fast_permitted(start, end)) {
+   local_irq_save(flags);
+   gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+   local_irq_restore(flags);
+   }
+
+   return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:  starting user address
-- 
2.26.2



[Intel-gfx] [PATCH v2 2/4] mm/gup: refactor and de-duplicate gup_fast() code

2020-05-23 Thread John Hubbard
There were two nearly identical sets of code for gup_fast()
style of walking the page tables with interrupts disabled.
This has led to the usual maintenance problems that arise from
having duplicated code.

There is already a core internal routine in gup.c for gup_fast(),
so just enhance it very slightly: allow skipping the fall-back
to "slow" (regular) get_user_pages(), via the new FOLL_FAST_ONLY
flag. Then, just call internal_get_user_pages_fast() from
__get_user_pages_fast(), and adjust the API to match pre-existing
API behavior.

There is a change in behavior from this refactoring: the nested
form of interrupt disabling is used in all gup_fast() variants
now. That's because there is only one place that interrupt disabling
for page walking is done, and so the safer form is required. This
should, if anything, eliminate possible (rare) bugs, because the
non-nested form of enabling interrupts was fragile at best.

Signed-off-by: John Hubbard 
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 63 ++++++++++++++++++++++++++++++---------------------------------
 2 files changed, 31 insertions(+), 33 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a5594ac9ebe3..84b601cab699 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2782,6 +2782,7 @@ struct page *follow_page(struct vm_area_struct *vma, 
unsigned long address,
 #define FOLL_LONGTERM  0x1 /* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD 0x2 /* split huge pmd before returning */
 #define FOLL_PIN   0x4 /* pages must be released via unpin_user_page */
+#define FOLL_FAST_ONLY 0x8 /* gup_fast: prevent fall-back to slow gup */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 4502846d57f9..4564b0dc7d0b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2694,10 +2694,12 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
struct page **pages)
 {
unsigned long addr, len, end;
+   unsigned long flags;
int nr_pinned = 0, ret = 0;
 
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-  FOLL_FORCE | FOLL_PIN | FOLL_GET)))
+  FOLL_FORCE | FOLL_PIN | FOLL_GET |
+  FOLL_FAST_ONLY)))
return -EINVAL;
 
start = untagged_addr(start) & PAGE_MASK;
@@ -2710,15 +2712,26 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
if (unlikely(!access_ok((void __user *)start, len)))
return -EFAULT;
 
+   /*
+* Disable interrupts. The nested form is used, in order to allow full,
+* general purpose use of this routine.
+*
+* With interrupts disabled, we block page table pages from being
+* freed from under us. See struct mmu_table_batch comments in
+* include/asm-generic/tlb.h for more details.
+*
+* We do not adopt an rcu_read_lock(.) here as we also want to
+* block IPIs that come from THPs splitting.
+*/
if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
gup_fast_permitted(start, end)) {
-   local_irq_disable();
+   local_irq_save(flags);
gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
-   local_irq_enable();
+   local_irq_restore(flags);
ret = nr_pinned;
}
 
-   if (nr_pinned < nr_pages) {
+   if (nr_pinned < nr_pages && !(gup_flags & FOLL_FAST_ONLY)) {
/* Try to get the remaining pages with get_user_pages */
start += nr_pinned << PAGE_SHIFT;
pages += nr_pinned;
@@ -2750,45 +2763,29 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
  struct page **pages)
 {
-   unsigned long len, end;
-   unsigned long flags;
-   int nr_pinned = 0;
+   int nr_pinned;
/*
 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
 * because gup fast is always a "pin with a +1 page refcount" request.
+*
+* FOLL_FAST_ONLY is required in order to match the API description of
+* this routine: no fall back to regular ("slow") GUP.
 */
-   unsigned int gup_flags = FOLL_GET;
+   unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
 
if (write)
gup_flags |= FOLL_WRITE;
 
-   start = untagged_addr(start) & PAGE_MASK;
-   len = (unsigned long) nr_pages << PAGE_SHIFT;
-   end = start + len;
-
-   if (end <= start)
-   return 0;
-   if (unlikely(!access_ok((void __user *)start, len)))
-   retu

[Intel-gfx] [PATCH] mm/gup: fixup gup.c for "mm/gup: refactor and de-duplicate gup_fast() code"

2020-05-23 Thread John Hubbard
Include FOLL_FAST_ONLY in the list of flags to *not* WARN()
on, in internal_get_user_pages_fast().

Cc: Chris Wilson 
Cc: Daniel Vetter 
Cc: David Airlie 
Cc: Jani Nikula 
Cc: "Joonas Lahtinen" 
Cc: Matthew Auld 
Cc: Matthew Wilcox 
Cc: Rodrigo Vivi 
Cc: Souptick Joarder 
Cc: Tvrtko Ursulin 
Signed-off-by: John Hubbard 
---

Hi Andrew, Chris,

Andrew: This is a fixup that applies to today's (20200521) linux-next.
In that tree, this fixes up:

commit dfb8dfe80808 ("mm/gup: refactor and de-duplicate gup_fast() code")

Chris: I'd like to request another CI run for the drm/i915 changes, so
for that, would you prefer that I post a v2 of the series [1], or
is it easier for you to just apply this patch here, on top of [2]?

[1] https://lore.kernel.org/r/20200519002124.2025955-1-jhubb...@nvidia.com

[2] 
https://lore.kernel.org/r/158985123351.31239.10766458886430429...@emeril.freedesktop.org

thanks,
John Hubbard
NVIDIA

 mm/gup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/gup.c b/mm/gup.c
index dd8895f2fafa1..ada6aa79576dc 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2724,7 +2724,8 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
int nr_pinned = 0, ret = 0;
 
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-  FOLL_FORCE | FOLL_PIN | FOLL_GET)))
+  FOLL_FORCE | FOLL_PIN | FOLL_GET |
+  FOLL_FAST_ONLY)))
return -EINVAL;
 
start = untagged_addr(start) & PAGE_MASK;
-- 
2.26.2



Re: [Intel-gfx] [PATCH] mm/gup: fixup gup.c for "mm/gup: refactor and de-duplicate gup_fast() code"

2020-05-23 Thread John Hubbard

On 2020-05-21 19:46, Chris Wilson wrote:

Quoting John Hubbard (2020-05-22 00:38:41)

Include FOLL_FAST_ONLY in the list of flags to *not* WARN()
on, in internal_get_user_pages_fast().

Cc: Chris Wilson 
Cc: Daniel Vetter 
Cc: David Airlie 
Cc: Jani Nikula 
Cc: "Joonas Lahtinen" 
Cc: Matthew Auld 
Cc: Matthew Wilcox 
Cc: Rodrigo Vivi 
Cc: Souptick Joarder 
Cc: Tvrtko Ursulin 
Signed-off-by: John Hubbard 
---

Hi Andrew, Chris,

Andrew: This is a fixup that applies to today's (20200521) linux-next.
In that tree, this fixes up:

commit dfb8dfe80808 ("mm/gup: refactor and de-duplicate gup_fast() code")

Chris: I'd like to request another CI run for the drm/i915 changes, so
for that, would you prefer that I post a v2 of the series [1], or
is it easier for you to just apply this patch here, on top of [2]?


If you post your series again with this patch included to intel-gfx, CI
will pick it up. Or I'll do that in the morning.
-Chris



OK, perfect. I'll post a version for linux.git in a moment here.


thanks,
--
John Hubbard
NVIDIA


[Intel-gfx] [PATCH v2 4/4] drm/i915: convert get_user_pages() --> pin_user_pages()

2020-05-23 Thread John Hubbard
This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages, and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/

Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 22 -
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 7ffd7afeb7a5..b55ac7563189 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -471,7 +471,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
down_read(&mm->mmap_sem);
locked = 1;
}
-   ret = get_user_pages_remote
+   ret = pin_user_pages_remote
(work->task, mm,
 obj->userptr.ptr + pinned * PAGE_SIZE,
 npages - pinned,
@@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
}
mutex_unlock(&obj->mm.lock);
 
-   release_pages(pvec, pinned);
+   unpin_user_pages(pvec, pinned);
kvfree(pvec);
 
i915_gem_object_put(obj);
@@ -564,6 +564,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
struct sg_table *pages;
bool active;
int pinned;
+   unsigned int gup_flags = 0;
 
/* If userspace should engineer that these pages are replaced in
 * the vma between us binding this page into the GTT and completion
@@ -598,11 +599,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
  GFP_KERNEL |
  __GFP_NORETRY |
  __GFP_NOWARN);
-   if (pvec) /* defer to worker if malloc fails */
-   pinned = __get_user_pages_fast(obj->userptr.ptr,
-  num_pages,
-  !i915_gem_object_is_readonly(obj),
-  pvec);
+   /* defer to worker if malloc fails */
+   if (pvec) {
+   if (!i915_gem_object_is_readonly(obj))
+   gup_flags |= FOLL_WRITE;
+   pinned = pin_user_pages_fast_only(obj->userptr.ptr,
+ num_pages, gup_flags,
+ pvec);
+   }
}
 
active = false;
@@ -620,7 +624,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
 
if (IS_ERR(pages))
-   release_pages(pvec, pinned);
+   unpin_user_pages(pvec, pinned);
kvfree(pvec);
 
return PTR_ERR_OR_ZERO(pages);
@@ -675,7 +679,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
}
 
mark_page_accessed(page);
-   put_page(page);
+   unpin_user_page(page);
}
obj->mm.dirty = false;
 
-- 
2.26.2



Re: [Intel-gfx] [PATCH 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-21 Thread John Hubbard

On 2020-05-21 11:57, Chris Wilson wrote:

Quoting John Hubbard (2020-05-19 01:21:20)

This needs to go through Andrew's -mm tree, due to adding a new gup.c
routine. However, I would really love to have some testing from the
drm/i915 folks, because I haven't been able to run-time test that part
of it.


CI hit

<4> [185.667750] WARNING: CPU: 0 PID: 1387 at mm/gup.c:2699 internal_get_user_pages_fast+0x63a/0xac0
<4> [185.667752] Modules linked in: vgem snd_hda_codec_hdmi 
snd_hda_codec_realtek snd_hda_codec_generic i915 mei_hdcp x86_pkg_temp_thermal 
coretemp snd_hda_intel snd_intel_dspcfg crct10dif_pclmul snd_hda_codec crc32_pclmul 
snd_hwdep snd_hda_core ghash_clmulni_intel cdc_ether usbnet mii snd_pcm e1000e mei_me 
ptp pps_core mei intel_lpss_pci prime_numbers
<4> [185.667774] CPU: 0 PID: 1387 Comm: gem_userptr_bli Tainted: G U
5.7.0-rc5-CI-Patchwork_17704+ #1
<4> [185.66] Hardware name: Intel Corporation Ice Lake Client 
Platform/IceLake U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3234.A01.1906141750 
06/14/2019
<4> [185.667782] RIP: 0010:internal_get_user_pages_fast+0x63a/0xac0
<4> [185.667785] Code: 24 40 08 48 39 5c 24 38 49 89 df 0f 85 74 fc ff ff 48 83 44 24 
50 08 48 39 5c 24 58 49 89 dc 0f 85 e0 fb ff ff e9 14 fe ff ff <0f> 0b b8 ea ff ff ff 
e9 36 fb ff ff 4c 89 e8 48 21 e8 48 39 e8 0f
<4> [185.667789] RSP: 0018:c90001133c38 EFLAGS: 00010206
<4> [185.667792] RAX:  RBX:  RCX: 
8884999ee800
<4> [185.667795] RDX: 000c0001 RSI: 0100 RDI: 
7f419e774000
<4> [185.667798] RBP: 888453dbf040 R08:  R09: 
0001
<4> [185.667800] R10:  R11:  R12: 
888453dbf380
<4> [185.667803] R13: 8884999ee800 R14: 888453dbf3e8 R15: 
0040
<4> [185.667806] FS:  7f419e875e40() GS:88849fe0() 
knlGS:
<4> [185.667808] CS:  0010 DS:  ES:  CR0: 80050033
<4> [185.667811] CR2: 7f419e873000 CR3: 000458bd2004 CR4: 
00760ef0
<4> [185.667814] PKRU: 5554
<4> [185.667816] Call Trace:
<4> [185.667912]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]
<4> [185.667918]  ? mark_held_locks+0x49/0x70
<4> [185.667998]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]
<4> [185.668073]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]

and then panicked, across a range of systems.
-Chris



Thanks for this report! I'm looking into it now.

thanks,
--
John Hubbard
NVIDIA


[Intel-gfx] Solved: [PATCH 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-21 Thread John Hubbard

On 2020-05-21 12:11, John Hubbard wrote:

On 2020-05-21 11:57, Chris Wilson wrote:

Quoting John Hubbard (2020-05-19 01:21:20)

This needs to go through Andrew's -mm tree, due to adding a new gup.c
routine. However, I would really love to have some testing from the
drm/i915 folks, because I haven't been able to run-time test that part
of it.


CI hit

<4> [185.667750] WARNING: CPU: 0 PID: 1387 at mm/gup.c:2699 internal_get_user_pages_fast+0x63a/0xac0



OK, what happened here is that it's WARN()'ing due to passing in the new
FOLL_FAST_ONLY flag, which was not added to the whitelist.

So the fix is easy, and should be applied to the refactoring patch. I'll
send out a v2 of the series, which will effectively have this applied:


diff --git a/mm/gup.c b/mm/gup.c
index 6cbe98c93466..4f0ca3f849d1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2696,7 +2696,8 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,

int nr_pinned = 0, ret = 0;

if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
-  FOLL_FORCE | FOLL_PIN | FOLL_GET)))
+  FOLL_FORCE | FOLL_PIN | FOLL_GET |
+  FOLL_FAST_ONLY)))
return -EINVAL;

start = untagged_addr(start) & PAGE_MASK;


<4> [185.667752] Modules linked in: vgem snd_hda_codec_hdmi snd_hda_codec_realtek 
snd_hda_codec_generic i915 mei_hdcp x86_pkg_temp_thermal coretemp snd_hda_intel 
snd_intel_dspcfg crct10dif_pclmul snd_hda_codec crc32_pclmul snd_hwdep snd_hda_core 
ghash_clmulni_intel cdc_ether usbnet mii snd_pcm e1000e mei_me ptp pps_core mei 
intel_lpss_pci prime_numbers
<4> [185.667774] CPU: 0 PID: 1387 Comm: gem_userptr_bli Tainted: G U
5.7.0-rc5-CI-Patchwork_17704+ #1
<4> [185.66] Hardware name: Intel Corporation Ice Lake Client Platform/IceLake 
U DDR4 SODIMM PD RVP, BIOS ICLSFWR1.R00.3234.A01.1906141750 06/14/2019

<4> [185.667782] RIP: 0010:internal_get_user_pages_fast+0x63a/0xac0
<4> [185.667785] Code: 24 40 08 48 39 5c 24 38 49 89 df 0f 85 74 fc ff ff 48 83 44 
24 50 08 48 39 5c 24 58 49 89 dc 0f 85 e0 fb ff ff e9 14 fe ff ff <0f> 0b b8 ea ff 
ff ff e9 36 fb ff ff 4c 89 e8 48 21 e8 48 39 e8 0f

<4> [185.667789] RSP: 0018:c90001133c38 EFLAGS: 00010206
<4> [185.667792] RAX:  RBX:  RCX: 
8884999ee800
<4> [185.667795] RDX: 000c0001 RSI: 0100 RDI: 
7f419e774000
<4> [185.667798] RBP: 888453dbf040 R08:  R09: 
0001
<4> [185.667800] R10:  R11:  R12: 
888453dbf380
<4> [185.667803] R13: 8884999ee800 R14: 888453dbf3e8 R15: 
0040
<4> [185.667806] FS:  7f419e875e40() GS:88849fe0() 
knlGS:

<4> [185.667808] CS:  0010 DS:  ES:  CR0: 80050033
<4> [185.667811] CR2: 7f419e873000 CR3: 000458bd2004 CR4: 
00760ef0
<4> [185.667814] PKRU: 5554
<4> [185.667816] Call Trace:
<4> [185.667912]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]
<4> [185.667918]  ? mark_held_locks+0x49/0x70
<4> [185.667998]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]
<4> [185.668073]  ? i915_gem_userptr_get_pages+0x1c6/0x290 [i915]

and then panicked, across a range of systems.
-Chris



btw, the panic seems to indicate an additional, pre-existing problem:
i915_gem_userptr_get_pages(), in this case at least, is not able to
recover from a get_user_pages/pin_user_pages failure.


thanks,
--
John Hubbard
NVIDIA


[Intel-gfx] [PATCH 2/4] mm/gup: refactor and de-duplicate gup_fast() code

2020-05-18 Thread John Hubbard
There were two nearly identical sets of code for gup_fast()
style of walking the page tables with interrupts disabled.
This has led to the usual maintenance problems that arise from
having duplicated code.

There is already a core internal routine in gup.c for gup_fast(),
so just enhance it very slightly: allow skipping the fall-back
to "slow" (regular) get_user_pages(), via the new FOLL_FAST_ONLY
flag. Then, just call internal_get_user_pages_fast() from
__get_user_pages_fast(), and adjust the API to match pre-existing
API behavior.

There is a change in behavior from this refactoring: the nested
form of interrupt disabling is used in all gup_fast() variants
now. That's because there is only one place that interrupt disabling
for page walking is done, and so the safer form is required. This
should, if anything, eliminate possible (rare) bugs, because the
non-nested form of enabling interrupts was fragile at best.

Signed-off-by: John Hubbard 
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 60 ++++++++++++++++++++++++++++--------------------------------
 2 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a5594ac9ebe3..84b601cab699 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2782,6 +2782,7 @@ struct page *follow_page(struct vm_area_struct *vma, 
unsigned long address,
 #define FOLL_LONGTERM  0x1 /* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD 0x2 /* split huge pmd before returning */
 #define FOLL_PIN   0x4 /* pages must be released via unpin_user_page */
+#define FOLL_FAST_ONLY 0x8 /* gup_fast: prevent fall-back to slow gup */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 4502846d57f9..bb3e2c4288c3 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2694,6 +2694,7 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
struct page **pages)
 {
unsigned long addr, len, end;
+   unsigned long flags;
int nr_pinned = 0, ret = 0;
 
if (WARN_ON_ONCE(gup_flags & ~(FOLL_WRITE | FOLL_LONGTERM |
@@ -2710,15 +2711,26 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
if (unlikely(!access_ok((void __user *)start, len)))
return -EFAULT;
 
+   /*
+* Disable interrupts. The nested form is used, in order to allow full,
+* general purpose use of this routine.
+*
+* With interrupts disabled, we block page table pages from being
+* freed from under us. See struct mmu_table_batch comments in
+* include/asm-generic/tlb.h for more details.
+*
+* We do not adopt an rcu_read_lock(.) here as we also want to
+* block IPIs that come from THPs splitting.
+*/
if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
gup_fast_permitted(start, end)) {
-   local_irq_disable();
+   local_irq_save(flags);
gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned);
-   local_irq_enable();
+   local_irq_restore(flags);
ret = nr_pinned;
}
 
-   if (nr_pinned < nr_pages) {
+   if (nr_pinned < nr_pages && !(gup_flags & FOLL_FAST_ONLY)) {
/* Try to get the remaining pages with get_user_pages */
start += nr_pinned << PAGE_SHIFT;
pages += nr_pinned;
@@ -2750,45 +2762,29 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
 int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
  struct page **pages)
 {
-   unsigned long len, end;
-   unsigned long flags;
-   int nr_pinned = 0;
+   int nr_pinned;
/*
 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
 * because gup fast is always a "pin with a +1 page refcount" request.
+*
+* FOLL_FAST_ONLY is required in order to match the API description of
+* this routine: no fall back to regular ("slow") GUP.
 */
-   unsigned int gup_flags = FOLL_GET;
+   unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;
 
if (write)
gup_flags |= FOLL_WRITE;
 
-   start = untagged_addr(start) & PAGE_MASK;
-   len = (unsigned long) nr_pages << PAGE_SHIFT;
-   end = start + len;
-
-   if (end <= start)
-   return 0;
-   if (unlikely(!access_ok((void __user *)start, len)))
-   return 0;
-
+   nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
+pages);
/*
-* Disable interrupts.  We use the nested form as we can already have
-* interrupts disabled by get_futex_key.
-*
-* With interr

[Intel-gfx] [PATCH 3/4] mm/gup: introduce pin_user_pages_fast_only()

2020-05-18 Thread John Hubbard
This is the FOLL_PIN equivalent of __get_user_pages_fast(),
except with a more descriptive name, and gup_flags instead of
a boolean "write" in the argument list.

Signed-off-by: John Hubbard 
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 84b601cab699..98be7289d7e9 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1820,6 +1820,8 @@ extern int mprotect_fixup(struct vm_area_struct *vma,
  */
 int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
  struct page **pages);
+int pin_user_pages_fast_only(unsigned long start, int nr_pages,
+unsigned int gup_flags, struct page **pages);
 /*
  * per-process(per-mm_struct) statistics.
  */
diff --git a/mm/gup.c b/mm/gup.c
index bb3e2c4288c3..4413f0f94b68 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2858,6 +2858,42 @@ int pin_user_pages_fast(unsigned long start, int 
nr_pages,
 }
 EXPORT_SYMBOL_GPL(pin_user_pages_fast);
 
+/*
+ * This is the FOLL_PIN equivalent of __get_user_pages_fast(). Behavior is the
+ * same, except that this one sets FOLL_PIN instead of FOLL_GET.
+ *
+ * The API rules are the same, too: no negative values may be returned.
+ */
+int pin_user_pages_fast_only(unsigned long start, int nr_pages,
+unsigned int gup_flags, struct page **pages)
+{
+   int nr_pinned;
+
+   /*
+* FOLL_GET and FOLL_PIN are mutually exclusive. Note that the API
+* rules require returning 0, rather than -errno:
+*/
+   if (WARN_ON_ONCE(gup_flags & FOLL_GET))
+   return 0;
+   /*
+* FOLL_FAST_ONLY is required in order to match the API description of
+* this routine: no fall back to regular ("slow") GUP.
+*/
+   gup_flags |= (FOLL_PIN | FOLL_FAST_ONLY);
+   nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
+pages);
+   /*
+* This routine is not allowed to return negative values. However,
+* internal_get_user_pages_fast() *can* return -errno. Therefore,
+* correct for that here:
+*/
+   if (nr_pinned < 0)
+   nr_pinned = 0;
+
+   return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(pin_user_pages_fast_only);
+
 /**
  * pin_user_pages_remote() - pin pages of a remote process (task != current)
  *
-- 
2.26.2



[Intel-gfx] [PATCH 4/4] drm/i915: convert get_user_pages() --> pin_user_pages()

2020-05-18 Thread John Hubbard
This code was using get_user_pages*(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's
time to convert the get_user_pages*() + put_page() calls to
pin_user_pages*() + unpin_user_pages() calls.

There is some helpful background in [2]: basically, this is a small
part of fixing a long-standing disconnect between pinning pages, and
file systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
https://lwn.net/Articles/807108/

Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 22 -
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 7ffd7afeb7a5..b55ac7563189 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -471,7 +471,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
down_read(&mm->mmap_sem);
locked = 1;
}
-   ret = get_user_pages_remote
+   ret = pin_user_pages_remote
(work->task, mm,
 obj->userptr.ptr + pinned * PAGE_SIZE,
 npages - pinned,
@@ -507,7 +507,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
}
mutex_unlock(&obj->mm.lock);
 
-   release_pages(pvec, pinned);
+   unpin_user_pages(pvec, pinned);
kvfree(pvec);
 
i915_gem_object_put(obj);
@@ -564,6 +564,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
struct sg_table *pages;
bool active;
int pinned;
+   unsigned int gup_flags = 0;
 
/* If userspace should engineer that these pages are replaced in
 * the vma between us binding this page into the GTT and completion
@@ -598,11 +599,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
  GFP_KERNEL |
  __GFP_NORETRY |
  __GFP_NOWARN);
-   if (pvec) /* defer to worker if malloc fails */
-   pinned = __get_user_pages_fast(obj->userptr.ptr,
-  num_pages,
-  !i915_gem_object_is_readonly(obj),
-  pvec);
+   /* defer to worker if malloc fails */
+   if (pvec) {
+   if (!i915_gem_object_is_readonly(obj))
+   gup_flags |= FOLL_WRITE;
+   pinned = pin_user_pages_fast_only(obj->userptr.ptr,
+ num_pages, gup_flags,
+ pvec);
+   }
}
 
active = false;
@@ -620,7 +624,7 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
 
if (IS_ERR(pages))
-   release_pages(pvec, pinned);
+   unpin_user_pages(pvec, pinned);
kvfree(pvec);
 
return PTR_ERR_OR_ZERO(pages);
@@ -675,7 +679,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
}
 
mark_page_accessed(page);
-   put_page(page);
+   unpin_user_page(page);
}
obj->mm.dirty = false;
 
-- 
2.26.2



[Intel-gfx] [PATCH 1/4] mm/gup: move __get_user_pages_fast() down a few lines in gup.c

2020-05-18 Thread John Hubbard
This is in order to avoid a forward declaration of
internal_get_user_pages_fast(), in the next patch.

This is code movement only--all generated code should
be identical.

Signed-off-by: John Hubbard 
---
 mm/gup.c | 112 +++
 1 file changed, 56 insertions(+), 56 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 50cd9323efff..4502846d57f9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2666,62 +2666,6 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
 }
 #endif
 
-/*
- * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
- * the regular GUP.
- * Note a difference with get_user_pages_fast: this always returns the
- * number of pages pinned, 0 if no pages were pinned.
- *
- * If the architecture does not support this function, simply return with no
- * pages pinned.
- */
-int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
- struct page **pages)
-{
-   unsigned long len, end;
-   unsigned long flags;
-   int nr_pinned = 0;
-   /*
-* Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
-* because gup fast is always a "pin with a +1 page refcount" request.
-*/
-   unsigned int gup_flags = FOLL_GET;
-
-   if (write)
-   gup_flags |= FOLL_WRITE;
-
-   start = untagged_addr(start) & PAGE_MASK;
-   len = (unsigned long) nr_pages << PAGE_SHIFT;
-   end = start + len;
-
-   if (end <= start)
-   return 0;
-   if (unlikely(!access_ok((void __user *)start, len)))
-   return 0;
-
-   /*
-* Disable interrupts.  We use the nested form as we can already have
-* interrupts disabled by get_futex_key.
-*
-* With interrupts disabled, we block page table pages from being
-* freed from under us. See struct mmu_table_batch comments in
-* include/asm-generic/tlb.h for more details.
-*
-* We do not adopt an rcu_read_lock(.) here as we also want to
-* block IPIs that come from THPs splitting.
-*/
-
-   if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
-   gup_fast_permitted(start, end)) {
-   local_irq_save(flags);
-   gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
-   local_irq_restore(flags);
-   }
-
-   return nr_pinned;
-}
-EXPORT_SYMBOL_GPL(__get_user_pages_fast);
-
 static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
   unsigned int gup_flags, struct page **pages)
 {
@@ -2794,6 +2738,62 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
return ret;
 }
 
+/*
+ * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
+ * the regular GUP.
+ * Note a difference with get_user_pages_fast: this always returns the
+ * number of pages pinned, 0 if no pages were pinned.
+ *
+ * If the architecture does not support this function, simply return with no
+ * pages pinned.
+ */
+int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ struct page **pages)
+{
+   unsigned long len, end;
+   unsigned long flags;
+   int nr_pinned = 0;
+   /*
+* Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
+* because gup fast is always a "pin with a +1 page refcount" request.
+*/
+   unsigned int gup_flags = FOLL_GET;
+
+   if (write)
+   gup_flags |= FOLL_WRITE;
+
+   start = untagged_addr(start) & PAGE_MASK;
+   len = (unsigned long) nr_pages << PAGE_SHIFT;
+   end = start + len;
+
+   if (end <= start)
+   return 0;
+   if (unlikely(!access_ok((void __user *)start, len)))
+   return 0;
+
+   /*
+* Disable interrupts.  We use the nested form as we can already have
+* interrupts disabled by get_futex_key.
+*
+* With interrupts disabled, we block page table pages from being
+* freed from under us. See struct mmu_table_batch comments in
+* include/asm-generic/tlb.h for more details.
+*
+* We do not adopt an rcu_read_lock(.) here as we also want to
+* block IPIs that come from THPs splitting.
+*/
+
+   if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+   gup_fast_permitted(start, end)) {
+   local_irq_save(flags);
+   gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
+   local_irq_restore(flags);
+   }
+
+   return nr_pinned;
+}
+EXPORT_SYMBOL_GPL(__get_user_pages_fast);
+
 /**
  * get_user_pages_fast() - pin user pages in memory
  * @start:  starting user address
-- 
2.26.2



[Intel-gfx] [PATCH 0/4] mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()

2020-05-18 Thread John Hubbard
This needs to go through Andrew's -mm tree, due to adding a new gup.c
routine. However, I would really love to have some testing from the
drm/i915 folks, because I haven't been able to run-time test that part
of it.

Otherwise, though, the series has passed my basic run time testing:
some LTP tests, some xfs and ext4 non-destructive xfstests, and an
assortment of other smaller ones: vm selftests, io_uring_register, a
few more. But that's only on one particular machine. Also, cross-compile
tests for half a dozen arches all pass.

Details:

In order to convert the drm/i915 driver from get_user_pages() to
pin_user_pages(), a FOLL_PIN equivalent of __get_user_pages_fast() was
required. That led to refactoring __get_user_pages_fast(), with the
following goals:

1) As above: provide a pin_user_pages*() routine for drm/i915 to call,
   in place of __get_user_pages_fast(),

2) Get rid of the gup.c duplicate code for walking page tables with
   interrupts disabled. This duplicate code is a minor maintenance
   problem anyway.

3) Make it easy for an upcoming patch from Souptick, which aims to
   convert __get_user_pages_fast() to use a gup_flags argument, instead
   of a bool writeable arg.  Also, if this series looks good, we can
   ask Souptick to change the name as well, to whatever the consensus
   is. My initial recommendation is: get_user_pages_fast_only(), to
   match the new pin_user_pages_fast_only().

John Hubbard (4):
  mm/gup: move __get_user_pages_fast() down a few lines in gup.c
  mm/gup: refactor and de-duplicate gup_fast() code
  mm/gup: introduce pin_user_pages_fast_only()
  drm/i915: convert get_user_pages() --> pin_user_pages()

 drivers/gpu/drm/i915/gem/i915_gem_userptr.c |  22 +--
 include/linux/mm.h  |   3 +
 mm/gup.c| 150 
 3 files changed, 107 insertions(+), 68 deletions(-)


base-commit: 642b151f45dd54809ea00ecd3976a56c1ec9b53d
-- 
2.26.2



Re: [Intel-gfx] [PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault

2020-05-05 Thread John Hubbard
Seems like we could get stuck in a loop here,
if we're not issuing a new REQ, right?



if (ret == -EBUSY)
continue;
return ret;
@@ -562,7 +587,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
break;
}
  
-	nouveau_dmem_convert_pfn(drm, &range);
+   nouveau_hmm_convert_pfn(drm, &range, ioctl_addr);
  
  	svmm->vmm->vmm.object.client->super = true;

ret = nvif_object_ioctl(&svmm->vmm->vmm.object, data, size, NULL);
@@ -589,6 +614,7 @@ nouveau_svm_fault(struct nvif_notify *notify)
} i;
u64 phys[16];
} args;
+   unsigned long hmm_pfns[ARRAY_SIZE(args.phys)];



Is there a risk of blowing up the stack here?
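
(For scale, if I'm reading args right: args.phys is "u64 phys[16]", so this
adds ARRAY_SIZE(args.phys) * sizeof(unsigned long) = 16 * 8 = 128 bytes on
64-bit, on top of the args struct itself. Modest, but these things add up.)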

...


--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -19,45 +19,45 @@
  #include 
  
  /*

- * hmm_pfn_flag_e - HMM flag enums
+ * On output:
+ * 0 - The page is faultable and a future call with
+ * HMM_PFN_REQ_FAULT could succeed.
+ * HMM_PFN_VALID - the pfn field points to a valid PFN. This PFN is at
+ * least readable. If dev_private_owner is !NULL then this could
+ * point at a DEVICE_PRIVATE page.
+ * HMM_PFN_WRITE - if the page memory can be written to (requires HMM_PFN_VALID)
+ * HMM_PFN_ERROR - accessing the pfn is impossible and the device should
+ * fail. ie poisoned memory, special pages, no vma, etc
   *
- * Flags:
- * HMM_PFN_VALID: pfn is valid. It has, at least, read permission.
- * HMM_PFN_WRITE: CPU page table has write permission set
- *
- * The driver provides a flags array for mapping page protections to device
- * PTE bits. If the driver valid bit for an entry is bit 3,
- * i.e., (entry & (1 << 3)), then the driver must provide
- * an array in hmm_range.flags with hmm_range.flags[HMM_PFN_VALID] == 1 << 3.
- * Same logic apply to all flags. This is the same idea as vm_page_prot in vma
- * except that this is per device driver rather than per architecture.
+ * On input:
+ * 0 - Return the current state of the page, do not fault it.
+ * HMM_PFN_REQ_FAULT - The output must have HMM_PFN_VALID or hmm_range_fault()
+ * will fail
+ * HMM_PFN_REQ_WRITE - The output must have HMM_PFN_WRITE or hmm_range_fault()
+ * will fail. Must be combined with HMM_PFN_REQ_FAULT.
   */
-enum hmm_pfn_flag_e {
-   HMM_PFN_VALID = 0,
-   HMM_PFN_WRITE,
-   HMM_PFN_FLAG_MAX
+enum hmm_pfn_flags {


Let's add:

/* Output flags: */


+   HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
+   HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
+   HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+


/* Input flags: */
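
That is, the enum would read (assuming the input request bits alias the
corresponding output bits, as the rest of the series suggests):

enum hmm_pfn_flags {
	/* Output flags: */
	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),

	/* Input flags: */
	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
};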

...


@@ -174,44 +162,44 @@ static int hmm_vma_walk_hole(unsigned long addr, unsigned long end,
}
if (required_fault)
return hmm_vma_fault(addr, end, required_fault, walk);
-   return hmm_pfns_fill(addr, end, range, HMM_PFN_NONE);
+   return hmm_pfns_fill(addr, end, range, 0);
  }
  
-static inline uint64_t pmd_to_hmm_pfn_flags(struct hmm_range *range, pmd_t pmd)

+static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
+pmd_t pmd)
  {
if (pmd_protnone(pmd))
return 0;
-   return pmd_write(pmd) ? range->flags[HMM_PFN_VALID] |
-   range->flags[HMM_PFN_WRITE] :
-   range->flags[HMM_PFN_VALID];
+   return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;



I always found the previous range->flags[...] approach hard to remember, so it's
nice to see a simpler version now.


thanks,
--
John Hubbard
NVIDIA


Re: [Intel-gfx] [PATCH hmm v2 4/5] mm/hmm: remove HMM_PFN_SPECIAL

2020-05-05 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

This is just an alias for HMM_PFN_ERROR, nothing cares that the error was
because of a special page vs any other error case.


Reviewed-by: John Hubbard 

thanks,
--
John Hubbard
NVIDIA


Acked-by: Felix Kuehling 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jason Gunthorpe 
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 -
  drivers/gpu/drm/nouveau/nouveau_svm.c   | 1 -
  include/linux/hmm.h | 8 
  mm/hmm.c| 2 +-
  4 files changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 41ae7f96f48194..76b4a4fa39ed04 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -775,7 +775,6 @@ static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
  static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
0xfffeUL, /* HMM_PFN_ERROR */
0, /* HMM_PFN_NONE */
-   0xfffcUL /* HMM_PFN_SPECIAL */
  };
  
  /**

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c 
b/drivers/gpu/drm/nouveau/nouveau_svm.c
index c68e9317cf0740..cf0d9bd61bebf9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -379,7 +379,6 @@ static const u64
  nouveau_svm_pfn_values[HMM_PFN_VALUE_MAX] = {
[HMM_PFN_ERROR  ] = ~NVIF_VMM_PFNMAP_V0_V,
[HMM_PFN_NONE   ] =  NVIF_VMM_PFNMAP_V0_NONE,
-   [HMM_PFN_SPECIAL] = ~NVIF_VMM_PFNMAP_V0_V,
  };
  
  /* Issue fault replay for GPU to retry accesses that faulted previously. */

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 0df27dd03d53d7..81c302c884c0e3 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -44,10 +44,6 @@ enum hmm_pfn_flag_e {
   * Flags:
   * HMM_PFN_ERROR: corresponding CPU page table entry points to poisoned memory
   * HMM_PFN_NONE: corresponding CPU page table entry is pte_none()
- * HMM_PFN_SPECIAL: corresponding CPU page table entry is special; i.e., the
- *  result of vmf_insert_pfn() or vm_insert_page(). Therefore, it should not
- *  be mirrored by a device, because the entry will never have HMM_PFN_VALID
- *  set and the pfn value is undefined.
   *
   * Driver provides values for none entry, error entry, and special entry.
   * Driver can alias (i.e., use same value) error and special, but
@@ -56,12 +52,10 @@ enum hmm_pfn_flag_e {
   * HMM pfn value returned by hmm_vma_get_pfns() or hmm_vma_fault() will be:
   * hmm_range.values[HMM_PFN_ERROR] if CPU page table entry is poisonous,
   * hmm_range.values[HMM_PFN_NONE] if there is no CPU page table entry,
- * hmm_range.values[HMM_PFN_SPECIAL] if CPU page table entry is a special one
   */
  enum hmm_pfn_value_e {
HMM_PFN_ERROR,
HMM_PFN_NONE,
-   HMM_PFN_SPECIAL,
HMM_PFN_VALUE_MAX
  };
  
@@ -110,8 +104,6 @@ static inline struct page *hmm_device_entry_to_page(const struct hmm_range *rang

return NULL;
if (entry == range->values[HMM_PFN_ERROR])
return NULL;
-   if (entry == range->values[HMM_PFN_SPECIAL])
-   return NULL;
if (!(entry & range->flags[HMM_PFN_VALID]))
return NULL;
return pfn_to_page(entry >> range->pfn_shift);
diff --git a/mm/hmm.c b/mm/hmm.c
index f06bcac948a79b..2e975eedb14f89 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -301,7 +301,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, 
unsigned long addr,
pte_unmap(ptep);
return -EFAULT;
}
-   *pfn = range->values[HMM_PFN_SPECIAL];
+   *pfn = range->values[HMM_PFN_ERROR];
return 0;
}
  



___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH hmm v2 2/5] mm/hmm: make hmm_range_fault return 0 or -1

2020-05-05 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

hmm_vma_walk->last is supposed to be updated after every write to the
pfns, so that it can be returned by hmm_range_fault(). However, this is
not done consistently. Fortunately nothing checks the return code of
hmm_range_fault() for anything other than error.

More importantly last must be set before returning -EBUSY as it is used to
prevent reading an output pfn as input flags when the loop restarts.

For clarity and simplicity make hmm_range_fault() return 0 or -ERRNO. Only
set last when returning -EBUSY.


Yes, this is also a nice simplification.


...
@@ -590,10 +580,13 @@ long hmm_range_fault(struct hmm_range *range)
return -EBUSY;
ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
  &hmm_walk_ops, &hmm_vma_walk);
+   /*
+* When -EBUSY is returned the loop restarts with
+* hmm_vma_walk.last set to an address that has not been stored
+* in pfns. All entries < last in the pfn array are set to their
+* output, and all >= are still at their input values.
+*/


I'm glad you added that comment. This is much easier to figure out with
that in place. After poking around this patch and eventually understanding the
.last handling, I wondered if you might like this slightly tweaked wording
instead:

/*
 * Each of the hmm_walk_ops routines returns -EBUSY if and only if
 * hmm_vma_walk.last has been set to an address that has not yet
 * been stored in pfns. All entries < last in the pfn array are
 * set to their output, and all >= are still at their input
 * values.
 */

Either way,

    Reviewed-by: John Hubbard 

thanks,
--
John Hubbard
NVIDIA


} while (ret == -EBUSY);
-
-   if (ret)
-   return ret;
-   return (hmm_vma_walk.last - range->start) >> PAGE_SHIFT;
+   return ret;
  }
  EXPORT_SYMBOL(hmm_range_fault);
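
For readers following along, the caller-side contract this implies is
roughly the following sketch (it assumes the mmu_interval_notifier retry
protocol used by this series; error handling elided):

int example_fault(struct hmm_range *range, struct mm_struct *mm)
{
	int ret;

	do {
		range->notifier_seq =
			mmu_interval_read_begin(range->notifier);
		down_read(&mm->mmap_sem);
		ret = hmm_range_fault(range);
		up_read(&mm->mmap_sem);
	} while (ret == -EBUSY);	/* raced with an invalidation */
	return ret;
}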



___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH hmm v2 1/5] mm/hmm: make CONFIG_DEVICE_PRIVATE into a select

2020-05-04 Thread John Hubbard

On 2020-05-01 11:20, Jason Gunthorpe wrote:

From: Jason Gunthorpe 

There is no reason for a user to select this or not directly - it should
be selected by drivers that are going to use the feature, similar to how
CONFIG_HMM_MIRROR works.


Yes, this is a nice touch.

Reviewed-by: John Hubbard 

thanks,
--
John Hubbard
NVIDIA



Currently all drivers provide a feature kconfig that will disable use of
DEVICE_PRIVATE in that driver, allowing users to avoid enabling this if
they don't want the overhead.

Acked-by: Felix Kuehling 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jason Gunthorpe 
---
  arch/powerpc/Kconfig| 2 +-
  drivers/gpu/drm/nouveau/Kconfig | 2 +-
  mm/Kconfig  | 7 +--
  3 files changed, 3 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 924c541a926008..8de52aefdc74cc 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -455,7 +455,7 @@ config PPC_TRANSACTIONAL_MEM
  config PPC_UV
bool "Ultravisor support"
depends on KVM_BOOK3S_HV_POSSIBLE
-   depends on DEVICE_PRIVATE
+   select DEVICE_PRIVATE
default n
help
  This option paravirtualizes the kernel to run in POWER platforms that
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index d6e4ae1ef7053a..af5793f3e7c2cf 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -86,10 +86,10 @@ config DRM_NOUVEAU_BACKLIGHT
  
  config DRM_NOUVEAU_SVM

bool "(EXPERIMENTAL) Enable SVM (Shared Virtual Memory) support"
-   depends on DEVICE_PRIVATE
depends on DRM_NOUVEAU
depends on MMU
depends on STAGING
+   select DEVICE_PRIVATE
select HMM_MIRROR
select MMU_NOTIFIER
default n
diff --git a/mm/Kconfig b/mm/Kconfig
index c1acc34c1c358c..7ca36bf5f5058e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -805,15 +805,10 @@ config HMM_MIRROR
depends on MMU
  
  config DEVICE_PRIVATE

-   bool "Unaddressable device memory (GPU memory, ...)"
+   bool
depends on ZONE_DEVICE
select DEV_PAGEMAP_OPS
  
-	help

- Allows creation of struct pages to represent unaddressable device
- memory; i.e., memory that is only accessible from the device (or
- group of devices). You likely also want to select HMM_MIRROR.
-
  config FRAME_VECTOR
bool
  



___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


Re: [Intel-gfx] [PATCH v3 00/39] put_user_pages(): miscellaneous call sites

2019-08-30 Thread John Hubbard

On 8/29/2019 6:29 PM, Mike Marshall wrote:

Hi John...

I added this patch series on top of Linux 5.3rc6 and ran
xfstests with no regressions...

Acked-by: Mike Marshall 



Hi Mike (and I hope Ira and others are reading as well, because
I'm making a bunch of claims further down),

That's great news, thanks for running that test suite and for
the report and the ACK.

There is an interesting pause right now, due to the fact that
we've made some tentative decisions about gup pinning, that affect
the call sites. A key decision is that only pages that were
requested via FOLL_PIN, will require put_user_page*() to release
them. There are 4 main cases, which were first explained by Jan
Kara and Vlastimil Babka, and are now written up in my FOLL_PIN
patch [1].
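
In code, that rule reads something like the sketch below. It is a sketch
only: both the FOLL_PIN plumbing and the wrapper names are still in flux,
and passing FOLL_PIN directly into gup (rather than via a vaddr_pin_*()
wrapper, see point 4 below) may not survive review:

static int example_dio_pin_cycle(unsigned long addr, int npages,
				 struct page **pages)
{
	int pinned;

	pinned = get_user_pages_fast(addr, npages,
				     FOLL_WRITE | FOLL_PIN, pages);
	if (pinned < 0)
		return pinned;

	/* ... device DMAs into the pinned pages here ... */

	/* FOLL_PIN pages go back via put_user_page*(), not put_page() */
	put_user_pages_dirty_lock(pages, pinned, true);
	return 0;
}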

So, what that means for this series is that:

1. Some call sites (mlock.c for example, and a lot of the mm/ files
in fact, and more) will not be converted: some of these patches will
get dropped, especially in mm/.

2. Call sites that do DirectIO or RDMA will need to set FOLL_PIN, and
will also need to call put_user_page().

3. Call sites that do RDMA will need to set FOLL_LONGTERM *and* FOLL_PIN,

   3.a. ...and will at least in some cases need to provide a link to a
   vaddr_pin object, and thus back to a struct file*...maybe. Still
   under discussion.

4. It's desirable to keep FOLL_* flags (or at least FOLL_PIN) internal
to the gup() calls. That implies using a wrapper call such as Ira's
vaddr_pin_[user]_pages(), instead of gup(), and vaddr_unpin_[user]_pages()
instead of put_user_page*().

5. We don't want to churn the call sites unnecessarily.

With that in mind, I've taken another pass through all these patches
and narrowed it down to:

a) 12 call sites that I'd like to convert soon, but even those
   really look cleaner with a full conversion to a wrapper call
   similar to (identical to?) vaddr_pin_[user]_pages(), probably
   just the FOLL_PIN only variant (not FOLL_LONGTERM). That
   wrapper call is not ready yet, though.

b) Some more call sites that require both FOLL_PIN and FOLL_LONGTERM.
   Definitely will wait to use the wrapper calls for these, because
   they may also require hooking up to a struct file*.

c) A few more that were already applied, which is fine, because they
   show where to convert, and simplify a few sites anyway. But they'll
   need follow-on changes to, one way or another, set FOLL_PIN.

d) And of course a few sites whose patches get dropped, as mentioned
   above.

[1] https://lore.kernel.org/r/20190821040727.19650-3-jhubb...@nvidia.com

thanks,
--
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 07/41] drm/etnaviv: convert release_pages() to put_user_pages()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Joerg Roedel 
Cc: Paolo Bonzini 
Cc: Radim Krčmář 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: H. Peter Anvin 
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c 
b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index e8778ebb72e6..a0144a5ee325 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -686,7 +686,7 @@ static int etnaviv_gem_userptr_get_pages(struct 
etnaviv_gem_object *etnaviv_obj)
ret = get_user_pages_fast(ptr, num_pages,
  !userptr->ro ? FOLL_WRITE : 0, pages);
if (ret < 0) {
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
return ret;
}
@@ -710,7 +710,7 @@ static void etnaviv_gem_userptr_release(struct 
etnaviv_gem_object *etnaviv_obj)
if (etnaviv_obj->pages) {
int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
 
-   release_pages(etnaviv_obj->pages, npages);
+   put_user_pages(etnaviv_obj->pages, npages);
kvfree(etnaviv_obj->pages);
}
 }
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 41/41] mm/ksm: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Dan Williams 
Cc: Daniel Black 
Cc: Jan Kara 
Cc: Jérôme Glisse 
Cc: Matthew Wilcox 
Cc: Mike Kravetz 
Signed-off-by: John Hubbard 
---
 mm/ksm.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 3dc4346411e4..e10ee4d5fdd8 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -456,7 +456,7 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
  * We use break_ksm to break COW on a ksm page: it's a stripped down
  *
 * if (get_user_pages(addr, 1, 1, 1, &page, NULL) == 1)
- * put_page(page);
+ * put_user_page(page);
  *
  * but taking great care only to touch a ksm page, in a VM_MERGEABLE vma,
  * in case the application has unmapped and remapped mm,addr meanwhile.
@@ -483,7 +483,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned 
long addr)
FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE);
else
ret = VM_FAULT_WRITE;
-   put_page(page);
+   put_user_page(page);
} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
/*
 * We must loop because handle_mm_fault() may back out if there's
@@ -568,7 +568,7 @@ static struct page *get_mergeable_page(struct rmap_item 
*rmap_item)
flush_anon_page(vma, page, addr);
flush_dcache_page(page);
} else {
-   put_page(page);
+   put_user_page(page);
 out:
page = NULL;
}
@@ -1974,10 +1974,10 @@ struct rmap_item *unstable_tree_search_insert(struct 
rmap_item *rmap_item,
 
parent = *new;
if (ret < 0) {
-   put_page(tree_page);
+   put_user_page(tree_page);
new = >rb_left;
} else if (ret > 0) {
-   put_page(tree_page);
+   put_user_page(tree_page);
new = >rb_right;
} else if (!ksm_merge_across_nodes &&
   page_to_nid(tree_page) != nid) {
@@ -1986,7 +1986,7 @@ struct rmap_item *unstable_tree_search_insert(struct 
rmap_item *rmap_item,
 * it will be flushed out and put in the right unstable
 * tree next time: only merge with it when across_nodes.
 */
-   put_page(tree_page);
+   put_user_page(tree_page);
return NULL;
} else {
*tree_pagep = tree_page;
@@ -2328,7 +2328,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct 
page **page)
ksm_scan.rmap_list = &rmap_item->rmap_list;
ksm_scan.address += PAGE_SIZE;
} else
-   put_page(*page);
+   put_user_page(*page);
up_read(&mm->mmap_sem);
return rmap_item;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 07/34] drm/radeon: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Alex Deucher 
Cc: Christian König 
Cc: David (ChunMing) Zhou 
Cc: David Airlie 
Cc: amd-...@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index fb3696bc616d..4c9943fa10df 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -540,7 +540,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_tt *ttm)
kfree(ttm->sg);
 
 release_pages:
-   release_pages(ttm->pages, pinned);
+   put_user_pages(ttm->pages, pinned);
return r;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v3 00/39] put_user_pages(): miscellaneous call sites

2019-08-09 Thread John Hubbard
On 8/6/19 6:32 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard 
> ...
> 
> John Hubbard (38):
>   mm/gup: add make_dirty arg to put_user_pages_dirty_lock()
...
>  54 files changed, 191 insertions(+), 323 deletions(-)
> 
ahem, yes, apparently this is what happens if I add a few patches while editing
the cover letter... :) 

The subject line should read "00/41", and the list of files affected here is
therefore under-reported in this cover letter. However, the patch series
itself is intact and ready for submission.

thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 26/41] futex: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Peter Zijlstra 
Cc: Darren Hart 
Signed-off-by: John Hubbard 
---
 kernel/futex.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 6d50728ef2e7..4b4cae58ec57 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -623,7 +623,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
lock_page(page);
shmem_swizzled = PageSwapCache(page) || page->mapping;
unlock_page(page);
-   put_page(page);
+   put_user_page(page);
 
if (shmem_swizzled)
goto again;
@@ -675,7 +675,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
 
if (READ_ONCE(page->mapping) != mapping) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -683,7 +683,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
inode = READ_ONCE(mapping->host);
if (!inode) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -702,7 +702,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
 */
if (!atomic_inc_not_zero(&inode->i_count)) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -723,7 +723,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
}
 
 out:
-   put_page(page);
+   put_user_page(page);
return err;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 03/41] net/xdp: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Björn Töpel 
Cc: Magnus Karlsson 
Cc: David S. Miller 
Cc: net...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 net/xdp/xdp_umem.c | 9 +
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 83de74ca729a..17c4b3d3dc34 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -166,14 +166,7 @@ void xdp_umem_clear_dev(struct xdp_umem *umem)
 
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
-   unsigned int i;
-
-   for (i = 0; i < umem->npgs; i++) {
-   struct page *page = umem->pgs[i];
-
-   set_page_dirty_lock(page);
-   put_page(page);
-   }
+   put_user_pages_dirty_lock(umem->pgs, umem->npgs, true);
 
kfree(umem->pgs);
umem->pgs = NULL;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 00/34] put_user_pages(): miscellaneous call sites

2019-08-09 Thread John Hubbard
On 8/8/19 9:25 AM, Weiny, Ira wrote:
>>
>> On 8/7/19 7:36 PM, Ira Weiny wrote:
>>> On Wed, Aug 07, 2019 at 10:46:49AM +0200, Michal Hocko wrote:
>>>> On Wed 07-08-19 10:37:26, Jan Kara wrote:
>>>>> On Fri 02-08-19 12:14:09, John Hubbard wrote:
>>>>>> On 8/2/19 7:52 AM, Jan Kara wrote:
>>>>>>> On Fri 02-08-19 07:24:43, Matthew Wilcox wrote:
>>>>>>>> On Fri, Aug 02, 2019 at 02:41:46PM +0200, Jan Kara wrote:
>>>>>>>>> On Fri 02-08-19 11:12:44, Michal Hocko wrote:
>>>>>>>>>> On Thu 01-08-19 19:19:31, john.hubb...@gmail.com wrote:
>>   [...]
> Yep I can do this.  I did not realize that Andrew had accepted any of this 
> work.  I'll check out his tree.  But I don't think he is going to accept this 
> series through his tree.  So what is the ETA on that landing in Linus' tree?
> 

I'd expect it to go into 5.4, according to my understanding of how
the release cycles are arranged.


> To that point I'm still not sure who would take all this as I am now touching 
> mm, procfs, rdma, ext4, and xfs.
> 
> I just thought I would chime in with my progress because I'm to a point where 
> things are working and so I can submit the code but I'm not sure what I 
> can/should depend on landing...  Also, now that 0day has run overnight it has 
> found issues with this rebase so I need to clean those up...  Perhaps I will 
> base on Andrew's tree prior to doing that...

I'm certainly not the right person to answer, but in spite of that, I'd think
Andrew's tree is a reasonable place for it. Sort of.

thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 17/41] staging/vc04_services: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Greg Kroah-Hartman 

Cc: Eric Anholt 
Cc: Stefan Wahren 
Cc: Greg Kroah-Hartman 
Cc: Mihaela Muraru 
Cc: Suniel Mahesh 
Cc: Al Viro 
Cc: Sidong Yang 
Cc: Kishore KP 
Cc: linux-rpi-ker...@lists.infradead.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: de...@driverdev.osuosl.org
Signed-off-by: John Hubbard 
---
 .../vc04_services/interface/vchiq_arm/vchiq_2835_arm.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c 
b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
index 61c69f353cdb..ec92b4c50e95 100644
--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
+++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c
@@ -336,10 +336,7 @@ cleanup_pagelistinfo(struct vchiq_pagelist_info 
*pagelistinfo)
}
 
if (pagelistinfo->pages_need_release) {
-   unsigned int i;
-
-   for (i = 0; i < pagelistinfo->num_pages; i++)
-   put_page(pagelistinfo->pages[i]);
+   put_user_pages(pagelistinfo->pages, pagelistinfo->num_pages);
}
 
dma_free_coherent(g_dev, pagelistinfo->pagelist_buffer_size,
@@ -454,10 +451,7 @@ create_pagelist(char __user *buf, size_t count, unsigned 
short type)
   __func__, actual_pages, num_pages);
 
/* This is probably due to the process being killed */
-   while (actual_pages > 0) {
-   actual_pages--;
-   put_page(pages[actual_pages]);
-   }
+   put_user_pages(pages, actual_pages);
cleanup_pagelistinfo(pagelistinfo);
return NULL;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 18/34] fbdev/pvr2fb: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Bartlomiej Zolnierkiewicz 
Cc: Kees Cook 
Cc: Al Viro 
Cc: Bhumika Goyal 
Cc: Arvind Yadav 
Cc: dri-de...@lists.freedesktop.org
Cc: linux-fb...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/video/fbdev/pvr2fb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/pvr2fb.c b/drivers/video/fbdev/pvr2fb.c
index 7ff4b6b84282..0e4f9aa6444d 100644
--- a/drivers/video/fbdev/pvr2fb.c
+++ b/drivers/video/fbdev/pvr2fb.c
@@ -700,8 +700,7 @@ static ssize_t pvr2fb_write(struct fb_info *info, const 
char *buf,
ret = count;
 
 out_unmap:
-   for (i = 0; i < nr_pages; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, nr_pages);
 
kfree(pages);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 17/34] vfio: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
put_pfn(): it now ultimately calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Alex Williamson 
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/vfio/vfio_iommu_type1.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 054391f30fa8..5a5461a14299 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -320,9 +320,9 @@ static int put_pfn(unsigned long pfn, int prot)
 {
if (!is_invalid_reserved_pfn(pfn)) {
struct page *page = pfn_to_page(pfn);
-   if (prot & IOMMU_WRITE)
-   SetPageDirty(page);
-   put_page(page);
+   bool dirty = prot & IOMMU_WRITE;
+
+   put_user_pages_dirty_lock(&page, 1, dirty);
return 1;
}
return 0;
@@ -356,7 +356,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned 
long vaddr,
 */
if (ret > 0 && vma_is_fsdax(vmas[0])) {
ret = -EOPNOTSUPP;
-   put_page(page[0]);
+   put_user_page(page[0]);
}
}
up_read(&mm->mmap_sem);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 12/41] genwqe: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages.
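
Stated as code, the invariant relied on here is simply (illustrative,
not from this driver):

/*
 * get_user_pages*() reports how many consecutive entries it filled,
 * and every one of pages[0..pinned-1] is a real page pointer -- so a
 * per-slot NULL check is dead code.
 */
pinned = get_user_pages_fast(addr, num_pages, FOLL_WRITE, pages);
if (pinned > 0)
	put_user_pages_dirty_lock(pages, pinned, dirty);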

Acked-by: Greg Kroah-Hartman 

Cc: Frank Haverkamp 
Cc: Guilherme G. Piccoli 
Cc: Arnd Bergmann 
Cc: Greg Kroah-Hartman 
Signed-off-by: John Hubbard 
---
 drivers/misc/genwqe/card_utils.c | 17 +++--
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index 2e1c4d2905e8..2a888f31d2c5 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -517,24 +517,13 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct 
genwqe_sgl *sgl)
 /**
  * genwqe_free_user_pages() - Give pinned pages back
  *
- * Documentation of get_user_pages is in mm/gup.c:
- *
- * If the page is written to, set_page_dirty (or set_page_dirty_lock,
- * as appropriate) must be called after the page is finished with, and
- * before put_page is called.
+ * The pages may have been written to, so we call put_user_pages_dirty_lock(),
+ * rather than put_user_pages().
  */
 static int genwqe_free_user_pages(struct page **page_list,
unsigned int nr_pages, int dirty)
 {
-   unsigned int i;
-
-   for (i = 0; i < nr_pages; i++) {
-   if (page_list[i] != NULL) {
-   if (dirty)
-   set_page_dirty_lock(page_list[i]);
-   put_page(page_list[i]);
-   }
-   }
+   put_user_pages_dirty_lock(page_list, nr_pages, dirty);
return 0;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 06/34] drm/i915: convert put_page() to put_user_page*()

2019-08-09 Thread John Hubbard
On 8/2/19 11:48 AM, John Hubbard wrote:
> On 8/2/19 2:19 AM, Joonas Lahtinen wrote:
>> Quoting john.hubb...@gmail.com (2019-08-02 05:19:37)
>>> From: John Hubbard 
...
> In order to deal with the merge problem, I'll drop this patch from my series,
> and I'd recommend that the drm-intel-next take the following approach:

Actually, I just pulled the latest linux.git, and there are a few changes:

> 
> 1) For now, s/put_page/put_user_page/ in i915_gem_userptr_put_pages(),
> and fix up the set_page_dirty() --> set_page_dirty_lock() issue, like this
> (based against linux.git):
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
> b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> index 528b61678334..94721cc0093b 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
> @@ -664,10 +664,10 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object 
> *obj,
> 
>     for_each_sgt_page(page, sgt_iter, pages) {
>     if (obj->mm.dirty)
> -   set_page_dirty(page);
> +   set_page_dirty_lock(page);

I see you've already applied this fix to your tree, in linux.git already.

> 
>     mark_page_accessed(page);
> -   put_page(page);
> +   put_user_page(page);

But this conversion still needs doing. So I'll repost a patch that only does 
this (plus the other call sites). 

That can go in via either your tree, or Andrew's -mm tree, without generating
any conflicts.

thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 31/34] fs/nfs: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Calum Mackay 
Cc: Trond Myklebust 
Cc: Anna Schumaker 
Cc: linux-...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 fs/nfs/direct.c | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 0cb442406168..c0c1b9f2c069 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -276,13 +276,6 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter 
*iter)
return nfs_file_direct_write(iocb, iter);
 }
 
-static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
-{
-   unsigned int i;
-   for (i = 0; i < npages; i++)
-   put_page(pages[i]);
-}
-
 void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
  struct nfs_direct_req *dreq)
 {
@@ -512,7 +505,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct 
nfs_direct_req *dreq,
pos += req_len;
dreq->bytes_left -= req_len;
}
-   nfs_direct_release_pages(pagevec, npages);
+   put_user_pages(pagevec, npages);
kvfree(pagevec);
if (result < 0)
break;
@@ -935,7 +928,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct 
nfs_direct_req *dreq,
pos += req_len;
dreq->bytes_left -= req_len;
}
-   nfs_direct_release_pages(pagevec, npages);
+   put_user_pages(pagevec, npages);
kvfree(pagevec);
if (result < 0)
break;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 17/34] vfio: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
put_pfn(): it now ultimately calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Alex Williamson 
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/vfio/vfio_iommu_type1.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 054391f30fa8..5a5461a14299 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -320,9 +320,9 @@ static int put_pfn(unsigned long pfn, int prot)
 {
if (!is_invalid_reserved_pfn(pfn)) {
struct page *page = pfn_to_page(pfn);
-   if (prot & IOMMU_WRITE)
-   SetPageDirty(page);
-   put_page(page);
+   bool dirty = prot & IOMMU_WRITE;
+
+   put_user_pages_dirty_lock(&page, 1, dirty);
return 1;
}
return 0;
@@ -356,7 +356,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned 
long vaddr,
 */
if (ret > 0 && vma_is_fsdax(vmas[0])) {
ret = -EOPNOTSUPP;
-   put_page(page[0]);
+   put_user_page(page[0]);
}
}
up_read(&mm->mmap_sem);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 05/34] drm/etnaviv: convert release_pages() to put_user_pages()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Joerg Roedel 
Cc: Paolo Bonzini 
Cc: Radim Krčmář 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: H. Peter Anvin 
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/etnaviv/etnaviv_gem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c 
b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index e8778ebb72e6..a0144a5ee325 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -686,7 +686,7 @@ static int etnaviv_gem_userptr_get_pages(struct 
etnaviv_gem_object *etnaviv_obj)
ret = get_user_pages_fast(ptr, num_pages,
  !userptr->ro ? FOLL_WRITE : 0, pages);
if (ret < 0) {
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
return ret;
}
@@ -710,7 +710,7 @@ static void etnaviv_gem_userptr_release(struct 
etnaviv_gem_object *etnaviv_obj)
if (etnaviv_obj->pages) {
int npages = etnaviv_obj->base.size >> PAGE_SHIFT;
 
-   release_pages(etnaviv_obj->pages, npages);
+   put_user_pages(etnaviv_obj->pages, npages);
kvfree(etnaviv_obj->pages);
}
 }
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 21/41] fsl_hypervisor: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages.

Cc: Al Viro 
Cc: Kees Cook 
Cc: Rob Herring 
Signed-off-by: John Hubbard 
---
 drivers/virt/fsl_hypervisor.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/virt/fsl_hypervisor.c b/drivers/virt/fsl_hypervisor.c
index 93d5bebf9572..a8f78d572c45 100644
--- a/drivers/virt/fsl_hypervisor.c
+++ b/drivers/virt/fsl_hypervisor.c
@@ -292,11 +292,8 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user 
*p)
virt_to_phys(sg_list), num_pages);
 
 exit:
-   if (pages) {
-   for (i = 0; i < num_pages; i++)
-   if (pages[i])
-   put_page(pages[i]);
-   }
+   if (pages)
+   put_user_pages(pages, num_pages);
 
kfree(sg_list_unaligned);
kfree(pages);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 27/41] mm/frame_vector.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Dan Williams 
Cc: Jan Kara 
Cc: Mel Gorman 
Cc: Vlastimil Babka 
Signed-off-by: John Hubbard 
---
 mm/frame_vector.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/mm/frame_vector.c b/mm/frame_vector.c
index c64dca6e27c2..f590badac776 100644
--- a/mm/frame_vector.c
+++ b/mm/frame_vector.c
@@ -120,7 +120,6 @@ EXPORT_SYMBOL(get_vaddr_frames);
  */
 void put_vaddr_frames(struct frame_vector *vec)
 {
-   int i;
struct page **pages;
 
if (!vec->got_ref)
@@ -133,8 +132,7 @@ void put_vaddr_frames(struct frame_vector *vec)
 */
if (WARN_ON(IS_ERR(pages)))
goto out;
-   for (i = 0; i < vec->nr_frames; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, vec->nr_frames);
vec->got_ref = false;
 out:
vec->nr_frames = 0;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 20/41] fbdev/pvr2fb: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Bartlomiej Zolnierkiewicz 
Cc: Kees Cook 
Cc: Al Viro 
Cc: Bhumika Goyal 
Cc: Arvind Yadav 
Cc: dri-de...@lists.freedesktop.org
Cc: linux-fb...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/video/fbdev/pvr2fb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/pvr2fb.c b/drivers/video/fbdev/pvr2fb.c
index 7ff4b6b84282..0e4f9aa6444d 100644
--- a/drivers/video/fbdev/pvr2fb.c
+++ b/drivers/video/fbdev/pvr2fb.c
@@ -700,8 +700,7 @@ static ssize_t pvr2fb_write(struct fb_info *info, const 
char *buf,
ret = count;
 
 out_unmap:
-   for (i = 0; i < nr_pages; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, nr_pages);
 
kfree(pages);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 32/41] crypt: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Herbert Xu 
Cc: David S. Miller 
Cc: linux-cry...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 crypto/af_alg.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/crypto/af_alg.c b/crypto/af_alg.c
index 879cf23f7489..edd358ea64da 100644
--- a/crypto/af_alg.c
+++ b/crypto/af_alg.c
@@ -428,10 +428,7 @@ static void af_alg_link_sg(struct af_alg_sgl *sgl_prev,
 
 void af_alg_free_sg(struct af_alg_sgl *sgl)
 {
-   int i;
-
-   for (i = 0; i < sgl->npages; i++)
-   put_page(sgl->pages[i]);
+   put_user_pages(sgl->pages, sgl->npages);
 }
 EXPORT_SYMBOL_GPL(af_alg_free_sg);
 
@@ -668,7 +665,7 @@ static void af_alg_free_areq_sgls(struct af_alg_async_req 
*areq)
for_each_sg(tsgl, sg, areq->tsgl_entries, i) {
if (!sg_page(sg))
continue;
-   put_page(sg_page(sg));
+   put_user_page(sg_page(sg));
}
 
sock_kfree_s(sk, tsgl, areq->tsgl_entries * sizeof(*tsgl));
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 05/41] net/ceph: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Jeff Layton 

Cc: Ilya Dryomov 
Cc: Sage Weil 
Cc: David S. Miller 
Cc: ceph-de...@vger.kernel.org
Cc: net...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 net/ceph/pagevec.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/net/ceph/pagevec.c b/net/ceph/pagevec.c
index 64305e7056a1..c88fff2ab9bd 100644
--- a/net/ceph/pagevec.c
+++ b/net/ceph/pagevec.c
@@ -12,13 +12,7 @@
 
 void ceph_put_page_vector(struct page **pages, int num_pages, bool dirty)
 {
-   int i;
-
-   for (i = 0; i < num_pages; i++) {
-   if (dirty)
-   set_page_dirty_lock(pages[i]);
-   put_page(pages[i]);
-   }
+   put_user_pages_dirty_lock(pages, num_pages, dirty);
kvfree(pages);
 }
 EXPORT_SYMBOL(ceph_put_page_vector);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 06/34] drm/i915: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
i915_gem_userptr_put_pages(): it now calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
Cc: David Airlie 
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 528b61678334..c18008d3cc2a 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -527,7 +527,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct 
*_work)
}
mutex_unlock(&obj->mm.lock);
 
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
i915_gem_object_put(obj);
@@ -640,7 +640,7 @@ static int i915_gem_userptr_get_pages(struct 
drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
 
if (IS_ERR(pages))
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
return PTR_ERR_OR_ZERO(pages);
@@ -663,11 +663,8 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
i915_gem_gtt_finish_pages(obj, pages);
 
for_each_sgt_page(page, sgt_iter, pages) {
-   if (obj->mm.dirty)
-   set_page_dirty(page);
-
mark_page_accessed(page);
-   put_page(page);
+   put_user_pages_dirty_lock(&page, 1, obj->mm.dirty);
}
obj->mm.dirty = false;
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 06/34] drm/i915: convert put_page() to put_user_page*()

2019-08-09 Thread John Hubbard

On 8/2/19 2:19 AM, Joonas Lahtinen wrote:

Quoting john.hubb...@gmail.com (2019-08-02 05:19:37)

From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
i915_gem_userptr_put_pages(): it now calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.


We've already fixed this in drm-tip where the current code uses
set_page_dirty_lock().

This would conflict with our tree. Rodrigo is handling
drm-intel-next for 5.4, so you guys want to coordinate how
to merge.



Hi Joonas, Rodrigo,

First of all, I apologize for the API breakage: put_user_pages_dirty_lock()
has an additional "dirty" parameter.

In order to deal with the merge problem, I'll drop this patch from my series,
and I'd recommend that the drm-intel-next take the following approach:

1) For now, s/put_page/put_user_page/ in i915_gem_userptr_put_pages(),
and fix up the set_page_dirty() --> set_page_dirty_lock() issue, like this
(based against linux.git):

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c

index 528b61678334..94721cc0093b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -664,10 +664,10 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object 
*obj,

for_each_sgt_page(page, sgt_iter, pages) {
if (obj->mm.dirty)
-   set_page_dirty(page);
+   set_page_dirty_lock(page);

mark_page_accessed(page);
-   put_page(page);
+   put_user_page(page);
}
obj->mm.dirty = false;


That will leave you with your original set_page_dirty_lock() calls
and everything works properly.

2) Next cycle, move to the new put_user_pages_dirty_lock().

thanks,
--
John Hubbard
NVIDIA



Regards, Joonas


___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 09/34] media/v4l2-core/mm: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Mauro Carvalho Chehab 
Cc: Kees Cook 
Cc: Hans Verkuil 
Cc: Sakari Ailus 
Cc: Jan Kara 
Cc: Robin Murphy 
Cc: Souptick Joarder 
Cc: Dan Williams 
Cc: linux-me...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/media/v4l2-core/videobuf-dma-sg.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c 
b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 66a6c6c236a7..d6eeb437ec19 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,7 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
BUG_ON(dma->sglen);
 
if (dma->pages) {
-   for (i = 0; i < dma->nr_pages; i++)
-   put_page(dma->pages[i]);
+   put_user_pages(dma->pages, dma->nr_pages);
kfree(dma->pages);
dma->pages = NULL;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 08/34] media/ivtv: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Andy Walls 
Cc: Mauro Carvalho Chehab 
Cc: linux-me...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/media/pci/ivtv/ivtv-udma.c | 14 --
 drivers/media/pci/ivtv/ivtv-yuv.c  | 11 +++
 2 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/drivers/media/pci/ivtv/ivtv-udma.c 
b/drivers/media/pci/ivtv/ivtv-udma.c
index 5f8883031c9c..7c7f33c2412b 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 {
struct ivtv_dma_page_info user_dma;
struct ivtv_user_dma *dma = &itv->udma;
-   int i, err;
+   int err;
 
IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned 
int)ivtv_dest_addr);
 
@@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
IVTV_DEBUG_WARN("failed to map user pages, returned %d instead 
of %d\n",
   err, user_dma.page_count);
if (err >= 0) {
-   for (i = 0; i < err; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, err);
return -EINVAL;
}
return err;
@@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 
/* Fill SG List with new values */
if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) {
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
return -ENOMEM;
}
@@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 void ivtv_udma_unmap(struct ivtv *itv)
 {
struct ivtv_user_dma *dma = &itv->udma;
-   int i;
 
IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n");
 
@@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv)
ivtv_udma_sync_for_cpu(itv);
 
/* Release User Pages */
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
 }
 
diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c 
b/drivers/media/pci/ivtv/ivtv-yuv.c
index cd2fe2d444c0..2c61a11d391d 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -30,7 +30,6 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
struct yuv_playback_info *yi = &itv->yuv_info;
u8 frame = yi->draw_frame;
struct yuv_frame_info *f = &yi->new_frame_info[frame];
-   int i;
int y_pages, uv_pages;
unsigned long y_buffer_offset, uv_buffer_offset;
int y_decode_height, uv_decode_height, y_size;
@@ -81,8 +80,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 uv_pages, uv_dma.page_count);
 
if (uv_pages >= 0) {
-   for (i = 0; i < uv_pages; i++)
-   put_page(dma->map[y_pages + i]);
+   put_user_pages(&dma->map[y_pages], uv_pages);
rc = -EFAULT;
} else {
rc = uv_pages;
@@ -93,8 +91,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 y_pages, y_dma.page_count);
}
if (y_pages >= 0) {
-   for (i = 0; i < y_pages; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, y_pages);
/*
 * Inherit the -EFAULT from rc's
 * initialization, but allow it to be
@@ -112,9 +109,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
/* Fill & map SG List */
if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, &y_dma, 0)) < 0) {
IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem userspace buffers\n");
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;

[Intel-gfx] [PATCH v3 29/41] mm/memory.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Aneesh Kumar K.V 
Cc: Huang Ying 
Cc: Jérôme Glisse 
Cc: Matthew Wilcox 
Cc: Michal Hocko 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Souptick Joarder 
Cc: Will Deacon 
Signed-off-by: John Hubbard 
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..8870968496ea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4337,7 +4337,7 @@ int __access_remote_vm(struct task_struct *tsk, struct 
mm_struct *mm,
buf, maddr + offset, bytes);
}
kunmap(page);
-   put_page(page);
+   put_user_page(page);
}
len -= bytes;
buf += bytes;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v3 38/41] powerpc: convert put_page() to put_user_page*()

2019-08-09 Thread John Hubbard
On 8/7/19 10:42 PM, Michael Ellerman wrote:
> Hi John,
> 
> john.hubb...@gmail.com writes:
>> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c 
>> b/arch/powerpc/mm/book3s64/iommu_api.c
>> index b056cae3388b..e126193ba295 100644
>> --- a/arch/powerpc/mm/book3s64/iommu_api.c
>> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
>> @@ -203,6 +202,7 @@ static void mm_iommu_unpin(struct 
>> mm_iommu_table_group_mem_t *mem)
>>  {
>>  long i;
>>  struct page *page = NULL;
>> +bool dirty = false;
> 
> I don't think you need that initialisation do you?
> 

Nope, it can go. Fixed locally, thanks.

Did you get a chance to look at enough of the other bits to feel comfortable 
with the patch, overall?

thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 10/34] genwqe: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages.

Acked-by: Greg Kroah-Hartman 

Cc: Frank Haverkamp 
Cc: Guilherme G. Piccoli 
Cc: Arnd Bergmann 
Cc: Greg Kroah-Hartman 
Signed-off-by: John Hubbard 
---
 drivers/misc/genwqe/card_utils.c | 17 +++--
 1 file changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index 2e1c4d2905e8..2a888f31d2c5 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -517,24 +517,13 @@ int genwqe_free_sync_sgl(struct genwqe_dev *cd, struct 
genwqe_sgl *sgl)
 /**
  * genwqe_free_user_pages() - Give pinned pages back
  *
- * Documentation of get_user_pages is in mm/gup.c:
- *
- * If the page is written to, set_page_dirty (or set_page_dirty_lock,
- * as appropriate) must be called after the page is finished with, and
- * before put_page is called.
+ * The pages may have been written to, so we call put_user_pages_dirty_lock(),
+ * rather than put_user_pages().
  */
 static int genwqe_free_user_pages(struct page **page_list,
unsigned int nr_pages, int dirty)
 {
-   unsigned int i;
-
-   for (i = 0; i < nr_pages; i++) {
-   if (page_list[i] != NULL) {
-   if (dirty)
-   set_page_dirty_lock(page_list[i]);
-   put_page(page_list[i]);
-   }
-   }
+   put_user_pages_dirty_lock(page_list, nr_pages, dirty);
return 0;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 09/41] drm/radeon: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Alex Deucher 
Cc: Christian König 
Cc: David (ChunMing) Zhou 
Cc: David Airlie 
Cc: amd-...@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index fb3696bc616d..4c9943fa10df 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -540,7 +540,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_tt *ttm)
kfree(ttm->sg);
 
 release_pages:
-   release_pages(ttm->pages, pinned);
+   put_user_pages(ttm->pages, pinned);
return r;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 12/34] vmci: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
qp_release_pages(): it now ultimately calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de
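
(For context: set_page_dirty_lock() is, roughly, the page-lock-taking
form of set_page_dirty(). A simplified sketch of its behavior, close to
the mm/page-writeback.c implementation of this era:)

	int set_page_dirty_lock(struct page *page)
	{
		int ret;

		lock_page(page);	/* safe variant: may sleep, takes the page lock */
		ret = set_page_dirty(page);
		unlock_page(page);
		return ret;
	}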

Cc: Arnd Bergmann 
Cc: Al Viro 
Cc: Gustavo A. R. Silva 
Cc: Kees Cook 
Signed-off-by: John Hubbard 
---
 drivers/misc/vmw_vmci/vmci_context.c|  2 +-
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 11 ++-
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/misc/vmw_vmci/vmci_context.c 
b/drivers/misc/vmw_vmci/vmci_context.c
index 16695366ec92..9daa52ee63b7 100644
--- a/drivers/misc/vmw_vmci/vmci_context.c
+++ b/drivers/misc/vmw_vmci/vmci_context.c
@@ -587,7 +587,7 @@ void vmci_ctx_unset_notify(struct vmci_ctx *context)
 
if (notify_page) {
kunmap(notify_page);
-   put_page(notify_page);
+   put_user_page(notify_page);
}
 }
 
diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c 
b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index 8531ae781195..e5434551d0ef 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -626,15 +626,8 @@ static void qp_release_queue_mutex(struct vmci_queue 
*queue)
 static void qp_release_pages(struct page **pages,
 u64 num_pages, bool dirty)
 {
-   int i;
-
-   for (i = 0; i < num_pages; i++) {
-   if (dirty)
-   set_page_dirty(pages[i]);
-
-   put_page(pages[i]);
-   pages[i] = NULL;
-   }
+   put_user_pages_dirty_lock(pages, num_pages, dirty);
+   memset(pages, 0, num_pages * sizeof(struct page *));
 }
 
 /*
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 08/41] drm/i915: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This is a merge-able version of the fix, because it restricts
itself to put_user_page() and put_user_pages(), both of which
have not changed their APIs. Later, i915_gem_userptr_put_pages()
can be simplified to use put_user_pages_dirty_lock().

Acked-by: Rodrigo Vivi 

Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: David Airlie 
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 2caa594322bc..76dda2923cf1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -527,7 +527,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct 
*_work)
}
mutex_unlock(&obj->mm.lock);
 
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
i915_gem_object_put(obj);
@@ -640,7 +640,7 @@ static int i915_gem_userptr_get_pages(struct 
drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
 
if (IS_ERR(pages))
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
return PTR_ERR_OR_ZERO(pages);
@@ -675,7 +675,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
set_page_dirty_lock(page);
 
mark_page_accessed(page);
-   put_page(page);
+   put_user_page(page);
}
obj->mm.dirty = false;
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 36/41] fs/binfmt_elf: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: Ira Weiny 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

get_dump_page() calls get_user_pages(), so put_user_page() must be used
to match.

Signed-off-by: Ira Weiny 
Signed-off-by: John Hubbard 
---
 fs/binfmt_elf.c   | 2 +-
 fs/binfmt_elf_fdpic.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index d4e11b2e04f6..92e4a5ca99d8 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -2377,7 +2377,7 @@ static int elf_core_dump(struct coredump_params *cprm)
void *kaddr = kmap(page);
stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
kunmap(page);
-   put_page(page);
+   put_user_page(page);
} else
stop = !dump_skip(cprm, PAGE_SIZE);
if (stop)
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index d86ebd0dcc3d..321724b3be22 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1511,7 +1511,7 @@ static bool elf_fdpic_dump_segments(struct 
coredump_params *cprm)
void *kaddr = kmap(page);
res = dump_emit(cprm, kaddr, PAGE_SIZE);
kunmap(page);
-   put_page(page);
+   put_user_page(page);
} else {
res = dump_skip(cprm, PAGE_SIZE);
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 19/41] vfio: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
put_pfn(): it now ultimately calls set_page_dirty_lock(),
instead of SetPageDirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Alex Williamson 
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/vfio/vfio_iommu_type1.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 054391f30fa8..5a5461a14299 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -320,9 +320,9 @@ static int put_pfn(unsigned long pfn, int prot)
 {
if (!is_invalid_reserved_pfn(pfn)) {
struct page *page = pfn_to_page(pfn);
-   if (prot & IOMMU_WRITE)
-   SetPageDirty(page);
-   put_page(page);
+   bool dirty = prot & IOMMU_WRITE;
+
+   put_user_pages_dirty_lock(&page, 1, dirty);
return 1;
}
return 0;
@@ -356,7 +356,7 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned 
long vaddr,
 */
if (ret > 0 && vma_is_fsdax(vmas[0])) {
ret = -EOPNOTSUPP;
-   put_page(page[0]);
+   put_user_page(page[0]);
}
}
up_read(&mm->mmap_sem);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 18/41] drivers/tee: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Jens Wiklander 
Signed-off-by: John Hubbard 
---
 drivers/tee/tee_shm.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index 2da026fd12c9..c967d0420b67 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -31,16 +31,13 @@ static void tee_shm_release(struct tee_shm *shm)
 
poolm->ops->free(poolm, shm);
} else if (shm->flags & TEE_SHM_REGISTER) {
-   size_t n;
int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
 
if (rc)
dev_err(teedev->dev.parent,
"unregister shm %p failed: %d", shm, rc);
 
-   for (n = 0; n < shm->num_pages; n++)
-   put_page(shm->pages[n]);
-
+   put_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
 
@@ -313,16 +310,13 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, 
unsigned long addr,
return shm;
 err:
if (shm) {
-   size_t n;
-
if (shm->id >= 0) {
mutex_lock(&teedev->mutex);
idr_remove(&teedev->idr, shm->id);
mutex_unlock(&teedev->mutex);
}
if (shm->pages) {
-   for (n = 0; n < shm->num_pages; n++)
-   put_page(shm->pages[n]);
+   put_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 33/41] fs/nfs: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Reviewed-by: Calum Mackay 

Cc: Trond Myklebust 
Cc: Anna Schumaker 
Cc: linux-...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 fs/nfs/direct.c | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 0cb442406168..c0c1b9f2c069 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -276,13 +276,6 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter 
*iter)
return nfs_file_direct_write(iocb, iter);
 }
 
-static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
-{
-   unsigned int i;
-   for (i = 0; i < npages; i++)
-   put_page(pages[i]);
-}
-
 void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
  struct nfs_direct_req *dreq)
 {
@@ -512,7 +505,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct 
nfs_direct_req *dreq,
pos += req_len;
dreq->bytes_left -= req_len;
}
-   nfs_direct_release_pages(pagevec, npages);
+   put_user_pages(pagevec, npages);
kvfree(pagevec);
if (result < 0)
break;
@@ -935,7 +928,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct 
nfs_direct_req *dreq,
pos += req_len;
dreq->bytes_left -= req_len;
}
-   nfs_direct_release_pages(pagevec, npages);
+   put_user_pages(pagevec, npages);
kvfree(pagevec);
if (result < 0)
break;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 04/34] x86/kvm: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Joerg Roedel 
Cc: Paolo Bonzini 
Cc: Radim Krčmář 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: H. Peter Anvin 
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 arch/x86/kvm/svm.c  | 4 ++--
 virt/kvm/kvm_main.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7eafc6907861..ff93c923ed36 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1827,7 +1827,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, 
unsigned long uaddr,
 
 err:
if (npinned > 0)
-   release_pages(pages, npinned);
+   put_user_pages(pages, npinned);
 
kvfree(pages);
return NULL;
@@ -1838,7 +1838,7 @@ static void sev_unpin_memory(struct kvm *kvm, struct page 
**pages,
 {
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-   release_pages(pages, npages);
+   put_user_pages(pages, npages);
kvfree(pages);
sev->pages_locked -= npages;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 887f3b0c2b60..4b6a596ea8e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1499,7 +1499,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool 
*async, bool write_fault,
 
if (__get_user_pages_fast(addr, 1, 1, &wpage) == 1) {
*writable = true;
-   put_page(page);
+   put_user_page(page);
page = wpage;
}
}
@@ -1831,7 +1831,7 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
-   put_page(pfn_to_page(pfn));
+   put_user_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 31/41] mm/process_vm_access.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Al Viro 
Cc: Andrea Arcangeli 
Cc: Christopher Yeoh 
Cc: Dave Hansen 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Jann Horn 
Cc: Lorenzo Stoakes 
Cc: Mathieu Desnoyers 
Cc: Mike Rapoport 
Cc: Rashika Kheria 
Signed-off-by: John Hubbard 
---
 mm/process_vm_access.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 357aa7bef6c0..4d29d54ec93f 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -96,7 +96,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
flags |= FOLL_WRITE;
 
while (!rc && nr_pages && iov_iter_count(iter)) {
-   int pages = min(nr_pages, max_pages_per_loop);
+   int pinned_pages = min(nr_pages, max_pages_per_loop);
int locked = 1;
size_t bytes;
 
@@ -106,14 +106,15 @@ static int process_vm_rw_single_vec(unsigned long addr,
 * current/current->mm
 */
down_read(&mm->mmap_sem);
-   pages = get_user_pages_remote(task, mm, pa, pages, flags,
- process_pages, NULL, &locked);
+   pinned_pages = get_user_pages_remote(task, mm, pa, pinned_pages,
+flags, process_pages, NULL,
+&locked);
if (locked)
up_read(&mm->mmap_sem);
-   if (pages <= 0)
+   if (pinned_pages <= 0)
return -EFAULT;
 
-   bytes = pages * PAGE_SIZE - start_offset;
+   bytes = pinned_pages * PAGE_SIZE - start_offset;
if (bytes > len)
bytes = len;
 
@@ -122,10 +123,9 @@ static int process_vm_rw_single_vec(unsigned long addr,
 vm_write);
len -= bytes;
start_offset = 0;
-   nr_pages -= pages;
-   pa += pages * PAGE_SIZE;
-   while (pages)
-   put_page(process_pages[--pages]);
+   nr_pages -= pinned_pages;
+   pa += pinned_pages * PAGE_SIZE;
+   put_user_pages(process_pages, pinned_pages);
}
 
return rc;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 01/34] mm/gup: add make_dirty arg to put_user_pages_dirty_lock()

2019-08-09 Thread john . hubbard
From: John Hubbard 

Provide a more capable variation of put_user_pages_dirty_lock(),
and delete put_user_pages_dirty(). This is based on the
following:

1. Lots of call sites become simpler if a bool is passed
into put_user_page*(), instead of making the call site
choose which put_user_page*() variant to call.

2. Christoph Hellwig's observation that set_page_dirty_lock()
is usually correct, and set_page_dirty() is usually a
bug, or at least questionable, within a put_user_page*()
calling chain.

This leads to the following API choices:

* put_user_pages_dirty_lock(page, npages, make_dirty)

* There is no put_user_pages_dirty(). You have to
  hand code that, in the rare case that it's
  required.
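
As an illustration (not taken from the diffs below), a typical call
site simplifies like this under the new signature:

	/* Before: each call site picks the variant itself. */
	if (dirty)
		put_user_pages_dirty_lock(pages, npages);
	else
		put_user_pages(pages, npages);

	/* After: the choice is just an argument. */
	put_user_pages_dirty_lock(pages, npages, dirty);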

Reviewed-by: Christoph Hellwig 
Cc: Matthew Wilcox 
Cc: Jan Kara 
Cc: Ira Weiny 
Cc: Jason Gunthorpe 
Signed-off-by: John Hubbard 
---
 drivers/infiniband/core/umem.c |   5 +-
 drivers/infiniband/hw/hfi1/user_pages.c|   5 +-
 drivers/infiniband/hw/qib/qib_user_pages.c |  13 +--
 drivers/infiniband/hw/usnic/usnic_uiom.c   |   5 +-
 drivers/infiniband/sw/siw/siw_mem.c|  19 +---
 include/linux/mm.h |   5 +-
 mm/gup.c   | 115 +
 7 files changed, 61 insertions(+), 106 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct 
ib_umem *umem, int d
 
for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
page = sg_page_iter_page(&sg_iter);
-   if (umem->writable && dirty)
-   put_user_pages_dirty_lock(&page, 1);
-   else
-   put_user_page(page);
+   put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
}

sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c 
b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned 
long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 size_t npages, bool dirty)
 {
-   if (dirty)
-   put_user_pages_dirty_lock(p, npages);
-   else
-   put_user_pages(p, npages);
+   put_user_pages_dirty_lock(p, npages, dirty);
 
if (mm) { /* during close after signal, mm can be NULL */
atomic64_sub(npages, &mm->pinned_vm);
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c 
b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..26c1fb8d45cc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -37,15 +37,6 @@
 
 #include "qib.h"
 
-static void __qib_release_user_pages(struct page **p, size_t num_pages,
-int dirty)
-{
-   if (dirty)
-   put_user_pages_dirty_lock(p, num_pages);
-   else
-   put_user_pages(p, num_pages);
-}
-
 /**
  * qib_map_page - a safety wrapper around pci_map_page()
  *
@@ -124,7 +115,7 @@ int qib_get_user_pages(unsigned long start_page, size_t 
num_pages,
 
return 0;
 bail_release:
-   __qib_release_user_pages(p, got, 0);
+   put_user_pages_dirty_lock(p, got, false);
 bail:
atomic64_sub(num_pages, &current->mm->pinned_vm);
return ret;
@@ -132,7 +123,7 @@ int qib_get_user_pages(unsigned long start_page, size_t 
num_pages,
 
 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
-   __qib_release_user_pages(p, num_pages, 1);
+   put_user_pages_dirty_lock(p, num_pages, true);
 
/* during close after signal, mm can be NULL */
if (current->mm)
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c 
b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head 
*chunk_list, int dirty)
for_each_sg(chunk->page_list, sg, chunk->nents, i) {
page = sg_page(sg);
pa = sg_phys(sg);
-   if (dirty)
-   put_user_pages_dirty_lock(&page, 1);
-   else
-   put_user_page(page);
+   put_user_pages_dirty_lock(&page, 1, dirty);
usnic_dbg("pa: %pa\n", &pa);
}
kfree(chunk);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c 
b/drivers/infiniband/sw/siw/siw_mem.

[Intel-gfx] [PATCH 29/34] mm/process_vm_access.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Al Viro 
Cc: Andrea Arcangeli 
Cc: Christopher Yeoh 
Cc: Dave Hansen 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Jann Horn 
Cc: Lorenzo Stoakes 
Cc: Mathieu Desnoyers 
Cc: Mike Rapoport 
Cc: Rashika Kheria 
Signed-off-by: John Hubbard 
---
 mm/process_vm_access.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 357aa7bef6c0..4d29d54ec93f 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -96,7 +96,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
flags |= FOLL_WRITE;
 
while (!rc && nr_pages && iov_iter_count(iter)) {
-   int pages = min(nr_pages, max_pages_per_loop);
+   int pinned_pages = min(nr_pages, max_pages_per_loop);
int locked = 1;
size_t bytes;
 
@@ -106,14 +106,15 @@ static int process_vm_rw_single_vec(unsigned long addr,
 * current/current->mm
 */
down_read(&mm->mmap_sem);
-   pages = get_user_pages_remote(task, mm, pa, pages, flags,
- process_pages, NULL, &locked);
+   pinned_pages = get_user_pages_remote(task, mm, pa, pinned_pages,
+flags, process_pages, NULL,
+&locked);
if (locked)
up_read(&mm->mmap_sem);
-   if (pages <= 0)
+   if (pinned_pages <= 0)
return -EFAULT;
 
-   bytes = pages * PAGE_SIZE - start_offset;
+   bytes = pinned_pages * PAGE_SIZE - start_offset;
if (bytes > len)
bytes = len;
 
@@ -122,10 +123,9 @@ static int process_vm_rw_single_vec(unsigned long addr,
 vm_write);
len -= bytes;
start_offset = 0;
-   nr_pages -= pages;
-   pa += pages * PAGE_SIZE;
-   while (pages)
-   put_page(process_pages[--pages]);
+   nr_pages -= pinned_pages;
+   pa += pinned_pages * PAGE_SIZE;
+   put_user_pages(process_pages, pinned_pages);
}
 
return rc;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 26/34] mm/gup_benchmark.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Dan Carpenter 
Cc: Greg Kroah-Hartman 
Cc: Keith Busch 
Cc: Kirill A. Shutemov 
Cc: Michael S. Tsirkin 
Cc: YueHaibing 
Signed-off-by: John Hubbard 
---
 mm/gup_benchmark.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
index 7dd602d7f8db..515ac8eeb6ee 100644
--- a/mm/gup_benchmark.c
+++ b/mm/gup_benchmark.c
@@ -79,7 +79,7 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
for (i = 0; i < nr_pages; i++) {
if (!pages[i])
break;
-   put_page(pages[i]);
+   put_user_page(pages[i]);
}
end_time = ktime_get();
gup->put_delta_usec = ktime_us_delta(end_time, start_time);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 27/34] mm/memory.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Aneesh Kumar K.V 
Cc: Huang Ying 
Cc: Jérôme Glisse 
Cc: Matthew Wilcox 
Cc: Michal Hocko 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Souptick Joarder 
Cc: Will Deacon 
Signed-off-by: John Hubbard 
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..8870968496ea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4337,7 +4337,7 @@ int __access_remote_vm(struct task_struct *tsk, struct 
mm_struct *mm,
buf, maddr + offset, bytes);
}
kunmap(page);
-   put_page(page);
+   put_user_page(page);
}
len -= bytes;
buf += bytes;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 00/34] put_user_pages(): miscellaneous call sites

2019-08-09 Thread John Hubbard
On 8/1/19 7:16 PM, john.hubb...@gmail.com wrote:
> From: John Hubbard 
> 
> Hi,
> 
> These are best characterized as miscellaneous conversions: many (not all)
> call sites that don't involve biovec or iov_iter, nor mm/. It also leaves
> out a few call sites that require some more work. These are mostly pretty
> simple ones.
> 
> It's probably best to send all of these via Andrew's -mm tree, assuming
> that there are no significant merge conflicts with ongoing work in other
> trees (which I doubt, given that these are small changes).
> 

In case anyone is wondering, this truncated series is due to a script failure:
git-send-email chokes when it hits email addresses whose names have a
comma in them, as happened here with patch 0003.  

Please disregard this set and reply to the other thread.

thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 15/41] rapidio: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Matt Porter 
Cc: Alexandre Bounine 
Cc: Al Viro 
Cc: Logan Gunthorpe 
Cc: Christophe JAILLET 
Cc: Ioan Nicu 
Cc: Kees Cook 
Cc: Tvrtko Ursulin 
Signed-off-by: John Hubbard 
---
 drivers/rapidio/devices/rio_mport_cdev.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c 
b/drivers/rapidio/devices/rio_mport_cdev.c
index 8155f59ece38..0e8ea0e5a89e 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -572,14 +572,12 @@ static void dma_req_free(struct kref *ref)
struct mport_dma_req *req = container_of(ref, struct mport_dma_req,
refcount);
struct mport_cdev_priv *priv = req->priv;
-   unsigned int i;
 
dma_unmap_sg(req->dmach->device->dev,
 req->sgt.sgl, req->sgt.nents, req->dir);
sg_free_table(&req->sgt);
if (req->page_list) {
-   for (i = 0; i < req->nr_pages; i++)
-   put_page(req->page_list[i]);
+   put_user_pages(req->page_list, req->nr_pages);
kfree(req->page_list);
}
 
@@ -815,7 +813,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
struct mport_dma_req *req;
struct mport_dev *md = priv->md;
struct dma_chan *chan;
-   int i, ret;
+   int ret;
int nents;
 
if (xfer->length == 0)
@@ -946,8 +944,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 
 err_pg:
if (!req->page_list) {
-   for (i = 0; i < nr_pages; i++)
-   put_page(page_list[i]);
+   put_user_pages(page_list, nr_pages);
kfree(page_list);
}
 err_req:
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 26/34] mm/gup_benchmark.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Reviewed-by: Keith Busch 

Cc: Dan Carpenter 
Cc: Greg Kroah-Hartman 
Cc: Keith Busch 
Cc: Kirill A. Shutemov 
Cc: Michael S. Tsirkin 
Cc: YueHaibing 
Signed-off-by: John Hubbard 
---
 mm/gup_benchmark.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
index 7dd602d7f8db..515ac8eeb6ee 100644
--- a/mm/gup_benchmark.c
+++ b/mm/gup_benchmark.c
@@ -79,7 +79,7 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
for (i = 0; i < nr_pages; i++) {
if (!pages[i])
break;
-   put_page(pages[i]);
+   put_user_page(pages[i]);
}
end_time = ktime_get();
gup->put_delta_usec = ktime_us_delta(end_time, start_time);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 27/34] mm/memory.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Aneesh Kumar K.V 
Cc: Huang Ying 
Cc: Jérôme Glisse 
Cc: Matthew Wilcox 
Cc: Michal Hocko 
Cc: Peter Zijlstra 
Cc: Rik van Riel 
Cc: Souptick Joarder 
Cc: Will Deacon 
Signed-off-by: John Hubbard 
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..8870968496ea 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4337,7 +4337,7 @@ int __access_remote_vm(struct task_struct *tsk, struct 
mm_struct *mm,
buf, maddr + offset, bytes);
}
kunmap(page);
-   put_page(page);
+   put_user_page(page);
}
len -= bytes;
buf += bytes;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 19/34] fsl_hypervisor: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages.
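
(To make the invariant concrete -- a sketch, not part of the patch:
get_user_pages*() pins a contiguous prefix of the request, so a return
value n >= 0 means pages[0..n-1] are all non-NULL and nothing past
index n-1 was written. A NULL check inside that range is dead code.)

	int n = get_user_pages_fast(addr, num_pages, FOLL_WRITE, pages);

	if (n > 0)
		put_user_pages(pages, n);	/* release exactly what was pinned */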

Cc: Al Viro 
Cc: Kees Cook 
Cc: Rob Herring 
Signed-off-by: John Hubbard 
---
 drivers/virt/fsl_hypervisor.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/virt/fsl_hypervisor.c b/drivers/virt/fsl_hypervisor.c
index 93d5bebf9572..a8f78d572c45 100644
--- a/drivers/virt/fsl_hypervisor.c
+++ b/drivers/virt/fsl_hypervisor.c
@@ -292,11 +292,8 @@ static long ioctl_memcpy(struct fsl_hv_ioctl_memcpy __user 
*p)
virt_to_phys(sg_list), num_pages);
 
 exit:
-   if (pages) {
-   for (i = 0; i < num_pages; i++)
-   if (pages[i])
-   put_page(pages[i]);
-   }
+   if (pages)
+   put_user_pages(pages, num_pages);
 
kfree(sg_list_unaligned);
kfree(pages);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH v2 01/34] mm/gup: add make_dirty arg to put_user_pages_dirty_lock()

2019-08-09 Thread John Hubbard
On 8/6/19 10:39 AM, Ira Weiny wrote:
> On Sun, Aug 04, 2019 at 03:48:42PM -0700, john.hubb...@gmail.com wrote:
>> From: John Hubbard 
...
>> -
>>  /**
>> - * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
>> - * @pages:  array of pages to be marked dirty and released.
>> + * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned 
>> pages
>> + * @pages:  array of pages to be maybe marked dirty, and definitely 
>> released.
> 
> Better would be.
> 
> @pages:  array of pages to be put

OK, I'll change to that wording.

> 
>>   * @npages: number of pages in the @pages array.
>> + * @make_dirty: whether to mark the pages dirty
>>   *
>>   * "gup-pinned page" refers to a page that has had one of the 
>> get_user_pages()
>>   * variants called on that page.
>>   *
>>   * For each page in the @pages array, make that page (or its head page, if a
>> - * compound page) dirty, if it was previously listed as clean. Then, release
>> - * the page using put_user_page().
>> + * compound page) dirty, if @make_dirty is true, and if the page was 
>> previously
>> + * listed as clean. In any case, releases all pages using put_user_page(),
>> + * possibly via put_user_pages(), for the non-dirty case.
> 
> I don't think users of this interface need this level of detail.  I think
> something like.
> 
>  * For each page in the @pages array, release the page.  If @make_dirty is
>  * true, mark the page dirty prior to release.

Yes, it is too wordy, I'll change to that.

> 
...
>> -void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
>> -{
>> -__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
>> +/*
>> + * TODO: this can be optimized for huge pages: if a series of pages is
>> + * physically contiguous and part of the same compound page, then a
>> + * single operation to the head page should suffice.
>> + */
> 
> I think this comment belongs to the for loop below...  or just something about
> how to make this and put_user_pages() more efficient.  It is odd that this is
> the same comment as in put_user_pages()...

Actually I think I'll just delete the comment entirely, it's just noise really.
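
For what it's worth, here is a rough sketch of the batching that TODO
was pointing at (hypothetical, not part of this series): group array
entries that share a compound head, do the dirty operation once per
head, but still drop one reference per pinned entry.

	unsigned int i = 0;

	while (i < npages) {
		struct page *head = compound_head(pages[i]);
		unsigned int ntails = 1;

		/* count consecutive entries sharing this head page */
		while (i + ntails < npages &&
		       compound_head(pages[i + ntails]) == head)
			ntails++;

		if (make_dirty && !PageDirty(head))
			set_page_dirty_lock(head);	/* once per compound page */

		/* each pinned entry still holds its own reference */
		put_user_pages(&pages[i], ntails);
		i += ntails;
	}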

> 
> The code is good.  So... Other than the comments.
> 
> Reviewed-by: Ira Weiny 


Thanks for the review!


thanks,
-- 
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 22/34] orangefs: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Mike Marshall 
Cc: Martin Brandenburg 
Cc: de...@lists.orangefs.org
Signed-off-by: John Hubbard 
---
 fs/orangefs/orangefs-bufmap.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/fs/orangefs/orangefs-bufmap.c b/fs/orangefs/orangefs-bufmap.c
index 2bb916d68576..f2f33a16d604 100644
--- a/fs/orangefs/orangefs-bufmap.c
+++ b/fs/orangefs/orangefs-bufmap.c
@@ -168,10 +168,7 @@ static DEFINE_SPINLOCK(orangefs_bufmap_lock);
 static void
 orangefs_bufmap_unmap(struct orangefs_bufmap *bufmap)
 {
-   int i;
-
-   for (i = 0; i < bufmap->page_count; i++)
-   put_page(bufmap->page_array[i]);
+   put_user_pages(bufmap->page_array, bufmap->page_count);
 }
 
 static void
@@ -280,7 +277,7 @@ orangefs_bufmap_map(struct orangefs_bufmap *bufmap,
 
for (i = 0; i < ret; i++) {
SetPageError(bufmap->page_array[i]);
-   put_page(bufmap->page_array[i]);
+   put_user_page(bufmap->page_array[i]);
}
return -ENOMEM;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 20/34] xen: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Boris Ostrovsky 
Cc: Juergen Gross 
Cc: xen-de...@lists.xenproject.org
Signed-off-by: John Hubbard 
---
 drivers/xen/gntdev.c  | 5 +
 drivers/xen/privcmd.c | 7 +--
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 4c339c7e66e5..2586b3df2bb6 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -864,10 +864,7 @@ static int gntdev_get_page(struct gntdev_copy_batch 
*batch, void __user *virt,
 
 static void gntdev_put_pages(struct gntdev_copy_batch *batch)
 {
-   unsigned int i;
-
-   for (i = 0; i < batch->nr_pages; i++)
-   put_page(batch->pages[i]);
+   put_user_pages(batch->pages, batch->nr_pages);
batch->nr_pages = 0;
 }
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 2f5ce7230a43..29e461dbee2d 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -611,15 +611,10 @@ static int lock_pages(
 
 static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
-   unsigned int i;
-
if (!pages)
return;
 
-   for (i = 0; i < nr_pages; i++) {
-   if (pages[i])
-   put_page(pages[i]);
-   }
+   put_user_pages(pages, nr_pages);
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 13/34] rapidio: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Matt Porter 
Cc: Alexandre Bounine 
Cc: Al Viro 
Cc: Logan Gunthorpe 
Cc: Christophe JAILLET 
Cc: Ioan Nicu 
Cc: Kees Cook 
Cc: Tvrtko Ursulin 
Signed-off-by: John Hubbard 
---
 drivers/rapidio/devices/rio_mport_cdev.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/rapidio/devices/rio_mport_cdev.c 
b/drivers/rapidio/devices/rio_mport_cdev.c
index 8155f59ece38..0e8ea0e5a89e 100644
--- a/drivers/rapidio/devices/rio_mport_cdev.c
+++ b/drivers/rapidio/devices/rio_mport_cdev.c
@@ -572,14 +572,12 @@ static void dma_req_free(struct kref *ref)
struct mport_dma_req *req = container_of(ref, struct mport_dma_req,
refcount);
struct mport_cdev_priv *priv = req->priv;
-   unsigned int i;
 
dma_unmap_sg(req->dmach->device->dev,
 req->sgt.sgl, req->sgt.nents, req->dir);
sg_free_table(&req->sgt);
if (req->page_list) {
-   for (i = 0; i < req->nr_pages; i++)
-   put_page(req->page_list[i]);
+   put_user_pages(req->page_list, req->nr_pages);
kfree(req->page_list);
}
 
@@ -815,7 +813,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
struct mport_dma_req *req;
struct mport_dev *md = priv->md;
struct dma_chan *chan;
-   int i, ret;
+   int ret;
int nents;
 
if (xfer->length == 0)
@@ -946,8 +944,7 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
 
 err_pg:
if (!req->page_list) {
-   for (i = 0; i < nr_pages; i++)
-   put_page(page_list[i]);
+   put_user_pages(page_list, nr_pages);
kfree(page_list);
}
 err_req:
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 10/41] media/ivtv: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Andy Walls 
Cc: Mauro Carvalho Chehab 
Cc: linux-me...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/media/pci/ivtv/ivtv-udma.c | 14 --
 drivers/media/pci/ivtv/ivtv-yuv.c  | 11 +++
 2 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/drivers/media/pci/ivtv/ivtv-udma.c 
b/drivers/media/pci/ivtv/ivtv-udma.c
index 5f8883031c9c..7c7f33c2412b 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 {
struct ivtv_dma_page_info user_dma;
struct ivtv_user_dma *dma = &itv->udma;
-   int i, err;
+   int err;
 
IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned 
int)ivtv_dest_addr);
 
@@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
IVTV_DEBUG_WARN("failed to map user pages, returned %d instead 
of %d\n",
   err, user_dma.page_count);
if (err >= 0) {
-   for (i = 0; i < err; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, err);
return -EINVAL;
}
return err;
@@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 
/* Fill SG List with new values */
if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) {
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
return -ENOMEM;
}
@@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 void ivtv_udma_unmap(struct ivtv *itv)
 {
struct ivtv_user_dma *dma = &itv->udma;
-   int i;
 
IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n");
 
@@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv)
ivtv_udma_sync_for_cpu(itv);
 
/* Release User Pages */
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
 }
 
diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c 
b/drivers/media/pci/ivtv/ivtv-yuv.c
index cd2fe2d444c0..2c61a11d391d 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -30,7 +30,6 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
struct yuv_playback_info *yi = &itv->yuv_info;
u8 frame = yi->draw_frame;
struct yuv_frame_info *f = &yi->new_frame_info[frame];
-   int i;
int y_pages, uv_pages;
unsigned long y_buffer_offset, uv_buffer_offset;
int y_decode_height, uv_decode_height, y_size;
@@ -81,8 +80,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 uv_pages, uv_dma.page_count);
 
if (uv_pages >= 0) {
-   for (i = 0; i < uv_pages; i++)
-   put_page(dma->map[y_pages + i]);
+   put_user_pages(&dma->map[y_pages], uv_pages);
rc = -EFAULT;
} else {
rc = uv_pages;
@@ -93,8 +91,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 y_pages, y_dma.page_count);
}
if (y_pages >= 0) {
-   for (i = 0; i < y_pages; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, y_pages);
/*
 * Inherit the -EFAULT from rc's
 * initialization, but allow it to be
@@ -112,9 +109,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
/* Fill & map SG List */
if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma, 
&y_dma, 0)) < 0) {
IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem 
userspace buffers\n");
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;

Re: [Intel-gfx] [PATCH 00/34] put_user_pages(): miscellaneous call sites

2019-08-09 Thread John Hubbard

On 8/2/19 1:05 AM, Peter Zijlstra wrote:
> On Thu, Aug 01, 2019 at 07:16:19PM -0700, john.hubb...@gmail.com wrote:
>> This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
>> ("mm: introduce put_user_page*(), placeholder versions"). That commit
>> has an extensive description of the problem and the planned steps to
>> solve it, but the highlights are:
>
> That is one horridly mangled Changelog there :-/ It looks like it's
> partially duplicated.

Yeah. It took so long to merge that I think I was no longer able to
actually see the commit description, after N readings. sigh

> Anyway; no objections to any of that, but I just wanted to mention that
> there are other problems with long term pinning that haven't been
> mentioned, notably they inhibit compaction.
>
> A long time ago I proposed an interface to mark pages as pinned, such
> that we could run compaction before we actually did the pinning.

This is all heading toward marking pages as pinned, so we should finally
get there.  I'll post the RFC for tracking pinned pages shortly.
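
(A purely hypothetical sketch of what "marking pages as pinned" could
let reclaim/compaction do -- neither the helper nor the constant below
existed at the time of this thread; both names are assumptions here:)

	/* Treat a large refcount bias as "this page is pinned for DMA". */
	#define GUP_PIN_COUNTING_BIAS	1024

	static inline bool page_maybe_dma_pinned(struct page *page)
	{
		return page_ref_count(compound_head(page)) >=
			GUP_PIN_COUNTING_BIAS;
	}

	/* ...so a migration/compaction candidate check could bail early: */
	if (page_maybe_dma_pinned(page))
		return false;	/* skip: unmapping would corrupt in-flight DMA */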


thanks,
--
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 32/34] goldfish_pipe: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
release_user_pages(): it now ultimately calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de

Cc: Greg Kroah-Hartman 
Cc: Roman Kiryanov 
Signed-off-by: John Hubbard 
---
 drivers/platform/goldfish/goldfish_pipe.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c 
b/drivers/platform/goldfish/goldfish_pipe.c
index cef0133aa47a..2bd21020e288 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -288,15 +288,12 @@ static int pin_user_pages(unsigned long first_page,
 static void release_user_pages(struct page **pages, int pages_count,
   int is_write, s32 consumed_size)
 {
-   int i;
+   bool dirty = !is_write && consumed_size > 0;
 
-   for (i = 0; i < pages_count; i++) {
-   if (!is_write && consumed_size > 0)
-   set_page_dirty(pages[i]);
-   put_page(pages[i]);
-   }
+   put_user_pages_dirty_lock(pages, pages_count, dirty);
 }
 
+
 /* Populate the call parameters, merging adjacent pages together */
 static void populate_rw_params(struct page **pages,
   int pages_count,
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 02/34] net/rds: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Santosh Shilimkar 
Cc: David S. Miller 
Cc: net...@vger.kernel.org
Cc: linux-r...@vger.kernel.org
Cc: rds-de...@oss.oracle.com
Signed-off-by: John Hubbard 
---
 net/rds/info.c|  5 ++---
 net/rds/message.c |  2 +-
 net/rds/rdma.c| 15 +++
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/net/rds/info.c b/net/rds/info.c
index 03f6fd56d237..ca6af2889adf 100644
--- a/net/rds/info.c
+++ b/net/rds/info.c
@@ -162,7 +162,6 @@ int rds_info_getsockopt(struct socket *sock, int optname, 
char __user *optval,
struct rds_info_lengths lens;
unsigned long nr_pages = 0;
unsigned long start;
-   unsigned long i;
rds_info_func func;
struct page **pages = NULL;
int ret;
@@ -235,8 +234,8 @@ int rds_info_getsockopt(struct socket *sock, int optname, 
char __user *optval,
ret = -EFAULT;
 
 out:
-   for (i = 0; pages && i < nr_pages; i++)
-   put_page(pages[i]);
+   if (pages)
+   put_user_pages(pages, nr_pages);
kfree(pages);
 
return ret;
diff --git a/net/rds/message.c b/net/rds/message.c
index 50f13f1d4ae0..d7b0d266c437 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -404,7 +404,7 @@ static int rds_message_zcopy_from_user(struct rds_message 
*rm, struct iov_iter *
int i;
 
for (i = 0; i < rm->data.op_nents; i++)
-   put_page(sg_page(>data.op_sg[i]));
+   put_user_page(sg_page(>data.op_sg[i]));
mmp = &rm->data.op_mmp_znotifier->z_mmp;
mm_unaccount_pinned_pages(mmp);
ret = -EFAULT;
diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index 916f5ec373d8..6762e8696b99 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -162,8 +162,7 @@ static int rds_pin_pages(unsigned long user_addr, unsigned 
int nr_pages,
  pages);
 
if (ret >= 0 && ret < nr_pages) {
-   while (ret--)
-   put_page(pages[ret]);
+   put_user_pages(pages, ret);
ret = -EFAULT;
}
 
@@ -276,7 +275,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct 
rds_get_mr_args *args,
 
if (IS_ERR(trans_private)) {
for (i = 0 ; i < nents; i++)
-   put_page(sg_page(&sg[i]));
+   put_user_page(sg_page(&sg[i]));
kfree(sg);
ret = PTR_ERR(trans_private);
goto out;
@@ -464,9 +463,10 @@ void rds_rdma_free_op(struct rm_rdma_op *ro)
 * to local memory */
if (!ro->op_write) {
WARN_ON(!page->mapping && irqs_disabled());
-   set_page_dirty(page);
+   put_user_pages_dirty_lock(&page, 1, true);
+   } else {
+   put_user_page(page);
}
-   put_page(page);
}
 
kfree(ro->op_notifier);
@@ -481,8 +481,7 @@ void rds_atomic_free_op(struct rm_atomic_op *ao)
/* Mark page dirty if it was possibly modified, which
 * is the case for a RDMA_READ which copies from remote
 * to local memory */
-   set_page_dirty(page);
-   put_page(page);
+   put_user_pages_dirty_lock(&page, 1, true);
 
kfree(ao->op_notifier);
ao->op_notifier = NULL;
@@ -867,7 +866,7 @@ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message 
*rm,
return ret;
 err:
if (page)
-   put_page(page);
+   put_user_page(page);
rm->atomic.op_active = 0;
kfree(rm->atomic.op_notifier);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 39/41] mm/mlock.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Dan Williams 
Cc: Daniel Black 
Cc: Jan Kara 
Cc: Jérôme Glisse 
Cc: Matthew Wilcox 
Cc: Mike Kravetz 
Signed-off-by: John Hubbard 
---
 mm/mlock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index a90099da4fb4..b980e6270e8a 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -345,7 +345,7 @@ static void __munlock_pagevec(struct pagevec *pvec, struct 
zone *zone)
get_page(page); /* for putback_lru_page() */
__munlock_isolated_page(page);
unlock_page(page);
-   put_page(page); /* from follow_page_mask() */
+   put_user_page(page); /* from follow_page_mask() */
}
}
}
@@ -467,7 +467,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
if (page && !IS_ERR(page)) {
if (PageTransTail(page)) {
VM_BUG_ON_PAGE(PageMlocked(page), page);
-   put_page(page); /* follow_page_mask() */
+   put_user_page(page); /* follow_page_mask() */
} else if (PageTransHuge(page)) {
lock_page(page);
/*
@@ -478,7 +478,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
 */
page_mask = munlock_vma_page(page);
unlock_page(page);
-   put_page(page); /* follow_page_mask() */
+   put_user_page(page); /* follow_page_mask() */
} else {
/*
 * Non-huge pages are handled in batches via
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 00/34] put_user_pages(): miscellaneous call sites

2019-08-09 Thread John Hubbard

On 8/7/19 7:36 PM, Ira Weiny wrote:
> On Wed, Aug 07, 2019 at 10:46:49AM +0200, Michal Hocko wrote:
>> On Wed 07-08-19 10:37:26, Jan Kara wrote:
>>> On Fri 02-08-19 12:14:09, John Hubbard wrote:
>>>> On 8/2/19 7:52 AM, Jan Kara wrote:
>>>>> On Fri 02-08-19 07:24:43, Matthew Wilcox wrote:
>>>>>> On Fri, Aug 02, 2019 at 02:41:46PM +0200, Jan Kara wrote:
>>>>>>> On Fri 02-08-19 11:12:44, Michal Hocko wrote:
>>>>>>>> On Thu 01-08-19 19:19:31, john.hubb...@gmail.com wrote:
> [...]
>
> Before I go on, I would like to say that the "imbalance" of get_user_pages()
> and put_page() bothers me from a purist standpoint...  However, since this
> discussion cropped up I went ahead and ported my work to Linus' current master
> (5.3-rc3+) and in doing so I only had to steal a bit of John's code...  Sorry
> John...  :-(
>
> I don't have the commit messages all cleaned up and I know there may be some
> discussion on these new interfaces but I wanted to throw this series out there
> because I think it may be what Jan and Michal are driving at (or at least in
> that direction).
>
> Right now only RDMA and DAX FS's are supported.  Other users of GUP will still
> fail on a DAX file and regular files will still be at risk.[2]
>
> I've pushed this work (based 5.3-rc3+ (33920f1ec5bf)) here[3]:
>
> https://github.com/weiny2/linux-kernel/tree/linus-rdmafsdax-b0-v3
>
> I think the most relevant patch to this conversation is:
>
> https://github.com/weiny2/linux-kernel/commit/5d377653ba5cf11c3b716f904b057bee6641aaf6



ohhh...can you please avoid using the old __put_user_pages_dirty()
function? I thought I'd caught things early enough to get away with
the rename and deletion of that. You could either:

a) open code an implementation of vaddr_put_pages_dirty_lock() that
doesn't call any of the *put_user_pages_dirty*() variants, or

b) include my first patch ("") as part of your series, or

c) base this on Andrews's tree, which already has merged in my first patch.


thanks,
--
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 16/41] oradax: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: David S. Miller 
Cc: Jonathan Helman 
Cc: Rob Gardner 
Cc: Andy Shevchenko 
Cc: Jonathan Corbet 
Cc: Wei Yongjun 
Cc: Mauro Carvalho Chehab 
Cc: sparcli...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/sbus/char/oradax.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
index 8af216287a84..029e619992fc 100644
--- a/drivers/sbus/char/oradax.c
+++ b/drivers/sbus/char/oradax.c
@@ -412,7 +412,7 @@ static void dax_unlock_pages(struct dax_ctx *ctx, int 
ccb_index, int nelem)
dax_dbg("freeing page %p", p);
if (j == OUT)
set_page_dirty(p);
-   put_page(p);
+   put_user_page(p);
ctx->pages[i][j] = NULL;
}
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 06/41] x86/kvm: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Joerg Roedel 
Cc: Paolo Bonzini 
Cc: Radim Krčmář 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: H. Peter Anvin 
Cc: x...@kernel.org
Cc: k...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 arch/x86/kvm/svm.c  | 4 ++--
 virt/kvm/kvm_main.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 7eafc6907861..ff93c923ed36 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1827,7 +1827,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, 
unsigned long uaddr,
 
 err:
if (npinned > 0)
-   release_pages(pages, npinned);
+   put_user_pages(pages, npinned);
 
kvfree(pages);
return NULL;
@@ -1838,7 +1838,7 @@ static void sev_unpin_memory(struct kvm *kvm, struct page 
**pages,
 {
struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
 
-   release_pages(pages, npages);
+   put_user_pages(pages, npages);
kvfree(pages);
sev->pages_locked -= npages;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 887f3b0c2b60..4b6a596ea8e9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1499,7 +1499,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool 
*async, bool write_fault,
 
if (__get_user_pages_fast(addr, 1, 1, &wpage) == 1) {
*writable = true;
-   put_page(page);
+   put_user_page(page);
page = wpage;
}
}
@@ -1831,7 +1831,7 @@ EXPORT_SYMBOL_GPL(kvm_release_page_clean);
 void kvm_release_pfn_clean(kvm_pfn_t pfn)
 {
if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn))
-   put_page(pfn_to_page(pfn));
+   put_user_page(pfn_to_page(pfn));
 }
 EXPORT_SYMBOL_GPL(kvm_release_pfn_clean);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 13/41] scif: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Sudeep Dutt 
Cc: Ashutosh Dixit 
Cc: Arnd Bergmann 
Cc: Joerg Roedel 
Cc: Robin Murphy 
Cc: Zhen Lei 
Signed-off-by: John Hubbard 
---
 drivers/misc/mic/scif/scif_rma.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/misc/mic/scif/scif_rma.c b/drivers/misc/mic/scif/scif_rma.c
index 01e27682ea30..d84ed9466920 100644
--- a/drivers/misc/mic/scif/scif_rma.c
+++ b/drivers/misc/mic/scif/scif_rma.c
@@ -113,13 +113,14 @@ static int scif_destroy_pinned_pages(struct 
scif_pinned_pages *pin)
int writeable = pin->prot & SCIF_PROT_WRITE;
int kernel = SCIF_MAP_KERNEL & pin->map_flags;
 
-   for (j = 0; j < pin->nr_pages; j++) {
-   if (pin->pages[j] && !kernel) {
+   if (kernel) {
+   for (j = 0; j < pin->nr_pages; j++) {
if (writeable)
-   SetPageDirty(pin->pages[j]);
+   set_page_dirty_lock(pin->pages[j]);
put_page(pin->pages[j]);
}
-   }
+   } else
+   put_user_pages_dirty_lock(pin->pages, pin->nr_pages, writeable);
 
scif_free(pin->pages,
  pin->nr_pages * sizeof(*pin->pages));
@@ -1385,11 +1386,9 @@ int __scif_pin_pages(void *addr, size_t len, int 
*out_prot,
if (ulimit)
__scif_dec_pinned_vm_lock(mm, nr_pages);
/* Roll back any pinned pages */
-   for (i = 0; i < pinned_pages->nr_pages; i++) {
-   if (pinned_pages->pages[i])
-   put_page(
-   pinned_pages->pages[i]);
-   }
+   put_user_pages(pinned_pages->pages,
+  pinned_pages->nr_pages);
+
prot &= ~SCIF_PROT_WRITE;
try_upgrade = false;
goto retry;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 35/41] kernel/events/core.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Peter Zijlstra 
Cc: Ingo Molnar 
Cc: Arnaldo Carvalho de Melo 
Cc: Alexander Shishkin 
Cc: Jiri Olsa 
Cc: Namhyung Kim 
Signed-off-by: John Hubbard 
---
 kernel/events/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 0463c1151bae..7be52bbbfe87 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6426,7 +6426,7 @@ static u64 perf_virt_to_phys(u64 virt)
phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
 
if (p)
-   put_page(p);
+   put_user_page(p);
}
 
return phys_addr;
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 08/34] media/ivtv: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Andy Walls 
Cc: Mauro Carvalho Chehab 
Cc: linux-me...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/media/pci/ivtv/ivtv-udma.c | 14 --
 drivers/media/pci/ivtv/ivtv-yuv.c  | 10 +++---
 2 files changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/media/pci/ivtv/ivtv-udma.c 
b/drivers/media/pci/ivtv/ivtv-udma.c
index 5f8883031c9c..7c7f33c2412b 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -92,7 +92,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 {
struct ivtv_dma_page_info user_dma;
struct ivtv_user_dma *dma = &itv->udma;
-   int i, err;
+   int err;
 
IVTV_DEBUG_DMA("ivtv_udma_setup, dst: 0x%08x\n", (unsigned 
int)ivtv_dest_addr);
 
@@ -119,8 +119,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
IVTV_DEBUG_WARN("failed to map user pages, returned %d instead 
of %d\n",
   err, user_dma.page_count);
if (err >= 0) {
-   for (i = 0; i < err; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, err);
return -EINVAL;
}
return err;
@@ -130,9 +129,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 
/* Fill SG List with new values */
if (ivtv_udma_fill_sg_list(dma, &user_dma, 0) < 0) {
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
return -ENOMEM;
}
@@ -153,7 +150,6 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long 
ivtv_dest_addr,
 void ivtv_udma_unmap(struct ivtv *itv)
 {
struct ivtv_user_dma *dma = &itv->udma;
-   int i;
 
IVTV_DEBUG_INFO("ivtv_unmap_user_dma\n");
 
@@ -170,9 +166,7 @@ void ivtv_udma_unmap(struct ivtv *itv)
ivtv_udma_sync_for_cpu(itv);
 
/* Release User Pages */
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
 }
 
diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c 
b/drivers/media/pci/ivtv/ivtv-yuv.c
index cd2fe2d444c0..9465a7d450b6 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -81,8 +81,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 uv_pages, uv_dma.page_count);
 
if (uv_pages >= 0) {
-   for (i = 0; i < uv_pages; i++)
-   put_page(dma->map[y_pages + i]);
+   put_user_pages(&dma->map[y_pages], uv_pages);
rc = -EFAULT;
} else {
rc = uv_pages;
@@ -93,8 +92,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
 y_pages, y_dma.page_count);
}
if (y_pages >= 0) {
-   for (i = 0; i < y_pages; i++)
-   put_page(dma->map[i]);
+   put_user_pages(dma->map, y_pages);
/*
 * Inherit the -EFAULT from rc's
 * initialization, but allow it to be
@@ -112,9 +110,7 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct 
ivtv_user_dma *dma,
/* Fill & map SG List */
if (ivtv_udma_fill_sg_list (dma, &uv_dma, ivtv_udma_fill_sg_list (dma,
&y_dma, 0)) < 0) {
IVTV_DEBUG_WARN("could not allocate bounce buffers for highmem 
userspace buffers\n");
-   for (i = 0; i < dma->page_count; i++) {
-   put_page(dma->map[i]);
-   }
+   put_user_pages(dma->map, dma->page_count);
dma->page_count = 0;
return -ENOMEM;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 06/34] drm/i915: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This is a merge-able version of the fix, because it restricts
itself to put_user_page() and put_user_pages(), both of which
have not changed their APIs. Later, i915_gem_userptr_put_pages()
can be simplified to use put_user_pages_dirty_lock().
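
That later simplification might look roughly like the following hedged
sketch (reusing the loop visible in the diff below; this is not part
of the patch itself):

	for_each_sgt_page(page, sgt_iter, pages) {
		mark_page_accessed(page);
		/* One call replaces the conditional set_page_dirty_lock()
		 * plus put_user_page() pair; obj->mm.dirty picks the path.
		 */
		put_user_pages_dirty_lock(&page, 1, obj->mm.dirty);
	}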

Cc: Jani Nikula 
Cc: Joonas Lahtinen 
Cc: Rodrigo Vivi 
Cc: David Airlie 
Cc: intel-gfx@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c 
b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index 2caa594322bc..76dda2923cf1 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -527,7 +527,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct 
*_work)
}
mutex_unlock(&obj->mm.lock);
 
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
i915_gem_object_put(obj);
@@ -640,7 +640,7 @@ static int i915_gem_userptr_get_pages(struct 
drm_i915_gem_object *obj)
__i915_gem_userptr_set_active(obj, true);
 
if (IS_ERR(pages))
-   release_pages(pvec, pinned);
+   put_user_pages(pvec, pinned);
kvfree(pvec);
 
return PTR_ERR_OR_ZERO(pages);
@@ -675,7 +675,7 @@ i915_gem_userptr_put_pages(struct drm_i915_gem_object *obj,
set_page_dirty_lock(page);
 
mark_page_accessed(page);
-   put_page(page);
+   put_user_page(page);
}
obj->mm.dirty = false;
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 03/34] net/ceph: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Jeff Layton 

Cc: Ilya Dryomov 
Cc: Sage Weil 
Cc: David S. Miller 
Cc: ceph-de...@vger.kernel.org
Cc: net...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 net/ceph/pagevec.c | 8 +---
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/net/ceph/pagevec.c b/net/ceph/pagevec.c
index 64305e7056a1..c88fff2ab9bd 100644
--- a/net/ceph/pagevec.c
+++ b/net/ceph/pagevec.c
@@ -12,13 +12,7 @@
 
 void ceph_put_page_vector(struct page **pages, int num_pages, bool dirty)
 {
-   int i;
-
-   for (i = 0; i < num_pages; i++) {
-   if (dirty)
-   set_page_dirty_lock(pages[i]);
-   put_page(pages[i]);
-   }
+   put_user_pages_dirty_lock(pages, num_pages, dirty);
kvfree(pages);
 }
 EXPORT_SYMBOL(ceph_put_page_vector);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 23/41] fs/exec.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Alexander Viro 
Cc: linux-fsde...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 fs/exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/exec.c b/fs/exec.c
index f7f6a140856a..ee442151582f 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -227,7 +227,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, 
unsigned long pos,
 
 static void put_arg_page(struct page *page)
 {
-   put_page(page);
+   put_user_page(page);
 }
 
 static void free_arg_pages(struct linux_binprm *bprm)
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 23/34] uprobes: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Peter Zijlstra 
Cc: Ingo Molnar 
Cc: Arnaldo Carvalho de Melo 
Cc: Alexander Shishkin 
Cc: Jiri Olsa 
Cc: Namhyung Kim 
Signed-off-by: John Hubbard 
---
 kernel/events/uprobes.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 84fa00497c49..4a575de8cec8 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -397,7 +397,7 @@ __update_ref_ctr(struct mm_struct *mm, unsigned long vaddr, 
short d)
ret = 0;
 out:
kunmap_atomic(kaddr);
-   put_page(page);
+   put_user_page(page);
return ret;
 }
 
@@ -504,7 +504,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct 
mm_struct *mm,
ret = __replace_page(vma, vaddr, old_page, new_page);
put_page(new_page);
 put_old:
-   put_page(old_page);
+   put_user_page(old_page);
 
if (unlikely(ret == -EAGAIN))
goto retry;
@@ -1981,7 +1981,7 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned 
long vaddr)
return result;
 
copy_from_page(page, vaddr, &opcode, UPROBE_SWBP_INSN_SIZE);
-   put_page(page);
+   put_user_page(page);
  out:
/* This needs to return true for any variant of the trap insn */
return is_trap_insn(&opcode);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

Re: [Intel-gfx] [PATCH 20/34] xen: convert put_page() to put_user_page*()

2019-08-09 Thread John Hubbard

On 8/1/19 9:36 PM, Juergen Gross wrote:

On 02.08.19 04:19, john.hubb...@gmail.com wrote:

From: John Hubbard 

...

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 2f5ce7230a43..29e461dbee2d 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -611,15 +611,10 @@ static int lock_pages(
  static void unlock_pages(struct page *pages[], unsigned int nr_pages)
  {
-    unsigned int i;
-
  if (!pages)
  return;
-    for (i = 0; i < nr_pages; i++) {
-    if (pages[i])
-    put_page(pages[i]);
-    }
+    put_user_pages(pages, nr_pages);


You are not handling the case where pages[i] is NULL here. Or am I
missing a pending patch to put_user_pages() here?



Hi Juergen,

You are correct--this no longer handles the cases where pages[i]
is NULL. It's intentional, though possibly wrong. :)

I see that I should have added my standard blurb to this
commit description. I missed this one, but some of the other patches
have it. It makes the following, possibly incorrect claim:

"This changes the release code slightly, because each page slot in the
page_list[] array is no longer checked for NULL. However, that check
was wrong anyway, because the get_user_pages() pattern of usage here
never allowed for NULL entries within a range of pinned pages."

The way I've seen these page arrays used with get_user_pages(),
things are either done single page, or with a contiguous range. So
unless I'm missing a case where someone is either

a) releasing individual pages within a range (and thus likely messing
up their count of pinned pages), or

b) allocating two gup ranges within the same pages[] array, with a
gap between the allocations,

...then it should be correct. If so, then I'll add the above blurb
to this patch's commit description.

If that's not the case (both here, and in 3 or 4 other patches in this
series), then as you said, I should add NULL checks to put_user_pages()
and put_user_pages_dirty_lock().
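
If it does come to that, the NULL-tolerant version would be a small
change; roughly (a sketch only, layered on the put_user_pages() from
this series):

	void put_user_pages(struct page **pages, unsigned long npages)
	{
		unsigned long i;

		for (i = 0; i < npages; i++)
			if (pages[i])	/* tolerate gaps in the array */
				put_user_page(pages[i]);
	}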


thanks,
--
John Hubbard
NVIDIA
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 34/41] goldfish_pipe: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Note that this effectively changes the code's behavior in
release_user_pages(): it now ultimately calls set_page_dirty_lock(),
instead of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

[1] https://lore.kernel.org/r/20190723153640.gb...@lst.de
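
For callers, the consolidated helper behaves conceptually like this
simplified sketch (an approximation of the mm/gup.c implementation,
not the exact code):

	void put_user_pages_dirty_lock(struct page **pages,
				       unsigned long npages, bool make_dirty)
	{
		unsigned long i;

		if (!make_dirty) {
			put_user_pages(pages, npages);
			return;
		}

		for (i = 0; i < npages; i++) {
			/* Skip the dirty call for already-dirty pages. */
			if (!PageDirty(pages[i]))
				set_page_dirty_lock(pages[i]);
			put_user_page(pages[i]);
		}
	}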

Cc: Greg Kroah-Hartman 
Cc: Roman Kiryanov 
Signed-off-by: John Hubbard 
---
 drivers/platform/goldfish/goldfish_pipe.c | 9 +++--
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/platform/goldfish/goldfish_pipe.c 
b/drivers/platform/goldfish/goldfish_pipe.c
index cef0133aa47a..2bd21020e288 100644
--- a/drivers/platform/goldfish/goldfish_pipe.c
+++ b/drivers/platform/goldfish/goldfish_pipe.c
@@ -288,15 +288,12 @@ static int pin_user_pages(unsigned long first_page,
 static void release_user_pages(struct page **pages, int pages_count,
   int is_write, s32 consumed_size)
 {
-   int i;
+   bool dirty = !is_write && consumed_size > 0;
 
-   for (i = 0; i < pages_count; i++) {
-   if (!is_write && consumed_size > 0)
-   set_page_dirty(pages[i]);
-   put_page(pages[i]);
-   }
+   put_user_pages_dirty_lock(pages, pages_count, dirty);
 }
 
+
 /* Populate the call parameters, merging adjacent pages together */
 static void populate_rw_params(struct page **pages,
   int pages_count,
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 16/34] drivers/tee: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Jens Wiklander 
Signed-off-by: John Hubbard 
---
 drivers/tee/tee_shm.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/tee/tee_shm.c b/drivers/tee/tee_shm.c
index 2da026fd12c9..c967d0420b67 100644
--- a/drivers/tee/tee_shm.c
+++ b/drivers/tee/tee_shm.c
@@ -31,16 +31,13 @@ static void tee_shm_release(struct tee_shm *shm)
 
poolm->ops->free(poolm, shm);
} else if (shm->flags & TEE_SHM_REGISTER) {
-   size_t n;
int rc = teedev->desc->ops->shm_unregister(shm->ctx, shm);
 
if (rc)
dev_err(teedev->dev.parent,
"unregister shm %p failed: %d", shm, rc);
 
-   for (n = 0; n < shm->num_pages; n++)
-   put_page(shm->pages[n]);
-
+   put_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
 
@@ -313,16 +310,13 @@ struct tee_shm *tee_shm_register(struct tee_context *ctx, 
unsigned long addr,
return shm;
 err:
if (shm) {
-   size_t n;
-
if (shm->id >= 0) {
mutex_lock(&teedev->mutex);
idr_remove(&teedev->idr, shm->id);
mutex_unlock(&teedev->mutex);
}
if (shm->pages) {
-   for (n = 0; n < shm->num_pages; n++)
-   put_page(shm->pages[n]);
+   put_user_pages(shm->pages, shm->num_pages);
kfree(shm->pages);
}
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 18/34] fbdev/pvr2fb: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Bartlomiej Zolnierkiewicz 
Cc: Kees Cook 
Cc: Al Viro 
Cc: Bhumika Goyal 
Cc: Arvind Yadav 
Cc: dri-de...@lists.freedesktop.org
Cc: linux-fb...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 drivers/video/fbdev/pvr2fb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/video/fbdev/pvr2fb.c b/drivers/video/fbdev/pvr2fb.c
index 7ff4b6b84282..0e4f9aa6444d 100644
--- a/drivers/video/fbdev/pvr2fb.c
+++ b/drivers/video/fbdev/pvr2fb.c
@@ -700,8 +700,7 @@ static ssize_t pvr2fb_write(struct fb_info *info, const 
char *buf,
ret = count;
 
 out_unmap:
-   for (i = 0; i < nr_pages; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, nr_pages);
 
kfree(pages);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 21/34] fs/exec.c: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Alexander Viro 
Cc: linux-fsde...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 fs/exec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/exec.c b/fs/exec.c
index f7f6a140856a..ee442151582f 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -227,7 +227,7 @@ static struct page *get_arg_page(struct linux_binprm *bprm, 
unsigned long pos,
 
 static void put_arg_page(struct page *page)
 {
-   put_page(page);
+   put_user_page(page);
 }
 
 static void free_arg_pages(struct linux_binprm *bprm)
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 24/34] futex: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Peter Zijlstra 
Cc: Darren Hart 
Signed-off-by: John Hubbard 
---
 kernel/futex.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/futex.c b/kernel/futex.c
index 6d50728ef2e7..4b4cae58ec57 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -623,7 +623,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
lock_page(page);
shmem_swizzled = PageSwapCache(page) || page->mapping;
unlock_page(page);
-   put_page(page);
+   put_user_page(page);
 
if (shmem_swizzled)
goto again;
@@ -675,7 +675,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
 
if (READ_ONCE(page->mapping) != mapping) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -683,7 +683,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
inode = READ_ONCE(mapping->host);
if (!inode) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -702,7 +702,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
 */
if (!atomic_inc_not_zero(&inode->i_count)) {
rcu_read_unlock();
-   put_page(page);
+   put_user_page(page);
 
goto again;
}
@@ -723,7 +723,7 @@ get_futex_key(u32 __user *uaddr, int fshared, union 
futex_key *key, enum futex_a
}
 
 out:
-   put_page(page);
+   put_user_page(page);
return err;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 04/41] net/rds: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Santosh Shilimkar 
Cc: David S. Miller 
Cc: net...@vger.kernel.org
Cc: linux-r...@vger.kernel.org
Cc: rds-de...@oss.oracle.com
Signed-off-by: John Hubbard 
---
 net/rds/info.c|  5 ++---
 net/rds/message.c |  2 +-
 net/rds/rdma.c| 15 +++
 3 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/net/rds/info.c b/net/rds/info.c
index 03f6fd56d237..ca6af2889adf 100644
--- a/net/rds/info.c
+++ b/net/rds/info.c
@@ -162,7 +162,6 @@ int rds_info_getsockopt(struct socket *sock, int optname, 
char __user *optval,
struct rds_info_lengths lens;
unsigned long nr_pages = 0;
unsigned long start;
-   unsigned long i;
rds_info_func func;
struct page **pages = NULL;
int ret;
@@ -235,8 +234,8 @@ int rds_info_getsockopt(struct socket *sock, int optname, 
char __user *optval,
ret = -EFAULT;
 
 out:
-   for (i = 0; pages && i < nr_pages; i++)
-   put_page(pages[i]);
+   if (pages)
+   put_user_pages(pages, nr_pages);
kfree(pages);
 
return ret;
diff --git a/net/rds/message.c b/net/rds/message.c
index 50f13f1d4ae0..d7b0d266c437 100644
--- a/net/rds/message.c
+++ b/net/rds/message.c
@@ -404,7 +404,7 @@ static int rds_message_zcopy_from_user(struct rds_message 
*rm, struct iov_iter *
int i;
 
for (i = 0; i < rm->data.op_nents; i++)
-   put_page(sg_page(&rm->data.op_sg[i]));
+   put_user_page(sg_page(&rm->data.op_sg[i]));
mmp = &rm->data.op_mmp_znotifier->z_mmp;
mm_unaccount_pinned_pages(mmp);
ret = -EFAULT;
diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index 916f5ec373d8..6762e8696b99 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -162,8 +162,7 @@ static int rds_pin_pages(unsigned long user_addr, unsigned 
int nr_pages,
  pages);
 
if (ret >= 0 && ret < nr_pages) {
-   while (ret--)
-   put_page(pages[ret]);
+   put_user_pages(pages, ret);
ret = -EFAULT;
}
 
@@ -276,7 +275,7 @@ static int __rds_rdma_map(struct rds_sock *rs, struct 
rds_get_mr_args *args,
 
if (IS_ERR(trans_private)) {
for (i = 0 ; i < nents; i++)
-   put_page(sg_page(&sg[i]));
+   put_user_page(sg_page(&sg[i]));
kfree(sg);
ret = PTR_ERR(trans_private);
goto out;
@@ -464,9 +463,10 @@ void rds_rdma_free_op(struct rm_rdma_op *ro)
 * to local memory */
if (!ro->op_write) {
WARN_ON(!page->mapping && irqs_disabled());
-   set_page_dirty(page);
+   put_user_pages_dirty_lock(&page, 1, true);
+   } else {
+   put_user_page(page);
}
-   put_page(page);
}
 
kfree(ro->op_notifier);
@@ -481,8 +481,7 @@ void rds_atomic_free_op(struct rm_atomic_op *ao)
/* Mark page dirty if it was possibly modified, which
 * is the case for a RDMA_READ which copies from remote
 * to local memory */
-   set_page_dirty(page);
-   put_page(page);
+   put_user_pages_dirty_lock(&page, 1, true);
 
kfree(ao->op_notifier);
ao->op_notifier = NULL;
@@ -867,7 +866,7 @@ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message 
*rm,
return ret;
 err:
if (page)
-   put_page(page);
+   put_user_page(page);
rm->atomic.op_active = 0;
kfree(rm->atomic.op_notifier);
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v3 37/41] security/tomoyo: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Tetsuo Handa 

Cc: Kentaro Takeda 
Cc: linux-security-mod...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 security/tomoyo/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/security/tomoyo/domain.c b/security/tomoyo/domain.c
index 8526a0a74023..6887beecfb6e 100644
--- a/security/tomoyo/domain.c
+++ b/security/tomoyo/domain.c
@@ -931,7 +931,7 @@ bool tomoyo_dump_page(struct linux_binprm *bprm, unsigned 
long pos,
}
/* Same with put_arg_page(page) in fs/exec.c */
 #ifdef CONFIG_MMU
-   put_page(page);
+   put_user_page(page);
 #endif
return true;
 }
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 07/34] drm/radeon: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Alex Deucher 
Cc: Christian König 
Cc: David (ChunMing) Zhou 
Cc: David Airlie 
Cc: amd-...@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: John Hubbard 
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index fb3696bc616d..4c9943fa10df 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -540,7 +540,7 @@ static int radeon_ttm_tt_pin_userptr(struct ttm_tt *ttm)
kfree(ttm->sg);
 
 release_pages:
-   release_pages(ttm->pages, pinned);
+   put_user_pages(ttm->pages, pinned);
return r;
 }
 
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 20/34] xen: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

This also handles the pages[i] == NULL cases: lock_pages() now reports
back, via *nr_pages, how many pages were actually pinned, so the
release path never walks past the last pinned entry. This approach was
actually written by Juergen Gross.

Signed-off-by: Juergen Gross 
Signed-off-by: John Hubbard 

Cc: Boris Ostrovsky 
Cc: xen-de...@lists.xenproject.org
---

Hi Juergen,

Say, this is *exactly* what you proposed in your gup.patch, so
I've speculatively added your Signed-off-by above, but need your
approval before that's final. Let me know please...

thanks,
John Hubbard


 drivers/xen/privcmd.c | 32 +++-
 1 file changed, 11 insertions(+), 21 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index c6070e70dd73..c7d0763ca8c2 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -582,10 +582,11 @@ static long privcmd_ioctl_mmap_batch(
 
 static int lock_pages(
struct privcmd_dm_op_buf kbufs[], unsigned int num,
-   struct page *pages[], unsigned int nr_pages)
+   struct page *pages[], unsigned int *nr_pages)
 {
-   unsigned int i;
+   unsigned int i, free = *nr_pages;
 
+   *nr_pages = 0;
for (i = 0; i < num; i++) {
unsigned int requested;
int pinned;
@@ -593,35 +594,22 @@ static int lock_pages(
requested = DIV_ROUND_UP(
offset_in_page(kbufs[i].uptr) + kbufs[i].size,
PAGE_SIZE);
-   if (requested > nr_pages)
+   if (requested > free)
return -ENOSPC;
 
pinned = get_user_pages_fast(
(unsigned long) kbufs[i].uptr,
-   requested, FOLL_WRITE, pages);
+   requested, FOLL_WRITE, pages + *nr_pages);
if (pinned < 0)
return pinned;
 
-   nr_pages -= pinned;
-   pages += pinned;
+   free -= pinned;
+   *nr_pages += pinned;
}
 
return 0;
 }
 
-static void unlock_pages(struct page *pages[], unsigned int nr_pages)
-{
-   unsigned int i;
-
-   if (!pages)
-   return;
-
-   for (i = 0; i < nr_pages; i++) {
-   if (pages[i])
-   put_page(pages[i]);
-   }
-}
-
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 {
struct privcmd_data *data = file->private_data;
@@ -681,11 +669,12 @@ static long privcmd_ioctl_dm_op(struct file *file, void 
__user *udata)
 
xbufs = kcalloc(kdata.num, sizeof(*xbufs), GFP_KERNEL);
if (!xbufs) {
+   nr_pages = 0;
rc = -ENOMEM;
goto out;
}
 
-   rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
+   rc = lock_pages(kbufs, kdata.num, pages, &nr_pages);
if (rc)
goto out;
 
@@ -699,7 +688,8 @@ static long privcmd_ioctl_dm_op(struct file *file, void 
__user *udata)
xen_preemptible_hcall_end();
 
 out:
-   unlock_pages(pages, nr_pages);
+   if (pages)
+   put_user_pages(pages, nr_pages);
kfree(xbufs);
kfree(pages);
kfree(kbufs);
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH v2 22/34] orangefs: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Mike Marshall 
Cc: Martin Brandenburg 
Cc: de...@lists.orangefs.org
Signed-off-by: John Hubbard 
---
 fs/orangefs/orangefs-bufmap.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/fs/orangefs/orangefs-bufmap.c b/fs/orangefs/orangefs-bufmap.c
index 2bb916d68576..f2f33a16d604 100644
--- a/fs/orangefs/orangefs-bufmap.c
+++ b/fs/orangefs/orangefs-bufmap.c
@@ -168,10 +168,7 @@ static DEFINE_SPINLOCK(orangefs_bufmap_lock);
 static void
 orangefs_bufmap_unmap(struct orangefs_bufmap *bufmap)
 {
-   int i;
-
-   for (i = 0; i < bufmap->page_count; i++)
-   put_page(bufmap->page_array[i]);
+   put_user_pages(bufmap->page_array, bufmap->page_count);
 }
 
 static void
@@ -280,7 +277,7 @@ orangefs_bufmap_map(struct orangefs_bufmap *bufmap,
 
for (i = 0; i < ret; i++) {
SetPageError(bufmap->page_array[i]);
-   put_page(bufmap->page_array[i]);
+   put_user_page(bufmap->page_array[i]);
}
return -ENOMEM;
}
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx

[Intel-gfx] [PATCH 31/34] nfs: convert put_page() to put_user_page*()

2019-08-09 Thread john . hubbard
From: John Hubbard 

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit fc1d8e7cca2d
("mm: introduce put_user_page*(), placeholder versions").

Cc: Trond Myklebust 
Cc: Anna Schumaker 
Cc: linux-...@vger.kernel.org
Signed-off-by: John Hubbard 
---
 fs/nfs/direct.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 0cb442406168..b00b89dda3c5 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -278,9 +278,7 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter 
*iter)
 
 static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
 {
-   unsigned int i;
-   for (i = 0; i < npages; i++)
-   put_page(pages[i]);
+   put_user_pages(pages, npages);
 }
 
 void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
-- 
2.22.0

___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
