[Nouveau] [PATCH] mm: Take a page reference when removing device exclusive entries

2023-03-27 Thread Alistair Popple
Device exclusive page table entries are used to prevent CPU access to
a page whilst it is being accessed from a device. Typically this is
used to implement atomic operations when the underlying bus does not
support atomic access. When a CPU thread encounters a device exclusive
entry it locks the page and restores the original entry after calling
mmu notifiers to signal drivers that exclusive access is no longer
available.
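
For context, the handler below is reached from do_swap_page(); roughly
(abridged sketch, from memory of mm/memory.c around this kernel version,
not part of this patch) the caller side looks like:

	entry = pte_to_swp_entry(vmf->orig_pte);
	if (unlikely(non_swap_entry(entry))) {
		if (is_migration_entry(entry)) {
			migration_entry_wait(vma->vm_mm, vmf->pmd,
					     vmf->address);
		} else if (is_device_exclusive_entry(entry)) {
			/* vmf->page is the page the exclusive entry refers to */
			vmf->page = pfn_swap_entry_to_page(entry);
			ret = remove_device_exclusive_entry(vmf);
		}
		/* ... other non-swap entry types ... */
	}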

The device exclusive entry holds a reference to the page, making it
safe to access the struct page whilst the entry is present. However,
the fault handling code does not hold the PTL when taking the page
lock. This means that if multiple threads fault concurrently on the
device exclusive entry, one will remove the entry whilst the others
wait on the page lock without holding a reference.

This can lead to threads locking or waiting on a page with a zero
refcount. Whilst mmap_lock prevents the page being freed via munmap(),
it may still be freed by migration. This leads to warnings such as
PAGE_FLAGS_CHECK_AT_FREE because the page is still locked when the
refcount drops to zero. Note that during removal of the device
exclusive entry the PTE is currently re-checked under the PTL, so no
further bad page accesses occur once it is locked.
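
Condensed, the fix is to pin the page before sleeping on its lock and
to drop the pin once done (illustrative sketch only; the actual diff
follows below):

	if (!get_page_unless_zero(vmf->page))
		return 0;		/* entry already removed, nothing to do */

	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
		put_page(vmf->page);
		return VM_FAULT_RETRY;
	}

	/* ... re-check the PTE and restore the original entry under the PTL ... */

	folio_unlock(folio);
	put_page(vmf->page);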

Signed-off-by: Alistair Popple 
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: sta...@vger.kernel.org
---
 mm/memory.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index 8c8420934d60..b499bd283d8e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3623,8 +3623,19 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
struct vm_area_struct *vma = vmf->vma;
struct mmu_notifier_range range;
 
-   if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+   /*
+* We need a page reference to lock the page because we don't
+* hold the PTL so a racing thread can remove the
+* device-exclusive entry and unmap the page. If the page is
+* free the entry must have been removed already.
+*/
+   if (!get_page_unless_zero(vmf->page))
+   return 0;
+
+   if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+   put_page(vmf->page);
return VM_FAULT_RETRY;
+   }
	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma,
vma->vm_mm, vmf->address & PAGE_MASK,
(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
pte_unmap_unlock(vmf->pte, vmf->ptl);
folio_unlock(folio);
+   put_page(vmf->page);
 
	mmu_notifier_invalidate_range_end(&range);
return 0;
-- 
2.39.2



Re: [Nouveau] [PATCH] drm/nouveau: Fix bug in buffer relocs for Nouveau

2023-03-27 Thread Christian König

On 27.03.23 at 10:42, John Ogness wrote:

On 2023-01-19, Tanmay Bhushan <0070472...@gmail.com> wrote:

dma_resv_wait_timeout() returns a value greater than zero on success,
as opposed to ttm_bo_wait_ctx(). As a result, relocs will be reported
as failed even when they succeeded.

Today I switched my workstation from 6.2 to 6.3-rc3 and started seeing
lots of new kernel messages:

[  642.138313][ T1751] nouveau :f0:10.0: X[1751]: reloc wait_idle failed: 1500
[  642.138389][ T1751] nouveau :f0:10.0: X[1751]: reloc apply: 1500
[  646.123490][ T1751] nouveau :f0:10.0: X[1751]: reloc wait_idle failed: 1500
[  646.123573][ T1751] nouveau :f0:10.0: X[1751]: reloc apply: 1500

The graphics seemed to go slower or hang a bit when these messages would
appear. I then found your patch! However, I have some comments about it.

First, it should include a fixes tag:

Fixes: 41d351f29528 ("drm/nouveau: stop using ttm_bo_wait")


Signed-off-by: Tanmay Bhushan <0070472...@gmail.com>
---
  drivers/gpu/drm/nouveau/nouveau_gem.c | 3 +--
  1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index f77e44958037..0e3690459144 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -706,9 +706,8 @@ nouveau_gem_pushbuf_reloc_apply(struct nouveau_cli *cli,
ret = dma_resv_wait_timeout(nvbo->bo.base.resv,
DMA_RESV_USAGE_BOOKKEEP,
false, 15 * HZ);
-   if (ret == 0)
+   if (ret <= 0) {
ret = -EBUSY;

This is incorrect for 2 reasons:

* it treats restarts as timeouts

* this function now returns >0 on success


-   if (ret) {
NV_PRINTK(err, cli, "reloc wait_idle failed: %ld\n",
  ret);
break;

I rearranged things to correctly translate the return code of
dma_resv_wait_timeout() to match the previous ttm_bo_wait():

ret = dma_resv_wait_timeout(nvbo->bo.base.resv,
DMA_RESV_USAGE_BOOKKEEP,
false, 15 * HZ);
if (ret == 0)
ret = -EBUSY;
if (ret > 0)
ret = 0;
if (ret) {
NV_PRINTK(err, cli, "reloc wait_idle failed: %ld\n",
  ret);
break;
}

So the patch just becomes:

@@ -708,6 +708,8 @@ nouveau_gem_pushbuf_reloc_apply(struct n
false, 15 * HZ);
if (ret == 0)
ret = -EBUSY;
+   if (ret > 0)
+   ret = 0;
if (ret) {
NV_PRINTK(err, cli, "reloc wait_idle failed: %ld\n",
  ret);

With this variant, everything runs correctly on my workstation again.

It probably deserves a comment about why @ret is being translated. Or
perhaps a new variable should be introduced to separate the return value
of dma_resv_wait_timeout() from the return value of this function.
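
For example, something along these lines (an untested sketch of that
suggestion, not from the thread; the variable name lret is purely for
illustration and @ret is assumed to keep the type the existing
NV_PRINTK format expects):

	long lret;

	lret = dma_resv_wait_timeout(nvbo->bo.base.resv,
				     DMA_RESV_USAGE_BOOKKEEP,
				     false, 15 * HZ);
	if (lret == 0)
		ret = -EBUSY;		/* wait timed out */
	else if (lret < 0)
		ret = lret;		/* interrupted, e.g. -ERESTARTSYS */
	else
		ret = 0;		/* fences signalled within the timeout */

	if (ret) {
		NV_PRINTK(err, cli, "reloc wait_idle failed: %ld\n", ret);
		break;
	}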


I'm going to take a look tomorrow, but your code already looks pretty 
correct to me.


And sorry for the noise, I missed the difference in the conversion.

Thanks,
Christian.



Either way, this is an important fix for 6.3-rc!

John Ogness