Re: [Nouveau] [PATCH v2] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread Christoph Hellwig
s/page/folio/ in the entire commit log?


Re: [Nouveau] [PATCH v2] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread Alistair Popple


Christoph Hellwig  writes:

> s/page/folio/ in the entire commit log?

I debated that but settled on leaving it as is because device exclusive
entries only deal with non-compound pages for now, and I didn't want to
give any other impression. Happy to change it though if people think
that would be better/clearer.


Re: [Nouveau] [PATCH v2] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread John Hubbard

On 3/29/23 18:25, Alistair Popple wrote:

Device exclusive page table entries are used to prevent CPU access to
a page whilst it is being accessed from a device. Typically this is
used to implement atomic operations when the underlying bus does not
support atomic access. When a CPU thread encounters a device exclusive
entry it locks the page and restores the original entry after calling
mmu notifiers to signal drivers that exclusive access is no longer
available.

The device exclusive entry holds a reference to the page, making it
safe to access the struct page whilst the entry is present. However,
the fault handling code does not hold the PTL when taking the page
lock. This means that if multiple threads fault concurrently on the
device exclusive entry, one will remove the entry whilst the others
wait on the page lock without holding a reference.

This can lead to threads locking or waiting on a folio with a zero
refcount. Whilst mmap_lock prevents the pages from being freed via
munmap(), they may still be freed by migration. This leads to warnings
such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the
refcount drops to zero.

Fix this by trying to take a reference on the folio before locking
it. The code already checks the PTE under the PTL and aborts if the
entry is no longer there. It is also possible that the folio has been
unmapped, freed and re-allocated, allowing a reference to be taken on
an unrelated folio. This case is also detected by the PTE check, and
the folio is unlocked without further changes.

Signed-off-by: Alistair Popple 
Reviewed-by: Ralph Campbell 
Reviewed-by: John Hubbard 
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: sta...@vger.kernel.org

---

Changes for v2:

  - Rebased to Linus master
  - Reworded commit message
  - Switched to using folios (thanks Matthew!)
  - Added Reviewed-by's


v2 looks correct to me.

thanks,
--
John Hubbard
NVIDIA


---
  mm/memory.c | 16 +++-
  1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index f456f3b5049c..01a23ad48a04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;





[Nouveau] [PATCH v2] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread Alistair Popple
Device exclusive page table entries are used to prevent CPU access to
a page whilst it is being accessed from a device. Typically this is
used to implement atomic operations when the underlying bus does not
support atomic access. When a CPU thread encounters a device exclusive
entry it locks the page and restores the original entry after calling
mmu notifiers to signal drivers that exclusive access is no longer
available.

The device exclusive entry holds a reference to the page, making it
safe to access the struct page whilst the entry is present. However,
the fault handling code does not hold the PTL when taking the page
lock. This means that if multiple threads fault concurrently on the
device exclusive entry, one will remove the entry whilst the others
wait on the page lock without holding a reference.

This can lead to threads locking or waiting on a folio with a zero
refcount. Whilst mmap_lock prevents the pages from being freed via
munmap(), they may still be freed by migration. This leads to warnings
such as PAGE_FLAGS_CHECK_AT_FREE due to the page being locked when the
refcount drops to zero.

Fix this by trying to take a reference on the folio before locking
it. The code already checks the PTE under the PTL and aborts if the
entry is no longer there. It is also possible that the folio has been
unmapped, freed and re-allocated, allowing a reference to be taken on
an unrelated folio. This case is also detected by the PTE check, and
the folio is unlocked without further changes.

Signed-off-by: Alistair Popple 
Reviewed-by: Ralph Campbell 
Reviewed-by: John Hubbard 
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Cc: sta...@vger.kernel.org

---

Changes for v2:

 - Rebased to Linus master
 - Reworded commit message
 - Switched to using folios (thanks Matthew!)
 - Added Reviewed-by's
---
 mm/memory.c | 16 +++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index f456f3b5049c..01a23ad48a04 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3563,8 +3563,21 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags))
+	/*
+	 * We need a reference to lock the folio because we don't hold
+	 * the PTL so a racing thread can remove the device-exclusive
+	 * entry and unmap it. If the folio is free the entry must
+	 * have been removed already. If it happens to have already
+	 * been re-allocated after being freed all we do is lock and
+	 * unlock it.
+	 */
+	if (!folio_try_get(folio))
+		return 0;
+
+	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+		folio_put(folio);
 		return VM_FAULT_RETRY;
+	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
 				(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
-- 
2.39.2



Re: [Nouveau] [PATCH] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread Alistair Popple


Alistair Popple  writes:

> John Hubbard  writes:
>
>> On 3/28/23 20:16, Matthew Wilcox wrote:
>> ...
>>>> +	if (!get_page_unless_zero(vmf->page))
>>>> +		return 0;
>>>  From a folio point of view: what the hell are you doing here?  Tail
>>> pages don't have individual refcounts; all the refcounts are actually
>
> I had stuck with using the page because none of this stuff supports
> compound pages yet, so we shouldn't see a tail page anyway. But point
> taken; I admit I need to find some time to get a deeper internalised
> understanding of folios than just s/page/folio.

And looking at this while updating it made the mixed usage of page/folio
look really weird/wrong, so thanks for pointing that out. Will send an
update.


Re: [Nouveau] [PATCH] mm: Take a page reference when removing device exclusive entries

2023-03-29 Thread Alistair Popple


John Hubbard  writes:

> On 3/28/23 20:16, Matthew Wilcox wrote:
> ...
>>> +   if (!get_page_unless_zero(vmf->page))
>>> +   return 0;
>>  From a folio point of view: what the hell are you doing here?  Tail
>> pages don't have individual refcounts; all the refcounts are actually

I had stuck with using the page because none of this stuff supports
compound pages yet, so we shouldn't see a tail page anyway. But point
taken; I admit I need to find some time to get a deeper internalised
understanding of folios than just s/page/folio.
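
To make that concrete, here is a minimal sketch (hypothetical function
name, simply mirroring what the v2 patch ends up doing with the mm folio
helpers) of taking the reference through the folio rather than on the
raw page, since only the folio carries a refcount:

/* Illustrative sketch only -- not the actual mm/memory.c code. */
static vm_fault_t sketch_remove_entry(struct vm_fault *vmf)
{
	/*
	 * page_folio() always resolves to the folio (head page), so the
	 * refcount is taken where it actually lives, never on a tail page.
	 */
	struct folio *folio = page_folio(vmf->page);

	if (!folio_try_get(folio))
		return 0;	/* folio already freed: the entry must be gone */

	if (!folio_lock_or_retry(folio, vmf->vma->vm_mm, vmf->flags)) {
		folio_put(folio);
		return VM_FAULT_RETRY;
	}

	/* ... re-check the PTE under the PTL before trusting the folio ... */

	folio_unlock(folio);
	folio_put(folio);
	return 0;
}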

> ohh, and I really should have caught that too. I plead spending too much
> time recently in a somewhat more driver-centric mindset, and failing to
> mentally shift gears properly for this case.
>
> Sorry for missing that!
>
> thanks,



[Nouveau] [PATCH] drm/nouveau/svm: remove unused ret variable

2023-03-29 Thread Tom Rix
clang with W=1 reports
drivers/gpu/drm/nouveau/nouveau_svm.c:929:6: error: variable
  'ret' set but not used [-Werror,-Wunused-but-set-variable]
int ret;
^
This variable is not used, so remove it.
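
For reference, a hedged, generic illustration (not the nouveau code) of
the pattern this warning flags:

static int do_something(void)
{
	return 42;
}

int example(void)
{
	int ret;		/* -Wunused-but-set-variable fires here: set below, never read */

	ret = do_something();	/* the assigned value is discarded */
	return 0;
}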

Signed-off-by: Tom Rix 
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index a74ba8d84ba7..e072d610f2f9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -926,15 +926,14 @@ nouveau_pfns_map(struct nouveau_svmm *svmm, struct mm_struct *mm,
 unsigned long addr, u64 *pfns, unsigned long npages)
 {
struct nouveau_pfnmap_args *args = nouveau_pfns_to_args(pfns);
-   int ret;
 
args->p.addr = addr;
args->p.size = npages << PAGE_SHIFT;
 
 	mutex_lock(&svmm->mutex);
 
-	ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args,
-				struct_size(args, p.phys, npages), NULL);
+	nvif_object_ioctl(&svmm->vmm->vmm.object, args,
+			  struct_size(args, p.phys, npages), NULL);
 
 	mutex_unlock(&svmm->mutex);
 }
-- 
2.27.0



Re: [Nouveau] [PATCH 6.2 regression fix] drm/nouveau/kms: Fix backlight registration

2023-03-29 Thread Hans de Goede
Hi,

On 3/29/23 00:23, Lyude Paul wrote:
> Reviewed-by: Lyude Paul 
> 
> (Also note to Mark: this is my way of letting you know someone fixed the
> regression with backlight controls upstream, looking into the weird bright
> screen after resume issue)

Thanks.

I have pushed this to drm-misc-fixes now.

I'll also submit a downstream Fedora kernel pull-req with this
to get this resolved in the Fedora kernels.

Regards,

Hans



> 
> On Sun, 2023-03-26 at 22:54 +0200, Hans de Goede wrote:
>> The nouveau code used to call drm_fb_helper_initial_config() from
>> nouveau_fbcon_init() before calling drm_dev_register(). This would
>> probe all connectors so that drm_connector->status could be used during
>> backlight registration which runs from nouveau_connector_late_register().
>>
>> After commit 4a16dd9d18a0 ("drm/nouveau/kms: switch to drm fbdev helpers")
>> the fbdev emulation code, which now is a drm-client, can only run after
>> drm_dev_register(). So during backlight registration the connectors are
>> not probed yet and the drm_connector->status == connected check in
>> nv50_backlight_init() would now always fail.
>>
>> Replace the drm_connector->status == connected check with
>> a drm_helper_probe_detect() == connected check to fix nv_backlight
>> no longer getting registered because of this.
>>
>> Fixes: 4a16dd9d18a0 ("drm/nouveau/kms: switch to drm fbdev helpers")
>> Link: https://gitlab.freedesktop.org/drm/nouveau/-/issues/202
>> Signed-off-by: Hans de Goede 
>> ---
>>  drivers/gpu/drm/nouveau/nouveau_backlight.c | 7 ++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
>> index 40409a29f5b6..91b5ecc57538 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
>> @@ -33,6 +33,7 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include <drm/drm_probe_helper.h>
>>  
>>  #include "nouveau_drv.h"
>>  #include "nouveau_reg.h"
>> @@ -299,8 +300,12 @@ nv50_backlight_init(struct nouveau_backlight *bl,
>>  struct nouveau_drm *drm = nouveau_drm(nv_encoder->base.base.dev);
>>  	struct nvif_object *device = &drm->client.device.object;
>>  
>> +/*
>> + * Note when this runs the connectors have not been probed yet,
>> + * so nv_conn->base.status is not set yet.
>> + */
>>  	if (!nvif_rd32(device, NV50_PDISP_SOR_PWM_CTL(ffs(nv_encoder->dcb->or) - 1)) ||
>> -	    nv_conn->base.status != connector_status_connected)
>> +	    drm_helper_probe_detect(&nv_conn->base, NULL, false) != connector_status_connected)
>>  return -ENODEV;
>>  
>>  if (nv_conn->type == DCB_CONNECTOR_eDP) {
>