From: Ralph Campbell <rcampb...@nvidia.com>

[ Upstream commit ed710a6ed797430026aa5116dd0ab22378798b69 ]

If system memory is migrated to device private memory and no GPU MMU
page table entry exists, the GPU will fault and call hmm_range_fault()
to get the PFN for the page. Since the .dev_private_owner pointer in
struct hmm_range is not set, hmm_range_fault() returns an error, which
results in the GPU program stopping with a fatal fault.
Fix this by setting .dev_private_owner when filling in the hmm_range,
so the device private pages owned by nouveau are resolved correctly.
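For context, hmm_range_fault() only resolves a device private page to a
PFN when the page's pgmap owner matches the range's .dev_private_owner
(the check introduced by the commit referenced in the Fixes tag below);
otherwise the lookup fails. The fragment below is a minimal,
self-contained userspace-style model of that ownership check, not the
actual mm/hmm.c or nouveau code; the names (fake_pgmap, fake_page,
fake_range, resolve_device_private) are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified stand-ins for struct dev_pagemap, struct page and
     * struct hmm_range. */
    struct fake_pgmap { void *owner; };
    struct fake_page  { struct fake_pgmap *pgmap; };
    struct fake_range { void *dev_private_owner; };

    /*
     * Model of the ownership check: a device private page is only
     * resolved when its pgmap owner matches the caller's
     * dev_private_owner; otherwise the fault cannot be satisfied.
     */
    static bool resolve_device_private(const struct fake_range *range,
                                       const struct fake_page *page)
    {
            return page->pgmap->owner == range->dev_private_owner;
    }

    int main(void)
    {
            int drm_dev;                        /* stands in for drm->dev */
            struct fake_pgmap pgmap = { .owner = &drm_dev };
            struct fake_page page = { .pgmap = &pgmap };

            struct fake_range unset = { .dev_private_owner = NULL };
            struct fake_range fixed = { .dev_private_owner = &drm_dev };

            printf("owner unset: %s\n",
                   resolve_device_private(&unset, &page) ?
                   "PFN returned" : "fault/error");
            printf("owner set:   %s\n",
                   resolve_device_private(&fixed, &page) ?
                   "PFN returned" : "fault/error");
            return 0;
    }

With the owner left NULL, the model reports "fault/error", mirroring the
fatal GPU fault described above; setting it to the same pointer used as
the pgmap owner makes the lookup succeed, which is what the one-line
change below does for nouveau.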

Fixes: 08ddddda667b ("mm/hmm: check the device private page owner in hmm_range_fault()")
Cc: sta...@vger.kernel.org
Signed-off-by: Ralph Campbell <rcampb...@nvidia.com>
Reviewed-by: Jason Gunthorpe <j...@mellanox.com>
Signed-off-by: Ben Skeggs <bske...@redhat.com>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
index 645fedd77e21b..a9ce86740799f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -534,6 +534,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
                .flags = nouveau_svm_pfn_flags,
                .values = nouveau_svm_pfn_values,
                .pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT,
+               .dev_private_owner = drm->dev,
        };
        struct mm_struct *mm = notifier->notifier.mm;
        long ret;
-- 
2.25.1
