Some NVIDIA GPUs do not support direct atomic access to system memory
via PCIe. Instead this must be emulated by granting the GPU exclusive
access to the memory. This is achieved by replacing CPU page table
entries with special swap entries that fault on userspace access.
The driver then grants the GPU permission to update the page via its
own page tables; when the CPU touches the special entry the fault
handler revokes the GPU's exclusive access and restores the original
mapping.
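As a sketch of the driver-side flow (assuming the
make_device_exclusive_range() helper added later in this series; the
wrapper name is illustrative and error handling is trimmed):

#include <linux/mm.h>
#include <linux/rmap.h>

/* Sketch: mark one page for exclusive device access before mapping it
 * into the GPU page tables. The "owner" pointer lets the driver's MMU
 * notifier recognise its own invalidations. Pages are returned locked
 * with a reference held.
 */
static int grant_atomic_access(struct mm_struct *mm, unsigned long addr,
			       void *owner)
{
	struct page *page = NULL;
	int ret;

	mmap_read_lock(mm);
	/* Replace the CPU PTE with a device exclusive swap entry. */
	ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
					  &page, owner);
	mmap_read_unlock(mm);
	if (ret <= 0 || !page)
		return -EBUSY;

	/* ... program the GPU page tables for atomic access here ... */

	unlock_page(page);
	put_page(page);
	return 0;
}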
Call mmu_interval_notifier_insert() as part of nouveau_range_fault().
This doesn't introduce any functional change but makes it easier for a
subsequent patch to alter the behaviour of nouveau_range_fault() to
support GPU atomic operations.
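The registration follows the usual interval notifier pattern, roughly
like this (a sketch; nouveau_svm_mni_ops is the existing notifier ops
table and the function signature is abbreviated):

#include <linux/mmu_notifier.h>

/* Sketch: register a per-fault interval notifier covering the faulting
 * page so a later patch can detect invalidations that race with the
 * fault and retry.
 */
static int nouveau_range_fault(struct mm_struct *mm, unsigned long start)
{
	struct mmu_interval_notifier notifier;
	int ret;

	ret = mmu_interval_notifier_insert(&notifier, mm, start, PAGE_SIZE,
					   &nouveau_svm_mni_ops);
	if (ret)
		return ret;

	/* ... resolve the fault, retrying on invalidation ... */

	mmu_interval_notifier_remove(&notifier);
	return ret;
}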
Signed-off-by: Alistair Popple
---
drivers/gpu/drm/nouve
Adds some selftests for exclusive device memory.
Signed-off-by: Alistair Popple
Acked-by: Jason Gunthorpe
Tested-by: Ralph Campbell
Reviewed-by: Ralph Campbell
---
lib/test_hmm.c | 124 +++
lib/test_hmm_uapi.h | 2 +
tools/testing/s
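From userspace the new test follows the existing test_hmm pattern,
roughly as below (a sketch assuming the HMM_DMIRROR_EXCLUSIVE ioctl and
struct hmm_dmirror_cmd from lib/test_hmm_uapi.h):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include "test_hmm_uapi.h"

int main(void)
{
	struct hmm_dmirror_cmd cmd = { 0 };
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = open("/dev/hmm_dmirror0", O_RDWR);
	char *buf;

	if (fd < 0)
		return 1;
	buf = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;
	buf[0] = 0;

	/* Ask the dummy device for exclusive access to the page. */
	cmd.addr = (unsigned long)buf;
	cmd.npages = 1;
	if (ioctl(fd, HMM_DMIRROR_EXCLUSIVE, &cmd))
		return 1;

	/* A plain CPU write must fault, revoke the device's exclusive
	 * access and still complete transparently. */
	buf[0] = 1;
	return 0;
}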
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this introduce a new swap entry
type (SWP_DEVICE_EXCLUSIVE). When a SVM range needs to be marked for
exclusive access by a device, all page table mappings for the range are
replaced with device exclusive swap entries, which cause any CPU access
to fault.
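A sketch of how the CPU fault path then recognises the new entry type,
mirroring the existing device private handling (helper names are taken
from this series; remove_device_exclusive_entry() restores the original
PTE after notifying the device):

#include <linux/mm.h>
#include <linux/swapops.h>

/* Sketch: dispatch on the swap entry type found in a faulting PTE. */
static vm_fault_t handle_special_entry(struct vm_fault *vmf,
				       swp_entry_t entry)
{
	if (is_device_exclusive_entry(entry)) {
		/* Restore the original mapping so the CPU access can
		 * proceed; the device loses its exclusive access. */
		vmf->page = pfn_swap_entry_to_page(entry);
		return remove_device_exclusive_entry(vmf);
	}
	return VM_FAULT_SIGBUS;
}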
Migration is currently implemented as a mode of operation for
try_to_unmap_one() generally specified by passing the TTU_MIGRATION flag
or in the case of splitting a huge anonymous page TTU_SPLIT_FREEZE.
However it does not have much in common with the rest of the unmap
functionality of try_to_unmap_one(), so splitting migration into its
own function makes the code easier to follow.
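The calling convention before and after the split, as a sketch
(try_to_migrate() being the new dedicated entry point):

#include <linux/rmap.h>

/* Sketch: unmap a page in preparation for migration. */
static void unmap_for_migration(struct page *page)
{
	/*
	 * Before: migration was a mode of the generic unmap path:
	 *     try_to_unmap(page, TTU_MIGRATION | TTU_IGNORE_MLOCK);
	 * After: a dedicated entry point installs migration entries:
	 */
	try_to_migrate(page, 0);
}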
The behaviour of try_to_unmap_one() is difficult to follow because it
performs different operations based on a fairly large set of flags used
in different combinations.
TTU_MUNLOCK is one such flag. However it is exclusively used by
try_to_munlock() which specifies no other flags. Therefore, rather than
overload try_to_unmap_one() with unrelated behaviour, split the munlock
handling out into its own function and remove the flag.
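A sketch of the resulting walk: munlock gets its own rmap callback that
only rechecks VM_LOCKED and never unmaps anything (the callback shape
is simplified from the real page_vma_mapped_walk() loop):

#include <linux/rmap.h>

/* Sketch: per-VMA callback for the munlock rmap walk. */
static bool page_mlock_one(struct page *page, struct vm_area_struct *vma,
			   unsigned long address, void *unused)
{
	/* If any mapping VMA is still VM_LOCKED the page stays mlocked;
	 * unlike try_to_unmap_one() no PTEs are modified here. */
	if (vma->vm_flags & VM_LOCKED)
		mlock_vma_page(page);
	return true;	/* keep walking the other mappings */
}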
Both migration and device private pages use special swap entries that
are manipulated by a range of inline functions. The arguments to these
are somewhat inconsistent, so rework them to remove flag-type arguments
and to make the arguments similar for both read and write entry
creation.
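From a caller's side the reworked helpers look roughly like this (a
sketch; the wrapper function is illustrative):

#include <linux/swapops.h>

/* Sketch: create a migration entry without a boolean flag argument. */
static swp_entry_t migration_entry_for(struct page *page, bool writable)
{
	unsigned long pfn = page_to_pfn(page);

	/* Before: make_migration_entry(page, writable); */
	return writable ? make_writable_migration_entry(pfn)
			: make_readable_migration_entry(pfn);
}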
Signed-off-by: Alistair Popple
Remove multiple similar inline functions for dealing with different
types of special swap entries.
Both migration and device private swap entries use the swap offset to
store a pfn. Instead of multiple inline functions to obtain a struct
page for each swap entry type use a common function
pfn_swap_entry_to_page().
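In callers the consolidation looks like this (a sketch):

#include <linux/swapops.h>

/* Sketch: one helper replaces the per-type lookups. */
static struct page *special_entry_page(swp_entry_t entry)
{
	/*
	 * Before:
	 *     migration_entry_to_page(entry);
	 *     device_private_entry_to_page(entry);
	 * After: both entry types keep a pfn in the swap offset, so a
	 * single helper serves either:
	 */
	return pfn_swap_entry_to_page(entry);
}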
This is the seventh version of a series to add support to Nouveau for
atomic memory operations on OpenCL shared virtual memory (SVM) regions.
This version primarily improves readability of the Nouveau fault priority
calculation code along with other minor functional and cosmetic
improvements listed below.
On Wed, Mar 24, 2021 at 06:36:06PM -0400, Lyude wrote:
> From: Lyude Paul
>
> This introduces the igt_nouveau library, which enables support for tiling
> formats on nouveau, along with accelerated clears for allocated bos in VRAM
> using the dma-copy engine present on Nvidia hardware since Tesla.
From: Tobias Klausmann
[ Upstream commit e94c55b8e0a0bbe9a026250cf31e2fa45957d776 ]
Starting with commit f295c8cfec833c2707ff1512da10d65386dde7af
("drm/nouveau: fix dma syncing warning with debugging on.")
the following oops occurs:
BUG: kernel NULL pointer dereference, address: 000
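The eventual fix guards the sync paths against a missing DMA address
array, roughly as follows (a sketch paraphrasing the upstream change;
only the added guard is shown):

/* Sketch: bail out of the CPU/GPU sync when the ttm_tt has no DMA
 * addresses allocated, instead of dereferencing a NULL array.
 */
void nouveau_bo_sync_for_device(struct nouveau_bo *nvbo)
{
	struct ttm_tt *ttm_dma = (struct ttm_tt *)nvbo->bo.ttm;

	if (!ttm_dma || !ttm_dma->dma_address)
		return;

	/* ... dma_sync_single_for_device() on each page as before ... */
}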
On Wed, Mar 24, 2021 at 06:24:54PM -0400, Lyude Paul wrote:
> On Thu, 2021-03-18 at 11:17 +0200, Petri Latvala wrote:
> > On Wed, Mar 17, 2021 at 06:38:27PM -0400, Lyude wrote:
> > > From: Lyude Paul
> > >
> > > This introduces the igt_nouveau library, which enables support for tiling
> > > formats on nouveau, along with accelerated clears for allocated bos in VRAM