Re: [Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access

2021-06-02 Thread Balbir Singh
On Wed, May 26, 2021 at 12:17:18AM -0700, John Hubbard wrote:
> On 5/25/21 4:51 AM, Balbir Singh wrote:
> ...
> > > How beneficial is this code to nouveau users?  I see that it permits a
> > > part of OpenCL to be implemented, but how useful/important is this in
> > > the real world?
> > 
> > That is a very good question! I've not reviewed the code, but a sample
> > program with the described use case would make things easy to parse.
> > I suspect that is not easy to build at the moment?
> > 
> 
> The cover letter says this:
> 
> This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> which checks that GPU atomic accesses to system memory are atomic. Without
> this series the test fails as there is no way of write-protecting the page
> mapping which results in the device clobbering CPU writes. For reference
> the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
> 
> Further testing has been performed by adding support for testing exclusive
> access to the hmm-tests kselftests.
> 
> ...so that seems to cover the "sample program" request, at least.

Thanks, I'll take a look
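
For reference, my reading of that description is a check roughly like the
following: the GPU atomically increments a counter that lives in ordinary
(fine-grained SVM) system memory while the CPU increments the same counter,
and the test passes only if no increments are lost. A hypothetical OpenCL
kernel for the GPU side (just a sketch to illustrate the idea, not the actual
test at the URL above) could be:

  /*
   * Hypothetical sketch only. The host would allocate the counter with
   * clSVMAlloc() (fine-grained SVM with atomics), launch this kernel and
   * increment the same counter from a CPU thread; if the accesses are
   * truly atomic the final value is gpu_iters * global_size plus the
   * number of CPU increments.
   */
  __kernel void svm_atomic_inc(volatile __global int *counter, int iters)
  {
      for (int i = 0; i < iters; i++)
          atomic_inc(counter);   /* atomic RMW on system memory */
  }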

> 
> > I wonder how we co-ordinate all the work the mm is doing (page migration,
> > reclaim) with device exclusive access? Do we have any numbers for the
> > worst-case page fault latency when something is marked away for exclusive
> > access?
> 
> CPU page fault latency is approximately "terrible", if a page is resident on
> the GPU. We have to spin up a DMA engine on the GPU and have it copy the page
> over the PCIe bus, after all.
> 
> > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
> 
> Yes, for now.
> 
> > only impact the address space of programs using the GPU. Should the
> > exclusively marked range live in the unreclaimable list and be recycled
> > back to active/inactive to account for the fact that
> > 
> > 1. It is not reclaimable and reclaim will only hurt via page faults?
> > 2. It ages the page correctly, or at least allows for that possibility,
> >    when the page is used by the GPU.
> 
> I'm not sure that that is *necessarily* something we can conclude. It depends
> upon access patterns of each program. For example, a "reduction" parallel
> program sends over lots of data to the GPU, and only a tiny bit of (reduced!)
> data comes back to the CPU. In that case, freeing the physical page on the
> CPU is actually the best decision for the OS to make (if the OS is
> sufficiently prescient).
>

With a shared device or a device exclusive range, it would be good to get the
device usage pattern and update the mm with that knowledge, so that the LRU can
be better maintained. Your comment seems to suggest that a page used by the GPU
might be a good candidate for reclaim based on the CPU's understanding of the
page's age, i.e. that aging should not account for use by the device
(are GPU workloads access-once-and-discard?).

Balbir Singh.



Re: [Nouveau] [PATCH v9 07/10] mm: Device exclusive memory access

2021-05-25 Thread Balbir Singh
On Mon, May 24, 2021 at 03:11:57PM -0700, Andrew Morton wrote:
> On Mon, 24 May 2021 23:27:22 +1000 Alistair Popple  wrote:
> 
> > Some devices require exclusive write access to shared virtual
> > memory (SVM) ranges to perform atomic operations on that memory. This
> > requires CPU page tables to be updated to deny access whilst atomic
> > operations are occurring.
> > 
> > In order to do this introduce a new swap entry
> > type (SWP_DEVICE_EXCLUSIVE). When a SVM range needs to be marked for
> > exclusive access by a device all page table mappings for the particular
> > range are replaced with device exclusive swap entries. This causes any
> > CPU access to the page to result in a fault.
> > 
> > Faults are resolved by replacing the faulting entry with the original
> > mapping. This results in MMU notifiers being called which a driver uses
> > to update access permissions such as revoking atomic access. After
> > notifiers have been called the device will no longer have exclusive
> > access to the region.
> > 
> > Walking of the page tables to find the target pages is handled by
> > get_user_pages() rather than a direct page table walk. A direct page
> > table walk similar to what migrate_vma_collect()/unmap() does could also
> > have been utilised. However this resulted in more code similar in
> > functionality to what get_user_pages() provides as page faulting is
> > required to make the PTEs present and to break COW.
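
In driver terms, the flow described above would look roughly like the sketch
below: take the mmap lock, ask the core mm to replace the CPU mapping with a
device exclusive entry, program the device mapping used for the atomic
operation, and revoke it again when an MMU notifier invalidation (other than
the one raised by the marking itself) hits the range. Helper, event and field
names are assumed from this series; the my_gpu_*() functions and my_gpu_owner
are hypothetical, and error handling, page lock/reference handling and
notifier-race retries are omitted. An illustrative sketch, not nouveau's
actual implementation:

  #include <linux/mm.h>
  #include <linux/rmap.h>
  #include <linux/mmu_notifier.h>

  /* Hypothetical driver pieces, assumed to exist elsewhere in the driver. */
  extern void *my_gpu_owner;
  void my_gpu_map_atomic(struct page *page, unsigned long addr);
  void my_gpu_revoke_atomic(unsigned long start, unsigned long end);

  /* Mark a single page for exclusive device access (sketch only). */
  static int my_gpu_grant_atomic(struct mm_struct *mm, unsigned long addr,
                                 void *owner)
  {
          struct page *page = NULL;
          int ret;

          mmap_read_lock(mm);
          /* Replace the CPU PTE with a device exclusive swap entry. */
          ret = make_device_exclusive_range(mm, addr, addr + PAGE_SIZE,
                                            &page, owner);
          mmap_read_unlock(mm);
          if (ret <= 0 || !page)
                  return -EBUSY;

          /* Program the device mapping used for the atomic operation. */
          my_gpu_map_atomic(page, addr);
          return 0;
  }

  /*
   * Interval notifier callback: a CPU fault restoring the original mapping
   * (or any other invalidation of the range) revokes the device's access.
   */
  static bool my_gpu_invalidate(struct mmu_interval_notifier *mni,
                                const struct mmu_notifier_range *range,
                                unsigned long cur_seq)
  {
          /* Skip the notification raised by our own exclusive marking. */
          if (range->event == MMU_NOTIFY_EXCLUSIVE &&
              range->owner == my_gpu_owner)
                  return true;

          mmu_interval_set_seq(mni, cur_seq);
          my_gpu_revoke_atomic(range->start, range->end);
          return true;
  }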
> > 
> > ...
> >
> >  Documentation/vm/hmm.rst |  17 
> >  include/linux/mmu_notifier.h |   6 ++
> >  include/linux/rmap.h |   4 +
> >  include/linux/swap.h |   7 +-
> >  include/linux/swapops.h  |  44 -
> >  mm/hmm.c |   5 +
> >  mm/memory.c  | 128 +++-
> >  mm/mprotect.c|   8 ++
> >  mm/page_vma_mapped.c |   9 +-
> >  mm/rmap.c| 186 +++
> >  10 files changed, 405 insertions(+), 9 deletions(-)
> > 
> 
> This is quite a lot of code added to core MM for a single driver.
> 
> Is there any expectation that other drivers will use this code?
> 
> Is there a way of reducing the impact (code size, at least) for systems
> which don't need this code?
>
> How beneficial is this code to nouveau users?  I see that it permits a
> part of OpenCL to be implemented, but how useful/important is this in
> the real world?

That is a very good question! I've not reviewed the code, but a sample
program with the described use case would make things easy to parse.
I suspect that is not easy to build at the moment?

I wonder how we co-ordinate all the work the mm is doing (page migration,
reclaim) with device exclusive access? Do we have any numbers for the
worst-case page fault latency when something is marked away for exclusive
access?
I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
only impact the address space of programs using the GPU. Should the exclusively
marked range live in the unreclaimable list and be recycled back to
active/inactive to account for the fact that

1. It is not reclaimable and reclaim will only hurt via page faults?
2. It ages the page correctly, or at least allows for that possibility,
   when the page is used by the GPU.

Balbir Singh.
 