On Friday, 19 February 2021 8:47:41 PM AEDT Christoph Hellwig wrote:
> > page = migration_entry_to_page(swpent);
> > else if (is_device_private_entry(swpent))
> > page = device_private_entry_to_page(swpent);
> > + else if (is_device_exclusive_entry(swpent))
> > + page = device_exclusive_entry_to_page(swpent);
On Fri, Feb 19, 2021 at 09:47:41AM +0000, Christoph Hellwig wrote:
> > diff --git a/include/linux/hmm.h b/include/linux/hmm.h
> > index 866a0fa104c4..5d28ff6d4d80 100644
> > +++ b/include/linux/hmm.h
> > @@ -109,6 +109,10 @@ struct hmm_range {
> > */
> > int hmm_range_fault(struct hmm_range *range);
> page = migration_entry_to_page(swpent);
> else if (is_device_private_entry(swpent))
> page = device_private_entry_to_page(swpent);
> + else if (is_device_exclusive_entry(swpent))
> + page = device_exclusive_entry_to_page(swpent);
Hi Alistair,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on kselftest/next]
[also build test ERROR on linus/master v5.11 next-20210218]
[cannot apply to hnaz-linux-mm/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]
Apologies for the noise; it looks like I don't have a CONFIG_DEVICE_PRIVATE=n
build in my tests, so I missed creating definitions of the new static inline
functions for that configuration.
I'll wait for some feedback on the overall approach and fix this in a v3.
- Alistair
Some devices require exclusive write access to shared virtual
memory (SVM) ranges to perform atomic operations on that memory. This
requires CPU page tables to be updated to deny access whilst atomic
operations are occurring.
In order to do this, introduce a new swap entry type (SWP_DEVICE_EXCLUSIVE).