On Thu, Oct 04, 2018 at 09:02:24PM -0700, [email protected] wrote:
> From: John Hubbard <[email protected]>
>
> Introduces put_user_page(), which simply calls put_page().
> This provides a way to update all get_user_pages*() callers,
> so that they call put_user_page(), instead of put_page().
>
> Also introduces put_user_pages(), and a few dirty/locked variations,
> as a replacement for release_pages(), for the same reasons.
> These may be used for subsequent performance improvements,
> via batching of pages to be released.
>
> This prepares for eventually fixing the problem described
> in [1], and is following a plan listed in [2], [3], [4].
>
> [1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
>
> [2] https://lkml.kernel.org/r/[email protected]
>     Proposed steps for fixing get_user_pages() + DMA problems.
>
> [3] https://lkml.kernel.org/r/[email protected]
>     Bounce buffers (otherwise [2] is not really viable).
>
> [4] https://lkml.kernel.org/r/[email protected]
>     Follow-up discussions.
>
> CC: Matthew Wilcox <[email protected]>
> CC: Michal Hocko <[email protected]>
> CC: Christopher Lameter <[email protected]>
> CC: Jason Gunthorpe <[email protected]>
> CC: Dan Williams <[email protected]>
> CC: Jan Kara <[email protected]>
> CC: Al Viro <[email protected]>
> CC: Jerome Glisse <[email protected]>
> CC: Christoph Hellwig <[email protected]>
> Signed-off-by: John Hubbard <[email protected]>
> ---
>  include/linux/mm.h | 42 ++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 40 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a61ebe8ad4ca..1a9aae7c659f 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -137,6 +137,8 @@ extern int overcommit_ratio_handler(struct ctl_table *, int, void __user *,
>  				    size_t *, loff_t *);
>  extern int overcommit_kbytes_handler(struct ctl_table *, int, void __user *,
>  				    size_t *, loff_t *);
> +int set_page_dirty(struct page *page);
> +int set_page_dirty_lock(struct page *page);
>
>  #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
>
> @@ -943,6 +945,44 @@ static inline void put_page(struct page *page)
>  		__put_page(page);
>  }
>
> +/* Placeholder version, until all get_user_pages*() callers are updated. */
> +static inline void put_user_page(struct page *page)
> +{
> +	put_page(page);
> +}
> +
> +/* For get_user_pages*()-pinned pages, use these variants instead of
> + * release_pages():
> + */
> +static inline void put_user_pages_dirty(struct page **pages,
> +					unsigned long npages)
> +{
> +	while (npages) {
> +		set_page_dirty(pages[npages]);
> +		put_user_page(pages[npages]);
> +		--npages;
> +	}
> +}
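For illustration (not part of the quoted patch), a caller-side conversion could
look roughly like the sketch below; the function name and parameters here are
hypothetical, and only the put_page() -> put_user_page() substitution comes
from the patch:

	#include <linux/mm.h>

	/*
	 * Hypothetical caller (sketch only): release pages that were pinned
	 * earlier with one of the get_user_pages*() calls.
	 */
	static void example_release_user_pages(struct page **pages,
					       unsigned long npages, bool dirty)
	{
		unsigned long i;

		for (i = 0; i < npages; i++) {
			if (dirty)
				set_page_dirty_lock(pages[i]);
			/* Previously: put_page(pages[i]); */
			put_user_page(pages[i]);
		}
	}

The quoted put_user_pages_dirty() helper would presumably cover the dirty case
of such a loop in a single call.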
Shouldn't these do the !PageDirty(page) thing?

Jason
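For context, the "!PageDirty(page) thing" presumably refers to the idiom a
number of existing get_user_pages*() callers use: skip set_page_dirty_lock()
on pages that are already dirty. A minimal sketch, with a hypothetical
function name:

	#include <linux/mm.h>
	#include <linux/page-flags.h>

	/*
	 * Sketch of the pattern in question: only mark a page dirty if it is
	 * not already dirty, then drop the get_user_pages*() reference.
	 */
	static void example_put_pages_dirty_lock(struct page **pages,
						 unsigned long npages)
	{
		unsigned long i;

		for (i = 0; i < npages; i++) {
			struct page *page = pages[i];

			if (!PageDirty(page))
				set_page_dirty_lock(page);
			put_user_page(page);
		}
	}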

