On 7/15/19 12:52 PM, Bharath Vedartham wrote:
> There have been issues with get_user_pages and filesystem writeback.
> The issues are better described in [1].
> 
> The solution being proposed wants to keep track of gup-pinned pages, which 
> will allow further steps to coordinate between subsystems using gup.
> 
> put_user_page() simply calls put_page() for now, but the implementation 
> will change once all call sites of put_page() are converted.
> 
> I currently do not have the driver to test. Could I have some suggestions to 
> test this code? The solution is currently implemented in [2] and
> it would be great if we could apply the patch on top of [2] and run some 
> tests to check if any regressions occur.

Hi Bharath,

Process point: the above paragraph, and other meta-questions (about the patch 
rather than part of the patch), should be placed either after the "---" line or 
in a cover letter (git send-email --cover-letter). That way, the patch itself 
remains in a committable state.
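For reference, the usual cover-letter workflow looks something like this (the
output directory and revision range below are only examples):

```shell
# Generate the series plus a cover letter; outgoing/ and HEAD~1 are examples
git format-patch --cover-letter -o outgoing/ HEAD~1

# Fill in outgoing/0000-cover-letter.patch, then send the series
git send-email outgoing/*.patch
```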

One more below:

> 
> [1] https://lwn.net/Articles/753027/
> [2] https://github.com/johnhubbard/linux/tree/gup_dma_core
> 
> Cc: Matt Sickler <matt.sick...@daktronics.com>
> Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
> Cc: Jérôme Glisse <jgli...@redhat.com>
> Cc: Ira Weiny <ira.we...@intel.com>
> Cc: John Hubbard <jhubb...@nvidia.com>
> Cc: linux...@kvack.org
> Cc: de...@driverdev.osuosl.org
> 
> Signed-off-by: Bharath Vedartham <linux.b...@gmail.com>
> ---
>  drivers/staging/kpc2000/kpc_dma/fileops.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/staging/kpc2000/kpc_dma/fileops.c b/drivers/staging/kpc2000/kpc_dma/fileops.c
> index 6166587..82c70e6 100644
> --- a/drivers/staging/kpc2000/kpc_dma/fileops.c
> +++ b/drivers/staging/kpc2000/kpc_dma/fileops.c
> @@ -198,9 +198,7 @@ int  kpc_dma_transfer(struct dev_private_data *priv, struct kiocb *kcb, unsigned
>       sg_free_table(&acd->sgt);
>   err_dma_map_sg:
>   err_alloc_sg_table:
> -     for (i = 0 ; i < acd->page_count ; i++){
> -             put_page(acd->user_pages[i]);
> -     }
> +     put_user_pages(acd->user_pages, acd->page_count);
>   err_get_user_pages:
>       kfree(acd->user_pages);
>   err_alloc_userpages:
> @@ -229,9 +227,7 @@ void  transfer_complete_cb(struct aio_cb_data *acd, size_t xfr_count, u32 flags)
>       
>       dma_unmap_sg(&acd->ldev->pldev->dev, acd->sgt.sgl, acd->sgt.nents, acd->ldev->dir);
>       
> -     for (i = 0 ; i < acd->page_count ; i++){
> -             put_page(acd->user_pages[i]);
> -     }
> +     put_user_pages(acd->user_pages, acd->page_count);
>       
>       sg_free_table(&acd->sgt);
>       
> 

Because this is a common pattern, and because the code here likely doesn't need 
to set the pages dirty before the dma_unmap_sg call, I think the following 
(untested) would be better than the above diff hunk:

diff --git a/drivers/staging/kpc2000/kpc_dma/fileops.c b/drivers/staging/kpc2000/kpc_dma/fileops.c
index 48ca88bc6b0b..d486f9866449 100644
--- a/drivers/staging/kpc2000/kpc_dma/fileops.c
+++ b/drivers/staging/kpc2000/kpc_dma/fileops.c
@@ -211,16 +211,13 @@ void  transfer_complete_cb(struct aio_cb_data *acd, size_t xfr_count, u32 flags)
        BUG_ON(acd->ldev == NULL);
        BUG_ON(acd->ldev->pldev == NULL);
 
-       for (i = 0 ; i < acd->page_count ; i++) {
-               if (!PageReserved(acd->user_pages[i])) {
-                       set_page_dirty(acd->user_pages[i]);
-               }
-       }
-
        dma_unmap_sg(&acd->ldev->pldev->dev, acd->sgt.sgl, acd->sgt.nents, acd->ldev->dir);
 
        for (i = 0 ; i < acd->page_count ; i++) {
-               put_page(acd->user_pages[i]);
+               if (!PageReserved(acd->user_pages[i]))
+                       put_user_pages_dirty(&acd->user_pages[i], 1);
+               else
+                       put_user_page(acd->user_pages[i]);
        }
 
        sg_free_table(&acd->sgt);

Assuming that you make those two changes, you can add:

    Reviewed-by: John Hubbard <jhubb...@nvidia.com>


thanks,
-- 
John Hubbard
NVIDIA
