Re: [RFC PATCH 3/3] powerpc/mm/iommu: Allow migration of cma allocated pages during mm_iommu_get

2018-09-18 Thread Aneesh Kumar K.V

On 9/18/18 9:21 AM, David Gibson wrote:

On Mon, Sep 03, 2018 at 10:07:33PM +0530, Aneesh Kumar K.V wrote:

Current code doesn't do page migration if the page allocated is a compound page.
With HugeTLB migration support, we can end up allocating hugetlb pages from the
CMA region. Also THP pages can be allocated from the CMA region. This patch
updates the code to handle compound pages correctly.

This adds a new helper, get_user_pages_cma_migrate. It does one get_user_pages
call with the right count, instead of doing one get_user_pages per page. That
avoids reading the page table multiple times. The helper could possibly be used
by other subsystems if we have more users.

The patch also converts the hpas member of mm_iommu_table_group_mem_t to a
union. We use the same storage location to store pointers to struct page. We
cannot update all the code paths to use struct page *, because we access hpas
in real mode and we can't do the struct page * to pfn conversion there.

Signed-off-by: Aneesh Kumar K.V 
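A rough illustration of the in-place conversion the commit message describes
(the helper name below is made up, not from the patch): while pinning in
virtual mode the array holds struct page pointers, and before the region is
handed to the real-mode path each slot is rewritten as a physical address.
On 64-bit both union members are the same size, so this can be done in place.

#include <linux/mm.h>

/*
 * Hypothetical helper: hpages[] and hpas[] alias the same vmalloc'ed
 * storage through the union, so each struct page pointer is simply
 * overwritten with the corresponding physical address.
 */
static void sketch_pages_to_hpas(struct page **hpages, phys_addr_t *hpas,
                                 unsigned long entries)
{
        unsigned long i;

        for (i = 0; i < entries; i++)
                hpas[i] = (phys_addr_t)page_to_pfn(hpages[i]) << PAGE_SHIFT;
}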


This approach doesn't seem quite right to me.  It's specific to pages
mapped into the IOMMU.  It's true that it will address the obvious case
we have, of vfio-using guests fragmenting the CMA for other guests.

But AFAICT, fragmenting the CMA could happen with *any* locked memory,
not just things that are IOMMU mapped for VFIO.  It could happen, for
example, with a guest not using vfio but using -realtime mlock=on, or
with an unrelated program using locked memory (e.g. gpg or something
else that locks memory for security reasons).

AFAICT this approach won't fix the problem for that case.



Yes, and we should migrate away pages that we allocated out of the CMA 
region before we pin/mlock them. This handles the long-term pin w.r.t. 
vfio. We should do the same for mlock.
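As a minimal sketch of that check (the function name is hypothetical; the
actual patch folds the test into its GUP helper), the question asked before
taking a long-term pin or mlock reference is simply whether the backing page
sits in a CMA pageblock:

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Hypothetical predicate: would keeping a long-term reference on this
 * page fragment the CMA region? Compound (THP/hugetlb) pages are
 * tested via their head page.
 */
static bool sketch_needs_cma_migration(struct page *page)
{
        return is_migrate_cma_page(compound_head(page));
}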


-aneesh



Re: [RFC PATCH 3/3] powerpc/mm/iommu: Allow migration of cma allocated pages during mm_iommu_get

2018-09-17 Thread David Gibson
On Mon, Sep 03, 2018 at 10:07:33PM +0530, Aneesh Kumar K.V wrote:
> Current code doesn't do page migration if the page allocated is a compound
> page. With HugeTLB migration support, we can end up allocating hugetlb pages
> from the CMA region. Also THP pages can be allocated from the CMA region.
> This patch updates the code to handle compound pages correctly.
> 
> This adds a new helper, get_user_pages_cma_migrate. It does one
> get_user_pages call with the right count, instead of doing one
> get_user_pages per page. That avoids reading the page table multiple times.
> The helper could possibly be used by other subsystems if we have more users.
> 
> The patch also converts the hpas member of mm_iommu_table_group_mem_t to a
> union. We use the same storage location to store pointers to struct page. We
> cannot update all the code paths to use struct page *, because we access
> hpas in real mode and we can't do the struct page * to pfn conversion there.
> 
> Signed-off-by: Aneesh Kumar K.V 

This approach doesn't seem quite right to me.  It's specific to pages
mapped into the IOMMU.  It's true that it will address the obvious case
we have, of vfio-using guests fragmenting the CMA for other guests.

But AFAICT, fragmenting the CMA could happen with *any* locked memory,
not just things that are IOMMU mapped for VFIO.  It could happen, for
example, with a guest not using vfio but using -realtime mlock=on, or
with an unrelated program using locked memory (e.g. gpg or something
else that locks memory for security reasons).

AFAICT this approach won't fix the problem for that case.
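The hunk adding get_user_pages_cma_migrate() is cut off in the quoted diff
below, so here is a rough sketch of the flow the commit message describes,
for base pages only: pin the whole range with one get_user_pages_fast()
call, and if any pinned page landed in a CMA pageblock, drop the pins,
migrate those pages out of CMA and pin the range again. All sketch_ names
are invented for illustration; the patch's replacement allocator also
handles hugetlb and THP (which this sketch does not), and error handling is
omitted.

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/swap.h>
#include <linux/migrate.h>

/* Hypothetical migration target allocator: base pages only, never CMA. */
static struct page *sketch_new_non_cma_page(struct page *page,
                                            unsigned long private)
{
        return alloc_page(GFP_HIGHUSER | __GFP_NORETRY | __GFP_NOWARN);
}

static int sketch_gup_cma_migrate(unsigned long start, int nr_pages,
                                  int write, struct page **pages)
{
        LIST_HEAD(cma_pages);
        bool retried = false;
        int i, pinned;

retry:
        pinned = get_user_pages_fast(start, nr_pages, write, pages);
        if (pinned <= 0 || retried)
                return pinned;

        /* collect pinned pages that happen to live in CMA pageblocks */
        lru_add_drain_all();
        for (i = 0; i < pinned; i++) {
                if (is_migrate_cma_page(pages[i]) &&
                    !isolate_lru_page(pages[i]))
                        list_add_tail(&pages[i]->lru, &cma_pages);
        }
        if (list_empty(&cma_pages))
                return pinned;

        /*
         * Drop the pins (isolation holds its own page reference), move
         * the CMA pages to non-CMA memory, then redo the pin once.
         */
        for (i = 0; i < pinned; i++)
                put_page(pages[i]);
        if (migrate_pages(&cma_pages, sketch_new_non_cma_page, NULL, 0,
                          MIGRATE_SYNC, MR_CONTIG_RANGE))
                putback_movable_pages(&cma_pages);
        retried = true;
        goto retry;
}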

> ---
>  arch/powerpc/mm/mmu_context_iommu.c | 195 ++--
>  1 file changed, 123 insertions(+), 72 deletions(-)
> 
> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
> index f472965f7638..597b88a0abce 100644
> --- a/arch/powerpc/mm/mmu_context_iommu.c
> +++ b/arch/powerpc/mm/mmu_context_iommu.c
> @@ -20,6 +20,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  static DEFINE_MUTEX(mem_list_mutex);
>  
> @@ -30,8 +31,18 @@ struct mm_iommu_table_group_mem_t {
>   atomic64_t mapped;
>   unsigned int pageshift;
>   u64 ua; /* userspace address */
> - u64 entries;/* number of entries in hpas[] */
> - u64 *hpas;  /* vmalloc'ed */
> + u64 entries;/* number of entries in hpages[] */
> + /*
> +  * in mm_iommu_get we temporarily use this to store
> +  * struct page address.
> +  *
> +  * We need to convert ua to hpa in real mode. Make it
> +  * simpler by storing physical address.
> +  */
> + union {
> + struct page **hpages;   /* vmalloc'ed */
> + phys_addr_t *hpas;
> + };
>  };
>  
>  static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
> @@ -75,62 +86,112 @@ bool mm_iommu_preregistered(struct mm_struct *mm)
>  EXPORT_SYMBOL_GPL(mm_iommu_preregistered);
>  
>  /*
> - * Taken from alloc_migrate_target with changes to remove CMA allocations
> + * Taken from alloc_migrate_target/alloc_migrate_huge_page with changes to remove
> + * CMA allocations
> + * Is this the right allocator for hugetlb?
>   */
>  struct page *new_iommu_non_cma_page(struct page *page, unsigned long private)
>  {
> - gfp_t gfp_mask = GFP_USER;
> - struct page *new_page;
> + /* is this the right nid? */
> + int nid = numa_mem_id();
> + gfp_t gfp_mask = GFP_HIGHUSER;
>  
> - if (PageCompound(page))
> - return NULL;
> + if (PageHuge(page)) {
>  
> - if (PageHighMem(page))
> - gfp_mask |= __GFP_HIGHMEM;
> + struct hstate *h = page_hstate(page);
> + /*
> +  * We don't want to dequeue from the pool because pool pages will
> +  * mostly be from the CMA region.
> +  */
> + return alloc_migrate_huge_page(h, gfp_mask, nid, NULL);
>  
> - /*
> -  * We don't want the allocation to force an OOM if possibe
> -  */
> - new_page = alloc_page(gfp_mask | __GFP_NORETRY | __GFP_NOWARN);
> - return new_page;
> + } else if (PageTransHuge(page)) {
> + struct page *thp;
> + gfp_t thp_gfpmask = GFP_TRANSHUGE & ~__GFP_MOVABLE;
> +
> + thp = __alloc_pages_node(nid, thp_gfpmask, HPAGE_PMD_ORDER);
> + if (!thp)
> + return NULL;
> + prep_transhuge_page(thp);
> + return thp;
> + }
> + return __alloc_pages_node(nid, gfp_mask, 0);
>  }
>  
> -static int mm_iommu_move_page_from_cma(struct page *page)
> +int get_user_pages_cma_migrate(unsigned long start, int nr_pages, int write,
> +struct page **pages)
>  {
> - int ret = 0;
> - LIST_HEAD(cma_migrate_pages);
> -
> - /* Ignore huge pages for now */
> - if (PageCompound(page))
> - return -EBUSY;
> -
> - lru_add_drain();
> - ret = isolate_lru_page(page);
> - if