On 9/30/2020 1:14 PM, vji...@codeaurora.org wrote:
> From: Vijayanand Jitta <vji...@codeaurora.org>
> 
> Whenever a new iova alloc request comes in, the iova is always
> searched from the cached node and the nodes previous to the cached
> node. So, even if there is free iova space available in the nodes
> next to the cached node, iova allocation can still fail because of
> this approach.
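
In code terms: before this patch, the search loop in
__alloc_and_insert_iova_range() only ever advances via rb_prev(), so
free space in nodes above the cached node is never considered.
Excerpted from the hunk below, elided for brevity:

	curr = __get_cached_rbnode(iovad, limit_pfn);
	curr_iova = rb_entry(curr, struct iova, node);
	do {
		...
		curr = rb_prev(curr);	/* walks towards lower pfns only */
		curr_iova = rb_entry(curr, struct iova, node);
	} while (curr && new_pfn <= curr_iova->pfn_hi);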
> 
> Consider the following sequence of iova allocs and frees on
> 1GB of iova space:
> 
> 1) alloc - 500MB
> 2) alloc - 12MB
> 3) alloc - 499MB
> 4) free -  12MB which was allocated in step 2
> 5) alloc - 13MB
> 
> After the above sequence we will have 12MB of free iova space, and
> the cached node will be pointing to the iova pfn of the last 13MB
> alloc, which is the lowest iova pfn of that iova space. Now if we get
> an alloc request of 2MB, we just search from the cached node and then
> look for lower iova pfns for free iova, and as there aren't any, the
> iova alloc fails even though there is 12MB of free iova space.
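
To make this concrete, the sequence above maps onto the iova API
roughly as below. This is a minimal repro sketch only: the domain
setup, the MB_TO_PFNS() helper and the WARN_ON() check are my
additions, not part of the patch.

	#include <linux/bug.h>
	#include <linux/iova.h>
	#include <linux/sizes.h>

	/* Hypothetical helper: megabytes -> number of pfns. */
	#define MB_TO_PFNS(mb)	(((mb) * (unsigned long)SZ_1M) >> PAGE_SHIFT)

	static void iova_retry_repro(void)
	{
		struct iova_domain iovad;
		/* 1GB of iova space, PAGE_SIZE granule, starting at pfn 1 */
		unsigned long limit_pfn = MB_TO_PFNS(1024);
		struct iova *tmp;

		init_iova_domain(&iovad, PAGE_SIZE, 1);

		alloc_iova(&iovad, MB_TO_PFNS(500), limit_pfn, false);	/* 1 */
		tmp = alloc_iova(&iovad, MB_TO_PFNS(12), limit_pfn, false); /* 2 */
		alloc_iova(&iovad, MB_TO_PFNS(499), limit_pfn, false);	/* 3 */
		if (tmp)
			__free_iova(&iovad, tmp);			/* 4 */
		alloc_iova(&iovad, MB_TO_PFNS(13), limit_pfn, false);	/* 5 */

		/*
		 * Without the retry, this fails even though 12MB of
		 * iova space is still free above the cached node.
		 */
		WARN_ON(!alloc_iova(&iovad, MB_TO_PFNS(2), limit_pfn, false));
	}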
> 
> To avoid such iova search failures, do a retry from the last rb tree
> node when the iova search fails; this will search the entire tree and
> get an iova if it is available.
> 
> Signed-off-by: Vijayanand Jitta <vji...@codeaurora.org>
> Reviewed-by: Robin Murphy <robin.mur...@arm.com>
> ---
>  drivers/iommu/iova.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 30d969a..c3a1a8e 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -184,8 +184,9 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>       struct rb_node *curr, *prev;
>       struct iova *curr_iova;
>       unsigned long flags;
> -     unsigned long new_pfn;
> +     unsigned long new_pfn, retry_pfn;
>       unsigned long align_mask = ~0UL;
> +     unsigned long high_pfn = limit_pfn, low_pfn = iovad->start_pfn;
>  
>       if (size_aligned)
>               align_mask <<= fls_long(size - 1);
> @@ -198,15 +199,25 @@ static int __alloc_and_insert_iova_range(struct iova_domain *iovad,
>  
>       curr = __get_cached_rbnode(iovad, limit_pfn);
>       curr_iova = rb_entry(curr, struct iova, node);
> +     retry_pfn = curr_iova->pfn_hi + 1;
> +
> +retry:
>       do {
> -             limit_pfn = min(limit_pfn, curr_iova->pfn_lo);
> -             new_pfn = (limit_pfn - size) & align_mask;
> +             high_pfn = min(high_pfn, curr_iova->pfn_lo);
> +             new_pfn = (high_pfn - size) & align_mask;
>               prev = curr;
>               curr = rb_prev(curr);
>               curr_iova = rb_entry(curr, struct iova, node);
> -     } while (curr && new_pfn <= curr_iova->pfn_hi);
> -
> -     if (limit_pfn < size || new_pfn < iovad->start_pfn) {
> +     } while (curr && new_pfn <= curr_iova->pfn_hi && new_pfn >= low_pfn);
> +
> +     if (high_pfn < size || new_pfn < low_pfn) {
> +             if (low_pfn == iovad->start_pfn && retry_pfn < limit_pfn) {
> +                     high_pfn = limit_pfn;
> +                     low_pfn = retry_pfn;
> +                     curr = &iovad->anchor.node;
> +                     curr_iova = rb_entry(curr, struct iova, node);
> +                     goto retry;
> +             }
>               iovad->max32_alloc_size = size;
>               goto iova32_full;
>       }
> 
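
For reference, the resulting flow paraphrased (my summary, not part of
the patch):

	/*
	 * Pass 1: walk rb_prev() from the cached node, i.e. try to
	 * place the allocation in the space below the cached node,
	 * with [low_pfn, high_pfn] = [start_pfn, limit_pfn].
	 * Pass 2: only on failure, and only if pass 1 did not already
	 * cover the whole space (retry_pfn < limit_pfn), restart from
	 * the anchor node with [low_pfn, high_pfn] =
	 * [retry_pfn, limit_pfn], i.e. the space above the original
	 * cached node.
	 */

Since low_pfn is moved off iovad->start_pfn before the goto, the retry
is in practice taken at most once per allocation, and a second failure
falls through to the max32_alloc_size handling as before.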

Gentle ping.

Thanks,
Vijay
-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a
member of Code Aurora Forum, hosted by The Linux Foundation