5.0-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Robert Richter <[email protected]>

commit 80ef4464d5e27408685e609d389663aad46644b9 upstream.

If a 32 bit allocation request is too big to possibly succeed, it
early exits with a failure and then should never update
max32_alloc_size. Fix the code so that the size is only updated if the
slow path fails while walking the tree. Without this fix the
allocation may enter the slow path again even though an earlier
request of the same or a smaller size had already failed.

Cc: <[email protected]> # 4.20+
Fixes: bee60e94a1e2 ("iommu/iova: Optimise attempts to allocate iova from 32bit address range")
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Robert Richter <[email protected]>
Signed-off-by: Joerg Roedel <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 drivers/iommu/iova.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -207,8 +207,10 @@ static int __alloc_and_insert_iova_range
                curr_iova = rb_entry(curr, struct iova, node);
        } while (curr && new_pfn <= curr_iova->pfn_hi);
 
-       if (limit_pfn < size || new_pfn < iovad->start_pfn)
+       if (limit_pfn < size || new_pfn < iovad->start_pfn) {
+               iovad->max32_alloc_size = size;
                goto iova32_full;
+       }
 
        /* pfn_lo will point to size aligned address if size_aligned is set */
        new->pfn_lo = new_pfn;
@@ -222,7 +224,6 @@ static int __alloc_and_insert_iova_range
        return 0;
 
 iova32_full:
-       iovad->max32_alloc_size = size;
        spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
        return -ENOMEM;
 }

