If the StreamIDs in a system can all be resolved by a single level-2
stream table (i.e. SIDSIZE < SPLIT), then we currently get our maths
wrong and allocate the largest strtab we support: the unsigned
subtraction sid_bits - STRTAB_SPLIT underflows, so the min() cap on
the L1 table size has no effect.

This patch fixes the issue by checking the SIDSIZE explicitly when
calculating the size of our first-level stream table, falling back to
a single L1 descriptor when one L2 table is enough.
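
For illustration only (not part of the patch), a minimal userspace
sketch of the underflow; the constant values and the local min() are
stand-ins for the driver's, so the numbers are indicative rather than
exact:

  #include <stdio.h>
  #include <stdint.h>

  /* Stand-in values; the driver's real constants may differ. */
  #define STRTAB_L1_SZ_SHIFT	20
  #define STRTAB_SPLIT		8

  #define min(a, b)	((a) < (b) ? (a) : (b))

  int main(void)
  {
  	uint32_t sid_bits = 7;			/* SIDSIZE < SPLIT */
  	uint32_t size = STRTAB_L1_SZ_SHIFT - 3;	/* L1 index bits */

  	/*
  	 * sid_bits - STRTAB_SPLIT wraps to 0xffffffff in unsigned
  	 * arithmetic, so the cap below never bites and size stays
  	 * at its maximum instead of dropping to 0.
  	 */
  	size = min(size, sid_bits - STRTAB_SPLIT);
  	printf("size = %u (want 0)\n", size);	/* prints 17 */
  	return 0;
  }

With the check added below, the subtraction is never reached in that
case and a single L1 descriptor is allocated.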

Reported-by: Matt Evans <matt.ev...@arm.com>
Signed-off-by: Will Deacon <will.dea...@arm.com>
---
 drivers/iommu/arm-smmu-v3.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index c2c1ad8915d9..4f093373f4c3 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -2054,9 +2054,17 @@ static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
        int ret;
        struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
 
-       /* Calculate the L1 size, capped to the SIDSIZE */
-       size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
-       size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+       /*
+        * If we can resolve everything with a single L2 table, then we
+        * just need a single L1 descriptor. Otherwise, calculate the L1
+        * size, capped to the SIDSIZE.
+        */
+       if (smmu->sid_bits < STRTAB_SPLIT) {
+               size = 0;
+       } else {
+               size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+               size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+       }
        cfg->num_l1_ents = 1 << size;
 
        size += STRTAB_SPLIT;
-- 
2.1.4
