On Thu, 2 Jul 2020 at 16:27, Eric Auger <eric.au...@redhat.com> wrote:
>
> At the moment each entry in the IOTLB corresponds to a page-sized
> mapping (4K, 16K or 64K), even if the page belongs to a mapped
> block. In the case of block mappings this inefficiently consumes
> IOTLB entries.
>
> Change the value of the entry so that it reflects the actual
> mapping it belongs to (block or page start address and size).
>
> Also encode the level/tg of the entry in the key. Subsequent
> patches will enable range invalidation, which provides the
> level/tg of the entry to invalidate.
>
> Encoding the level/tg directly in the key will allow us to
> invalidate using g_hash_table_remove() when num_pages equals 1.
>
> Signed-off-by: Eric Auger <eric.au...@redhat.com>

>  SMMUTLBEntry *smmu_iotlb_lookup(SMMUState *bs, SMMUTransCfg *cfg,
> -                                hwaddr iova)
> +                                SMMUTransTableInfo *tt, hwaddr iova)
>  {
> -    SMMUIOTLBKey key = smmu_get_iotlb_key(cfg->asid, iova);
> -    SMMUTLBEntry *entry = g_hash_table_lookup(bs->iotlb, &key);
> +    uint8_t tg = (tt->granule_sz - 10) / 2;
> +    uint8_t inputsize = 64 - tt->tsz;
> +    uint8_t stride = tt->granule_sz - 3;
> +    uint8_t level = 4 - (inputsize - 4) / stride;
> +    SMMUTLBEntry *entry = NULL;
> +
> +    while (level <= 3) {
> +        uint64_t subpage_size = 1ULL << level_shift(level, tt->granule_sz);
> +        uint64_t mask = subpage_size - 1;
> +        SMMUIOTLBKey key;
> +
> +        key = smmu_get_iotlb_key(cfg->asid, iova & ~mask, tg, level);
> +        entry = g_hash_table_lookup(bs->iotlb, &key);
> +        if (entry) {
> +            break;
> +        }
> +        level++;
> +    }
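
(For reference, if I'm reading the arithmetic right: with a 4K
granule, granule_sz is 12, so tg is 1 and stride is 9; with a
tsz of 16 the inputsize is 48 bits and the starting level is
4 - (44 / 9) = 0 with integer division, so the loop probes the
block sizes from largest down to level 3, the page size.)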

So, this next bit is something of a digression:

Effectively what we're doing here is "look up in the hash
table for each of the 3 possible page sizes this could be
for this TG". I've had suggested to me a possible alternative
data structure:
 * hash on the asid (and vmid eventually?) to get...
 * a sorted list of (start, length) ranges representing
   the TLB for that asid ...
 * which we can binary-search to find the matching range
   (and the associated info)

The theoretical benefit is that we don't need to do three
hash table lookups for each iotlb_lookup, and that it can
support arbitrary-length ranges (so we could honour the Contiguous
bit hint in the page tables, for instance). But just to avoid
3 hash lookups it's probably not worth doing a complete rework
of the data structures.


> -inline void smmu_iotlb_inv_iova(SMMUState *s, uint16_t asid, dma_addr_t iova)
> +static gboolean smmu_hash_remove_by_asid_iova(gpointer key, gpointer value,
> +                                              gpointer user_data)
>  {
> -    SMMUIOTLBKey key = smmu_get_iotlb_key(asid, iova);
> +    SMMUTLBEntry *iter = (SMMUTLBEntry *)value;
> +    IOMMUTLBEntry *entry = &iter->entry;
> +    SMMUIOTLBPageInvInfo *info = (SMMUIOTLBPageInvInfo *)user_data;
> +    uint64_t *iotlb_key = (uint64_t *)key;
> +
> +    if (info->asid >= 0) {
> +        return (info->asid == SMMU_IOTLB_ASID(*iotlb_key)) &&
> +                ((info->iova & ~entry->addr_mask) == entry->iova);
> +    } else {
> +        return (info->iova & ~entry->addr_mask) == entry->iova;
> +    }

Since the iova comparison logic is the same in both branches
of this if(), you can write

    if (info->asid >= 0 && info->asid != SMMU_IOTLB_ASID(*iotlb_key)) {
        return false;
    }
    return (info->iova & ~entry->addr_mask) == entry->iova;

This seems particularly worthwhile given that a later patch
makes the iova comparison logic more complicated.

Otherwise
Reviewed-by: Peter Maydell <peter.mayd...@linaro.org>

thanks
-- PMM
