Hi Yi,

On 26/04/17 11:12, Liu, Yi L wrote:
> From: "Liu, Yi L" <yi.l....@linux.intel.com>
> 
> This patch adds VFIO_IOMMU_TLB_INVALIDATE to propagate IOMMU TLB
> invalidate request from guest to host.
> 
> In the case of SVM virtualization on VT-d, host IOMMU driver has
> no knowledge of caching structure updates unless the guest
> invalidation activities are passed down to the host. So a new
> IOCTL is needed to propagate the guest cache invalidation through
> VFIO.
> 
> Signed-off-by: Liu, Yi L <yi.l....@linux.intel.com>
> ---
>  include/uapi/linux/vfio.h | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
> index 6b97987..50c51f8 100644
> --- a/include/uapi/linux/vfio.h
> +++ b/include/uapi/linux/vfio.h
> @@ -564,6 +564,15 @@ struct vfio_device_svm {
>  
>  #define VFIO_IOMMU_SVM_BIND_TASK     _IO(VFIO_TYPE, VFIO_BASE + 22)
>  
> +/* For IOMMU TLB Invalidation Propagation */
> +struct vfio_iommu_tlb_invalidate {
> +     __u32   argsz;
> +     __u32   length;
> +     __u8    data[];
> +};

We initially discussed something a little more generic than this, with
most of the information described explicitly and only pIOMMU-specific
quirks and hints in an opaque structure. Out of curiosity, why the change?
I'm not against a fully opaque structure, but there seems to be a large
overlap between TLB invalidations across architectures.


For what it's worth, when prototyping the paravirtualized IOMMU I came up
with the following.

(From the paravirtualized POV, the SMMU also has to swizzle endianness
after unpacking an opaque structure, since userspace doesn't know what's
in it and the guest might use a different endianness. So we need to force
all opaque data to be e.g. little-endian.)
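
To make that concrete, a rough sketch of the host-side unpacking, with a
made-up payload layout (viommu_inv_data and its fields are only
illustrative, not a proposal):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical pIOMMU-specific payload in the opaque data, fixed little-endian */
struct viommu_inv_data {
        __le64  addr;
        __le32  flags;
        __le32  reserved;
};

/* Host driver converts from the fixed wire endianness to CPU endianness */
static void viommu_unpack_inv(const struct viommu_inv_data *in,
                              u64 *addr, u32 *flags)
{
        *addr  = le64_to_cpu(in->addr);
        *flags = le32_to_cpu(in->flags);
}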

struct vfio_iommu_tlb_invalidate {
        __u32   argsz;
        __u32   scope;          /* restricts the invalidation scope */
        __u32   flags;          /* scope-dependent flags and hints */
        __u32   pasid;          /* used with INVALIDATE_PASID */
        __u64   vaddr;          /* used with INVALIDATE_VADDR */
        __u64   size;           /* used with INVALIDATE_VADDR */
        __u8    data[];         /* pIOMMU-specific quirks and hints */
};

@scope is a bitfield restricting the invalidation scope. By default
(scope == 0), the whole container is invalidated (all PASIDs and all VAs),
and @pasid, @vaddr and @size are unused.

Adding VFIO_IOMMU_INVALIDATE_PASID (1 << 0) restricts the invalidation
scope to the pasid described by @pasid.
Adding VFIO_IOMMU_INVALIDATE_VADDR (1 << 1) restricts the invalidation
scope to the address range described by (@vaddr, @size).

So setting scope = VFIO_IOMMU_INVALIDATE_VADDR would invalidate the VA
range for *all* pasids (as well as no_pasid). Setting scope =
(VFIO_IOMMU_INVALIDATE_VADDR|VFIO_IOMMU_INVALIDATE_PASID) would invalidate
the VA range only for @pasid.
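
For example, from the VMM side an invalidation of one VA range for one
PASID would look roughly like this (a sketch only, assuming the struct and
defines above end up in linux/vfio.h; container_fd is just whatever fd the
VMM holds for the container):

#include <sys/ioctl.h>
#include <linux/vfio.h>

static int invalidate_va_range(int container_fd, __u32 pasid,
                               __u64 vaddr, __u64 size)
{
        struct vfio_iommu_tlb_invalidate inv = {
                .argsz = sizeof(inv),
                /* Restrict the invalidation to one VA range of one PASID */
                .scope = VFIO_IOMMU_INVALIDATE_PASID |
                         VFIO_IOMMU_INVALIDATE_VADDR,
                .pasid = pasid,
                .vaddr = vaddr,
                .size  = size,
        };

        return ioctl(container_fd, VFIO_IOMMU_TLB_INVALIDATE, &inv);
}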

@flags depends on the selected scope:

VFIO_IOMMU_INVALIDATE_NO_PASID, indicating that the invalidation (either
with no scope restriction or with INVALIDATE_VADDR) exclusively targets
mappings without a PASID (some architectures, e.g. SMMU, allow this).

VFIO_IOMMU_INVALIDATE_VADDR_LEAF, indicating that the pIOMMU doesn't need
to invalidate all intermediate tables cached as part of the PTW for @vaddr,
only the last-level entry (pte). This is a hint.
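
Put together, the new defines would look roughly like this (the flag bit
values are only illustrative, not set in stone):

/* Scope bits for vfio_iommu_tlb_invalidate */
#define VFIO_IOMMU_INVALIDATE_PASID             (1 << 0)
#define VFIO_IOMMU_INVALIDATE_VADDR             (1 << 1)

/* Flags, interpreted according to the selected scope */
#define VFIO_IOMMU_INVALIDATE_NO_PASID          (1 << 0)
#define VFIO_IOMMU_INVALIDATE_VADDR_LEAF        (1 << 1)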

I guess what's missing for the Intel IOMMU and would go in @data is the
"global" hint (which we don't have in SMMU invalidations). Do you see
anything else that the pIOMMU cannot deduce from this structure?

Thanks,
Jean


> +#define VFIO_IOMMU_TLB_INVALIDATE    _IO(VFIO_TYPE, VFIO_BASE + 23)
> +
>  /* -------- Additional API for SPAPR TCE (Server POWERPC) IOMMU -------- */
>  
>  /*
> 

