Re: VFIO iommu page size masking

2015-02-13 Thread Alex Williamson
On Fri, 2015-02-13 at 03:41 +0100, Alexander Graf wrote:
> Hi Alex,
> 
> While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
> stumbled over the following piece of code:
> 
> > static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> > {
> > 	struct vfio_domain *domain;
> > 	unsigned long bitmap = PAGE_MASK;
> > 
> > 	mutex_lock(&iommu->lock);
> > 	list_for_each_entry(domain, &iommu->domain_list, next)
> > 		bitmap &= domain->domain->ops->pgsize_bitmap;
> > 	mutex_unlock(&iommu->lock);
> > 
> > 	return bitmap;
> > }
> 
> The SMMU page mask is
> 
> [3.054302] arm-smmu e0a0.smmu: Supported page sizes: 0x40201000
> 
> but after this function, we end up supporting only 2MB pages and above.
> The reason for that is simple: you restrict the bitmap to PAGE_MASK and
> above.
> 
> Now the big question is why you're doing that. I don't see why it would
> be a problem if the IOMMU maps a page in smaller chunks.
> 
> So I tried to patch the code above with s/PAGE_MASK/1UL/ and everything
> seems to run fine. But maybe we're now lacking some sanity checks?

Hey Alex,

Yeah, we may need to double-check whether we prevent sub-PAGE_SIZE
mappings elsewhere in the DMA mapping path, but that's probably the
right thing to do.  On x86 we have AMD-Vi, which supports just about any
power-of-two mapping and therefore effectively exposes PAGE_MASK, and
VT-d, which natively supports only a few page sizes but breaks down
mappings itself, and therefore also muddies the interface by exposing
PAGE_MASK.  So the IOMMU API ends up not really being a way to expose
native IOMMU page sizes anyway.

BTW, I'm on holiday until late next week, so I apologize to all the vfio
threads that won't be getting any attention until then.  Thanks,

Alex

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


VFIO iommu page size masking

2015-02-12 Thread Alexander Graf
Hi Alex,

While trying to get VFIO-PCI working on AArch64 (with 64k page size), I
stumbled over the following piece of code:

> static unsigned long vfio_pgsize_bitmap(struct vfio_iommu *iommu)
> {
> 	struct vfio_domain *domain;
> 	unsigned long bitmap = PAGE_MASK;
> 
> 	mutex_lock(&iommu->lock);
> 	list_for_each_entry(domain, &iommu->domain_list, next)
> 		bitmap &= domain->domain->ops->pgsize_bitmap;
> 	mutex_unlock(&iommu->lock);
> 
> 	return bitmap;
> }

The SMMU page mask is

[3.054302] arm-smmu e0a0.smmu: Supported page sizes: 0x40201000

but after this function, we end up supporting only 2MB pages and above.
The reason for that is simple: You restrict the bitmap to PAGE_MASK and
above.

Now the big question is why you're doing that. I don't see why it would
be a problem if the IOMMU maps a page in smaller chunks.

So I tried to patch the code above with s/PAGE_MASK/1UL/ and everything
seems to run fine. But maybe we're now lacking some sanity checks?


Alex