On 2022-05-02 17:42, Jason Gunthorpe wrote:
On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
Hi Joerg,

This is a resend version of v8 posted here:
https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
as we discussed in this thread:
https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/

All patches apply cleanly except this one:
  - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
It conflicts with the refactoring commit below:
  - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
The conflict has been fixed in this post.

No functional changes in this series. I have suppressed cc-ing all v8
reviewers on this series to avoid spam.

Please consider it for your iommu tree.

Reverting this series fixed a use-after-free while doing SR-IOV.

  BUG: KASAN: use-after-free in __lock_acquire
  Read of size 8 at addr ffff080279825d78 by task qemu-system-aar/22429
  CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
  Call trace:
   dump_backtrace
   show_stack
   dump_stack_lvl
   print_address_description.constprop.0
   print_report
   kasan_report
   __asan_report_load8_noabort
   __lock_acquire
   lock_acquire.part.0
   lock_acquire
   _raw_spin_lock_irqsave
   arm_smmu_detach_dev
   arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
   arm_smmu_attach_dev

Hum.

So what has happened is that VFIO does this sequence:

  iommu_detach_group()
  iommu_domain_free()
  iommu_group_release_dma_owner()

Which, I think, should be valid API-wise.

From what I can see reading the code, SMMUv3 blows up above because it
doesn't have a detach_dev op:

        .default_domain_ops = &(const struct iommu_domain_ops) {
                .attach_dev             = arm_smmu_attach_dev,
                .map_pages              = arm_smmu_map_pages,
                .unmap_pages            = arm_smmu_unmap_pages,
                .flush_iotlb_all        = arm_smmu_flush_iotlb_all,
                .iotlb_sync             = arm_smmu_iotlb_sync,
                .iova_to_phys           = arm_smmu_iova_to_phys,
                .enable_nesting         = arm_smmu_enable_nesting,
                .free                   = arm_smmu_domain_free,
        }

But it is internally tracking the domain inside the master - so when
the next domain is attached it does this:

static void arm_smmu_detach_dev(struct arm_smmu_master *master)
{
        unsigned long flags;
        struct arm_smmu_domain *smmu_domain = master->domain;

        spin_lock_irqsave(&smmu_domain->devices_lock, flags);

And explodes as the domain has been freed but master->domain was not
NULL'd.

It worked before because iommu_detach_group() used to attach the
default group and that was before the domain was freed in the above
sequence.

Oof, I totally overlooked the significance of that little subtlety in review :(

I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
the detach_dev op and NULL its cached copy of the domain, but I don't
know this driver... Robin?

The original intent was that .detach_dev is deprecated in favour of default domains, and when the latter are in use, a device is always attached *somewhere* once probed (i.e. group->domain is never NULL). At face value, the neatest fix IMO would probably be for SMMUv3's .domain_free to handle smmu_domain->devices being non-empty and detach them at that point. However that wouldn't be viable for virtio-iommu or anyone else keeping an internal one-way association of devices to their current domains.

If we're giving up entirely on that notion of .detach_dev going away, then all default-domain-supporting drivers probably need checking to make sure that path hasn't bitrotted: both Arm SMMU drivers had it proactively removed 6 years ago; virtio-iommu never had it at all; and newer drivers like apple-dart have some code there, but it won't ever have run until now.

We *could* stay true to the original paradigm by introducing some real usage of IOMMU_DOMAIN_BLOCKED, such that we could keep one or more of those around to actively attach to instead of having groups in this unattached limbo state, but that's a bigger job involving adding support to drivers as well; too much for a quick fix now...

Robin.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu