Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-13 Thread Jason Gunthorpe via iommu
On Fri, May 13, 2022 at 10:25:48AM -0600, Alex Williamson wrote:
> On Fri, 13 May 2022 17:49:44 +0200
> Joerg Roedel wrote:
> 
> > Hi Alex,
> > 
> > On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> > > Done, and thanks for the heads-up.  Please try to cc me when the
> > > vfio-notifier-fix branch is merged back into your next branch.  Thanks,  
> > 
> > This has happened now: the vfio-notifier-fix branch got the fix and is
> > merged back into my next branch.
> 
> Thanks, Joerg!
> 
> Jason, I'll push a merge of this with
> 
> Subject: [PATCH] vfio: Delete container_q
> 0-v1-a1e8791d795b+6b-vfio_container_q_...@nvidia.com
> 
> and
> 
> Subject: [PATCH v3 0/8] Remove vfio_group from the struct file facing VFIO API
> 0-v3-f7729924a7ea+25e33-vfio_kvm_no_group_...@nvidia.com
> 
> as soon as my sanity build finishes.  Thanks,

Thanks, I'll rebase and repost the remaining vfio series.

Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-13 Thread Alex Williamson
On Fri, 13 May 2022 17:49:44 +0200
Joerg Roedel wrote:

> Hi Alex,
> 
> On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> > Done, and thanks for the heads-up.  Please try to cc me when the
> > vfio-notifier-fix branch is merged back into your next branch.  Thanks,  
> 
> This has happened now: the vfio-notifier-fix branch got the fix and is
> merged back into my next branch.

Thanks, Joerg!

Jason, I'll push a merge of this with

Subject: [PATCH] vfio: Delete container_q
0-v1-a1e8791d795b+6b-vfio_container_q_...@nvidia.com

and

Subject: [PATCH v3 0/8] Remove vfio_group from the struct file facing VFIO API
0-v3-f7729924a7ea+25e33-vfio_kvm_no_group_...@nvidia.com

as soon as my sanity build finishes.  Thanks,

Alex



Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-13 Thread Joerg Roedel
Hi Alex,

On Wed, May 04, 2022 at 10:29:56AM -0600, Alex Williamson wrote:
> Done, and thanks for the heads-up.  Please try to cc me when the
> vfio-notifier-fix branch is merged back into your next branch.  Thanks,

This has happened now: the vfio-notifier-fix branch got the fix and is
merged back into my next branch.

Regards,

Joerg


RE: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-13 Thread Tian, Kevin
> From: Jason Gunthorpe
> Sent: Tuesday, May 10, 2022 2:33 AM
> 
> On Wed, May 04, 2022 at 01:57:05PM +0200, Joerg Roedel wrote:
> > On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > > Nicolin and Eric have been testing with this series on ARM for a long
> > > time now, it is not like it is completely broken.
> >
> > Yeah, I am also optimistic this can be fixed soon. But the rule is that
> > the next branch should only contain patches which I would send to Linus.
> > And with a known issue in it I wouldn't, so it is excluded at least from
> > my next branch for now. The topic branch is still alive and I will merge
> > it again when the fix is in.
> 
> The fix is out, let's merge it back in so we can have some more time to
> discover any additional issues. People seem to test when it is in your
> branch.
> 

Joerg, any chance you could give this priority? This is the first step of
a long refactoring effort, and it has been gating quite a few
well-reviewed improvements down the road. Having it tested earlier
in your branch is definitely appreciated.

Thanks
Kevin


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-09 Thread Jason Gunthorpe via iommu
On Wed, May 04, 2022 at 01:57:05PM +0200, Joerg Roedel wrote:
> On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> > Nicolin and Eric have been testing with this series on ARM for a long
> > time now, it is not like it is completely broken.
> 
> Yeah, I am also optimistic this can be fixed soon. But the rule is that
> the next branch should only contain patches which I would send to Linus.
> And with a known issue in it I wouldn't, so it is excluded at least from
> my next branch for now. The topic branch is still alive and I will merge
> it again when the fix is in.

The fix is out, let's merge it back in so we can have some more time to
discover any additional issues. People seem to test when it is in your
branch.

Thanks,
Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-04 Thread Alex Williamson
On Wed, 4 May 2022 10:42:07 +0200
Joerg Roedel wrote:

> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> > 
> >  BUG: KASAN: use-after-free in __lock_acquire  
> 
> Hrm, okay. I am going to exclude this series from my next branch for now
> until this has been sorted out.
> 
> Alex, I suggest you do the same.

Done, and thanks for the heads-up.  Please try to cc me when the
vfio-notifier-fix branch is merged back into your next branch.  Thanks,

Alex



Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-04 Thread Joerg Roedel
On Wed, May 04, 2022 at 08:51:35AM -0300, Jason Gunthorpe wrote:
> Nicolin and Eric have been testing with this series on ARM for a long
> time now, it is not like it is completely broken.

Yeah, I am also optimistic this can be fixed soon. But the rule is that
the next branch should only contain patches which I would send to Linus.
And with a known issue in it I wouldn't, so it is excluded at least from
my next branch for now. The topic branch is still alive and I will merge
it again when the fix is in.

Regards,

Joerg


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-04 Thread Jason Gunthorpe via iommu
On Wed, May 04, 2022 at 10:42:07AM +0200, Joerg Roedel wrote:
> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> > 
> >  BUG: KASAN: use-after-free in __lock_acquire
>
> Hrm, okay. I am going to exclude this series from my next branch for now
> until this has been sorted out.

This is going to blow up everything going on in vfio right now, let's
not do something so drastic please.

There is already a patch to fix it, let's wait for it to get sorted
out.

Nicolin and Eric have been testing with this series on ARM for a long
time now, it is not like it is completely broken.

Thanks,
Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-04 Thread Joerg Roedel
On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> Reverting this series fixed a use-after-free while doing SR-IOV.
> 
>  BUG: KASAN: use-after-free in __lock_acquire

Hrm, okay. I am going to exclude this series from my next branch for now
until this has been sorted out.

Alex, I suggest you do the same.

Regards,

Joerg


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-03 Thread Robin Murphy

On 2022-05-03 16:23, Jason Gunthorpe wrote:

> On Tue, May 03, 2022 at 02:04:37PM +0100, Robin Murphy wrote:
> > > I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> > > the detach_dev op and null its cached copy of the domain, but I don't
> > > know this driver.. Robin?
> > 
> > The original intent was that .detach_dev is deprecated in favour of default
> > domains, and when the latter are in use, a device is always attached
> > *somewhere* once probed (i.e. group->domain is never NULL). At face value,
> > the neatest fix IMO would probably be for SMMUv3's .domain_free to handle
> > smmu_domain->devices being non-empty and detach them at that point. However
> > that wouldn't be viable for virtio-iommu or anyone else keeping an internal
> > one-way association of devices to their current domains.
> 
> Oh wow that is not obvious
> 
> Actually, I think it is much worse than this because
> iommu_group_claim_dma_owner() does a __iommu_detach_group() with the
> expectation that this would actually result in DMA being blocked,
> immediately. The idea that __iommu_detach_group() is a NOP is kind of
> scary.

Scarier than the fact that even where it *is* implemented, .detach_dev
has never had a well-defined behaviour either, and plenty of drivers
treat it as a "remove the IOMMU from the picture altogether" operation
which ends up with the device in bypass rather than blocked?

> Leaving the group attached to the kernel DMA domain will allow
> userspace to DMA to all kernel memory :\

Note that a fair amount of IOMMU hardware only has two states, thus
could only actually achieve a blocking behaviour by enabling translation
with an empty pagetable anyway. (Trivia: and technically some of them
aren't even capable of blocking invalid accesses *when* translating -
they can only apply a "default" translation targeting some scratch page)

> So one approach could be to block use of iommu_group_claim_dma_owner()
> if no detach_dev op is present and then go through and put them back
> or do something else. This could be short-term OK if we add an op to
> SMMUv3, but long term everything would have to be fixed
> 
> Or we can allocate a dummy empty/blocked domain during
> iommu_group_claim_dma_owner() and attach it whenever.

How does the compile-tested diff below seem? There's a fair chance it's
still broken, but I don't have the bandwidth to give it much more
thought right now.

Cheers,
Robin.

->8-
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 29906bc16371..597d70ed7007 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -45,6 +45,7 @@ struct iommu_group {
 	int id;
 	struct iommu_domain *default_domain;
 	struct iommu_domain *domain;
+	struct iommu_domain *purgatory;
 	struct list_head entry;
 	unsigned int owner_cnt;
 	void *owner;
@@ -596,6 +597,8 @@ static void iommu_group_release(struct kobject *kobj)
 
 	if (group->default_domain)
 		iommu_domain_free(group->default_domain);
+	if (group->purgatory)
+		iommu_domain_free(group->purgatory);
 
 	kfree(group->name);
 	kfree(group);
@@ -2041,6 +2044,12 @@ struct iommu_domain *iommu_get_dma_domain(struct device *dev)
 	return dev->iommu_group->default_domain;
 }
 
+static bool iommu_group_user_attached(struct iommu_group *group)
+{
+	return group->domain && group->domain != group->default_domain &&
+	       group->domain != group->purgatory;
+}
+
 /*
  * IOMMU groups are really the natural working unit of the IOMMU, but
  * the IOMMU API works on domains and devices.  Bridge that gap by
@@ -2063,7 +2072,7 @@ static int __iommu_attach_group(struct iommu_domain *domain,
 {
 	int ret;
 
-	if (group->domain && group->domain != group->default_domain)
+	if (iommu_group_user_attached(group))
 		return -EBUSY;
 
 	ret = __iommu_group_for_each_dev(group, domain,
@@ -2104,7 +2113,12 @@ static void __iommu_detach_group(struct iommu_domain *domain,
 	 * If the group has been claimed already, do not re-attach the default
 	 * domain.
 	 */
-	if (!group->default_domain || group->owner) {
+	if (group->owner) {
+		WARN_ON(__iommu_attach_group(group->purgatory, group));
+		return;
+	}
+
+	if (!group->default_domain) {
 		__iommu_group_for_each_dev(group, domain,
 					   iommu_group_do_detach_device);
 		group->domain = NULL;
@@ -3111,6 +3125,25 @@ void iommu_device_unuse_default_domain(struct device *dev)
 	iommu_group_put(group);
 }
 
+static struct iommu_domain *iommu_group_get_purgatory(struct iommu_group *group)
+{
+	struct group_device *dev;
+
+	mutex_lock(&group->mutex);
+	if (group->purgatory)
+		goto out;
+
+	dev = list_first_entry(&group->devices, struct group_device, list);
+	group->purgatory = __iommu_domain_alloc(dev->dev->bus,
+						IOMMU_DOMAIN_BLOCKED);
Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-03 Thread Jason Gunthorpe via iommu
On Tue, May 03, 2022 at 02:04:37PM +0100, Robin Murphy wrote:

> > I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> > the detach_dev op and null its cached copy of the domain, but I don't
> > know this driver.. Robin?
> 
> The original intent was that .detach_dev is deprecated in favour of default
> domains, and when the latter are in use, a device is always attached
> *somewhere* once probed (i.e. group->domain is never NULL). At face value,
> the neatest fix IMO would probably be for SMMUv3's .domain_free to handle
> smmu_domain->devices being non-empty and detach them at that point. However
> that wouldn't be viable for virtio-iommu or anyone else keeping an internal
> one-way association of devices to their current domains.

Oh wow that is not obvious

Actually, I think it is much worse than this because
iommu_group_claim_dma_owner() does a __iommu_detach_group() with the
expectation that this would actually result in DMA being blocked,
immediately. The idea that __iommu_detach_group() is a NOP is kind of
scary.

Leaving the group attached to the kernel DMA domain will allow
userspace to DMA to all kernel memory :\

So one approach could be to block use of iommu_group_claim_dma_owner()
if no detach_dev op is present and then go through and put them back
or do something else. This could be short-term OK if we add an op to
SMMUv3, but long term everything would have to be fixed

Or we can allocate a dummy empty/blocked domain during
iommu_group_claim_dma_owner() and attach it whenever.
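
Roughly like this - a hypothetical sketch, not actual kernel code
(__iommu_domain_alloc(), __iommu_attach_group() and IOMMU_DOMAIN_BLOCKED
exist in the core; the helper and the blocking_domain field are made up):

static int iommu_group_attach_blocking_domain(struct iommu_group *group,
					      struct bus_type *bus)
{
	if (!group->blocking_domain) {
		group->blocking_domain =
			__iommu_domain_alloc(bus, IOMMU_DOMAIN_BLOCKED);
		if (!group->blocking_domain)
			return -ENOMEM;
	}
	/* would run in iommu_group_claim_dma_owner() instead of
	 * __iommu_detach_group(), so DMA is blocked immediately */
	return __iommu_attach_group(group->blocking_domain, group);
}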

The really ugly trick is that detach cannot fail, so attach to this
blocking domain must also not fail - IMHO this is a very complicated
API to expect the driver to implement correctly... I see there is
already a WARN_ON that attaching to the default domain cannot
fail. Maybe this warrants an actual no-fail attach op so the driver
can be more aware of this..

And some of these internal APIs could stand some adjusting if we
really never want a true "detach" - it is always some kind of
replace/swap type operation, either to the default domain or to the
blocking domain.

> We *could* stay true to the original paradigm by introducing some real usage
> of IOMMU_DOMAIN_BLOCKED, such that we could keep one or more of those around
> to actively attach to instead of having groups in this unattached limbo
> state, but that's a bigger job involving adding support to drivers as well;
> too much for a quick fix now...

I suspect for the short term we can get by with an empty mapping
domain - using DOMAIN_BLOCKED is a bit of a refinement.
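
As a sketch of that short-term fallback, using only the public API
(illustrative, error paths elided): an unmanaged domain into which
nothing is ever mapped faults all translated DMA, approximating a
blocked state.

static int attach_empty_domain(struct iommu_group *group,
			       struct bus_type *bus)
{
	/* allocates an IOMMU_DOMAIN_UNMANAGED domain */
	struct iommu_domain *stand_in = iommu_domain_alloc(bus);

	if (!stand_in)
		return -ENOMEM;
	/* deliberately no iommu_map() calls before attaching */
	return iommu_attach_group(stand_in, group);
}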

Thanks,
Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-03 Thread Robin Murphy

On 2022-05-02 17:42, Jason Gunthorpe wrote:

> On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> > On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > > Hi Joerg,
> > > 
> > > This is a resend version of v8 posted here:
> > > https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> > > as we discussed in this thread:
> > > https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
> > > 
> > > All patches apply cleanly except this one:
> > >  - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> > > It conflicts with the refactoring commit below:
> > >  - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> > > The conflict has been fixed in this post.
> > > 
> > > No functional changes in this series. I have suppressed cc-ing this
> > > series to all v8 reviewers to avoid spam.
> > > 
> > > Please consider it for your iommu tree.
> > 
> > Reverting this series fixed a use-after-free while doing SR-IOV.
> > 
> >  BUG: KASAN: use-after-free in __lock_acquire
> >  Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
> >  CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
> >  Call trace:
> >   dump_backtrace
> >   show_stack
> >   dump_stack_lvl
> >   print_address_description.constprop.0
> >   print_report
> >   kasan_report
> >   __asan_report_load8_noabort
> >   __lock_acquire
> >   lock_acquire.part.0
> >   lock_acquire
> >   _raw_spin_lock_irqsave
> >   arm_smmu_detach_dev
> >   arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
> >   arm_smmu_attach_dev
> 
> Hum.
> 
> So what has happened is that VFIO does this sequence:
> 
>  iommu_detach_group()
>  iommu_domain_free()
>  iommu_group_release_dma_owner()
> 
> Which, I think should be valid, API wise.
> 
> From what I can see reading the code SMMUv3 blows up above because it
> doesn't have a detach_dev op:
> 
> 	.default_domain_ops = &(const struct iommu_domain_ops) {
> 		.attach_dev		= arm_smmu_attach_dev,
> 		.map_pages		= arm_smmu_map_pages,
> 		.unmap_pages		= arm_smmu_unmap_pages,
> 		.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
> 		.iotlb_sync		= arm_smmu_iotlb_sync,
> 		.iova_to_phys		= arm_smmu_iova_to_phys,
> 		.enable_nesting		= arm_smmu_enable_nesting,
> 		.free			= arm_smmu_domain_free,
> 	}
> 
> But it is internally tracking the domain inside the master - so when
> the next domain is attached it does this:
> 
> static void arm_smmu_detach_dev(struct arm_smmu_master *master)
> {
> 	struct arm_smmu_domain *smmu_domain = master->domain;
> 
> 	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
> 
> And explodes as the domain has been freed but master->domain was not
> NULL'd.
> 
> It worked before because iommu_detach_group() used to attach the
> default domain and that was before the domain was freed in the above
> sequence.

Oof, I totally overlooked the significance of that little subtlety in
review :(

> I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
> the detach_dev op and null its cached copy of the domain, but I don't
> know this driver.. Robin?

The original intent was that .detach_dev is deprecated in favour of
default domains, and when the latter are in use, a device is always
attached *somewhere* once probed (i.e. group->domain is never NULL). At
face value, the neatest fix IMO would probably be for SMMUv3's
.domain_free to handle smmu_domain->devices being non-empty and detach
them at that point. However that wouldn't be viable for virtio-iommu or
anyone else keeping an internal one-way association of devices to their
current domains.

If we're giving up entirely on that notion of .detach_dev going away
then all default-domain-supporting drivers probably want checking to
make sure that path hasn't bitrotted; both Arm SMMU drivers had it
proactively removed 6 years ago; virtio-iommu never had it at all; newer
drivers like apple-dart have some code there, but it won't have ever run
until now.

We *could* stay true to the original paradigm by introducing some real
usage of IOMMU_DOMAIN_BLOCKED, such that we could keep one or more of
those around to actively attach to instead of having groups in this
unattached limbo state, but that's a bigger job involving adding support
to drivers as well; too much for a quick fix now...


Robin.


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-02 Thread Jason Gunthorpe via iommu
On Mon, May 02, 2022 at 12:12:04PM -0400, Qian Cai wrote:
> On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > Hi Joerg,
> > 
> > This is a resend version of v8 posted here:
> > https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> > as we discussed in this thread:
> > https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
> > 
> > All patches apply cleanly except this one:
> >  - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> > It conflicts with the refactoring commit below:
> >  - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> > The conflict has been fixed in this post.
> > 
> > No functional changes in this series. I have suppressed cc-ing this
> > series to all v8 reviewers to avoid spam.
> > 
> > Please consider it for your iommu tree.
> 
> Reverting this series fixed a use-after-free while doing SR-IOV.
> 
>  BUG: KASAN: use-after-free in __lock_acquire
>  Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
>  CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
>  Call trace:
>   dump_backtrace
>   show_stack
>   dump_stack_lvl
>   print_address_description.constprop.0
>   print_report
>   kasan_report
>   __asan_report_load8_noabort
>   __lock_acquire
>   lock_acquire.part.0
>   lock_acquire
>   _raw_spin_lock_irqsave
>   arm_smmu_detach_dev
>   arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
>   arm_smmu_attach_dev

Hum.

So what has happened is that VFIO does this sequence:

 iommu_detach_group()
 iommu_domain_free()
 iommu_group_release_dma_owner()

Which, I think should be valid, API wise.
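
To illustrate where that sequence leaves stale state (the calls are the
ones VFIO makes; the annotations are inferred from the KASAN report
above):

iommu_detach_group(domain, group);	/* SMMUv3 has no detach_dev op,
					   so master->domain still points
					   at this domain */
iommu_domain_free(domain);		/* the smmu_domain is freed;
					   master->domain now dangles */
iommu_group_release_dma_owner(group);	/* re-attaches the default domain;
					   arm_smmu_attach_dev() ->
					   arm_smmu_detach_dev() takes the
					   freed domain's lock: the UAF */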

From what I can see reading the code SMMUv3 blows up above because it
doesn't have a detach_dev op:

.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = arm_smmu_attach_dev,
.map_pages  = arm_smmu_map_pages,
.unmap_pages= arm_smmu_unmap_pages,
.flush_iotlb_all= arm_smmu_flush_iotlb_all,
.iotlb_sync = arm_smmu_iotlb_sync,
.iova_to_phys   = arm_smmu_iova_to_phys,
.enable_nesting = arm_smmu_enable_nesting,
.free   = arm_smmu_domain_free,
}

But it is internally tracking the domain inside the master - so when
the next domain is attached it does this:

static void arm_smmu_detach_dev(struct arm_smmu_master *master)
{
struct arm_smmu_domain *smmu_domain = master->domain;

	spin_lock_irqsave(&smmu_domain->devices_lock, flags);

And explodes as the domain has been freed but master->domain was not
NULL'd.

It worked before because iommu_detach_group() used to attach the
default domain and that was before the domain was freed in the above
sequence.

I'm guessing SMMUv3 needs to call its arm_smmu_detach_dev(master) from
the detach_dev op and null its cached copy of the domain, but I don't
know this driver.. Robin?
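
A minimal sketch of what such an op might look like - hypothetical and
untested, not the actual SMMUv3 fix (dev_iommu_priv_get() is the usual
way a driver recovers its per-device state):

static void arm_smmu_detach_dev_op(struct iommu_domain *domain,
				   struct device *dev)
{
	struct arm_smmu_master *master = dev_iommu_priv_get(dev);

	/* unlink the master from the domain's devices list */
	arm_smmu_detach_dev(master);
	/* drop the cached pointer so a later attach or free cannot
	 * chase freed memory */
	master->domain = NULL;
}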

Thanks,
Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-05-02 Thread Qian Cai
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> Hi Joerg,
> 
> This is a resend version of v8 posted here:
> https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
> as we discussed in this thread:
> https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/
> 
> All patches apply cleanly except this one:
>  - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
> It conflicts with the refactoring commit below:
>  - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
> The conflict has been fixed in this post.
> 
> No functional changes in this series. I have suppressed cc-ing this
> series to all v8 reviewers to avoid spam.
> 
> Please consider it for your iommu tree.

Reverting this series fixed a use-after-free while doing SR-IOV.

 BUG: KASAN: use-after-free in __lock_acquire
 Read of size 8 at addr 080279825d78 by task qemu-system-aar/22429
 CPU: 24 PID: 22429 Comm: qemu-system-aar Not tainted 5.18.0-rc5-next-20220502 #69
 Call trace:
  dump_backtrace
  show_stack
  dump_stack_lvl
  print_address_description.constprop.0
  print_report
  kasan_report
  __asan_report_load8_noabort
  __lock_acquire
  lock_acquire.part.0
  lock_acquire
  _raw_spin_lock_irqsave
  arm_smmu_detach_dev
  arm_smmu_detach_dev at drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2377
  arm_smmu_attach_dev
  __iommu_attach_group
  __iommu_attach_device at drivers/iommu/iommu.c:1942
  (inlined by) iommu_group_do_attach_device at drivers/iommu/iommu.c:2058
  (inlined by) __iommu_group_for_each_dev at drivers/iommu/iommu.c:989
  (inlined by) __iommu_attach_group at drivers/iommu/iommu.c:2069
  iommu_group_release_dma_owner
  __vfio_group_unset_container
  vfio_group_try_dissolve_container
  vfio_group_put_external_user
  kvm_vfio_destroy
  kvm_destroy_vm
  kvm_vm_release
  __fput
  fput
  task_work_run
  do_exit
  do_group_exit
  get_signal
  do_signal
  do_notify_resume
  el0_svc
  el0t_64_sync_handler
  el0t_64_sync

 Allocated by task 22427:
  kasan_save_stack
  __kasan_kmalloc
  kmem_cache_alloc_trace
  arm_smmu_domain_alloc
  iommu_domain_alloc
  vfio_iommu_type1_attach_group
  vfio_ioctl_set_iommu
  vfio_fops_unl_ioctl
  __arm64_sys_ioctl
  invoke_syscall
  el0_svc_common.constprop.0
  do_el0_svc
  el0_svc
  el0t_64_sync_handler
  el0t_64_sync

 Freed by task 22429:
  kasan_save_stack
  kasan_set_track
  kasan_set_free_info
  kasan_slab_free
  __kasan_slab_free
  slab_free_freelist_hook
  kfree
  arm_smmu_domain_free
  arm_smmu_domain_free at iommu/arm/arm-smmu-v3/arm-smmu-v3.c:2067
  iommu_domain_free
  vfio_iommu_type1_detach_group
  __vfio_group_unset_container
  vfio_group_try_dissolve_container
  vfio_group_put_external_user
  kvm_vfio_destroy
  kvm_destroy_vm
  kvm_vm_release
  __fput
  fput
  task_work_run
  do_exit
  do_group_exit
  get_signal
  do_signal
  do_notify_resume
  el0_svc
  el0t_64_sync_handler
  el0t_64_sync


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-04-28 Thread Joerg Roedel
On Thu, Apr 28, 2022 at 08:54:11AM -0300, Jason Gunthorpe wrote:
> Can we get this on a topic branch so Alex can pull it? There are
> conflicts with other VFIO patches

Right, we already discussed this. Moved the patches to a separate topic
branch. It will appear as 'vfio-notifier-fix' once I've pushed the changes.

Regards,

Joerg


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-04-28 Thread Jason Gunthorpe via iommu
On Thu, Apr 28, 2022 at 11:32:04AM +0200, Joerg Roedel wrote:
> On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > Lu Baolu (10):
> >   iommu: Add DMA ownership management interfaces
> >   driver core: Add dma_cleanup callback in bus_type
> >   amba: Stop sharing platform_dma_configure()
> >   bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
> >   PCI: pci_stub: Set driver_managed_dma
> >   PCI: portdrv: Set driver_managed_dma
> >   vfio: Set DMA ownership for VFIO devices
> >   vfio: Remove use of vfio_group_viable()
> >   vfio: Remove iommu group notifier
> >   iommu: Remove iommu group changes notifier
> 
> Applied to core branch, thanks Baolu.

Can we get this on a topic branch so Alex can pull it? There are
conflicts with other VFIO patches

Thanks!
Jason


Re: [RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-04-28 Thread Joerg Roedel
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> Lu Baolu (10):
>   iommu: Add DMA ownership management interfaces
>   driver core: Add dma_cleanup callback in bus_type
>   amba: Stop sharing platform_dma_configure()
>   bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
>   PCI: pci_stub: Set driver_managed_dma
>   PCI: portdrv: Set driver_managed_dma
>   vfio: Set DMA ownership for VFIO devices
>   vfio: Remove use of vfio_group_viable()
>   vfio: Remove iommu group notifier
>   iommu: Remove iommu group changes notifier

Applied to core branch, thanks Baolu.


[RESEND PATCH v8 00/11] Fix BUG_ON in vfio_iommu_group_notifier()

2022-04-17 Thread Lu Baolu
Hi Joerg,

This is a resend version of v8 posted here:
https://lore.kernel.org/linux-iommu/20220308054421.847385-1-baolu...@linux.intel.com/
as we discussed in this thread:
https://lore.kernel.org/linux-iommu/yk%2fq1bgn8pc5h...@8bytes.org/

All patches apply cleanly except this one:
 - [PATCH v8 02/11] driver core: Add dma_cleanup callback in bus_type
It conflicts with the refactoring commit below:
 - 4b775aaf1ea99 "driver core: Refactor sysfs and drv/bus remove hooks"
The conflict has been fixed in this post.

No functional changes in this series. I have suppressed cc-ing this
series to all v8 reviewers to avoid spam.

Please consider it for your iommu tree.

Best regards,
baolu

Change log:
- v8 and before:
  - Please refer to v8 post for all the change history.

- v8-resend
  - Rebase the series on top of v5.18-rc3.
  - Add Reviewed-by's granted by Robin.
  - Add a Tested-by granted by Eric.

Jason Gunthorpe (1):
  vfio: Delete the unbound_list

Lu Baolu (10):
  iommu: Add DMA ownership management interfaces
  driver core: Add dma_cleanup callback in bus_type
  amba: Stop sharing platform_dma_configure()
  bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
  PCI: pci_stub: Set driver_managed_dma
  PCI: portdrv: Set driver_managed_dma
  vfio: Set DMA ownership for VFIO devices
  vfio: Remove use of vfio_group_viable()
  vfio: Remove iommu group notifier
  iommu: Remove iommu group changes notifier
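
As a rough sketch of the ownership model the titles above describe
(inferred from the patch subjects, not code taken from the series; see
the individual patches for the real interfaces), a driver that hands DMA
to user space opts out of kernel DMA ownership and claims the group
explicitly:

static struct pci_driver my_user_dma_driver = {
	.name			= "my-user-dma-driver",
	/* this driver does not use the kernel DMA API itself */
	.driver_managed_dma	= true,
	/* .id_table, .probe, .remove as usual */
};

/* ...then, before letting user space program DMA, the driver claims the
 * whole iommu group with an owner cookie, releasing it on teardown: */
ret = iommu_group_claim_dma_owner(group, owner_cookie);
...
iommu_group_release_dma_owner(group);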

 include/linux/amba/bus.h  |   8 +
 include/linux/device/bus.h|   3 +
 include/linux/fsl/mc.h|   8 +
 include/linux/iommu.h |  54 +++---
 include/linux/pci.h   |   8 +
 include/linux/platform_device.h   |  10 +-
 drivers/amba/bus.c|  37 +++-
 drivers/base/dd.c |   5 +
 drivers/base/platform.c   |  21 ++-
 drivers/bus/fsl-mc/fsl-mc-bus.c   |  24 ++-
 drivers/iommu/iommu.c | 228 
 drivers/pci/pci-driver.c  |  18 ++
 drivers/pci/pci-stub.c|   1 +
 drivers/pci/pcie/portdrv_pci.c|   2 +
 drivers/vfio/fsl-mc/vfio_fsl_mc.c |   1 +
 drivers/vfio/pci/vfio_pci.c   |   1 +
 drivers/vfio/platform/vfio_amba.c |   1 +
 drivers/vfio/platform/vfio_platform.c |   1 +
 drivers/vfio/vfio.c   | 245 ++
 19 files changed, 338 insertions(+), 338 deletions(-)

-- 
2.25.1
