Re: [RFC PATCH v7 2/5] iommu/dma: Add a new dma_map_ops of get_merge_boundary()

2019-06-21 Thread Marek Szyprowski
Hi,

On 2019-06-20 10:50, Yoshihiro Shimoda wrote:
> This patch adds a new dma_map_ops of get_merge_boundary() to
> expose the DMA merge boundary if the domain type is IOMMU_DOMAIN_DMA.
>
> Signed-off-by: Yoshihiro Shimoda 
> ---
>   drivers/iommu/dma-iommu.c | 11 +++++++++++
>   1 file changed, 11 insertions(+)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 205d694..9950cb5 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1091,6 +1091,16 @@ static int iommu_dma_get_sgtable(struct device *dev, 
> struct sg_table *sgt,
>   return ret;
>   }
>   
> +static unsigned long iommu_dma_get_merge_boundary(struct device *dev)
> +{
> + struct iommu_domain *domain = iommu_get_dma_domain(dev);
> +
> + if (domain->type != IOMMU_DOMAIN_DMA)
> + return 0;   /* can't merge */
> +
> + return (1 << __ffs(domain->pgsize_bitmap)) - 1;
> +}

I really wonder whether there is any IOMMU that doesn't support 4KiB pages. 
Can't you simply assume that the merge boundary is 4KiB and avoid 
adding this new API?

> +
>   static const struct dma_map_ops iommu_dma_ops = {
>   .alloc  = iommu_dma_alloc,
>   .free   = iommu_dma_free,
> @@ -1106,6 +1116,7 @@ static const struct dma_map_ops iommu_dma_ops = {
>   .sync_sg_for_device = iommu_dma_sync_sg_for_device,
>   .map_resource   = iommu_dma_map_resource,
>   .unmap_resource = iommu_dma_unmap_resource,
> + .get_merge_boundary = iommu_dma_get_merge_boundary,
>   };
>   
>   /*

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland



Re: [PATCH v7 19/21] iommu/mediatek: Rename enable_4GB to dram_is_4gb

2019-06-21 Thread Matthias Brugger



On 20/06/2019 15:59, Yong Wu wrote:
> On Tue, 2019-06-18 at 18:06 +0200, Matthias Brugger wrote:
>>
>> On 10/06/2019 14:17, Yong Wu wrote:
>>> This patch only rename the variable name from enable_4GB to
>>> dram_is_4gb for readable.
>>
>> From my understanding this is true when available RAM > 4GB so I think the 
>> name
>> should be something like dram_bigger_4gb otherwise it may create confusion 
>> again.
> 
> Strictly, It is not "dram_bigger_4gb". actually if the dram size is over
> 3GB (the first 1GB is the register space), the "4GB mode" will be
> enabled. then how about the name "dram_enable_32bit"?(the PA 32bit will
> be enabled in the 4GB mode.)

Ok I think dram_is_4gb is ok then. But I'd suggest to add an explanation above
the struct mtk_iommu_data to explain exactly what this means.

>  
> There is another option, please see the last part in [1] suggested by
> Evan, something like below:
> 
> data->enable_4GB = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
> if (!data->plat_data->has_4gb_mode)
> data->enable_4GB = false;
> Then mtk_iommu_map would only have:
> if (data->enable_4GB)
>  paddr |= BIT_ULL(32);
> 

I think that's a nicer way to handle it.

Regards,
Matthias

> 
> Which one do you prefer?  
>   
> [1] https://lore.kernel.org/patchwork/patch/1028421/
> 
>>
>> Also from my point of view this patch should be done before
>> "[PATCH 06/21] iommu/io-pgtable-arm-v7s: Extend MediaTek 4GB Mode"
> 
> OK.
> 
>>
>> Regards,
>> Matthias
>>
>>>
>>> Signed-off-by: Yong Wu 
>>> Reviewed-by: Evan Green 
>>> ---
>>>  drivers/iommu/mtk_iommu.c | 10 +++++-----
>>>  drivers/iommu/mtk_iommu.h |  2 +-
>>>  2 files changed, 6 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
>>> index 86158d8..67cab2d 100644
>>> --- a/drivers/iommu/mtk_iommu.c
>>> +++ b/drivers/iommu/mtk_iommu.c
>>> @@ -382,7 +382,7 @@ static int mtk_iommu_map(struct iommu_domain *domain, 
>>> unsigned long iova,
>>> int ret;
>>>  
>>> /* The "4GB mode" M4U physically can not use the lower remap of Dram. */
>>> -   if (data->plat_data->has_4gb_mode && data->enable_4GB)
>>> +   if (data->plat_data->has_4gb_mode && data->dram_is_4gb)
>>> paddr |= BIT_ULL(32);
>>>  
>>> spin_lock_irqsave(&dom->pgtlock, flags);
>>> @@ -554,13 +554,13 @@ static int mtk_iommu_hw_init(const struct 
>>> mtk_iommu_data *data)
>>> writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL);
>>>  
>>> if (data->plat_data->m4u_plat == M4U_MT8173)
>>> -   regval = (data->protect_base >> 1) | (data->enable_4GB << 31);
>>> +   regval = (data->protect_base >> 1) | (data->dram_is_4gb << 31);
>>> else
>>> regval = lower_32_bits(data->protect_base) |
>>>  upper_32_bits(data->protect_base);
>>> writel_relaxed(regval, data->base + REG_MMU_IVRP_PADDR);
>>>  
>>> -   if (data->enable_4GB && data->plat_data->has_vld_pa_rng) {
>>> +   if (data->dram_is_4gb && data->plat_data->has_vld_pa_rng) {
>>> /*
>>>  * If 4GB mode is enabled, the validate PA range is from
>>>  * 0x1__ to 0x1__. here record bit[32:30].
>>> @@ -611,8 +611,8 @@ static int mtk_iommu_probe(struct platform_device *pdev)
>>> return -ENOMEM;
>>> data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);
>>>  
>>> -   /* Whether the current dram is over 4GB */
>>> -   data->enable_4GB = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
>>> +   /* Whether the current dram is 4GB. */
>>> +   data->dram_is_4gb = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
>>>  
>>> res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>>> data->base = devm_ioremap_resource(dev, res);
>>> diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
>>> index 753266b..e8114b2 100644
>>> --- a/drivers/iommu/mtk_iommu.h
>>> +++ b/drivers/iommu/mtk_iommu.h
>>> @@ -65,7 +65,7 @@ struct mtk_iommu_data {
>>> struct mtk_iommu_domain *m4u_dom;
>>> struct iommu_group  *m4u_group;
>>> struct mtk_smi_iommusmi_imu;  /* SMI larb iommu info */
>>> -   boolenable_4GB;
>>> +   booldram_is_4gb;
>>> booltlb_flush_active;
>>>  
>>> struct iommu_device iommu;
>>>
> 
> 
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH] Revert "iommu/vt-d: Fix lock inversion between iommu->lock and device_domain_lock"

2019-06-21 Thread Chris Wilson
Quoting Peter Xu (2019-06-21 03:32:05)
> This reverts commit 7560cc3ca7d9d11555f80c830544e463fcdb28b8.
> 
> With 5.2.0-rc5 I can easily trigger this with lockdep and iommu=pt:
> 
> ==
> WARNING: possible circular locking dependency detected
> 5.2.0-rc5 #78 Not tainted
> --
> swapper/0/1 is trying to acquire lock:
> ea2b3beb (&(&iommu->lock)->rlock){+.+.}, at: 
> domain_context_mapping_one+0xa5/0x4e0
> but task is already holding lock:
> a681907b (device_domain_lock){}, at: 
> domain_context_mapping_one+0x8d/0x4e0
> which lock already depends on the new lock.
> the existing dependency chain (in reverse order) is:
> -> #1 (device_domain_lock){}:
>_raw_spin_lock_irqsave+0x3c/0x50
>dmar_insert_one_dev_info+0xbb/0x510
>domain_add_dev_info+0x50/0x90
>dev_prepare_static_identity_mapping+0x30/0x68
>intel_iommu_init+0xddd/0x1422
>pci_iommu_init+0x16/0x3f
>do_one_initcall+0x5d/0x2b4
>kernel_init_freeable+0x218/0x2c1
>kernel_init+0xa/0x100
>ret_from_fork+0x3a/0x50
> -> #0 (&(&iommu->lock)->rlock){+.+.}:
>lock_acquire+0x9e/0x170
>_raw_spin_lock+0x25/0x30
>domain_context_mapping_one+0xa5/0x4e0
>pci_for_each_dma_alias+0x30/0x140
>dmar_insert_one_dev_info+0x3b2/0x510
>domain_add_dev_info+0x50/0x90
>dev_prepare_static_identity_mapping+0x30/0x68
>intel_iommu_init+0xddd/0x1422
>pci_iommu_init+0x16/0x3f
>do_one_initcall+0x5d/0x2b4
>kernel_init_freeable+0x218/0x2c1
>kernel_init+0xa/0x100
>ret_from_fork+0x3a/0x50
> 
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>CPU0CPU1
>
>   lock(device_domain_lock);
>lock(&(&iommu->lock)->rlock);
>lock(device_domain_lock);
>   lock(&(&iommu->lock)->rlock);
> 
>  *** DEADLOCK ***
> 2 locks held by swapper/0/1:
>  #0: 033eb13d (dmar_global_lock){}, at: 
> intel_iommu_init+0x1e0/0x1422
>  #1: a681907b (device_domain_lock){}, at: 
> domain_context_mapping_one+0x8d/0x4e0
> 
> stack backtrace:
> CPU: 2 PID: 1 Comm: swapper/0 Not tainted 5.2.0-rc5 #78
> Hardware name: LENOVO 20KGS35G01/20KGS35G01, BIOS N23ET50W (1.25 ) 
> 06/25/2018
> Call Trace:
>  dump_stack+0x85/0xc0
>  print_circular_bug.cold.57+0x15c/0x195
>  __lock_acquire+0x152a/0x1710
>  lock_acquire+0x9e/0x170
>  ? domain_context_mapping_one+0xa5/0x4e0
>  _raw_spin_lock+0x25/0x30
>  ? domain_context_mapping_one+0xa5/0x4e0
>  domain_context_mapping_one+0xa5/0x4e0
>  ? domain_context_mapping_one+0x4e0/0x4e0
>  pci_for_each_dma_alias+0x30/0x140
>  dmar_insert_one_dev_info+0x3b2/0x510
>  domain_add_dev_info+0x50/0x90
>  dev_prepare_static_identity_mapping+0x30/0x68
>  intel_iommu_init+0xddd/0x1422
>  ? printk+0x58/0x6f
>  ? lockdep_hardirqs_on+0xf0/0x180
>  ? do_early_param+0x8e/0x8e
>  ? e820__memblock_setup+0x63/0x63
>  pci_iommu_init+0x16/0x3f
>  do_one_initcall+0x5d/0x2b4
>  ? do_early_param+0x8e/0x8e
>  ? rcu_read_lock_sched_held+0x55/0x60
>  ? do_early_param+0x8e/0x8e
>  kernel_init_freeable+0x218/0x2c1
>  ? rest_init+0x230/0x230
>  kernel_init+0xa/0x100
>  ret_from_fork+0x3a/0x50
> 
> domain_context_mapping_one() is taking device_domain_lock first then
> iommu lock, while dmar_insert_one_dev_info() is doing the reverse.
> 
> That should be introduced by commit:
> 
> 7560cc3ca7d9 ("iommu/vt-d: Fix lock inversion between iommu->lock and
>   device_domain_lock", 2019-05-27)
> 
> So far I still cannot figure out how the previous deadlock was
> triggered (I cannot find the iommu lock being taken before the call to
> iommu_flush_dev_iotlb()). However, I'm pretty sure that change is
> incomplete at best, because it does not fix all the places, so we are
> still taking the locks in different orders. Reverting it, on the other
> hand, is a clean fix as far as I can see: we should always take
> device_domain_lock first and then the iommu lock.
> 
> We can continue to try to find the real culprit mentioned in
> 7560cc3ca7d9, but for now I think we should revert it to fix current
> breakage.
> 
> CC: Joerg Roedel 
> CC: Lu Baolu 
> CC: dave.ji...@intel.com
> Signed-off-by: Peter Xu 

I've run this through our CI which was also reporting the inversion, so
Tested-by: Chris Wilson 
-Chris


Use after free from intel_alloc_iova

2019-06-21 Thread Chris Wilson
We see a use-after-free in our CI about 20% of the time on a Skylake
iommu testing host, present since enabling that host. Sadly, it has not
presented itself while running under KASAN.

<4> [302.391799] general protection fault:  [#1] PREEMPT SMP PTI
<4> [302.391803] CPU: 7 PID: 4854 Comm: i915_selftest Tainted: G U  
  5.2.0-rc5-CI-CI_DRM_6320+ #1
<4> [302.391805] Hardware name: System manufacturer System Product Name/Z170I 
PRO GAMING, BIOS 1809 07/11/2016
<4> [302.391809] RIP: 0010:rb_prev+0x16/0x50
<4> [302.391811] Code: d0 e9 a5 fe ff ff 4c 89 49 10 c3 4c 89 41 10 c3 0f 1f 40 
00 48 8b 0f 48 39 cf 74 36 48 8b 47 10 48 85 c0 75 05 eb 1a 48 89 d0 <48> 8b 50 
08 48 85 d2 75 f4 f3 c3 48 3b 79 10 75 15 48 8b 09 48 89
<4> [302.391813] RSP: 0018:c954f850 EFLAGS: 00010002
<4> [302.391816] RAX: 6b6b6b6b6b6b6b6b RBX: 0010 RCX: 
6b6b6b6b6b6b6b6b
<4> [302.391818] RDX: 0001 RSI:  RDI: 
88806504dfc0
<4> [302.391820] RBP: 2000 R08: 0001 R09: 

<4> [302.391821] R10: c954f7d0 R11:  R12: 
88822b1d0370
<4> [302.391823] R13: 000fe000 R14: 88809a48f840 R15: 
88806504dfc0
<4> [302.391825] FS:  7fdec7d6de40() GS:88822eb8() 
knlGS:
<4> [302.391827] CS:  0010 DS:  ES:  CR0: 80050033
<4> [302.391829] CR2: 55e125021b78 CR3: 00011277e004 CR4: 
003606e0
<4> [302.391830] DR0:  DR1:  DR2: 

<4> [302.391832] DR3:  DR6: fffe0ff0 DR7: 
0400
<4> [302.391833] Call Trace:
<4> [302.391838]  alloc_iova+0xb3/0x150
<4> [302.391842]  alloc_iova_fast+0x51/0x270
<4> [302.391846]  intel_alloc_iova+0xa0/0xd0
<4> [302.391849]  intel_map_sg+0xae/0x190
<4> [302.391902]  i915_gem_gtt_prepare_pages+0x3e/0xf0 [i915]
<4> [302.391946]  i915_gem_object_get_pages_internal+0x225/0x2b0 [i915]
<4> [302.391981]  i915_gem_object_get_pages+0x1d/0xa0 [i915]
<4> [302.392027]  i915_gem_object_pin_map+0x1cf/0x2a0 [i915]
<4> [302.392073]  igt_fill_blt+0xdb/0x4e0 [i915]
<4> [302.392130]  __i915_subtests+0x1a4/0x1e0 [i915]
<4> [302.392184]  __run_selftests+0x112/0x170 [i915]
<4> [302.392236]  i915_live_selftests+0x2c/0x60 [i915]
<4> [302.392279]  i915_pci_probe+0x83/0x1a0 [i915]
<4> [302.392282]  ? _raw_spin_unlock_irqrestore+0x39/0x60
<4> [302.392285]  pci_device_probe+0x9e/0x120
<4> [302.392287]  really_probe+0xea/0x3c0
<4> [302.392289]  driver_probe_device+0x10b/0x120
<4> [302.392291]  device_driver_attach+0x4a/0x50
<4> [302.392293]  __driver_attach+0x97/0x130
<4> [302.392295]  ? device_driver_attach+0x50/0x50
<4> [302.392296]  bus_for_each_dev+0x74/0xc0
<4> [302.392298]  bus_add_driver+0x13f/0x210
<4> [302.392300]  ? 0xa01d8000
<4> [302.392302]  driver_register+0x56/0xe0
<4> [302.392303]  ? 0xa01d8000
<4> [302.392305]  do_one_initcall+0x58/0x300
<4> [302.392308]  ? kmem_cache_alloc_trace+0x1e8/0x290
<4> [302.392311]  do_init_module+0x56/0x1f6
<4> [302.392312]  load_module+0x24d1/0x2990
<4> [302.392318]  ? __se_sys_finit_module+0xd3/0xf0
<4> [302.392319]  __se_sys_finit_module+0xd3/0xf0
<4> [302.392323]  do_syscall_64+0x55/0x1c0
<4> [302.392325]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [302.392326] RIP: 0033:0x7fdec7428839
<4> [302.392329] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 
f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 
f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
<4> [302.392331] RSP: 002b:7ffec5007258 EFLAGS: 0246 ORIG_RAX: 
0139
<4> [302.392333] RAX: ffda RBX: 55fcf119cc00 RCX: 
7fdec7428839
<4> [302.392335] RDX:  RSI: 55fcf119e570 RDI: 
0006
<4> [302.392336] RBP: 55fcf119e570 R08: 0004 R09: 
55fcf000bc1b
<4> [302.392338] R10: 7ffec50074a0 R11: 0246 R12: 

<4> [302.392340] R13: 55fcf1197070 R14: 0020 R15: 
0042

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6320/fi-skl-iommu/igt@i915_selftest@live_blt.html
https://bugs.freedesktop.org/show_bug.cgi?id=108602
-Chris


Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remampping table entry for watchdog

2019-06-21 Thread Thomas Gleixner
On Wed, 19 Jun 2019, Jacob Pan wrote:
> On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> Thomas Gleixner  wrote:
> > 
> > Unless this problem is not solved and I doubt it can be solved after
> > talking to IOMMU people and studying manuals,
>
> I agree. modify irte might be done with cmpxchg_double() but the queued
> invalidation interface for IRTE cache flush is shared with DMA and
> requires holding a spinlock for enque descriptors, QI tail update etc.
> 
> Also, reserving & manipulating IRTE slot for hpet via backdoor might not
> be needed if the HPET PCI BDF (found in ACPI) can be utilized. But it
> might need more work to add a fake PCI device for HPET.

What would PCI/BDF solve?

Thanks,

tglx


Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remampping table entry for watchdog

2019-06-21 Thread Jacob Pan
On Fri, 21 Jun 2019 17:33:28 +0200 (CEST)
Thomas Gleixner  wrote:

> On Wed, 19 Jun 2019, Jacob Pan wrote:
> > On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> > Thomas Gleixner  wrote:  
> > > 
> > > Unless this problem is not solved and I doubt it can be solved
> > > after talking to IOMMU people and studying manuals,  
> >
> > I agree. modify irte might be done with cmpxchg_double() but the
> > queued invalidation interface for IRTE cache flush is shared with
> > DMA and requires holding a spinlock for enque descriptors, QI tail
> > update etc.
> > 
> > Also, reserving & manipulating IRTE slot for hpet via backdoor
> > might not be needed if the HPET PCI BDF (found in ACPI) can be
> > utilized. But it might need more work to add a fake PCI device for
> > HPET.  
> 
> What would PCI/BDF solve?
I was thinking if HPET is a PCI device then it can naturally
gain slots in IOMMU remapping table IRTEs via PCI MSI code. Then perhaps
it can use the IRQ subsystem to set affinity etc. w/o directly adding
additional helper functions in IRQ remapping code. I have not followed
all the discussions, just a thought.



Re: How to resolve an issue in swiotlb environment?

2019-06-21 Thread Suwan Kim
On Wed, Jun 19, 2019 at 05:05:49PM -0400, Alan Stern wrote:
> On Wed, 19 Jun 2019, shuah wrote:
> 
> > I missed a lot of the thread info. and went looking for it and found the
> > following summary of the problem:
> > 
> > ==
> > The issue which prompted the commit this thread is about arose in a
> > situation where the block layer set up a scatterlist containing buffer
> > sizes something like:
> > 
> > 4096 4096 1536 1024
> > 
> > and the maximum packet size was 1024.  The situation was a little
> > unusual, because it involved vhci-hcd (a virtual HCD).  This doesn't
> > matter much in normal practice because:
> > 
> > Block devices normally have a block size of 512 bytes or more.
> > Smaller values are very uncommon.  So scatterlist element sizes
> > are always divisible by 512.
> > 
> > xHCI is the only USB host controller type with a maximum packet
> > size larger than 512, and xHCI hardware can do full
> > scatter-gather so it doesn't care what the buffer sizes are.
> > 
> > So another approach would be to fix vhci-hcd and then trust that the
> > problem won't arise again, for the reasons above.  We would be okay so
> > long as nobody tried to use a USB-SCSI device with a block size of 256
> > bytes or less.
> > ===
> > 
> > Out of the summary, the following gives me pause:
> > 
> > "xHCI hardware can do full scatter-gather so it doesn't care what the
> > buffer sizes are."
> > 
> > vhci-hcd won't be able to count on hardware being able to do full
> > scatter-gather. It has to deal with a variety of hardware with
> > varying speeds.
> 
> Sure.  But you can test whether the server's HCD is able to handle 
> scatter-gather transfers, and if it is then you can say that the 
> client-side vhci-hcd is able to handle them as well.  Then all you 
> would have to do is preserve the scatterlist information describing the 
> transfer when you go between the client and the server.
> 
> The point is to make sure that the client-side vhci-hcd doesn't claim
> to be _less_ capable than the server-side actual HCD.  That's what
> leads to the problem described above.
> 
> > "We would be okay so long as nobody tried to use a USB-SCSI device with
> > a block size of 256 bytes or less."
> > 
> > At least a USB Storage device, I test with says 512 block size. Can we
> > count on not seeing a device with block size <= 256 bytes?
> 
> Yes, we can.  In fact, the SCSI core doesn't handle devices with block 
> size < 512.
> 
> > In any case, I am looking into adding SG support vhci-hci at the moment.
> > 
> > Looks like the following is the repo, I should be working with?
> > 
> > git://git.infradead.org/users/hch/misc.git
> 
> It doesn't matter.  Your work should end up being independent of 
> Christoph's, so you can base it on any repo.

I have implemented SG support for vhci and will send it as a patch.
Please take a look and let me know if you have any feedback.

Regards

Suwan Kim


Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remampping table entry for watchdog

2019-06-21 Thread Jacob Pan
On Fri, 21 Jun 2019 10:31:26 -0700
Jacob Pan  wrote:

> On Fri, 21 Jun 2019 17:33:28 +0200 (CEST)
> Thomas Gleixner  wrote:
> 
> > On Wed, 19 Jun 2019, Jacob Pan wrote:  
> > > On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> > > Thomas Gleixner  wrote:
> > > > 
> > > > Unless this problem is not solved and I doubt it can be solved
> > > > after talking to IOMMU people and studying manuals,
> > >
> > > I agree. modify irte might be done with cmpxchg_double() but the
> > > queued invalidation interface for IRTE cache flush is shared with
> > > DMA and requires holding a spinlock for enque descriptors, QI tail
> > > update etc.
> > > 
> > > Also, reserving & manipulating IRTE slot for hpet via backdoor
> > > might not be needed if the HPET PCI BDF (found in ACPI) can be
> > > utilized. But it might need more work to add a fake PCI device for
> > > HPET.
> > 
> > What would PCI/BDF solve?  
> I was thinking if HPET is a PCI device then it can naturally
> gain slots in IOMMU remapping table IRTEs via PCI MSI code. Then
> perhaps it can use the IRQ subsystem to set affinity etc. w/o
> directly adding additional helper functions in IRQ remapping code. I
> have not followed all the discussions, just a thought.
> 
I looked at the code again, seems the per cpu HPET code already taken
care of HPET MSI management. Why can't we use IR-HPET-MSI chip and
domain to allocate and set affinity etc.?
Most APIC timer has ARAT not enough per cpu HPET, so per cpu HPET is
not used mostly.


Jacob


Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remampping table entry for watchdog

2019-06-21 Thread Thomas Gleixner
On Fri, 21 Jun 2019, Jacob Pan wrote:
> On Fri, 21 Jun 2019 10:31:26 -0700
> Jacob Pan  wrote:
> 
> > On Fri, 21 Jun 2019 17:33:28 +0200 (CEST)
> > Thomas Gleixner  wrote:
> > 
> > > On Wed, 19 Jun 2019, Jacob Pan wrote:  
> > > > On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> > > > Thomas Gleixner  wrote:
> > > > > 
> > > > > Unless this problem is not solved and I doubt it can be solved
> > > > > after talking to IOMMU people and studying manuals,
> > > >
> > > > I agree. modify irte might be done with cmpxchg_double() but the
> > > > queued invalidation interface for IRTE cache flush is shared with
> > > > DMA and requires holding a spinlock for enque descriptors, QI tail
> > > > update etc.
> > > > 
> > > > Also, reserving & manipulating IRTE slot for hpet via backdoor
> > > > might not be needed if the HPET PCI BDF (found in ACPI) can be
> > > > utilized. But it might need more work to add a fake PCI device for
> > > > HPET.
> > > 
> > > What would PCI/BDF solve?  
> > I was thinking if HPET is a PCI device then it can naturally
> > gain slots in IOMMU remapping table IRTEs via PCI MSI code. Then
> > perhaps it can use the IRQ subsystem to set affinity etc. w/o
> > directly adding additional helper functions in IRQ remapping code. I
> > have not followed all the discussions, just a thought.
> > 
> I looked at the code again, seems the per cpu HPET code already taken
> care of HPET MSI management. Why can't we use IR-HPET-MSI chip and
> domain to allocate and set affinity etc.?
> Most APIC timer has ARAT not enough per cpu HPET, so per cpu HPET is
> not used mostly.

Sure, we can use that, but that does not allow to move the affinity from
NMI context either. Same issue with the IOMMU as with the other hack.

Thanks,

tglx


Re: [RFC PATCH v4 20/21] iommu/vt-d: hpet: Reserve an interrupt remampping table entry for watchdog

2019-06-21 Thread Ricardo Neri
On Fri, Jun 21, 2019 at 10:05:01PM +0200, Thomas Gleixner wrote:
> On Fri, 21 Jun 2019, Jacob Pan wrote:
> > On Fri, 21 Jun 2019 10:31:26 -0700
> > Jacob Pan  wrote:
> > 
> > > On Fri, 21 Jun 2019 17:33:28 +0200 (CEST)
> > > Thomas Gleixner  wrote:
> > > 
> > > > On Wed, 19 Jun 2019, Jacob Pan wrote:  
> > > > > On Tue, 18 Jun 2019 01:08:06 +0200 (CEST)
> > > > > Thomas Gleixner  wrote:
> > > > > > 
> > > > > > Unless this problem is not solved and I doubt it can be solved
> > > > > > after talking to IOMMU people and studying manuals,
> > > > >
> > > > > I agree. modify irte might be done with cmpxchg_double() but the
> > > > > queued invalidation interface for IRTE cache flush is shared with
> > > > > DMA and requires holding a spinlock for enque descriptors, QI tail
> > > > > update etc.
> > > > > 
> > > > > Also, reserving & manipulating IRTE slot for hpet via backdoor
> > > > > might not be needed if the HPET PCI BDF (found in ACPI) can be
> > > > > utilized. But it might need more work to add a fake PCI device for
> > > > > HPET.
> > > > 
> > > > What would PCI/BDF solve?  
> > > I was thinking if HPET is a PCI device then it can naturally
> > > gain slots in IOMMU remapping table IRTEs via PCI MSI code. Then
> > > perhaps it can use the IRQ subsystem to set affinity etc. w/o
> > > directly adding additional helper functions in IRQ remapping code. I
> > > have not followed all the discussions, just a thought.
> > > 
> > I looked at the code again, seems the per cpu HPET code already taken
> > care of HPET MSI management. Why can't we use IR-HPET-MSI chip and
> > domain to allocate and set affinity etc.?
> > Most APIC timer has ARAT not enough per cpu HPET, so per cpu HPET is
> > not used mostly.
> 
> Sure, we can use that, but that does not allow to move the affinity from
> NMI context either. Same issue with the IOMMU as with the other hack.

If I understand Thomas' point correctly, the problem is having to take a
lock in NMI context to update the IRTE for the HPET; this is true both
of my hack and of the generic irq code. The problem is worse with the
generic irq code, as there are several layers and several locks that
need to be handled.

Thanks and BR,
Ricardo


Re: [PATCH v2 02/12] iommu/mediatek: Add probe_defer for smi-larb

2019-06-21 Thread Yong Wu


On Wed, 2019-06-19 at 15:52 +0200, Matthias Brugger wrote:
> 
> On 10/06/2019 14:55, Yong Wu wrote:
> > The iommu consumer should use device_link to connect with the
> > smi-larb(supplier). then the smi-larb should run before the iommu
> > consumer. Here we delay the iommu driver until the smi driver is
> > ready, then all the iommu consumer always is after the smi driver.
> > 
> > When there is no this patch, if some consumer drivers run before
> > smi-larb, the supplier link_status is DL_DEV_NO_DRIVER(0) in the
> > device_link_add, then device_links_driver_bound will use WARN_ON
> > to complain that the link_status of supplier is not right.
> > 
> > This is a preparing patch for adding device_link.
> > 
> > Signed-off-by: Yong Wu 
> > ---
> >  drivers/iommu/mtk_iommu.c| 2 +-
> >  drivers/iommu/mtk_iommu_v1.c | 2 +-
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> > index 6fe3369..f7599d8 100644
> > --- a/drivers/iommu/mtk_iommu.c
> > +++ b/drivers/iommu/mtk_iommu.c
> > @@ -664,7 +664,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
> > id = i;
> >  
> > plarbdev = of_find_device_by_node(larbnode);
> > -   if (!plarbdev) {
> > +   if (!plarbdev || !plarbdev->dev.driver) {
> 
> can't we use:
> device_lock()
> device_is_bound(struct device *dev)
> device_unlock()

That API is new to me, thanks for the hint. I have tried it and it works.
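A sketch of what Matthias' suggestion might look like in the probe path (kernel context, not standalone code; `bound` is a local introduced here for illustration):

```c
plarbdev = of_find_device_by_node(larbnode);
if (!plarbdev)
	return -EPROBE_DEFER;

/* Ask the driver core whether the larb driver is bound, instead of
 * peeking at plarbdev->dev.driver directly. */
device_lock(&plarbdev->dev);
bound = device_is_bound(&plarbdev->dev);
device_unlock(&plarbdev->dev);
if (!bound) {
	of_node_put(larbnode);
	return -EPROBE_DEFER;
}
```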


> 
> > of_node_put(larbnode);
> > return -EPROBE_DEFER;
> > }
> > diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
> > index 0b0908c..c43c4a0 100644
> > --- a/drivers/iommu/mtk_iommu_v1.c
> > +++ b/drivers/iommu/mtk_iommu_v1.c
> > @@ -604,7 +604,7 @@ static int mtk_iommu_probe(struct platform_device *pdev)
> > plarbdev = of_platform_device_create(
> > larb_spec.np, NULL,
> > platform_bus_type.dev_root);
> > -   if (!plarbdev) {
> > +   if (!plarbdev || !plarbdev->dev.driver) {
> > of_node_put(larb_spec.np);
> > return -EPROBE_DEFER;
> > }
> > 





Re: [PATCH v7 19/21] iommu/mediatek: Rename enable_4GB to dram_is_4gb

2019-06-21 Thread Yong Wu


On Fri, 2019-06-21 at 12:10 +0200, Matthias Brugger wrote:
> 
> On 20/06/2019 15:59, Yong Wu wrote:
> > On Tue, 2019-06-18 at 18:06 +0200, Matthias Brugger wrote:
> >>
> >> On 10/06/2019 14:17, Yong Wu wrote:
> >>> This patch only rename the variable name from enable_4GB to
> >>> dram_is_4gb for readable.
> >>
> >> From my understanding this is true when available RAM > 4GB so I think the 
> >> name
> >> should be something like dram_bigger_4gb otherwise it may create confusion 
> >> again.
> > 
> > Strictly, It is not "dram_bigger_4gb". actually if the dram size is over
> > 3GB (the first 1GB is the register space), the "4GB mode" will be
> > enabled. then how about the name "dram_enable_32bit"?(the PA 32bit will
> > be enabled in the 4GB mode.)
> 
> Ok I think dram_is_4gb is ok then. But I'd suggest to add an explanation above
> the struct mtk_iommu_data to explain exactly what this means.
> 
> >  
> > There is another option, please see the last part in [1] suggested by
> > Evan, something like below:
> > 
> > data->enable_4GB = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
> > if (!data->plat_data->has_4gb_mode)
> > data->enable_4GB = false;
> > Then mtk_iommu_map would only have:
> > if (data->enable_4GB)
> >  paddr |= BIT_ULL(32);
> > 
> 
> I think that's a nicer way to handle it.

Thanks for your feedback. I will go with this approach then.

> 
> Regards,
> Matthias
> 
> > 
> > Which one do you prefer?  
> >   
> > [1] https://lore.kernel.org/patchwork/patch/1028421/
> > 
> >>
> >> Also from my point of view this patch should be done before
> >> "[PATCH 06/21] iommu/io-pgtable-arm-v7s: Extend MediaTek 4GB Mode"
> > 
> > OK.
> > 
> >>
> >> Regards,
> >> Matthias
> >>
> >>>
> >>> Signed-off-by: Yong Wu 
> >>> Reviewed-by: Evan Green 
> >>> ---
> >>>  drivers/iommu/mtk_iommu.c | 10 +++++-----
> >>>  drivers/iommu/mtk_iommu.h |  2 +-
> >>>  2 files changed, 6 insertions(+), 6 deletions(-)
> >>>
> >>> diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
> >>> index 86158d8..67cab2d 100644
> >>> --- a/drivers/iommu/mtk_iommu.c
> >>> +++ b/drivers/iommu/mtk_iommu.c
> >>> @@ -382,7 +382,7 @@ static int mtk_iommu_map(struct iommu_domain *domain, 
> >>> unsigned long iova,
> >>>   int ret;
> >>>  
> >>>   /* The "4GB mode" M4U physically can not use the lower remap of Dram. */
> >>> - if (data->plat_data->has_4gb_mode && data->enable_4GB)
> >>> + if (data->plat_data->has_4gb_mode && data->dram_is_4gb)
> >>>   paddr |= BIT_ULL(32);
> >>>  
> >>>   spin_lock_irqsave(&dom->pgtlock, flags);
> >>> @@ -554,13 +554,13 @@ static int mtk_iommu_hw_init(const struct 
> >>> mtk_iommu_data *data)
> >>>   writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL);
> >>>  
> >>>   if (data->plat_data->m4u_plat == M4U_MT8173)
> >>> - regval = (data->protect_base >> 1) | (data->enable_4GB << 31);
> >>> + regval = (data->protect_base >> 1) | (data->dram_is_4gb << 31);
> >>>   else
> >>>   regval = lower_32_bits(data->protect_base) |
> >>>upper_32_bits(data->protect_base);
> >>>   writel_relaxed(regval, data->base + REG_MMU_IVRP_PADDR);
> >>>  
> >>> - if (data->enable_4GB && data->plat_data->has_vld_pa_rng) {
> >>> + if (data->dram_is_4gb && data->plat_data->has_vld_pa_rng) {
> >>>   /*
> >>>* If 4GB mode is enabled, the validate PA range is from
> >>>* 0x1__ to 0x1__. here record bit[32:30].
> >>> @@ -611,8 +611,8 @@ static int mtk_iommu_probe(struct platform_device 
> >>> *pdev)
> >>>   return -ENOMEM;
> >>>   data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);
> >>>  
> >>> - /* Whether the current dram is over 4GB */
> >>> - data->enable_4GB = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
> >>> + /* Whether the current dram is 4GB. */
> >>> + data->dram_is_4gb = !!(max_pfn > (BIT_ULL(32) >> PAGE_SHIFT));
> >>>  
> >>>   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> >>>   data->base = devm_ioremap_resource(dev, res);
> >>> diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h
> >>> index 753266b..e8114b2 100644
> >>> --- a/drivers/iommu/mtk_iommu.h
> >>> +++ b/drivers/iommu/mtk_iommu.h
> >>> @@ -65,7 +65,7 @@ struct mtk_iommu_data {
> >>>   struct mtk_iommu_domain *m4u_dom;
> >>>   struct iommu_group  *m4u_group;
> >>>   struct mtk_smi_iommusmi_imu;  /* SMI larb iommu info */
> >>> - boolenable_4GB;
> >>> + booldram_is_4gb;
> >>>   booltlb_flush_active;
> >>>  
> >>>   struct iommu_device iommu;
> >>>



___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v2 05/12] media: mtk-jpeg: Get rid of mtk_smi_larb_get/put

2019-06-21 Thread Yong Wu
On Thu, 2019-06-20 at 17:20 +0200, Matthias Brugger wrote:
> 
> On 10/06/2019 14:55, Yong Wu wrote:
> > MediaTek IOMMU has already added a device_link between the consumer
> > and the smi-larb device. If the jpg device calls pm_runtime_get_sync,
> > the smi-larb's pm_runtime_get_sync is also called automatically.
> 
> Please help me find this relation. I seem to be missing something basic,
> because I can't find any link between the jpeg IP and the iommu.

JPEG is also a multimedia consumer; it also accesses memory via the IOMMU.
All the current SoCs have JPG smi ports.

grep -r JPG include/dt-bindings/memory/mt*

> 
> Regards,
> Matthias
> 
> > 
> > CC: Rick Chang 
> > Signed-off-by: Yong Wu 
> > Reviewed-by: Evan Green 
> > ---
> >  drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c | 22 --
> >  drivers/media/platform/mtk-jpeg/mtk_jpeg_core.h |  2 --
> >  2 files changed, 24 deletions(-)
> > 
> > diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c 
> > b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> > index f761e4d..2f37538 100644
> > --- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> > +++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> > @@ -29,7 +29,6 @@
> >  #include 
> >  #include 
> >  #include 
> > -#include 
> >  
> >  #include "mtk_jpeg_hw.h"
> >  #include "mtk_jpeg_core.h"
> > @@ -901,11 +900,6 @@ static int mtk_jpeg_queue_init(void *priv, struct 
> > vb2_queue *src_vq,
> >  
> >  static void mtk_jpeg_clk_on(struct mtk_jpeg_dev *jpeg)
> >  {
> > -   int ret;
> > -
> > -   ret = mtk_smi_larb_get(jpeg->larb);
> > -   if (ret)
> > -   dev_err(jpeg->dev, "mtk_smi_larb_get larbvdec fail %d\n", ret);
> > clk_prepare_enable(jpeg->clk_jdec_smi);
> > clk_prepare_enable(jpeg->clk_jdec);
> >  }
> > @@ -914,7 +908,6 @@ static void mtk_jpeg_clk_off(struct mtk_jpeg_dev *jpeg)
> >  {
> > clk_disable_unprepare(jpeg->clk_jdec);
> > clk_disable_unprepare(jpeg->clk_jdec_smi);
> > -   mtk_smi_larb_put(jpeg->larb);
> >  }
> >  
> >  static irqreturn_t mtk_jpeg_dec_irq(int irq, void *priv)
> > @@ -1059,21 +1052,6 @@ static int mtk_jpeg_release(struct file *file)
> >  
> >  static int mtk_jpeg_clk_init(struct mtk_jpeg_dev *jpeg)
> >  {
> > -   struct device_node *node;
> > -   struct platform_device *pdev;
> > -
> > -   node = of_parse_phandle(jpeg->dev->of_node, "mediatek,larb", 0);
> > -   if (!node)
> > -   return -EINVAL;
> > -   pdev = of_find_device_by_node(node);
> > -   if (WARN_ON(!pdev)) {
> > -   of_node_put(node);
> > -   return -EINVAL;
> > -   }
> > -   of_node_put(node);
> > -
> > -   jpeg->larb = &pdev->dev;
> > -
> > jpeg->clk_jdec = devm_clk_get(jpeg->dev, "jpgdec");
> > if (IS_ERR(jpeg->clk_jdec))
> > return PTR_ERR(jpeg->clk_jdec);
> > diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.h 
> > b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.h
> > index 1a6cdfd..e35fb79 100644
> > --- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.h
> > +++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.h
> > @@ -55,7 +55,6 @@ enum mtk_jpeg_ctx_state {
> >   * @dec_reg_base:  JPEG registers mapping
> >   * @clk_jdec:  JPEG hw working clock
> >   * @clk_jdec_smi:  JPEG SMI bus clock
> > - * @larb:  SMI device
> >   */
> >  struct mtk_jpeg_dev {
> > struct mutexlock;
> > @@ -69,7 +68,6 @@ struct mtk_jpeg_dev {
> > void __iomem*dec_reg_base;
> > struct clk  *clk_jdec;
> > struct clk  *clk_jdec_smi;
> > -   struct device   *larb;
> >  };
> >  
> >  /**
> > 




[PATCH] iommu/dma: Fix calculation overflow in __finalise_sg()

2019-06-21 Thread Nicolin Chen
max_len is a u32 type variable, so the calculation on the
left-hand side of the last if-condition can potentially overflow
when cur_len gets close to UINT_MAX -- note that there are
drivers setting max_seg_size to UINT_MAX:
  drivers/dma/dw-edma/dw-edma-core.c:745:
dma_set_max_seg_size(dma->dev, U32_MAX);
  drivers/dma/dma-axi-dmac.c:871:
dma_set_max_seg_size(&pdev->dev, UINT_MAX);
  drivers/mmc/host/renesas_sdhi_internal_dmac.c:338:
dma_set_max_seg_size(dev, 0x);
  drivers/nvme/host/pci.c:2520:
dma_set_max_seg_size(dev->dev, 0x);

So this patch just casts cur_len in the calculation to a size_t
type to fix the overflow issue, as it's not necessary to change
the type of cur_len itself.

Fixes: 809eac54cdd6 ("iommu/dma: Implement scatterlist segment merging")
Cc: sta...@vger.kernel.org
Signed-off-by: Nicolin Chen 
---
 drivers/iommu/dma-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a9f13313a22f..676b7ecd451e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -764,7 +764,7 @@ static int __finalise_sg(struct device *dev, struct 
scatterlist *sg, int nents,
 * - and wouldn't make the resulting output segment too long
 */
if (cur_len && !s_iova_off && (dma_addr & seg_mask) &&
-   (cur_len + s_length <= max_len)) {
+   ((size_t)cur_len + s_length <= max_len)) {
/* ...then concatenate it with the previous one */
cur_len += s_length;
} else {
-- 
2.17.1



Re: Use after free from intel_alloc_iova

2019-06-21 Thread Lu Baolu

Hi Chris,

Thanks for the test and report.

On 6/21/19 9:27 PM, Chris Wilson wrote:

We see a use-after-free in our CI about 20% of the time on a Skylake
iommu testing host, present since enabling that host. Sadly, it has not
presented itself while running under KASAN.

<4> [302.391799] general protection fault:  [#1] PREEMPT SMP PTI
<4> [302.391803] CPU: 7 PID: 4854 Comm: i915_selftest Tainted: G U  
  5.2.0-rc5-CI-CI_DRM_6320+ #1


Since it's CI-CI_DRM_6320+, what kind of patches have you applied on top
of 5.2.0-rc5?

Best regards,
Baolu



<4> [302.391805] Hardware name: System manufacturer System Product Name/Z170I 
PRO GAMING, BIOS 1809 07/11/2016
<4> [302.391809] RIP: 0010:rb_prev+0x16/0x50
<4> [302.391811] Code: d0 e9 a5 fe ff ff 4c 89 49 10 c3 4c 89 41 10 c3 0f 1f 40 00 48 
8b 0f 48 39 cf 74 36 48 8b 47 10 48 85 c0 75 05 eb 1a 48 89 d0 <48> 8b 50 08 48 85 d2 
75 f4 f3 c3 48 3b 79 10 75 15 48 8b 09 48 89
<4> [302.391813] RSP: 0018:c954f850 EFLAGS: 00010002
<4> [302.391816] RAX: 6b6b6b6b6b6b6b6b RBX: 0010 RCX: 
6b6b6b6b6b6b6b6b
<4> [302.391818] RDX: 0001 RSI:  RDI: 
88806504dfc0
<4> [302.391820] RBP: 2000 R08: 0001 R09: 

<4> [302.391821] R10: c954f7d0 R11:  R12: 
88822b1d0370
<4> [302.391823] R13: 000fe000 R14: 88809a48f840 R15: 
88806504dfc0
<4> [302.391825] FS:  7fdec7d6de40() GS:88822eb8() 
knlGS:
<4> [302.391827] CS:  0010 DS:  ES:  CR0: 80050033
<4> [302.391829] CR2: 55e125021b78 CR3: 00011277e004 CR4: 
003606e0
<4> [302.391830] DR0:  DR1:  DR2: 

<4> [302.391832] DR3:  DR6: fffe0ff0 DR7: 
0400
<4> [302.391833] Call Trace:
<4> [302.391838]  alloc_iova+0xb3/0x150
<4> [302.391842]  alloc_iova_fast+0x51/0x270
<4> [302.391846]  intel_alloc_iova+0xa0/0xd0
<4> [302.391849]  intel_map_sg+0xae/0x190
<4> [302.391902]  i915_gem_gtt_prepare_pages+0x3e/0xf0 [i915]
<4> [302.391946]  i915_gem_object_get_pages_internal+0x225/0x2b0 [i915]
<4> [302.391981]  i915_gem_object_get_pages+0x1d/0xa0 [i915]
<4> [302.392027]  i915_gem_object_pin_map+0x1cf/0x2a0 [i915]
<4> [302.392073]  igt_fill_blt+0xdb/0x4e0 [i915]
<4> [302.392130]  __i915_subtests+0x1a4/0x1e0 [i915]
<4> [302.392184]  __run_selftests+0x112/0x170 [i915]
<4> [302.392236]  i915_live_selftests+0x2c/0x60 [i915]
<4> [302.392279]  i915_pci_probe+0x83/0x1a0 [i915]
<4> [302.392282]  ? _raw_spin_unlock_irqrestore+0x39/0x60
<4> [302.392285]  pci_device_probe+0x9e/0x120
<4> [302.392287]  really_probe+0xea/0x3c0
<4> [302.392289]  driver_probe_device+0x10b/0x120
<4> [302.392291]  device_driver_attach+0x4a/0x50
<4> [302.392293]  __driver_attach+0x97/0x130
<4> [302.392295]  ? device_driver_attach+0x50/0x50
<4> [302.392296]  bus_for_each_dev+0x74/0xc0
<4> [302.392298]  bus_add_driver+0x13f/0x210
<4> [302.392300]  ? 0xa01d8000
<4> [302.392302]  driver_register+0x56/0xe0
<4> [302.392303]  ? 0xa01d8000
<4> [302.392305]  do_one_initcall+0x58/0x300
<4> [302.392308]  ? kmem_cache_alloc_trace+0x1e8/0x290
<4> [302.392311]  do_init_module+0x56/0x1f6
<4> [302.392312]  load_module+0x24d1/0x2990
<4> [302.392318]  ? __se_sys_finit_module+0xd3/0xf0
<4> [302.392319]  __se_sys_finit_module+0xd3/0xf0
<4> [302.392323]  do_syscall_64+0x55/0x1c0
<4> [302.392325]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
<4> [302.392326] RIP: 0033:0x7fdec7428839
<4> [302.392329] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 
89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 
01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
<4> [302.392331] RSP: 002b:7ffec5007258 EFLAGS: 0246 ORIG_RAX: 
0139
<4> [302.392333] RAX: ffda RBX: 55fcf119cc00 RCX: 
7fdec7428839
<4> [302.392335] RDX:  RSI: 55fcf119e570 RDI: 
0006
<4> [302.392336] RBP: 55fcf119e570 R08: 0004 R09: 
55fcf000bc1b
<4> [302.392338] R10: 7ffec50074a0 R11: 0246 R12: 

<4> [302.392340] R13: 55fcf1197070 R14: 0020 R15: 
0042

https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6320/fi-skl-iommu/igt@i915_selftest@live_blt.html
https://bugs.freedesktop.org/show_bug.cgi?id=108602
-Chris

