Re: [RFC PATCH 1/3] of/pci: dma-ranges to account highest possible host bridge dma_mask

2017-03-27 Thread Oza Oza via iommu
On Mon, Mar 27, 2017 at 8:16 PM, Rob Herring wrote: > On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote: >> it is possible that a PCI device supports 64-bit DMA addressing, >> and thus its driver sets the device's dma_mask to DMA_BIT_MASK(64), >> however PCI

Re: [PATCH V9 00/11] IOMMU probe deferral support

2017-03-27 Thread Sricharan R
Hi, On 24/03/17 09:27, Shameerali Kolothum Thodi wrote: Hi Sricharan, -Original Message- From: Sricharan R [mailto:sricha...@codeaurora.org] [...] Looks like this triggers the start of the bug. So the below check in iommu_dma_init_domain fails, if

Re: [PATCH 3/9] Docs: dt: document qcom iommu bindings

2017-03-27 Thread Rob Herring
On Thu, Mar 23, 2017 at 9:45 PM, Rob Clark wrote: > On Thu, Mar 23, 2017 at 6:21 PM, Rob Herring wrote: >> On Tue, Mar 14, 2017 at 11:18:05AM -0400, Rob Clark wrote: >>> Cc: devicet...@vger.kernel.org >>> Signed-off-by: Rob Clark >>>

Re: [PATCH V9 00/11] IOMMU probe deferral support

2017-03-27 Thread Lorenzo Pieralisi
On Mon, Mar 27, 2017 at 05:18:15PM +0100, Robin Murphy wrote: [...] > >> [ 145.212351] iommu: Adding device :81:10.0 to group 5 > >> [ 145.212367] ixgbevf :81:10.0: 0x0 0x1, 0x0 0x, > >> 0x 0x > >> [ 145.213261] ixgbevf :81:10.0: enabling device

Re: [PATCH V9 00/11] IOMMU probe deferral support

2017-03-27 Thread Robin Murphy
On 27/03/17 16:58, Shameerali Kolothum Thodi wrote: > > >> -Original Message- >> From: Shameerali Kolothum Thodi >> Sent: Monday, March 27, 2017 3:53 PM >> To: 'Robin Murphy'; Sricharan R; Wangzhou (B); will.dea...@arm.com; >> j...@8bytes.org; lorenzo.pieral...@arm.com;

RE: [PATCH V9 00/11] IOMMU probe deferral support

2017-03-27 Thread Shameerali Kolothum Thodi
> -Original Message- > From: Shameerali Kolothum Thodi > Sent: Monday, March 27, 2017 3:53 PM > To: 'Robin Murphy'; Sricharan R; Wangzhou (B); will.dea...@arm.com; > j...@8bytes.org; lorenzo.pieral...@arm.com; iommu@lists.linux- > foundation.org; linux-arm-ker...@lists.infradead.org;

Re: [RFC PATCH 24/30] iommu: Specify PASID state when unbinding a task

2017-03-27 Thread Joerg Roedel
On Fri, Mar 24, 2017 at 07:08:47PM +, Jean-Philippe Brucker wrote: > On 24/03/17 11:00, Joerg Roedel wrote: > > The document you posted is an addition to the spec, so we can't rely on > > a stop marker being sent by a device when it shuts down a context. > > Current AMD GPUs don't send one,

Re: [RFC PATCH 3/3] of: fix node traversing in of_dma_get_range

2017-03-27 Thread Robin Murphy
Hi Rob, On 27/03/17 15:34, Rob Herring wrote: > On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote: >> it jumps to the parent node without examining the child node. >> also with that, it throws "no dma-ranges found for node" >> for pci dma-ranges. >> >> this patch fixes

Re: [RFC PATCH 3/3] of: fix node traversing in of_dma_get_range

2017-03-27 Thread Rob Herring
On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote: > it jumps to the parent node without examining the child node. > also with that, it throws "no dma-ranges found for node" > for pci dma-ranges. > > this patch fixes device node traversing for dma-ranges. What's the DT
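The traversal bug described in this patch can be sketched with invented types (the real code uses `struct device_node` and of_* helpers): the broken walk starts at the parent and never inspects the node itself, so a dma-ranges property on the PCI node is missed and "no dma-ranges found for node" is reported.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for struct device_node; only the fields the sketch needs. */
struct dt_node {
    struct dt_node *parent;
    bool has_dma_ranges;
};

/* Buggy walk: jumps straight to the parent, skipping the node itself. */
static struct dt_node *find_dma_ranges_buggy(struct dt_node *np)
{
    for (np = np->parent; np; np = np->parent)
        if (np->has_dma_ranges)
            return np;
    return NULL; /* -> "no dma-ranges found for node" */
}

/* Fixed walk: examine the current node before moving up the tree. */
static struct dt_node *find_dma_ranges_fixed(struct dt_node *np)
{
    for (; np; np = np->parent)
        if (np->has_dma_ranges)
            return np;
    return NULL;
}
```

With a root node lacking dma-ranges and a PCI child node carrying it, the buggy version returns NULL while the fixed version finds the property on the child itself.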

RE: [PATCH] iommu/amd: flush IOTLB for specific domains only

2017-03-27 Thread Nath, Arindam
>-Original Message- >From: Daniel Drake [mailto:dr...@endlessm.com] >Sent: Monday, March 27, 2017 5:56 PM >To: Nath, Arindam >Cc: j...@8bytes.org; Deucher, Alexander; Bridgman, John; amd- >g...@lists.freedesktop.org; iommu@lists.linux-foundation.org; Suthikulpanit, >Suravee; Linux

Re: [PATCH] iommu/amd: flush IOTLB for specific domains only

2017-03-27 Thread Daniel Drake
Hi Arindam, You CC'd me on this - does this mean that it is a fix for the issue described in the thread "amd-iommu: can't boot with amdgpu, AMD-Vi: Completion-Wait loop timed out" ? Thanks Daniel On Mon, Mar 27, 2017 at 12:17 AM, wrote: > From: Arindam Nath

Re: amd-iommu: can't boot with amdgpu, AMD-Vi: Completion-Wait loop timed out

2017-03-27 Thread Daniel Drake
Hi Joerg, Thanks for looking into this. We confirm that this workaround avoids the iommu log spam and that amdgpu appears to be working fine with it. Daniel On Wed, Mar 22, 2017 at 5:22 AM, j...@8bytes.org wrote: > On Tue, Mar 21, 2017 at 04:30:55PM +, Deucher, Alexander

Re: [RFC PATCH 01/30] iommu/arm-smmu-v3: Link groups and devices

2017-03-27 Thread Robin Murphy
Hi Jean-Philippe, On 27/02/17 19:54, Jean-Philippe Brucker wrote: > Reintroduce smmu_group. This structure was removed during the generic DT > bindings rework, but will be needed when implementing PCIe ATS, to lookup > devices attached to a given domain. > > When unmapping from a domain, we need

Re: [RFC PATCH 21/30] iommu/arm-smmu-v3: Handle device faults from PRI

2017-03-27 Thread Jean-Philippe Brucker
Hi Valmiki, On 25/03/17 05:16, valmiki wrote: >> When we receive a PRI Page Request (PPR) from the SMMU, it contains a >> context identifier SID:SSID, an IOVA and the requested access flags. >> >> Search the domain corresponding to SID:SSID, and call handle_mm_fault on >> its mm. If memory
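The dispatch path quoted above (look up the context bound to SID:SSID, then fault in the page) can be modeled roughly as follows. Every name here is invented for the sketch; the real handler lives in the SMMUv3 driver and ends in handle_mm_fault on the context's mm.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy binding table: each entry associates a SID:SSID pair with a
 * bound context (standing in for a domain + mm). */
struct pri_context {
    uint32_t sid, ssid;
    int faults_handled;
};

#define NCTX 4
static struct pri_context contexts[NCTX];

static struct pri_context *find_context(uint32_t sid, uint32_t ssid)
{
    for (int i = 0; i < NCTX; i++)
        if (contexts[i].sid == sid && contexts[i].ssid == ssid)
            return &contexts[i];
    return NULL;
}

/* Returns 0 on success; -1 means no context is bound, so the device
 * must get a PRI response indicating the request is invalid. */
static int handle_ppr(uint32_t sid, uint32_t ssid, uint64_t iova)
{
    struct pri_context *ctx = find_context(sid, ssid);

    (void)iova; /* a real handler would pass this to handle_mm_fault */
    if (!ctx)
        return -1;
    ctx->faults_handled++;
    return 0;
}
```

The interesting design point the thread discusses is exactly the failure branch: what response to send when no matching SID:SSID context exists, or when the memory access cannot be satisfied.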

Re: [PATCH 5/4] iommu/arm-smmu: Poll for TLB sync completion more effectively

2017-03-27 Thread Sunil Kovvuri
On Thu, Mar 23, 2017 at 11:29 PM, Robin Murphy wrote: > On relatively slow development platforms and software models, the > inefficiency of our TLB sync loop tends not to show up - for instance on > a Juno r1 board I typically see the TLBI has completed of its own accord >
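The "more effective" polling the patch describes is a two-phase wait: spin briefly for the common fast case, then fall back to bounded, delayed retries so slow completions do not burn CPU. A minimal userspace model (the register read and the limits are invented stand-ins, not the driver's actual code):

```c
#include <stdbool.h>

#define SPIN_LIMIT  16   /* cheap busy-wait iterations (fast path)  */
#define DELAY_LIMIT 100  /* bounded slow-path retries with backoff  */

/* Mock of the SMMU's TLBSTATUS register: stays "active" for a fixed
 * number of reads, then reports the sync as complete. */
static int pending_reads;
static bool read_sync_active(void) { return --pending_reads > 0; }

static bool poll_tlb_sync(void)
{
    /* Fast path: the sync usually completes almost immediately. */
    for (int i = 0; i < SPIN_LIMIT; i++)
        if (!read_sync_active())
            return true;

    /* Slow path: in a driver each iteration would udelay()/cpu_relax()
     * before re-reading the register. */
    for (int i = 0; i < DELAY_LIMIT; i++)
        if (!read_sync_active())
            return true;

    return false; /* timed out */
}
```

On fast hardware the first loop almost always catches the completion, which matches Robin's observation that on a Juno r1 the TLBI has typically completed of its own accord before the sync is even polled.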

Re: [RFC PATCH 29/30] vfio: Add support for Shared Virtual Memory

2017-03-27 Thread Jean-Philippe Brucker
On 24/03/17 07:46, Liu, Yi L wrote: [...] So we need some kind of high-level classification that the vIOMMU must communicate to the physical one. Each IOMMU flavor would get a unique, global identifier, simply to make sure that vIOMMU and pIOMMU speak >> the same language.

[PATCH] iommu/amd: flush IOTLB for specific domains only

2017-03-27 Thread arindam . nath
From: Arindam Nath The idea behind flush queues is to defer the IOTLB flushing for domains for which the mappings are no longer valid. We add such domains in queue_add(), and when the queue size reaches FLUSH_QUEUE_SIZE, we perform __queue_flush(). Since we have already
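The mechanism named in this patch (queue_add, FLUSH_QUEUE_SIZE, __queue_flush) can be sketched as a toy userspace model; the data layout is invented for illustration, only the function names follow the mail. Domains whose mappings became invalid are queued, and when the queue fills, only those queued domains are flushed rather than all domains.

```c
#define FLUSH_QUEUE_SIZE 4
#define MAX_DOMAINS 16

static int queue[FLUSH_QUEUE_SIZE];
static int queue_len;
static int flushed[MAX_DOMAINS]; /* per-domain flush counters */

/* Flush exactly the domains currently in the queue, then empty it. */
static void __queue_flush(void)
{
    for (int i = 0; i < queue_len; i++)
        flushed[queue[i]]++;
    queue_len = 0;
}

/* Defer the IOTLB flush: remember the domain, flushing only when the
 * queue reaches FLUSH_QUEUE_SIZE. */
static void queue_add(int domain_id)
{
    if (queue_len == FLUSH_QUEUE_SIZE)
        __queue_flush();
    queue[queue_len++] = domain_id;
}
```

After adding domains 1, 1, 2, 3 and then 5, the first four entries are flushed when the fifth arrives; domain 5 stays pending and domains that were never queued are never flushed, which is the "specific domains only" behavior the subject line advertises.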