On Mon, Mar 27, 2017 at 8:16 PM, Rob Herring wrote:
> On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote:
>> it is possible that a PCI device supports 64-bit DMA addressing,
>> and thus its driver sets the device's dma_mask to DMA_BIT_MASK(64),
>> however PCI
Hi,
On 24/03/17 09:27, Shameerali Kolothum Thodi wrote:
Hi Sricharan,
-Original Message-
From: Sricharan R [mailto:sricha...@codeaurora.org]
[...]
Looks like this triggers the start of the bug.
So the check below in iommu_dma_init_domain fails,
if
On Thu, Mar 23, 2017 at 9:45 PM, Rob Clark wrote:
> On Thu, Mar 23, 2017 at 6:21 PM, Rob Herring wrote:
>> On Tue, Mar 14, 2017 at 11:18:05AM -0400, Rob Clark wrote:
>>> Cc: devicet...@vger.kernel.org
>>> Signed-off-by: Rob Clark
>>>
On Mon, Mar 27, 2017 at 05:18:15PM +0100, Robin Murphy wrote:
[...]
> >> [ 145.212351] iommu: Adding device :81:10.0 to group 5
> >> [ 145.212367] ixgbevf :81:10.0: 0x0 0x1, 0x0 0x,
> >> 0x 0x
> >> [ 145.213261] ixgbevf :81:10.0: enabling device
On 27/03/17 16:58, Shameerali Kolothum Thodi wrote:
>
>
>> -Original Message-
>> From: Shameerali Kolothum Thodi
>> Sent: Monday, March 27, 2017 3:53 PM
>> To: 'Robin Murphy'; Sricharan R; Wangzhou (B); will.dea...@arm.com;
>> j...@8bytes.org; lorenzo.pieral...@arm.com;
> -Original Message-
> From: Shameerali Kolothum Thodi
> Sent: Monday, March 27, 2017 3:53 PM
> To: 'Robin Murphy'; Sricharan R; Wangzhou (B); will.dea...@arm.com;
> j...@8bytes.org; lorenzo.pieral...@arm.com; iommu@lists.linux-
> foundation.org; linux-arm-ker...@lists.infradead.org;
On Fri, Mar 24, 2017 at 07:08:47PM +, Jean-Philippe Brucker wrote:
> On 24/03/17 11:00, Joerg Roedel wrote:
> > The document you posted is an addition to the spec, so we can't rely on
> > a stop marker being sent by a device when it shuts down a context.
> > Current AMD GPUs don't send one,
Hi Rob,
On 27/03/17 15:34, Rob Herring wrote:
> On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote:
>> it jumps to the parent node without examining the child node,
>> and as a result it throws "no dma-ranges found for node"
>> for PCI dma-ranges.
>>
>> this patch fixes
On Sat, Mar 25, 2017 at 12:31 AM, Oza Pawandeep wrote:
> it jumps to the parent node without examining the child node,
> and as a result it throws "no dma-ranges found for node"
> for PCI dma-ranges.
>
> this patch fixes device node traversal for dma-ranges.
What's the DT
>-Original Message-
>From: Daniel Drake [mailto:dr...@endlessm.com]
>Sent: Monday, March 27, 2017 5:56 PM
>To: Nath, Arindam
>Cc: j...@8bytes.org; Deucher, Alexander; Bridgman, John; amd-
>g...@lists.freedesktop.org; iommu@lists.linux-foundation.org; Suthikulpanit,
>Suravee; Linux
Hi Arindam,
You CC'd me on this - does this mean that it is a fix for the issue
described in the thread "amd-iommu: can't boot with amdgpu, AMD-Vi:
Completion-Wait loop timed out" ?
Thanks
Daniel
On Mon, Mar 27, 2017 at 12:17 AM, wrote:
> From: Arindam Nath
Hi Joerg,
Thanks for looking into this. We confirm that this workaround avoids
the iommu log spam and that amdgpu appears to be working fine with it.
Daniel
On Wed, Mar 22, 2017 at 5:22 AM, j...@8bytes.org wrote:
> On Tue, Mar 21, 2017 at 04:30:55PM +, Deucher, Alexander
Hi Jean-Philippe,
On 27/02/17 19:54, Jean-Philippe Brucker wrote:
> Reintroduce smmu_group. This structure was removed during the generic DT
> bindings rework, but will be needed when implementing PCIe ATS, to look up
> devices attached to a given domain.
>
> When unmapping from a domain, we need
Hi Valmiki,
On 25/03/17 05:16, valmiki wrote:
>> When we receive a PRI Page Request (PPR) from the SMMU, it contains a
>> context identifier SID:SSID, an IOVA and the requested access flags.
>>
>> Search for the domain corresponding to SID:SSID, and call handle_mm_fault on
>> its mm. If memory
On Thu, Mar 23, 2017 at 11:29 PM, Robin Murphy wrote:
> On relatively slow development platforms and software models, the
> inefficiency of our TLB sync loop tends not to show up - for instance on
> a Juno r1 board I typically see that the TLBI has completed of its own accord
>
On 24/03/17 07:46, Liu, Yi L wrote:
[...]
So we need some kind of high-level classification that the vIOMMU
must communicate to the physical one. Each IOMMU flavor would get a
unique, global identifier, simply to make sure that vIOMMU and pIOMMU speak
the same language.
From: Arindam Nath
The idea behind flush queues is to defer the IOTLB flushing
for domains for which the mappings are no longer valid. We
add such domains in queue_add(), and when the queue size
reaches FLUSH_QUEUE_SIZE, we perform __queue_flush().
Since we have already