On Thu, Jan 31, 2019 at 06:17:31PM +0800, lantianyu1...@gmail.com wrote:
>
>
This comment needs to be indented one tab or it looks like we're outside
the function.
> +/*
> + * Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> + * set x2apic destination mode to physical mode w
On Fri, Feb 1, 2019 at 3:07 PM Dan Carpenter wrote:
>
> On Thu, Jan 31, 2019 at 06:17:31PM +0800, lantianyu1...@gmail.com wrote:
> >
> >
>
> This comment needs to be indented one tab or it looks like we're outside
> the function.
>
> > +/*
> > + * Hyper-V doesn't provide irq remapping for IO-APIC.
Hi All,
On 2019-01-18 12:37, Christoph Hellwig wrote:
> Hi all,
>
> this series fixes a rather gross layering violation in videobuf2, which
> pokes into arm DMA mapping internals to get a DMA address for memory that
> does not have a page structure, and to do so fixes up the dma_map_resource
> imp
Hi Vitaly:
Thanks for your review.
On Thu, Jan 31, 2019 at 10:04 PM Vitaly Kuznetsov wrote:
>
> lantianyu1...@gmail.com writes:
>
> > From: Lan Tianyu
> >
> > On the bare metal, enabling X2APIC mode requires interrupt remapping
> > function which helps to deliver irq to cpu with 32-b
On Thu, Jan 31, 2019 at 12:25:40PM +, Jean-Philippe Brucker wrote:
> On 31/01/2019 07:59, Peter Xu wrote:
> > On Wed, Jan 30, 2019 at 12:27:40PM +, Jean-Philippe Brucker wrote:
> >> Hi Peter,
> >
> > Hi, Jean,
> >
> >>
> >> On 30/01/2019 05:57, Peter Xu wrote:
> >>> AMD IOMMU driver is us
On 2019-01-31 4:48 p.m., Dave Jiang wrote:
>
> On 1/31/2019 4:41 PM, Logan Gunthorpe wrote:
>>
>> On 2019-01-31 3:46 p.m., Dave Jiang wrote:
>>> I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
>>> maybe take a look at the code that starts from there and see if it would
>>>
On 1/31/2019 4:41 PM, Logan Gunthorpe wrote:
On 2019-01-31 3:46 p.m., Dave Jiang wrote:
I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
maybe take a look at the code that starts from there and see if it would
have any impact on your stuff.
Ok, well on my system I can wri
On 2019-01-31 3:46 p.m., Dave Jiang wrote:
> I believe irqbalance writes to the file /proc/irq/N/smp_affinity. So
> maybe take a look at the code that starts from there and see if it would
> have any impact on your stuff.
Ok, well on my system I can write to the smp_affinity all day and the
M
On 2019-01-31 3:39 p.m., Bjorn Helgaas wrote:
> I assume you'll merge this along with the rest of the series, so:
>
> Acked-by: Bjorn Helgaas
Thanks!
>> diff --git a/include/linux/msi.h b/include/linux/msi.h
>> index 784fb52b9900..6458ab049852 100644
>> --- a/include/linux/msi.h
>> +++ b/inc
On 1/31/2019 3:39 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:58 p.m., Dave Jiang wrote:
On 1/31/2019 1:48 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:20 p.m., Dave Jiang wrote:
Does this work when the system moves the MSI vector either via software
(irqbalance) or BIOS APIC programming (some
On 2019-01-31 1:58 p.m., Dave Jiang wrote:
>
> On 1/31/2019 1:48 PM, Logan Gunthorpe wrote:
>>
>> On 2019-01-31 1:20 p.m., Dave Jiang wrote:
>>> Does this work when the system moves the MSI vector either via software
>>> (irqbalance) or BIOS APIC programming (some modes cause round robin
>>> be
[+cc Thomas, Marc]
On Thu, Jan 31, 2019 at 11:56:49AM -0700, Logan Gunthorpe wrote:
> For NTB devices, we want to be able to trigger MSI interrupts
> through a memory window. In these cases we may want to use
> more interrupts than the NTB PCI device has available in its MSI-X
> table.
>
> We all
On 1/31/2019 1:48 PM, Logan Gunthorpe wrote:
On 2019-01-31 1:20 p.m., Dave Jiang wrote:
Does this work when the system moves the MSI vector either via software
(irqbalance) or BIOS APIC programming (some modes cause round robin
behavior)?
I don't know how irqbalance works, and I'm not sure
On 2019-01-31 1:20 p.m., Dave Jiang wrote:
> Does this work when the system moves the MSI vector either via software
> (irqbalance) or BIOS APIC programming (some modes cause round robin
> behavior)?
I don't know how irqbalance works, and I'm not sure what you are
referring to by BIOS APIC p
On 1/31/2019 11:56 AM, Logan Gunthorpe wrote:
Hi,
This patch series adds optional support for using MSI interrupts instead
of NTB doorbells in ntb_transport. This is desirable because doorbells on
current hardware are quite slow and therefore switching to MSI interrupts
provides a significant p
On Thu, Jan 31, 2019 at 12:19:31PM -0700, Logan Gunthorpe wrote:
>
>
> On 2019-01-31 12:02 p.m., Jason Gunthorpe wrote:
> > I still think the right direction is to build on what Logan has done -
> > realize that he created a DMA-only SGL - make that a formal type of
> > the kernel and provide the
On Thu, Jan 31, 2019 at 02:35:14PM -0500, Jerome Glisse wrote:
> > Basically invert the API flow - the DMA map would be done close to
> > GUP, not buried in the driver. This absolutely doesn't work for every
> > flow we have, but it does enable the ones that people seem to care
> > about when talk
On 2019-01-31 12:35 p.m., Jerome Glisse wrote:
> So what is this O_DIRECT thing that keep coming again and again here :)
> What is the use case ? Note that bio will always have valid struct page
> of regular memory as using PCIE BAR for filesystem is crazy (you do not
> have atomic or cache cohe
On Thu, Jan 31, 2019 at 07:02:15PM +, Jason Gunthorpe wrote:
> On Thu, Jan 31, 2019 at 09:13:55AM +0100, Christoph Hellwig wrote:
> > On Wed, Jan 30, 2019 at 03:52:13PM -0700, Logan Gunthorpe wrote:
> > > > *shrug* so what if the special GUP called a VMA op instead of
> > > > traversing the VMA
On Wed, Jan 30, 2019 at 10:59 PM Yong Wu wrote:
>
> On Wed, 2019-01-30 at 10:28 -0800, Evan Green wrote:
> > On Mon, Dec 31, 2018 at 7:57 PM Yong Wu wrote:
> > >
> > > MediaTek extend the arm v7s descriptor to support the dram over 4GB.
> > >
> > > In the mt2712 and mt8173, it's called "4GB mode"
On 2019-01-31 12:02 p.m., Jason Gunthorpe wrote:
> I still think the right direction is to build on what Logan has done -
> realize that he created a DMA-only SGL - make that a formal type of
> the kernel and provide the right set of APIs to work with this type,
> without being forced to expose
On Thu, Jan 31, 2019 at 09:13:55AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 03:52:13PM -0700, Logan Gunthorpe wrote:
> > > *shrug* so what if the special GUP called a VMA op instead of
> > > traversing the VMA PTEs today? Why does it really matter? It could
> > > easily change to a
When a device has multiple aliases that all are from the same bus,
we program the IRTE to accept requests from any matching device on the
bus.
This is so that NTB devices, whose requests can come from multiple
bus-devfns, can pass MSI interrupts across the bridge.
Signed-off-by: Logan Gunthorpe
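The preview above stops at the tags, so the mechanism isn't shown. One common way to enumerate a device together with all of its DMA aliases is pci_for_each_dma_alias(); the sketch below only illustrates that idea, and every example_* name (the IRTE-programming helper and its context struct) is an assumption, not code from the patch.

/*
 * Illustrative sketch only: walk a PCI device plus all of its DMA
 * aliases and program the same remapping entry for each alias ID.
 * The example_* identifiers are invented for this sketch.
 */
#include <linux/pci.h>
#include <linux/printk.h>

struct example_irte_info {
	int index;		/* remapping-table slot shared by all aliases */
};

static void example_program_irte(u16 devid, struct example_irte_info *info)
{
	/* stand-in for the IOMMU driver's real IRTE write */
	pr_debug("devid %#06x -> irte slot %d\n", devid, info->index);
}

static int example_set_alias_irte(struct pci_dev *pdev, u16 alias, void *data)
{
	example_program_irte(alias, data);
	return 0;		/* 0 means: keep iterating over aliases */
}

static void example_program_irte_for_aliases(struct pci_dev *pdev,
					     struct example_irte_info *info)
{
	/* pci_for_each_dma_alias() visits pdev and every bus-devfn that
	 * may alias it, e.g. behind a conventional bridge or an NTB. */
	pci_for_each_dma_alias(pdev, example_set_alias_irte, info);
}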
Hi,
This patch series adds optional support for using MSI interrupts instead
of NTB doorbells in ntb_transport. This is desirable because doorbells on
current hardware are quite slow and therefore switching to MSI interrupts
provides a significant performance gain. On switchtec hardware, a simple
a
For NTB devices, we want to be able to trigger MSI interrupts
through a memory window. In these cases we may want to use
more interrupts than the NTB PCI device has available in its MSI-X
table.
We allow for this by creating a new 'virtual' interrupt. These
interrupts are allocated as usual but ar
The kbuild system does not support having multiple source files in
a module if one of those source files has the same name as the module.
Therefore, we must rename ntb.c to core.c, while the module remains
ntb.ko.
This is similar to the way the nvme modules are structured.
Signed-off-by: Logan G
When using multiple ports, each port uses resources (dbs, msgs, mws, etc.)
on every other port. Creating a mapping for these resources such that
each port has a corresponding resource on every other port is a bit
tricky.
Introduce the ntb_peer_resource_idx() function for this purpose.
It returns the pe
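The preview is cut off before the function body, so the following is only a rough illustration of the kind of "skip self" mapping the description hints at, with a made-up name and without the real NTB API; it is not the patch's implementation.

/*
 * Hypothetical illustration: pick the resource index that local_port
 * should use on peer_port, skipping the slot the peer keeps for
 * itself, so each of a port's N-1 peers lands on a distinct index
 * in the range 0..N-2.
 */
static int example_peer_resource_idx(int local_port, int peer_port)
{
	return local_port < peer_port ? local_port : local_port - 1;
}

With four ports, for instance, each port exposes three resource slots and no two peers of the same port collide.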
The NTB MSI library allows passing MSI interrupts across a memory
window. This offers similar functionality to doorbells or messages
except will often have much better latency and the client can
potentially use significantly more remote interrupts than typical hardware
provides for doorbells. (Whic
Introduce the module parameter 'use_msi' which, when set, uses
MSI interrupts instead of doorbells for each queue pair (QP). The
parameter is only available if NTB MSI support is configured into
the kernel. We also require there to be more than one memory window
(MW) so that an extra one is availab
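As a minimal sketch of what declaring such a parameter usually looks like (only the name 'use_msi' comes from the patch; the bool type, permissions and description text are assumptions):

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Sketch only: read-only module parameter selecting MSI over doorbells. */
static bool use_msi;
module_param(use_msi, bool, 0444);
MODULE_PARM_DESC(use_msi, "Use MSI interrupts instead of doorbells");

Loading the module with use_msi=1 would then request the MSI path, presumably falling back to doorbells when NTB MSI support or a spare memory window isn't available.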
Introduce a tool to test NTB MSI interrupts similar to the other
NTB test tools. This tool creates a debugfs directory for each
NTB device with the following files:
port
irqX_occurrences
peerX/port
peerX/count
peerX/trigger
The 'port' file tells the user the local port number and the
'occurrences
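As a rough sketch of how a debugfs layout like the one listed above is typically created (directory and file names taken from the list; the data pointers, modes and the omitted 'trigger'/'irqX_occurrences' file operations are assumptions):

#include <linux/debugfs.h>

/* Illustrative only: one directory per NTB device with a 'port' file
 * and a single peer subdirectory; the real tool also wires up
 * 'trigger' and 'irqX_occurrences' files with custom file operations. */
static struct dentry *example_create_dbgfs(const char *dev_name,
					   u32 *local_port, u32 *peer_count)
{
	struct dentry *dir, *peer;

	dir = debugfs_create_dir(dev_name, NULL);
	debugfs_create_u32("port", 0400, dir, local_port);

	peer = debugfs_create_dir("peer0", dir);
	debugfs_create_u32("count", 0400, peer, peer_count);

	return dir;
}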
When the ntb_msi_test module is available, the test code will trigger
each of the interrupts and ensure the corresponding occurrences files
gets incremented.
Signed-off-by: Logan Gunthorpe
Cc: Jon Mason
Cc: Dave Jiang
Cc: Allen Hubbe
---
tools/testing/selftests/ntb/ntb_test.sh | 54 ++
Seeing that we want to use more interrupts in the NTB MSI code,
we need to be able to allocate more (sometimes virtual) interrupts
in the switchtec driver. Therefore add a module parameter to
request the allocation of additional interrupts.
This puts virtually no limit on the number of MSI interrupts availab
On Wed, Jan 30, 2019 at 7:22 PM Yong Wu wrote:
>
> On Wed, 2019-01-30 at 11:11 -0800, Evan Green wrote:
> > On Mon, Dec 31, 2018 at 7:59 PM Yong Wu wrote:
> > >
> > > The "mediatek,larb-id" has already been parsed in MTK IOMMU driver.
> > > It's no need to parse it again in SMI driver. Only clean
On Wed, Jan 30, 2019 at 7:20 PM Yong Wu wrote:
>
> On Wed, 2019-01-30 at 10:30 -0800, Evan Green wrote:
> > On Mon, Dec 31, 2018 at 7:58 PM Yong Wu wrote:
> > >
> > > Both mt8173 and mt8183 don't have this vld_pa_rng(valid physical address
> > > range) register while mt2712 have. Move it into the
From: Joerg Roedel
This function will be used from dma_direct code to determine
the maximum segment size of a dma mapping.
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Christoph Hellwig
Signed-off-by: Joerg Roedel
---
include/linux/swiotlb.h | 6 ++
kernel/dma/swiotlb.c    | 9 +++
Hi,
here is the next version of this patch-set. Previous
versions can be found here:
V1: https://lore.kernel.org/lkml/20190110134433.15672-1-j...@8bytes.org/
V2: https://lore.kernel.org/lkml/20190115132257.6426-1-j...@8bytes.org/
V3: https://lore.kernel.org/lkml/20190123
From: Joerg Roedel
The function returns the maximum size that can be mapped
using DMA-API functions. The patch also adds the
implementation for direct DMA and a new dma_map_ops pointer
so that other implementations can expose their limit.
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Christop
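A short usage sketch from a driver's point of view (the surrounding function and how the result is consumed are assumptions; only dma_max_mapping_size() itself is the helper being added):

#include <linux/kernel.h>
#include <linux/dma-mapping.h>

/* Sketch: cap a driver's transfer size at what one DMA mapping allows. */
static size_t example_pick_buffer_size(struct device *dev, size_t wanted)
{
	size_t max = dma_max_mapping_size(dev);

	return min(wanted, max);
}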
On Thu, Jan 31, 2019 at 09:43:51AM -0500, Michael S. Tsirkin wrote:
> OK. Joerg can you repost the series with this squashed
> and all acks applied?
Sure, sent out now as v6.
Regards,
Joerg
From: Joerg Roedel
Segments can't be larger than the maximum DMA mapping size
supported on the platform. Take that into account when
setting the maximum segment size for a block device.
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Christoph Hellwig
Signed-off-by: Joerg Roedel
---
drivers/
From: Joerg Roedel
This function returns the maximum segment size for a single
dma transaction of a virtio device. The possible limit comes
from the SWIOTLB implementation in the Linux kernel, which
has an upper limit of (currently) 256kB of contiguous
memory it can map. Other DMA-API implementati
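The preview ends before the body. Given the description, a plausible shape, assuming the existing vring_use_dma_api() predicate in drivers/virtio/virtio_ring.c (i.e. the sketch sits in that file), would be roughly the following; it is a sketch, not the quoted patch:

/* Sketch: only when the device actually goes through the DMA API does
 * the platform's mapping limit (e.g. swiotlb's 256 kB) apply. */
size_t virtio_max_dma_size(struct virtio_device *vdev)
{
	size_t max_segment_size = SIZE_MAX;

	if (vring_use_dma_api(vdev))
		max_segment_size = dma_max_mapping_size(vdev->dev.parent);

	return max_segment_size;
}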
From: Joerg Roedel
The function returns the maximum size that can be remapped
by the SWIOTLB implementation. This function will be later
exposed to users through the DMA-API.
Reviewed-by: Konrad Rzeszutek Wilk
Reviewed-by: Christoph Hellwig
Signed-off-by: Joerg Roedel
---
include/linux/swiot
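The diffstat is cut off, but given that swiotlb bounce buffers are limited to IO_TLB_SEGSIZE contiguous slots of (1 << IO_TLB_SHIFT) bytes each (128 * 2 KiB = 256 KiB), the helper's likely shape is roughly as follows; a sketch, not the quoted patch:

#include <linux/swiotlb.h>

/* Sketch: the largest contiguous bounce mapping swiotlb can provide. */
size_t swiotlb_max_mapping_size(struct device *dev)
{
	return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE;
}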
From: Joerg Roedel
The only reason why swiotlb_init_with_tbl() can fail is an
allocation failure in the memblock_alloc() function. But
this function just calls panic() in case it can't fulfill
the request and never returns an error, therefore
swiotlb_init_with_tbl() also never actually returns an
lantianyu1...@gmail.com writes:
> From: Lan Tianyu
>
> On the bare metal, enabling X2APIC mode requires interrupt remapping
> function which helps to deliver irq to cpu with 32-bit APIC ID.
> Hyper-V doesn't provide interrupt remapping function so far and Hyper-V
> MSI protocol already supports t
On Thu, Jan 31, 2019 at 09:13:55AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 03:52:13PM -0700, Logan Gunthorpe wrote:
> > > *shrug* so what if the special GUP called a VMA op instead of
> > > traversing the VMA PTEs today? Why does it really matter? It could
> > > easily change to a
Hi Marek,
any chance you could retest the v2 version?
On Thu, Jan 31, 2019 at 09:05:01AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 08:44:20PM +, Jason Gunthorpe wrote:
> > Not really, for MRs most drivers care about DMA addresses only. The
> > only reason struct page ever gets involved is because it is part of
> > the GUP, SGL and
On Thu, Jan 31, 2019 at 09:02:03AM +0100, Christoph Hellwig wrote:
> On Wed, Jan 30, 2019 at 01:50:27PM -0500, Jerome Glisse wrote:
> > I do not see how VMA changes are any different than using struct page
> > in respect to userspace exposure. Those vma callback do not need to be
> > set by everyon
Hi,
On 31/01/2019 13:52, Zhen Lei wrote:
> Currently, many peripherals are faster than before. For example, the top
> speed of the older netcard is 10Gb/s, and now it's more than 25Gb/s. But
> when iommu page-table mapping enabled, it's hard to reach the top speed
> in strict mode, because of freq
On Thu, Jan 31, 2019 at 03:37:23PM +0100, Christoph Hellwig wrote:
> On Thu, Jan 31, 2019 at 02:01:27PM +0100, Joerg Roedel wrote:
> > On Thu, Jan 31, 2019 at 11:41:29AM +0100, Christoph Hellwig wrote:
> > > Sorry for not noticing last time, but since 5.0 we keep all non-fast
> > > path DMA mapping
On Thu, Jan 31, 2019 at 02:01:27PM +0100, Joerg Roedel wrote:
> On Thu, Jan 31, 2019 at 11:41:29AM +0100, Christoph Hellwig wrote:
> > Sorry for not noticing last time, but since 5.0 we keep all non-fast
> > path DMA mapping interfaces out of line, so this should move to
> > kernel/dma/mapping.c.
>
Currently, many peripherals are faster than before. For example, the top
speed of older network cards was 10Gb/s, and now it's more than 25Gb/s. But
when iommu page-table mapping is enabled, it's hard to reach the top speed
in strict mode, because of frequent map and unmap operations. In order
to keep
On Thu, Jan 31, 2019 at 11:41:29AM +0100, Christoph Hellwig wrote:
> Sorry for not noticing last time, but since 5.0 we keep all non-fast
> path DMA mapping interfaces out of line, so this should move to
> kernel/dma/mapping.c.
Okay, attached patch does that. It applies on-top of this patch-set.
Hi Christoph,
I compiled kernels for the X5000 and X1000 from your branch
'powerpc-dma.6' today.
Gitweb:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/powerpc-dma.6
git clone git://git.infradead.org/users/hch/misc.git -b powerpc-dma.6 a
The X1000 and X5000 boot but unfort
On Thursday, 31 January 2019 at 13:31:52 CET, Souptick Joarder wrote:
> On Thu, Jan 31, 2019 at 5:37 PM Heiko Stuebner wrote:
> >
> > On Thursday, 31 January 2019 at 04:08:12 CET, Souptick Joarder wrote:
> > > Previously drivers have their own way of mapping range of
> > > kernel pages/memory int
On Thu, Jan 31, 2019 at 5:37 PM Heiko Stuebner wrote:
>
> On Thursday, 31 January 2019 at 04:08:12 CET, Souptick Joarder wrote:
> > Previously drivers have their own way of mapping range of
> > kernel pages/memory into user vma and this was done by
> > invoking vm_insert_page() within a loop.
> >
On 31/01/2019 07:59, Peter Xu wrote:
> On Wed, Jan 30, 2019 at 12:27:40PM +, Jean-Philippe Brucker wrote:
>> Hi Peter,
>
> Hi, Jean,
>
>>
>> On 30/01/2019 05:57, Peter Xu wrote:
>>> AMD IOMMU driver is using the clear_flush_young() to do cache flushing
>>> but that's actually already covered
Hi Greg:
Thanks for your review.
On Thu, Jan 31, 2019 at 7:57 PM Greg KH wrote:
>
> On Thu, Jan 31, 2019 at 06:17:31PM +0800, lantianyu1...@gmail.com wrote:
> > From: Lan Tianyu
> >
> > Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> > set x2apic destination m
From: Lan Tianyu
On bare metal, enabling X2APIC mode requires the interrupt remapping
function, which helps to deliver irqs to CPUs with 32-bit APIC IDs.
Hyper-V doesn't provide an interrupt remapping function so far, and the Hyper-V
MSI protocol already supports delivering interrupts to the CPU whose
virtual
From: Lan Tianyu
Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available,
and the Hyper-V IOMMU driver makes sure CPUs assigned IO-APIC irqs have
8-bit APIC IDs.
Signed-off-by: Lan Tianyu
---
arch/x86/kernel/cpu/
From: Lan Tianyu
On bare metal, enabling X2APIC mode requires the interrupt remapping
function, which helps to deliver irqs to CPUs with 32-bit APIC IDs.
Hyper-V doesn't provide an interrupt remapping function so far, and the Hyper-V
MSI protocol already supports delivering interrupts to the CPU whose
virtual
On Thu, Jan 31, 2019 at 7:59 PM Greg KH wrote:
>
> On Thu, Jan 31, 2019 at 06:17:32PM +0800, lantianyu1...@gmail.com wrote:
> > --- /dev/null
> > +++ b/drivers/iommu/hyperv-iommu.c
> > @@ -0,0 +1,189 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +#define pr_fmt(fmt) "HYPERV-IR: " fmt
>
On Thursday, 31 January 2019 at 04:08:12 CET, Souptick Joarder wrote:
> Previously drivers have their own way of mapping range of
> kernel pages/memory into user vma and this was done by
> invoking vm_insert_page() within a loop.
>
> As this pattern is common across different drivers, it can
> be
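For context, the "vm_insert_page() within a loop" pattern the cover letter refers to looks roughly like this (an illustrative sketch with assumed names, not any particular driver's code):

#include <linux/mm.h>

/* Sketch of the open-coded pattern: map an array of kernel pages into a
 * user VMA one page at a time. */
static int example_mmap_pages(struct vm_area_struct *vma,
			      struct page **pages, unsigned long npages)
{
	unsigned long addr = vma->vm_start;
	unsigned long i;
	int ret;

	for (i = 0; i < npages && addr + PAGE_SIZE <= vma->vm_end; i++) {
		ret = vm_insert_page(vma, addr, pages[i]);
		if (ret)
			return ret;
		addr += PAGE_SIZE;
	}

	return 0;
}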
On Thu, Jan 31, 2019 at 06:17:32PM +0800, lantianyu1...@gmail.com wrote:
> --- /dev/null
> +++ b/drivers/iommu/hyperv-iommu.c
> @@ -0,0 +1,189 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#define pr_fmt(fmt) "HYPERV-IR: " fmt
Minor nit, you never do any pr_*() calls, so this isn't needed,
On Thu, Jan 31, 2019 at 06:17:31PM +0800, lantianyu1...@gmail.com wrote:
> From: Lan Tianyu
>
> Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> set x2apic destination mode to physical mode when x2apic is available
> and Hyper-V IOMMU driver makes sure cpus assigned with IO-
On Thu, Jan 31, 2019 at 03:43:39PM +0530, Souptick Joarder wrote:
> On Thu, Jan 31, 2019 at 2:09 PM Mike Rapoport wrote:
> >
> > On Thu, Jan 31, 2019 at 08:38:12AM +0530, Souptick Joarder wrote:
> > > Previously drivers have their own way of mapping range of
> > > kernel pages/memory into user vma
> +static inline size_t dma_max_mapping_size(struct device *dev)
> +{
> + const struct dma_map_ops *ops = get_dma_ops(dev);
> + size_t size = SIZE_MAX;
> +
> + if (dma_is_direct(ops))
> + size = dma_direct_max_mapping_size(dev);
> + else if (ops && ops->max_mapping_size)
On Thu, Jan 31, 2019 at 2:09 PM Mike Rapoport wrote:
>
> On Thu, Jan 31, 2019 at 08:38:12AM +0530, Souptick Joarder wrote:
> > Previously drivers have their own way of mapping range of
> > kernel pages/memory into user vma and this was done by
> > invoking vm_insert_page() within a loop.
> >
> > As
On Thu, Jan 31, 2019 at 08:38:12AM +0530, Souptick Joarder wrote:
> Previously drivers have their own way of mapping range of
> kernel pages/memory into user vma and this was done by
> invoking vm_insert_page() within a loop.
>
> As this pattern is common across different drivers, it can
> be gener
On Wed, Jan 30, 2019 at 03:52:13PM -0700, Logan Gunthorpe wrote:
> > *shrug* so what if the special GUP called a VMA op instead of
> > traversing the VMA PTEs today? Why does it really matter? It could
> > easily change to a struct page flow tomorrow..
>
> Well it's so that it's composable. We wan
On Wed, Jan 30, 2019 at 08:44:20PM +, Jason Gunthorpe wrote:
> Not really, for MRs most drivers care about DMA addresses only. The
> only reason struct page ever gets involved is because it is part of
> the GUP, SGL and dma_map family of APIs.
And the only way you get the DMA address is throug
On Wed, Jan 30, 2019 at 01:50:27PM -0500, Jerome Glisse wrote:
> I do not see how VMA changes are any different than using struct page
> in respect to userspace exposure. Those vma callback do not need to be
> set by everyone, in fact expectation is that only handful of driver
> will set those.
>
On Wed, Jan 30, 2019 at 12:27:40PM +, Jean-Philippe Brucker wrote:
> Hi Peter,
Hi, Jean,
>
> On 30/01/2019 05:57, Peter Xu wrote:
> > AMD IOMMU driver is using the clear_flush_young() to do cache flushing
> > but that's actually already covered by invalidate_range(). Remove the
> > extra no