Hi all,
As Shawn pointed out, we've had issues with the dma mmap pgprots ever
since the dma_common_mmap helper was added beyond the initial
architectures - we default to uncached mappings, but for devices that
are DMA coherent, or if DMA_ATTR_NON_CONSISTENT is set (and
supported), this can lead
MIPS uses the KSEG1 kernel memory segment to map dma coherent
allocations for non-coherent devices as uncacheable, and does not have
any kind of special support for DMA_ATTR_WRITE_COMBINE in the allocation
path. Thus supporting DMA_ATTR_WRITE_COMBINE in dma_mmap_attrs will
lead to multiple mapping
All the way back to introducing dma_common_mmap we've defaulted to
marking the pages as uncached. But this is wrong for DMA coherent devices.
Later on, DMA_ATTR_WRITE_COMBINE also got incorrect treatment, as that
flag is only treated specially on the alloc side for non-coherent devices.
Introduce a new
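A minimal sketch of the direction described above, assuming a single helper that picks the mmap pgprot; the function name and exact flag handling here are illustrative, not a quote of the patch:

#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>

static pgprot_t example_dma_pgprot(struct device *dev, pgprot_t prot,
				   unsigned long attrs)
{
	/* Coherent devices (and explicitly non-consistent allocations)
	 * can keep a normal cacheable userspace mapping. */
	if (dev_is_dma_coherent(dev) || (attrs & DMA_ATTR_NON_CONSISTENT))
		return prot;
	/* Only honour write-combine where the alloc side did as well. */
	if (attrs & DMA_ATTR_WRITE_COMBINE)
		return pgprot_writecombine(prot);
	return pgprot_noncached(prot);
}

With something like this, a DMA coherent device keeps a normal cacheable userspace mapping instead of the unconditional pgprot_noncached() default.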
On Tue, Aug 06, 2019 at 05:45:03PM +0100, Russell King - ARM Linux admin wrote:
> We could have used a different approach, making all IO writes contain
> a "drain write buffer" instruction, and map DMA memory as "buffered",
> but as there were no Linux barriers defined to order memory accesses
> to
On Tue, Aug 06, 2019 at 09:39:06PM +0200, Shawn Anastasio wrote:
>> -#ifdef CONFIG_ARCH_HAS_DMA_MMAP_PGPROT
>> pgprot_t arch_dma_mmap_pgprot(struct device *dev, pgprot_t prot,
>> unsigned long attrs);
>> -#else
>> -# define arch_dma_mmap_pgprot(dev, prot, attrs) pgprot_noncached(
Hi Christoph,
On 8/6/19 2:43 PM, Christoph Hellwig wrote:
Hi Lu,
I really do like the switch to the per-device dma_map_ops, but:
On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
The current Intel IOMMU driver sets the system-level dma_ops. This
implementation has at least the following d
Hi, Tomasz:
On Tue, 2019-08-06 at 18:47 +0900, Tomasz Figa wrote:
> Hi Jungo,
>
> On Fri, Jul 26, 2019 at 4:24 PM Jungo Lin wrote:
> >
> > Hi, Tomasz:
> >
> > On Thu, 2019-07-25 at 18:23 +0900, Tomasz Figa wrote:
> > > Hi Jungo,
> > >
> > > On Sat, Jul 20, 2019 at 6:58 PM Jungo Lin wrote:
> >
Hi Robin,
On Tue, Aug 06, 2019 at 04:49:01PM +0100, Robin Murphy wrote:
> Hi Joerg,
>
> On 06/08/2019 16:25, Joerg Roedel wrote:
> > Hi Robin,
> >
> > On Mon, Jul 29, 2019 at 05:46:00PM +0100, Robin Murphy wrote:
> > > Since scatterlist dimensions are all unsigned ints, in the relatively
> > > r
From: Jason Gunthorpe
radeon is using a device global hash table to track what mmu_notifiers
have been registered on struct mm. This is better served with the new
get/put scheme instead.
radeon has a bug where it was not blocking notifier release() until all
the BO's had been invalidated. This c
From: Jason Gunthorpe
The sequence of mmu_notifier_unregister_no_release(),
mmu_notifier_call_srcu() is identical to mmu_notifier_put() with the
free_notifier callback.
As this is the last user of those APIs, converting it means we can drop
them.
Signed-off-by: Jason Gunthorpe
---
drivers/gpu
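For illustration, a hedged before/after sketch of the equivalence described in that commit message; the containing object and its names are invented, only the mmu_notifier_* calls come from the text above:

#include <linux/mmu_notifier.h>
#include <linux/slab.h>

struct obj {
	struct mmu_notifier mn;
	struct rcu_head rcu;
};

static void obj_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct obj, rcu));
}

/* Old pattern: explicit no-release unregister plus a manual
 * SRCU-deferred free of the containing object. */
static void obj_destroy_old(struct obj *obj, struct mm_struct *mm)
{
	mmu_notifier_unregister_no_release(&obj->mn, mm);
	mmu_notifier_call_srcu(&obj->rcu, obj_free_rcu);
}

/* New pattern: ops->free_notifier() runs after the SRCU grace period
 * and does the kfree there, so a single put replaces both calls. */
static void obj_destroy_new(struct obj *obj)
{
	mmu_notifier_put(&obj->mn);
}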
From: Jason Gunthorpe
This is a significant simplification: it eliminates all the remaining
'hmm' stuff in mm_struct, eliminates krefing along the critical notifier
paths, and takes away all the ugly locking and abuse of page_table_lock.
mmu_notifier_get() provides the single struct hmm per stru
From: Jason Gunthorpe
This series introduces a new registration flow for mmu_notifiers based on
the idea that the user would like to get a single refcounted piece of
memory for a mm, keyed to its use.
For instance many users of mmu_notifiers use an interval tree or similar
to dispatch notificati
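A rough sketch of the registration flow being proposed, with made-up driver-side names; the mmu_notifier_get()/mmu_notifier_put() and alloc_notifier/free_notifier pieces follow this description, but details may differ from the final patches:

#include <linux/err.h>
#include <linux/mmu_notifier.h>
#include <linux/slab.h>

struct my_sub {
	struct mmu_notifier mn;
	/* per-mm state, e.g. an interval tree, would live here */
};

static struct mmu_notifier *my_alloc_notifier(struct mm_struct *mm)
{
	struct my_sub *sub = kzalloc(sizeof(*sub), GFP_KERNEL);

	return sub ? &sub->mn : ERR_PTR(-ENOMEM);
}

static void my_free_notifier(struct mmu_notifier *mn)
{
	kfree(container_of(mn, struct my_sub, mn));
}

static const struct mmu_notifier_ops my_ops = {
	.alloc_notifier	= my_alloc_notifier,
	.free_notifier	= my_free_notifier,
	/* .invalidate_range_start etc. as the user requires */
};

/* Each userspace object grabs a reference to the single per-mm
 * notifier; the last mmu_notifier_put() frees it via free_notifier(). */
static struct mmu_notifier *my_object_attach(struct mm_struct *mm)
{
	return mmu_notifier_get(&my_ops, mm);
}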
From: Jason Gunthorpe
When using mmu_notifer_unregister_no_release() the caller must ensure
there is a SRCU synchronize before the mn memory is freed, otherwise use
after free races are possible, for instance:
CPU0 CPU1
From: Jason Gunthorpe
Many places in the kernel have a flow where userspace will create some
object and that object will need to connect to the subsystem's
mmu_notifier subscription for the duration of its lifetime.
In this case the subsystem is usually tracking multiple mm_structs and it
is dif
From: Jason Gunthorpe
At this point the ucontext is only being stored to access the ib_device,
so just store the ib_device directly instead. This is more natural and
logical as the umem has nothing to do with the ucontext.
Signed-off-by: Jason Gunthorpe
---
drivers/infiniband/core/umem.c |
From: Jason Gunthorpe
mmu_notifier_unregister_no_release() and mmu_notifier_call_srcu() no
longer have any users, they have all been converted to use
mmu_notifier_put().
So delete this difficult to use interface.
Signed-off-by: Jason Gunthorpe
---
include/linux/mmu_notifier.h | 5 -
mm/m
On 8/6/19 11:47 PM, Dmitry Safonov wrote:
> Hi Pavel,
>
> On 8/3/19 10:34 PM, Pavel Machek wrote:
>> Hi!
>>
>>> --- a/drivers/iommu/intel-iommu.c
>>> +++ b/drivers/iommu/intel-iommu.c
>>> @@ -3721,7 +3721,7 @@ static void intel_unmap(struct device *d
>>>
>>> freelist = domain_unmap(domain, s
From: Jason Gunthorpe
This is a significant simplification: no extra list is kept per FD, and
the interval tree is now shared between all the ucontexts, reducing
overhead if there are multiple ucontexts active.
Signed-off-by: Jason Gunthorpe
---
drivers/infiniband/core/umem_odp.c| 170
From: Jason Gunthorpe
A prior commit e0f3c3f78da2 ("mm/mmu_notifier: init notifier if necessary")
made an attempt at doing this, but had to be reverted as calling
the GFP_KERNEL allocator under the i_mmap_mutex causes deadlock, see
commit 35cfa2b0b491 ("mm/mmu_notifier: allocate mmu_notifier in a
From: Jason Gunthorpe
GRU is already using almost the same algorithm as get/put; it even
helpfully has a 10-year-old comment to make this algorithm common, which
is finally happening.
There are a few differences and fixes from this conversion:
- GRU used rcu not srcu to read the hlist
- Unclear
From: Jason Gunthorpe
This simplifies the code to not have so many one-line functions and extra
logic. __mmu_notifier_register() simply becomes the entry point to
register the notifier, and the other one calls it under lock.
Also add a lockdep_assert to check that the callers are holding the loc
Hi Pavel,
On 8/3/19 10:34 PM, Pavel Machek wrote:
> Hi!
>
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -3721,7 +3721,7 @@ static void intel_unmap(struct device *d
>>
>> freelist = domain_unmap(domain, start_pfn, last_pfn);
>>
>> -if (intel_iommu_str
From: Christoph Hellwig
[ Upstream commit 66d7780f18eae0232827fcffeaded39a6a168236 ]
Check that the pfn returned from arch_dma_coherent_to_pfn refers to
a valid page and reject the mmap / get_sgtable requests otherwise.
Based on the arm implementation of the mmap and get_sgtable methods.
Signe
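Roughly, the check being described amounts to something like the following fragment (illustrative, not the literal upstream diff):

#include <linux/dma-noncoherent.h>
#include <linux/mm.h>

static int example_check_coherent_pfn(struct device *dev, void *cpu_addr,
				      dma_addr_t dma_addr)
{
	unsigned long pfn = arch_dma_coherent_to_pfn(dev, cpu_addr, dma_addr);

	/* Reject the mmap / get_sgtable request if the returned pfn
	 * does not refer to a valid page. */
	if (!pfn_valid(pfn))
		return -ENXIO;
	return 0;
}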
On Tue, 2019-08-06 at 07:23 +0200, Christoph Hellwig wrote:
> On Mon, Aug 05, 2019 at 05:51:53PM +0200, Lucas Stach wrote:
> > The dma required_mask needs to reflect the actual addressing
> > capabilities
> > needed to handle the whole system RAM. When truncated down to the
> > bus
> > addressing c
On 8/5/19 10:01 AM, Christoph Hellwig wrote:
diff --git a/include/linux/dma-noncoherent.h b/include/linux/dma-noncoherent.h
index 3813211a9aad..9ae5cee543c4 100644
--- a/include/linux/dma-noncoherent.h
+++ b/include/linux/dma-noncoherent.h
@@ -42,13 +42,8 @@ void arch_dma_free(struct device *dev,
Hi Rob,
On Mon, 2019-08-05 at 13:23 -0600, Rob Herring wrote:
> On Mon, Aug 5, 2019 at 10:03 AM Nicolas Saenz Julienne
> wrote:
> > Hi Rob,
> > Thanks for the review!
> >
> > On Fri, 2019-08-02 at 11:17 -0600, Rob Herring wrote:
> > > On Wed, Jul 31, 2019 at 9:48 AM Nicolas Saenz Julienne
> > >
On Tue, Aug 06, 2019 at 05:45:03PM +0100, Russell King - ARM Linux admin wrote:
> On Tue, Aug 06, 2019 at 05:08:54PM +0100, Will Deacon wrote:
> > On Sat, Aug 03, 2019 at 08:48:12AM +0200, Christoph Hellwig wrote:
> > > On Fri, Aug 02, 2019 at 11:38:03AM +0100, Will Deacon wrote:
> > > >
> > > > S
On Tue, Aug 06, 2019 at 05:08:54PM +0100, Will Deacon wrote:
> On Sat, Aug 03, 2019 at 08:48:12AM +0200, Christoph Hellwig wrote:
> > On Fri, Aug 02, 2019 at 11:38:03AM +0100, Will Deacon wrote:
> > >
> > > So this boils down to a terminology mismatch. The Arm architecture
> > > doesn't have
> >
On Sat, Aug 03, 2019 at 08:48:12AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 02, 2019 at 11:38:03AM +0100, Will Deacon wrote:
> >
> > So this boils down to a terminology mismatch. The Arm architecture doesn't
> > have
> > anything called "write combine", so in Linux we instead provide what th
On Tue, Aug 06, 2019 at 03:59:40PM +, Lendacky, Thomas wrote:
> As long as two different cookie types (page pointer for encrypted DMA
> and virtual address returned from page_address() for unencrypted DMA)
> is ok. I'm just not familiar with how the cookie is used in any other
> functions, if a
On 8/6/19 10:46 AM, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 02:18:49PM +, Lendacky, Thomas wrote:
>> I think you need to keep everything inside the original if statement since
>> the caller is expecting a page pointer to be returned in this case and not
>> the page_address() which is
Hi Joerg,
On 06/08/2019 16:25, Joerg Roedel wrote:
Hi Robin,
On Mon, Jul 29, 2019 at 05:46:00PM +0100, Robin Murphy wrote:
Since scatterlist dimensions are all unsigned ints, in the relatively
rare cases where a device's max_segment_size is set to UINT_MAX, then
the "cur_len + s_length <= max_
On Tue, Aug 06, 2019 at 02:18:49PM +, Lendacky, Thomas wrote:
> I think you need to keep everything inside the original if statement since
> the caller is expecting a page pointer to be returned in this case and not
> the page_address() which is returned when the DMA_ATTR_NO_KERNEL_MAPPING
> is
On Tue, Aug 06, 2019 at 04:06:58PM +0200, Lucas Stach wrote:
>
> dma_direct_free_pages() then needs the same check, as otherwise the cpu
> address is treated as a cookie instead of a real address and the
> encryption needs to be re-enabled.
Ok, let's try this one instead:
--
From 3a7aa9fe38a5eae
On Tue, Jul 30, 2019 at 04:26:01PM +0100, Will Deacon wrote:
> Joerg -- if you'd like to pick this up as a fix, feel free, otherwise I'll
> include it in my pull request for 5.4.
Applied to iommu/fixes, thanks.
On Thu, Aug 01, 2019 at 11:14:58AM +0800, Lu Baolu wrote:
> drivers/iommu/intel-iommu.c | 2 ++
> 1 file changed, 2 insertions(+)
Applied to iommu/fixes, thanks.
Hi Robin,
On Mon, Jul 29, 2019 at 05:46:00PM +0100, Robin Murphy wrote:
> Since scatterlist dimensions are all unsigned ints, in the relatively
> rare cases where a device's max_segment_size is set to UINT_MAX, then
> the "cur_len + s_length <= max_len" check in __finalise_sg() will always
> retur
On Mon, Jul 29, 2019 at 04:32:38PM +0100, Robin Murphy wrote:
> drivers/iommu/dma-iommu.c | 17 ++---
> 1 file changed, 10 insertions(+), 7 deletions(-)
Applied to iommu/fixes, thanks Robin.
On 8/6/19 9:06 AM, Lucas Stach wrote:
> On Tuesday, 06.08.2019 at 16:04 +0200, Christoph Hellwig wrote:
>> Ok, does this work?
>>
>> --
>> From 34d35f335a98f515f2516b515051e12eae744c8d Mon Sep 17 00:00:00 2001
>>> From: Christoph Hellwig
>> Date: Tue, 6 Aug 2019 14:33:23 +0300
>> Subject: dma-
On Tuesday, 06.08.2019 at 16:04 +0200, Christoph Hellwig wrote:
> Ok, does this work?
>
> --
> From 34d35f335a98f515f2516b515051e12eae744c8d Mon Sep 17 00:00:00 2001
> > From: Christoph Hellwig
> Date: Tue, 6 Aug 2019 14:33:23 +0300
> Subject: dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
>
> T
Ok, does this work?
--
From 34d35f335a98f515f2516b515051e12eae744c8d Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Tue, 6 Aug 2019 14:33:23 +0300
Subject: dma-direct: fix DMA_ATTR_NO_KERNEL_MAPPING
The new DMA_ATTR_NO_KERNEL_MAPPING needs to actually assign
a dma_addr to work. Also sk
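Paraphrasing the fix, the missing piece is roughly the fragment below inside dma_direct_alloc_pages(); the variable names follow that function's usual signature, and the exact code may differ from what was applied:

	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING) {
		/* Still flush any dirty cache lines on the kernel alias. */
		if (!PageHighMem(page))
			arch_dma_prep_coherent(page, size);
		/* The fix: report the bus address instead of returning
		 * early with *dma_handle left uninitialized. */
		*dma_handle = phys_to_dma(dev, page_to_phys(page));
		/* Return the page pointer as the opaque cookie. */
		return page;
	}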
On 8/6/19 6:33 AM, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 11:13:29AM +0200, Lucas Stach wrote:
>> Hi Christoph,
>>
>> I just found a regression where my NVMe device is no longer able to set
>> up its HMB.
>>
>> After subject commit dma_direct_alloc_pages() is no longer initializing
>> d
On Tuesday, 06.08.2019 at 13:33 +0200, Christoph Hellwig wrote:
> On Tue, Aug 06, 2019 at 11:13:29AM +0200, Lucas Stach wrote:
> > Hi Christoph,
> >
> > I just found a regression where my NVMe device is no longer able to
> > set
> > up its HMB.
> >
> > After subject commit dma_direct_alloc_pa
On Tue, Aug 06, 2019 at 11:13:29AM +0200, Lucas Stach wrote:
> Hi Christoph,
>
> I just found a regression where my NVMe device is no longer able to set
> up its HMB.
>
> After subject commit dma_direct_alloc_pages() is no longer initializing
> dma_handle properly when DMA_ATTR_NO_KERNEL_MAPPING
Hi Jungo,
On Fri, Jul 26, 2019 at 4:24 PM Jungo Lin wrote:
>
> Hi, Tomasz:
>
> On Thu, 2019-07-25 at 18:23 +0900, Tomasz Figa wrote:
> > Hi Jungo,
> >
> > On Sat, Jul 20, 2019 at 6:58 PM Jungo Lin wrote:
> > >
> > > Hi, Tomasz:
> > >
> > > On Wed, 2019-07-10 at 18:56 +0900, Tomasz Figa wrote:
>
Hi Christoph,
I just found a regression where my NVMe device is no longer able to set
up its HMB.
After the subject commit, dma_direct_alloc_pages() is no longer initializing
dma_handle properly when DMA_ATTR_NO_KERNEL_MAPPING is set, as the
function is now returning too early.
Now this could easily
A couple nitpicks below:
On Thu, Aug 01, 2019 at 05:59:46PM +0200, Eric Auger wrote:
> - * The new element is sorted by address with respect to the other
> - * regions of the same type. In case it overlaps with another
> - * region of the same type, regions are merged. In case it
> - * overlaps wi