Hi all,
my commit to add swiotlb to arm failed to initialize the atomic pool,
which is needed for GFP_ATOMIC allocations on non-coherent devices.
These are fairly rare, but exist so we should wire it up. For 5.4
I plan to move the initialization to the common dma-remap code so it
won't be missed f
When we use the generic dma-direct + remap code we also need to
initialize the atomic pool that is used for GFP_ATOMIC allocations on
non-coherent devices.
Fixes: ad3c7b18c5b3 ("arm: use swiotlb for bounce buffering on LPAE configs")
Signed-off-by: Christoph Hellwig
---
arch/arm/mm/dma-mapping.c
Hi all,
As Shawn pointed out we've had issues with the dma mmap pgprots ever
since the dma_common_mmap helper was added beyond the initial
architectures - we default to uncached mappings, but for devices that
are DMA coherent, or if the DMA_ATTR_NON_CONSISTENT is set (and
supported) this can lead
Mips uses the KSEG1 kernel memory segment do map dma coherent
allocations for non-coherent devices as uncachable, and does not have
any kind of special support for DMA_ATTR_WRITE_COMBINE in the allocation
path. Thus supporting DMA_ATTR_WRITE_COMBINE in dma_mmap_attrs will
lead to multiple mappings
All the way back to introducing dma_common_mmap we've defaulyed to mark
the pages as uncached. But this is wrong for DMA coherent devices.
Later on DMA_ATTR_WRITE_COMBINE also got incorrect treatment as that
flag is only treated special on the alloc side for non-coherent devices.
Introduce a new
Hello!
On 05.08.2019 11:01, Christoph Hellwig wrote:
Mips uses the KSEG1 kernel memory segment do map dma coherent
MIPS. s/do/to/?
allocations for non-coherent devices as uncachable, and does not have
Uncacheable?
any kind of special support for DMA_ATTR_WRITE_COMBINE in the al
On Mon, Aug 05, 2019 at 11:01:44AM +0300, Christoph Hellwig wrote:
> All the way back to introducing dma_common_mmap we've defaulyed to mark
s/defaulyed/defaulted/
> the pages as uncached. But this is wrong for DMA coherent devices.
> Later on DMA_ATTR_WRITE_COMBINE also got incorrect treatment
The comments are spot on and should be near the central API, not just
near a single implementation.
Signed-off-by: Christoph Hellwig
---
arch/arm/mm/dma-mapping.c | 11 ---
kernel/dma/mapping.c | 11 +++
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/arch/a
Hi all,
we have a few places where the DMA mapping layer has non-trivial default
actions that are questionable and/or dangerous.
This series instead wires up the mmap, get_sgtable and get_required_mask
methods explicitly and cleans up some surrounding areas. This also means
we could get rid of t
While the default ->mmap and ->get_sgtable implementations work for the
majority of our dma_map_ops implementations, they are inherently unsafe
for others that don't use the page allocator or CMA and/or use their
own way of remapping not covered by the common code. So remove the
defaults if these meth
Provide a pgprot_noncached like all the other nommu ports so that
common code can rely on it being present. Note that this is
generally code that is not actually run on nommu, but at least we can
avoid nasty ifdefs by having a stub.
Signed-off-by: Christoph Hellwig
---
arch/m68k/incl
Add a helper to check if DMA allocations for a specific device can be
mapped to userspace using dma_mmap_*.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-mapping.h | 5 +
kernel/dma/mapping.c | 23 +++
2 files changed, 28 insertions(+)
diff --git a/inclu
Now that we never use a default ->mmap implementation, and non-coherent
architectures can control the presence of ->mmap support by enabling
ARCH_HAS_DMA_COHERENT_TO_PFN for the dma direct implementation there
is no need for a global config option to control the availability
of dma_common_mmap.
Si
Replace the local hack with the dma_can_mmap helper to check if
a given device supports mapping DMA allocations to userspace.
Signed-off-by: Christoph Hellwig
---
sound/core/pcm_native.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/sound/core/pcm_native.c b/sound/core
Most dma_map_ops instances are IOMMUs that work perfectly fine in 32-bits
of IOVA space, and the generic direct mapping code already provides its
own routines that are intelligent based on the amount of memory actually
present. Wire up the dma-direct routine for the ARM direct mapping code
as well,
On Mon, 05 Aug 2019 11:11:56 +0200,
Christoph Hellwig wrote:
>
> Replace the local hack with the dma_can_mmap helper to check if
> a given device supports mapping DMA allocations to userspace.
>
> Signed-off-by: Christoph Hellwig
> ---
> sound/core/pcm_native.c | 5 ++---
> 1 file changed, 2 in
Hi Jungo,
On Tue, Jul 30, 2019 at 10:45 AM Jungo Lin wrote:
>
> On Mon, 2019-07-29 at 19:04 +0900, Tomasz Figa wrote:
> > On Mon, Jul 29, 2019 at 10:18 AM Jungo Lin wrote:
> > > On Fri, 2019-07-26 at 14:49 +0900, Tomasz Figa wrote:
> > > > On Wed, Jul 24, 2019 at 1:31 PM Jungo Lin
> > > > wrot
Commit 72921427d46b
("string.h: Add str_has_prefix() helper function")
introduced str_has_prefix() to replace the error-prone
strncmp(str, const, len).
strncmp(str, const, len) is easy to get wrong because len is easy to
miscount, e.g. through an off-by-one or sizeof(const) without - 1.
These patches replace such patte
strncmp(str, const, len) is error-prone because len is easy to get
wrong.
For example, a hard-coded len may contain a counting error, or
sizeof(const) may be used without subtracting 1.
So we prefer the newly introduced str_has_prefix()
over such strncmp() calls to make the code better.
Signed-off-by: Chuhong Yuan
---
Changes
The dma required_mask needs to reflect the actual addressing capabilities
needed to handle the whole system RAM. When truncated down to the bus
addressing capabilities, dma_addressing_limited() will incorrectly signal
no limitations for devices which are restricted by the bus_dma_mask.
Fixes: b4ebe
Hi Christoph,
Am Donnerstag, den 01.08.2019, 16:00 +0200 schrieb Christoph Hellwig:
> On Thu, Aug 01, 2019 at 10:35:02AM +0200, Lucas Stach wrote:
> > Hi Christoph,
> >
> > Am Donnerstag, den 01.08.2019, 09:29 +0200 schrieb Christoph Hellwig:
> > > Hi Lukas,
> > >
> > > have you tried the latest
Hi Rob,
Thanks for the review!
On Fri, 2019-08-02 at 11:17 -0600, Rob Herring wrote:
> On Wed, Jul 31, 2019 at 9:48 AM Nicolas Saenz Julienne
> wrote:
> > Some SoCs might have multiple interconnects each with their own DMA
> > addressing limitations. This function parses the 'dma-ranges' on each
On Mon, Aug 5, 2019 at 10:03 AM Nicolas Saenz Julienne
wrote:
>
> Hi Rob,
> Thanks for the review!
>
> On Fri, 2019-08-02 at 11:17 -0600, Rob Herring wrote:
> > On Wed, Jul 31, 2019 at 9:48 AM Nicolas Saenz Julienne
> > wrote:
> > > Some SoCs might have multiple interconnects each with their own
On Tue, 16 Jul 2019 11:30:14 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 6/9/19 3:44 PM, Jacob Pan wrote:
> > When VT-d driver runs in the guest, PASID allocation must be
> > performed via virtual command interface. This patch registers a
> > custom IOASID allocator which takes precedence over th
On Tue, 16 Jul 2019 18:44:56 +0200
Auger Eric wrote:
> Hi Jacob,
>
> On 6/9/19 3:44 PM, Jacob Pan wrote:
> > Guest shared virtual address (SVA) may require host to shadow guest
> > PASID tables. Guest PASID can also be allocated from the host via
> > enlightened interfaces. In this case, guest n
On Wed 12 Jun 00:15 PDT 2019, Vivek Gautam wrote:
> Indicate on MTP SDM845 that firmware implements handler to
> TLB invalidate erratum SCM call where SAFE sequence is toggled
> to achieve optimum performance on real-time clients, such as
> display and camera.
>
> Signed-off-by: Vivek Gautam
> -
On Wed 19 Jun 04:34 PDT 2019, Vivek Gautam wrote:
> On Tue, Jun 18, 2019 at 11:25 PM Will Deacon wrote:
> >
> > On Wed, Jun 12, 2019 at 12:45:51PM +0530, Vivek Gautam wrote:
> > > There are scenarios where drivers are required to make a
> > > scm call in atomic context, such as in one of the qco
On Tue, 16 Jul 2019 18:44:56 +0200
Auger Eric wrote:
> > +struct gpasid_bind_data {
> other structs in iommu.h are prefixed with iommu_?
Good point, will add iommu_ prefix.
Thanks,
Jacob
Hi Alex,
On 8/3/19 12:54 AM, Alex Williamson wrote:
On Fri, 2 Aug 2019 15:17:45 +0800
Lu Baolu wrote:
Hi Alex,
Thanks for reporting this. I will try to find a machine with a
pcie-to-pci bridge and get this issue fixed. I will update you
later.
Further debug below...
On 8/2/19 9:30 AM, Al
Multiple devices might share a private domain. One real example
is a PCI bridge and all the devices behind it. When removing a private
domain, make sure that it has been detached from all devices to
avoid a use-after-free.
Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Cc: Alex Williamson
Fixes: 9420
When the default domain of a group doesn't work for a device,
the iommu driver will try to use a private domain. The domain
which was previously attached to the device must be detached.
Cc: Ashok Raj
Cc: Jacob Pan
Cc: Kevin Tian
Cc: Alex Williamson
Fixes: 942067f1b6b97 ("iommu/vt-d: Identify d
+ (Robin)
Hi Robin,
Sorry to ping you...
What's your suggestion for this patch? I'm looking forward to your reply.
Thanks,
Xiongfeng.
On 2019/7/27 17:21, Xiongfeng Wang wrote:
> Fix following crash that occurs when 'fq_flush_timeout()' access
> 'fq->lock' while 'iovad->fq' has been cleared. T
Hello,
This version has only a small change in the last patch as requested by
Christoph and Halil, and collects Reviewed-by's.
These patches are applied on top of v5.3-rc2.
I don't have a way to test SME, SEV, nor s390's PEF so the patches have only
been build tested.
Changelog
Since v3:
- Pa
Now that generic code doesn't reference them, move sme_active() and
sme_me_mask to x86's .
Also remove the export for sme_active() since it's only used in files that
won't be built as modules. sme_me_mask on the other hand is used in
arch/x86/kvm/svm.c (via __sme_set() and __psp_pa()) which can be
powerpc is also going to use this feature, so put it in a generic location.
Signed-off-by: Thiago Jung Bauermann
Reviewed-by: Thomas Gleixner
Reviewed-by: Christoph Hellwig
---
arch/Kconfig | 3 +++
arch/s390/Kconfig | 4 +---
arch/x86/Kconfig | 4 +---
3 files changed, 5 insertions(+),
sme_active() is an x86-specific function so it's better not to call it from
generic code. Christoph Hellwig mentioned that "There is no reason why we
should have a special debug printk just for one specific reason why there
is a requirement for a large DMA mask.", so just remove dma_check_mask().
All references to sev_active() were moved to arch/x86 so we don't need to
define it for s390 anymore.
Signed-off-by: Thiago Jung Bauermann
Reviewed-by: Christoph Hellwig
Reviewed-by: Halil Pasic
---
arch/s390/include/asm/mem_encrypt.h | 1 -
arch/s390/mm/init.c | 7 +--
2 f
sme_active() is an x86-specific function so it's better not to call it from
generic code.
There's no need to mention which memory encryption feature is active, so
just use a more generic message. Besides, other architectures will have
different names for similar technology.
Signed-off-by: Thiago
Secure Encrypted Virtualization is an x86-specific feature, so it shouldn't
appear in generic kernel code because it forces non-x86 architectures to
define the sev_active() function, which doesn't make a lot of sense.
To solve this problem, add an x86 elfcorehdr_read() function to override
the gen
On Mon, Aug 05, 2019 at 05:51:53PM +0200, Lucas Stach wrote:
> The dma required_mask needs to reflect the actual addressing capabilities
> needed to handle the whole system RAM. When truncated down to the bus
> addressing capabilities dma_addressing_limited() will incorrectly signal
> no limitation
On Mon, Aug 05, 2019 at 11:22:03AM +0200, Takashi Iwai wrote:
> This won't work as expected, unfortunately. It's a bit tricky check,
> since the driver may have its own mmap implementation via
> substream->ops->mmap, and the dma_buffer.dev.dev might point to
> another object depending on the dma_b
On Tue, 06 Aug 2019 07:29:49 +0200,
Christoph Hellwig wrote:
>
> On Mon, Aug 05, 2019 at 11:22:03AM +0200, Takashi Iwai wrote:
> > This won't work as expected, unfortunately. It's a bit tricky check,
> > since the driver may have its own mmap implementation via
> > substream->ops->mmap, and the d
Hi Lu,
I really do like the switch to the per-device dma_map_ops, but:
On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
> Current Intel IOMMU driver sets the system level dma_ops. This
> implementation has at least the following drawbacks: 1) each
> dma API will go through the IOMMU driv