From: Christophe Leroy
> Sent: 30 June 2022 10:40
>
> On 30/06/2022 at 10:04, David Laight wrote:
> > From: Michael Schmitz
> >> Sent: 29 June 2022 00:09
> >>
> >> Hi Arnd,
> >>
> >> On 29/06/22 09:50, Arnd Bergmann wrote:
> >
From: Michael Schmitz
> Sent: 29 June 2022 00:09
>
> Hi Arnd,
>
> On 29/06/22 09:50, Arnd Bergmann wrote:
> > On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz
> > wrote:
> >> On 28/06/22 19:03, Geert Uytterhoeven wrote:
> >>
> The driver allocates bounce buffers using kmalloc if it hits an
From: Christoph Hellwig
> Sent: 28 March 2022 07:37
>
> On Fri, Mar 25, 2022 at 11:46:09AM -0700, Linus Torvalds wrote:
> > I think my list of three different sync cases (not just two! It's not
> > just about whether to sync for the CPU or the device, it's also about
> > what direction the data it
From: Linus Torvalds
> Sent: 27 March 2022 06:21
>
> On Sat, Mar 26, 2022 at 10:06 PM Linus Torvalds
> wrote:
> >
> > On Sat, Mar 26, 2022 at 8:49 PM Halil Pasic wrote:
> > >
> > > I agree that the CPU modifying buffers *concurrently* with DMA can never work,
> > > and I believe the ownership model was
From: Linus Torvalds
> Sent: 26 March 2022 18:39
>
> On Sat, Mar 26, 2022 at 9:06 AM Toke Høiland-Jørgensen wrote:
> >
> > I was also toying with the idea of having a copy-based peek helper like:
> >
> > u32 data = dma_peek_word(buf, offset)
>
> I really don't think you can or want to have a wor
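For reference, the helper Toke is floating would presumably look something like the sketch below. This is not an existing API; 'dev', 'handle' and 'cpu_addr' are simply whatever the caller already has for the mapping, and the whole thing is only meant to make the semantics of the proposal concrete.

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Sketch of the proposed copy-based peek: sync just the word for the CPU,
 * copy it out, then hand the cache line back to the device. */
static inline u32 dma_peek_word(struct device *dev, dma_addr_t handle,
				const void *cpu_addr, size_t offset)
{
	u32 val;

	dma_sync_single_for_cpu(dev, handle + offset, sizeof(val),
				DMA_FROM_DEVICE);
	memcpy(&val, cpu_addr + offset, sizeof(val));
	dma_sync_single_for_device(dev, handle + offset, sizeof(val),
				   DMA_FROM_DEVICE);
	return val;
}

Linus's (truncated) reply above is arguing against exactly this word-granular style of access.
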
From: Linus Torvalds
> Sent: 25 March 2022 21:57
>
> On Fri, Mar 25, 2022 at 2:13 PM Johannes Berg
> wrote:
> >
> > Well I see now that you said 'cache "writeback"' in (1), and 'flush' in
> > (2), so perhaps you were thinking of the same, and I'm just calling it
> > "flush" and "invalidate" resp
I've been thinking of the case where a descriptor ring has
to be in non-coherent memory (e.g. because that is all there is).
The receive ring processing isn't actually that difficult.
The driver has to fill a cache line full of new buffer
descriptors in memory but without assigning the first
buffer
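A minimal sketch of that scheme, assuming a ring that is kept coherent by hand with the streaming-sync API. The descriptor layout, the MY_DESC_OWN bit and the refill granularity are all made up for illustration (32-bit DMA addresses assumed for brevity):

#include <linux/bits.h>
#include <linux/cache.h>
#include <linux/dma-mapping.h>

struct my_rx_desc {
	__le32 addr;
	__le32 status;		/* MY_DESC_OWN = device owns the descriptor */
};

#define MY_DESC_OWN	cpu_to_le32(BIT(31))
#define DESCS_PER_LINE	(L1_CACHE_BYTES / sizeof(struct my_rx_desc))

static void refill_rx_line(struct device *dev, struct my_rx_desc *ring,
			   dma_addr_t ring_dma, unsigned int first,
			   const dma_addr_t buf_dma[])
{
	unsigned int i;

	/* Fill the whole cache line of descriptors, but keep the first
	 * one marked as CPU-owned for now. */
	for (i = 0; i < DESCS_PER_LINE; i++) {
		ring[first + i].addr = cpu_to_le32(buf_dma[i]);
		ring[first + i].status = i ? MY_DESC_OWN : 0;
	}

	/* Write the line back so the device sees consistent contents... */
	dma_sync_single_for_device(dev, ring_dma + first * sizeof(*ring),
				   L1_CACHE_BYTES, DMA_TO_DEVICE);

	/* ...and only then hand the first descriptor over. */
	ring[first].status = MY_DESC_OWN;
	dma_sync_single_for_device(dev, ring_dma + first * sizeof(*ring),
				   sizeof(*ring), DMA_TO_DEVICE);
}

The point is that the ownership handover happens only after the rest of the line has been written back, so a device fetching the whole line never sees half-initialised descriptors.
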
From: Christoph Hellwig
> Sent: 28 January 2021 14:59
>
> Add a helper to map memory allocated using dma_alloc_pages into
> a user address space, similar to the dma_alloc_attrs function for
> coherent allocations.
>
...
> +::
> +
> + int
> + dma_mmap_pages(struct device *dev, struct vm_ar
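Seen from the consumer side, a usage sketch of the new helper might look like this. The driver and its 'mydev' structure are hypothetical; the only assumption from the patch is that md->page was previously returned by dma_alloc_pages():

static int mydev_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct mydev *md = file->private_data;

	/* md->page came from dma_alloc_pages() at probe time. */
	return dma_mmap_pages(md->dev, vma, vma->vm_end - vma->vm_start,
			      md->page);
}
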
From: Yong Wu
> Sent: 16 December 2020 10:36
>
> Currently gather->end is "unsigned long", which may overflow on arch32 in
> the corner case: 0xfff00000 + 0x100000 (iova + size).
> Although it doesn't affect the size (end - start), it affects the checking
> "gather->end < end"
>
> Fixes: a7d20dc1
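Spelling the 32-bit corner case out (values are illustrative, with a 32-bit unsigned long):

	unsigned long start = 0xfff00000UL;
	unsigned long end   = start + 0x100000;	/* wraps to 0 on 32-bit */

	/* end - start still comes out as 0x100000, but any comparison in
	 * the style of "gather->end < end" against the wrapped value now
	 * gives the wrong answer. */
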
From: Arvind Sankar
> Sent: 29 October 2020 21:35
>
> On Thu, Oct 29, 2020 at 09:41:13PM +0100, Thomas Gleixner wrote:
> > On Thu, Oct 29 2020 at 17:59, Paolo Bonzini wrote:
> > > On 29/10/20 17:56, Arvind Sankar wrote:
> > >>> For those two just add:
> > >>> struct apic *apic = x86_system
From: Arnd Bergmann
> Sent: 29 October 2020 09:51
...
> I think ideally there would be no global variable, with all accesses
> encapsulated in function calls, possibly using static_call() optimizations
> if any of them are performance critical.
There isn't really a massive difference between global
From: Arnd Bergmann
> Sent: 28 October 2020 21:21
>
> From: Arnd Bergmann
>
> There are hundreds of warnings in a W=2 build about a local
> variable shadowing the global 'apic' definition:
>
> arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global
> declaration [-Wshadow]
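The pattern behind those warnings, reduced to a toy example (the names only stand in for the real 'apic' global versus the kvm locals):

struct apic *apic;				/* the x86 global            */

static void toy(struct kvm_lapic *vcpu_apic)
{
	struct kvm_lapic *apic = vcpu_apic;	/* W=2: -Wshadow fires here  */

	(void)apic;
}
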
From: David Woodhouse
> Sent: 25 October 2020 10:26
> To: David Laight ; x...@kernel.org
>
> On Sun, 2020-10-25 at 09:49 +, David Laight wrote:
> > Just looking at a random one of these patches...
> >
> > Does the compiler manage to optimise that reasonably?
>
From: David Woodhouse
> Sent: 24 October 2020 22:35
>
> From: Thomas Gleixner
>
> Use the msi_msg shadow structs and compose the message with named bitfields
> instead of the unreadable macro maze.
>
> Signed-off-by: Thomas Gleixner
> Signed-off-by: David Woodhouse
> ---
> arch/x86/pci/xen.c
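For readers without the series at hand, the shape of the change is roughly as below. The field names and widths are illustrative only, not the exact x86 shadow-struct layout from the patches:

/* Illustrative only -- not the real x86 shadow structs. */
union demo_msi_addr_lo {
	u32 raw;
	struct {
		u32 reserved_0	:  2;
		u32 dest_mode	:  1;
		u32 redir_hint	:  1;
		u32 reserved_1	:  8;
		u32 dest_id	:  8;
		u32 base_addr	: 12;	/* 0xfee */
	};
};

static void demo_compose(struct msi_msg *msg, u32 apic_id)
{
	union demo_msi_addr_lo addr = {
		.base_addr = 0xfee,
		.dest_id   = apic_id,
	};

	/* Replaces the old shift-and-mask style:
	 *   msg->address_lo = MSI_ADDR_BASE_LO | MSI_ADDR_DEST_ID(apic_id) | ...;
	 */
	msg->address_lo = addr.raw;
}
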
> On Wed, Sep 30, 2020 at 6:09 PM Christoph Hellwig wrote:
> >
> > Add a new API that returns a virtually non-contiguous array of pages
> > and dma address. This API is only implemented for dma-iommu and will
> > not be implemented for non-iommu DMA API instances that have to allocate
> > contiguo
From: Christoph Hellwig
> Sent: 22 September 2020 14:40
...
> @@ -131,6 +125,16 @@ struct dma_map_ops {
> unsigned long (*get_merge_boundary)(struct device *dev);
> };
>
> +/*
> + * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
> + * be given to a device to
From: Lu Baolu
> Sent: 30 August 2019 08:17
> The Intel VT-d hardware uses paging for DMA remapping.
> The minimum mapped window is a page size. The device
> drivers may map buffers not filling the whole IOMMU
> window. This allows the device to access possibly
> unrelated memory and a maliciou
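In numbers, assuming a 4KiB IOMMU page size (illustrative only):

	size_t buf_len = 512;
	size_t mapped  = ALIGN(buf_len, SZ_4K);	/* the whole 4KiB page gets mapped   */
	size_t exposed = mapped - buf_len;	/* 3584 bytes of unrelated memory    */
						/* remain reachable by the device    */
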
From: Robin Murphy
> Sent: 14 June 2019 16:06
...
> Well, apart from the bit in DMA-API-HOWTO which has said this since
> forever (well, before Git history, at least):
>
> "The CPU virtual address and the DMA address are both
> guaranteed to be aligned to the smallest PAGE_SIZE order which
> is gr
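Concretely, for the 16KiB case being argued about in this thread, the quoted guarantee means the following (sketch; 'dev' is whatever device the caller has):

	dma_addr_t handle;
	void *cpu = dma_alloc_coherent(dev, SZ_16K, &handle, GFP_KERNEL);

	/* Both cpu and handle are aligned to 16KiB, so the buffer cannot
	 * straddle a 16KiB boundary. */
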
From: 'Christoph Hellwig'
> Sent: 14 June 2019 15:50
> To: David Laight
> On Fri, Jun 14, 2019 at 02:15:44PM +, David Laight wrote:
> > Does this still guarantee that requests for 16k will not cross a 16k
> > boundary?
> > It looks like you are losing the
From: Christoph Hellwig
> Sent: 14 June 2019 14:47
>
> Many architectures (e.g. arm, m68k and sh) have always used exact
> allocation in their dma coherent allocator, which avoids a lot of
> memory waste especially for larger allocations. Lift this behavior
> into the generic allocator so that dma
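A quick illustration of what exact allocation saves, with 4KiB pages (numbers picked purely for illustration):

	size_t req   = 129 * 1024;			/* a 33-page request          */
	size_t pow2  = PAGE_SIZE << get_order(req);	/* 256KiB with 2^order sizing */
	size_t exact = PAGE_ALIGN(req);			/* 132KiB with exact sizing   */
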
From: Srinath Mannam
> Sent: 01 May 2019 16:23
...
> > > On Fri, Apr 12, 2019 at 08:43:32AM +0530, Srinath Mannam wrote:
> > > > A few SoCs have a limitation that their PCIe host can't allow certain inbound
> > > > address ranges. Allowed inbound address ranges are listed in dma-ranges
> > > > DT property
From: Qian Cai
> Sent: 30 November 2018 21:48
> To: h...@lst.de; m.szyprow...@samsung.com; robin.mur...@arm.com
> Cc: yisen.zhu...@huawei.com; salil.me...@huawei.com; john.ga...@huawei.com;
> linux...@huawei.com;
> iommu@lists.linux-foundation.org; net...@vger.kernel.org;
> linux-ker...@vger.kern
From: Jaewon Kim
> Sent: 24 November 2017 05:59
>
> dma-coherent uses bitmap APIs which internally choose the alignment based on
> the requested size. If most allocations are of small sizes like KBs, the
> alignment scheme seems to be good for anti-fragmentation. But if large
> allocations are commonly
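Roughly what the dma-coherent pool does today, as I read it ('mem' stands for the pool's bookkeeping structure): the allocation granule is 2^order pages, so a 68KiB request occupies a 128KiB-sized, 128KiB-aligned region of the pool.

	int order  = get_order(SZ_64K + SZ_4K);		/* 68KiB -> order 5 (32 pages) */
	int pageno = bitmap_find_free_region(mem->bitmap, mem->size, order);
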
From: Jim Quinlan
> Sent: 24 October 2017 19:08
...
> Hi David, Christoph was also concerned about this:
>
> "For the block world take a look at __blk_segment_map_sg which does the
> merging
> of contiguous pages into a single SG segment. You'd have to override
> BIOVEC_PHYS_MERGEABLE to prevent
From: Jim Quinlan
> Sent: 20 October 2017 16:28
> On Fri, Oct 20, 2017 at 10:57 AM, Christoph Hellwig wrote:
> > On Fri, Oct 20, 2017 at 10:41:56AM -0400, Jim Quinlan wrote:
> >> I am not sure I understand your comment -- the size of the request
> >> shouldn't be a factor. Let's look at your exam
> >> It appears that my ax88179 is working just fine now with the vendor
> >> driver. So perhaps it's possible to revert the old commit in the linux
> >> kernel and allow the use of scatter gather ? (perhaps for non-intel
> >> hosts ? I'm not sure if this device is affected by intel xhci errata)
>
From: Christoph Hellwig
> Sent: 03 October 2017 11:43
>
> ia64 does not implement DMA_ATTR_NON_CONSISTENT allocations, so it doesn't
> make any sense to do any work in dma_cache_sync given that it must be a
> no-op when dma_alloc_attrs returns coherent memory.
>
> Signed-off-by: Christoph Hellwig
From: Christoph Hellwig
> Sent: 03 October 2017 11:43
> x86 does not implement DMA_ATTR_NON_CONSISTENT allocations, so it doesn't
> make any sense to do any work in dma_cache_sync given that it must be a
> no-op when dma_alloc_attrs returns coherent memory.
I believe it is just about possible to r
From: Alex Williamson
> Sent: 16 August 2017 17:56
...
> Firmware pissing match... Processors running with 8k or less page size
> fall within the recommendations of the PCI spec for register alignment
> of MMIO regions of the device and this whole problem becomes less of an
> issue.
Actually if q
From: Benjamin Herrenschmidt
> Sent: 15 August 2017 02:34
> On Tue, 2017-08-15 at 09:16 +0800, Jike Song wrote:
> > > Taking a step back, though, why does vfio-pci perform this check in the
> > > first place? If a malicious guest already has control of a device, any
> > > kind of interrupt spoofing
From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: 13 May 2016 06:33
...
> Simply denying direct writes to the vector table or preventing mapping
> of the vector table into the user address space does not provide any
> tangible form of protection. Many devices make use of window reg
From: Tian, Kevin
> Sent: 05 May 2016 10:37
...
> > Actually, we are not aiming at accessing the MSI-X table from the
> > guest. So I think it's safe to passthrough the MSI-X table if we
> > can make sure guest kernel would not touch MSI-X table in
> > normal code path such as para-virtualized guest kernel on
From: Yongji Xie
> Sent: 18 April 2016 11:59
> We introduce a new pci_bus_flags, PCI_BUS_FLAGS_MSI_REMAP
> which indicates all devices on the bus are protected by the
> hardware which supports IRQ remapping (Intel naming).
>
> This flag will be used to know whether it's safe to expose
> MSI-X table
From: James Bottomley
> Sent: 28 September 2015 16:12
> > > > The x86 cpus will also do 32bit wide rmw cycles for the 'bit'
> > > > operations.
> > >
> > > That's different: it's an atomic RMW operation. The problem with the
> > > alpha was that the operation wasn't atomic (meaning that it can't
From: James Bottomley [mailto:james.bottom...@hansenpartnership.com]
> Sent: 28 September 2015 15:27
> On Mon, 2015-09-28 at 08:58 +, David Laight wrote:
> > From: Rafael J. Wysocki
> > > Sent: 27 September 2015 15:09
> > ...
> > > > > Say you have thr
From: Rafael J. Wysocki
> Sent: 27 September 2015 15:09
...
> > > Say you have three adjacent fields in a structure, x, y, z, each one byte
> > > long.
> > > Initially, all of them are equal to 0.
> > >
> > > CPU A writes 1 to x and CPU B writes 2 to y at the same time.
> > >
> > > What's the resu
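The scenario, spelled out in code (illustrative):

struct { char x, y, z; } s;	/* all zero initially */

void cpu_a(void) { s.x = 1; }	/* runs concurrently with... */
void cpu_b(void) { s.y = 2; }	/* ...this on another CPU    */

/* With real byte stores both updates survive (x == 1, y == 2, z == 0).
 * On hardware without byte stores (the early Alpha case mentioned in the
 * adjacent posts), each assignment becomes a non-atomic 32-bit
 * read-modify-write of the containing word, so one CPU can overwrite the
 * other's byte with stale data. */
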
From: Bjorn Helgaas
...
> >> Even if you do that, you ought to write valid interrupt information
> >> into the 4th slot (maybe replicating one of the earlier interrupts).
> >> Then, if the device does raise the 'unexpected' interrupt you don't
> >> get a write to a random kernel location.
> >
> > I
From: Alexander Gordeev
...
> > Even if you do that, you ought to write valid interrupt information
> > into the 4th slot (maybe replicating one of the earlier interrupts).
> > Then, if the device does raise the 'unexpected' interrupt you don't
> > get a write to a random kernel location.
>
> I mi
From: Bjorn Helgaas
> On Tue, Jun 10, 2014 at 03:10:30PM +0200, Alexander Gordeev wrote:
> > There are PCI devices that require a particular value written
> > to the Multiple Message Enable (MME) register while aligned on
> > power of 2 boundary value of actually used MSI vectors 'nvec'
> > is a le
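For context, the MME field encodes the enabled vector count as a power of two, so the constraint being described works out roughly like this (sketch using the kernel's order_base_2()):

	unsigned int nvec = 3;			/* vectors the driver actually uses  */
	unsigned int mme  = order_base_2(nvec);	/* 2, i.e. 2^2 = 4 vectors enabled   */

In other words the device is told four vectors are enabled even though only three are populated, which is exactly what makes the 'unexpected' fourth vector in the earlier posts possible.
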