On Mon, May 30, 2022 at 01:52:37AM +, Michael Kelley (LINUX) wrote:
> B) The contents of the memory buffer must transition between
> encrypted and not encrypted. The hardware doesn't provide
> any mechanism to do such a transition "in place". The only
> way to transition is for the CPU to cop
On Tue, May 10, 2022 at 06:26:55PM +, Michael Kelley (LINUX) wrote:
> > Hmm, this seems a bit pessimistic - the offset can vary per mapping, so
> > it feels to me like it should really be the caller's responsibility to
> > account for it if they're already involved enough to care about both
> >
On Thu, Sep 30, 2021 at 07:23:55PM -0300, Jason Gunthorpe wrote:
> > > The Intel functional issue is that Intel blocks the cache maintenance
> > > ops from the VM and the VM has no way to self-discover that the cache
> > > maintenance ops don't work.
> >
> > the VM doesn't need to know whether the m
On Thu, Sep 30, 2021 at 09:43:58PM +0800, Lu Baolu wrote:
> Here, we are discussing arch_sync_dma_for_cpu() and
> arch_sync_dma_for_device(). The x86 arch has clflush to sync dma buffer
> for device, but I can't see any instruction to sync dma buffer for cpu
> if the device is not cache coherent. I
On Fri, Aug 20, 2021 at 03:40:08PM +, Michael Kelley wrote:
> I see that the swiotlb code gets and uses the min_align_mask field. But
> the NVME driver is the only driver that ever sets it, so the value is zero
> in all other cases. Does swiotlb just use PAGE_SIZE in that that case? I
> coul
On Sat, Aug 21, 2021 at 02:04:11AM +0800, Tianyu Lan wrote:
> After dma_map_sg(), we still need to go through scatter list again to
> populate payload->rrange.pfn_array. We may just go through the scatter list
> just once if dma_map_sg() accepts a callback and run it during go
> through scatter l
On Thu, Aug 19, 2021 at 06:17:40PM +, Michael Kelley wrote:
> > +#define storvsc_dma_map(dev, page, offset, size, dir) \
> > + dma_map_page(dev, page, offset, size, dir)
> > +
> > +#define storvsc_dma_unmap(dev, dma_range, dir) \
> > + dma_unmap_page(dev, dma_range.dma,
On Thu, Aug 19, 2021 at 06:14:51PM +, Michael Kelley wrote:
> > + if (!pfns)
> > + return NULL;
> > +
> > + for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
> > + pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE)
> > + + (ms_hyperv.shared_gpa_boundary >>
On Thu, Aug 19, 2021 at 06:11:30PM +, Michael Kelley wrote:
> This function is manipulating page tables in the guest VM. It is not involved
> in communicating with Hyper-V, or passing PFNs to Hyper-V. The pfn array
> contains guest PFNs, not Hyper-V PFNs. So it should use PAGE_SIZE
> instead
On Tue, Oct 27, 2020 at 12:52:30PM +, Parav Pandit wrote:
>
> > From: h...@lst.de
> > Sent: Tuesday, October 27, 2020 1:41 PM
> >
> > On Mon, Oct 26, 2020 at 05:23:48AM +, Parav Pandit wrote:
> > > Hi Christoph,
> > >
> > > >
On Mon, Oct 26, 2020 at 05:23:48AM +, Parav Pandit wrote:
> Hi Christoph,
>
> > From: Jakub Kicinski
> > Sent: Saturday, October 24, 2020 11:45 PM
> >
> > CC: rdma, looks like rdma from the stack trace
> >
> > On Fri, 23 Oct 2020 20:07:17 -0700 syzbot wrote:
> > > syzbot has found a reprodu
On Mon, Oct 26, 2020 at 08:07:43PM +, Song Bao Hua (Barry Song) wrote:
> > diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
> > index c99de4a21458..964b74c9b7e3 100644
> > --- a/kernel/dma/Kconfig
> > +++ b/kernel/dma/Kconfig
> > @@ -125,7 +125,8 @@ if DMA_CMA
> >
> > config DMA_PERNUMA_
On Fri, May 15, 2020 at 01:10:21PM +0100, Robin Murphy wrote:
>> Meanwhile, for the safety of buffers, lower-layer drivers need to make
>> certain the buffers have already been unmapped in iommu before those buffers
>> go back to buddy for other users.
>
> That sounds like it would only have bene
On Thu, Nov 28, 2019 at 08:02:16AM +, Thomas Hellstrom wrote:
> > We have a hard time handling that in generic code. Do we have any
> > good use case for SWIOTLB_FORCE not that we have force_dma_unencrypted?
> > I'd love to be able to get rid of it..
> >
> IIRC the justification for it is debu
On Wed, Nov 27, 2019 at 06:22:57PM +, Thomas Hellstrom wrote:
> > bool dma_addressing_limited(struct device *dev)
> > {
> > + if (force_dma_unencrypted(dev))
> > + return true;
> > return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit) <
> > dma_get
On Thu, Oct 24, 2019 at 12:41:41PM +, Laurentiu Tudor wrote:
> From: Laurentiu Tudor
>
> Introduce a few new dma unmap and sync variants that, on top of the
> original variants, return the virtual address corresponding to the
> input dma address.
> In order to implement this a new dma map op
On Mon, Oct 28, 2019 at 10:55:05AM +, Laurentiu Tudor wrote:
> >> @@ -85,9 +75,10 @@ static void free_rx_fd(struct dpaa2_eth_priv *priv,
> >> sgt = vaddr + dpaa2_fd_get_offset(fd);
> >> for (i = 1; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
> >> addr = dpaa2_sg_get_addr(&sgt[i]);
>
On Wed, Oct 23, 2019 at 11:53:41AM +, Laurentiu Tudor wrote:
> We had an internal discussion over these points you are raising and
> Madalin (cc-ed) came up with another idea: instead of adding this prone
> to misuse api how about experimenting with a new dma unmap and dma sync
> variants th
> + select DMA_CMA
This needs to be
select DMA_CMA if HAVE_DMA_CONTIGUOUS
> +#include
> + /* Allocate from CMA */
> + // request_pages = (request_size >> PAGE_SHIFT) + 1;
> + request_pages = (round_up(request_size, PAGE_SIZE) >> PAGE_SHIFT);
> + page = dma_alloc_fro
On Tue, Sep 03, 2019 at 04:59:59AM +, Yoshihiro Shimoda wrote:
> Hi Christoph,
>
> Now this patch series got {Ack,Review}ed-by from each maintainer.
> https://patchwork.kernel.org/project/linux-renesas-soc/list/?series=166501
>
> So, would you pick this up through the dma-mapping tree as you
On Wed, Aug 28, 2019 at 04:41:45PM +, Derrick, Jonathan wrote:
> > diff --git a/arch/x86/include/asm/pci.h b/arch/x86/include/asm/pci.h
> > index 6fa846920f5f..75fe28492290 100644
> > --- a/arch/x86/include/asm/pci.h
> > +++ b/arch/x86/include/asm/pci.h
> > @@ -35,12 +35,15 @@ extern int noioap
On Fri, Jun 14, 2019 at 06:05:01PM +, Eugeniy Paltsev wrote:
> Hi Christoph,
>
> Regular question - do you have any public git repository with all this dma
> changes?
> I want to test it for ARC.
>
> Pretty sure the
> [PATCH 2/7] arc: remove the partial DMA_ATTR_NON_CONSISTENT support
> is
On Fri, Jun 14, 2019 at 06:11:00AM +, Tan, Ley Foon wrote:
> On Fri, 2019-06-14 at 07:44 +0200, Christoph Hellwig wrote:
> > On Fri, Jun 14, 2019 at 09:40:34AM +0800, Ley Foon Tan wrote:
> > >
> > > Hi Christoph
> > >
> > > Can this patch in http://git.infradead.org/users/hch/dma-mapping.gi
>
Just curious, what exactly is the use case? Explaining how someone
would want to use this should drive the way we design an interface for it.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Wed, Apr 10, 2019 at 03:01:14PM +, Thomas Hellstrom wrote:
> > So can you please respin a version acceptable to you and submit it
> > for 5.1 ASAP? Otherwise I'll need to move ahead with the simple
> > revert.
>
> I will.
> I need to do some testing to investigate how to best choose betwe
On Tue, Apr 09, 2019 at 05:24:48PM +, Thomas Hellstrom wrote:
> > Note that this only affects external, untrusted devices. But that
> > may include eGPU,
>
> What about discrete graphics cards, like Radeon and Nvidia? Who gets to
> determine what's trusted?
Based on firmware tables. discret
On Tue, Apr 09, 2019 at 02:17:40PM +, Thomas Hellstrom wrote:
> If that's the case, I think most of the graphics drivers will stop
> functioning. I don't think people would want that, and even if the
> graphics drivers are "to blame" due to not implementing the sync calls,
> I think the work in
On Tue, Apr 09, 2019 at 01:04:51PM +, Thomas Hellstrom wrote:
> On the VMware platform we have two possible vIOMMUS, the AMD iommu and
> Intel VTD. Given those conditions I believe the patch is functionally
> correct. We can't cover the AMD case with intel_iommu_enabled.
> Furthermore the only f
On Mon, Apr 08, 2019 at 06:47:52PM +, Thomas Hellstrom wrote:
> We HAVE discussed our needs, although admittedly some of my emails
> ended up unanswered.
And then you haven't followed up, and instead ignored the layering
instructions and just committed a broken patch?
> We've as you're well aw
On Tue, Jan 08, 2019 at 09:51:45AM +, Thomas Hellstrom wrote:
> Hi, Christoph,
>
> On Sat, 2019-01-05 at 09:01 +0100, Christoph Hellwig wrote:
> > Hi Thomas,
> >
> > vmwgfx has been doing some odd checks based on DMA ops which rely
> > on deep DMA mapping layer internals, and I think the chan
On Fri, Jan 04, 2019 at 01:45:26AM +, Huaisheng HS1 Ye wrote:
> From: Stefano Stabellini
> Sent: Friday, January 04, 2019 1:55 AM
> > On Thu, 3 Jan 2019, Huaisheng Ye wrote:
> > > From: Huaisheng Ye
> > >
> > > dma_common_get_sgtable has parameter attrs which is not used at all.
> > > Remove
Btw, can you try with the very latest dma-mapping-for-next tree,
which has a new fix from Thierry Reding that might be related.
On Thu, Dec 20, 2018 at 02:39:20PM +, Eugeniy Paltsev wrote:
> > I would be really surprised if that is caused by the patch to add
> > the zeroing.
> Me too :)
>
> > Can you check which commit caused the issue by bisecting
> > from a known good baseline?
>
> Yep. At least kernel build from
On Thu, Dec 20, 2018 at 02:32:52PM +, Eugeniy Paltsev wrote:
> Hi Christoph,
>
> I test kernel from your 'dma-alloc-always-zero' branch, and as
> I can see we have DMA peripherals (like USB) broken.
I would be really surprised if that is caused by the patch to add
the zeroing. Can you check
On Fri, Dec 14, 2018 at 12:12:00PM +, Eugeniy Paltsev wrote:
> Hi Christoph,
>
> do you have any public git repository with all your dma changes?
Most of the tree show up in my misc.git repo for testing.
This series is here:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dm
On Fri, May 18, 2018 at 10:05:51PM +0200, Helge Deller wrote:
> This patch seems to fix the dma issues I faced on my 32bit B160L parisc box.
>
> So it leaves only one open issue on parisc:
> Now every 32 bit parisc system is unnecessarily non-coherent.
I disagree with those comments, let me resend
On Fri, May 18, 2018 at 02:49:36PM +, Alexey Brodkin wrote:
> So if we set aside my complaints about direction in
> arch_sync_dma_for_{device|cpu}()...
Many other architectures use the argument. Several of those uses look bogus,
but for now I want to be able to do straightforward conversions. I
On Fri, May 18, 2018 at 01:03:46PM +, Alexey Brodkin wrote:
> Note mmc_get_dma_dir() is just "data->flags & MMC_DATA_WRITE ? DMA_TO_DEVICE
> : DMA_FROM_DEVICE".
> I.e. if we're preparing for sending data dma_noncoherent_map_sg() will have
> DMA_TO_DEVICE which
> is quite OK for passing to dma
> > The logical question is why?
>
> 1. See that's another platform with ARC core so maybe in case of ARM
>DMA allocator already zeroes pages regardless provided flags -
>personally I didn't check that.
Yes, most architectures always clear memory returned by dma_alloc*.
Looks like a few d
> > +int dma_configure(struct device *dev)
> > +{
> > + if (dev->bus->dma_configure)
> > + return dev->bus->dma_configure(dev);
>
> What if dma_common_configure() is called in case "bus->dma_configure" is not
> defined?
Then we'd still have a dependency of common code on OF and ACPI.
On Tue, Mar 13, 2018 at 04:22:53AM +, Nipun Gupta wrote:
> > Isn't this one or the other one but not both?
> >
> > Something like:
> >
> > if (dev->of_node)
> > of_dma_deconfigure(dev);
> > else
> > acpi_dma_deconfigure(dev);
> >
> > should work.
>
> I understand your point. Seems r
On Wed, Jan 10, 2018 at 03:27:41PM +, Alexey Brodkin wrote:
> Hi Christoph,
>
> On Wed, 2018-01-10 at 09:00 +0100, Christoph Hellwig wrote:
> > cris currently has an incomplete direct mapping dma_map_ops implementation
> > if PCI support is enabled. Replace it with the fully featured generic
>