On Fri, Sep 05, 2025 at 06:20:51PM +0200, Marek Szyprowski wrote:
> I've checked the most advertised use case in
> https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/log/?h=dmabuf-vfio
> and I still don't see the reason why it cannot be based
> on dma_map_resource() API? I'm awa
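For reference, a minimal sketch of what the dma_map_resource() approach
Marek suggests would look like against a BAR region; 'dev' and
'bar_phys' are placeholders, not names from the vfio series:

#include <linux/dma-mapping.h>

static dma_addr_t example_map_bar(struct device *dev,
                                  phys_addr_t bar_phys, size_t len)
{
        /* dma_map_resource() takes a raw phys_addr_t, no struct page */
        dma_addr_t dma = dma_map_resource(dev, bar_phys, len,
                                          DMA_BIDIRECTIONAL, 0);

        if (dma_mapping_error(dev, dma))
                return DMA_MAPPING_ERROR;
        return dma;
}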
On Tue, Sep 02, 2025 at 03:59:37PM -0600, Keith Busch wrote:
> On Tue, Sep 02, 2025 at 10:49:48PM +0200, Marek Szyprowski wrote:
> > On 19.08.2025 19:36, Leon Romanovsky wrote:
> > > @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
> > > static
On Mon, Sep 01, 2025 at 11:47:59PM +0200, Marek Szyprowski wrote:
> I would like to give those patches a try in linux-next, but in meantime
> I tested it on my test farm and found a regression in dma_map_resource()
> handling. Namely the dma_map_resource() is no longer possible with size
> not a
On Tue, Aug 19, 2025 at 08:36:44PM +0300, Leon Romanovsky wrote:
> This series does the core code and modern flows. A followup series
> will give the same treatment to the legacy dma_ops implementation.
I took a quick check over this to see that it is sane. I think using
phys is an improvement f
On Thu, Aug 28, 2025 at 02:54:35PM -0600, Keith Busch wrote:
> In truth though, I hadn't tried p2p metadata before today, and it looks
> like bio_integrity_map_user() is missing the P2P extraction flags to
> make that work. Just added this patch below, now I can set p2p or host
> memory independen
On Thu, Aug 28, 2025 at 01:10:32PM -0600, Keith Busch wrote:
> On Thu, Aug 28, 2025 at 03:41:15PM -0300, Jason Gunthorpe wrote:
> > On Thu, Aug 28, 2025 at 11:15:20AM -0600, Keith Busch wrote:
> > >
> > > I don't think that was ever the case. Metadata is allocated
On Thu, Aug 28, 2025 at 11:15:20AM -0600, Keith Busch wrote:
> On Thu, Aug 28, 2025 at 07:54:27PM +0300, Leon Romanovsky wrote:
> > On Thu, Aug 28, 2025 at 09:19:20AM -0600, Keith Busch wrote:
> > > On Tue, Aug 19, 2025 at 08:36:59PM +0300, Leon Romanovsky wrote:
> > > > diff --git a/include/linux/
crypted into a MR.
So it looks to me like this series will be important for this use case
as well.
It looks OK though:
Reviewed-by: Jason Gunthorpe
Jason
> ---
> kernel/dma/debug.h | 21 ---
> kernel/dma/direct.c | 16 -
> kernel/dma/mapping.c | 69 -
> 9 files changed, 50 insertions(+), 134 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
        if (ops->map_resource)
                addr = ops->map_resource(dev, phys, size, dir, attrs);
        else
                addr = DMA_MAPPING_ERROR;
As I think some of the design here is to run the trace even on the
failure path?
Otherwise looks OK
Reviewed-by: Jason Gunthorpe
Jason
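For context, the ordering in question looks roughly like this; the
trace/debug calls below illustrate the pattern and are not copied from
the series:

        if (ops->map_resource)
                addr = ops->map_resource(dev, phys, size, dir, attrs);
        else
                addr = DMA_MAPPING_ERROR;

        /* deliberately after the branch: the event fires even when
         * the mapping failed, so failures are visible in the trace */
        trace_dma_map_resource(dev, phys, addr, size, dir, attrs);
        debug_dma_map_resource(dev, phys, size, dir, addr, attrs);
        return addr;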
> ---
> drivers/xen/swiotlb-xen.c | 21 -
> 1 file changed, 20 insertions(+), 1 deletion(-)
Reviewed-by: Jason Gunthorpe
Jason
addr = phys_to_virt(phys);
And make addr a void *
Otherwise looks fine
Reviewed-by: Jason Gunthorpe
Jason
removing iommu_dma_(un)map_resource().
Reviewed-by: Jason Gunthorpe
Jason
> drivers/iommu/dma-iommu.c | 14 ++
> include/linux/iommu-dma.h | 7 +++
> kernel/dma/mapping.c | 4 ++--
> kernel/dma/ops_helpers.c | 6 +++---
> 4 files changed, 14 insertions(+), 17 deletions(-)
This looks fine
Reviewed-by: Jason Gunthorpe
But related to other patches..
io
On Tue, Aug 19, 2025 at 08:36:47PM +0300, Leon Romanovsky wrote:
> @@ -1218,19 +1219,24 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
>               return;
>
>       entry->dev = dev;
> -     entry->type = dma_debug_single;
> -     entry->paddr
> ---
> include/trace/events/dma.h | 4 ++--
> kernel/dma/mapping.c | 4 ++--
> 2 files changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
.rs| 3 +++
> 4 files changed, 43 insertions(+), 1 deletion(-)
Reviewed-by: Jason Gunthorpe
Jason
On Thu, Aug 14, 2025 at 04:31:06PM +0300, Leon Romanovsky wrote:
> On Thu, Aug 14, 2025 at 09:44:48AM -0300, Jason Gunthorpe wrote:
> > On Thu, Aug 14, 2025 at 03:35:06PM +0300, Leon Romanovsky wrote:
> > > > Then check attrs here, not pfn_valid.
> > >
>
On Thu, Aug 14, 2025 at 03:35:06PM +0300, Leon Romanovsky wrote:
> > Then check attrs here, not pfn_valid.
>
> attrs are not available in kmsan_handle_dma(). I can add it if you prefer.
That makes more sense to the overall design. The comments I gave
before were driving at a promise to never try
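Presumably: never try to inspect MMIO. A sketch of the attrs-based
check being discussed, with the extended signature assumed from this
thread rather than taken from the final patch:

#include <linux/dma-mapping.h>
#include <linux/kmsan.h>

void kmsan_handle_dma(phys_addr_t phys, size_t size,
                      enum dma_data_direction dir, unsigned long attrs)
{
        /* MMIO has no struct-page-backed shadow; never touch it */
        if (attrs & DMA_ATTR_MMIO)
                return;

        /* ... existing shadow handling on the phys range ... */
}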
On Wed, Aug 13, 2025 at 06:07:18PM +0300, Leon Romanovsky wrote:
> > > /* Helper function to handle DMA data transfers. */
> > > -void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
> > > +void kmsan_handle_dma(phys_addr_t phys, size_t size,
> > > enum dma_data_dir
On Sat, Aug 09, 2025 at 12:53:09PM -0400, Demi Marie Obenour wrote:
> > With a long term goal that struct page only exists for legacy code,
> > and is maybe entirely compiled out of modern server kernels.
>
> Why just server kernels? I suspect client systems actually run
> newer kernels than serv
On Fri, Aug 08, 2025 at 08:51:08PM +0200, Marek Szyprowski wrote:
> First - basing the API on the phys_addr_t.
>
> Page based API had the advantage that it was really hard to abuse it and
> call for something that is not 'a normal RAM'.
This is not true anymore. Today we have ZONE_DEVICE as a s
On Mon, Aug 04, 2025 at 03:42:34PM +0300, Leon Romanovsky wrote:
> Changelog:
> v1:
> * Added new DMA_ATTR_MMIO attribute to indicate
>PCI_P2PDMA_MAP_THRU_HOST_BRIDGE path.
> * Rewrote dma_map_* functions to use thus new attribute
> v0: https://lore.kernel.org/all/cover.1750854543.git.l...@ke
On Mon, Aug 04, 2025 at 03:42:50PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Block layer maps MMIO memory through dma_map_phys() interface
> with help of DMA_ATTR_MMIO attribute. There is a need to unmap
> that memory with the appropriate unmap function.
Be specific, AFAICT the i
On Mon, Aug 04, 2025 at 03:42:45PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
> that operate directly on physical addresses instead of page+offset
> parameters. This provides a more efficient interface for driv
>       case PCI_P2PDMA_MAP_NONE:
>               break;
>       case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
> -             attrs |= DMA_ATTR_SKIP_CPU_SYNC;
> +             attrs |= DMA_ATTR_MMIO;
>               pfns[idx] |= HMM_PFN_P2PDMA;
>               break;
Yeah, this is a lot cleaner
Reviewed-by: Jason Gunthorpe
Jason
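A usage sketch for the new pair, assuming dma_map_phys() and
dma_unmap_phys() mirror dma_map_page_attrs() with a phys_addr_t, as the
description above says:

#include <linux/dma-mapping.h>

static int example_dma(struct device *dev, phys_addr_t phys, size_t len)
{
        dma_addr_t dma = dma_map_phys(dev, phys, len, DMA_TO_DEVICE, 0);

        if (dma_mapping_error(dev, dma))
                return -ENOMEM;

        /* ... device performs the transfer ... */

        dma_unmap_phys(dev, dma, len, DMA_TO_DEVICE, 0);
        return 0;
}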
> which provides cleaner interfaces
> for drivers that already have access to physical addresses.
>
> Signed-off-by: Leon Romanovsky
> ---
> mm/hmm.c | 8
> 1 file changed, 4 insertions(+), 4 deletions(-)
Reviewed-by: Jason Gunthorpe
Maybe the next patch should be squished into here too if it is going
to be a full example
Jason
On Mon, Aug 04, 2025 at 03:42:41PM +0300, Leon Romanovsky wrote:
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -80,42 +80,54 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
>       arch_dma_mark_clean(paddr, size);
> }
>
> -static inline dma_add
On Mon, Aug 04, 2025 at 03:42:43PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Extend base DMA page API to handle MMIO flow.
I would mention here this follows the long ago agreement that we don't
need to enable P2P in the legacy dma_ops area. Simply failing when
getting an ATTR_MMIO
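I.e. something like this in the legacy-ops branch; a sketch of the
agreed behaviour, exact placement assumed:

        if (attrs & DMA_ATTR_MMIO)
                /* legacy dma_ops never get P2P/MMIO support */
                return DMA_MAPPING_ERROR;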
On Mon, Aug 04, 2025 at 03:42:42PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Convert the KMSAN DMA handling function from page-based to physical
> address-based interface.
>
> The refactoring renames kmsan_handle_dma() parameters from accepting
> (struct page *page, size_t offset
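At call sites the conversion is mechanical, e.g. (sketch):

        /* before: page + offset pair */
        kmsan_handle_dma(page, offset, size, dir);

        /* after: a single physical address */
        kmsan_handle_dma(page_to_phys(page) + offset, size, dir);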
On Mon, Aug 04, 2025 at 03:42:40PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> Combine iommu_dma_*map_phys with iommu_dma_*map_resource interfaces in
> order to allow single phys_addr_t flow.
Some later patch deletes iommu_dma_map_resource() ? Mention that plan here?
> --- a/drive
On Mon, Aug 04, 2025 at 03:42:39PM +0300, Leon Romanovsky wrote:
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 399838c17b705..11c5d5f8c0981 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1190,11 +1190,9 @@ static inline size_t iova_un
also absorb debug_dma_map_resource() into
here as well and we can have the caller of dma_map_resource() call
debug_dma_map_page with ATTR_MMIO?
If not, this looks OK
Reviewed-by: Jason Gunthorpe
Jason
> phys, size,
> - dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC);
> + prot, GFP_ATOMIC);
> }
Hmm, I missed this in prior series, ideally the GFP_ATOMIC should be
passed in as a gfp_t here so we can use GFP_KERNEL in callers that are
able.
Reviewed-by: Jason Gunthorpe
Jason
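A sketch of that suggestion: thread a gfp_t down instead of hardcoding
GFP_ATOMIC, since iommu_map() already accepts one. The wrapper shape
here is an assumption:

static int example_map_phys(struct iommu_domain *domain,
                            unsigned long iova, phys_addr_t phys,
                            size_t size, int prot, gfp_t gfp)
{
        /* sleepable callers can now pass GFP_KERNEL */
        return iommu_map(domain, iova, phys, size, prot, gfp);
}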
On Mon, Aug 04, 2025 at 03:42:35PM +0300, Leon Romanovsky wrote:
> From: Leon Romanovsky
>
> This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
> that reside in memory-mapped I/O (MMIO) regions, such as device BARs
> exposed through the host bridge, which are accessible for pee
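A sketch of the intended use: a driver mapping a peer device's BAR page
passes the new attribute so the core takes the MMIO path ('dev' and
'bar_phys' are placeholders):

        dma_addr_t dma = dma_map_phys(dev, bar_phys, PAGE_SIZE,
                                      DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);

        if (dma_mapping_error(dev, dma))
                return -EIO;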
On Mon, Jun 24, 2024 at 10:36:13AM -0700, Easwar Hariharan wrote:
> Hi Jason,
>
> On 6/24/2024 9:32 AM, Jason Gunthorpe wrote:
> > On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
> >>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>
On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
> >> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
> >> +{
> >> +  switch (cap) {
> >> +  case IOMMU_CAP_CACHE_COHERENCY:
> >> +          return true;
> >
> > Will the PV-IOMMU only ever be exposed on hardware where th
On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:
> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
> +{
> +     struct xen_iommu_domain *domain;
> +     u16 ctx_no;
> +     int ret;
> +
> +     if (type & IOMMU_DOMAIN_IDENTITY) {
> +             /* use default domain */
> +
On Tue, Jun 20, 2023 at 01:01:39PM -0700, Vishal Moola wrote:
> On Fri, Jun 16, 2023 at 5:38 AM Jason Gunthorpe wrote:
> >
> > On Mon, Jun 12, 2023 at 02:03:53PM -0700, Vishal Moola (Oracle) wrote:
> > > Currently, page table information is stored within struct page. As p
On Mon, Jun 12, 2023 at 02:03:53PM -0700, Vishal Moola (Oracle) wrote:
> Currently, page table information is stored within struct page. As part
> of simplifying struct page, create struct ptdesc for page table
> information.
>
> Signed-off-by: Vishal Moola (Oracle)
> ---
> include/linux/pgtable
On Mon, May 01, 2023 at 12:27:55PM -0700, Vishal Moola (Oracle) wrote:
> The MM subsystem is trying to shrink struct page. This patchset
> introduces a memory descriptor for page table tracking - struct ptdesc.
>
> This patchset introduces ptdesc, splits ptdesc from struct page, and
> converts man
On Fri, Aug 05, 2022 at 10:53:36AM -0500, Bjorn Helgaas wrote:
> On Fri, Aug 05, 2022 at 09:10:41AM -0300, Jason Gunthorpe wrote:
> > On Fri, Aug 05, 2022 at 12:03:15PM +0200, Josef Johansson wrote:
> > > On 2/14/22 11:07, Josef Johansson wrote:
> > > > From: Josef J
ask &&
> >        !desc.pci.msi_attrib.is_virtual;
> > -     if (!desc.pci.msi_attrib.can_mask) {
> > +     if (desc.pci.msi_attrib.can_mask) {
> >             addr = pci_msix_desc_addr(&desc);
> >             desc.pci.msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
> >     }
> >
> >
Reviewed-by: Jason Gunthorpe
Bjorn, please take it?
Jason
On Wed, Mar 23, 2022 at 05:49:43PM +0100, Michal Hocko wrote:
> > The bug here is that prior to commit a81461b0546c ("xen/gntdev: update
> > to new mmu_notifier semantic") wired the mn_invl_range_start() which
> > takes a mutex to invalidate_page, which is defined to run in an atomic
> > context.
>
On Wed, Mar 23, 2022 at 10:45:30AM +0100, Michal Hocko wrote:
> [Let me add more people to the CC list - I am not really sure who is the
> most familiar with all the tricks that mmu notifiers might do]
>
> On Wed 23-03-22 09:43:59, Juergen Gross wrote:
> > Hi,
> >
> > during analysis of a custom
On Thu, Feb 10, 2022 at 05:55:32PM -0600, Bjorn Helgaas wrote:
> > Commit 71020a3c0dff4 ("PCI/MSI: Use msi_add_msi_desc()") modifies
> > the logic of checking msi_attrib.can_mask, without any reason.
> >
> > This commits restores that logic.
>
> I agree, this looks like a typo in 71020a3c0dff
> ---
> V2: Handle the INTx case directly instead of trying to be overly smart - Marc
> ---
> drivers/pci/msi/msi.c | 25 +
> 1 file changed, 5 insertions(+), 20 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> 2 files changed, 38 insertions(+)
Reviewed-by: Jason Gunthorpe
Jason
c: "Cédric Le Goater"
> Cc: linuxppc-...@lists.ozlabs.org
>
> ---
> V2: Remove it completely - Cedric
> ---
> arch/powerpc/platforms/pseries/msi.c | 33 -
> 1 file changed, 8 insertions(+), 25 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
_enabled - Jason
> ---
> arch/powerpc/platforms/pseries/msi.c |3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
Reviewed-by: Jason Gunthorpe
> --- a/arch/powerpc/platforms/pseries/msi.c
> +++ b/arch/powerpc/platforms/pseries/msi.c
> @@ -448,8 +4
lists.ozlabs.org
> ---
> V3: Use pci_dev property - Jason
> V2: Invoke the function with the correct number of arguments - Andy
> ---
> arch/powerpc/platforms/cell/axon_msi.c |5 +
> 1 file changed, 1 insertion(+), 4 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
> kernel/irq/msi.c | 17 ++---
> 1 file changed, 2 insertions(+), 15 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
5 +
> 1 file changed, 1 insertion(+), 4 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
d.
> ---
> arch/x86/pci/xen.c |9 ++---
> 1 file changed, 2 insertions(+), 7 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
-by: Thomas Gleixner
> ---
> V3: New patch
> ---
> drivers/pci/msi/msi.c | 23 +--
> 1 file changed, 17 insertions(+), 6 deletions(-)
Reviewed-by: Jason Gunthorpe
Jason
On Mon, Dec 06, 2021 at 11:51:02PM +0100, Thomas Gleixner wrote:
> This is the third part of [PCI]MSI refactoring which aims to provide the
> ability of expanding MSI-X vectors after enabling MSI-X.
I read through this and didn't have any substantive remarks
Reviewed-by: Jason Gunthorpe
Jason
On Mon, Dec 06, 2021 at 11:51:05PM +0100, Thomas Gleixner wrote:
> +++ b/kernel/irq/msi.c
> @@ -127,12 +127,37 @@ int msi_setup_device_data(struct device
>               return -ENOMEM;
>
>       INIT_LIST_HEAD(&md->list);
> +     mutex_init(&md->mutex);
>       dev->msi.data = md;
> devres
On Mon, Dec 06, 2021 at 11:39:26PM +0100, Thomas Gleixner wrote:
> Store the properties which are interesting for various places so the MSI
> descriptor fiddling can be removed.
>
> Signed-off-by: Thomas Gleixner
> ---
> V2: Use the setter function
> ---
> drivers/pci/msi/msi.c |8
>
On Mon, Dec 06, 2021 at 11:39:33PM +0100, Thomas Gleixner wrote:
> @@ -209,10 +209,10 @@ static int setup_msi_msg_address(struct
>               return -ENODEV;
>       }
>
> -     entry = first_pci_msi_entry(dev);
> +     is_64bit = msi_device_has_property(&dev->dev, MSI_PROP_64BIT);
How about
On Mon, Dec 06, 2021 at 11:39:28PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> arch/x86/pci/xen.c |6 ++
> 1 file changed, 2 inser
On Mon, Dec 06, 2021 at 11:39:34PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> arch/powerpc/platforms/pseries/msi.c |4 ++--
> 1 file
On Mon, Dec 06, 2021 at 11:39:29PM +0100, Thomas Gleixner wrote:
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> ---
> arch/x86/kernel/apic/msi.c |5 +
> 1 file ch
> drivers/pci/msi/msi.c  |  2 +-
> drivers/pci/probe.c    |  4 +++-
> include/linux/device.h |  2 --
> include/linux/pci.h    |  1 +
> 5 files changed, 5 insertions(+), 5 deletions(-)
Reviewed-by: Jason Gunthorpe
> --- a/drivers/base/core.c
> +++ b/drivers/base/core.c
> @@ -2
n stuff all that well anymore, but I read
through all the patches and only noticed a small spello
[patch 02/22] PCI/MSI: Fix pci_irq_vector()/pci_irq_get_attinity()
ff
It all seems good, I especially like the splitting of msi.c and
removal of ops..
Reviewed-by: Jason Gunthorpe
Thanks,
Jason
On Fri, Nov 20, 2020 at 12:21:39PM -0600, Gustavo A. R. Silva wrote:
> IB/hfi1: Fix fall-through warnings for Clang
> IB/mlx4: Fix fall-through warnings for Clang
> IB/qedr: Fix fall-through warnings for Clang
> RDMA/mlx5: Fix fall-through warnings for Clang
I picked these four to the rdm
On Wed, Aug 26, 2020 at 01:16:28PM +0200, Thomas Gleixner wrote:
> This is the second version of providing a base to support device MSI (non
> PCI based) and on top of that support for IMS (Interrupt Message Storm)
> based devices in a halfways architecture independent way.
Hi Thomas,
Our test te
On Mon, Oct 19, 2020 at 12:42:15PM -0700, Nick Desaulniers wrote:
> On Sat, Oct 17, 2020 at 10:43 PM Greg KH wrote:
> >
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, t...@redhat.com wrote:
> > > From: Tom Rix
> > >
> > > This is a upcoming change to clean up a new warning treewide.
> > > I am won
On Wed, Sep 30, 2020 at 01:08:27PM +0000, Derrick, Jonathan wrote:
> +Megha
>
> On Wed, 2020-09-30 at 09:57 -0300, Jason Gunthorpe wrote:
> > On Wed, Sep 30, 2020 at 12:45:30PM +, Derrick, Jonathan wrote:
> > > Hi Jason
> > >
> > > On Mon, 2020-0
On Wed, Sep 30, 2020 at 12:45:30PM +0000, Derrick, Jonathan wrote:
> Hi Jason
>
> On Mon, 2020-08-31 at 11:39 -0300, Jason Gunthorpe wrote:
> > On Wed, Aug 26, 2020 at 01:16:52PM +0200, Thomas Gleixner wrote:
> > > From: Thomas Gleixner
> > >
> > > De
On Wed, Sep 30, 2020 at 08:41:48AM +0200, Thomas Gleixner wrote:
> On Tue, Sep 29 2020 at 16:03, Megha Dey wrote:
> > On 8/26/2020 4:16 AM, Thomas Gleixner wrote:
> >> #9 is obviously just for the folks interested in IMS
> >>
> >
> > I see that the tip tree (as of 9/29) has most of these patches bu
On Wed, Aug 26, 2020 at 01:17:14PM +0200, Thomas Gleixner wrote:
> + * ims_queue_info - Information to create an IMS queue domain
> + * @queue_lock:            Callback which informs the device driver that
> + *                 an interrupt management operation starts.
> + * @queue_sync_unlock:
On Wed, Aug 26, 2020 at 01:16:52PM +0200, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Devices on the VMD bus use their own MSI irq domain, but it is not
> distinguishable from regular PCI/MSI irq domains. This is required
> to exclude VMD devices from getting the irq domain pointer set by
On Fri, Aug 28, 2020 at 01:47:59PM +0100, Marc Zyngier wrote:
> > So the arch_setup_msi_irq/etc is not really an arch hook, but some
> > infrastructure to support those 4 PCI root port drivers.
>
> I happen to have a *really old* patch addressing Tegra [1], which
> I was never able to test (no HW
On Fri, Aug 28, 2020 at 12:21:42PM +0100, Lorenzo Pieralisi wrote:
> On Thu, Aug 27, 2020 at 01:20:40PM -0500, Bjorn Helgaas wrote:
>
> [...]
>
> > And I can't figure out what's special about tegra, rcar, and xilinx
> > that makes them need it as well. Is there something I could grep for
> > to
On Sat, Aug 22, 2020 at 03:34:45AM +0200, Thomas Gleixner wrote:
> >> One question is whether the device can see partial updates to that
> >> memory due to the async 'swap' of context from the device CPU.
> >
> > It is worse than just partial updates.. The device operation is much
> > more like you
On Sat, Aug 22, 2020 at 01:47:12AM +0200, Thomas Gleixner wrote:
> On Fri, Aug 21 2020 at 17:17, Jason Gunthorpe wrote:
> > On Fri, Aug 21, 2020 at 09:47:43PM +0200, Thomas Gleixner wrote:
> >> So if I understand correctly then the queue memory where the MSI
> >>
On Fri, Aug 21, 2020 at 09:47:43PM +0200, Thomas Gleixner wrote:
> On Fri, Aug 21 2020 at 09:45, Jason Gunthorpe wrote:
> > On Fri, Aug 21, 2020 at 02:25:02AM +0200, Thomas Gleixner wrote:
> >> +static void ims_mask_irq(struct irq_data *data)
> >> +{
&g
On Fri, Aug 21, 2020 at 02:25:02AM +0200, Thomas Gleixner wrote:
> +static void ims_mask_irq(struct irq_data *data)
> +{
> +     struct msi_desc *desc = irq_data_get_msi_desc(data);
> +     struct ims_array_slot __iomem *slot = desc->device_msi.priv_iomem;
> +     u32 __iomem *ctrl = &slot->ctrl;
> +
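The quote truncates here; presumably the function finishes by setting a
mask bit in the slot's ctrl word, roughly like this (the bit name is
assumed, not taken from the patch):

        iowrite32(ioread32(ctrl) | IMS_VECTOR_CTRL_MASK, ctrl);
}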
s looks like it for the ptemod miss, thanks
Reviewed-by: Jason Gunthorpe
Jason
On Fri, Nov 22, 2019 at 04:54:08PM -0800, Ralph Campbell wrote:
> Actually, I think you can remove the "need_wake" variable since it is
> unconditionally set to "true".
Oh, yes, thank you. An earlier revision had a different control flow
> Also, the comment in __mmu_interval_notifier_insert() sa
On Wed, Nov 13, 2019 at 05:59:52AM -0800, Christoph Hellwig wrote:
> > +int mmu_interval_notifier_insert(struct mmu_interval_notifier *mni,
> > + struct mm_struct *mm, unsigned long start,
> > + unsigned long length,
> > +
From: Jason Gunthorpe
gntdev simply wants to monitor a specific VMA for any notifier events,
this can be done straightforwardly using mmu_interval_notifier_insert()
over the VMA's VA range.
The notifier should be attached until the original VMA is destroyed.
It is unclear if any of th
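Matching the mmu_interval_notifier_insert() signature quoted up-thread,
the gntdev attach would look roughly like this (the gntdev-side names
are illustrative):

        ret = mmu_interval_notifier_insert(&map->notifier, vma->vm_mm,
                                           vma->vm_start,
                                           vma->vm_end - vma->vm_start,
                                           &gntdev_mmu_ops);
        if (ret)
                return ret;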
From: Jason Gunthorpe
Replace the internal interval tree based mmu notifier with the new common
mmu_interval_notifier_insert() API. This removes a lot of code and fixes a
deadlock that can be triggered in ODP:
   zap_page_range()
     mmu_notifier_invalidate_range_start
From: Jason Gunthorpe
Remove the interval tree in the driver and rely on the tree maintained by
the mmu_notifier for delivering mmu_notifier invalidation callbacks.
For some reason amdgpu has a very complicated arrangement where it tries
to prevent duplicate entries in the interval_tree, this
From: Jason Gunthorpe
Only the function calls are stubbed out with static inlines that always
fail. This is the standard way to write a header for an optional component
and makes it easier for drivers that only optionally need HMM_MIRROR.
Reviewed-by: Jérôme Glisse
Tested-by: Ralph Campbell
From: Jason Gunthorpe
Of the 13 users of mmu_notifiers, 8 of them use only
invalidate_range_start/end() and immediately intersect the
mmu_notifier_range with some kind of internal list of VAs. 4 use an
interval tree (i915_gem, radeon_mn, umem_odp, hfi1). 4 use a linked list
of some kind
From: Jason Gunthorpe
Convert the collision-retry lock around hmm_range_fault to use the one now
provided by the mmu_interval notifier.
Although this driver does not seem to use the collision retry lock that
hmm provides correctly, it can still be converted over to use the
mmu_interval_notifier
From: Jason Gunthorpe
Remove the hmm_mirror object and use the mmu_interval_notifier API instead
for the range, and use the normal mmu_notifier API for the general
invalidation callback.
While here re-organize the pagefault path so the locking pattern is clear.
nouveau is the only driver that
From: Jason Gunthorpe
The only two users of this are now converted to use mmu_interval_notifier,
delete all the code and update hmm.rst.
Reviewed-by: Jérôme Glisse
Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
---
Documentation/vm/hmm.rst | 105 ---
include/linux
From: Jason Gunthorpe
find_vma() must be called under the mmap_sem, reorganize this code to
do the vma check after entering the lock.
Further, fix the unlocked use of struct task_struct's mm, instead use
the mm from hmm_mirror which has an active mm_grab. Also the mm_grab
must be converted
From: Jason Gunthorpe
This converts one of the two users of mmu_notifiers to use the new API.
The conversion is fairly straightforward, however the existing use of
notifiers here seems to be racey.
Tested-by: Dennis Dalessandro
Signed-off-by: Jason Gunthorpe
---
drivers/infiniband/hw/hfi1
From: Jason Gunthorpe
There is no reason to get the invalidate_range_start() callback via an
indirection through hmm_mirror, just register a normal notifier directly.
Tested-by: Ralph Campbell
Signed-off-by: Jason Gunthorpe
---
drivers/gpu/drm/nouveau/nouveau_svm.c | 95
From: Jason Gunthorpe
The new API is an exact match for the needs of radeon.
For some reason radeon tries to remove overlapping ranges from the
interval tree, but interval trees (and mmu_interval_notifier_insert())
support overlapping ranges directly. Simply delete all this code.
Since this
From: Jason Gunthorpe
Now that we have KERNEL_HEADER_TEST all headers are generally compile
tested, so relying on makefile tricks to avoid compiling code that depends
on CONFIG_MMU_NOTIFIER is more annoying.
Instead follow the usual pattern and provide most of the header with only
the functions
From: Jason Gunthorpe
hmm_mirror's handling of ranges does not use a sequence count which
results in this bug:
     CPU0                                   CPU1
     hmm_range_wait_until_valid(range)
         valid ==
From: Jason Gunthorpe
8 of the mmu_notifier using drivers (i915_gem, radeon_mn, umem_odp, hfi1,
scif_dma, vhost, gntdev, hmm) drivers are using a common pattern where
they only use invalidate_range_start/end and immediately check the
invalidating range against some driver data structure to tell
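The shared pattern, sketched; the types and names are an illustrative
composite, not any single driver:

#include <linux/mmu_notifier.h>

struct example_obj { struct list_head head; unsigned long start, end; };
struct example_mn  { struct mmu_notifier mn; struct list_head objects; };

static void example_invalidate(struct example_obj *obj)
{
        /* driver-specific teardown for the overlapped object */
}

static int example_invalidate_range_start(struct mmu_notifier *mn,
                const struct mmu_notifier_range *range)
{
        struct example_mn *emn = container_of(mn, struct example_mn, mn);
        struct example_obj *obj;

        /* intersect the invalidating range with the driver's own list */
        list_for_each_entry(obj, &emn->objects, head)
                if (obj->start < range->end && obj->end > range->start)
                        example_invalidate(obj);
        return 0;
}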
On Thu, Nov 07, 2019 at 09:00:34PM -0500, Jerome Glisse wrote:
> On Fri, Nov 08, 2019 at 12:32:25AM +0000, Jason Gunthorpe wrote:
> > On Thu, Nov 07, 2019 at 04:04:08PM -0500, Jerome Glisse wrote:
> > > On Thu, Nov 07, 2019 at 08:11:06PM +0000, Jason Gunthorpe wrote:
> >
On Thu, Nov 07, 2019 at 12:53:56PM -0800, John Hubbard wrote:
> > > > +/**
> > > > + * struct mmu_range_notifier_ops
> > > > + * @invalidate: Upon return the caller must stop using any SPTEs
> > > > within this
> > > > + * range, this function can sleep. Return false if
> > > > block
On Thu, Nov 07, 2019 at 05:54:52PM -0500, Boris Ostrovsky wrote:
> On 11/7/19 3:36 PM, Jason Gunthorpe wrote:
> > On Tue, Nov 05, 2019 at 10:16:46AM -0500, Boris Ostrovsky wrote:
> >
> >>> So, I suppose it can be relaxed to a null test and a WARN_ON that it
> >
On Thu, Nov 07, 2019 at 04:04:08PM -0500, Jerome Glisse wrote:
> On Thu, Nov 07, 2019 at 08:11:06PM +0000, Jason Gunthorpe wrote:
> > On Wed, Nov 06, 2019 at 09:08:07PM -0500, Jerome Glisse wrote:
> >
> > > >
> > > > Extra credit: IMHO, t