On Fri, Sep 05, 2025 at 06:20:51PM +0200, Marek Szyprowski wrote:
> On 29.08.2025 15:16, Jason Gunthorpe wrote:
> > On Tue, Aug 19, 2025 at 08:36:44PM +0300, Leon Romanovsky wrote:
> >
> >> This series does the core code and modern flows. A followup series
> >> wi
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take the
MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6
From: Leon Romanovsky
Make dma_map_page_attrs() and dma_unmap_page_attrs() respect
DMA_ATTR_MMIO.
DMA_ATTR_MMIO makes the functions behave the same as
dma_(un)map_resource():
- No swiotlb is possible
- Legacy dma_ops arches use ops->map_resource()
- No kmsan
- No arch_dma_map_phys_dir
From: Leon Romanovsky
Make iommu_dma_map_phys() and iommu_dma_unmap_phys() respect
DMA_ATTR_MMIO.
DMA_ATTR_MMIO makes the functions behave the same as
iommu_dma_(un)map_resource():
- No swiotlb is possible
- No cache flushing is done (DMA_ATTR_MMIO memory should not be cacheable)
- prot for
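A minimal sketch of how the IOMMU protection bits described above could be chosen, assuming the internal dma_info_to_prot() helper of drivers/iommu/dma-iommu.c; the wrapper name mmio_aware_prot() is made up for illustration and this is not the actual patch:

/* Inside drivers/iommu/dma-iommu.c (sketch): pick the IOMMU prot for a mapping. */
static int mmio_aware_prot(enum dma_data_direction dir, bool coherent,
			   unsigned long attrs)
{
	int prot = dma_info_to_prot(dir, coherent, attrs);

	if (attrs & DMA_ATTR_MMIO)
		prot |= IOMMU_MMIO;	/* device memory: no CPU cache flushing, no swiotlb */

	return prot;
}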
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the Xen driver.
Reviewed-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed
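For reference, a sketch of what the open-coded direct resource mapping could look like in drivers/xen/swiotlb-xen.c, assuming it mirrors the current generic dma_direct_map_resource() (identity DMA address plus a dma_capable() check); the function name is hypothetical:

#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>

/* Sketch only, not the actual Xen patch: the DMA address equals the
 * physical address, rejected when the device cannot reach it. */
static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev,
						  phys_addr_t paddr,
						  size_t size)
{
	dma_addr_t dma_addr = paddr;

	if (unlikely(!dma_capable(dev, dma_addr, size, false)))
		return DMA_MAPPING_ERROR;

	return dma_addr;
}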
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
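A hypothetical caller of the new pair, assuming dma_map_phys()/dma_unmap_phys() take (dev, phys, size, dir, attrs) as described in this series; the surrounding function is made up for illustration:

static int send_buffer(struct device *dev, phys_addr_t phys, size_t len)
{
	dma_addr_t dma;

	dma = dma_map_phys(dev, phys, len, DMA_TO_DEVICE, 0);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/* ... program "dma" into the hardware and wait for completion ... */

	dma_unmap_phys(dev, dma, len, DMA_TO_DEVICE, 0);
	return 0;
}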
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping as a preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a
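The new prototype, as quoted from the patch further down this thread; the hunk there is cut off, so the trailing "unsigned long attrs" is completed by analogy with the page-based variant and is an assumption:

void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
			int direction, dma_addr_t dma_addr,
			unsigned long attrs);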
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag set, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it. So let's
use dma_map_phys() directly.
Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 inser
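Roughly the before/after shape of the two-line block/blk-mq-dma.c change; the helper names and variables below are illustrative, not the exact hunk:

/* Before (paraphrased): go through struct page just to map. */
static dma_addr_t map_vec_old(struct device *dma_dev, phys_addr_t paddr,
			      size_t len, enum dma_data_direction dir)
{
	return dma_map_page(dma_dev, pfn_to_page(PHYS_PFN(paddr)),
			    offset_in_page(paddr), len, dir);
}

/* After (paraphrased): map the physical address directly. */
static dma_addr_t map_vec_new(struct device *dma_dev, phys_addr_t paddr,
			      size_t len, enum dma_data_direction dir)
{
	return dma_map_phys(dma_dev, paddr, len, dir, 0);
}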
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters
From: Leon Romanovsky
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function, something which
wasn't possible before adding the new REQ attribute to the block layer in the
previous
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring renames kmsan_handle_dma() parameters from accepting
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). The existing semantics
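For clarity, the prototype change being described; the existing declaration is from include/linux/kmsan.h, and keeping the direction argument in the new form is an assumption since the text above is cut off:

/* Existing: */
void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
		      enum dma_data_direction dir);

/* Converted (per the description above): */
void kmsan_handle_dma(phys_addr_t phys, size_t size,
		      enum dma_data_direction dir);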
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to trace_dma_*map_phys().
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c
From: Leon Romanovsky
This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.
Also correct the incorrect caching attribute for the IOMMU: MMIO
memory should not be cacheable inside the IOMMU mapping, or it can
possibly create system problems
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
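The rename amounts to the following prototype change (parameter lists inferred from the existing drivers/iommu/dma-iommu.c code and the description above; treat it as a sketch):

/* Before: */
dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
			      unsigned long offset, size_t size,
			      enum dma_data_direction dir, unsigned long attrs);

/* After: same logic, but on the physical address it already computed. */
dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys,
			      size_t size, enum dma_data_direction dir,
			      unsigned long attrs);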
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting
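A hypothetical P2P user of the attribute, assuming dma_map_phys() takes (dev, phys, size, dir, attrs); bar_phys stands for the physical address of a peer device BAR and the wrapper is made up for illustration:

/* Map a peer device's BAR for P2P DMA instead of (mis)using dma_map_resource(). */
static dma_addr_t map_peer_bar(struct device *dev, phys_addr_t bar_phys,
			       size_t size)
{
	return dma_map_phys(dev, bar_phys, size, DMA_BIDIRECTIONAL,
			    DMA_ATTR_MMIO);
}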
works. This is intended to
replace the incorrect driver use of dma_map_resource() on PCI BAR
addresses.
This series does the core code and modern flows. A followup series
will give the same treatment to the legacy dma_ops implementation.
Thanks
Leon Romanovsky (16):
dma-mapping: introduce new D
On Mon, Sep 01, 2025 at 07:23:02PM -0300, Jason Gunthorpe wrote:
> On Mon, Sep 01, 2025 at 11:47:59PM +0200, Marek Szyprowski wrote:
> > I would like to give those patches a try in linux-next, but in meantime
> > I tested it on my test farm and found a regression in dma_map_resource()
> > handlin
On Thu, Aug 28, 2025 at 12:17:30PM -0300, Jason Gunthorpe wrote:
> On Tue, Aug 19, 2025 at 08:36:53PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky
> >
> > Extend base DMA page API to handle MMIO flow and follow
> > existing dma_map_resource() implementatio
On Thu, Aug 28, 2025 at 09:19:20AM -0600, Keith Busch wrote:
> On Tue, Aug 19, 2025 at 08:36:59PM +0300, Leon Romanovsky wrote:
> > diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
> > index 09b99d52fd36..283058bcb5b1 100644
> > --- a/include/linux/blk_type
On Tue, Aug 19, 2025 at 08:36:44PM +0300, Leon Romanovsky wrote:
> Changelog:
> v4:
> * Fixed kbuild error with mismatch in kmsan function declaration due to
>   rebase error.
> v3: https://lore.kernel.org/all/cover.1755193625.git.l...@kernel.org
> * Fixed typo i
On Tue, Aug 19, 2025, at 20:20, Keith Busch wrote:
> On Tue, Aug 19, 2025 at 08:36:58PM +0300, Leon Romanovsky wrote:
>> static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
>> struct blk_dma_iter *iter, struct phys_vec *vec)
>>
From: Leon Romanovsky
Extend the base DMA page API to handle the MMIO flow and follow the
existing dma_map_resource() implementation by relying on dma_map_direct()
only to take the DMA direct path.
Signed-off-by: Leon Romanovsky
---
kernel/dma/mapping.c | 26 +-
1 file changed, 21
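A sketch of the control flow this commit message describes for kernel/dma/mapping.c: with DMA_ATTR_MMIO the resource-style path is taken (dma_map_direct() selects the direct code, otherwise ops->map_resource()), while the regular kernel-RAM path is elided. This is an assumption of shape, not the actual hunk:

dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
			enum dma_data_direction dir, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);

	if (attrs & DMA_ATTR_MMIO) {
		/* MMIO: no swiotlb, no kmsan, no CPU cache maintenance. */
		if (dma_map_direct(dev, ops))
			return dma_direct_map_resource(dev, phys, size,
						       dir, attrs);
		if (!ops->map_resource)
			return DMA_MAPPING_ERROR;
		return ops->map_resource(dev, phys, size, dir, attrs);
	}

	/* Non-MMIO: the existing page path (swiotlb, kmsan, cache sync),
	 * elided in this sketch. */
	return DMA_MAPPING_ERROR;
}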
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring renames kmsan_handle_dma() parameters from accepting
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). The existing semantics
From: Leon Romanovsky
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function, something which
wasn't possible before adding the new REQ attribute to the block layer in the
previous
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag set, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the Xen driver.
Reviewed-by: Juergen Gross
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed, 20 insertions(+), 1 deletion
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it. So let's
use dma_map_phys() directly.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take the
MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6 +-
include/linux
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to trace_dma_*map_phys().
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c | 4 ++--
2 files changed, 4 in
From: Leon Romanovsky
Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces in
order to allow a single phys_addr_t flow.
In the following patches, iommu_dma_map_resource() will be removed
in favour of the iommu_dma_map_phys(..., attrs | DMA_ATTR_MMIO) flow.
Signed-off-by: Leon
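The end state named above, as a sketch: iommu_dma_map_resource() becomes a thin wrapper that forces the MMIO attribute onto the single phys_addr_t-based implementation (exact placement and linkage are assumptions):

dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
				  size_t size, enum dma_data_direction dir,
				  unsigned long attrs)
{
	return iommu_dma_map_phys(dev, phys, size, dir, attrs | DMA_ATTR_MMIO);
}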
From: Leon Romanovsky
This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.
Also correct the incorrect caching attribute for the IOMMU: MMIO
memory should not be cacheable inside the IOMMU mapping, or it can
possibly create system problems
map PCI P2P MMIO without creating struct page. The
VFIO DMABUF series demonstrates how this works. This is intended to
replace the incorrect driver use of dma_map_resource() on PCI BAR
addresses.
This series does the core code and modern flows. A followup series
will give the same treatment to
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping as a preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a
On Thu, Aug 14, 2025, at 22:05, Christophe Leroy wrote:
> Le 14/08/2025 à 19:53, Leon Romanovsky a écrit :
>> Changelog:
>> v3:
>> * Fixed typo in "cacheable" word
>> * Simplified kmsan patch a lot to be simple argument refactoring
>
> v2 sent toda
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters
From: Leon Romanovsky
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function, something which
wasn't possible before adding the new REQ attribute to the block layer in the
previous
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take the
MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6 +-
include/linux
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag set, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
From: Leon Romanovsky
Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces in
order to allow a single phys_addr_t flow.
In the following patches, iommu_dma_map_resource() will be removed
in favour of the iommu_dma_map_phys(..., attrs | DMA_ATTR_MMIO) flow.
Signed-off-by: Leon
From: Leon Romanovsky
Extend the base DMA page API to handle the MMIO flow and follow the
existing dma_map_resource() implementation by relying on dma_map_direct()
only to take the DMA direct path.
Signed-off-by: Leon Romanovsky
---
kernel/dma/mapping.c | 26 +-
1 file changed, 21
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it. So let's
use dma_map_phys() directly.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the Xen driver.
Reviewed-by: Juergen Gross
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed, 20 insertions(+), 1 deletion
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring renames kmsan_handle_dma() parameters from accepting
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). The existing semantics
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping as a preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a
dma_map_resource() on PCI BAR
addresses.
This series does the core code and modern flows. A followup series
will give the same treatment to the legacy dma_ops implementation.
Thanks
Leon Romanovsky (16):
dma-mapping: introduce new DMA attribute to indicate MMIO memory
iommu/dma: implement DMA_ATTR_MM
From: Leon Romanovsky
This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.
Also correct the incorrect caching attribute for the IOMMU: MMIO
memory should not be cacheable inside the IOMMU mapping, or it can
possibly create system problems
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to trace_dma_*map_phys().
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c | 4 ++--
2 files changed, 4 in
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting
On Thu, Aug 14, 2025 at 10:37:22AM -0700, Randy Dunlap wrote:
> Hi Leon,
>
> On 8/14/25 3:13 AM, Leon Romanovsky wrote:
> > diff --git a/Documentation/core-api/dma-attributes.rst
> > b/Documentation/core-api/dma-attributes.rst
> > index 1887d92e8e92..58a1528a9bb9 100
On Thu, Aug 14, 2025 at 09:44:48AM -0300, Jason Gunthorpe wrote:
> On Thu, Aug 14, 2025 at 03:35:06PM +0300, Leon Romanovsky wrote:
> > > Then check attrs here, not pfn_valid.
> >
> > attrs are not available in kmsan_handle_dma(). I can add it if you prefer.
>
>
On Thu, Aug 14, 2025 at 09:13:16AM -0300, Jason Gunthorpe wrote:
> On Wed, Aug 13, 2025 at 06:07:18PM +0300, Leon Romanovsky wrote:
> > > > /* Helper function to handle DMA data transfers. */
> > > > -void kmsan_handle_dma(struct page *page, size_t offset,
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the Xen driver.
Reviewed-by: Juergen Gross
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed, 20 insertions(+), 1 deletion
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag set, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it. So let's
use dma_map_phys() directly.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
From: Leon Romanovsky
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function, something which
wasn't possible before adding the new REQ attribute to the block layer in the
previous
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take the
MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6 +-
include/linux
From: Leon Romanovsky
Extend the base DMA page API to handle the MMIO flow and follow the
existing dma_map_resource() implementation by relying on dma_map_direct()
only to take the DMA direct path.
Signed-off-by: Leon Romanovsky
---
kernel/dma/mapping.c | 24
1 file changed, 20
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping as a preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
struct page. The
VFIO DMABUF series demonstrates how this works. This is intended to
replace the incorrect driver use of dma_map_resource() on PCI BAR
addresses.
This series does the core code and modern flows. A followup series
will give the same treatment to the legacy dma_ops implementation.
Thank
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring renames kmsan_handle_dma() parameters from accepting
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). A PFN_VALID check is added
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting
From: Leon Romanovsky
Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces in
order to allow a single phys_addr_t flow.
In the following patches, iommu_dma_map_resource() will be removed
in favour of the iommu_dma_map_phys(..., attrs | DMA_ATTR_MMIO) flow.
Signed-off-by: Leon
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to trace_dma_*map_phys().
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c | 4 ++--
2 files changed, 4 in
From: Leon Romanovsky
This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.
Also correct the incorrect caching attribute for the IOMMU: MMIO
memory should not be cacheable inside the IOMMU mapping, or it can
possibly create system problems
On Thu, Aug 07, 2025 at 10:45:33AM -0300, Jason Gunthorpe wrote:
> On Mon, Aug 04, 2025 at 03:42:50PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky
> >
> > Block layer maps MMIO memory through dma_map_phys() interface
> > with help of DMA_ATTR_MMIO attribut
On Thu, Aug 07, 2025 at 09:21:15AM -0300, Jason Gunthorpe wrote:
> On Mon, Aug 04, 2025 at 03:42:42PM +0300, Leon Romanovsky wrote:
> > From: Leon Romanovsky
> >
> > Convert the KMSAN DMA handling function from page-based to physical
> > address-based interface.
>
On Wed, Aug 06, 2025 at 03:26:30PM -0300, Jason Gunthorpe wrote:
> On Mon, Aug 04, 2025 at 03:42:37PM +0300, Leon Romanovsky wrote:
> > +void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
> > + int direction, dma_addr_t dma_addr, unsi
From: Leon Romanovsky
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. There is a need to unmap
that memory with the appropriate unmap function.
Signed-off-by: Leon Romanovsky
---
drivers/nvme/host/pci.c | 18 +-
1 file
From: Leon Romanovsky
Make sure that the CPU is not synced and the IOMMU is configured to take the
MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 13 +++--
include/linux/blk-mq-dma.h | 6 +-
include/linux
From: Leon Romanovsky
In case a peer-to-peer transaction traverses the host bridge,
the IOMMU needs to have the IOMMU_MMIO flag set, together with skipping
the CPU sync.
The latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag,
but the IOMMU flag was missed, due to the assumption that such memory
can be
From: Leon Romanovsky
Convert the KMSAN DMA handling function from page-based to physical
address-based interface.
The refactoring renames kmsan_handle_dma() parameters from accepting
(struct page *page, size_t offset, size_t size) to (phys_addr_t phys,
size_t size). A PFN_VALID check is added
From: Leon Romanovsky
Extend the base DMA page API to handle the MMIO flow.
Signed-off-by: Leon Romanovsky
---
kernel/dma/mapping.c | 24
1 file changed, 20 insertions(+), 4 deletions(-)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 709405d46b2b4
From: Leon Romanovsky
Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters
From: Leon Romanovsky
After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it. So let's
use dma_map_phys() directly.
Signed-off-by: Leon Romanovsky
---
block/blk-mq-dma.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
From: Leon Romanovsky
Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses
From: Leon Romanovsky
Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys()
that operate directly on physical addresses instead of page+offset
parameters. This provides a more efficient interface for drivers that
already have physical addresses available.
The new functions are
From: Leon Romanovsky
General dma_direct_map_resource() is going to be removed
in the next patch, so simply open-code it in the Xen driver.
Signed-off-by: Leon Romanovsky
---
drivers/xen/swiotlb-xen.c | 21 -
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/drivers
From: Leon Romanovsky
Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary
From: Leon Romanovsky
Convert the DMA debug infrastructure from page-based to physical address-based
mapping as a preparation for relying on physical addresses in the DMA mapping routines.
The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a
From: Leon Romanovsky
As a preparation for the following map_page -> map_phys API conversion,
let's rename trace_dma_*map_page() to trace_dma_*map_phys().
Signed-off-by: Leon Romanovsky
---
include/trace/events/dma.h | 4 ++--
kernel/dma/mapping.c | 4 ++--
2 files changed, 4 in
symbol backward compatibility by keeping
the old page-based API as wrapper functions around the new physical
address-based implementations.
Thanks
Leon Romanovsky (16):
dma-mapping: introduce new DMA attribute to indicate MMIO memory
iommu/dma: handle MMIO path in dma_iova_link
dma-debug
From: Leon Romanovsky
Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces in
order to allow a single phys_addr_t flow.
Signed-off-by: Leon Romanovsky
---
drivers/iommu/dma-iommu.c | 20
1 file changed, 16 insertions(+), 4 deletions(-)
diff --git a/drivers
From: Leon Romanovsky
This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.
This attribute is especially useful for exporting
From: Leon Romanovsky
Make sure that the CPU is not synced if the MMIO path is taken.
Signed-off-by: Leon Romanovsky
---
drivers/iommu/dma-iommu.c | 21 -
1 file changed, 16 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index
On Mon, Mar 22, 2021 at 08:01:17AM +0100, Jürgen Groß wrote:
> On 22.03.21 07:48, Leon Romanovsky wrote:
> > On Mon, Mar 22, 2021 at 06:58:34AM +0100, Jürgen Groß wrote:
> > > On 22.03.21 06:39, Leon Romanovsky wrote:
> > > > On Sun, Mar 21, 2021 at 06:54:
On Mon, Mar 22, 2021 at 06:58:34AM +0100, Jürgen Groß wrote:
> On 22.03.21 06:39, Leon Romanovsky wrote:
> > On Sun, Mar 21, 2021 at 06:54:52PM +0100, Hsu, Chiahao wrote:
> > >
> >
> > <...>
> >
> > > > > Typically there should be