Looks good:
Reviewed-by: Christoph Hellwig
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Looks good:
Reviewed-by: Christoph Hellwig
On 2022/4/4 15:52, Muhammad Usama Anjum wrote:
Any thoughts?
It looks good to me. I will queue it for v5.19.
Best regards,
baolu
On 3/13/22 8:03 PM, Muhammad Usama Anjum wrote:
dev_iommu_priv_get() is being used at the top of this function which
dereferences dev. Dev cannot be NULL after
If the IOMMU is in use and an untrusted device is connected to an
external-facing port, a DMA request whose address isn't page aligned will
cause the kernel to attempt to use bounce buffers.
If for some reason the bounce buffers have not been allocated, this is a
problem that should be made apparent
It's been observed that when plugging a TBT3 NVMe device into a port marked
with ExternalFacingPort, some DMA transactions occur that are not a
full page, and so the DMA API attempts to use software bounce buffers
instead of relying upon the IOMMU translation.
This doesn't work and leads to
Previously the AMD IOMMU would only enable SWIOTLB in certain
circumstances:
* IOMMU in passthrough mode
* SME enabled
This logic, however, doesn't work when an untrusted device that doesn't
do page-aligned DMA transactions is plugged in. The expectation is
that a bounce buffer is used for those
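The decision being described can be sketched roughly as follows. This is a toy model, not the kernel's actual API: the names `needs_bounce` and `IO_PAGE_SIZE` are illustrative. The point is that an untrusted device doing non-page-aligned DMA must be bounced, so swiotlb memory has to exist even when the IOMMU is translating.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the bounce decision; names are illustrative, not the
 * kernel's actual API. An untrusted device doing non-page-aligned DMA
 * must be bounced through swiotlb, so swiotlb memory has to exist
 * even when the IOMMU is active. */
#define IO_PAGE_SIZE 4096u

struct dma_req {
	bool untrusted;		/* device behind an ExternalFacingPort */
	unsigned long addr;
	unsigned long len;
};

static bool needs_bounce(const struct dma_req *r)
{
	bool aligned = (r->addr % IO_PAGE_SIZE == 0) &&
		       (r->len % IO_PAGE_SIZE == 0);
	return r->untrusted && !aligned;
}
```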
On Mon, Apr 04, 2022 at 05:05:00PM +, Limonciello, Mario wrote:
> I do expect that solves it as well. The reason I submitted the way I
> did is that there seemed to be a strong affinity for having swiotlb
> disabled when IOMMU is enabled on AMD IOMMU. The original code that
> disabled
On Mon, Apr 04, 2022 at 01:43:49PM +0800, Lu Baolu wrote:
> On 2022/3/30 19:58, Jason Gunthorpe wrote:
> > > > Testing the group size is inherently the wrong test to make.
> > > What is your suggestion then?
> > Add a flag to the group that positively indicates the group can never
> > have more
> On Mon, Apr 04, 2022 at 11:47:05AM -0500, Mario Limonciello wrote:
> > The bounce buffers were originally set up, but torn down during
> > the boot process.
> > * This happens because as part of IOMMU initialization
> > `amd_iommu_init_dma_ops` gets called and resets
From: Christoph Hellwig
Sent: Sunday, April 3, 2022 10:06 PM
>
> Pass a bool to pass if swiotlb needs to be enabled based on the
Wording problems. I'm not sure what you meant to say.
> addressing needs and replace the verbose argument with a set of
> flags, including one to force enable
On Mon, Apr 04, 2022 at 11:47:05AM -0500, Mario Limonciello wrote:
> The bounce buffers were originally set up, but torn down during
> the boot process.
> * This happens because as part of IOMMU initialization
> `amd_iommu_init_dma_ops` gets called and resets the global swiotlb to 0.
> * When
The helper function `dev_use_swiotlb` is used for various decision
making points for how to handle DMA mapping requests.
If the kernel doesn't have any memory allocated for swiotlb to use, then
an untrusted device being connected to the system may fail to initialize
when a request is made.
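The gating plus the failure mode described above can be modeled in a few lines. This is an illustrative sketch, not the kernel's actual `dev_use_swiotlb()` implementation: untrusted devices are steered to swiotlb, and if no swiotlb pool memory was ever allocated, the mapping request has to fail rather than silently skip bouncing.

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* Minimal model (illustrative, not the kernel's code): untrusted
 * devices are steered to swiotlb. */
static bool dev_use_swiotlb(bool swiotlb_configured, bool dev_untrusted)
{
	return swiotlb_configured && dev_untrusted;
}

/* If bouncing is required but no pool exists, the request must fail. */
static int iommu_map_request(bool swiotlb_configured, bool dev_untrusted,
			     size_t swiotlb_pool_bytes)
{
	if (dev_use_swiotlb(swiotlb_configured, dev_untrusted) &&
	    swiotlb_pool_bytes == 0)
		return -1;	/* stands in for a DMA_MAPPING_ERROR path */
	return 0;
}
```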
To
If the IOMMU is in use and an untrusted device is connected to an
external-facing port, a DMA request whose address isn't page aligned will
cause the kernel to attempt to use bounce buffers.
If the bounce buffers have not been allocated, however, this leads
to messages like this:
swiotlb buffer is
It's been observed that when plugging a TBT3 NVMe device into a port marked
with ExternalFacingPort, some DMA transactions occur that are not a
full page, and so the DMA API attempts to use software bounce buffers
instead of relying upon the IOMMU translation.
This doesn't work and leads to
Hi Christoph,
On Mon, Apr 04, 2022 at 05:05:56AM +, Christoph Hellwig wrote:
> From: Christoph Hellwig
> Subject: [PATCH 12/15] swiotlb: provide swiotlb_init variants that remap
> the buffer
>
> To share more code between swiotlb and xen-swiotlb, offer a
> swiotlb_init_remap interface and
From: Jon Nettleton
Check if there is any RMR info associated with the devices behind
the SMMU and if any, install bypass SMRs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMU during probe().
Signed-off-by: Jon Nettleton
Signed-off-by:
Check if there is any RMR info associated with the devices behind
the SMMUv3 and if any, install bypass STEs for them. This is to
keep any ongoing traffic associated with these devices alive
when we enable/reset SMMUv3 during probe().
Signed-off-by: Shameer Kolothum
---
By default, the disable_bypass flag is set and any dev without
an iommu domain installs an STE with CFG_ABORT during
arm_smmu_init_bypass_stes(). Introduce a "force" flag and
move the STE update logic to arm_smmu_init_bypass_stes()
so that we can force it to install a CFG_BYPASS STE for specific
SIDs.
Introduce a helper to check the SID range and to init the l2 strtab
entries (bypass). This will be useful when we have to initialize the
l2 strtab with bypass for RMR SIDs.
Signed-off-by: Shameer Kolothum
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 28 +++--
1 file changed,
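The "force" behaviour described above can be sketched as follows. This is a simplified model, not the real driver: the actual function writes hardware STE words, while this just captures the configuration choice (CFG_ABORT when bypass is disabled, unless forced to CFG_BYPASS, e.g. for RMR StreamIDs).

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the "force" flag: with disable_bypass set, STEs
 * normally get CFG_ABORT, but force installs CFG_BYPASS regardless
 * (e.g. for RMR StreamIDs). The real code writes hardware STE words. */
enum ste_cfg { STE_CFG_ABORT, STE_CFG_BYPASS };

static void arm_smmu_init_bypass_stes(enum ste_cfg *stes, int nent,
				      bool disable_bypass, bool force)
{
	enum ste_cfg cfg = (!disable_bypass || force) ? STE_CFG_BYPASS
						      : STE_CFG_ABORT;
	for (int i = 0; i < nent; i++)
		stes[i] = cfg;
}
```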
This will provide a way for SMMU drivers to retrieve StreamIDs
associated with IORT RMR nodes and use that to set bypass settings
for those IDs.
Signed-off-by: Shameer Kolothum
---
drivers/acpi/arm64/iort.c | 29 +
include/linux/acpi_iort.h | 8
2 files
Parse through the IORT RMR nodes and populate the reserve region list
corresponding to a given IOMMU and device (optional). Also, go through
the ID mappings of the RMR node and retrieve all the SIDs associated
with it.
Now that we have this support, update iommu_dma_get/_put_resv_regions()
paths
Currently drivers use generic_iommu_put_resv_regions() to remove
reserved regions. Introduce a dma-iommu specific reserve region
removal helper (iommu_dma_put_resv_regions()). This will be useful
when we introduce reserve regions with any firmware specific memory
allocations (e.g. IORT RMR) that have
Currently IORT provides a helper to retrieve HW MSI reserve regions.
Change this to a generic helper to retrieve any IORT related reserve
regions. This will be useful when we add support for RMR nodes in
subsequent patches.
Signed-off-by: Shameer Kolothum
---
drivers/acpi/arm64/iort.c | 23
At present iort_iommu_msi_get_resv_regions() returns the number of
MSI reserved regions on success and there are no users for this.
The reserved region list will get populated anyway for platforms
that require the HW MSI region reservation. Hence, change the
function to return void instead.
A union is introduced to struct iommu_resv_region to hold
any firmware specific data. This is in preparation to add
support for IORT RMR reserve regions and the union now holds
the RMR specific information.
Signed-off-by: Shameer Kolothum
---
include/linux/iommu.h | 9 +
1 file changed,
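The proposed layout can be sketched like this. The field names here are illustrative, not the exact diff: the point is that firmware-specific data (here, IORT RMR StreamIDs) lives in a union so other region types carry no extra fields.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the proposed layout; field names are illustrative. The
 * firmware-specific data lives in a union so non-RMR region types
 * carry no extra fields. */
struct iommu_iort_rmr_data {
	const unsigned int *sids;	/* StreamIDs of the RMR node */
	int num_sids;
};

struct iommu_resv_region {
	unsigned long long start;
	size_t length;
	int prot;
	int type;
	union {
		struct iommu_iort_rmr_data rmr;
	} fw_data;
};
```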
IORT rev E.d introduces more details into the RMR node Flags
field. Add temporary definitions to describe and access this
Flags field until the ACPICA header is updated to support E.d.
This patch can be reverted once the include/acpi/actbl2.h has
all the relevant definitions.
Signed-off-by: Shameer
Hi
v8 --> v9
- Addressed comments from Robin on interfaces as discussed here[0].
- Addressed comments from Lorenzo.
Though functionally there aren't any major changes, the interfaces have
changed from v8 and for that reason I have not included the T-by tags from
Steve and Eric yet (Many thanks for
Add max opt argument to iova_domain_init_rcaches(), and use it to set the
rcaches range.
Also fix up all users to set this value (at 0, meaning use default),
including a wrapper for that, iova_domain_init_rcaches_default().
For dma-iommu.c we derive the iova_len argument from the IOMMU group
max
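The sizing rule described above (0 meaning "use default", larger values widening the cached range) can be modeled in a few lines. The default of 6 here is an assumption for illustration only; `iova_domain_init_rcaches()` and the "0 means default" convention are what the cover letter describes.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the rcache sizing; the default of 6 is an assumption
 * for illustration, not the kernel's actual constant. */
#define IOVA_RANGE_CACHE_DEFAULT_LOG2 6

/* 0 means "use default"; otherwise callers widen the cached range. */
static unsigned int rcache_max(unsigned int max_opt_log2)
{
	return max_opt_log2 ? max_opt_log2 : IOVA_RANGE_CACHE_DEFAULT_LOG2;
}

/* An IOVA allocation is served from the rcaches only if its size order
 * falls within the configured range; larger ones hit the RB tree. */
static bool iova_is_cached(unsigned int order, unsigned int max_opt_log2)
{
	return order <= rcache_max(max_opt_log2);
}
```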
Add support to allow the maximum optimised DMA len be set for an IOMMU
group via sysfs.
This is much the same as the method used to change the default domain type
for a group.
Signed-off-by: John Garry
---
.../ABI/testing/sysfs-kernel-iommu_groups | 16 +
drivers/iommu/iommu.c
Allow iommu_change_dev_def_domain() to create a new default domain, keeping
the type the same as the current one.
Also remove the comment about the function's purpose, which will become stale.
Signed-off-by: John Garry
---
drivers/iommu/iommu.c | 49 ++-
include/linux/iommu.h
Some low-level drivers may request DMA mappings whose IOVA length exceeds
that of the current rcache upper limit.
This means that allocations for those IOVAs will never be cached, and
always must be allocated and freed from the RB tree per DMA mapping cycle.
This has a significant effect on
For streaming DMA mappings involving an IOMMU and whose IOVA len regularly
exceeds the IOVA rcache upper limit (meaning that they are not cached),
performance can be reduced.
This may be much more pronounced from commit 4e89dce72521 ("iommu/iova:
Retry from last rb tree node if iova search
Function iommu_group_store_type() supports changing the default domain
of an IOMMU group.
Many conditions need to be satisfied and steps taken for this action to be
successful.
Satisfying these conditions and steps will be required for setting other
IOMMU group attributes, so factor into a
On 4/4/2022 3:10 PM, Vasant Hegde via iommu wrote:
> Newer AMD systems can support multiple PCI segments, where each segment
> contains one or more IOMMU instances. However, an IOMMU instance can only
> support a single PCI segment.
>
Hi,
Please ignore this series. Looks like I had network
Rename 'device_id' to 'sbdf' and extend it to 32 bits so that we can
pass the PCI segment ID to ppr_notifier(). Also pass the PCI segment ID to
pci_get_domain_bus_and_slot() instead of the default value.
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 2 +-
drivers/iommu/amd/iommu.c
Rename struct device_state.devid variable to struct device_state.sbdf
and extend it to 32-bit to include the 16-bit PCI segment ID via
the helper function get_pci_sbdf_id().
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
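The 32-bit "sbdf" encoding used throughout the series (16-bit PCI segment ID in the upper half, 16-bit BDF in the lower half) can be sketched as below. The helper name follows the series; the exact body is an assumption.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the 32-bit "sbdf" encoding: 16-bit PCI segment ID in the
 * upper half, 16-bit BDF (bus/device/function) in the lower half.
 * The helper name follows the series; the body is an assumption. */
static inline uint32_t get_pci_sbdf_id(uint16_t seg, uint16_t bdf)
{
	return ((uint32_t)seg << 16) | bdf;
}
```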
Print the PCI segment ID along with the BDF. Useful for debugging.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/init.c | 10 +-
drivers/iommu/amd/iommu.c | 36 ++--
2 files
From: Suravee Suthikulpanit
By default, the PCI segment is zero and can be omitted. To support systems
with a non-zero PCI segment ID, modify the parsing functions to accept a
PCI segment ID.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
.../admin-guide/kernel-parameters.txt
From: Suravee Suthikulpanit
Upcoming AMD systems can have multiple PCI segments. Hence pass PCI
segment ID to pci_get_domain_bus_and_slot() instead of '0'.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/init.c | 6
From: Suravee Suthikulpanit
Extend current device ID variables to 32-bit to include the 16-bit
segment ID when parsing device information from IVRS table to initialize
each IOMMU.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
From: Suravee Suthikulpanit
The current get_device_id() only provides the 16-bit PCI device ID (i.e. BDF).
With multiple PCI segment support, we need to extend the helper function
to include the PCI segment ID.
So, introduce a new helper function get_device_sbdf_id() to replace
the current
Fix amd_iommu_flush_dte_all() and amd_iommu_flush_tlb_all() to flush
up to last_bdf only.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff
Replace it with per PCI segment last_bdf variable.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 3 ---
drivers/iommu/amd/init.c| 35 ++---
From: Suravee Suthikulpanit
This is replaced by the per PCI segment alias table.
Also remove alias_table_size variable.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/amd_iommu_types.h | 6 --
From: Suravee Suthikulpanit
Replace global amd_iommu_dev_table with per PCI segment device table.
Also remove "dev_table_size".
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/amd_iommu_types.h | 6 --
From: Suravee Suthikulpanit
To include a pointer to per PCI segment device table.
Also include struct amd_iommu as one of the function parameters to
amd_iommu_apply_erratum_63() since it is needed when setting up the DTE.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by:
From: Suravee Suthikulpanit
Include struct amd_iommu_pci_seg as a function parameter since
we need to access the per-PCI-segment device table.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/init.c | 27
From: Suravee Suthikulpanit
Start using per PCI segment device table instead of global
device table.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git
From: Suravee Suthikulpanit
Start using per PCI segment device table instead of global
device table.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git
From: Suravee Suthikulpanit
Start using per PCI segment device table instead of global
device table.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git
From: Suravee Suthikulpanit
Start using per PCI segment data structures instead of global data
structures.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 19 +++
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git
Then, remove the global amd_iommu_rlookup_table and rlookup_table_size.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 5 -
drivers/iommu/amd/init.c| 23 ++-
From: Suravee Suthikulpanit
Pass the amd_iommu structure as one of the parameters to these functions,
as it's needed to retrieve various tables inside these functions.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/iommu.c | 26
From: Suravee Suthikulpanit
Pass the amd_iommu structure as one of the parameters to the amd_irte_ops
functions, since it's needed to activate/deactivate the IOMMU.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 6 ++--
From: Suravee Suthikulpanit
Add a pointer to struct amd_iommu to amd_ir_data structure, which
can be used to correlate interrupt remapping data to a per-PCI-segment
interrupt remapping table.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
From: Suravee Suthikulpanit
To allow IOMMU rlookup using both PCI segment and device ID.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/iommu.c | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff
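The per-segment rlookup described above can be sketched as follows. The structure and function names follow the series, but the fields and bodies here are simplified models, not the real driver code: find the `amd_iommu_pci_seg` matching the device's segment, then index its rlookup table by the 16-bit devid.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of per-segment rlookup; names follow the series,
 * fields and bodies are illustrative. */
struct amd_iommu { int index; };

struct amd_iommu_pci_seg {
	uint16_t id;			   /* PCI segment ID */
	struct amd_iommu **rlookup_table;  /* indexed by 16-bit devid */
	struct amd_iommu_pci_seg *next;	   /* list of segments */
};

/* Find the segment matching 'seg', then index its rlookup table. */
static struct amd_iommu *rlookup_amd_iommu(struct amd_iommu_pci_seg *segs,
					   uint16_t seg, uint16_t devid)
{
	for (struct amd_iommu_pci_seg *p = segs; p; p = p->next)
		if (p->id == seg)
			return p->rlookup_table[devid];
	return NULL;	/* unknown segment */
}
```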
From: Suravee Suthikulpanit
Use rlookup_amd_iommu() helper function which will give per PCI
segment rlookup_table.
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/iommu.c | 64 +++
1 file changed, 38 insertions(+), 26
Then, remove the global irq_lookup_table.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 2 --
drivers/iommu/amd/init.c| 19 ---
drivers/iommu/amd/iommu.c | 36
It will replace global "rlookup_table_size" variable.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 3 +++
drivers/iommu/amd/init.c| 11 ++-
2 files changed, 9
It will replace global "alias_table_size" variable.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 3 +++
drivers/iommu/amd/init.c| 5 +++--
2 files changed, 6 insertions(+), 2
With multiple PCI segment support, the number of BDFs supported by each
segment may differ. Hence introduce a per-segment device table size,
which depends on last_bdf. This will replace the global
"device_table_size" variable.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Current code uses the global "amd_iommu_last_bdf" to track the last bdf
supported by the system. This value is used for various memory
allocations, device data flushing, etc.
Introduce a per-PCI-segment last_bdf, which will be used to track the last
bdf supported by the given PCI segment, and use this value
Newer AMD systems can support multiple PCI segments. In order to support
multiple PCI segments, the IVMD table in the IVRS structure is enhanced to
include a PCI segment ID. Update the ivmd_header structure to include "pci_seg".
Also introduce a per-PCI-segment unity map list. It will replace the global
From: Suravee Suthikulpanit
This will replace global alias table (amd_iommu_alias_table).
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/amd_iommu_types.h | 7 +
drivers/iommu/amd/init.c| 41
From: Suravee Suthikulpanit
It will remove the global old_dev_tbl_cpy. Also update copy_device_table()
to copy the device table for all PCI segments.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
drivers/iommu/amd/amd_iommu_types.h | 6 ++
This will replace global dev_data_list.
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 3 +++
drivers/iommu/amd/init.c| 1 +
drivers/iommu/amd/iommu.c | 21
This will replace global irq lookup table (irq_lookup_table).
Co-developed-by: Suravee Suthikulpanit
Signed-off-by: Suravee Suthikulpanit
Signed-off-by: Vasant Hegde
---
drivers/iommu/amd/amd_iommu_types.h | 6 ++
drivers/iommu/amd/init.c| 27 +++
2
From: Suravee Suthikulpanit
This will replace global rlookup table (amd_iommu_rlookup_table).
Also add helper functions to set/get rlookup table for the given device.
Co-developed-by: Vasant Hegde
Signed-off-by: Vasant Hegde
Signed-off-by: Suravee Suthikulpanit
---
From: Suravee Suthikulpanit
Introduce per PCI segment device table. All IOMMUs within the segment
will share this device table. This will replace global device
table i.e. amd_iommu_dev_table.
Also introduce helper function to get the device table for the given IOMMU.
Co-developed-by: Vasant
Newer AMD systems can support multiple PCI segments, where each segment
contains one or more IOMMU instances. However, an IOMMU instance can only
support a single PCI segment.
Current code assumes that the system contains only one PCI segment (segment 0)
and creates global data structures such as
struct iommu_dev_data contains the member "pdev" pointing to a pci_dev. This is
valid only for PCI devices; for other devices it will be NULL. This
causes unnecessary "pdev != NULL" checks at various places.
Replace "struct pci_dev" member with "struct device" and use
to_pci_dev() to get pci device
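The change can be sketched like this. The types below are simplified stand-ins for the kernel's: store a generic `struct device *` and convert with `to_pci_dev()` only where PCI-specific handling is needed, instead of keeping a pdev member that is NULL for non-PCI devices.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types: the dev_data now holds a
 * generic device pointer; PCI-specific code converts when needed. */
enum bus_type { BUS_PCI, BUS_OTHER };
struct device { enum bus_type bus; };
struct pci_dev { struct device dev; unsigned int devfn; };

static bool dev_is_pci(const struct device *d)
{
	return d->bus == BUS_PCI;
}

#define to_pci_dev(d) \
	((struct pci_dev *)((char *)(d) - offsetof(struct pci_dev, dev)))

/* was: struct pci_dev *pdev, NULL for non-PCI devices */
struct iommu_dev_data { struct device *dev; };
```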
Newer AMD systems can support multiple PCI segments, where each segment
contains one or more IOMMU instances. However, an IOMMU instance can only
support a single PCI segment.
Current code assumes a system contains only one PCI segment (segment 0)
and creates global data structures such as device
Any thoughts?
On 3/13/22 8:03 PM, Muhammad Usama Anjum wrote:
> dev_iommu_priv_get() is being used at the top of this function which
> dereferences dev. Dev cannot be NULL after this. Remove the validity
> check on dev and simplify the code.
>
> Signed-off-by: Muhammad Usama Anjum
> ---
>
On 4/3/22 10:05 PM, Christoph Hellwig wrote:
> To share more code between swiotlb and xen-swiotlb, offer a
> swiotlb_init_remap interface and add a remap callback to
> swiotlb_init_late that will allow Xen to remap the buffer
> without duplicating much of the logic.
>
>
On 2022/3/31 3:09, Jason Gunthorpe wrote:
On Tue, Mar 29, 2022 at 01:37:55PM +0800, Lu Baolu wrote:
Add support for SVA domain allocation and provide an SVA-specific
iommu_domain_ops.
Signed-off-by: Lu Baolu
include/linux/intel-iommu.h | 1 +
drivers/iommu/intel/iommu.c | 10 ++
Hi Jason,
On 2022/3/31 3:08, Jason Gunthorpe wrote:
On Tue, Mar 29, 2022 at 01:37:53PM +0800, Lu Baolu wrote:
Attaching an IOMMU domain to a PASID of a device is a generic operation
for modern IOMMU drivers which support PASID-granular DMA address
translation. Currently visible usage scenarios
Hi Jason and Kevin,
On 2022/4/3 7:32, Jason Gunthorpe wrote:
On Sat, Apr 02, 2022 at 08:43:16AM +, Tian, Kevin wrote:
This assumes any domain is interchangeable with any device, which is
not the iommu model. We need a domain op to check if a device is
compatible with the domain for vfio