Hi,
On 12/12/19 3:46 AM, Barret Rhoden via iommu wrote:
I can imagine a bunch of ways around this.
One option is to hook in a check for buggy RMRRs in intel-iommu.c. If
the base and end are 0, just ignore the entry. That works for my
specific buggy DMAR entry. There might be other buggy entries
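For illustration, a minimal sketch of that check (the helper name and the exact hook point in the DMAR parsing path are assumptions, not the eventual patch):

/* Hypothetical helper: treat an RMRR whose base and end are both zero
 * as firmware junk and skip the entry instead of failing DMAR parsing. */
static bool rmrr_entry_is_empty(struct acpi_dmar_reserved_memory *rmrr)
{
	return rmrr->base_address == 0 && rmrr->end_address == 0;
}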
Hi,
On 12/12/19 9:49 AM, Jerry Snitselaar wrote:
On Wed Dec 11 19, Lu Baolu wrote:
If the default DMA domain of a group doesn't fit a device, it
will still sit in the group but use a private identity domain.
When map/unmap/iova_to_phys come through iommu API, the driver
should still serve them,
On Wed Dec 11 19, Lu Baolu wrote:
If the default DMA domain of a group doesn't fit a device, it
will still sit in the group but use a private identity domain.
When map/unmap/iova_to_phys come through iommu API, the driver
should still serve them, otherwise, other devices in the same
group will be
Hi,
On 12/12/19 4:28 AM, Alex Williamson wrote:
Commit d850c2ee5fe2 ("iommu/vt-d: Expose ISA direct mapping region via
iommu_get_resv_regions") created a direct-mapped reserved memory region
in order to replace the static identity mapping of the ISA address
space, where the latter was then removed
Hi,
On 12/12/19 12:35 AM, Jerry Snitselaar wrote:
On Wed Dec 11 19, Lu Baolu wrote:
If the default DMA domain of a group doesn't fit a device, it
will still sit in the group but use a private identity domain.
When map/unmap/iova_to_phys come through iommu API, the driver
should still serve them
Hi -
Commit f036c7fa0ab6 ("iommu/vt-d: Check VT-d RMRR region in BIOS is
reported as reserved") caused a machine to fail to boot for me, but only
after a kexec.
Firmware provided an RMRR entry with base and end both == 0:
DMAR: RMRR base: 0x00 end: 0x00
Yes, firmware
On Wed, Dec 11, 2019 at 03:37:30PM +, James Sewart wrote:
> > On 10 Dec 2019, at 22:37, Bjorn Helgaas wrote:
> >> -void pci_add_dma_alias(struct pci_dev *dev, u8 devfn)
> >> +void pci_add_dma_alias(struct pci_dev *dev, u8 devfn_from, unsigned nr_devfns)
> >> {
> >> + int devfn_to;
> >>
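To illustrate the new signature, an existing single-devfn caller would pass nr_devfns == 1 (a sketch, not necessarily a hunk from this series):

/* Before: alias exactly one function. */
pci_add_dma_alias(dev, PCI_DEVFN(PCI_SLOT(dev->devfn), 0));

/* After: the same behaviour, expressed as a range of one devfn. */
pci_add_dma_alias(dev, PCI_DEVFN(PCI_SLOT(dev->devfn), 0), 1);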
The VT-d docs specify requirements for the RMRR entries' base and end
(called 'Limit' in the docs) addresses.
This commit will cause the DMAR processing to skip any RMRR entries
that do not meet these requirements with the expectation that firmware
is giving us junk.
Signed-off-by: Barret Rhoden
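A hedged sketch of the kind of check being described (field names from struct acpi_dmar_reserved_memory; the conditions paraphrase the spec and may not match the patch exactly):

/* VT-d spec: Base is 4 KiB aligned, Limit is the last byte of the
 * region (so Limit + 1 is 4 KiB aligned), and Limit must be greater
 * than Base. */
static bool rmrr_meets_spec(struct acpi_dmar_reserved_memory *rmrr)
{
	return IS_ALIGNED(rmrr->base_address, PAGE_SIZE) &&
	       IS_ALIGNED(rmrr->end_address + 1, PAGE_SIZE) &&
	       rmrr->end_address > rmrr->base_address;
}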
The RMRR sanity check is to confirm that the memory pointed to by the
RMRR entry is not used by the kernel. e820 RESERVED memory will not be
used. However, there are ranges of physical memory that are not covered
by the e820 table at all. The kernel will not use this memory, either.
This commit
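A sketch of the relaxed rule being argued for (illustrative policy using the existing x86 e820 helpers; not a quote of any patch):

/* Accept the RMRR target if it is entirely e820-reserved, or if no
 * part of it is usable RAM at all -- the kernel allocates from
 * neither. */
static bool rmrr_target_unused_by_kernel(u64 start, u64 end)
{
	if (e820__mapped_all(start, end, E820_TYPE_RESERVED))
		return true;

	return !e820__mapped_any(start, end, E820_TYPE_RAM);
}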
RMRR entries describe memory regions that are DMA targets for devices
outside the kernel's control.
RMRR entries that fail the sanity check are pointing to regions of
memory that the firmware did not tell the kernel are reserved or
otherwise should not be used.
Instead of aborting DMAR processing
Commit d850c2ee5fe2 ("iommu/vt-d: Expose ISA direct mapping region via
iommu_get_resv_regions") created a direct-mapped reserved memory region
in order to replace the static identity mapping of the ISA address
space, where the latter was then removed in commit df4f3c603aeb
("iommu/vt-d: Remove stat
On Wed, 2019-12-11 at 18:33 +, Robin Murphy wrote:
> Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus
> limit, it makes more sense to pass them around in their native u64
> rather than converting to dma_addr_t early. Do that, and resolve the
> remaining type discrepancy against
On Wed, Dec 11, 2019 at 06:33:26PM +, Robin Murphy wrote:
> Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus
> limit, it makes more sense to pass them around in their native u64
> rather than converting to dma_addr_t early. Do that, and resolve the
> remaining type discrepancy
On Wed, Dec 11, 2019 at 06:33:26PM +, Robin Murphy wrote:
> Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus
> limit, it makes more sense to pass them around in their native u64
> rather than converting to dma_addr_t early. Do that, and resolve the
> remaining type discrepancy
Since iommu_dma_alloc_iova() combines incoming masks with the u64 bus
limit, it makes more sense to pass them around in their native u64
rather than converting to dma_addr_t early. Do that, and resolve the
remaining type discrepancy against the domain geometry with a cheeky
cast to keep things simple
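A rough sketch of the type flow being described (field names from struct device and struct iommu_domain_geometry; an illustration, not an excerpt from the patch):

/* Keep the limit as u64 end to end and only reconcile it with the
 * dma_addr_t aperture at the point of comparison. */
static u64 clamp_iova_limit(struct device *dev, struct iommu_domain *domain,
			    u64 dma_limit)
{
	dma_limit = min_not_zero(dma_limit, dev->bus_dma_limit);

	if (domain->geometry.force_aperture)
		dma_limit = min_t(u64, dma_limit, domain->geometry.aperture_end);

	return dma_limit;
}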
Hello Alex,
I tried the suggested changes on kernel 5.4.2 and now it is working perfectly.
Thank you for your detailed answer!
Sent with ProtonMail Secure Email.
‐‐‐ Original Message ‐‐‐
On Wednesday 11 December 2019 16:23, Alex Williamson
wrote:
> On Wed, 11 Dec 2019 13:17:18 +
On Wed Dec 11 19, Lu Baolu wrote:
If the default DMA domain of a group doesn't fit a device, it
will still sit in the group but use a private identity domain.
When map/unmap/iova_to_phys come through iommu API, the driver
should still serve them, otherwise, other devices in the same
group will be
On Mon, Dec 09, 2019 at 03:50:07PM +0100, Thierry Reding wrote:
> From: Thierry Reding
>
> Use the new standard function instead of open-coding it.
>
> Cc: Jean-Philippe Brucker
> Cc: virtualizat...@lists.linux-foundation.org
> Signed-off-by: Thierry Reding
Reviewed-by: Jean-Philippe Brucker
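The specific helper being adopted is not named here, so for context only, the open-coded pattern that such a consolidation typically removes looks like this (function and variable names are illustrative):

/* Open-coded teardown of a reserved-regions list; a common helper
 * would replace loops like this across drivers. */
static void viommu_put_resv_regions(struct device *dev, struct list_head *head)
{
	struct iommu_resv_region *entry, *next;

	list_for_each_entry_safe(entry, next, head, list)
		kfree(entry);
}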
From: Thierry Reding
[ Upstream commit 96d3ab802e4930a29a33934373157d6dff1b2c7e ]
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
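The bug class is easy to show in isolation (a generic illustration of the truncation, not the tegra-smmu code itself):

#include <stdint.h>

/* A 32-bit page directory entry holds a page frame number.  Shifting
 * it left by 12 without widening first loses the high bits of the
 * result, so any page table located above 4 GiB resolves to the
 * wrong physical address. */
static uint64_t pde_to_phys_buggy(uint32_t pde)
{
	return pde << 12;               /* shift performed in 32 bits */
}

static uint64_t pde_to_phys_fixed(uint32_t pde)
{
	return (uint64_t)pde << 12;     /* widen before shifting */
}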
> On 10 Dec 2019, at 22:37, Bjorn Helgaas wrote:
>
> [+cc Joerg]
>
> On Tue, Dec 03, 2019 at 03:43:53PM +, James Sewart wrote:
>> pci_add_dma_alias can now be used to create a dma alias for a range of
>> devfns.
>>
>> Reviewed-by: Logan Gunthorpe
>> Signed-off-by: James Sewart
>> ---
>>
From: Thierry Reding
[ Upstream commit 96d3ab802e4930a29a33934373157d6dff1b2c7e ]
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
From: Thierry Reding
[ Upstream commit 96d3ab802e4930a29a33934373157d6dff1b2c7e ]
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
From: Ezequiel Garcia
[ Upstream commit 42bb97b80f2e3bf592e3e99d109b67309aa1b30e ]
IOMMU domain resource life is well-defined, managed
by .domain_alloc and .domain_free.
Therefore, domain-specific resources shouldn't be tied to
the device life, but instead to its domain.
Signed-off-by: Ezequiel Garcia
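A driver-agnostic sketch of that rule (all names hypothetical, not taken from the patch): per-domain resources are created in .domain_alloc and torn down in .domain_free, never with devm_* helpers tied to a device.

struct my_domain {
	u32 *pgtable;
	struct iommu_domain domain;
};

static struct iommu_domain *my_domain_alloc(unsigned int type)
{
	struct my_domain *dom;

	dom = kzalloc(sizeof(*dom), GFP_KERNEL);
	if (!dom)
		return NULL;

	/* Page-table memory belongs to the domain... */
	dom->pgtable = (u32 *)get_zeroed_page(GFP_KERNEL | GFP_DMA32);
	if (!dom->pgtable) {
		kfree(dom);
		return NULL;
	}

	return &dom->domain;
}

static void my_domain_free(struct iommu_domain *domain)
{
	struct my_domain *dom = container_of(domain, struct my_domain, domain);

	/* ...and is released with the domain, regardless of device life. */
	free_page((unsigned long)dom->pgtable);
	kfree(dom);
}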
From: Thierry Reding
[ Upstream commit 96d3ab802e4930a29a33934373157d6dff1b2c7e ]
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits
On Wed, 11 Dec 2019 13:17:18 +
cprt wrote:
> Hello,
> I am using VFIO with QEMU trying to passthrough my audio device.
>
> I successfully did this operation with my previous system, with a 7th
> generation intel and an older kernel.
> Now I am using a 10th generation intel and a newer kernel
From: Jean-Philippe Brucker
[ Upstream commit f7aff1a93f52047739af31072de0ad8d149641f3 ]
Since commit 7723f4c5ecdb ("driver core: platform: Add an error message
to platform_get_irq*()"), platform_get_irq_byname() displays an error
when the IRQ isn't found. Since the SMMUv3 driver uses that function
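The usual fix for that pattern is the _optional variant, roughly as in the fragment below (the IRQ name "combined" is only an example; whether the patch does exactly this is an assumption):

/* Optional IRQ lines: the _optional variant returns the same codes
 * but stays silent when the line is simply absent. */
irq = platform_get_irq_byname_optional(pdev, "combined");
if (irq > 0)
	smmu->combined_irq = irq;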
From: Ezequiel Garcia
[ Upstream commit 42bb97b80f2e3bf592e3e99d109b67309aa1b30e ]
IOMMU domain resource life is well-defined, managed
by .domain_alloc and .domain_free.
Therefore, domain-specific resources shouldn't be tied to
the device life, but instead to its domain.
Signed-off-by: Ezequiel Garcia
From: Thierry Reding
[ Upstream commit 96d3ab802e4930a29a33934373157d6dff1b2c7e ]
Page tables that reside in physical memory beyond the 4 GiB boundary are
currently not working properly. The reason is that when the physical
address for page directory entries is read, it gets truncated at 32 bits