Currently, three DMA atomic pools are initialized as long as the relevant
kernel code is built in. However, in the kdump kernel on x86_64 this is not
right when trying to create atomic_pool_dma, because there are no managed
pages in the DMA zone. In this case, the DMA zone only has the low 1M of
memory present and lock
Thanks Alexander,
but this was already fixed 4 days ago.
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Mon, Dec 13, 2021 at 08:50:05AM +0800, Lu Baolu wrote:
> > Does this work for you? Can I work towards this in the next version?
>
> A kindly ping ... Is this heading the right direction? I need your
> advice to move ahead. :-)
I prefer this to all the duplicated code in the v3 series. Given
On Fri, Dec 10, 2021 at 11:18:44PM +0100, Thomas Gleixner wrote:
> There are quite some places which retrieve the first MSI descriptor to
> evaluate whether the setup is for MSI or MSI-X. That's required because
> pci_dev::msi[x]_enabled is only set when the setup completed successfully.
>
> There
On Fri, Dec 10, 2021 at 11:18:46PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Cc: Juergen Gross
> Cc: xen-de...@lists.xenproject.org
> ---
> V3: Use pci_dev->msix_enabled.
> ---
> arch/x86/pci/xen.c
On Fri, Dec 10, 2021 at 11:18:47PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> ---
> V3: Use pci_dev->msix_enabled - Jason
> ---
> arch/x86/kernel/apic/msi.c |5 +
> 1 file changed, 1 insertion
On Fri, Dec 10, 2021 at 11:18:49PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> to determine whether this is MSI or MSIX instead of consulting MSI
> descriptors.
>
> Signed-off-by: Thomas Gleixner
> ---
> V2: Use PCI device property - Jason
> ---
> kernel/irq/msi.c | 17 ++--
On Fri, Dec 10, 2021 at 11:18:51PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Cc: Arnd Bergmann
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: linuxppc-...@lists.ozlabs.org
> ---
> V3: Us
On Fri, Dec 10, 2021 at 11:18:52PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> instead of fiddling with MSI descriptors.
>
> Signed-off-by: Thomas Gleixner
> Cc: Michael Ellerman
> Cc: linuxppc-...@lists.ozlabs.org
> ---
> V3: Use pci_dev->msix_enabled - Jason
> ---
> arch/power
On Fri, Dec 10, 2021 at 11:19:22PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Set the domain info flag and remove the check.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: "Cédric Le Goater"
> Cc:
On Fri, Dec 10, 2021 at 11:19:23PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> This allows drivers to retrieve the Linux interrupt number instead of
> fiddling with MSI descriptors.
>
> msi_get_virq() returns the Linux interrupt number or 0 in case that there
> is no entry for the
On Fri, Dec 10, 2021 at 11:19:25PM +0100, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Use msi_get_vector() and handle the return value to be compatible.
>
> No functional change intended.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> ---
> V2: Handle the INTx c
On 12/13/21 6:27 AM, Baoquan He wrote:
Currently, three DMA atomic pools are initialized as long as the relevant
kernel code is built in. However, in the kdump kernel on x86_64 this is not
right when trying to create atomic_pool_dma, because there are no managed
pages in the DMA zone. In this case, the DMA zone
On 12/12/21 11:14 PM, Tianyu Lan wrote:
> In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
> an extra address space which is above shared_gpa_boundary (e.g. the 39-bit
> address line) reported by the Hyper-V CPUID ISOLATION_CONFIG. The accessed
> physical address will be the original physical add
> -Original Message-
> From: Tianyu Lan
> Sent: Monday, December 13, 2021 2:14 AM
> To: KY Srinivasan ; Haiyang Zhang
> ; Stephen
> Hemminger ; wei@kernel.org; Dexuan Cui
> ;
> t...@linutronix.de; mi...@redhat.com; b...@alien8.de;
> dave.han...@linux.intel.com;
> x...@kernel.org;
On Thu, Dec 09, 2021 at 05:35:59PM +0100, Thierry Reding wrote:
> From: Thierry Reding
>
> Allow the NVIDIA-specific ARM SMMU implementation to bind to the SMMU
> instances found on Tegra234.
>
> Signed-off-by: Thierry Reding
> ---
> drivers/iommu/arm/arm-smmu/arm-smmu-impl.c | 3 ++-
> 1 file
On 23:19-20211210, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Use the common msi_index member and get rid of the pointless wrapper struct.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> Cc: Nishanth Menon
> Cc: Tero Kristo
> Cc:
On 23:18-20211210, Thomas Gleixner wrote:
[...]
>
> It's also available from git:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git
> msi-v3-part-2
[...]
> ---
> drivers/dma/ti/k3-udma-private.c|6
> drivers/dma/ti/k3-udma.c
During refactoring, the logic around gfpflags_allow_blocking() got inverted
due to a missing '!'. Fix this by adding it back.
Fixes: 8d7c141bb80f ("dma-direct: add a dma_direct_use_pool helper")
Signed-off-by: Alexander Stein
---
I bisected this to the commit in the 'Fixes:' tag. Here is the splat:
On 23:19-20211210, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Allocate the MSI device data on first invocation of the allocation function.
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> Cc: Nishanth Menon
> Cc: Tero Kristo
> Cc:
On Wed, 01 Dec 2021 13:09:42 +0530, Vinod Koul wrote:
> Add the SoC specific compatible for SM8450 implementing
> arm,mmu-500.
>
> Signed-off-by: Vinod Koul
> ---
> Documentation/devicetree/bindings/iommu/arm,smmu.yaml | 1 +
> 1 file changed, 1 insertion(+)
>
Acked-by: Rob Herring
On 23:19-20211210, Thomas Gleixner wrote:
> From: Thomas Gleixner
>
> Just use the core function msi_get_virq().
>
> Signed-off-by: Thomas Gleixner
> Reviewed-by: Greg Kroah-Hartman
> Reviewed-by: Jason Gunthorpe
> Cc: Peter Ujfalusi
> Cc: Vinod Koul
> Cc: dmaeng...@vger.kernel.org
Acked-b
On Mon, Dec 13, 2021 at 02:48:52PM +0800, Yong Wu wrote:
> On Fri, 2021-12-03 at 17:34 -0600, Rob Herring wrote:
> > On Fri, 03 Dec 2021 14:40:24 +0800, Yong Wu wrote:
> > > If a platform's larb support gals, there will be some larbs have a
> > > one
> > > more "gals" clock while the others still o
On Fri, 03 Dec 2021 14:40:25 +0800, Yong Wu wrote:
> Add mt8186 smi support in the bindings.
>
> Signed-off-by: Yong Wu
> ---
> .../bindings/memory-controllers/mediatek,smi-common.yaml | 4 +++-
> .../bindings/memory-controllers/mediatek,smi-larb.yaml| 3 +++
> 2 files changed, 6 in
On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Convert the sg_is_chain(), sg_is_last() and sg_chain_ptr() macros
> into static inline functions. There's no reason for these to be macros
> and static inline are generally preferred these days.
>
> Also introduce the SG_PAGE_LINK_MASK define so the P2
On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Make use of the third free LSB in scatterlist's page_link on 64bit systems.
>
> The extra bit will be used by dma_[un]map_sg_p2pdma() to determine when a
> given SGL segments dma_address points to a PCI bus address.
> dma_unmap_sg_p2pdma() will need to
On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Attempt to find the mapping type for P2PDMA pages on the first
> DMA map attempt if it has not been done ahead of time.
>
> Previously, the mapping type was expected to be calculated ahead of
> time, but if pages are to come from userspace then there's
On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> pci_p2pdma_map_type() will be needed by the dma-iommu map_sg
> implementation because it will need to determine the mapping type
> ahead of actually doing the mapping to create the actual IOMMU mapping.
>
> Prototypes for this helper are added to dma-m
On 11/17/21 1:53 PM, Logan Gunthorpe wrote:
> Introduce a supports_pci_p2pdma() operation in nvme_ctrl_ops to
> replace the fixed NVME_F_PCI_P2PDMA flag such that the dma_map_ops
> flags can be checked for PCI P2PDMA support.
>
> Signed-off-by: Logan Gunthorpe
> ---
Looks good.
Reviewed-by: Cha
> static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
> - struct request *req, struct nvme_rw_command *cmd, int entries)
> + struct request *req, struct nvme_rw_command *cmd)
> {
> struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
> struct dma_pool *p
On 2021-12-13 3:21 p.m., Chaitanya Kulkarni wrote:
>
>> static blk_status_t nvme_pci_setup_sgls(struct nvme_dev *dev,
>> -struct request *req, struct nvme_rw_command *cmd, int entries)
>> +struct request *req, struct nvme_rw_command *cmd)
>> {
>> struct nvme_iod
On 12/14/2021 12:45 AM, Dave Hansen wrote:
On 12/12/21 11:14 PM, Tianyu Lan wrote:
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space which is above shared_gpa_boundary (e.g. the 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG. The accessed
physical
On Fri, 2021-12-10 at 12:57 -0800, Guenter Roeck wrote:
> Since commit baf94e6ebff9 ("iommu/mediatek: Add device link for smi-common
> and m4u"), the driver assumes that at least one phandle associated with
> "mediatek,larbs" exists. If that is not the case, for example if reason
> "mediatek