---
From b78f4164881125c4fecfdb87878d0120b2177c53 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Sun, 1 Oct 2017 09:37:35 +0200
Subject: nvme-pci: Use PCI bus address for data/queues in CMB

Currently, the NVMe PCI host driver programs the CMB's DMA address as
the I/O SQ addresses. This results in failures on systems where a 1:1
outbound mapping is not used (for example Broadcom iProc SOCs), because
the CMB BAR is programmed with a PCI bus address while the NVMe
endpoint is told to access the CMB through a DMA address. Program the
PCI bus address of the CMB for the I/O SQs instead, which works whether
or not the outbound mapping is 1:1.
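
As a rough sketch of the two views involved (struct cmb_example and
cmb_example_map() are illustrative names, not the driver's actual
fields; pci_resource_start(), ioremap_wc() and pci_bus_address() are
the stock PCI/io helpers):

#include <linux/io.h>
#include <linux/pci.h>

/* Illustrative only; the real driver keeps equivalent fields in its
 * device structure. */
struct cmb_example {
	void __iomem	*cmb;		/* host-side mapping of the CMB */
	u64		cmb_bus_addr;	/* base address handed to the controller */
};

static int cmb_example_map(struct pci_dev *pdev, struct cmb_example *c,
			   int bar, resource_size_t offset, size_t size)
{
	/* CPU view: map the BAR's resource (physical) address so the
	 * host can write submission queue entries into the CMB. */
	c->cmb = ioremap_wc(pci_resource_start(pdev, bar) + offset, size);
	if (!c->cmb)
		return -ENOMEM;

	/* Device view: the BAR as it appears on the PCI bus.  Without a
	 * 1:1 outbound mapping (e.g. Broadcom iProc) this differs from
	 * the DMA address, which is why programming a DMA address into
	 * the controller fails there. */
	c->cmb_bus_addr = pci_bus_address(pdev, bar) + offset;
	return 0;
}

Only the bus address is something the controller can make sense of; the
resource address is purely a host-side detail needed for the ioremap.
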
Yes, this patch works for our platform.

On Wed, Oct 4, 2017 at 12:00 PM, Christoph Hellwig wrote:
> On Mon, Oct 02, 2017 at 11:21:29AM -0600, Keith Busch wrote:
>> Yah, calling this a DMA address was a misnomer and confusing.
>
> Abhishek, can you test if this works for you?

On Sun, Oct 01, 2017 at 09:42:03AM +0200, Christoph Hellwig wrote:
> This looks very convoluted, mostly because the existing code is
> doing weird things. For one thing, what is sq_dma_addr currently
> is not a DMA address - we need the resource address for the
> ioremap, but we don't need to stash that away, and second the one
> programmed into the controller should be the PCI bus address.

Yah, calling this a DMA address was a misnomer and confusing.
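
A matching sketch of the queue side (struct sq_example and
sq_example_place_in_cmb() are again illustrative, building on the
cmb_example fields above): the pointer the host writes SQEs through
comes from the ioremap, while the value that later gets programmed
into the controller for a CMB queue is derived from the bus address,
not from a dma_map_*() result:

/* Illustrative queue fields; the real driver keeps equivalents in its
 * per-queue structure. */
struct sq_example {
	void __iomem	*sq_cmds_io;	/* host-side pointer for writing SQEs */
	u64		sq_addr;	/* later written to the controller; for a
					 * CMB queue this must be a PCI bus
					 * address despite any "dma" naming */
};

static void sq_example_place_in_cmb(struct cmb_example *c,
				    struct sq_example *q,
				    resource_size_t offset)
{
	q->sq_cmds_io = c->cmb + offset;	/* from the resource mapping */
	q->sq_addr = c->cmb_bus_addr + offset;	/* from pci_bus_address() */
}

That split is exactly the misnomer being pointed out: once the queue
lives in the CMB, the field that ends up in the controller is not a DMA
address at all.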