By default, all BARs are mapped with VMA access permissions set to pgprot_noncached. On ARM64, pgprot_noncached corresponds to MT_DEVICE_nGnRnE, which is strongly ordered and permits only aligned accesses. This type of mapping works for non-prefetchable BARs containing EP controller registers, but it prevents unaligned accesses to prefetchable BARs.
With CMB NVMe drives, prefetchable BARs must be mapped as MT_NORMAL_NC to allow unaligned access.

Signed-off-by: Srinath Mannam <srinath.man...@broadcom.com>
Reviewed-by: Ray Jui <ray....@broadcom.com>
Reviewed-by: Vikram Prakash <vikram.prak...@broadcom.com>
---
 drivers/vfio/pci/vfio_pci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index b423a30..eff6b65 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -1142,7 +1142,10 @@ static int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma)
 	}
 
 	vma->vm_private_data = vdev;
-	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+	if (pci_resource_flags(pdev, index) & IORESOURCE_PREFETCH)
+		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
+	else
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 	vma->vm_pgoff = (pci_resource_start(pdev, index) >> PAGE_SHIFT) + pgoff;
 
 	return remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
-- 
2.7.4