Re: [RFC v3.1 00/22] intel_iommu: expose Shared Virtual Addressing to VMs

2020-02-22 Thread no-reply
Patchew URL: 
https://patchew.org/QEMU/1582358843-51931-1-git-send-email-yi.l@intel.com/



Hi,

This series failed the docker-mingw@fedora build test. Please find the testing
commands and their output below. If you have Docker installed, you can probably
reproduce it locally.

=== TEST SCRIPT BEGIN ===
#! /bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-mingw@fedora J=14 NETWORK=1
=== TEST SCRIPT END ===

 from /tmp/qemu-test/src/include/hw/pci/pci_bus.h:4,
 from /tmp/qemu-test/src/include/hw/pci-host/i440fx.h:15,
 from /tmp/qemu-test/src/stubs/pci-host-piix.c:2:
/tmp/qemu-test/src/include/hw/iommu/host_iommu_context.h:26:10: fatal error: 
linux/iommu.h: No such file or directory
 #include <linux/iommu.h>
          ^~~~~~~~~~~~~~~
compilation terminated.
make: *** [/tmp/qemu-test/src/rules.mak:69: stubs/pci-host-piix.o] Error 1
make: *** Waiting for unfinished jobs
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 664, in 
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', 
'--label', 'com.qemu.instance.uuid=797be2e5d6284df8a28232368d5ea704', '-u', 
'1003', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', 
'-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 
'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', 
'/home/patchew2/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', 
'/var/tmp/patchew-tester-tmp-c3wao_ik/src/docker-src.2020-02-22-03.18.45.20389:/var/tmp/qemu:z,ro',
 'qemu:fedora', '/var/tmp/qemu/run', 'test-mingw']' returned non-zero exit 
status 2.
filter=--filter=label=com.qemu.instance.uuid=797be2e5d6284df8a28232368d5ea704
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-c3wao_ik/src'
make: *** [docker-run-test-mingw@fedora] Error 2

real    2m30.962s
user    0m7.737s


The full log is available at
http://patchew.org/logs/1582358843-51931-1-git-send-email-yi.l@intel.com/testing.docker-mingw@fedora/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-de...@redhat.com

[RFC v3.1 00/22] intel_iommu: expose Shared Virtual Addressing to VMs

2020-02-22 Thread Liu Yi L
Shared Virtual Addressing (SVA), a.k.a. Shared Virtual Memory (SVM) on
Intel platforms, allows address space sharing between device DMA and
applications. SVA can reduce programming complexity and enhance security.

This QEMU series is intended to expose SVA usage to VMs, i.e. sharing a
guest application's address space with passthru devices. This is called
vSVA in this series. The whole vSVA enabling requires QEMU/VFIO/IOMMU
changes. This version is 3.1, addressing comments on RFC v3. It is based
on the kernel found in the github repo below. This kernel has some
internal tweaks between VFIO and the VT-d iommu driver, so it has not been
sent out for community review, but the interface between kernel and QEMU
is the latest. So I am sending this version out for review.
https://github.com/luxis1999/linux-vsva: vsva-linux-5.5-rc3-rfcv3.1

The high-level architecture for SVA virtualization is shown below; the key
design of vSVA support is to utilize the dual-stage IOMMU translation
(also known as IOMMU nesting translation) capability of the host IOMMU.

    .-------------.  .---------------------------.
    |   vIOMMU    |  | Guest process CR3, FL only|
    |             |  '---------------------------'
    .----------------/
    | PASID Entry |--- PASID cache flush --+
    '-------------'                        |
    |             |                        V
    |             |                CR3 in GPA
    '-------------'
Guest
------| Shadow |--------------------------|--------
      v        v                          v
Host
    .-------------.  .----------------------.
    |   pIOMMU    |  | Bind FL for GVA-GPA  |
    |             |  '----------------------'
    .----------------/  |
    | PASID Entry |     V (Nested xlate)
    '----------------\.------------------------------.
    |             |   |SL for GPA-HPA, default domain|
    |             |   '------------------------------'
    '-------------'
Where:
 - FL = First level/stage one page tables
 - SL = Second level/stage two page tables

The complete vSVA kernel upstream patches are divided into three phases:
1. Common APIs and PCI device direct assignment
2. IOMMU-backed Mediated Device assignment
3. Page Request Services (PRS) support

This QEMU RFC patchset aims at phase 1 and phase 2.

Related series:
[1] [PATCH V9 00/10] Nested Shared Virtual Address (SVA) VT-d support:
https://lkml.org/lkml/2020/1/29/37
[PATCH 0/3] IOMMU user API enhancement:
https://lkml.org/lkml/2020/1/29/45

[2] [RFC v3 0/8] vfio: expose virtual Shared Virtual Addressing to VMs
https://lkml.org/lkml/2020/1/29/255

There are roughly two parts:
 1. Introduce HostIOMMUContext as an abstraction of the host IOMMU. It
    provides explicit methods for vIOMMU emulators to communicate with the
    host IOMMU, e.g. propagating guest page table bindings to the host IOMMU
    to set up dual-stage DMA translation, and flushing the IOMMU iotlb.
 2. Set up dual-stage IOMMU translation for Intel vIOMMU. This includes:
    - Checking IOMMU uAPI version compatibility and VFIO nesting
      capabilities, which include hardware compatibility (stage 1 format)
      and VFIO_PASID_REQ availability. This is preparation for setting up
      dual-stage DMA translation in the host IOMMU.
    - Propagating guest PASID allocation and free requests to the host.
    - Propagating guest page table bindings to the host to set up dual-stage
      IOMMU DMA translation in the host IOMMU.
    - Propagating guest IOMMU cache invalidations to the host to ensure
      iotlb correctness.

The complete QEMU set can be found at:
https://github.com/luxis1999/qemu.git: sva_qemu_rfcv3.1

The complete kernel can be found at:
https://github.com/luxis1999/linux-vsva.git: vsva-linux-5.5-rc3-rfcv3.1

Tests: basic functionality test, VM reboot/shutdown, full compilation.

Changelog:
- RFC v3 -> v3.1:
  a) Drop IOMMUContext, and rename DualStageIOMMUObject to HostIOMMUContext.
     HostIOMMUContext is per-vfio-container; it is exposed to vIOMMU via the
     PCI layer. VFIO registers a PCIHostIOMMUFunc callback with the PCI
     layer, through which vIOMMU can get the HostIOMMUContext instance.
  b) Check IOMMU uAPI version by VFIO_CHECK_EXTENSION
  c) Add a check on VFIO_PASID_REQ availability via VFIO_IOMMU_GET_INFO
  d) Reorder the series: put the vSVA linux header file update at the
     beginning, and the x-scalable-mode option modification at the end of
     the series.
  e) Dropped patch "[RFC v3 01/25] hw/pci: modify pci_setup_iommu() to set
     PCIIOMMUOps"
  RFCv3: https://patchwork.kernel.org/cover/11356033/

- RFC v2 -> v3:
  a) Introduce DualStageIOMMUObject to abstract the host IOMMU programming
     capability, e.g. requesting PASIDs from the host and setting up IOMMU
     nesting translation on the host IOMMU. The
     pasid_alloc/bind_guest_page_table/iommu_cache_flush operations are
     moved to be DualStageIOMMUOps. Thus, DualStageIOMMUObject is an