On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> * All guest memory is mapped into the physical nvme device,
>   but not 1:1 as vfio-pci would do this.
>   This allows very efficient DMA.
>   To support this, patch 2 adds the ability for an mdev device to
>   listen on the guest's memory map events.
>   Any such memory is immediately pinned and then DMA mapped.
>   (Support for fabric drivers where this is not possible exists too,
>   in which case the fabric driver will do its own DMA mapping.)
Does this mean that all guest memory is pinned all the time? If so, are
you sure that's acceptable?

Additionally, what is the performance overhead of the IOMMU notifier
added by patch 8/9? How often was that notifier called per second in
your tests, and how much time was spent per call in the notifier
callbacks?

Thanks,

Bart.