> From: Dr. David Alan Gilbert <dgilb...@redhat.com>
> Sent: Tuesday, March 16, 2021 11:47 PM
> 
> * Tian, Kevin (kevin.t...@intel.com) wrote:
> > > From: Qemu-devel <qemu-devel-bounces+kevin.tian=intel....@nongnu.org>
> > > On Behalf Of Dr. David Alan Gilbert
> > >
> > > * Daniel P. Berrangé (berra...@redhat.com) wrote:
> > > > On Thu, Mar 11, 2021 at 12:50:09AM +0530, Tarun Gupta wrote:
> > > > > Document interfaces used for VFIO device migration. Added flow of state changes
> > > > > during live migration with VFIO device. Tested by building docs with the new
> > > > > vfio-migration.rst file.
> > > > >
> > > > > v2:
> > > > > - Included the new vfio-migration.rst file in index.rst
> > > > > - Updated dirty page tracking section, also added details about
> > > > >   'pre-copy-dirty-page-tracking' opt-out option.
> > > > > - Incorporated comments around wording of doc.
> > > > >
> > > > > Signed-off-by: Tarun Gupta <targu...@nvidia.com>
> > > > > Signed-off-by: Kirti Wankhede <kwankh...@nvidia.com>
> > > > > ---
> > > > >  MAINTAINERS                   |   1 +
> > > > >  docs/devel/index.rst          |   1 +
> > > > >  docs/devel/vfio-migration.rst | 135 ++++++++++++++++++++++++++++++++++
> > > > >  3 files changed, 137 insertions(+)
> > > > >  create mode 100644 docs/devel/vfio-migration.rst
> > > >
> > > >
> > > > > +Postcopy
> > > > > +========
> > > > > +
> > > > > +Postcopy migration is not supported for VFIO devices.
> > > >
> > > > What is the problem here and is there any plan for how to address it ?
> > >
> > > There's no equivalent to userfaultfd for accesses to RAM made by a
> > > device.
> > > There's some potential for this to be doable with an IOMMU or the like,
> > > but:
> > >   a) IOMMUs and devices aren't currently happy at recovering from failures
> > >   b) the fragmentation you get during a postcopy probably isn't pretty
> > >      when you get to build IOMMU tables.
> >
> > To overcome such limitations one may adopt a prefault-and-pull scheme if
> > the vendor driver has the capability to track pending DMA buffers in the
> > migration process (with additional uAPI changes in VFIO or userfaultfd),
> > as discussed here:
> >
> > https://static.sched.com/hosted_files/kvmforum2019/7a/kvm-forum-postcopy-final.pdf
> 
> Did that get any further?

Not yet. As you may have seen in another thread, even the precopy side still
needs some time to land.

> 
> I can imagine that might be trickier for a GPU than a network card; the
> shaders in a GPU are pretty random as to what they go off and access, so
> I can't see how you could prefault.

Trickier but not impossible. For example, any page accessed by the shaders
must be mapped into the GPU page table, which could serve as a coarse-grained
interface for tracking in-use DMA buffers. That set could be large, though,
and prefaulting a large set of pages weakens the benefit of postcopy; there
may also be vendor-specific methods for finer-grained tracking. Nevertheless,
it does provide a postcopy option for many devices which don't support I/O
page faults, even if the actual feasibility is vendor specific.
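
To make the idea concrete, below is a rough sketch of such a coarse-grained
prefault-and-pull flow. It is purely illustrative C: the structures and the
vendor_get_inuse_ranges()/pull_page_from_source() helpers are hypothetical,
not existing QEMU code or VFIO uAPI.

/*
 * Illustrative sketch only -- none of this is existing QEMU code or
 * VFIO uAPI.  The vendor driver is assumed to be able to enumerate the
 * ranges currently mapped in the device (e.g. GPU) page table; the
 * destination prefaults those pages before resuming the device, and the
 * rest of guest RAM is pulled lazily via userfaultfd as usual.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096

struct dma_range {          /* one in-use DMA range reported by the driver */
    uint64_t iova;
    uint64_t len;
};

/* Hypothetical: enumerate ranges currently mapped in the device page table. */
static size_t vendor_get_inuse_ranges(struct dma_range *out, size_t max)
{
    if (max < 2)
        return 0;
    out[0] = (struct dma_range){ 0x100000, 16 * PAGE_SIZE };
    out[1] = (struct dma_range){ 0x800000,  4 * PAGE_SIZE };
    return 2;
}

/* Hypothetical: request one guest page from the source and mark it present,
 * i.e. what a userfaultfd fault handler would do, but done proactively. */
static void pull_page_from_source(uint64_t iova)
{
    printf("prefaulting page at iova 0x%" PRIx64 "\n", iova);
}

int main(void)
{
    struct dma_range ranges[16];
    size_t n = vendor_get_inuse_ranges(ranges, 16);

    /* Prefault everything the device might DMA into before the device is
     * resumed on the destination. */
    for (size_t i = 0; i < n; i++) {
        for (uint64_t off = 0; off < ranges[i].len; off += PAGE_SIZE) {
            pull_page_from_source(ranges[i].iova + off);
        }
    }
    return 0;
}

The cost of that prefault loop grows with how much the device has mapped,
which is exactly the concern above about weakening the benefit of postcopy.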

Thanks
Kevin
