Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-06 Thread Alex Williamson
On Sat, 7 Nov 2020 00:27:46 +0530
Kirti Wankhede  wrote:

> On 11/6/2020 2:56 AM, Alex Williamson wrote:
> > On Fri, 6 Nov 2020 02:22:11 +0530
> > Kirti Wankhede  wrote:
> >   
> >> On 11/6/2020 12:41 AM, Alex Williamson wrote:  
> >>> On Fri, 6 Nov 2020 00:29:36 +0530
> >>> Kirti Wankhede  wrote:
> >>>  
>  On 11/4/2020 6:15 PM, Alex Williamson wrote:  
> > On Wed, 4 Nov 2020 13:25:40 +0530
> > Kirti Wankhede  wrote:
> > 
> >> On 11/4/2020 1:57 AM, Alex Williamson wrote:  
> >>> On Wed, 4 Nov 2020 01:18:12 +0530
> >>> Kirti Wankhede  wrote:
> >>>
>  On 10/30/2020 12:35 AM, Alex Williamson wrote:  
> > On Thu, 29 Oct 2020 23:11:16 +0530
> > Kirti Wankhede  wrote:
> >   
> 
>  
>    
>  +System memory dirty pages tracking
>  +--
>  +
>  +A ``log_sync`` memory listener callback is added to mark system 
>  memory pages  
> >>>
> >>> s/is added to mark/marks those/
> >>>  
>  +as dirty which are used for DMA by VFIO device. Dirty pages 
>  bitmap is queried  
> >>>
> >>> s/by/by the/
> >>> s/Dirty/The dirty/
> >>>  
>  +per container. All pages pinned by vendor driver through 
>  vfio_pin_pages()  
> >>>
> >>> s/by/by the/
> >>>  
>  +external API have to be marked as dirty during migration. When 
>  there are CPU
>  +writes, CPU dirty page tracking can identify dirtied pages, but 
>  any page pinned
>  +by vendor driver can also be written by device. There is 
>  currently no device  
> >>>
> >>> s/by/by the/ (x2)
> >>>  
>  +which has hardware support for dirty page tracking. So all 
>  pages which are
>  +pinned by vendor driver are considered as dirty.
>  +Dirty pages are tracked when device is in stop-and-copy phase 
>  because if pages
>  +are marked dirty during pre-copy phase and content is 
>  transfered from source to
>  +destination, there is no way to know newly dirtied pages from 
>  the point they
>  +were copied earlier until device stops. To avoid repeated copy 
>  of same content,
>  +pinned pages are marked dirty only during stop-and-copy phase.  
> >>
> >>  
> >>> Let me take a quick stab at rewriting this paragraph (not sure if 
> >>> I
> >>> understood it correctly):
> >>>
> >>> "Dirty pages are tracked when the device is in the stop-and-copy 
> >>> phase.
> >>> During the pre-copy phase, it is not possible to distinguish a 
> >>> dirty
> >>> page that has been transferred from the source to the destination 
> >>> from
> >>> newly dirtied pages, which would lead to repeated copying of the 
> >>> same
> >>> content. Therefore, pinned pages are only marked dirty during the
> >>> stop-and-copy phase." ?
> >>>  
> >>
> >> I think above rephrase only talks about repeated copying in 
> >> pre-copy
> >> phase. Used "copied earlier until device stops" to indicate both
> >> pre-copy and stop-and-copy till device stops.  
> >
> >
> > Now I'm confused, I thought we had abandoned the idea that we can 
> > only
> > report pinned pages during stop-and-copy.  Doesn't the device needs 
> > to
> > expose its dirty memory footprint during the iterative phase 
> > regardless
> > of whether that causes repeat copies?  If QEMU iterates and sees 
> > that
> > all memory is still dirty, it may have transferred more data, but it
> > can actually predict if it can achieve its downtime tolerances.  
> > Which
> > is more important, less data transfer or predictability?  Thanks,
> >   
> 
>  Even if QEMU copies and transfers content of all sys mem pages during
>  pre-copy (worst case with IOMMU backed mdev device when its vendor
>  driver is not smart to pin pages explicitly and all sys mem pages are
>  marked dirty), then also its prediction about downtime tolerance will
>  not be correct, because during stop-and-copy again all pages need to 
>  be
>  copied as device can write to any of those pinned pages.  
> >>>
> >>> I think you're only reiterating my point.  If QEMU copies all of guest
> >>> memory during the iterative phase and each time it sees that all 
> 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-06 Thread Kirti Wankhede




On 11/6/2020 2:56 AM, Alex Williamson wrote:

On Fri, 6 Nov 2020 02:22:11 +0530
Kirti Wankhede  wrote:


On 11/6/2020 12:41 AM, Alex Williamson wrote:

On Fri, 6 Nov 2020 00:29:36 +0530
Kirti Wankhede  wrote:
   

On 11/4/2020 6:15 PM, Alex Williamson wrote:

On Wed, 4 Nov 2020 13:25:40 +0530
Kirti Wankhede  wrote:
  

On 11/4/2020 1:57 AM, Alex Williamson wrote:

On Wed, 4 Nov 2020 01:18:12 +0530
Kirti Wankhede  wrote:
 

On 10/30/2020 12:35 AM, Alex Williamson wrote:

On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:






+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/
   

+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/
   

+per container. All pages pinned by vendor driver through vfio_pin_pages()


s/by/by the/
   

+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device


s/by/by the/ (x2)
   

+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.


   

Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?
   


I think above rephrase only talks about repeated copying in pre-copy
phase. Used "copied earlier until device stops" to indicate both
pre-copy and stop-and-copy till device stops.



Now I'm confused, I thought we had abandoned the idea that we can only
report pinned pages during stop-and-copy.  Doesn't the device needs to
expose its dirty memory footprint during the iterative phase regardless
of whether that causes repeat copies?  If QEMU iterates and sees that
all memory is still dirty, it may have transferred more data, but it
can actually predict if it can achieve its downtime tolerances.  Which
is more important, less data transfer or predictability?  Thanks,



Even if QEMU copies and transfers content of all sys mem pages during
pre-copy (worst case with IOMMU backed mdev device when its vendor
driver is not smart to pin pages explicitly and all sys mem pages are
marked dirty), then also its prediction about downtime tolerance will
not be correct, because during stop-and-copy again all pages need to be
copied as device can write to any of those pinned pages.


I think you're only reiterating my point.  If QEMU copies all of guest
memory during the iterative phase and each time it sees that all memory
is dirty, such as if CPUs or devices (including assigned devices) are
dirtying pages as fast as it copies them (or continuously marks them
dirty), then QEMU can predict that downtime will require copying all
pages.


But as of now there is no way to know if device has dirtied pages during
iterative phase.



This claim doesn't make any sense, pinned pages are considered
persistently dirtied, during the iterative phase and while stopped.



If instead devices don't mark dirty pages until the VM is
stopped, then QEMU might iterate through memory copy and predict a short
downtime because not much memory is dirty, only to be surprised that
all of memory is suddenly dirty.  At that point it's too late, the VM
is already stopped, the predicted short downtime takes far longer than
expected.  This is exactly why we made the kernel interface mark pinned
pages persistently dirty when it was proposed that we only report
pinned pages once.  Thanks,
 


Since there is no way to know if device dirtied pages during iterative
phase, QEMU should query pinned pages in stop-and-copy phase.



As above, I don't believe this is true.

  

Whenever there will be hardware support or some software mechanism to
report pages dirtied by device then we will add a capability bit in
migration capability and based on that capability bit qemu/user space
app should decide to query dirty pages in iterative phase.



Yes, we could advertise support 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-05 Thread Alex Williamson
On Fri, 6 Nov 2020 02:22:11 +0530
Kirti Wankhede  wrote:

> On 11/6/2020 12:41 AM, Alex Williamson wrote:
> > On Fri, 6 Nov 2020 00:29:36 +0530
> > Kirti Wankhede  wrote:
> >   
> >> On 11/4/2020 6:15 PM, Alex Williamson wrote:  
> >>> On Wed, 4 Nov 2020 13:25:40 +0530
> >>> Kirti Wankhede  wrote:
> >>>  
>  On 11/4/2020 1:57 AM, Alex Williamson wrote:  
> > On Wed, 4 Nov 2020 01:18:12 +0530
> > Kirti Wankhede  wrote:
> > 
> >> On 10/30/2020 12:35 AM, Alex Williamson wrote:  
> >>> On Thu, 29 Oct 2020 23:11:16 +0530
> >>> Kirti Wankhede  wrote:
> >>>
> >>
> >> 
> >>
> >> +System memory dirty pages tracking
> >> +--
> >> +
> >> +A ``log_sync`` memory listener callback is added to mark system 
> >> memory pages  
> >
> > s/is added to mark/marks those/
> >   
> >> +as dirty which are used for DMA by VFIO device. Dirty pages 
> >> bitmap is queried  
> >
> > s/by/by the/
> > s/Dirty/The dirty/
> >   
> >> +per container. All pages pinned by vendor driver through 
> >> vfio_pin_pages()  
> >
> > s/by/by the/
> >   
> >> +external API have to be marked as dirty during migration. When 
> >> there are CPU
> >> +writes, CPU dirty page tracking can identify dirtied pages, but 
> >> any page pinned
> >> +by vendor driver can also be written by device. There is 
> >> currently no device  
> >
> > s/by/by the/ (x2)
> >   
> >> +which has hardware support for dirty page tracking. So all pages 
> >> which are
> >> +pinned by vendor driver are considered as dirty.
> >> +Dirty pages are tracked when device is in stop-and-copy phase 
> >> because if pages
> >> +are marked dirty during pre-copy phase and content is transfered 
> >> from source to
> >> +destination, there is no way to know newly dirtied pages from the 
> >> point they
> >> +were copied earlier until device stops. To avoid repeated copy of 
> >> same content,
> >> +pinned pages are marked dirty only during stop-and-copy phase.  
> 
>    
> > Let me take a quick stab at rewriting this paragraph (not sure if I
> > understood it correctly):
> >
> > "Dirty pages are tracked when the device is in the stop-and-copy 
> > phase.
> > During the pre-copy phase, it is not possible to distinguish a dirty
> > page that has been transferred from the source to the destination 
> > from
> > newly dirtied pages, which would lead to repeated copying of the 
> > same
> > content. Therefore, pinned pages are only marked dirty during the
> > stop-and-copy phase." ?
> >   
> 
>  I think above rephrase only talks about repeated copying in pre-copy
>  phase. Used "copied earlier until device stops" to indicate both
>  pre-copy and stop-and-copy till device stops.  
> >>>
> >>>
> >>> Now I'm confused, I thought we had abandoned the idea that we can only
> >>> report pinned pages during stop-and-copy.  Doesn't the device needs to
> >>> expose its dirty memory footprint during the iterative phase 
> >>> regardless
> >>> of whether that causes repeat copies?  If QEMU iterates and sees that
> >>> all memory is still dirty, it may have transferred more data, but it
> >>> can actually predict if it can achieve its downtime tolerances.  Which
> >>> is more important, less data transfer or predictability?  Thanks,
> >>>
> >>
> >> Even if QEMU copies and transfers content of all sys mem pages during
> >> pre-copy (worst case with IOMMU backed mdev device when its vendor
> >> driver is not smart to pin pages explicitly and all sys mem pages are
> >> marked dirty), then also its prediction about downtime tolerance will
> >> not be correct, because during stop-and-copy again all pages need to be
> >> copied as device can write to any of those pinned pages.  
> >
> > I think you're only reiterating my point.  If QEMU copies all of guest
> > memory during the iterative phase and each time it sees that all memory
> > is dirty, such as if CPUs or devices (including assigned devices) are
> > dirtying pages as fast as it copies them (or continuously marks them
> > dirty), then QEMU can predict that downtime will require copying all
> > pages.  
> 
>  But as of now there is no way to know if device has dirtied pages during
>  iterative phase.  
> >>>
> >>>
> >>> This claim doesn't make any sense, pinned pages are considered
> >>> persistently 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-05 Thread Kirti Wankhede




On 11/6/2020 12:41 AM, Alex Williamson wrote:

On Fri, 6 Nov 2020 00:29:36 +0530
Kirti Wankhede  wrote:


On 11/4/2020 6:15 PM, Alex Williamson wrote:

On Wed, 4 Nov 2020 13:25:40 +0530
Kirti Wankhede  wrote:
   

On 11/4/2020 1:57 AM, Alex Williamson wrote:

On Wed, 4 Nov 2020 01:18:12 +0530
Kirti Wankhede  wrote:
  

On 10/30/2020 12:35 AM, Alex Williamson wrote:

On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:
 



 

+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/


+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/


+per container. All pages pinned by vendor driver through vfio_pin_pages()


s/by/by the/


+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device


s/by/by the/ (x2)


+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.




Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?



I think above rephrase only talks about repeated copying in pre-copy
phase. Used "copied earlier until device stops" to indicate both
pre-copy and stop-and-copy till device stops.



Now I'm confused, I thought we had abandoned the idea that we can only
report pinned pages during stop-and-copy.  Doesn't the device needs to
expose its dirty memory footprint during the iterative phase regardless
of whether that causes repeat copies?  If QEMU iterates and sees that
all memory is still dirty, it may have transferred more data, but it
can actually predict if it can achieve its downtime tolerances.  Which
is more important, less data transfer or predictability?  Thanks,
 


Even if QEMU copies and transfers content of all sys mem pages during
pre-copy (worst case with IOMMU backed mdev device when its vendor
driver is not smart to pin pages explicitly and all sys mem pages are
marked dirty), then also its prediction about downtime tolerance will
not be correct, because during stop-and-copy again all pages need to be
copied as device can write to any of those pinned pages.


I think you're only reiterating my point.  If QEMU copies all of guest
memory during the iterative phase and each time it sees that all memory
is dirty, such as if CPUs or devices (including assigned devices) are
dirtying pages as fast as it copies them (or continuously marks them
dirty), then QEMU can predict that downtime will require copying all
pages.


But as of now there is no way to know if device has dirtied pages during
iterative phase.



This claim doesn't make any sense, pinned pages are considered
persistently dirtied, during the iterative phase and while stopped.

 

If instead devices don't mark dirty pages until the VM is
stopped, then QEMU might iterate through memory copy and predict a short
downtime because not much memory is dirty, only to be surprised that
all of memory is suddenly dirty.  At that point it's too late, the VM
is already stopped, the predicted short downtime takes far longer than
expected.  This is exactly why we made the kernel interface mark pinned
pages persistently dirty when it was proposed that we only report
pinned pages once.  Thanks,
  


Since there is no way to know if device dirtied pages during iterative
phase, QEMU should query pinned pages in stop-and-copy phase.



As above, I don't believe this is true.

   

Whenever there will be hardware support or some software mechanism to
report pages dirtied by device then we will add a capability bit in
migration capability and based on that capability bit qemu/user space
app should decide to query dirty pages in iterative phase.



Yes, we could advertise support for fine granularity dirty page
tracking, but I completely disagree that we should consider pinned
pages clean until suddenly exposing them as dirty 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-05 Thread Alex Williamson
On Fri, 6 Nov 2020 00:29:36 +0530
Kirti Wankhede  wrote:

> On 11/4/2020 6:15 PM, Alex Williamson wrote:
> > On Wed, 4 Nov 2020 13:25:40 +0530
> > Kirti Wankhede  wrote:
> >   
> >> On 11/4/2020 1:57 AM, Alex Williamson wrote:  
> >>> On Wed, 4 Nov 2020 01:18:12 +0530
> >>> Kirti Wankhede  wrote:
> >>>  
>  On 10/30/2020 12:35 AM, Alex Williamson wrote:  
> > On Thu, 29 Oct 2020 23:11:16 +0530
> > Kirti Wankhede  wrote:
> > 
> 
>  
>  
>  +System memory dirty pages tracking
>  +--
>  +
>  +A ``log_sync`` memory listener callback is added to mark system 
>  memory pages  
> >>>
> >>> s/is added to mark/marks those/
> >>>
>  +as dirty which are used for DMA by VFIO device. Dirty pages bitmap 
>  is queried  
> >>>
> >>> s/by/by the/
> >>> s/Dirty/The dirty/
> >>>
>  +per container. All pages pinned by vendor driver through 
>  vfio_pin_pages()  
> >>>
> >>> s/by/by the/
> >>>
>  +external API have to be marked as dirty during migration. When 
>  there are CPU
>  +writes, CPU dirty page tracking can identify dirtied pages, but any 
>  page pinned
>  +by vendor driver can also be written by device. There is currently 
>  no device  
> >>>
> >>> s/by/by the/ (x2)
> >>>
>  +which has hardware support for dirty page tracking. So all pages 
>  which are
>  +pinned by vendor driver are considered as dirty.
>  +Dirty pages are tracked when device is in stop-and-copy phase 
>  because if pages
>  +are marked dirty during pre-copy phase and content is transfered 
>  from source to
>  +destination, there is no way to know newly dirtied pages from the 
>  point they
>  +were copied earlier until device stops. To avoid repeated copy of 
>  same content,
>  +pinned pages are marked dirty only during stop-and-copy phase.  
> >>
> >>
> >>> Let me take a quick stab at rewriting this paragraph (not sure if I
> >>> understood it correctly):
> >>>
> >>> "Dirty pages are tracked when the device is in the stop-and-copy 
> >>> phase.
> >>> During the pre-copy phase, it is not possible to distinguish a dirty
> >>> page that has been transferred from the source to the destination from
> >>> newly dirtied pages, which would lead to repeated copying of the same
> >>> content. Therefore, pinned pages are only marked dirty during the
> >>> stop-and-copy phase." ?
> >>>
> >>
> >> I think above rephrase only talks about repeated copying in pre-copy
> >> phase. Used "copied earlier until device stops" to indicate both
> >> pre-copy and stop-and-copy till device stops.  
> >
> >
> > Now I'm confused, I thought we had abandoned the idea that we can only
> > report pinned pages during stop-and-copy.  Doesn't the device needs to
> > expose its dirty memory footprint during the iterative phase regardless
> > of whether that causes repeat copies?  If QEMU iterates and sees that
> > all memory is still dirty, it may have transferred more data, but it
> > can actually predict if it can achieve its downtime tolerances.  Which
> > is more important, less data transfer or predictability?  Thanks,
> > 
> 
>  Even if QEMU copies and transfers content of all sys mem pages during
>  pre-copy (worst case with IOMMU backed mdev device when its vendor
>  driver is not smart to pin pages explicitly and all sys mem pages are
>  marked dirty), then also its prediction about downtime tolerance will
>  not be correct, because during stop-and-copy again all pages need to be
>  copied as device can write to any of those pinned pages.  
> >>>
> >>> I think you're only reiterating my point.  If QEMU copies all of guest
> >>> memory during the iterative phase and each time it sees that all memory
> >>> is dirty, such as if CPUs or devices (including assigned devices) are
> >>> dirtying pages as fast as it copies them (or continuously marks them
> >>> dirty), then QEMU can predict that downtime will require copying all
> >>> pages.  
> >>
> >> But as of now there is no way to know if device has dirtied pages during
> >> iterative phase.  
> > 
> > 
> > This claim doesn't make any sense, pinned pages are considered
> > persistently dirtied, during the iterative phase and while stopped.
> > 
> > 
> >>> If instead devices don't mark dirty pages until the VM is
> >>> stopped, then QEMU might iterate through memory copy and predict a short
> >>> downtime because not much memory is dirty, only to be surprised that
> >>> all of memory is suddenly dirty.  At that point it's too late, the VM
> >>> is already stopped, the 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-05 Thread Kirti Wankhede




On 11/4/2020 6:15 PM, Alex Williamson wrote:

On Wed, 4 Nov 2020 13:25:40 +0530
Kirti Wankhede  wrote:


On 11/4/2020 1:57 AM, Alex Williamson wrote:

On Wed, 4 Nov 2020 01:18:12 +0530
Kirti Wankhede  wrote:
   

On 10/30/2020 12:35 AM, Alex Williamson wrote:

On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:
  



  

+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/
 

+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/
 

+per container. All pages pinned by vendor driver through vfio_pin_pages()


s/by/by the/
 

+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device


s/by/by the/ (x2)
 

+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.


 

Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?
 


I think above rephrase only talks about repeated copying in pre-copy
phase. Used "copied earlier until device stops" to indicate both
pre-copy and stop-and-copy till device stops.



Now I'm confused, I thought we had abandoned the idea that we can only
report pinned pages during stop-and-copy.  Doesn't the device needs to
expose its dirty memory footprint during the iterative phase regardless
of whether that causes repeat copies?  If QEMU iterates and sees that
all memory is still dirty, it may have transferred more data, but it
can actually predict if it can achieve its downtime tolerances.  Which
is more important, less data transfer or predictability?  Thanks,
  


Even if QEMU copies and transfers content of all sys mem pages during
pre-copy (worst case with IOMMU backed mdev device when its vendor
driver is not smart to pin pages explicitly and all sys mem pages are
marked dirty), then also its prediction about downtime tolerance will
not be correct, because during stop-and-copy again all pages need to be
copied as device can write to any of those pinned pages.


I think you're only reiterating my point.  If QEMU copies all of guest
memory during the iterative phase and each time it sees that all memory
is dirty, such as if CPUs or devices (including assigned devices) are
dirtying pages as fast as it copies them (or continuously marks them
dirty), then QEMU can predict that downtime will require copying all
pages.


But as of now there is no way to know if device has dirtied pages during
iterative phase.



This claim doesn't make any sense, pinned pages are considered
persistently dirtied, during the iterative phase and while stopped.

  

If instead devices don't mark dirty pages until the VM is
stopped, then QEMU might iterate through memory copy and predict a short
downtime because not much memory is dirty, only to be surprised that
all of memory is suddenly dirty.  At that point it's too late, the VM
is already stopped, the predicted short downtime takes far longer than
expected.  This is exactly why we made the kernel interface mark pinned
pages persistently dirty when it was proposed that we only report
pinned pages once.  Thanks,
   


Since there is no way to know if device dirtied pages during iterative
phase, QEMU should query pinned pages in stop-and-copy phase.



As above, I don't believe this is true.



Whenever there will be hardware support or some software mechanism to
report pages dirtied by device then we will add a capability bit in
migration capability and based on that capability bit qemu/user space
app should decide to query dirty pages in iterative phase.



Yes, we could advertise support for fine granularity dirty page
tracking, but I completely disagree that we should consider pinned
pages clean until suddenly exposing them as dirty once the VM is
stopped.  Thanks,



Should QEMU copy dirtied pages twice, during iterative phase and then 
when VM is stopped?


Thanks,
Kirti



Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-04 Thread Alex Williamson
On Wed, 4 Nov 2020 13:25:40 +0530
Kirti Wankhede  wrote:

> On 11/4/2020 1:57 AM, Alex Williamson wrote:
> > On Wed, 4 Nov 2020 01:18:12 +0530
> > Kirti Wankhede  wrote:
> >   
> >> On 10/30/2020 12:35 AM, Alex Williamson wrote:  
> >>> On Thu, 29 Oct 2020 23:11:16 +0530
> >>> Kirti Wankhede  wrote:
> >>>  
> >>
> >> 
> >>  
> >> +System memory dirty pages tracking
> >> +--
> >> +
> >> +A ``log_sync`` memory listener callback is added to mark system 
> >> memory pages  
> >
> > s/is added to mark/marks those/
> > 
> >> +as dirty which are used for DMA by VFIO device. Dirty pages bitmap is 
> >> queried  
> >
> > s/by/by the/
> > s/Dirty/The dirty/
> > 
> >> +per container. All pages pinned by vendor driver through 
> >> vfio_pin_pages()  
> >
> > s/by/by the/
> > 
> >> +external API have to be marked as dirty during migration. When there 
> >> are CPU
> >> +writes, CPU dirty page tracking can identify dirtied pages, but any 
> >> page pinned
> >> +by vendor driver can also be written by device. There is currently no 
> >> device  
> >
> > s/by/by the/ (x2)
> > 
> >> +which has hardware support for dirty page tracking. So all pages 
> >> which are
> >> +pinned by vendor driver are considered as dirty.
> >> +Dirty pages are tracked when device is in stop-and-copy phase because 
> >> if pages
> >> +are marked dirty during pre-copy phase and content is transfered from 
> >> source to
> >> +destination, there is no way to know newly dirtied pages from the 
> >> point they
> >> +were copied earlier until device stops. To avoid repeated copy of 
> >> same content,
> >> +pinned pages are marked dirty only during stop-and-copy phase.  
> 
>  
> > Let me take a quick stab at rewriting this paragraph (not sure if I
> > understood it correctly):
> >
> > "Dirty pages are tracked when the device is in the stop-and-copy phase.
> > During the pre-copy phase, it is not possible to distinguish a dirty
> > page that has been transferred from the source to the destination from
> > newly dirtied pages, which would lead to repeated copying of the same
> > content. Therefore, pinned pages are only marked dirty during the
> > stop-and-copy phase." ?
> > 
> 
>  I think above rephrase only talks about repeated copying in pre-copy
>  phase. Used "copied earlier until device stops" to indicate both
>  pre-copy and stop-and-copy till device stops.  
> >>>
> >>>
> >>> Now I'm confused, I thought we had abandoned the idea that we can only
> >>> report pinned pages during stop-and-copy.  Doesn't the device needs to
> >>> expose its dirty memory footprint during the iterative phase regardless
> >>> of whether that causes repeat copies?  If QEMU iterates and sees that
> >>> all memory is still dirty, it may have transferred more data, but it
> >>> can actually predict if it can achieve its downtime tolerances.  Which
> >>> is more important, less data transfer or predictability?  Thanks,
> >>>  
> >>
> >> Even if QEMU copies and transfers content of all sys mem pages during
> >> pre-copy (worst case with IOMMU backed mdev device when its vendor
> >> driver is not smart to pin pages explicitly and all sys mem pages are
> >> marked dirty), then also its prediction about downtime tolerance will
> >> not be correct, because during stop-and-copy again all pages need to be
> >> copied as device can write to any of those pinned pages.  
> > 
> > I think you're only reiterating my point.  If QEMU copies all of guest
> > memory during the iterative phase and each time it sees that all memory
> > is dirty, such as if CPUs or devices (including assigned devices) are
> > dirtying pages as fast as it copies them (or continuously marks them
> > dirty), then QEMU can predict that downtime will require copying all
> > pages.   
> 
> But as of now there is no way to know if device has dirtied pages during 
> iterative phase.


This claim doesn't make any sense, pinned pages are considered
persistently dirtied, during the iterative phase and while stopped.

 
> > If instead devices don't mark dirty pages until the VM is
> > stopped, then QEMU might iterate through memory copy and predict a short
> > downtime because not much memory is dirty, only to be surprised that
> > all of memory is suddenly dirty.  At that point it's too late, the VM
> > is already stopped, the predicted short downtime takes far longer than
> > expected.  This is exactly why we made the kernel interface mark pinned
> > pages persistently dirty when it was proposed that we only report
> > pinned pages once.  Thanks,
> >   
> 
> Since there is no way to know if device dirtied pages during iterative 
> phase, QEMU should query pinned pages in stop-and-copy phase.


As above, I don't 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-03 Thread Kirti Wankhede




On 11/4/2020 1:57 AM, Alex Williamson wrote:

On Wed, 4 Nov 2020 01:18:12 +0530
Kirti Wankhede  wrote:


On 10/30/2020 12:35 AM, Alex Williamson wrote:

On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:
   





+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/
  

+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/
  

+per container. All pages pinned by vendor driver through vfio_pin_pages()


s/by/by the/
  

+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device


s/by/by the/ (x2)
  

+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.


  

Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?
  


I think above rephrase only talks about repeated copying in pre-copy
phase. Used "copied earlier until device stops" to indicate both
pre-copy and stop-and-copy till device stops.



Now I'm confused, I thought we had abandoned the idea that we can only
report pinned pages during stop-and-copy.  Doesn't the device needs to
expose its dirty memory footprint during the iterative phase regardless
of whether that causes repeat copies?  If QEMU iterates and sees that
all memory is still dirty, it may have transferred more data, but it
can actually predict if it can achieve its downtime tolerances.  Which
is more important, less data transfer or predictability?  Thanks,
   


Even if QEMU copies and transfers content of all sys mem pages during
pre-copy (worst case with IOMMU backed mdev device when its vendor
driver is not smart to pin pages explicitly and all sys mem pages are
marked dirty), then also its prediction about downtime tolerance will
not be correct, because during stop-and-copy again all pages need to be
copied as device can write to any of those pinned pages.


I think you're only reiterating my point.  If QEMU copies all of guest
memory during the iterative phase and each time it sees that all memory
is dirty, such as if CPUs or devices (including assigned devices) are
dirtying pages as fast as it copies them (or continuously marks them
dirty), then QEMU can predict that downtime will require copying all
pages. 


But as of now there is no way to know if device has dirtied pages during 
iterative phase.



If instead devices don't mark dirty pages until the VM is
stopped, then QEMU might iterate through memory copy and predict a short
downtime because not much memory is dirty, only to be surprised that
all of memory is suddenly dirty.  At that point it's too late, the VM
is already stopped, the predicted short downtime takes far longer than
expected.  This is exactly why we made the kernel interface mark pinned
pages persistently dirty when it was proposed that we only report
pinned pages once.  Thanks,



Since there is no way to know if device dirtied pages during iterative 
phase, QEMU should query pinned pages in stop-and-copy phase.


Whenever there will be hardware support or some software mechanism to 
report pages dirtied by device then we will add a capability bit in 
migration capability and based on that capability bit qemu/user space 
app should decide to query dirty pages in iterative phase.


Thanks,
Kirti



Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-03 Thread Alex Williamson
On Wed, 4 Nov 2020 01:18:12 +0530
Kirti Wankhede  wrote:

> On 10/30/2020 12:35 AM, Alex Williamson wrote:
> > On Thu, 29 Oct 2020 23:11:16 +0530
> > Kirti Wankhede  wrote:
> >   
> 
> 
> 
>  +System memory dirty pages tracking
>  +--
>  +
>  +A ``log_sync`` memory listener callback is added to mark system memory 
>  pages  
> >>>
> >>> s/is added to mark/marks those/
> >>>  
>  +as dirty which are used for DMA by VFIO device. Dirty pages bitmap is 
>  queried  
> >>>
> >>> s/by/by the/
> >>> s/Dirty/The dirty/
> >>>  
>  +per container. All pages pinned by vendor driver through 
>  vfio_pin_pages()  
> >>>
> >>> s/by/by the/
> >>>  
>  +external API have to be marked as dirty during migration. When there 
>  are CPU
>  +writes, CPU dirty page tracking can identify dirtied pages, but any 
>  page pinned
>  +by vendor driver can also be written by device. There is currently no 
>  device  
> >>>
> >>> s/by/by the/ (x2)
> >>>  
>  +which has hardware support for dirty page tracking. So all pages which 
>  are
>  +pinned by vendor driver are considered as dirty.
>  +Dirty pages are tracked when device is in stop-and-copy phase because 
>  if pages
>  +are marked dirty during pre-copy phase and content is transfered from 
>  source to
>  +destination, there is no way to know newly dirtied pages from the point 
>  they
>  +were copied earlier until device stops. To avoid repeated copy of same 
>  content,
>  +pinned pages are marked dirty only during stop-and-copy phase.  
> >>
> >>  
> >>> Let me take a quick stab at rewriting this paragraph (not sure if I
> >>> understood it correctly):
> >>>
> >>> "Dirty pages are tracked when the device is in the stop-and-copy phase.
> >>> During the pre-copy phase, it is not possible to distinguish a dirty
> >>> page that has been transferred from the source to the destination from
> >>> newly dirtied pages, which would lead to repeated copying of the same
> >>> content. Therefore, pinned pages are only marked dirty during the
> >>> stop-and-copy phase." ?
> >>>  
> >>
> >> I think above rephrase only talks about repeated copying in pre-copy
> >> phase. Used "copied earlier until device stops" to indicate both
> >> pre-copy and stop-and-copy till device stops.  
> > 
> > 
> > Now I'm confused, I thought we had abandoned the idea that we can only
> > report pinned pages during stop-and-copy.  Doesn't the device needs to
> > expose its dirty memory footprint during the iterative phase regardless
> > of whether that causes repeat copies?  If QEMU iterates and sees that
> > all memory is still dirty, it may have transferred more data, but it
> > can actually predict if it can achieve its downtime tolerances.  Which
> > is more important, less data transfer or predictability?  Thanks,
> >   
> 
> Even if QEMU copies and transfers content of all sys mem pages during 
> pre-copy (worst case with IOMMU backed mdev device when its vendor 
> driver is not smart to pin pages explicitly and all sys mem pages are 
> marked dirty), then also its prediction about downtime tolerance will 
> not be correct, because during stop-and-copy again all pages need to be 
> copied as device can write to any of those pinned pages.

I think you're only reiterating my point.  If QEMU copies all of guest
memory during the iterative phase and each time it sees that all memory
is dirty, such as if CPUs or devices (including assigned devices) are
dirtying pages as fast as it copies them (or continuously marks them
dirty), then QEMU can predict that downtime will require copying all
pages.  If instead devices don't mark dirty pages until the VM is
stopped, then QEMU might iterate through memory copy and predict a short
downtime because not much memory is dirty, only to be surprised that
all of memory is suddenly dirty.  At that point it's too late, the VM
is already stopped, the predicted short downtime takes far longer than
expected.  This is exactly why we made the kernel interface mark pinned
pages persistently dirty when it was proposed that we only report
pinned pages once.  Thanks,

Alex




Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-11-03 Thread Kirti Wankhede




On 10/30/2020 12:35 AM, Alex Williamson wrote:

On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:






+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/
   

+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/
   

+per container. All pages pinned by vendor driver through vfio_pin_pages()


s/by/by the/
   

+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device


s/by/by the/ (x2)
   

+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.




Let me take a quick stab at rewriting this paragraph (not sure if I
understood it correctly):

"Dirty pages are tracked when the device is in the stop-and-copy phase.
During the pre-copy phase, it is not possible to distinguish a dirty
page that has been transferred from the source to the destination from
newly dirtied pages, which would lead to repeated copying of the same
content. Therefore, pinned pages are only marked dirty during the
stop-and-copy phase." ?
   


I think above rephrase only talks about repeated copying in pre-copy
phase. Used "copied earlier until device stops" to indicate both
pre-copy and stop-and-copy till device stops.



Now I'm confused, I thought we had abandoned the idea that we can only
report pinned pages during stop-and-copy.  Doesn't the device needs to
expose its dirty memory footprint during the iterative phase regardless
of whether that causes repeat copies?  If QEMU iterates and sees that
all memory is still dirty, it may have transferred more data, but it
can actually predict if it can achieve its downtime tolerances.  Which
is more important, less data transfer or predictability?  Thanks,



Even if QEMU copies and transfers content of all sys mem pages during 
pre-copy (worst case with IOMMU backed mdev device when its vendor 
driver is not smart to pin pages explicitly and all sys mem pages are 
marked dirty), then also its prediction about downtime tolerance will 
not be correct, because during stop-and-copy again all pages need to be 
copied as device can write to any of those pinned pages.


Thanks,
Kirti




Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-10-29 Thread Cornelia Huck
On Thu, 29 Oct 2020 13:05:19 -0600
Alex Williamson  wrote:

> On Thu, 29 Oct 2020 23:11:16 +0530
> Kirti Wankhede  wrote:
> 
> > Thanks for corrections Cornelia. I had done the corrections you 
> > suggested I had not replied, see my comments on couple of places where I 
> > disagree.
> > 
> > 
> > On 10/29/2020 5:22 PM, Cornelia Huck wrote:  
> > > On Thu, 29 Oct 2020 11:23:11 +0530
> > > Kirti Wankhede  wrote:

> > >> +Detailed description of UAPI for VFIO device for migration is in the 
> > >> comment
> > >> +above ``vfio_device_migration_info`` structure definition in header file
> > >> +linux-headers/linux/vfio.h.
> > > 
> > > I think I'd copy that to this file. If I'm looking at the
> > > documentation, I'd rather not go hunting for source code to find out
> > > what structure you are talking about. Plus, as it's UAPI, I don't
> > > expect it to change much, so it should be easy to keep the definitions
> > > in sync (famous last words).
> > > 
> > 
> > I feel its duplication of documentation. I would like to know others 
> > views as well.  
> 
> 
> TBH I don't think it's necessary here either, we're documenting the
> QEMU interaction with the uAPI, the uAPI itself is documented in the
> kernel header.  I don't think it would be unreasonable to ask someone
> trying to understand this to look at both sources together.

Ok, I can live with that. But let me correct some nits :)

"A detailed description of the UAPI for VFIO device migration can be
found in the comment for the ``vfio_device_migration_info`` structure
in the header file linux-headers/linux/vfio.h."

(...)

> > >> +remaining data for VFIO device till pending_bytes returned by vendor 
> > >> driver
> > >> +is zero.
> > > 
> > > "...and interactively copies the remaining data for the VFIO device
> > > until the vendor driver indicates that no data remains (pending_bytes
> > > is zero)." ?  
> 
> 
> Connie, was that intentional to replace "iteratively" with
> "interactively"?  Iteratively seems correct to me.

Eh, should be "iteratively", of course. Too many meetings :/




Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-10-29 Thread Alex Williamson
On Thu, 29 Oct 2020 23:11:16 +0530
Kirti Wankhede  wrote:

> Thanks for corrections Cornelia. I had done the corrections you 
> suggested I had not replied, see my comments on couple of places where I 
> disagree.
> 
> 
> On 10/29/2020 5:22 PM, Cornelia Huck wrote:
> > On Thu, 29 Oct 2020 11:23:11 +0530
> > Kirti Wankhede  wrote:
> >   
> >> Document interfaces used for VFIO device migration. Added flow of state
> >> changes during live migration with VFIO device.
> >>
> >> Signed-off-by: Kirti Wankhede 
> >> ---
> >>   MAINTAINERS   |   1 +
> >>   docs/devel/vfio-migration.rst | 119 
> >> ++  
> > 
> > You probably want to include this into the Developer's Guide via
> > index.rst.
> >   
> 
> Ok.
> 
> >>   2 files changed, 120 insertions(+)
> >>   create mode 100644 docs/devel/vfio-migration.rst
> >>
> >> diff --git a/MAINTAINERS b/MAINTAINERS
> >> index 6a197bd358d6..6f3fcffc6b3d 100644
> >> --- a/MAINTAINERS
> >> +++ b/MAINTAINERS
> >> @@ -1728,6 +1728,7 @@ M: Alex Williamson 
> >>   S: Supported
> >>   F: hw/vfio/*
> >>   F: include/hw/vfio/
> >> +F: docs/devel/vfio-migration.rst
> >>   
> >>   vfio-ccw
> >>   M: Cornelia Huck 
> >> diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
> >> new file mode 100644
> >> index ..dab9127825e4
> >> --- /dev/null
> >> +++ b/docs/devel/vfio-migration.rst
> >> @@ -0,0 +1,119 @@
> >> +=
> >> +VFIO device Migration
> >> +=
> >> +
> >> +VFIO devices use iterative approach for migration because certain VFIO 
> >> devices  
> > 
> > s/use/use an/ ?
> >   
> >> +(e.g. GPU) have large amount of data to be transfered. The iterative 
> >> pre-copy
> >> +phase of migration allows for the guest to continue whilst the VFIO 
> >> device state
> >> +is transferred to destination, this helps to reduce the total downtime of 
> >> the  
> > 
> > s/to destination,/to the destination;/
> >   
> >> +VM. VFIO devices can choose to skip the pre-copy phase of migration by 
> >> returning
> >> +pending_bytes as zero during pre-copy phase.  
> > 
> > s/during/during the/
> >   
> >> +
> >> +Detailed description of UAPI for VFIO device for migration is in the 
> >> comment
> >> +above ``vfio_device_migration_info`` structure definition in header file
> >> +linux-headers/linux/vfio.h.  
> > 
> > I think I'd copy that to this file. If I'm looking at the
> > documentation, I'd rather not go hunting for source code to find out
> > what structure you are talking about. Plus, as it's UAPI, I don't
> > expect it to change much, so it should be easy to keep the definitions
> > in sync (famous last words).
> >   
> 
> I feel its duplication of documentation. I would like to know others 
> views as well.


TBH I don't think it's necessary here either, we're documenting the
QEMU interaction with the uAPI, the uAPI itself is documented in the
kernel header.  I don't think it would be unreasonable to ask someone
trying to understand this to look at both sources together.


> >> +
> >> +VFIO device hooks for iterative approach:
> >> +-  A ``save_setup`` function that setup migration region, sets _SAVING 
> >> flag in  
> > 
> > s/setup/sets up the/
> > s/in/in the/
> >   
> >> +VFIO device state and inform VFIO IOMMU module to start dirty page 
> >> tracking.  
> > 
> > s/inform/informs the/
> >   
> >> +
> >> +- A ``load_setup`` function that setup migration region on the 
> >> destination and  
> > 
> > s/setup/sets up the/
> >   
> >> +sets _RESUMING flag in VFIO device state.  
> > 
> > s/in/in the/
> >   
> >> +
> >> +- A ``save_live_pending`` function that reads pending_bytes from vendor 
> >> driver
> >> +that indicate how much more data the vendor driver yet to save for the 
> >> VFIO
> >> +device.  
> > 
> > "A ``save_live_pending`` function that reads pending_bytes from the
> > vendor driver, which indicates the amount of data that the vendor
> > driver has yet to save for the VFIO device." ?
> >   
> >> +
> >> +- A ``save_live_iterate`` function that reads VFIO device's data from 
> >> vendor  
> > 
> > s/reads/reads the/
> > s/from/from the/
> >   
> >> +driver through migration region during iterative phase.  
> > 
> > s/through/through the/
> >   
> >> +
> >> +- A ``save_live_complete_precopy`` function that resets _RUNNING flag 
> >> from VFIO  
> > 
> > s/from/from the/
> >   
> >> +device state, saves device config space, if any, and iteratively copies  
> > 
> > s/saves/saves the/
> >   
> >> +remaining data for VFIO device till pending_bytes returned by vendor 
> >> driver
> >> +is zero.  
> > 
> > "...and interactively copies the remaining data for the VFIO device
> > until the vendor driver indicates that no data remains (pending_bytes
> > is zero)." ?


Connie, was that intentional to replace "iteratively" with
"interactively"?  Iteratively seems correct to me.


> >   
> >> +
> >> +- A ``load_state`` function loads config section and data sections 
> 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-10-29 Thread Kirti Wankhede
Thanks for corrections Cornelia. I had done the corrections you 
suggested I had not replied, see my comments on couple of places where I 
disagree.



On 10/29/2020 5:22 PM, Cornelia Huck wrote:

On Thu, 29 Oct 2020 11:23:11 +0530
Kirti Wankhede  wrote:


Document interfaces used for VFIO device migration. Added flow of state
changes during live migration with VFIO device.

Signed-off-by: Kirti Wankhede 
---
  MAINTAINERS   |   1 +
  docs/devel/vfio-migration.rst | 119 ++


You probably want to include this into the Developer's Guide via
index.rst.



Ok.


  2 files changed, 120 insertions(+)
  create mode 100644 docs/devel/vfio-migration.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 6a197bd358d6..6f3fcffc6b3d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1728,6 +1728,7 @@ M: Alex Williamson 
  S: Supported
  F: hw/vfio/*
  F: include/hw/vfio/
+F: docs/devel/vfio-migration.rst
  
  vfio-ccw

  M: Cornelia Huck 
diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
new file mode 100644
index ..dab9127825e4
--- /dev/null
+++ b/docs/devel/vfio-migration.rst
@@ -0,0 +1,119 @@
+=
+VFIO device Migration
+=
+
+VFIO devices use iterative approach for migration because certain VFIO devices


s/use/use an/ ?


+(e.g. GPU) have large amount of data to be transfered. The iterative pre-copy
+phase of migration allows for the guest to continue whilst the VFIO device 
state
+is transferred to destination, this helps to reduce the total downtime of the


s/to destination,/to the destination;/


+VM. VFIO devices can choose to skip the pre-copy phase of migration by 
returning
+pending_bytes as zero during pre-copy phase.


s/during/during the/


+
+Detailed description of UAPI for VFIO device for migration is in the comment
+above ``vfio_device_migration_info`` structure definition in header file
+linux-headers/linux/vfio.h.


I think I'd copy that to this file. If I'm looking at the
documentation, I'd rather not go hunting for source code to find out
what structure you are talking about. Plus, as it's UAPI, I don't
expect it to change much, so it should be easy to keep the definitions
in sync (famous last words).



I feel its duplication of documentation. I would like to know others 
views as well.



+
+VFIO device hooks for iterative approach:
+-  A ``save_setup`` function that setup migration region, sets _SAVING flag in


s/setup/sets up the/
s/in/in the/


+VFIO device state and inform VFIO IOMMU module to start dirty page tracking.


s/inform/informs the/


+
+- A ``load_setup`` function that setup migration region on the destination and


s/setup/sets up the/


+sets _RESUMING flag in VFIO device state.


s/in/in the/


+
+- A ``save_live_pending`` function that reads pending_bytes from vendor driver
+that indicate how much more data the vendor driver yet to save for the VFIO
+device.


"A ``save_live_pending`` function that reads pending_bytes from the
vendor driver, which indicates the amount of data that the vendor
driver has yet to save for the VFIO device." ?


+
+- A ``save_live_iterate`` function that reads VFIO device's data from vendor


s/reads/reads the/
s/from/from the/


+driver through migration region during iterative phase.


s/through/through the/


+
+- A ``save_live_complete_precopy`` function that resets _RUNNING flag from VFIO


s/from/from the/


+device state, saves device config space, if any, and iteratively copies


s/saves/saves the/


+remaining data for VFIO device till pending_bytes returned by vendor driver
+is zero.


"...and interactively copies the remaining data for the VFIO device
until the vendor driver indicates that no data remains (pending_bytes
is zero)." ?


+
+- A ``load_state`` function loads config section and data sections generated by
+above save functions.


"A ``load_state`` function that loads the config section and the data
sections that are generated by the save functions above." ?


+
+- ``cleanup`` functions for both save and load that unmap migration region.


..."that perform any migration-related cleanup, including unmapping the
migration region." ?


+
+VM state change handler is registered to change VFIO device state based on VM
+state change.


"A VM state change handler is registered to change the VFIO device
state when the VM state changes." ?


+
+Similarly, a migration state change notifier is added to get a notification on


s/added/registered/ ?


+migration state change. These states are translated to VFIO device state and
+conveyed to vendor driver.
+
+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages


s/is added to mark/marks those/


+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried


s/by/by the/
s/Dirty/The dirty/


+per container. All pages pinned by vendor driver through 

Re: [PATCH v1] docs/devel: Add VFIO device migration documentation

2020-10-29 Thread Cornelia Huck
On Thu, 29 Oct 2020 11:23:11 +0530
Kirti Wankhede  wrote:

> Document interfaces used for VFIO device migration. Added flow of state
> changes during live migration with VFIO device.
> 
> Signed-off-by: Kirti Wankhede 
> ---
>  MAINTAINERS   |   1 +
>  docs/devel/vfio-migration.rst | 119 
> ++

You probably want to include this into the Developer's Guide via
index.rst.

>  2 files changed, 120 insertions(+)
>  create mode 100644 docs/devel/vfio-migration.rst
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6a197bd358d6..6f3fcffc6b3d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1728,6 +1728,7 @@ M: Alex Williamson 
>  S: Supported
>  F: hw/vfio/*
>  F: include/hw/vfio/
> +F: docs/devel/vfio-migration.rst
>  
>  vfio-ccw
>  M: Cornelia Huck 
> diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
> new file mode 100644
> index ..dab9127825e4
> --- /dev/null
> +++ b/docs/devel/vfio-migration.rst
> @@ -0,0 +1,119 @@
> +=
> +VFIO device Migration
> +=
> +
> +VFIO devices use iterative approach for migration because certain VFIO 
> devices

s/use/use an/ ?

> +(e.g. GPU) have large amount of data to be transfered. The iterative pre-copy
> +phase of migration allows for the guest to continue whilst the VFIO device 
> state
> +is transferred to destination, this helps to reduce the total downtime of the

s/to destination,/to the destination;/

> +VM. VFIO devices can choose to skip the pre-copy phase of migration by 
> returning
> +pending_bytes as zero during pre-copy phase.

s/during/during the/

> +
> +Detailed description of UAPI for VFIO device for migration is in the comment
> +above ``vfio_device_migration_info`` structure definition in header file
> +linux-headers/linux/vfio.h.

I think I'd copy that to this file. If I'm looking at the
documentation, I'd rather not go hunting for source code to find out
what structure you are talking about. Plus, as it's UAPI, I don't
expect it to change much, so it should be easy to keep the definitions
in sync (famous last words).

> +
> +VFIO device hooks for iterative approach:
> +-  A ``save_setup`` function that setup migration region, sets _SAVING flag 
> in

s/setup/sets up the/
s/in/in the/

> +VFIO device state and inform VFIO IOMMU module to start dirty page tracking.

s/inform/informs the/

> +
> +- A ``load_setup`` function that setup migration region on the destination 
> and

s/setup/sets up the/

> +sets _RESUMING flag in VFIO device state.

s/in/in the/

> +
> +- A ``save_live_pending`` function that reads pending_bytes from vendor 
> driver
> +that indicate how much more data the vendor driver yet to save for the VFIO
> +device.

"A ``save_live_pending`` function that reads pending_bytes from the
vendor driver, which indicates the amount of data that the vendor
driver has yet to save for the VFIO device." ?

> +
> +- A ``save_live_iterate`` function that reads VFIO device's data from vendor

s/reads/reads the/
s/from/from the/

> +driver through migration region during iterative phase.

s/through/through the/

> +
> +- A ``save_live_complete_precopy`` function that resets _RUNNING flag from 
> VFIO

s/from/from the/

> +device state, saves device config space, if any, and iteratively copies

s/saves/saves the/

> +remaining data for VFIO device till pending_bytes returned by vendor driver
> +is zero.

"...and interactively copies the remaining data for the VFIO device
until the vendor driver indicates that no data remains (pending_bytes
is zero)." ?

> +
> +- A ``load_state`` function loads config section and data sections generated 
> by
> +above save functions.

"A ``load_state`` function that loads the config section and the data
sections that are generated by the save functions above." ?

> +
> +- ``cleanup`` functions for both save and load that unmap migration region.

..."that perform any migration-related cleanup, including unmapping the
migration region." ?

> +
> +VM state change handler is registered to change VFIO device state based on VM
> +state change.

"A VM state change handler is registered to change the VFIO device
state when the VM state changes." ?

> +
> +Similarly, a migration state change notifier is added to get a notification 
> on

s/added/registered/ ?

> +migration state change. These states are translated to VFIO device state and
> +conveyed to vendor driver.
> +
> +System memory dirty pages tracking
> +--
> +
> +A ``log_sync`` memory listener callback is added to mark system memory pages

s/is added to mark/marks those/ 

> +as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried

s/by/by the/
s/Dirty/The dirty/

> +per container. All pages pinned by vendor driver through vfio_pin_pages()

s/by/by the/

> +external API have to be marked as dirty during migration. When there are CPU
> +writes, CPU dirty page tracking can identify 

[PATCH v1] docs/devel: Add VFIO device migration documentation

2020-10-29 Thread Kirti Wankhede
Document interfaces used for VFIO device migration. Added flow of state
changes during live migration with VFIO device.

Signed-off-by: Kirti Wankhede 
---
 MAINTAINERS   |   1 +
 docs/devel/vfio-migration.rst | 119 ++
 2 files changed, 120 insertions(+)
 create mode 100644 docs/devel/vfio-migration.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 6a197bd358d6..6f3fcffc6b3d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1728,6 +1728,7 @@ M: Alex Williamson 
 S: Supported
 F: hw/vfio/*
 F: include/hw/vfio/
+F: docs/devel/vfio-migration.rst
 
 vfio-ccw
 M: Cornelia Huck 
diff --git a/docs/devel/vfio-migration.rst b/docs/devel/vfio-migration.rst
new file mode 100644
index ..dab9127825e4
--- /dev/null
+++ b/docs/devel/vfio-migration.rst
@@ -0,0 +1,119 @@
+=
+VFIO device Migration
+=
+
+VFIO devices use iterative approach for migration because certain VFIO devices
+(e.g. GPU) have large amount of data to be transfered. The iterative pre-copy
+phase of migration allows for the guest to continue whilst the VFIO device 
state
+is transferred to destination, this helps to reduce the total downtime of the
+VM. VFIO devices can choose to skip the pre-copy phase of migration by 
returning
+pending_bytes as zero during pre-copy phase.
+
+Detailed description of UAPI for VFIO device for migration is in the comment
+above ``vfio_device_migration_info`` structure definition in header file
+linux-headers/linux/vfio.h.
+
+VFIO device hooks for iterative approach:
+-  A ``save_setup`` function that setup migration region, sets _SAVING flag in
+VFIO device state and inform VFIO IOMMU module to start dirty page tracking.
+
+- A ``load_setup`` function that setup migration region on the destination and
+sets _RESUMING flag in VFIO device state.
+
+- A ``save_live_pending`` function that reads pending_bytes from vendor driver
+that indicate how much more data the vendor driver yet to save for the VFIO
+device.
+
+- A ``save_live_iterate`` function that reads VFIO device's data from vendor
+driver through migration region during iterative phase.
+
+- A ``save_live_complete_precopy`` function that resets _RUNNING flag from VFIO
+device state, saves device config space, if any, and iteratively copies
+remaining data for VFIO device till pending_bytes returned by vendor driver
+is zero.
+
+- A ``load_state`` function loads config section and data sections generated by
+above save functions.
+
+- ``cleanup`` functions for both save and load that unmap migration region.
+
+VM state change handler is registered to change VFIO device state based on VM
+state change.
+
+Similarly, a migration state change notifier is added to get a notification on
+migration state change. These states are translated to VFIO device state and
+conveyed to vendor driver.
+
+System memory dirty pages tracking
+--
+
+A ``log_sync`` memory listener callback is added to mark system memory pages
+as dirty which are used for DMA by VFIO device. Dirty pages bitmap is queried
+per container. All pages pinned by vendor driver through vfio_pin_pages()
+external API have to be marked as dirty during migration. When there are CPU
+writes, CPU dirty page tracking can identify dirtied pages, but any page pinned
+by vendor driver can also be written by device. There is currently no device
+which has hardware support for dirty page tracking. So all pages which are
+pinned by vendor driver are considered as dirty.
+Dirty pages are tracked when device is in stop-and-copy phase because if pages
+are marked dirty during pre-copy phase and content is transfered from source to
+destination, there is no way to know newly dirtied pages from the point they
+were copied earlier until device stops. To avoid repeated copy of same content,
+pinned pages are marked dirty only during stop-and-copy phase.
+
+System memory dirty pages tracking when vIOMMU is enabled
+-
+With vIOMMU, IO virtual address range can get unmapped while in pre-copy phase
+of migration. In that case, unmap ioctl returns pages pinned in that range and
+QEMU reports corresponding guest physical pages dirty.
+During stop-and-copy phase, an IOMMU notifier is used to get a callback for
+mapped pages and then dirty pages bitmap is fetched from VFIO IOMMU modules for
+those mapped ranges.
+
+Flow of state changes during Live migration
+===
+Below is the flow of state change during live migration where states in 
brackets
+represent VM state, migration state and VFIO device state as:
+(VM state, MIGRATION_STATUS, VFIO_DEVICE_STATE)
+
+Live migration save path
+
+QEMU normal running state
+(RUNNING, _NONE, _RUNNING)
+|
+   migrate_init spawns