On Fri, Jul 28, 2023 at 3:49 PM Markus Armbruster <arm...@redhat.com> wrote:

> ~hyman <hy...@git.sr.ht> writes:
>
> > From: Hyman Huang(黄勇) <yong.hu...@smartx.com>
> >
> > Reformat migration doc comments to conform to current conventions
> > as commit a937b6aa739 (qapi: Reformat doc comments to conform to
> > current conventions).
> >
> > Also, craft the dirty-limit capability comment.
>
> Split into two patches?
>
Ok.

>
> > Signed-off-by: Hyman Huang(黄勇) <yong.hu...@smartx.com>
> > ---
> >  qapi/migration.json | 66 +++++++++++++++++++++------------------------
> >  1 file changed, 31 insertions(+), 35 deletions(-)
> >
> > diff --git a/qapi/migration.json b/qapi/migration.json
> > index 6b49593d2f..5d5649c885 100644
> > --- a/qapi/migration.json
> > +++ b/qapi/migration.json
> > @@ -258,17 +258,17 @@
> >  #     blocked.  Present and non-empty when migration is blocked.
> >  #     (since 6.0)
> >  #
> > -# @dirty-limit-throttle-time-per-round: Maximum throttle time (in microseconds) of virtual
> > -#                                       CPUs each dirty ring full round, which shows how
> > -#                                       MigrationCapability dirty-limit affects the guest
> > -#                                       during live migration. (since 8.1)
> > -#
> > -# @dirty-limit-ring-full-time: Estimated average dirty ring full time (in microseconds)
> > -#                              each dirty ring full round, note that the value equals
> > -#                              dirty ring memory size divided by average dirty page rate
> > -#                              of virtual CPU, which can be used to observe the average
> > -#                              memory load of virtual CPU indirectly. Note that zero
> > -#                              means guest doesn't dirty memory (since 8.1)
> > +# @dirty-limit-throttle-time-per-round: Maximum throttle time
> > +#     (in microseconds) of virtual CPUs each dirty ring full round,
> > +#     which shows how MigrationCapability dirty-limit affects the
>
> Perhaps "for each ... round"?
>
> Remind me, what's a "dirty ring full round"?
>
Every time the x86 PML buffer fills up with GPAs, the hardware raises an
exception and the guest exits to KVM, then to QEMU.  QEMU handles the exit
by reaping the dirty ring to collect the dirty page info, then re-enters
KVM, which empties the PML buffer and re-enters the guest.  I call this a
"dirty ring full round", but the term may not be straightforward; please
help me describe it better, thanks.
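For context, these per-round statistics are the MigrationInfo members
documented in the hunk above; while dirty-limit is active they would show
up in the query-migrate reply, roughly like this (field values are
illustrative, not from a real run):

```json
{ "execute": "query-migrate" }
{ "return": {
    "status": "active",
    "dirty-limit-throttle-time-per-round": 500,
    "dirty-limit-ring-full-time": 2000 } }
```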


> > +#     guest during live migration.  (Since 8.1)
> > +#
> > +# @dirty-limit-ring-full-time: Estimated average dirty ring full
> > +#     time (in microseconds) each dirty ring full round. The value
>
> Likewise.
>
> > +#     equals dirty ring memory size divided by average dirty page
>
> "the dirty ring memory size divided by the average ..."
>
> > +#     rate of the virtual CPU, which can be used to observe the
> > +#     average memory load of the virtual CPU indirectly. Note that
> > +#     zero means guest doesn't dirty memory.  (Since 8.1)
>
> Two spaces between sentences for consistency.
>
> >  #
> >  # Since: 0.14
> >  ##
> > @@ -519,15 +519,11 @@
> >  #     are present.  'return-path' capability must be enabled to use
> >  #     it.  (since 8.1)
> >  #
> > -# @dirty-limit: If enabled, migration will use the dirty-limit algo to
> > -#               throttle down guest instead of auto-converge algo.
> > -#               Throttle algo only works when vCPU's dirtyrate greater
> > -#               than 'vcpu-dirty-limit', read processes in guest os
> > -#               aren't penalized any more, so this algo can improve
> > -#               performance of vCPU during live migration. This is an
> > -#               optional performance feature and should not affect the
> > -#               correctness of the existing auto-converge algo.
> > -#               (since 8.1)
> > +# @dirty-limit: If enabled, migration will throttle vCPUs as needed to
> > +#     keep their dirty page rate within @vcpu-dirty-limit.  This can
> > +#     improve responsiveness of large guests during live migration,
> > +#     and can result in more stable read performance.  Requires KVM
> > +#     with accelerator property "dirty-ring-size" set.  (Since 8.1)
> >  #
> >  # Features:
> >  #
> > @@ -822,17 +818,17 @@
> >  #     Nodes are mapped to their block device name if there is one, and
> >  #     to their node name otherwise.  (Since 5.2)
> >  #
> > -# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
> > -#                             live migration. Should be in the range 1 to 1000ms,
> > -#                             defaults to 1000ms. (Since 8.1)
> > +# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty
> > +#     limit during  live migration. Should be in the range 1 to 1000ms,
>
> Single space in "during live", and two spaces between sentences, please.
>
> > +#     defaults to 1000ms.  (Since 8.1)
>
> I dislike that we mix milli- and microseconds.  Too late to fix, I'm
> afraid.
>
> Remind me, what's the "periodic time of dirty limit during live
> migration"?
>
It is the period over which QEMU calculates the dirty page rate for each
vCPU; QEMU then compares that rate against the dirty-limit value.  If the
period is too short, CPU overhead increases; if it is too long, the result
may not be precise.  So we make it configurable.
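For what it's worth, both knobs can be tuned together via
migrate-set-parameters before starting the migration; a sketch (values
illustrative only):

```json
{ "execute": "migrate-set-parameters",
  "arguments": {
      "x-vcpu-dirty-limit-period": 500,
      "vcpu-dirty-limit": 64 } }
```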

>
> >  #
> >  # @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> > -#                    Defaults to 1. (Since 8.1)
> > +#     Defaults to 1.  (Since 8.1)
> >  #
> >  # Features:
> >  #
> >  # @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> > -#            are experimental.
> > +#     are experimental.
> >  #
> >  # Since: 2.4
> >  ##
> > @@ -988,17 +984,17 @@
> >  #     Nodes are mapped to their block device name if there is one, and
> >  #     to their node name otherwise.  (Since 5.2)
> >  #
> > -# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
> > -#                             live migration. Should be in the range 1 to 1000ms,
> > -#                             defaults to 1000ms. (Since 8.1)
> > +# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty
> > +#     limit during live migration. Should be in the range 1 to 1000ms,
>
> Two spaces between sentences.
>
> > +#     defaults to 1000ms.  (Since 8.1)
> >  #
> >  # @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> > -#                    Defaults to 1. (Since 8.1)
> > +#     Defaults to 1.  (Since 8.1)
> >  #
> >  # Features:
> >  #
> >  # @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> > -#            are experimental.
> > +#     are experimental.
> >  #
> >  # TODO: either fuse back into MigrationParameters, or make
> >  #     MigrationParameters members mandatory
> > @@ -1191,17 +1187,17 @@
> >  #     Nodes are mapped to their block device name if there is one, and
> >  #     to their node name otherwise.  (Since 5.2)
> >  #
> > -# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty limit during
> > -#                             live migration. Should be in the range 1 to 1000ms,
> > -#                             defaults to 1000ms. (Since 8.1)
> > +# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty
> > +#     limit during live migration. Should be in the range 1 to 1000ms,
>
> Two spaces between sentences.
>
> > +#     defaults to 1000ms.  (Since 8.1)
> >  #
> >  # @vcpu-dirty-limit: Dirtyrate limit (MB/s) during live migration.
> > -#                    Defaults to 1. (Since 8.1)
> > +#     Defaults to 1.  (Since 8.1)
> >  #
> >  # Features:
> >  #
> >  # @unstable: Members @x-checkpoint-delay and @x-vcpu-dirty-limit-period
> > -#            are experimental.
> > +#     are experimental.
> >  #
> >  # Since: 2.4
> >  ##
>
>

-- 
Best regards
