On Tue, Aug 1, 2023 at 8:33 PM Markus Armbruster <arm...@redhat.com> wrote:
> Yong Huang <yong.hu...@smartx.com> writes:
>
> > On Fri, Jul 28, 2023 at 3:49 PM Markus Armbruster <arm...@redhat.com> wrote:
> >
> >> ~hyman <hy...@git.sr.ht> writes:
> >>
> >> > From: Hyman Huang(黄勇) <yong.hu...@smartx.com>
> >> >
> >> > Reformat migration doc comments to conform to current conventions
> >> > as commit a937b6aa739 (qapi: Reformat doc comments to conform to
> >> > current conventions).
> >> >
> >> > Also, craft the dirty-limit capability comment.
> >>
> >> Split into two patches?
> >>
> > Ok.
> >
> >> > Signed-off-by: Hyman Huang(黄勇) <yong.hu...@smartx.com>
> >> > ---
> >> >  qapi/migration.json | 66 +++++++++++++++++++++------------------
> >> >  1 file changed, 31 insertions(+), 35 deletions(-)
> >> >
> >> > diff --git a/qapi/migration.json b/qapi/migration.json
> >> > index 6b49593d2f..5d5649c885 100644
> >> > --- a/qapi/migration.json
> >> > +++ b/qapi/migration.json
> >> > @@ -258,17 +258,17 @@
> >> >  #     blocked.  Present and non-empty when migration is blocked.
> >> >  #     (since 6.0)
> >> >  #
> >> > -# @dirty-limit-throttle-time-per-round: Maximum throttle time (in microseconds) of virtual
> >> > -#                                       CPUs each dirty ring full round, which shows how
> >> > -#                                       MigrationCapability dirty-limit affects the guest
> >> > -#                                       during live migration. (since 8.1)
> >> > -#
> >> > -# @dirty-limit-ring-full-time: Estimated average dirty ring full time (in microseconds)
> >> > -#                              each dirty ring full round, note that the value equals
> >> > -#                              dirty ring memory size divided by average dirty page rate
> >> > -#                              of virtual CPU, which can be used to observe the average
> >> > -#                              memory load of virtual CPU indirectly. Note that zero
> >> > -#                              means guest doesn't dirty memory (since 8.1)
> >> > +# @dirty-limit-throttle-time-per-round: Maximum throttle time
> >> > +#     (in microseconds) of virtual CPUs each dirty ring full round,
> >> > +#     which shows how MigrationCapability dirty-limit affects the
> >>
> >> Perhaps "for each ... round"?
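For what it's worth, the relation the @dirty-limit-ring-full-time comment
describes (the dirty ring size divided by the vCPU's average dirty page
rate) can be sketched as below.  The function and parameter names are
hypothetical, chosen for the illustration; this is not QEMU's actual code.

```python
def estimated_ring_full_time_us(ring_entries: int,
                                dirty_rate_pages_per_sec: float) -> float:
    """Estimate how long one "dirty ring full round" takes: the time for
    a vCPU to fill every slot of its dirty ring, i.e. the ring size
    divided by the vCPU's average dirty page rate (each ring entry
    records one dirtied page)."""
    if dirty_rate_pages_per_sec == 0:
        return 0.0  # zero means the guest isn't dirtying memory at all
    return ring_entries * 1_000_000 / dirty_rate_pages_per_sec

# A 4096-entry ring and a vCPU dirtying 100,000 pages/s fill the ring
# roughly every 40,960 microseconds.
print(estimated_ring_full_time_us(4096, 100_000))
```

The reciprocal reading is what makes the value useful: the shorter the
ring-full time, the higher the vCPU's memory load.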
> >>
> >> Remind me, what's a "dirty ring full round"?
> >>
> > Every time the x86 PML buffer is filled with gpas, the hardware
> > throws an exception and the guest exits to kvm, then to qemu.  Qemu
> > handles the exception by reaping the dirty ring to get the dirty
> > page info, then re-enters kvm, empties the PML buffer, and enters
> > the guest again.  I call this a "dirty ring full round", but it
> > seems not straightforward; please help me describe it, thanks.
>
> "x86 PML" is page modification logging, right?

Yes.  The term "dirty ring full round" may indeed be imprecise.  I'll
try to refactor the comment in the next version, hoping to do better.

> >> > +#     guest during live migration.  (Since 8.1)
> >> > +#
> >> > +# @dirty-limit-ring-full-time: Estimated average dirty ring full
> >> > +#     time (in microseconds) each dirty ring full round.  The value
> >>
> >> Likewise.
> >>
> >> > +#     equals dirty ring memory size divided by average dirty page
> >>
> >> "the dirty ring memory size divided by the average ..."
> >>
> >> > +#     rate of the virtual CPU, which can be used to observe the
> >> > +#     average memory load of the virtual CPU indirectly.  Note that
> >> > +#     zero means guest doesn't dirty memory. (Since 8.1)
> >>
> >> Two spaces between sentences for consistency.
> >>
> >> >  #
> >> >  # Since: 0.14
> >> >  ##
> >> > @@ -519,15 +519,11 @@
> >> >  #     are present.  'return-path' capability must be enabled to use
> >> >  #     it.  (since 8.1)
> >> >  #
> >> > -# @dirty-limit: If enabled, migration will use the dirty-limit algo to
> >> > -#               throttle down guest instead of auto-converge algo.
> >> > -#               Throttle algo only works when vCPU's dirtyrate greater
> >> > -#               than 'vcpu-dirty-limit', read processes in guest os
> >> > -#               aren't penalized any more, so this algo can improve
> >> > -#               performance of vCPU during live migration. This is an
> >> > -#               optional performance feature and should not affect the
> >> > -#               correctness of the existing auto-converge algo.
> >> > -#               (since 8.1)
> >> > +# @dirty-limit: If enabled, migration will throttle vCPUs as needed to
> >> > +#     keep their dirty page rate within @vcpu-dirty-limit.  This can
> >> > +#     improve responsiveness of large guests during live migration,
> >> > +#     and can result in more stable read performance.  Requires KVM
> >> > +#     with accelerator property "dirty-ring-size" set.  (Since 8.1)
> >> >  #
> >> >  # Features:
> >> >  #
> >> > @@ -822,17 +818,17 @@
> >> >  #     Nodes are mapped to their block device name if there is one, and
> >> >  #     to their node name otherwise.  (Since 5.2)
> >> >  #
> >> > -# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirtylimit during
> >> > -#                             live migration.  Should be in the range 1 to 1000ms,
> >> > -#                             defaults to 1000ms. (Since 8.1)
> >> > +# @x-vcpu-dirty-limit-period: Periodic time (in milliseconds) of dirty
> >> > +#     limit during  live migration.  Should be in the range 1 to 1000ms,
> >>
> >> Single space in "during live", and two spaces between sentences, please.
> >>
> >> > +#     defaults to 1000ms. (Since 8.1)
> >>
> >> I dislike that we mix milli- and microseconds.  Too late to fix, I'm
> >> afraid.
> >>
> >> Remind me, what's the "periodic time of dirty limit during live
> >> migration"?
> >>
> > It is the period over which the dirty page rate is calculated for
> > each vCPU; QEMU then compares the result with the dirty-limit value.
> > If the period is too short, CPU overhead increases; if it is too
> > long, the result may not be precise.  So we made it configurable.
>
> The limited migration knowledge I have wasn't enough to review the doc
> part of your QAPI schema patch with reasonable effort and within a
> reasonable time.  You had to answer a lot of fairly basic questions.
> Thanks for your patience.

Not at all, feel free to comment on the patch set.  I'll try my best to
reply.

> Perhaps I should've given up and left the docs to the migration
> maintainers.
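To illustrate the explanation of @x-vcpu-dirty-limit-period above, here is
a simplified sketch of the per-period check.  Names, units, and the page
size are assumptions for the illustration; the real QEMU code adjusts a
per-vCPU throttle quota rather than returning a boolean.

```python
PAGE_SIZE = 4096  # bytes; assumed page size for this sketch

def exceeds_dirty_limit(dirty_pages_in_period: int, period_ms: int,
                        vcpu_dirty_limit_mb_per_s: float) -> bool:
    """Once per x-vcpu-dirty-limit-period, compute a vCPU's observed
    dirty page rate over the period and compare it with the configured
    vcpu-dirty-limit; the vCPU is throttled while it exceeds the limit."""
    period_s = period_ms / 1000
    dirty_rate_mb_per_s = (dirty_pages_in_period * PAGE_SIZE
                           / (1024 * 1024) / period_s)
    return dirty_rate_mb_per_s > vcpu_dirty_limit_mb_per_s

# 51,200 dirty pages observed over a 1000 ms period is 200 MB/s:
print(exceeds_dirty_limit(51200, 1000, 100))  # prints True (over limit)
print(exceeds_dirty_limit(51200, 1000, 300))  # prints False (within limit)
```

This also shows the trade-off behind making the period configurable: a
shorter period means more frequent sampling (more CPU overhead), while a
longer one smooths out the rate and loses precision.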
>
> I believe what I've been missing is an overview of the dirty limit
> algorithm to throttle guests being live-migrated.
>
> Could we have that in docs/devel/migration.rst?

No problem, I'll do that and request comments in the next version.

> Apropos migration.rst: not a peep on compression.  Juan, Peter,
> Leonardo, should it be covered there?
>
> [...]

--
Best regards