On Sat, Apr 02, 2022 at 04:59:24PM +0800, Chongyun Wu wrote:
> on 4/2/2022 3:28 PM, Hyman Huang wrote:
> >
> > On 2022/3/28 9:32, wuc...@chinatelecom.cn wrote:
> > > From: Chongyun Wu <wuc...@chinatelecom.cn>
> > >
> > > A new structure, KVMDirtyRingDirtyCounter, is introduced in
> > > KVMDirtyRingReaper to record the number of dirty pages
> > > within a period of time.
> > >
> > > When kvm_dirty_ring_mark_page collects dirty pages, if it
> > > finds that the current dirty page is not a duplicate, it
> > > increments the dirty_pages_period count.
> > >
> > > Dividing the dirty_pages_period count by the interval gives
> > > the dirty page rate for that period.
> > >
> > > dirty_pages_period_peak_rate is used to track the highest
> > > dirty page rate, to address the problem that the dirty page
> > > collection rate may vary greatly over a period of time,
> > > resulting in large swings in the computed dirty page rate.
> > >
> > > Extensive testing found that the dirty rate calculated after
> > > kvm_dirty_ring_flush usually matches the actual pressure,
> > > while the per-second dirty rate may fluctuate in the
> > > following seconds, so the peak dirty rate is recorded as the
> > > real dirty page rate.
> > >
> > > This dirty page rate is mainly used as the throttle for the
> > > subsequent auto-converge speed-limit calculation.
> > As per Peter's advice, I think the better way is exporting or
> > adapting the existing implementation of dirty page rate
> > calculation instead of building different blocks.
> Yes, that's right. But this case is a little different.
> QEMU already has a variety of dirty page rate calculation methods,
> which are used in different scenarios. This method is mainly for
> calculating an appropriate speed-limit threshold during live
> migration with the dirty ring.
> It works by making full use of the characteristics of the dirty
> ring: statistics are gathered while collecting dirty pages, which
> the old bitmap method cannot do, and it does not add much extra
> overhead, such as starting new threads. There is also no need to
> supply parameters such as a suitable sampling period, which keeps
> it simple and convenient. Testing shows that the actual applied
> stress can be calculated fairly accurately.
Please see commit 7786ae40ba4e7d5b9; we already have per-vcpu data. If
we want per-vm data we'd want to do that there too, together with
kvm_dirty_ring_reap_one().

Regarding "make best use of the dirty ring": it's per-vcpu already, and
actually when I thought about optimizing auto-converge I wondered why
not start using the per-vcpu data to do per-vcpu throttling. I'm lost
on why this goes back to a per-vm approach.

I'm also curious how this compares to Yong's dirty limit approach.
Dirty limit allows setting a dirty rate upon all vcpus in one shot.
Design-wise, that does sound superior.. not only because it's per-vcpu,
but also because it's not sleep()ing but trapping only writes in the
vmexit with small intervals. Would you have time to compare these two
solutions?

I fully trust and appreciate your test results, and I believe this
performs better than the old auto-converge; it's just that we can't
merge solutions A and B if they look similar and solve the same
problem, even if both work better. We need to choose one way to go in
the end.

Thanks,

-- 
Peter Xu