In some cases, for a large VM, memory_global_dirty_log_sync() and
ramblock_sync_dirty_bitmap() combined can take more than 1 second.
As a result, the difference between end_time and
rs->time_last_bitmap_sync can exceed 1 second even the first time
migration_bitmap_sync() is called, setting up the throttle in the
very first iteration. On that first call, num_dirty_pages_period
equals the VM's total page count and ram_counters.transferred is
zero, which forces the VM to be throttled at 99% from migration
start. So we should always check that dirty_sync_count is greater
than 1 before attempting to throttle.

Signed-off-by: manish.mishra <manish.mis...@nutanix.com>
---
 migration/ram.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
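
To illustrate the arithmetic described above, here is a minimal
standalone sketch (illustrative only: the constants, the example VM
size, and the local variable names are assumptions, not the QEMU
source; upstream takes the threshold from the
throttle_trigger_threshold migration parameter). With transferred
still zero on the first sync, the threshold evaluates to 0, so any
positive dirty volume trips the throttle:

#include <stdint.h>
#include <stdio.h>

/* Illustrative constants; the real values come from the guest
 * configuration and the migration parameters (default 50%). */
#define TARGET_PAGE_SIZE   4096ULL
#define THROTTLE_THRESHOLD 50ULL

int main(void)
{
    /* First bitmap sync: every guest page is considered dirty and
     * nothing has been transferred yet. */
    uint64_t num_dirty_pages_period = 16ULL << 20;  /* e.g. a 64 GiB guest */
    uint64_t transferred = 0;

    uint64_t bytes_dirty_period = num_dirty_pages_period * TARGET_PAGE_SIZE;
    uint64_t bytes_dirty_threshold = transferred * THROTTLE_THRESHOLD / 100;

    if (bytes_dirty_period > bytes_dirty_threshold) {
        /* With transferred == 0 the threshold is 0, so this always
         * fires on the first iteration and the vCPUs get throttled. */
        printf("throttle: dirty %llu > threshold %llu\n",
               (unsigned long long)bytes_dirty_period,
               (unsigned long long)bytes_dirty_threshold);
    }
    return 0;
}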

diff --git a/migration/ram.c b/migration/ram.c
index 7a43bfd7af..9ba1c8b235 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1006,8 +1006,12 @@ static void migration_bitmap_sync(RAMState *rs)
 
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
-    /* more than 1 second = 1000 millisecons */
-    if (end_time > rs->time_last_bitmap_sync + 1000) {
+    /*
+     * more than 1 second = 1000 milliseconds
+     * Avoid throttling the VM in the first iteration of live migration.
+     */
+    if (end_time > rs->time_last_bitmap_sync + 1000 &&
+        ram_counters.dirty_sync_count > 1) {
         migration_trigger_throttle(rs);
 
         migration_update_rates(rs, end_time);
-- 
2.22.3

