In this version:
- add Peter's patch.
- split mr_do_commit() from mr_commit().
- adjust the sanity check in address_space_to_flatview().
- rebase to latest upstream.
- replace 8260 with 8362 as testing host.
- update with the latest test results.
Here are some cases that trigger do_commit() in
address_space_to_flatview():
1. virtio_load->virtio_init_region_cache
2. virtio_load->virtio_set_features_nocheck
3. vapic_post_load
4. tcg_commit
5. ahci_state_post_load
During my tests, virtio_init_region_cache() frequently triggers do_commit()
in address_space_to_flatview(), which reduces the optimization effect of v6
compared with v1.
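To illustrate, the adjusted sanity check works roughly as sketched below.
This is only my reading of the approach: the helper names
memory_region_transaction_in_progress() and
memory_region_transaction_do_commit() are assumptions standing in for the
mr_do_commit()/do_commit() split mentioned above and may not match the
final patch:

    /* include/exec/memory.h -- rough sketch, not the exact patch */
    static inline FlatView *address_space_to_flatview(AddressSpace *as)
    {
        /*
         * The cases listed above can ask for a flat view while the big
         * transaction wrapping non-iterable vmstate loading is still
         * open, so the current map may be stale.  Commit the pending
         * memory region changes first instead of handing out a stale map.
         */
        if (memory_region_transaction_in_progress()) {   /* assumed helper */
            memory_region_transaction_do_commit();       /* assumed helper */
        }
        return qatomic_rcu_read(&as->current_map);
    }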
------------------------------------------------------------------------
The duration of loading non-iterable vmstate accounts for a significant
portion of downtime (measured from the timestamp at which the source QEMU
stops to the timestamp at which the target QEMU starts). Most of that time
is spent committing memory region changes repeatedly.
This series packs all the memory region changes made while loading
non-iterable vmstate into a single memory transaction. As the number of
devices grows, the improvement becomes more significant.
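At its core, the idea is to bracket the whole non-iterable device state
load with one memory region transaction. A minimal sketch of that idea is
shown below; the wrapper name is hypothetical, and the real series hooks
into migration/savevm.c rather than adding a function like this:

    /* Sketch only: wrap the device state load in one transaction. */
    static int load_noniterable_state_wrapped(QEMUFile *f,
                                              MigrationIncomingState *mis)
    {
        int ret;

        /*
         * Open one big transaction so that every memory region change
         * made by the devices' load/post_load paths is batched ...
         */
        memory_region_transaction_begin();

        ret = qemu_loadvm_state_main(f, mis);

        /* ... and committed with a single flatview rebuild here. */
        memory_region_transaction_commit();

        return ret;
    }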
This time I replaced the 8260 host with an 8362 host and used the latest
SPDK as the vhost-user-blk backend. The downtime results differ from the
previous ones, but this does not affect the comparison of the vmstate
loading improvement.
Here are the test1 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8362 CPU
  - Mellanox Technologies MT28841
- VM
  - 32 CPUs, 128 GB RAM
  - 8 16-queue vhost-net devices
  - 16 4-queue vhost-user-blk devices

         time of loading non-iterable vmstate   downtime
before   112 ms                                 285 ms
after     44 ms                                 208 ms
In test2, we keep the number of devices the same as in test1 and reduce the
number of queues per device.
Here are the test2 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8362 CPU
  - Mellanox Technologies MT28841
- VM
  - 32 CPUs, 128 GB RAM
  - 8 1-queue vhost-net devices
  - 16 1-queue vhost-user-blk devices

         time of loading non-iterable vmstate   downtime
before   65 ms                                  151 ms
after    30 ms                                  110 ms
In test3, we keep the number of queues per device the same as in test1 and
reduce the number of devices.
Here are the test3 results:
test info:
- Host
  - Intel(R) Xeon(R) Platinum 8362 CPU
  - Mellanox Technologies MT28841
- VM
  - 32 CPUs, 128 GB RAM
  - 1 16-queue vhost-net device
  - 1 4-queue vhost-user-blk device

         time of loading non-iterable vmstate   downtime
before   24 ms                                  51 ms
after    12 ms                                  38 ms
As the results above show, both the number of queues and the number of
devices have a large impact on the time spent loading non-iterable vmstate.
More devices and more queues lead to more memory region commits, and the
time consumed by flatview reconstruction increases accordingly.
Please review, Chuang
[v5]
- rename rcu_read_locked() to rcu_read_is_locked().
- adjust the sanity check in address_space_to_flatview().
- improve some comments.
[v4]
- attach more information in the cover letter.
- remove changes on virtio_load.
- add rcu_read_locked() to detect whether the RCU read lock is held.
[v3]
- move virtio_load_check_delay() from virtio_memory_listener_commit() to
virtio_vmstate_change().
- add a delay_check flag to VirtIODevice to make sure virtio_load_check_delay()
  is called when delay_check is true.
[v2]
- rebase to latest upstream.
- add sanity check to address_space_to_flatview().
- postpone vring cache initialization until migration loading completes.
[v1]
The duration of loading non-iterable vmstate accounts for a significant
portion of downtime (measured from the timestamp at which the source QEMU
stops to the timestamp at which the target QEMU starts). Most of that time
is spent committing memory region changes repeatedly.
This patch packs all the memory region changes made while loading
non-iterable vmstate into a single memory transaction. As the number of
devices grows, the improvement becomes more significant.
Here are the test results:
test vm info:
- 32 CPUs, 128 GB RAM
- 8 16-queue vhost-net devices
- 16 4-queue vhost-user-blk devices

         time of loading non-iterable vmstate
before   about 210 ms
after    about 40 ms