From: Xiao Guangrong
Currently we have two behaviors when all threads are busy doing compression:
the main thread must wait for one of them to become free if @compress-wait-thread
is set to on, or the main thread can directly return without waiting and post
the page out as a normal one
Both of them have their pro
From: Xiao Guangrong
It introduces a new statistic, pages-per-second, as bandwidth or mbps is
not enough to measure the performance of posting pages out: features like
compression and xbzrle can significantly reduce the
data size; instead, pages-per-second is the one we want
Signed
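The idea behind the statistic can be sketched as follows (the function name and millisecond granularity are assumptions, not the patch's actual code): the rate divides the page-counter delta by elapsed wall-clock time, so it stays meaningful even when compression or xbzrle shrinks the bytes on the wire.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: derive pages-per-second from a page counter
 * sampled at two points in time, independent of wire byte counts. */
static uint64_t pages_per_second(uint64_t pages_now, uint64_t pages_prev,
                                 uint64_t elapsed_ms)
{
    if (elapsed_ms == 0) {
        return 0;   /* avoid division by zero on a degenerate interval */
    }
    return (pages_now - pages_prev) * 1000 / elapsed_ms;
}
```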
From: Xiao Guangrong
If we update the parameters tls-creds and tls-hostname, their string
values are duplicated to local variables in migrate_params_test_apply()
by using g_strdup(); however, this newly allocated memory is never
freed
Actually, they are not used to check anything, so we can direc
From: Xiao Guangrong
Changelog in v2:
squash 'compress-wait-thread-adaptive' into 'compress-wait-thread' based
on Peter's suggestion
Currently we have two behaviors when all threads are busy doing compression:
the main thread must wait for one of them to become free if @compress-wait-thread
is set to on
From: Xiao Guangrong
Currently we have two behaviors when all threads are busy doing compression:
the main thread must wait for one of them to become free if @compress-wait-thread
is set to on, or the main thread can directly return without waiting and post
the page out as a normal one
Both of them have their pro
From: Xiao Guangrong
Currently we have two behaviors when all threads are busy doing compression:
the main thread must wait for one of them to become free if @compress-wait-thread
is set to on, or the main thread can directly return without waiting and post
the page out as a normal one
Both of them have their pro
From: Xiao Guangrong
It introduces a new statistic, pages-per-second, as bandwidth or mbps is
not enough to measure the performance of posting pages out: features like
compression and xbzrle can significantly reduce the
data size; instead, pages-per-second is the one we want
Signed
From: Xiao Guangrong
It's the benchmark of threaded-workqueue, and also a good
example to show how threaded-workqueue is used
Signed-off-by: Xiao Guangrong
---
tests/Makefile.include | 5 +-
tests/threaded-workqueue-bench.c | 255 +++
2 files ch
From: Xiao Guangrong
Adapt the compression code to the threaded workqueue
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 308
1 file changed, 110 insertions(+), 198 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index
From: Xiao Guangrong
Adapt the compression code to the threaded workqueue
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 222
1 file changed, 77 insertions(+), 145 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 2
From: Xiao Guangrong
This module implements the lockless and efficient threaded workqueue.
Three abstracted objects are used in this module:
- Request.
It not only contains the data that the workqueue fetches out
to finish the request but also offers the space to save the result
af
From: Xiao Guangrong
It will be used by threaded workqueue
Signed-off-by: Xiao Guangrong
---
include/qemu/bitops.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
index 3f0926cf40..c522958852 100644
--- a/include/qemu/bitops.h
++
From: Xiao Guangrong
Changelog in v3:
Thanks to Emilio's comments and his example code, the changes in
this version are:
1. move @requests from the shared data struct to each single thread
2. move completion ev from the shared data struct to each single thread
3. move bitmaps from the shared data
From: Xiao Guangrong
Adapt the compression code to the threaded workqueue
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 313 +---
1 file changed, 115 insertions(+), 198 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index
From: Xiao Guangrong
It will be used by threaded workqueue
Signed-off-by: Xiao Guangrong
---
include/qemu/bitops.h | 13 +
1 file changed, 13 insertions(+)
diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
index 3f0926cf40..c522958852 100644
--- a/include/qemu/bitops.h
++
From: Xiao Guangrong
It's the benchmark of threaded-workqueue, and also a good
example to show how threaded-workqueue is used
Signed-off-by: Xiao Guangrong
---
tests/Makefile.include | 5 +-
tests/threaded-workqueue-bench.c | 256 +++
2 files ch
From: Xiao Guangrong
Adapt the compression code to the threaded workqueue
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 225
1 file changed, 81 insertions(+), 144 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index a
From: Xiao Guangrong
This module implements the lockless and efficient threaded workqueue.
Three abstracted objects are used in this module:
- Request.
It not only contains the data that the workqueue fetches out
to finish the request but also offers the space to save the result
af
From: Xiao Guangrong
Changelog in v2:
These changes are based on Paolo's suggestion:
1) rename the lockless multithreads model to threaded workqueue
2) hugely improve the internal design, making all the requests
a large array, properly partitioning it, and assigning requests to threads
respectiv
From: Xiao Guangrong
Adapt the compression code to the lockless multithread model
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 223
1 file changed, 78 insertions(+), 145 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
From: Xiao Guangrong
The current implementation of compression and decompression is very
hard to enable in production. We noticed that too many wait-wakes
go to kernel space and CPU usage is very low even if the system
is really free
The reasons are:
1) there are too many locks used to do sy
From: Xiao Guangrong
ptr_ring is good at minimizing cache contention and has a simple
memory-barrier model, which will be used by the lockless threads model to pass
requests between the main migration thread and the compression threads
Some changes are made:
1) drop unnecessary APIs, e.g., the _irq, _bh AP
From: Xiao Guangrong
This is the last part of our previous work:
https://lists.gnu.org/archive/html/qemu-devel/2018-06/msg00526.html
This part finally improves the multithreaded model used by compression
and decompression, which makes the compression feature really usable
in production.
From: Xiao Guangrong
Adapt the compression code to the lockless multithread model
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 312 +---
1 file changed, 115 insertions(+), 197 deletions(-)
diff --git a/migration/ram.c b/migration/ram.
From: Xiao Guangrong
It avoids touching compression locks if xbzrle and compression
are both enabled
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 65a563993d..747dd9208b 100644
--
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
compressed-size: number of bytes after compression
compression-rate: ratio of compressed size
R
From: Xiao Guangrong
Changelog in v6:
Thanks to Juan's review, in this version we
1) move flush compressed data to find_dirty_block() where it hits the end
of memblock
2) use save_page_use_compression instead of migrate_use_compression in
flush_compressed_data
Xiao Guangrong (3):
migrat
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all t
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
compressed-size: number of bytes after compression
compression-rate: ratio of compressed size
R
From: Xiao Guangrong
ram_find_and_save_block() can return a negative value if any error happens;
however, it is completely ignored in the current code
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c
From: Xiao Guangrong
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
| it's per-host-page granularity
|
| An example is that when we migrate a 2M h
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all t
From: Xiao Guangrong
Changelog in v5:
use the way in the older version to handle flush_compressed_data in the
iteration, i.e., introduce dirty_sync_count and flush compressed data if
the count is changed. That's because we should post the data after
QEMU_VM_SECTION_PART has been posted
From: Xiao Guangrong
ram_find_and_save_block() can return a negative value if any error happens;
however, it is completely ignored in the current code
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
compressed-size: number of bytes after compression
compression-rate: ratio of compressed size
R
From: Xiao Guangrong
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
| it's per-host-page granularity
|
| An example is that when we migrate a 2M h
From: Xiao Guangrong
Detecting a zero page is not light work; moving it to the thread
speeds the main thread up. Handling ram_release_pages() for the
zero page is moved to the thread as well
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 96 ++
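The division of labor can be sketched as follows (function names are hypothetical, not the patch's actual code): the all-zero scan, which touches every byte of the page, runs inside the worker instead of the main thread.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: the expensive all-zero scan runs in the worker
 * thread, so the main thread only hands the page over. */
static bool page_is_zero(const uint8_t *page, size_t size)
{
    for (size_t i = 0; i < size; i++) {
        if (page[i]) {
            return false;
        }
    }
    return true;
}

/* Worker-side step: returns true when the page can be sent as a zero
 * page (the ram_release_pages()-style cleanup would also happen here);
 * otherwise the worker goes on to compress the page. */
static bool worker_handle_page(const uint8_t *page, size_t size)
{
    return page_is_zero(page, size);
}
```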
From: Xiao Guangrong
Try to hold src_page_req_mutex only if the queue is not
empty
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
include/qemu/queue.h | 1 +
migration/ram.c | 4
2 files changed, 5 insertions(+)
diff --git a/include/qem
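The idea can be sketched as follows (the structure and names are hypothetical, not the actual queue.h change): peek at the queue head without the lock, and only take src_page_req_mutex when the queue looks non-empty. A racy "empty" read is safe because a missed request is simply picked up on the next poll.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical queue: head is NULL when the queue is empty. */
struct req_queue {
    _Atomic(void *) head;
    pthread_mutex_t lock;
};

static void *dequeue_request(struct req_queue *q)
{
    /* Fast path: lockless emptiness check, no mutex taken. */
    if (atomic_load_explicit(&q->head, memory_order_acquire) == NULL) {
        return NULL;
    }
    /* Slow path: the queue looked non-empty, take the lock to pop. */
    pthread_mutex_lock(&q->lock);
    void *req = atomic_exchange(&q->head, NULL); /* simplified: pop all */
    pthread_mutex_unlock(&q->lock);
    return req;
}
```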
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all t
From: Xiao Guangrong
It will be used by the compression threads
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 40 ++--
1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index d631b9
From: Xiao Guangrong
Instead of putting the main thread to sleep to wait for a
free compression thread, we can directly post the page out as a normal
page, which reduces latency and uses CPUs more efficiently
A parameter, compress-wait-thread, is introduced; it can be
enabled if the user really wa
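The dispatch policy being described can be sketched as a small decision function (names are illustrative, not the patch's actual code): with compress-wait-thread off, a busy pool no longer blocks the main thread and the page falls back to the normal path.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch of where a page goes when it is a compression
 * candidate, depending on pool state and the new parameter. */
enum page_path { PATH_COMPRESS, PATH_NORMAL, PATH_WAIT };

static enum page_path choose_path(bool thread_free, bool compress_wait_thread)
{
    if (thread_free) {
        return PATH_COMPRESS;           /* hand the page to a free thread */
    }
    return compress_wait_thread ? PATH_WAIT    /* old behavior: block */
                                : PATH_NORMAL; /* new: post as normal page */
}
```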
From: Xiao Guangrong
It is not used, and removing it cleans the code up a little
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 26 +++---
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 49ace30614..e463
From: Xiao Guangrong
Changelog in v4:
These changes are based on the suggestions from Peter and Eric.
1) improve qapi's grammar
2) move calling flush_compressed_data to migration_bitmap_sync()
3) rename 'handle_pages' to 'target_page_count'
Note: there is still no clear way to fix handling the error
From: Xiao Guangrong
The compressed page is not a normal page
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index ae9e83c2b6..d631b9a6fe 100644
--- a/migration/ram.c
+++ b/migration/ra
From: Xiao Guangrong
ram_find_and_save_block() can return a negative value if any error happens;
however, it is completely ignored in the current code
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 18 +++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c
From: Xiao Guangrong
Try to hold src_page_req_mutex only if the queue is not
empty
Reviewed-by: Dr. David Alan Gilbert
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
include/qemu/queue.h | 1 +
migration/ram.c | 4
2 files changed, 5 insertions(+)
diff --git a/include/qem
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
compressed-size: number of bytes after compression
compression-rate: ratio of compressed size
S
From: Xiao Guangrong
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations is done for each ram_find_and_save_block(), so
| it's per-host-page granularity
|
| An example is that when we migrate a 2M h
From: Xiao Guangrong
It is not used, and removing it cleans the code up a little
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 26 +++---
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 49ace30614..e463
From: Xiao Guangrong
The compressed page is not a normal page
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index ae9e83c2b6..d631b9a6fe 100644
--- a/migration/ram.c
+++ b/migration/ra
From: Xiao Guangrong
It will be used by the compression threads
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 40 ++--
1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index d631b9
From: Xiao Guangrong
Detecting a zero page is not light work; moving it to the thread
speeds the main thread up. Handling ram_release_pages() for the
zero page is moved to the thread as well
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 96 +
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all t
From: Xiao Guangrong
Instead of putting the main thread to sleep to wait for a
free compression thread, we can directly post the page out as a normal
page, which reduces latency and uses CPUs more efficiently
A parameter, compress-wait-thread, is introduced; it can be
enabled if the user really wa
From: Xiao Guangrong
Changelog in v3:
Thanks to Peter's comments, the changes in this version are:
1) make compress-wait-thread default to true to keep the current behavior
2) save the compressed-size instead of reduced size and fix calculating
compression ratio
3) fix calculating xbzrle_count
From: Xiao Guangrong
Try to hold src_page_req_mutex only if the queue is not
empty
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
include/qemu/queue.h | 1 +
migration/ram.c | 4
2 files changed, 5 insertions(+)
diff --git a/include/qemu/queue.h b/include/qem
From: Xiao Guangrong
Detecting a zero page is not light work; moving it to the thread
speeds the main thread up
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 112 +++-
1 file changed, 78 insertions(+), 34 deletions(-)
diff --git a/mi
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all t
From: Xiao Guangrong
It is not used, and removing it cleans the code up a little
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 26 +++---
1 file changed, 11 insertions(+), 15 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index ce6e69b649..5aa624b3b9 100644
--- a/mig
From: Xiao Guangrong
It will be used by the compression threads
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 40 ++--
1 file changed, 30 insertions(+), 10 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index e68b0e6dec..ce6e69b649 100644
From: Xiao Guangrong
Instead of putting the main thread to sleep to wait for a
free compression thread, we can directly post the page out as a normal
page, which reduces latency and uses CPUs more efficiently
A parameter, compress-wait-thread, is introduced; it can be
enabled if the user really wa
From: Xiao Guangrong
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
reduced-size: number of bytes reduced by compression
compression-rate: ratio of compressed size
From: Xiao Guangrong
The compressed page is not a normal page
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 0ad234c692..1b016e048d 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1903,7 +1903,6
From: Xiao Guangrong
Thanks to Peter's suggestion, I split the long series (1) and this is the
first part.
I am not sure if Dave is happy with @reduced-size; I will change it immediately
if it's objected to. :)
Changelog in v2:
1) introduce a parameter to make the main thread wait for free thread
thre
From: Xiao Guangrong
Adapt the compression code to the lockless multithread model
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 381 ++--
1 file changed, 175 insertions(+), 206 deletions(-)
diff --git a/migration/ram.c b/migration/ram.
From: Xiao Guangrong
The current implementation of compression and decompression is very
hard to enable in production. We noticed that too many wait-wakes
go to kernel space and CPU usage is very low even if the system
is really free
The reasons are:
1) there are too many locks used to do sy
From: Xiao Guangrong
Adapt the compression code to the lockless multithread model
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 412 ++--
1 file changed, 161 insertions(+), 251 deletions(-)
diff --git a/migration/ram.c b/migration/ram.
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that, all threads are free until the
migration feeds new requests to them. Reducing its calls can improve
the throughput and use CPU resources more effectively
We do not need to flush all th
From: Xiao Guangrong
Try to hold src_page_req_mutex only if the queue is not
empty
Signed-off-by: Xiao Guangrong
---
include/qemu/queue.h | 1 +
migration/ram.c | 4
2 files changed, 5 insertions(+)
diff --git a/include/qemu/queue.h b/include/qemu/queue.h
index 59fd1203a1..ac418efc4
From: Xiao Guangrong
It's a simple lockless ring buffer implementation which supports both
single producer vs. single consumer and multiple producers vs.
single consumer.
Many lessons were learned from the Linux kernel's kfifo (1) and DPDK's
rte_ring (2) before I wrote this implementation. It corrects some
From: Xiao Guangrong
Instead of putting the main thread to sleep to wait for a
free compression thread, we can directly post the page out as a normal
page, which reduces latency and uses CPUs more efficiently
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 34 +++-
From: Xiao Guangrong
It is used to slightly clean the code up; no logic is changed
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 35 ++-
1 file changed, 22 insertions(+), 13 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index dd1283dd45..ee0
From: Xiao Guangrong
Detecting a zero page is not light work; we can disable it
for compression, which can handle all-zero data very well
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 44 +++-
1 file changed, 23 insertions(+), 21 deletions(-)
diff -
From: Xiao Guangrong
The compressed page is not a normal page
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index 0caf32ab0a..dbf24d8c87 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1432,7 +1432,6
From: Xiao Guangrong
Then the users can adjust the parameters based on this info
Currently, it includes:
pages: number of pages compressed and transferred to the target VM
busy: number of times no free thread was available to compress data
busy-rate: rate of thread busy time
reduced-size: number of bytes reduc
From: Xiao Guangrong
Sync up xbzrle_cache_miss_prev only after the migration iteration goes
forward
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/migration/ram.c b/migration/ram.c
index dbf24d8c87..dd1283dd45 100644
--- a/migr
From: Xiao Guangrong
Background
--
The current implementation of compression and decompression is very
hard to enable in production. We noticed that too many wait-wakes
go to kernel space and CPU usage is very low even if the system
is really free
The reasons are:
1) there are too ma
From: Xiao Guangrong
QEMU 2.13 enables strict checks for compression & decompression to
make the migration more robust; that depends on the source fixing
the internal design which triggers the unexpected error conditions
To make it work when migrating from an older QEMU to QEMU 2.13, we
introduce th
From: Xiao Guangrong
Fix the bug introduced by da3f56cb2e767016 (migration: remove
ram_save_compressed_page()); it should be 'return' rather than
'res'
Sorry for this stupid mistake :(
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
dif
From: Xiao Guangrong
QEMU 2.13 enables strict checks for compression & decompression to
make the migration more robust; that depends on the source fixing
the internal design which triggers the unexpected error conditions
To make it work when migrating from an older QEMU to QEMU 2.13, we
introduce
From: Xiao Guangrong
Now we can reuse the path in ram_save_page() to post the page out
as normal; then the only thing remaining in ram_save_compressed_page()
is compression, which we can move out to the caller
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guang
From: Xiao Guangrong
Abstract the common function control_save_page() to clean up the code;
no logic is changed
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 174 +---
1 file ch
From: Xiao Guangrong
It directly sends the page to the stream without checking for zero and
without using xbzrle or compression
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 50 ++
1 file chan
From: Xiao Guangrong
The current code uses compress2() to compress memory; it manages memory
internally, which causes huge amounts of memory to be allocated and freed
very frequently
Even worse, frequently returning memory to the kernel will flush TLBs
and trigger invalidation callbacks on mmu-notification which
int
From: Xiao Guangrong
The current code uses uncompress() to decompress memory; it manages
memory internally, which causes huge amounts of memory to be allocated and
freed very frequently. Even worse, frequently returning memory to the
kernel will flush TLBs
So, we maintain the memory ourselves and reuse it for eac
From: Xiao Guangrong
The function is called by both ram_save_page and ram_save_target_page,
so move it to the common caller to clean up the code
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff
From: Xiao Guangrong
save_zero_page() is always our first approach to try; move it to
the common place before calling ram_save_compressed_page
and ram_save_page
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 105 +
From: Xiao Guangrong
Currently the page being compressed is allowed to be updated by
the VM on the source QEMU; correspondingly, the destination QEMU
just ignores the decompression error. However, we completely miss
the chance to catch real errors, and the VM is silently corrupted
To make the mi
From: Xiao Guangrong
Move some code from ram_save_target_page() to ram_save_host_page()
to make it more readable for later patches that dramatically
clean up ram_save_target_page()
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 43 +++
From: Xiao Guangrong
Changelog in v3:
Following changes are from Peter's review:
1) use comp_param[i].file and decomp_param[i].compbuf to indicate if
the thread is properly init'd or not
2) save the file which is used by the ram loader to the global variable
instead of caching it per decompressi
From: Xiao Guangrong
As compression is heavy work, do not do it in the migration thread;
instead, we post it out as a normal page
Reviewed-by: Wei Wang
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 32
From: Xiao Guangrong
Now we can reuse the path in ram_save_page() to post the page out
as normal; then the only thing remaining in ram_save_compressed_page()
is compression, which we can move out to the caller
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guang
From: Xiao Guangrong
The function is called by both ram_save_page and ram_save_target_page,
so move it to the common caller to clean up the code
Reviewed-by: Peter Xu
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff
From: Xiao Guangrong
Currently the page being compressed is allowed to be updated by
the VM on the source QEMU; correspondingly, the destination QEMU
just ignores the decompression error. However, we completely miss
the chance to catch real errors, and the VM is silently corrupted
To make the mi
From: Xiao Guangrong
It directly sends the page to the stream without checking for zero and
without using xbzrle or compression
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 50 ++
1 file chan
From: Xiao Guangrong
Abstract the common function control_save_page() to clean up the code;
no logic is changed
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 174 +---
1 file ch
From: Xiao Guangrong
Move some code from ram_save_target_page() to ram_save_host_page()
to make it more readable for later patches that dramatically
clean up ram_save_target_page()
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 43 +++
1 file ch
From: Xiao Guangrong
Changelog in v2:
Thanks to the review from Dave, Peter, Wei and Jiang Biao, the changes
in this version are:
1) include the performance numbers in the cover letter
2) add some comments to explain how to use z_stream->opaque in the
patchset
3) allocate an internal buffer for p
From: Xiao Guangrong
The current code uses compress2() to compress memory; it manages memory
internally, which causes huge amounts of memory to be allocated and freed
very frequently
Even worse, frequently returning memory to the kernel will flush TLBs
and trigger invalidation callbacks on mmu-notification which
int
From: Xiao Guangrong
save_zero_page() is always our first approach to try; move it to
the common place before calling ram_save_compressed_page
and ram_save_page
Reviewed-by: Peter Xu
Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Xiao Guangrong
---
migration/ram.c | 105 +