On Wed, Jun 15, 2022 at 7:49 PM David Hildenbrand wrote:
>
> On 15.06.22 13:17, Xiao Guangrong wrote:
> > On Wed, Jun 15, 2022 at 4:24 PM David Hildenbrand wrote:
> >
> >>>> Is that a temporary or a permanent thing? Do we know?
> >>>
> >>
On Wed, Jun 15, 2022 at 4:24 PM David Hildenbrand wrote:
> >> Is that a temporary or a permanent thing? Do we know?
> >
> > No idea. But his last signed-off was three years ago.
>
> I sent a patch to Xiao, asking if he's still active in QEMU. If I don't
> get a reply this week, I'll move forward
On 3/27/19 5:41 PM, Stefano Garzarella wrote:
Hi Yang, Xiao,
Just adding a few things, because I'm currently exploring the QEMU modules
in order to reduce the boot time and the footprint.
Hi Stefan Hajnoczi and Stefano Garzarella,
The work exploring the QEMU modules looks really good. we
On 3/26/19 5:07 PM, Paolo Bonzini wrote:
On 26/03/19 08:00, Xiao Guangrong wrote:
On 3/26/19 7:18 AM, Paolo Bonzini wrote:
Also, what is the use case? Is it to reduce the attack surface without
having multiple QEMU binaries?
Security is one concern of ours; only the essential
On 3/26/19 7:18 AM, Paolo Bonzini wrote:
On 25/03/19 12:46, Yang Zhong wrote:
Hello all,
Rust-VMM has started to make all features and common modules into crates, and CSPs
can deploy their VMM on demand. This afternoon, Xiao Guangrong and I talked about
the lightweight VM solutions, and we
then becomes useless now.
Anyway, this patchset looks good to me.
Reviewed-by: Xiao Guangrong
On 1/11/19 5:57 PM, Markus Armbruster wrote:
guangrong.x...@gmail.com writes:
From: Xiao Guangrong
Currently we have two behaviors if all threads are busy doing compression:
the main thread must wait for one of them to become free if @compress-wait-thread
is set to on, or the main thread can
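The two behaviors described in the commit message can be sketched as a pure decision function (a minimal sketch; the enum and function names are illustrative, not the patch's actual code):

```c
#include <stdbool.h>

typedef enum {
    SEND_COMPRESSED,       /* hand the page to an idle compression thread */
    WAIT_FOR_THREAD,       /* block until a compression thread frees up */
    SEND_AS_NORMAL_PAGE    /* don't stall: send the page uncompressed */
} SendAction;

/* Decide what the migration thread does for one page when compression
 * is enabled, per the behavior described above. */
static SendAction pick_action(bool thread_free, bool compress_wait_thread)
{
    if (thread_free) {
        return SEND_COMPRESSED;
    }
    return compress_wait_thread ? WAIT_FOR_THREAD : SEND_AS_NORMAL_PAGE;
}
```

With @compress-wait-thread off, the migration thread never sleeps waiting for a worker, trading compression ratio for latency.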
On 1/16/19 2:40 PM, Peter Xu wrote:
On Fri, Jan 11, 2019 at 02:37:32PM +0800, guangrong.x...@gmail.com wrote:
+
+static void update_compress_wait_thread(MigrationState *s)
+{
+s->compress_wait_thread = get_compress_wait_thread(&s->parameters);
+assert(s->compress_wait_thread !=
On 1/16/19 12:03 AM, Eric Blake wrote:
On 1/15/19 4:24 AM, Dr. David Alan Gilbert wrote:
I think the problem is that
migrate_params_check checks a MigrationParameters
while the QMP command gives us a MigrateSetParameters; but we also use
migrate_params_check for the global check you added
On 12/21/18 4:11 PM, Peter Xu wrote:
> On Thu, Dec 13, 2018 at 03:57:25PM +0800, guangrong.x...@gmail.com wrote:
>> From: Xiao Guangrong
>>
>> Currently we have two behaviors if all threads are busy to do compression,
>> the main thread must wait for one of them becoming fr
On 12/5/18 1:16 AM, Paolo Bonzini wrote:
On 04/12/18 16:49, Christophe de Dinechin wrote:
Linux and QEMU's own qht work just fine with compile-time directives.
Wouldn’t it work fine without any compile-time directive at all?
Yes, that's what I meant. Though there are certainly cases
On 11/27/18 8:49 PM, Christophe de Dinechin wrote:
(I did not finish the review, but decided to send what I already had).
On 22 Nov 2018, at 08:20, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
This module implements a lockless and efficient threaded workqueue.
I’m
On 11/26/18 6:28 PM, Paolo Bonzini wrote:
On 26/11/18 09:18, Xiao Guangrong wrote:
+static uint64_t get_free_request_bitmap(Threads *threads,
ThreadLocal *thread)
+{
+ uint64_t request_fill_bitmap, request_done_bitmap, result_bitmap;
+
+ request_fill_bitmap =
atomic_rcu_read
On 11/27/18 2:55 AM, Emilio G. Cota wrote:
On Mon, Nov 26, 2018 at 15:57:25 +0800, Xiao Guangrong wrote:
On 11/23/18 7:02 PM, Dr. David Alan Gilbert wrote:
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/threaded-workqueue.h"
+
+#defi
On 11/27/18 2:49 AM, Emilio G. Cota wrote:
On Mon, Nov 26, 2018 at 16:06:37 +0800, Xiao Guangrong wrote:
+/* after the user fills the request, the bit is flipped. */
+uint64_t request_fill_bitmap QEMU_ALIGNED(SMP_CACHE_BYTES);
+/* after handles the request, the thread flips
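The flip-bit handshake quoted above can be sketched as follows (a minimal single-threaded sketch with hypothetical names; the real patch uses atomic RCU reads and per-thread cacheline-aligned fields): the producer flips a request's bit in the fill bitmap when it submits work, the worker flips the same bit in the done bitmap when it finishes, so a slot is free exactly when the two bits agree.

```c
#include <stdint.h>

typedef struct {
    uint64_t request_fill_bitmap; /* flipped by the producer on submit */
    uint64_t request_done_bitmap; /* flipped by the worker on completion */
} ThreadLocalSketch;

/* Bits that differ between the two bitmaps are requests submitted but not
 * yet completed; the complement is therefore the set of free slots. */
static uint64_t free_request_bitmap(const ThreadLocalSketch *t)
{
    return ~(t->request_fill_bitmap ^ t->request_done_bitmap);
}

static void submit_request(ThreadLocalSketch *t, unsigned index)
{
    t->request_fill_bitmap ^= (uint64_t)1 << index;
}

static void complete_request(ThreadLocalSketch *t, unsigned index)
{
    t->request_done_bitmap ^= (uint64_t)1 << index;
}
```

Flipping rather than setting/clearing lets each side write only its own bitmap, which is what makes the scheme lockless.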
On 11/26/18 6:56 PM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 11/23/18 7:02 PM, Dr. David Alan Gilbert wrote:
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/threaded-workqueue.h"
+
+#defi
On 11/24/18 8:17 AM, Emilio G. Cota wrote:
On Thu, Nov 22, 2018 at 15:20:25 +0800, guangrong.x...@gmail.com wrote:
+static uint64_t get_free_request_bitmap(Threads *threads, ThreadLocal *thread)
+{
+uint64_t request_fill_bitmap, request_done_bitmap, result_bitmap;
+
+
On 11/24/18 8:12 AM, Emilio G. Cota wrote:
On Thu, Nov 22, 2018 at 15:20:25 +0800, guangrong.x...@gmail.com wrote:
+ /*
+ * the bit in these two bitmaps indicates the index of the @requests
This @ is not ASCII, is it?
Good eyes. :)
Will fix it.
+ * respectively. If it's the
On 11/24/18 2:29 AM, Dr. David Alan Gilbert wrote:
static void
-update_compress_thread_counts(const CompressParam *param, int bytes_xmit)
+update_compress_thread_counts(CompressData *cd, int bytes_xmit)
Keep the const?
Yes, indeed. Will correct it in the next version.
+if
On 11/23/18 7:02 PM, Dr. David Alan Gilbert wrote:
+#include "qemu/osdep.h"
+#include "qemu/bitmap.h"
+#include "qemu/threaded-workqueue.h"
+
+#define SMP_CACHE_BYTES 64
That's architecture dependent isn't it?
Yes, it's arch dependent indeed.
I just used 64 for simplification and i
On 11/14/18 2:38 AM, Emilio G. Cota wrote:
On Tue, Nov 06, 2018 at 20:20:22 +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
This module implements a lockless and efficient threaded workqueue.
(snip)
+++ b/util/threaded-workqueue.c
+struct Threads {
+/*
+ * in order
Hi,
Ping...
On 11/6/18 8:20 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Changelog in v2:
These changes are based on Paolo's suggestion:
1) rename the lockless multithreads model to threaded workqueue
2) hugely improve the internal design, that make all the request
On 10/28/2018 03:50 PM, Paolo Bonzini wrote:
On 27/10/2018 01:33, Emilio G. Cota wrote:
On Wed, Oct 17, 2018 at 12:10:15 +0200, Paolo Bonzini wrote:
On 16/10/2018 13:10, guangrong.x...@gmail.com wrote:
An idea: the total number of requests is going to be very small, and a
PtrRing is not
On 10/17/2018 06:10 PM, Paolo Bonzini wrote:
An idea: the total number of requests is going to be very small, and a
PtrRing is not the nicest data structure for multiple producer/single
consumer. So you could instead:
- add the size of one request to the ops structure. Move the
On 10/17/2018 04:14 PM, Paolo Bonzini wrote:
On 16/10/2018 18:40, Emilio G. Cota wrote:
+#define SMP_CACHE_BYTES 64
+#define cacheline_aligned_in_smp \
+__attribute__((__aligned__(SMP_CACHE_BYTES)))
You could use QEMU_ALIGNED() here.
Yes, you are right.
+
+#define
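The alignment macros quoted in this exchange boil down to the following (an illustrative sketch; 64 is an assumption, since the real cache-line size is architecture dependent, as noted elsewhere in the thread):

```c
#include <stddef.h>
#include <stdint.h>

#define SMP_CACHE_BYTES 64
/* Equivalent of QEMU's QEMU_ALIGNED() helper */
#define QEMU_ALIGNED(n) __attribute__((aligned(n)))

/* Putting producer-written and consumer-written fields on separate cache
 * lines avoids false sharing between the migration thread and workers. */
typedef struct {
    uint64_t request_fill_bitmap QEMU_ALIGNED(SMP_CACHE_BYTES);
    uint64_t request_done_bitmap QEMU_ALIGNED(SMP_CACHE_BYTES);
} PerThread;
```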
On 09/06/2018 07:03 PM, Juan Quintela wrote:
guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Changelog in v6:
Thanks to Juan's review, in this version we
1) move flush compressed data to find_dirty_block() where it hits the end
of memblock
2) use save_page_use_compression instead
On 09/04/2018 11:54 AM, Xiao Guangrong wrote:
We will call it only if xbzrle is also enabled; in this case, we will
disable compression and xbzrle for the following pages, please refer
^and use xbzrle
Sorry for the typo.
On 09/04/2018 12:38 AM, Juan Quintela wrote:
guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that all threads are free until the
migration feeds new requests to them, so reducing its calls can improve
On 08/08/2018 02:12 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:09PM +0800, guangrong.x...@gmail.com wrote:
[...]
@@ -1602,6 +1614,26 @@ static void migration_update_rates(RAMState *rs, int64_t
end_time)
rs->xbzrle_cache_miss_prev) / page_count;
On 08/08/2018 10:11 PM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 08/08/2018 01:08 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:07PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
ram_find_and_save_block() can return negative
On 08/08/2018 02:56 PM, Peter Xu wrote:
On Wed, Aug 08, 2018 at 02:29:52PM +0800, Xiao Guangrong wrote:
On 08/08/2018 01:08 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:07PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
ram_find_and_save_block() can return negative
On 08/08/2018 02:05 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:08PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
As Peter pointed out:
| - xbzrle_counters.cache_miss is done in save_xbzrle_page(), so it's
| per-guest-page granularity
|
| - RAMState.iterations
On 08/08/2018 01:08 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:07PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
ram_find_and_save_block() can return a negative value if any error happens,
however, it is completely ignored in current code
Could you hint me where we'll
On 08/08/2018 12:52 PM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 05:12:06PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that all threads are free until the
migration feeds new request
On 08/08/2018 11:51 AM, Peter Xu wrote:
On Tue, Aug 07, 2018 at 08:29:54AM -0500, Eric Blake wrote:
On 08/07/2018 04:12 AM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Instead of putting the main thread to sleep state to wait for
free compression thread, we can directly post
On 07/24/2018 02:36 AM, Eric Blake wrote:
On 07/19/2018 07:15 AM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Instead of putting the main thread to sleep state to wait for
free compression thread, we can directly post it out as normal
page that reduces the latency and uses CPUs
On 07/23/2018 05:40 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 04:44:49PM +0800, Xiao Guangrong wrote:
[...]
However, it is not safe to do ram_release_pages in the thread as it's
not protected for multithreading. Fortunately, compression will be disabled
if it switches to post-copy, so i
On 07/23/2018 05:15 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 04:40:29PM +0800, Xiao Guangrong wrote:
On 07/23/2018 04:05 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 03:39:18PM +0800, Xiao Guangrong wrote:
On 07/23/2018 12:36 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:15PM
On 07/23/2018 05:01 PM, Peter Xu wrote:
Yes, it's sufficient for current thread model, will drop it for now
and add it at the time when the lockless multithread model is applied. :)
Ah I think I see your point. Even if so I would think it better to do
any extra cleanup directly in
On 07/23/2018 04:35 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 04:05:21PM +0800, Xiao Guangrong wrote:
On 07/23/2018 01:49 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
flush_compressed_data() needs to wait for all
On 07/23/2018 04:28 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 03:56:33PM +0800, Xiao Guangrong wrote:
[...]
@@ -2249,15 +2308,8 @@ static int ram_save_target_page(RAMState *rs,
PageSearchStatus *pss,
return res;
}
-/*
- * When starting the process of a new
On 07/23/2018 04:05 PM, Peter Xu wrote:
On Mon, Jul 23, 2018 at 03:39:18PM +0800, Xiao Guangrong wrote:
On 07/23/2018 12:36 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:15PM +0800, guangrong.x...@gmail.com wrote:
@@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs
On 07/23/2018 01:49 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that all threads are free until the
migration feeds new request
On 07/23/2018 01:03 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:18PM +0800, guangrong.x...@gmail.com wrote:
[...]
@@ -1950,12 +1971,16 @@ retry:
set_compress_params(&comp_param[idx], block, offset);
qemu_cond_signal(&comp_param[idx].cond);
On 07/23/2018 12:36 PM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:15PM +0800, guangrong.x...@gmail.com wrote:
@@ -1597,6 +1608,24 @@ static void migration_update_rates(RAMState *rs, int64_t
end_time)
rs->xbzrle_cache_miss_prev) / iter_count;
On 07/23/2018 11:25 AM, Peter Xu wrote:
On Thu, Jul 19, 2018 at 08:15:13PM +0800, guangrong.x...@gmail.com wrote:
@@ -3113,6 +3132,8 @@ static Property migration_properties[] = {
DEFINE_PROP_UINT8("x-compress-threads", MigrationState,
On 07/23/2018 12:05 AM, Michael S. Tsirkin wrote:
On Wed, Jul 18, 2018 at 04:46:21PM +0800, Xiao Guangrong wrote:
On 07/17/2018 02:58 AM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 06/29/2018 05:42 PM, Dr. David Alan Gilbert wrote:
* Xiao
On 07/12/2018 04:26 PM, Peter Xu wrote:
On Thu, Jul 12, 2018 at 03:47:57PM +0800, Xiao Guangrong wrote:
On 07/11/2018 04:21 PM, Peter Xu wrote:
On Thu, Jun 28, 2018 at 05:33:58PM +0800, Xiao Guangrong wrote:
On 06/19/2018 03:36 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:15PM
On 07/17/2018 03:01 AM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 06/14/2018 12:25 AM, Dr. David Alan Gilbert wrote:
}
static void migration_bitmap_sync(RAMState *rs)
@@ -1412,6 +1441,9 @@ static void flush_compressed_data(RAMState *rs
On 07/17/2018 02:58 AM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 06/29/2018 05:42 PM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
Hi Peter,
Sorry for the delay, as I was busy on other things.
On 06/19/2018
On 07/14/2018 02:01 AM, Dr. David Alan Gilbert wrote:
* guangrong.x...@gmail.com (guangrong.x...@gmail.com) wrote:
From: Xiao Guangrong
flush_compressed_data() needs to wait for all compression threads to
finish their work; after that all threads are free until the
migration feeds new request
On 07/14/2018 12:24 AM, Dr. David Alan Gilbert wrote:
+static void *thread_run(void *opaque)
+{
+ThreadLocal *self_data = (ThreadLocal *)opaque;
+Threads *threads = self_data->threads;
+void (*handler)(ThreadRequest *data) = threads->thread_request_handler;
+ThreadRequest
On 07/11/2018 04:21 PM, Peter Xu wrote:
On Thu, Jun 28, 2018 at 05:33:58PM +0800, Xiao Guangrong wrote:
On 06/19/2018 03:36 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:15PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Try to hold src_page_req_mutex only
On 06/29/2018 09:08 PM, Michael S. Tsirkin wrote:
On Fri, Jun 29, 2018 at 03:30:44PM +0800, Xiao Guangrong wrote:
Hi Michael,
On 06/20/2018 08:38 PM, Michael S. Tsirkin wrote:
On Mon, Jun 04, 2018 at 05:55:17PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
(1)
https
On 06/29/2018 07:22 PM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
On 06/19/2018 03:36 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:15PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Try to hold src_page_req_mutex only
On 06/29/2018 05:42 PM, Dr. David Alan Gilbert wrote:
* Xiao Guangrong (guangrong.x...@gmail.com) wrote:
Hi Peter,
Sorry for the delay, as I was busy on other things.
On 06/19/2018 03:30 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:14PM +0800, guangrong.x...@gmail.com wrote:
From
On 06/29/2018 02:15 PM, Jason Wang wrote:
On 2018-06-29 11:59, Xiao Guangrong wrote:
On 06/28/2018 09:36 PM, Jason Wang wrote:
On 2018-06-04 17:55, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
It's a simple lockless ring buffer implementation which supports both
single
On 06/29/2018 12:23 PM, Michael S. Tsirkin wrote:
On Thu, Jun 28, 2018 at 09:36:00PM +0800, Jason Wang wrote:
On 2018-06-04 17:55, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Memory barrier is omitted here, please refer to the comment in the code.
(1)https://git.kernel.org
Hi Michael,
On 06/20/2018 08:38 PM, Michael S. Tsirkin wrote:
On Mon, Jun 04, 2018 at 05:55:17PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
(1)
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/kfifo.h
(2) http://dpdk.org/doc/api
On 06/28/2018 09:36 PM, Jason Wang wrote:
On 2018-06-04 17:55, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
It's a simple lockless ring buffer implementation which supports both
single producer vs. single consumer and multiple producers vs.
single consumer.
Finally, it fetches
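The single-producer/single-consumer case of the ring described in this series can be sketched as follows (hypothetical names and a single-threaded simplification; the patch's real code also covers multiple producers and inserts memory barriers around the index updates):

```c
#include <stddef.h>

/* RING_SIZE must be a power of two so indices wrap with a cheap mask. */
#define RING_SIZE 8
#define RING_MASK (RING_SIZE - 1)

typedef struct Ring {
    unsigned int in;   /* written only by the producer */
    unsigned int out;  /* written only by the consumer */
    void *data[RING_SIZE];
} Ring;

/* Returns 0 on success, -1 if the ring is full. */
static int ring_put(Ring *ring, void *data)
{
    if (ring->in - ring->out == RING_SIZE) {
        return -1; /* full: unsigned wraparound makes this check safe */
    }
    ring->data[ring->in & RING_MASK] = data;
    /* a real SPSC ring issues a write barrier here before bumping @in */
    ring->in++;
    return 0;
}

/* Returns the oldest element, or NULL if the ring is empty. */
static void *ring_get(Ring *ring)
{
    void *data;

    if (ring->in == ring->out) {
        return NULL; /* empty */
    }
    data = ring->data[ring->out & RING_MASK];
    ring->out++;
    return data;
}
```

Because each index is written by exactly one side, no lock is needed; this is the same idea as the Linux kfifo referenced later in the thread.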
On 06/28/2018 07:55 PM, Wei Wang wrote:
On 06/28/2018 06:02 PM, Xiao Guangrong wrote:
CC: Paul, Peter Zijlstra, Stefani, Lai who are all good at memory barrier.
On 06/20/2018 12:52 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:17PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao
Hi Daniel,
On 06/28/2018 05:36 PM, Daniel P. Berrangé wrote:
On Thu, Jun 28, 2018 at 05:12:39PM +0800, Xiao Guangrong wrote:
After this patch, the workload is moved to the worker thread, is it
acceptable?
It depends on your point of view. If you have spare / idle CPUs on the host
On 06/20/2018 02:52 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:18PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
The current implementation of compression and decompression is very
hard to enable in production. We noticed that too many wait-wakes
go to kernel space
On 06/20/2018 01:55 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:17PM +0800, guangrong.x...@gmail.com wrote:
[...]
(Some more comments/questions for the MP implementation...)
+static inline int ring_mp_put(Ring *ring, void *data)
+{
+unsigned int index, in, in_next, out;
+
+
CC: Paul, Peter Zijlstra, Stefani, Lai who are all good at memory barrier.
On 06/20/2018 12:52 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:17PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
It's a simple lockless ring buffer implementation which supports both
single
On 06/19/2018 03:36 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:15PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Try to hold src_page_req_mutex only if the queue is not
empty
Pure question: how much this patch would help? Basically if you are
running compression
Hi Peter,
Sorry for the delay, as I was busy on other things.
On 06/19/2018 03:30 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:14PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Detecting a zero page is not light work; we can disable it
for compression that can handle
On 06/14/2018 12:25 AM, Dr. David Alan Gilbert wrote:
}
static void migration_bitmap_sync(RAMState *rs)
@@ -1412,6 +1441,9 @@ static void flush_compressed_data(RAMState *rs)
qemu_mutex_lock(&comp_param[idx].mutex);
if (!comp_param[idx].quit) {
len =
On 06/14/2018 12:17 AM, Dr. David Alan Gilbert wrote:
* guangrong.x...@gmail.com (guangrong.x...@gmail.com) wrote:
From: Xiao Guangrong
It is used to slightly clean the code up, no logic is changed
Actually, there is a slight change; iterations_prev is always updated
when previously
On 06/13/2018 11:51 PM, Dr. David Alan Gilbert wrote:
* guangrong.x...@gmail.com (guangrong.x...@gmail.com) wrote:
From: Xiao Guangrong
The compressed page is not normal page
Is this the right reason?
I think the 'normal' page shouldn't include the compressed
page and XBZRLE-ed page
On 06/13/2018 11:43 PM, Dr. David Alan Gilbert wrote:
* Peter Xu (pet...@redhat.com) wrote:
On Tue, Jun 12, 2018 at 10:42:25AM +0800, Xiao Guangrong wrote:
On 06/11/2018 03:39 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:09PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao
On 06/11/2018 04:00 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:08PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Background
--
The current implementation of compression and decompression is very
hard to enable in production. We noticed that too many wait
On 06/11/2018 03:39 PM, Peter Xu wrote:
On Mon, Jun 04, 2018 at 05:55:09PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Instead of putting the main thread to sleep state to wait for
free compression thread, we can directly post it out as normal
page that reduces the latency
On 06/09/2018 03:29 AM, Eduardo Habkost wrote:
commit f548222c added PC_COMPAT_2_12 to the 3.0 PC machine-types.
I believe this happened during manual conflict resolution when
applying the patch.
Indeed!
Reviewed-by: Xiao Guangrong
On 06/05/2018 06:31 AM, Eric Blake wrote:
On 06/04/2018 04:55 AM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong
Then the users can adjust the parameters based on this info
Currently, it includes:
pages: amount of pages compressed and transferred to the target VM
busy: amount
On 05/03/2018 04:06 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
QEMU 2.13 enables strict check for compression & decompression to
make the migration more robust, which depends on the source to fix
the internal design which triggers the unexpe
On 05/02/2018 10:46 AM, Peter Xu wrote:
On Sat, Apr 28, 2018 at 04:10:45PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
Fix the bug introduced by da3f56cb2e767016 (migration: remove
ram_save_compressed_page()); it should be 'return' rather tha
On 04/27/2018 07:29 PM, Dr. David Alan Gilbert wrote:
Yes, I cannot agree with you more. :)
The challenge is how to put something into the stream without breaking
an old version of QEMU that's receiving the stream.
Er, I did not think of this case :(.
The new parameter as this patch did is
On 04/27/2018 05:31 PM, Peter Xu wrote:
On Fri, Apr 27, 2018 at 11:15:37AM +0800, Xiao Guangrong wrote:
On 04/26/2018 10:01 PM, Eric Blake wrote:
On 04/26/2018 04:15 AM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
QEMU 2.13 enables strict
On 04/26/2018 10:01 PM, Eric Blake wrote:
On 04/26/2018 04:15 AM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
QEMU 2.13 enables strict check for compression & decompression to
make the migration more robust, which depends on the source
On 04/26/2018 05:34 PM, Dr. David Alan Gilbert wrote:
* guangrong.x...@gmail.com (guangrong.x...@gmail.com) wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
QEMU 2.13 enables strict check for compression & decompression to
make the migration more robust, which depends on
Hi Dave,
On 04/26/2018 05:15 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
QEMU 2.13 enables strict check for compression & decompression to
make the migration more robust, which depends on the source to fix
the internal design whic
Hi Paolo, Michael, Stefan and others,
Could anyone merge this patchset if it is okay to you guys?
On 03/30/2018 03:51 PM, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
Changelog in v3:
Following changes are from Peter's review:
1) use comp_param[i
Reviewed-by: Xiao Guangrong <xiaoguangr...@tencent.com>
On 03/29/2018 12:25 PM, Peter Xu wrote:
On Thu, Mar 29, 2018 at 11:51:03AM +0800, Xiao Guangrong wrote:
On 03/28/2018 05:59 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.x...@gmail.com wrote:
[...]
-static int compress_threads_load_setup(void)
+static int
On 03/28/2018 05:59 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 05:10:37PM +0800, guangrong.x...@gmail.com wrote:
[...]
-static int compress_threads_load_setup(void)
+static int compress_threads_load_setup(QEMUFile *f)
{
int i, thread_count;
@@ -2665,6 +2685,7 @@ static int
On 03/28/2018 05:42 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 05:10:36PM +0800, guangrong.x...@gmail.com wrote:
[...]
+static int compress_threads_load_setup(void)
+{
+int i, thread_count;
+
+if (!migrate_use_compression()) {
+return 0;
+}
+
+thread_count =
On 03/28/2018 05:25 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 05:10:35PM +0800, guangrong.x...@gmail.com wrote:
[...]
@@ -357,10 +358,20 @@ static void compress_threads_save_cleanup(void)
terminate_compression_threads();
thread_count = migrate_compress_threads();
for (i
On 03/28/2018 12:20 PM, Peter Xu wrote:
On Wed, Mar 28, 2018 at 12:08:19PM +0800, jiang.bi...@zte.com.cn wrote:
On Tue, Mar 27, 2018 at 10:35:29PM +0800, Xiao Guangrong wrote:
No, we can't make the assumption that "error _must_ be caused by page update".
No document/ABI abou
On 03/28/2018 11:01 AM, Wang, Wei W wrote:
On Tuesday, March 13, 2018 3:58 PM, Xiao Guangrong wrote:
As compression is heavy work, do not do it in the migration thread; instead, we
post it out as a normal page
Signed-off-by: Xiao Guangrong <xiaoguangr...@tencent.com>
Hi Guangrong,
On 03/28/2018 08:43 AM, jiang.bi...@zte.com.cn wrote:
On 03/27/2018 07:17 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 03:42:32AM +0800, Xiao Guangrong wrote:
[...]
It'll be understandable to me if the problem is that the compress()
API does not allow the input buffer to be changed during
On 03/27/2018 07:17 PM, Peter Xu wrote:
On Tue, Mar 27, 2018 at 03:42:32AM +0800, Xiao Guangrong wrote:
[...]
It'll be understandable to me if the problem is that the compress()
API does not allow the input buffer to be changed during the whole
period of the call. If that is a must
On 03/27/2018 03:22 PM, Peter Xu wrote:
On Thu, Mar 22, 2018 at 08:03:53PM +0800, Xiao Guangrong wrote:
On 03/21/2018 06:00 PM, Peter Xu wrote:
On Tue, Mar 13, 2018 at 03:57:34PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
Currently th
On 03/26/2018 05:02 PM, Peter Xu wrote:
On Thu, Mar 22, 2018 at 07:38:07PM +0800, Xiao Guangrong wrote:
On 03/21/2018 04:19 PM, Peter Xu wrote:
On Fri, Mar 16, 2018 at 04:05:14PM +0800, Xiao Guangrong wrote:
Hi David,
Thanks for your review.
On 03/15/2018 06:25 PM, Dr. David Alan
On 03/21/2018 06:00 PM, Peter Xu wrote:
On Tue, Mar 13, 2018 at 03:57:34PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
Currently the page being compressed is allowed to be updated by
the VM on the source QEMU, correspondingly the destinatio
On 03/21/2018 05:06 PM, Peter Xu wrote:
On Tue, Mar 13, 2018 at 03:57:33PM +0800, guangrong.x...@gmail.com wrote:
From: Xiao Guangrong <xiaoguangr...@tencent.com>
Current code uses compress2()/uncompress() to compress/decompress
memory; these two functions manage memory allo
On 03/21/2018 04:19 PM, Peter Xu wrote:
On Fri, Mar 16, 2018 at 04:05:14PM +0800, Xiao Guangrong wrote:
Hi David,
Thanks for your review.
On 03/15/2018 06:25 PM, Dr. David Alan Gilbert wrote:
migration/ram.c | 32
Hi,
Do you have some performance
On 03/19/2018 06:54 PM, Dr. David Alan Gilbert wrote:
+return 0;
+exit:
+compress_threads_load_cleanup();
I don't think this is safe; if inflateInit(..) fails in not-the-last
thread, compress_threads_load_cleanup() will try and destroy all the
mutex's and condition variables, even
On 03/19/2018 03:56 PM, jiang.bi...@zte.com.cn wrote:
Hi, guangrong
@@ -1051,11 +1052,13 @@ static int do_compress_ram_page(QEMUFile *f, z_stream
*stream, RAMBlock *block,
{
RAMState *rs = ram_state;
int bytes_sent, blen;
-uint8_t *p = block->host + (offset & TARGET_PAGE_MASK);
+
On 03/19/2018 09:49 AM, jiang.bi...@zte.com.cn wrote:
Hi, guangrong
+/* return the size after compression, or negative value on error */
+static int qemu_compress_data(z_stream *stream, uint8_t *dest, size_t dest_len,
+ const uint8_t *source, size_t source_len)