From: Manish Mishra
QEMU does not set leaf 0x1f when the VM does not have an extended CPU topology,
and expects guests to fall back to 0xb. Some versions of Windows do not
like this behavior and expect this leaf to be populated. As a result, the
Windows VM fails with a blue screen.
Leaf 0x1f is a superset of 0xb.
QEMU does not set leaf 0x1f when the VM does not have an extended CPU topology,
and expects guests to fall back to 0xb. Some versions of Windows, e.g.
Windows 10 and 11, do not like this behavior and expect this leaf to be
populated. This is observed with Windows VMs with Secure Boot, UEFI,
and the Hyper-V role enabled.
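For illustration, the fallback a guest is expected to apply can be sketched as follows. This is a hypothetical sketch, not QEMU or Windows code; the `cpuid` table is a stand-in for real CPUID register values, using the documented rule that leaf 0x1f is preferred when its subleaf 0 reports a non-zero level type in ECX[15:8].

```python
# Illustrative sketch (not QEMU code): how a guest might pick the CPU
# topology leaf. The cpuid dict below is a hypothetical stand-in for
# real CPUID results: leaf -> (EAX, EBX, ECX, EDX) of subleaf 0.
def pick_topology_leaf(cpuid):
    """Return the CPUID leaf to use for topology enumeration.

    Leaf 0x1f is preferred when it exists and reports a valid
    level type in subleaf 0; otherwise fall back to leaf 0xb.
    """
    max_basic = cpuid[0x0][0]                    # EAX of leaf 0: highest basic leaf
    if max_basic >= 0x1F:
        ecx_type = (cpuid[0x1F][2] >> 8) & 0xFF  # level type in ECX[15:8]
        if ecx_type != 0:                        # 0 means the leaf is invalid
            return 0x1F
    return 0xB

# VM where 0x1f is populated (level type 1 = SMT):
vm_with_1f = {0x0: (0x1F, 0, 0, 0), 0x1F: (0, 2, 1 << 8, 0)}
# VM where 0x1f is left empty, as described above:
vm_without_1f = {0x0: (0x1F, 0, 0, 0), 0x1F: (0, 0, 0, 0)}

print(hex(pick_topology_leaf(vm_with_1f)))     # 0x1f
print(hex(pick_topology_leaf(vm_without_1f)))  # 0xb
```

The bug report amounts to some Windows versions not performing the final fallback step above and instead failing when 0x1f is empty.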
On 21/08/23 5:31 pm, manish.mishra wrote:
Hi Everyone,
We are facing this issue. I see this conversation never reached a conclusion, and the
discussed issue is still present on QEMU master. To summarize, the solution mentioned
in this thread, "temporarily enable bus master memory region", was not taken with the
following justification:
"Poking
On 26/04/23 5:35 pm, Juan Quintela wrote:
"manish.mishra" wrote:
On 26/04/23 4:35 pm, Juan Quintela wrote:
"manish.mishra" wrote:
On 26/04/23 3:58 pm, Juan Quintela wrote:
Before:
while (true) {
sem_post(channels_ready)
}
And you want to add to
On 26/04/23 4:35 pm, Juan Quintela wrote:
"manish.mishra" wrote:
On 26/04/23 3:58 pm, Juan Quintela wrote:
"manish.mishra" wrote:
multifd_send_sync_main() posts a request on the multifd channel
but does not call sem_wait() on the channels_ready semaphore, making
the chan
On 26/04/23 3:58 pm, Juan Quintela wrote:
"manish.mishra" wrote:
multifd_send_sync_main() posts a request on the multifd channel
but does not call sem_wait() on the channels_ready semaphore, making
the channels_ready semaphore count keep increasing.
As a result, a later sem_wait() on channels_ready succeeds even when no
channel is actually free, and the sender keeps searching
for a free channel in a busy loop until a channel is freed.
Signed-off-by: manish.mishra
---
migration/multifd.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/migration/multifd.c b/migration/multifd.c
index cce3ad6988..43d26e7012 100644
--- a/migration
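The imbalance described in this commit message can be sketched with a plain counting semaphore. This is a standalone Python illustration, not QEMU's actual code; the names simply mirror the ones in the commit message.

```python
import threading

# Each "channel" posts channels_ready once per iteration, but the sync
# path waits on it only once, so the semaphore count grows without
# bound and a later wait returns immediately instead of blocking.
channels_ready = threading.Semaphore(0)

def channel_iteration(posts):
    for _ in range(posts):
        channels_ready.release()       # sem_post(channels_ready)

channel_iteration(posts=3)             # three posts from the channels
channels_ready.acquire()               # the single matching sem_wait()

# Two "leaked" counts remain: non-blocking acquires still succeed,
# so a sem_wait() here would not block even with no channel free,
# and the free-channel search would busy-loop.
leaked = 0
while channels_ready.acquire(blocking=False):
    leaked += 1
print(leaked)  # 2
```

The fix direction discussed in the thread is to keep posts and waits on `channels_ready` balanced so the semaphore count reflects the number of genuinely free channels.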
On 09/02/23 7:47 pm, Vitaly Kuznetsov wrote:
Alex Bennée writes:
"manish.mishra" writes:
Hi Everyone,
Checking if there is any feedback on this.
I've expanded the CC list to some relevant maintainers and people who
have touched that code in case this was missed.
Thanks
Hi Everyone,
Checking if there is any feedback on this.
Thanks
Manish Mishra
On 31/01/23 8:17 pm, manish.mishra wrote:
Hi Everyone,
I hope everyone is doing great. We wanted to check why we do not expose support
for HyperV features in Qemu similar to what we do for normal CPU features via
On 31/01/23 8:17 pm, manish.mishra wrote:
Hi Everyone,
I hope everyone is doing great. We wanted to check why we do not expose support
for HyperV features in Qemu similar to what we do for normal CPU features via
query-cpu-defs or cpu-model-expansion QMP commands. This support is required
On 31/01/23 8:47 pm, Peter Xu wrote:
On Tue, Jan 31, 2023 at 08:29:08PM +0530, manish.mishra wrote:
Hi Peter, Daniel,
Just a gentle reminder on this patch, in case it can be merged. Really
sorry, I see now that the earlier reminders I sent were on v6 [0/2] and somehow
you were not CCed on those earlier
Hi Everyone,
I hope everyone is doing great. We wanted to check why we do not expose support
for HyperV features in Qemu similar to what we do for normal CPU features via
query-cpu-defs or cpu-model-expansion QMP commands. This support is required
for live migration with HyperV features as hype
On 21/12/22 12:06 am, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifd or post-copy
preemption channel. This may not be a
Hi Everyone,
I was just checking that it was not missed during the holidays and was received. :)
Thanks
Manish Mishra
On 21/12/22 12:14 am, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first
Signed-off-by: manish.mishra
---
migration/channel.c | 45 +
migration/channel.h | 5
migration/migration.c| 54
migration/multifd.c | 19 +++---
migration/multifd.h | 2 +-
migration/postcopy
by: Daniel P. Berrange
Suggested-by: Daniel P. Berrange
Signed-off-by: manish.mishra
---
chardev/char-socket.c | 4 ++--
include/io/channel.h| 6 ++
io/channel-buffer.c | 1 +
io/channel-command.c| 1 +
io/chan
migration_incoming_setup, but if a multifd channel is received before the
default channel, multifd channels will be uninitialized. Moved
multifd_load_setup to migration_ioc_process_incoming.
manish.mishra (2):
io: Add support for MSG_PEEK for socket channel
migration: check magic value for deciding the
From: "manish.mishra"
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra read flags like
On 29/11/22 7:57 pm, Peter Xu wrote:
On Tue, Nov 29, 2022 at 04:24:58PM +0530, manish.mishra wrote:
On 23/11/22 11:34 pm, Peter Xu wrote:
On Wed, Nov 23, 2022 at 05:27:34PM +, manish.mishra wrote:
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read
On 23/11/22 11:34 pm, Peter Xu wrote:
On Wed, Nov 23, 2022 at 05:27:34PM +, manish.mishra wrote:
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
peek and added one
specific to live migration.
2. Updated to use qemu_co_sleep_ns instead of qio_channel_yield.
3. Some other minor fixes.
v5:
1. Handle busy-wait in migration_channel_read_peek due to partial reads.
manish.mishra (2):
io: Add support for MSG_PEEK for socket channel
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra read flags like
MSG_PEEK.
Reviewed-by: Daniel P. Berrangé
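The MSG_PEEK semantics described above can be demonstrated in a few lines of Python (a standalone sketch of the socket-level behavior, not the QEMU patch itself):

```python
import socket

# A peeked read leaves the data in the socket buffer, so the next
# ordinary read still returns the same bytes.
a, b = socket.socketpair()
a.sendall(b"QEMU")

peeked = b.recv(4, socket.MSG_PEEK)  # data stays queued in the buffer
read = b.recv(4)                     # a normal read sees it again
print(peeked, read)                  # b'QEMU' b'QEMU'

a.close()
b.close()
```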
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifd or post-copy
preemption channel. This may not always be true, as even if a channel has a
connection
On 23/11/22 9:57 pm, Peter Xu wrote:
On Wed, Nov 23, 2022 at 09:28:14PM +0530, manish.mishra wrote:
On 23/11/22 9:22 pm, Peter Xu wrote:
On Wed, Nov 23, 2022 at 03:05:27PM +, manish.mishra wrote:
+int migration_channel_read_peek(QIOChannel *ioc,
+const
On 23/11/22 9:28 pm, Daniel P. Berrangé wrote:
On Wed, Nov 23, 2022 at 03:05:27PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel
On 23/11/22 9:22 pm, Peter Xu wrote:
On Wed, Nov 23, 2022 at 03:05:27PM +, manish.mishra wrote:
+int migration_channel_read_peek(QIOChannel *ioc,
+const char *buf,
+const size_t buflen,
+Error
peek and added one
specific to live migration.
2. Updated to use qemu_co_sleep_ns instead of qio_channel_yield.
3. Some other minor fixes.
manish.mishra (2):
io: Add support for MSG_PEEK for socket channel
migration: check magic value for deciding the mapping of channels
chardev/char
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifd or post-copy
preemption channel. This may not always be true, as even if a channel has a
connection
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra read flags like
MSG_PEEK.
Suggested-by: Daniel P. Berrangé
On 22/11/22 10:03 pm, Peter Xu wrote:
On Tue, Nov 22, 2022 at 11:29:05AM -0500, Peter Xu wrote:
On Tue, Nov 22, 2022 at 11:10:18AM -0500, Peter Xu wrote:
On Tue, Nov 22, 2022 at 09:01:59PM +0530, manish.mishra wrote:
On 22/11/22 8:19 pm, Daniel P. Berrangé wrote:
On Tue, Nov 22, 2022 at 09
On 22/11/22 2:53 pm, Daniel P. Berrangé wrote:
On Mon, Nov 21, 2022 at 01:40:27PM +0100, Juan Quintela wrote:
Het Gala wrote:
To prevent double data encoding of URIs, instead of passing the transport
mechanism, host address, and port all together in the form of a single string
and writing different
On 22/11/22 8:19 pm, Daniel P. Berrangé wrote:
On Tue, Nov 22, 2022 at 09:41:01AM -0500, Peter Xu wrote:
On Tue, Nov 22, 2022 at 02:38:53PM +0530, manish.mishra wrote:
On 22/11/22 2:30 pm, Daniel P. Berrangé wrote:
On Sat, Nov 19, 2022 at 09:36:14AM +, manish.mishra wrote:
MSG_PEEK
On 22/11/22 3:23 pm, Daniel P. Berrangé wrote:
On Tue, Nov 22, 2022 at 03:10:53PM +0530, manish.mishra wrote:
On 22/11/22 2:59 pm, Daniel P. Berrangé wrote:
On Tue, Nov 22, 2022 at 02:38:53PM +0530, manish.mishra wrote:
On 22/11/22 2:30 pm, Daniel P. Berrangé wrote:
On Sat, Nov 19, 2022 at
On 22/11/22 2:59 pm, Daniel P. Berrangé wrote:
On Tue, Nov 22, 2022 at 02:38:53PM +0530, manish.mishra wrote:
On 22/11/22 2:30 pm, Daniel P. Berrangé wrote:
On Sat, Nov 19, 2022 at 09:36:14AM +, manish.mishra wrote:
MSG_PEEK reads from the head of the channel; the data is treated as
unread
On 22/11/22 2:30 pm, Daniel P. Berrangé wrote:
On Sat, Nov 19, 2022 at 09:36:14AM +, manish.mishra wrote:
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra
On 19/11/22 3:06 pm, manish.mishra wrote:
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra
possible with TLS, hence this logic is avoided for TLS
live migrations. This patch uses MSG_PEEK to check the magic number of
channels so that the current data/control stream management remains
unaffected.
Suggested-by: Daniel P. Berrangé
Signed-off-by: manish.mishra
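The idea can be sketched as follows: peek the first 4 bytes of each incoming connection and classify the channel by its magic, instead of assuming connections arrive in a fixed order. This is an illustrative Python sketch, not the QEMU implementation; `QEMU_VM_FILE_MAGIC` is the well-known 'QEVM' value (0x5145564d, also visible in the error log quoted later in this thread), while the `MULTIFD_MAGIC` value here is an assumed placeholder.

```python
import socket
import struct

QEMU_VM_FILE_MAGIC = 0x5145564D  # 'QEVM', the main migration stream magic
MULTIFD_MAGIC = 0x11223344       # placeholder value for illustration

def classify_channel(conn):
    # MSG_PEEK leaves the bytes queued, so the real handler for this
    # channel still reads a complete, untouched stream afterwards.
    head = conn.recv(4, socket.MSG_PEEK)
    (magic,) = struct.unpack(">I", head)
    if magic == QEMU_VM_FILE_MAGIC:
        return "main"
    if magic == MULTIFD_MAGIC:
        return "multifd"
    return "unknown"

a, b = socket.socketpair()
a.sendall(struct.pack(">I", QEMU_VM_FILE_MAGIC))
print(classify_channel(b))  # main
```

Because the peek is non-destructive, dispatching this way does not disturb the existing data/control stream handling, which is the point made in the commit message above.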
v2:
TLS does not support
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifd or post-copy
preemption channel. This may not always be true, as even if a channel has a
connection
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra read flags like
MSG_PEEK.
Suggested-by: Daniel P. Berrangé
MSG_PEEK reads from the head of the channel; the data is treated as
unread and the next read shall still return this data. This
support is currently added only for socket class. Extra parameter
'flags' is added to io_readv calls to pass extra read flags like
MSG_PEEK.
---
chardev/char-socket.c
On 16/11/22 4:57 pm, Daniel P. Berrangé wrote:
On Wed, Nov 16, 2022 at 04:49:18PM +0530, manish.mishra wrote:
On 16/11/22 12:20 am, Daniel P. Berrangé wrote:
On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote
On 16/11/22 12:20 am, Daniel P. Berrangé wrote:
On Tue, Nov 15, 2022 at 06:11:30PM +, Daniel P. Berrangé wrote:
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as
On 15/11/22 11:06 pm, Peter Xu wrote:
Hi, Manish,
On Mon, Nov 07, 2022 at 04:51:59PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel
Thanks Peter
On 14/11/22 10:21 pm, Peter Xu wrote:
Manish,
On Thu, Nov 03, 2022 at 11:47:51PM +0530, manish.mishra wrote:
Yes, but if we try to read early on the main channel in the TLS-enabled case it is an
issue. Sorry, I may not have put the above comment clearly. I will try to put the
scenario step
On 11/11/22 4:17 am, Peter Xu wrote:
On Thu, Nov 10, 2022 at 05:59:45PM +0530, manish.mishra wrote:
Hi Everyone, Just a gentle reminder for review. :)
Hi, Manish,
I've got a slightly busy week, sorry! If Daniel and Juan won't have time
to look at it I'll have a closer
Hi Everyone, Just a gentle reminder for review. :)
Thanks
Manish Mishra
On 07/11/22 10:21 pm, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel
On 07/11/22 10:21 pm, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the main channel followed by the multifd or post-copy
preemption channel. This may not be
possible with TLS, hence this logic is avoided for TLS
live migrations. This patch uses MSG_PEEK to check the magic number of
channels so that the current data/control stream management remains
unaffected.
Suggested-by: Daniel P. Berrangé
Signed-off-by: manish.mishra
v2:
TLS does not support
On 03/11/22 11:47 pm, manish.mishra wrote:
On 03/11/22 11:27 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 11:06:23PM +0530, manish.mishra wrote:
On 03/11/22 10:57 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 10:04:54PM +0530, manish.mishra wrote:
On 03/11/22 2:59 pm
On 03/11/22 11:47 pm, manish.mishra wrote:
On 03/11/22 11:27 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 11:06:23PM +0530, manish.mishra wrote:
On 03/11/22 10:57 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 10:04:54PM +0530, manish.mishra wrote:
On 03/11/22 2:59 pm
On 03/11/22 11:27 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 11:06:23PM +0530, manish.mishra wrote:
On 03/11/22 10:57 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 10:04:54PM +0530, manish.mishra wrote:
On 03/11/22 2:59 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022
On 03/11/22 10:57 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 10:04:54PM +0530, manish.mishra wrote:
On 03/11/22 2:59 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 02:50:25PM +0530, manish.mishra wrote:
On 01/11/22 9:15 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022
On 03/11/22 2:59 pm, Daniel P. Berrangé wrote:
On Thu, Nov 03, 2022 at 02:50:25PM +0530, manish.mishra wrote:
On 01/11/22 9:15 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 09:10:14PM +0530, manish.mishra wrote:
On 01/11/22 8:21 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at
On 01/11/22 9:15 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 09:10:14PM +0530, manish.mishra wrote:
On 01/11/22 8:21 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 02:30:29PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side
On 01/11/22 9:15 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 09:10:14PM +0530, manish.mishra wrote:
On 01/11/22 8:21 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 02:30:29PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side
On 01/11/22 8:21 pm, Daniel P. Berrangé wrote:
On Tue, Nov 01, 2022 at 02:30:29PM +, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
always established in the same order as the source and the first one will
always be the default channel
Sorry for the late patch on this. I mentioned I would send it last week itself, but
later realised it was a festival week in India, so it was mostly holidays.
Thanks
Manish Mishra
On 01/11/22 8:00 pm, manish.mishra wrote:
Current logic assumes that channel connections on the destination side are
management remains unaffected.
Signed-off-by: manish.mishra
---
include/io/channel.h | 25 +
io/channel-socket.c | 27 +++
io/channel.c | 39 +++
migration/migration.c| 33
On 21/10/22 3:37 am, Daniel P. Berrangé wrote:
On Thu, Oct 20, 2022 at 12:32:06PM -0400, Peter Xu wrote:
On Thu, Oct 20, 2022 at 08:14:19PM +0530, manish.mishra wrote:
I had one concern: during recovery we do not send any magic. As of now we
do not support multifd with postcopy so it
Mon, Oct 17, 2022 at 01:06:00PM +0530, manish.mishra wrote:
Hi Daniel,
I was thinking of some solutions for this and wanted to discuss them before
going ahead. I have also added Juan and Peter to the loop.
1. Earlier I was thinking, on the destination side, as of now for the default
and multi-FD channels the first data
migration to/from old QEMU, but then that
can come as a migration capability?
Please let me know if any of these works or if you have some other suggestions?
Thanks
Manish Mishra
On 13/10/22 1:45 pm, Daniel P. Berrangé wrote:
On Thu, Oct 13, 2022 at 01:23:40AM +0530, manish.mishra wrote:
Hi
On 13/10/22 1:45 pm, Daniel P. Berrangé wrote:
On Thu, Oct 13, 2022 at 01:23:40AM +0530, manish.mishra wrote:
Hi Everyone,
Hope everyone is doing great. I have seen some live migration issues with
qemu-4.2 when using multifd. The signature of the issue is something like this:
2022-10-01T09:57
Hi Everyone,
Hope everyone is doing great. I have seen some live migration issues with
qemu-4.2 when using multifd. The signature of the issue is something like this:
2022-10-01T09:57:53.972864Z qemu-kvm: failed to receive packet via multifd
channel 0: multifd: received packet magic 5145564d expected 11
On 16/06/22 9:20 pm, Dr. David Alan Gilbert wrote:
* Daniel P. Berrangé (berra...@redhat.com) wrote:
On Wed, Jun 15, 2022 at 05:43:28PM +0100, Daniel P. Berrangé wrote:
On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
On Thu, Jun
On 17/06/22 12:17 am, Dr. David Alan Gilbert wrote:
* Het Gala (het.g...@nutanix.com) wrote:
i) Dynamically decide appropriate source and destination ip pairs for the
corresponding multi-FD channel to be connected.
ii) Removed the support for setting the number of multi-fd channels from q
Hi Daniel, David,
Thank you so much for the review on the patches. I am posting this message on
behalf of Het. We wanted to get early feedback, so sorry if the code was not
in the best shape. Het is currently on an internship break so does not have
access to his Nutanix mail; he will join in the first week of j
On 16/06/22 1:46 pm, Daniel P. Berrangé wrote:
On Wed, Jun 15, 2022 at 08:14:26PM +0100, Dr. David Alan Gilbert wrote:
* Daniel P. Berrangé (berra...@redhat.com) wrote:
On Fri, Jun 10, 2022 at 05:58:31PM +0530, manish.mishra wrote:
On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
On Thu, Jun
On 13/06/22 8:03 pm, Peter Xu wrote:
On Mon, Jun 13, 2022 at 03:28:34PM +0530, manish.mishra wrote:
On 26/05/22 8:21 am, Jason Wang wrote:
On Wed, May 25, 2022 at 11:56 PM Peter Xu wrote:
On Wed, May 25, 2022 at 11:38:26PM +0800, Hyman Huang wrote:
2. Also, this algorithm only controls or
On 26/05/22 8:21 am, Jason Wang wrote:
On Wed, May 25, 2022 at 11:56 PM Peter Xu wrote:
On Wed, May 25, 2022 at 11:38:26PM +0800, Hyman Huang wrote:
2. Also, this algorithm only controls or limits the dirty rate from guest
writes. There can be some memory dirtying done by virtio-based devices
which i
On 09/06/22 9:17 pm, Daniel P. Berrangé wrote:
On Thu, Jun 09, 2022 at 07:33:01AM +, Het Gala wrote:
As of now, the multi-FD feature supports connections over the default network
only. This patchset is a QEMU-side implementation of providing support for
multiple interfaces for multi-FD.
On 26/05/22 8:21 am, Jason Wang wrote:
On Wed, May 25, 2022 at 11:56 PM Peter Xu wrote:
On Wed, May 25, 2022 at 11:38:26PM +0800, Hyman Huang wrote:
2. Also, this algorithm only controls or limits the dirty rate from guest
writes. There can be some memory dirtying done by virtio-based devices
which
On 23/05/22 4:26 pm, Dr. David Alan Gilbert wrote:
* Peter Xu (pet...@redhat.com) wrote:
With preemption mode on, when we see a postcopy request for exactly the page
that we have preempted before (so we've partially sent
the page already via the PRECOPY channel and it got preem
On 17/05/22 1:49 pm, Hyman Huang wrote:
Thanks, Manish, for the comment; I'll give my explanation and any supplements are
welcome.
Really sorry for such a late reply, Hyman; this slipped my mind.
On 2022/5/17 1:13, manish.mishra wrote:
Hi Hyman Huang,
I had a few doubts regarding this patch s
Hi Hyman Huang,
I had a few doubts regarding this patch series.
1. Why do we choose a per-vCPU dirty rate limit? I mean, it becomes very hard
for the user to decide a per-vCPU dirty rate limit. For example, we have a
1 Gbps network and a 10-vCPU VM. Now if someone wants to
keep a criterion for convergence
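The arithmetic behind this doubt can be sketched as follows (assuming, unrealistically, perfectly uniform dirtying across vCPUs):

```python
# For migration to converge, the aggregate dirty rate must stay below
# the network bandwidth. With a per-vCPU limit, the user must divide
# that budget themselves, which only works if dirtying is uniform.
link_gbps = 1
vcpus = 10

link_bytes_per_s = link_gbps * 1e9 / 8     # 1 Gbps = 125 MB/s
per_vcpu_limit = link_bytes_per_s / vcpus  # naive even split per vCPU

print(per_vcpu_limit / 2**20)  # ~11.9 MiB/s per vCPU
```

If one vCPU dirties far more than the others, this even split either over-throttles the quiet vCPUs or fails to converge, which is why a single VM-wide limit is easier for the user to reason about.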
On 16/05/22 8:21 pm, manish.mishra wrote:
On 16/05/22 7:41 pm, Peter Xu wrote:
Hi, Manish,
On Mon, May 16, 2022 at 07:01:35PM +0530, manish.mishra wrote:
On 26/04/22 5:08 am, Peter Xu wrote:
LGTM,
Peter, I wanted to give a review tag for this and the earlier patch too. I am new
to the QEMU
review
On 26/04/22 5:08 am, Peter Xu wrote:
This is v5 of postcopy preempt series. It can also be found here:
https://github.com/xzpeter/qemu/tree/postcopy-preempt
RFC: https://lore.kernel.org/qemu-devel/20220119080929.39485-1-pet...@redhat.com
V1: https://lore.kernel.org/qemu-devel/20220216062
On 26/04/22 5:08 am, Peter Xu wrote:
Add a parameter that can conditionally disable the "break sending huge
page" behavior in postcopy preemption. By default it's enabled.
It should only be used for debugging purposes, and we should never remove
the "x-" prefix.
Signed-off-by: Peter Xu
Revie
On 16/05/22 7:41 pm, Peter Xu wrote:
Hi, Manish,
On Mon, May 16, 2022 at 07:01:35PM +0530, manish.mishra wrote:
On 26/04/22 5:08 am, Peter Xu wrote:
LGTM,
Peter, I wanted to give a review tag for this and the earlier patch too. I am new
to the QEMU
review process, so I am not sure how to give a review tag; did not
On 26/04/22 5:08 am, Peter Xu wrote:
This patch allows the postcopy preempt channel to be created
asynchronously. The benefit is that when the connection is slow, we won't
take the BQL (and potentially block all things like QMP) for a long time
without releasing.
A function postcopy_preempt_wa
On 26/04/22 5:08 am, Peter Xu wrote:
LGTM,
Peter, I wanted to give a review tag for this and the earlier patch too. I am
new to the QEMU
review process, so I am not sure how to give one; I did not
find any reference on
Google either. So please let me know how to do it.
To allow postcopy recovery, the r
On 12/05/22 9:52 pm, Peter Xu wrote:
Hi, Manish,
On Wed, May 11, 2022 at 09:24:28PM +0530, manish.mishra wrote:
@@ -1962,9 +2038,17 @@ static bool get_queued_page(RAMState *rs,
PageSearchStatus *pss)
RAMBlock *block;
ram_addr_t offset;
+again:
block = unqueue_page(rs
LGTM
On 26/04/22 5:08 am, Peter Xu wrote:
Create a new socket for postcopy to be prepared to send postcopy requested
pages via this specific channel, so as to not get blocked by precopy pages.
A new thread is also created on dest qemu to receive data from this new channel
based on the ram_load_
On 31/03/22 8:38 pm, Peter Xu wrote:
LGTM
This patch enables postcopy-preempt feature.
It contains two major changes to the migration logic:
(1) Postcopy requests are now sent via a different socket from the precopy
background migration stream, so as to be isolated from very high page
r
ore trying throttling.
Signed-off-by: manish.mishra
---
migration/ram.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index 7a43bfd7af..9ba1c8b235 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1006,8 +1006,12 @@ static v