Branch: refs/heads/master
Home: https://github.com/qemu/qemu
Commit: a7a3784128fa1de275b5eb2406f3f46842fdbd1a
https://github.com/qemu/qemu/commit/a7a3784128fa1de275b5eb2406f3f46842fdbd1a
Author: Akihiko Odaki <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/pci/pci.c
M hw/vfio/pci.c
M include/hw/pci/pci_device.h
Log Message:
-----------
hw/pci: Use -1 as the default value for rombar
vfio_pci_size_rom() distinguishes whether rombar is explicitly set to 1
by checking dev->opts, bypassing the QOM property infrastructure.
Use -1 as the default value for rombar to tell if the user explicitly
set it to 1. The property is also converted from unsigned to signed.
-1 is signed, so it is safe to give it a new meaning. The values in
[2^31, 2^32) become invalid, but nobody should have typed those values
anyway.
Suggested-by: Markus Armbruster <[email protected]>
Signed-off-by: Akihiko Odaki <[email protected]>
Reviewed-by: Markus Armbruster <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Commit: ad1ea5ffa10d4cf365c142caf627f2c43b3592c2
https://github.com/qemu/qemu/commit/ad1ea5ffa10d4cf365c142caf627f2c43b3592c2
Author: Akihiko Odaki <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/core/qdev.c
M include/hw/qdev-core.h
M system/qdev-monitor.c
Log Message:
-----------
qdev: Remove opts member
It is no longer used.
Signed-off-by: Akihiko Odaki <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Markus Armbruster <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Commit: 558ee1ede6cc95d3dde806f0ac323911c5dbb4b4
https://github.com/qemu/qemu/commit/558ee1ede6cc95d3dde806f0ac323911c5dbb4b4
Author: Philippe Mathieu-Daudé <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M accel/tcg/tcg-all.c
M hw/core/meson.build
A hw/core/qdev-user.c
M include/hw/qdev-core.h
Log Message:
-----------
qdev: Implement qdev_create_fake_machine() for user emulation
When a QDev instance is realized, qdev_get_machine() ends up called.
In the next commit, qdev_get_machine() will require a "machine"
container to always be present. To satisfy this QOM container design,
implement qdev_create_fake_machine(), which creates a fake "machine"
container for user emulation.
On system emulation, qemu_create_machine() is called from qemu_init().
For user emulation, since the TCG accelerator always calls
tcg_init_machine(), we use it to hook our fake machine creation.
Suggested-by: Peter Xu <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Acked-by: Peter Xu <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Message-Id: <[email protected]>
Commit: 63450f322bf76faab7add3def89815d9198492dc
https://github.com/qemu/qemu/commit/63450f322bf76faab7add3def89815d9198492dc
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/core/qdev.c
Log Message:
-----------
qdev: Make qdev_get_machine() not use container_get()
Currently, qdev_get_machine() slightly misuses container_get(): the
helper says "get a container", but in reality the goal is to get the
machine object. It is still a "container", but not strictly speaking.
Note that it _may_ get a container (at "/machine") in our current unit test
of test-qdev-global-props.c before all these changes, but it's probably
unexpected and worked by accident.
Switch to an explicit object_resolve_path_component(), with the side
benefit that, since qdev_get_machine() can be called a lot, we no
longer need to split the string ("/machine") every time. This also
paves the way for making the helper container_get() never return a
non-container at all.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: 41fc91772841c93c218df78d7e359cb2cd00dff5
https://github.com/qemu/qemu/commit/41fc91772841c93c218df78d7e359cb2cd00dff5
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/core/qdev.c
M include/hw/qdev-core.h
Log Message:
-----------
qdev: Add machine_get_container()
Add a helper to fetch machine containers, with some sanity checks
around it.
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: 1c34335844950a152c020ec80ce7cf711b1861bc
https://github.com/qemu/qemu/commit/1c34335844950a152c020ec80ce7cf711b1861bc
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/core/gpio.c
M hw/core/qdev.c
M hw/core/sysbus.c
M hw/i386/pc.c
M system/ioport.c
M system/memory.c
M system/qdev-monitor.c
M system/vl.c
Log Message:
-----------
qdev: Use machine_get_container()
Use machine_get_container() whenever applicable across the tree.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: 180e8f16f0ad6835ce0c437c7ffc9f25801a399e
https://github.com/qemu/qemu/commit/180e8f16f0ad6835ce0c437c7ffc9f25801a399e
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M include/qom/object.h
M qom/object.c
Log Message:
-----------
qom: Add object_get_container()
Add a helper to fetch a root container (under object_get_root()),
sanity-checking the type of the object.
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Daniel P. Berrangé <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: d3176a9f387f8b6b56882045d36f5b3f82565d90
https://github.com/qemu/qemu/commit/d3176a9f387f8b6b56882045d36f5b3f82565d90
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M backends/cryptodev.c
M chardev/char.c
M qom/object.c
M scsi/pr-manager.c
M ui/console.c
M ui/dbus-chardev.c
Log Message:
-----------
qom: Use object_get_container()
Use object_get_container() whenever applicable across the tree.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: f6f0284b6fd495c4a0d7d3b91317105d8e1a8bf3
https://github.com/qemu/qemu/commit/f6f0284b6fd495c4a0d7d3b91317105d8e1a8bf3
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M include/qom/object.h
M qom/container.c
Log Message:
-----------
qom: Remove container_get()
Now there's no user of container_get(), remove it.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Philippe Mathieu-Daudé <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Commit: bc4e7522ad19890eab8cf1df04360abf610b1236
https://github.com/qemu/qemu/commit/bc4e7522ad19890eab8cf1df04360abf610b1236
Author: Paolo Bonzini <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M include/qom/object.h
M qom/object.c
Log Message:
-----------
qom: remove unused InterfaceInfo::concrete_class field
The "concrete_class" field of InterfaceClass is only ever written, and as far
as I can tell is not particularly useful when debugging either; remove it.
Signed-off-by: Paolo Bonzini <[email protected]>
Reviewed-by: Peter Maydell <[email protected]>
Message-ID: <[email protected]>
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Commit: 5f396935f8f1628005ef14a3c4c3dc84c6aa3d96
https://github.com/qemu/qemu/commit/5f396935f8f1628005ef14a3c4c3dc84c6aa3d96
Author: Philippe Mathieu-Daudé <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M system/vl.c
Log Message:
-----------
system: Inline machine_containers[] in qemu_create_machine_containers()
Only qemu_create_machine_containers() uses the machine_containers[]
array; restrict its scope to this single user.
Signed-off-by: Philippe Mathieu-Daudé <[email protected]>
Acked-by: Peter Xu <[email protected]>
Reviewed-by: Richard Henderson <[email protected]>
Message-Id: <[email protected]>
Commit: d127294f265e6a17f8d614f2bef7df8455e81f56
https://github.com/qemu/qemu/commit/d127294f265e6a17f8d614f2bef7df8455e81f56
Author: Shameer Kolothum <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-uadk.c
Log Message:
-----------
migration/multifd: Fix compile error caused by page_size usage
From commit 90fa121c6c07 ("migration/multifd: Inline page_size and
page_count") onwards, page_size is no longer part of MultiFD*Params;
an inline constant is used instead.
However, that commit missed updating one old usage, causing a compile
error.
Fixes: 90fa121c6c07 ("migration/multifd: Inline page_size and page_count")
Signed-off-by: Shameer Kolothum <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 1d457daf868191dc1c0b58dc7280799964f40334
https://github.com/qemu/qemu/commit/1d457daf868191dc1c0b58dc7280799964f40334
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/ram.c
Log Message:
-----------
migration/multifd: Further remove the SYNC on complete
Commit 637280aeb2 ("migration/multifd: Avoid the final FLUSH in
complete()") stopped sending the RAM_SAVE_FLAG_MULTIFD_FLUSH flag at
ram_save_complete(), because the sync on the destination side is not
needed due to the last iteration of find_dirty_block() having already
done it.
However, that commit overlooked that multifd_ram_flush_and_sync() on the
source side is also not needed at ram_save_complete(), for the same
reason.
Moreover, removing the RAM_SAVE_FLAG_MULTIFD_FLUSH but keeping the
multifd_ram_flush_and_sync() means that currently the recv threads will
hang when receiving the MULTIFD_FLAG_SYNC message, waiting for the
destination sync which only happens when RAM_SAVE_FLAG_MULTIFD_FLUSH is
received.
Luckily, multifd still works fine, because the recv-side cleanup code
(mostly multifd_recv_sync_main()) is smart enough to make sure that
even if the recv threads are stuck at SYNC, they'll get kicked out.
And since this is the completion phase of migration, nothing else will
be sent after the SYNCs.
This needs to be fixed because in the future VFIO will have data to push
after ram_save_complete() and we don't want the recv thread to be stuck
in the MULTIFD_FLAG_SYNC message.
Remove the unnecessary (and buggy) invocation of
multifd_ram_flush_and_sync().
For very old binaries (multifd_flush_after_each_section==true), the
flush_and_sync is still needed, because each EOS received on the
destination enforces an all-channel sync.
Stable branches do not need this patch, as I cannot think of a real
bug that would actually trigger there; no Fixes tag is attached, to
make clear that no backport is needed.
Reviewed-by: Fabiano Rosas <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 10801e08ac926a5a6083a9bd2ff87b153ccb95b1
https://github.com/qemu/qemu/commit/10801e08ac926a5a6083a9bd2ff87b153ccb95b1
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-nocomp.c
M migration/multifd.c
M migration/multifd.h
Log Message:
-----------
migration/multifd: Allow to sync with sender threads only
Teach multifd_send_sync_main() to sync with threads only.
We already have such a request: when mapped-ram is enabled with
multifd. In that case, no SYNC messages are pushed to the stream when
multifd syncs the sender threads, because there are no destination
threads waiting for them. The whole point of the sync is to make sure
all threads have finished their jobs.
So fundamentally we have a request to do the sync in different ways:
- Either to sync the threads only,
- Or to sync the threads but also with the destination side.
Mapped-ram did it already because of the use_packet check in the sync
handler of the sender thread. It works.
However, it may stop working when e.g. VFIO starts to reuse multifd
channels to push device state. In that case VFIO has a similar request
for a "thread-only sync", but we can't just check a flag, because such
a sync request can still come from RAM, which needs the on-wire
notifications.
Pave the way for that by allowing the caller of
multifd_send_sync_main() to specify what kind of sync it needs. We can
use it for mapped-ram already.
No functional change intended.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 604b4749c58f676aa37bd4d96496152f36f3b293
https://github.com/qemu/qemu/commit/604b4749c58f676aa37bd4d96496152f36f3b293
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/ram.c
M migration/ram.h
M migration/rdma.h
Log Message:
-----------
migration/ram: Move RAM_SAVE_FLAG* into ram.h
Firstly, we're going to use the multifd flag soon in multifd code, so
keeping it in ram.c won't work.
Secondly, we have a separate RDMA flag dangling around, which is
definitely not obvious. There's one comment that helps, but not much.
Put all the RAM save flags together, so nothing gets overlooked.
Add a section explaining why we can't use bits over 0x200.
Remove RAM_SAVE_FLAG_FULL, as it's already unused in QEMU, as the
comment explains.
Reviewed-by: Fabiano Rosas <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: e5f14aa5fe7f44649f5413558cac81c09d6c7f93
https://github.com/qemu/qemu/commit/e5f14aa5fe7f44649f5413558cac81c09d6c7f93
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-nocomp.c
M migration/multifd.h
M migration/ram.c
Log Message:
-----------
migration/multifd: Unify RAM_SAVE_FLAG_MULTIFD_FLUSH messages
A RAM_SAVE_FLAG_MULTIFD_FLUSH message should always correlate with a
sync request on the source. Unify the sending of this message into one
place, and send it only when necessary.
Reviewed-by: Fabiano Rosas <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: de695b1399242da0c618049932a9a6f1a0a0a4f1
https://github.com/qemu/qemu/commit/de695b1399242da0c618049932a9a6f1a0a0a4f1
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/ram.c
Log Message:
-----------
migration/multifd: Remove sync processing on postcopy
Multifd has never worked with postcopy, at least not so far.
Remove the sync processing there, because it's confusing, and those
messages should never appear. Now, if RAM_SAVE_FLAG_MULTIFD_FLUSH is
observed, we fail hard instead of trying to invoke multifd code.
Reviewed-by: Fabiano Rosas <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 1aa81c3098f0270905deff516d455604fcbfaab5
https://github.com/qemu/qemu/commit/1aa81c3098f0270905deff516d455604fcbfaab5
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-nocomp.c
M migration/multifd.h
M migration/ram.c
Log Message:
-----------
migration/multifd: Cleanup src flushes on condition check
The src flush condition check is overcomplicated, and it would get
even more out of control once postcopy is involved.
In general, we have two modes to do the sync: the legacy way or the
modern way. Legacy uses a per-section flush, modern a per-round flush.
Mapped-ram always uses the modern, per-round flush.
Introduce two helpers, which greatly simplify the code and hopefully
make it readable again.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: baab4473dba2b85adf3c0622b92bc209f7a8dec0
https://github.com/qemu/qemu/commit/baab4473dba2b85adf3c0622b92bc209f7a8dec0
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/ram.c
Log Message:
-----------
migration/multifd: Document the reason to sync for save_setup()
It's not straightforward to see why the source QEMU needs to sync
multifd during the setup() phase; after all, no page is queued at that
point.
For old QEMUs, there's a solid reason: EOS requires it to work. It's
less clear for new QEMUs, which do not take the EOS message as a sync
request.
One figures that out only when the sync is conditionally removed; in
fact, the author did try it out. Logically we could still avoid doing
this on new machine types, but that needs a separate compat field,
which can be overkill for some trivial overhead in the setup() phase.
Let's instead document it thoroughly, to avoid someone trying this
again and having to debug it one more time, or anyone being confused
about why it ever existed.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: b93d897ea2f0abbe7fc341a9ac176b5ecd0f3c93
https://github.com/qemu/qemu/commit/b93d897ea2f0abbe7fc341a9ac176b5ecd0f3c93
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd.c
Log Message:
-----------
migration/multifd: Fix compat with QEMU < 9.0
Commit f5f48a7891 ("migration/multifd: Separate SYNC request with
normal jobs") changed the multifd source side to stop sending data
along with the MULTIFD_FLAG_SYNC, effectively introducing the concept
of a SYNC-only packet. Relying on that, commit d7e58f412c
("migration/multifd: Don't send ram data during SYNC") later came
along and skipped reading data from SYNC packets.
In a version timeline like this:
8.2 f5f48a7 9.0 9.1 d7e58f41 9.2
The issue arises that QEMUs < 9.0 still send data along with SYNC, but
QEMUs > 9.1 don't gather that data anymore. This leads to various
kinds of migration failures due to desync/missing data.
Stop checking for a SYNC packet on the destination and unconditionally
unfill the packet.
From now on:
old -> new:
the source sends data + sync, destination reads normally
new -> new:
source sends only sync, destination reads zeros
new -> old:
source sends only sync, destination reads zeros
CC: [email protected]
Fixes: d7e58f412c ("migration/multifd: Don't send ram data during SYNC")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2720
Reviewed-by: Peter Xu <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 7815f69867da92335055d4b5248430b0f122ce4e
https://github.com/qemu/qemu/commit/7815f69867da92335055d4b5248430b0f122ce4e
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/migration.c
Log Message:
-----------
migration: Add helper to get target runstate
In 99% of cases, after QEMU migrates to the destination host, it tries
to detect the target VM runstate using global_state_get_runstate().
There's one outlier so far, which is Xen, that won't send the global
state.
That's the major reason why the global_state_received() check was
always there together with global_state_get_runstate().
However, it's utterly confusing why global_state_received() has
anything to do with "let's start the VM or not".
Provide a helper to explain it; then we have a unified entry point for
getting the runstate of the destination QEMU after migration.
Suggested-by: Fabiano Rosas <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: e4e5e89bbd8e731e86735d9d25b7b5f49e8f08b6
https://github.com/qemu/qemu/commit/e4e5e89bbd8e731e86735d9d25b7b5f49e8f08b6
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M monitor/qmp-cmds.c
Log Message:
-----------
qmp/cont: Only activate disks if migration completed
As the comment says, the activation of disks is for the case where
migration has completed, rather than when QEMU is still in the middle
of migration (RUN_STATE_INMIGRATE).
Move the code over to reflect what the comment describes.
Cc: Kevin Wolf <[email protected]>
Cc: Markus Armbruster <[email protected]>
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: fca9aef1c8d8fc4482cc541638dbfac76dc125d6
https://github.com/qemu/qemu/commit/fca9aef1c8d8fc4482cc541638dbfac76dc125d6
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/migration.c
Log Message:
-----------
migration/block: Make late-block-active the default
Migration capability 'late-block-active' controls when the block
drives will be activated. If enabled, block drives will only be
activated once the VM starts: right away if the source runstate was
"live" (RUNNING or SUSPENDED), otherwise postponed until qmp_cont().
Let's do this unconditionally. There's no harm in delaying the
activation of block drives. Meanwhile there's no ABI breakage if the
destination does it, because the source QEMU has nothing to do with
it, so ABI breakage is not a concern.
IIUC we could have avoided introducing this cap in the first place,
but it's still not too late to just always enable the behavior. The
cap is now prone to removal, but that is left for later patches.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 61f2b489987c51159c53101a072c6aa901b50506
https://github.com/qemu/qemu/commit/61f2b489987c51159c53101a072c6aa901b50506
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/savevm.c
Log Message:
-----------
migration/block: Apply late-block-active behavior to postcopy
Postcopy never cared about late-block-active. However, the capability
makes no mention that it doesn't apply to postcopy.
Considering that we _assumed_ late activation is always good, do it
unconditionally for postcopy too, just like precopy. After this patch,
the behavior should be unified across the board.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 8c97c5a476d146b35b2873ef73df601216a494d9
https://github.com/qemu/qemu/commit/8c97c5a476d146b35b2873ef73df601216a494d9
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/migration.c
M migration/savevm.c
Log Message:
-----------
migration/block: Fix possible race with block_inactive
The source QEMU sets block_inactive=true very early, before the
invalidation takes place. This means that if something goes wrong
after setting the flag but before reaching
qemu_savevm_state_complete_precopy_non_iterable(), where the
invalidation work is done, the block_inactive flag will be
inconsistent.
For example, think about when qemu_savevm_state_complete_precopy_iterable()
can fail: it will have block_inactive set to true even if all block drives
are active.
Fix that by only updating the flag after the invalidation is done.
There is no Fixes tag for any commit, because it's not an issue as
long as bdrv_activate_all() is re-entrant on all-active disks: a
false-positive block_inactive brings nothing worse than "trying to
activate the blocks, but they're already active". However, let's still
do it right to avoid the flag being inconsistent with reality.
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 8597af76153a87068b675d8099063c3ad8695773
https://github.com/qemu/qemu/commit/8597af76153a87068b675d8099063c3ad8695773
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M include/migration/misc.h
A migration/block-active.c
M migration/colo.c
M migration/meson.build
M migration/migration.c
M migration/migration.h
M migration/savevm.c
M migration/trace-events
M monitor/qmp-cmds.c
Log Message:
-----------
migration/block: Rewrite disk activation
This patch proposes a flag to maintain disk activation status
globally. It mostly rewrites disk activation management for QEMU,
including COLO and the QMP command xen_save_devices_state.
Backgrounds
===========
We have two problems on disk activations, one resolved, one not.
Problem 1: disk activation recover (for switchover interruptions)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When migration is cancelled or fails during switchover, especially
after the disks have been inactivated, QEMU needs to remember to
re-activate the disks before the VM starts.
It used to be done separately in two paths: one in qmp_migrate_cancel(),
the other one in the failure path of migration_completion().
It used to be fixed in different commits, all over the place in QEMU.
These are the relevant changes I saw; I'm not sure whether the list is
complete:
- In 2016, commit fe904ea824 ("migration: regain control of images when
migration fails to complete")
- In 2017, commit 1d2acc3162 ("migration: re-active images while migration
been canceled after inactive them")
- In 2023, commit 6dab4c93ec ("migration: Attempt disk reactivation in
more failure scenarios")
Now, since we have a slightly better picture, maybe we can unify the
reactivation in a single path.
One side benefit of doing so is that we can move the disk operation
out of the QMP command "migrate_cancel". It's possible that in the
future we may want to make "migrate_cancel" OOB-compatible, which
requires that the command not need the BQL in the first place. This
already does that and makes the migrate_cancel command lightweight.
Problem 2: disk invalidation on top of invalidated disks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This is an unresolved bug in current QEMU; see the link in "Resolves:"
at the end. It turns out that besides the src switchover phase
(problem 1 above), QEMU also needs to remember block activation on the
destination.
Consider two migrations in a row, where the VM was paused throughout.
In that scenario, the disks are not activated even after migration
completes in the 1st round. When the 2nd round starts, if QEMU doesn't
know the status of the disks, it needs to try to inactivate the disks
again.
Here the issue is that the block layer API bdrv_inactivate_all() will
crash QEMU if invoked on already-inactive disks for the 2nd migration.
For details, see the bug link at the end.
Implementation
==============
This patch proposes to maintain disk activation with a global flag, so we
know:
- If we used to inactivate disks for migration, but migration got
cancelled or failed, QEMU will know it should reactivate the disks.
- On the incoming side, if the disks were never activated but another
migration is triggered, QEMU should be able to tell that inactivation
is not needed for the 2nd migration.
We used to have disk_inactive, but it only solves the 1st issue, not
the 2nd. Also, it's maintained in completely separate paths, so it's
extremely hard to follow how the flag changes, for how long it is
valid, or when we will reactivate the disks.
Convert the existing disk_inactive flag into that global flag (also invert
its naming), and maintain the disk activation status for the whole
lifecycle of qemu. That includes the incoming QEMU.
Put both of the error cases of source migration (failure,
cancellation) together into migration_iteration_finish(), which will
be invoked in either scenario, so in that respect QEMU should behave
the same as before. With such global maintenance of the disk
activation status, we not only clean up quite a few ad-hoc paths that
tried to maintain it (e.g. in the postcopy code), but also fix the
crash of problem 2 in one shot.
For a freshly started QEMU, the flag is initialized to TRUE, showing
that QEMU owns the disks by default.
For an incoming migrated QEMU, the flag is initialized to FALSE, once
and for all, showing that the dest QEMU doesn't own the disks until
switchover. That is guaranteed by the "once" variable.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2395
Signed-off-by: Peter Xu <[email protected]>
Reviewed-by: Fabiano Rosas <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 86bee9e0c761a3d0e67c43b44001fd752f894cb0
https://github.com/qemu/qemu/commit/86bee9e0c761a3d0e67c43b44001fd752f894cb0
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M scripts/analyze-migration.py
Log Message:
-----------
migration: Add more error handling to analyze-migration.py
The analyze-migration script was seen failing on s390x in mysterious
ways. It seems we're reaching the VMSDFieldStruct constructor without
any fields, which would indicate an empty .subsection entry, a
VMSTATE_STRUCT with no fields, or a vmsd with no fields. We don't have
any of those, at least not without the unmigratable flag set, so this
should never happen.
Add some debug statements so that we can see what's going on the next
time the issue happens.
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 2aead53d39b828f8d9d0769ffa3579dadd64d846
https://github.com/qemu/qemu/commit/2aead53d39b828f8d9d0769ffa3579dadd64d846
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/vmstate.c
Log Message:
-----------
migration: Remove unused argument in vmsd_desc_field_end
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 69d1f784569fdb950f2923c3b6d00d7c1b71acc1
https://github.com/qemu/qemu/commit/69d1f784569fdb950f2923c3b6d00d7c1b71acc1
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M scripts/analyze-migration.py
Log Message:
-----------
migration: Fix parsing of s390 stream
The parsing for the S390StorageAttributes section is currently leaving
an unconsumed token that is later interpreted by the generic code as
QEMU_VM_EOF, cutting the parsing short.
The migration will issue a STATTR_FLAG_DONE between iterations, which
the script consumes correctly, but there's a final STATTR_FLAG_EOS at
.save_complete that the script is ignoring. Since the EOS flag is a
u64 0x1ULL and the stream is big endian, on little endian hosts a byte
read from it will be 0x0, the same as QEMU_VM_EOF.
Fixes: 81c2c9dd5d ("tests/qtest/migration-test: Fix analyze-migration.py for s390x")
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: f52965bf0eeee28e89933264f1a9dbdcdaa76a7e
https://github.com/qemu/qemu/commit/f52965bf0eeee28e89933264f1a9dbdcdaa76a7e
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/vmstate-types.c
M scripts/analyze-migration.py
Log Message:
-----------
migration: Rename vmstate_info_nullptr
Rename vmstate_info_nullptr from "uint64_t" to "nullptr". This vmstate
actually reads and writes just a byte, so the proper name would be
uint8. However, since this is a marker for a NULL pointer, it's
convenient to have a more explicit name that can be identified by the
consumers of the JSON part of the stream.
Change the name to "nullptr" and add support for it in the
analyze-migration.py script. Arbitrarily use the name of the type as
the value of the field to avoid the script showing 0x30 or '0', which
could be confusing for readers.
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 9867c3a7ced12dd7519155c047eb2c0098a11c5f
https://github.com/qemu/qemu/commit/9867c3a7ced12dd7519155c047eb2c0098a11c5f
Author: Peter Xu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/vmstate.c
Log Message:
-----------
migration: Dump correct JSON format for nullptr replacement
QEMU plays a trick with null pointers inside an array of pointers in a VMSD
field. See 07d4e69147 ("migration/vmstate: fix array of ptr with
nullptrs") for more details on why. The idea makes sense in general, but
it overlooked the JSON writer, which could end up writing nothing for a
"struct" in the JSON hints section.
We hit some analyze-migration.py issues on s390 recently, showing that
some of the struct fields contain nothing, like:
{"name": "css", "array_len": 256, "type": "struct", "struct": {}, "size": 1}
As described in detail by Fabiano:
https://lore.kernel.org/r/[email protected]
It could be that we hit some null pointers there, and the JSON output
was simply dropped for them.
To fix it, instead of hacking around only at the VMStateInfo level, do
it at the VMStateField level, so that the JSON writer can also be
involved. The JSON writer then replaces the pointer array entry (which
used to be a "struct") with the real representation of the nullptr
field.
Signed-off-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 35049eb0d2fc72bb8c563196ec75b4d6c13fce02
https://github.com/qemu/qemu/commit/35049eb0d2fc72bb8c563196ec75b4d6c13fce02
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/vmstate.c
M scripts/analyze-migration.py
Log Message:
-----------
migration: Fix arrays of pointers in JSON writer
Currently, if an array of pointers contains a NULL pointer, that
pointer will be encoded as '0' in the stream. Since the JSON writer
doesn't define a "pointer" type, that '0' will now be a uint8, which
is different from the original type being pointed to, e.g. struct.
(We further rename that uint8 to "nullptr", but that's irrelevant to
this issue.)
That mixed-type array shouldn't be compressed, otherwise data is lost,
because the code currently gives the whole array the type of its first
element:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 256, "type": "nullptr", "size": 1},
...,
]}
In the above, the valid pointer at position 254 got lost among the
compressed array of nullptr.
While we could disable the array compression when a NULL pointer is
found, the JSON part of the stream is still written during downtime, so
we should avoid adding unnecessary bytes to it.
Keep the array compression in place, but if NULL and non-NULL pointers
are mixed, break the array into several type-contiguous pieces:
css = {NULL, NULL, ..., 0x5555568a7940, NULL};
{"name": "s390_css", "instance_id": 0, "vmsd_name": "s390_css",
"version": 1, "fields": [
...,
{"name": "css", "array_len": 254, "type": "nullptr", "size": 1},
{"name": "css", "type": "struct", "struct": {"vmsd_name": "s390_css_img",
... }, "size": 768},
{"name": "css", "type": "nullptr", "size": 1},
...,
]}
Now each type-discontiguous region will become a new JSON entry. The
reader should interpret this as a concatenation of values, all part of
the same field.
Parsing the JSON with analyze-migration.py now shows the proper data
being pointed to at the places where the pointer is valid and
"nullptr" where there's NULL:
"s390_css (14)": {
...
"css": [
"nullptr",
"nullptr",
...
"nullptr",
{
"chpids": [
{
"in_use": "0x00",
"type": "0x00",
"is_virtual": "0x00"
},
...
]
},
"nullptr"
]
}
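The splitting rule can be sketched as a grouping pass (a simplified model with illustrative names, not the actual JSON writer code):

```python
from itertools import groupby

def split_runs(array):
    """Break a pointer array into type-contiguous pieces, compressing
    each same-type run the way the JSON writer compresses arrays."""
    pieces = []
    for is_null, run in groupby(array, key=lambda p: p is None):
        n = len(list(run))
        piece = {"type": "nullptr" if is_null else "struct"}
        if n > 1:
            piece["array_len"] = n
        pieces.append(piece)
    return pieces

# 254 NULLs, one valid pointer, one NULL, as in the css example above
css = [None] * 254 + [0x5555568a7940, None]
print(split_runs(css))
# [{'type': 'nullptr', 'array_len': 254}, {'type': 'struct'},
#  {'type': 'nullptr'}]
```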
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: c76ee1f6255c3988a9447d363bb17072f1ec84e1
https://github.com/qemu/qemu/commit/c76ee1f6255c3988a9447d363bb17072f1ec84e1
Author: Fabiano Rosas <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M hw/s390x/s390-virtio-ccw.c
Log Message:
-----------
s390x: Fix CSS migration
Commit a55ae46683 ("s390: move css_migration_enabled from machine to
css.c") disabled CSS migration globally instead of doing it
per-instance.
CC: Paolo Bonzini <[email protected]>
CC: [email protected] #9.1
Fixes: a55ae46683 ("s390: move css_migration_enabled from machine to css.c")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2704
Reviewed-by: Thomas Huth <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: cdc3970f8597ebdc1a4c2090cfb4d11e297329ed
https://github.com/qemu/qemu/commit/cdc3970f8597ebdc1a4c2090cfb4d11e297329ed
Author: Yuan Liu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-nocomp.c
Log Message:
-----------
multifd: bugfix for migration using compression methods
When compression is enabled on the migration channel and the pages
being processed are all zero pages, those pages are neither sent nor
updated on the target side, leaving the target's memory inconsistent
with the source's.
The root cause is that all compression methods call
multifd_send_prepare_common to determine whether to compress
dirty pages, but multifd_send_prepare_common does not update
the IOV of MultiFDPacket_t when all dirty pages are zero pages.
The solution is to always update the IOV of MultiFDPacket_t
regardless of whether the dirty pages are all zero pages.
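A toy model of the fix, with invented names (the real logic lives in multifd_send_prepare_common() and the per-packet IOV handling):

```python
def prepare_packet(pages, always_update_iov):
    """Toy model of building a packet's page list ("IOV" metadata).
    pages is a list of (offset, is_zero) tuples."""
    normal = [off for off, is_zero in pages if not is_zero]
    zero = [off for off, is_zero in pages if is_zero]
    if not normal and not always_update_iov:
        # Buggy path: an all-zero-page packet carries no page info,
        # so the target never clears those pages.
        return None
    return {"normal": normal, "zero": zero}

all_zero = [(0x0, True), (0x1000, True)]
assert prepare_packet(all_zero, always_update_iov=False) is None
assert prepare_packet(all_zero, always_update_iov=True) == {
    "normal": [], "zero": [0x0, 0x1000]}
```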
Fixes: 303e6f54f9 ("migration/multifd: Implement zero page transmission on the multifd thread.")
Cc: [email protected] #9.0+
Signed-off-by: Yuan Liu <[email protected]>
Reviewed-by: Jason Zeng <[email protected]>
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 2588a5f99b0c3493b4690e3ff01ed36f80e830cc
https://github.com/qemu/qemu/commit/2588a5f99b0c3493b4690e3ff01ed36f80e830cc
Author: Yuan Liu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-qpl.c
Log Message:
-----------
multifd: bugfix for incorrect migration data with QPL compression
When QPL compression is enabled on the migration channel and the same
dirty page changes from a normal page to a zero page during the
iterative memory copy, the dirty page will not be updated to a zero
page again on the target side, leaving the target's memory
inconsistent with the source's.
The root cause is that the target side does not record the normal pages
in the receivedmap.
The solution is to call ramblock_recv_bitmap_set_offset on the target
side to record the normal pages.
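A minimal sketch of the receivedmap issue (invented names; in QEMU the bitmap is per-RAMBlock and updated via ramblock_recv_bitmap_set_offset()):

```python
receivedmap = set()  # offsets of pages the target has received

def recv_normal_page(offset, record):
    """Toy receive path for a normal (non-zero) page."""
    if record:  # the fix: record normal pages too
        receivedmap.add(offset)

# Buggy path: the page is received but never recorded, so later logic
# that consults the receivedmap treats it as never sent.
recv_normal_page(0x1000, record=False)
assert 0x1000 not in receivedmap

# Fixed path: the page is recorded.
recv_normal_page(0x1000, record=True)
assert 0x1000 in receivedmap
```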
Signed-off-by: Yuan Liu <[email protected]>
Reviewed-by: Jason Zeng <[email protected]>
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: a523bc52166c80d8a04d46584f9f3868bd53ef69
https://github.com/qemu/qemu/commit/a523bc52166c80d8a04d46584f9f3868bd53ef69
Author: Yuan Liu <[email protected]>
Date: 2025-01-09 (Thu, 09 Jan 2025)
Changed paths:
M migration/multifd-qatzip.c
Log Message:
-----------
multifd: bugfix for incorrect migration data with qatzip compression
When qatzip compression is enabled on the migration channel and the
same dirty page changes from a normal page to a zero page during the
iterative memory copy, the dirty page will not be updated to a zero
page again on the target side, leaving the target's memory
inconsistent with the source's.
The root cause is that the target side does not record the normal pages
in the receivedmap.
The solution is to call ramblock_recv_bitmap_set_offset on the target
side to record the normal pages.
Signed-off-by: Yuan Liu <[email protected]>
Reviewed-by: Jason Zeng <[email protected]>
Reviewed-by: Peter Xu <[email protected]>
Message-Id: <[email protected]>
Signed-off-by: Fabiano Rosas <[email protected]>
Commit: 290f950361e79d43c9a73d063964631107cac851
https://github.com/qemu/qemu/commit/290f950361e79d43c9a73d063964631107cac851
Author: Stefan Hajnoczi <[email protected]>
Date: 2025-01-10 (Fri, 10 Jan 2025)
Changed paths:
M accel/tcg/tcg-all.c
M backends/cryptodev.c
M chardev/char.c
M hw/core/gpio.c
M hw/core/meson.build
A hw/core/qdev-user.c
M hw/core/qdev.c
M hw/core/sysbus.c
M hw/i386/pc.c
M hw/pci/pci.c
M hw/vfio/pci.c
M include/hw/pci/pci_device.h
M include/hw/qdev-core.h
M include/qom/object.h
M qom/container.c
M qom/object.c
M scsi/pr-manager.c
M system/ioport.c
M system/memory.c
M system/qdev-monitor.c
M system/vl.c
M ui/console.c
M ui/dbus-chardev.c
Log Message:
-----------
Merge tag 'qom-qdev-20250109' of https://github.com/philmd/qemu into staging
QOM & QDev patches
- Remove DeviceState::opts (Akihiko)
- Replace container_get by machine/object_get_container (Peter)
- Remove InterfaceInfo::concrete_class field (Paolo)
- Reduce machine_containers[] scope (Philippe)
# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmeABNgACgkQ4+MsLN6t
# wN4XtQ/+NyXEK9vjq+yXnk7LRxTDQBrXxNc71gLqNA8rGwXTuELIXOthNW+UM2a9
# CdnVbrIX/FRfQLXTHx0C2ENteafrR1oXDQmEOz1UeYgaCWJsNdVe3r1MYUdHcwVM
# 90JcSbYhrvxFE/p/6WhTjjv2DXn4E8witsPwRc8EBi5bHeFz6cNPzhdF59A3ljZF
# 0zr1MLHJHhwR6OoBbm9HM8x8i4Zw4LoKEjo8cCgcBfPQIMKf0HQ4XsinIDwn0VXN
# S3jIysNyGHlptHOiJuErILZtzrm4F2lGwYan89jxuElfWjC7SVB2z4CQkQtPceIJ
# HRBrE7VPwJ566OAThoSwPG3jXT1yCDOYmNCX1kJOMo9rYh3MwG0VrbMr5iwfYk8Z
# wO+8IyMAx7m8FibdsoMmxtI1PYTf0JQaCB6MSwdoAMMQVp1FDWBun2g+swLjQgO4
# 15iSB+PMIZe7Ywd0b63VZrUMHKwMxd9RFYEbbsdA8DRI50W3HMQPZAJiGXt7RxJ9
# p9qxqg0WGpVjgTnInt/KH4axiWPD5cru+THVYk6dvOdtTM5wj2jEswWy2vQ6LkEF
# MgxaUXfja8E20AXvdr6uXKwcKOIJ9+TaU5AhUmjpvacjJhy5eQdoFt9OnIMQt25U
# KTtapCVsong5JzYZWhITNCMf5w2YGCJGJJekxdrqBvFk+FkMR38=
# =+TLu
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 09 Jan 2025 12:18:16 EST
# gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
# gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <[email protected]>" [full]
# Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* tag 'qom-qdev-20250109' of https://github.com/philmd/qemu:
system: Inline machine_containers[] in qemu_create_machine_containers()
qom: remove unused InterfaceInfo::concrete_class field
qom: Remove container_get()
qom: Use object_get_container()
qom: Add object_get_container()
qdev: Use machine_get_container()
qdev: Add machine_get_container()
qdev: Make qdev_get_machine() not use container_get()
qdev: Implement qdev_create_fake_machine() for user emulation
qdev: Remove opts member
hw/pci: Use -1 as the default value for rombar
Signed-off-by: Stefan Hajnoczi <[email protected]>
Commit: 3214bec13d8d4c40f707d21d8350d04e4123ae97
https://github.com/qemu/qemu/commit/3214bec13d8d4c40f707d21d8350d04e4123ae97
Author: Stefan Hajnoczi <[email protected]>
Date: 2025-01-10 (Fri, 10 Jan 2025)
Changed paths:
M hw/s390x/s390-virtio-ccw.c
M include/migration/misc.h
A migration/block-active.c
M migration/colo.c
M migration/meson.build
M migration/migration.c
M migration/migration.h
M migration/multifd-nocomp.c
M migration/multifd-qatzip.c
M migration/multifd-qpl.c
M migration/multifd-uadk.c
M migration/multifd.c
M migration/multifd.h
M migration/ram.c
M migration/ram.h
M migration/rdma.h
M migration/savevm.c
M migration/trace-events
M migration/vmstate-types.c
M migration/vmstate.c
M monitor/qmp-cmds.c
M scripts/analyze-migration.py
Log Message:
-----------
Merge tag 'migration-20250110-pull-request' of https://gitlab.com/farosas/qemu into staging
Migration pull request
- compression:
Shameer's fix for CONFIG_UADK build
Yuan Liu fixes for zero-page, QPL, qatzip
- multifd sync cleanups, prereq. for VFIO and postcopy work
- fixes for 9.2 regressions:
multifd with pre-9.0 -> post-9.1 migrations (#2720)
s390x migration (#2704)
- fix for assertions during paused migrations; rework of
late-block-activate logic (#2395, #686)
- fixes for compressed arrays creation and parsing, mostly affecting
s390x
# -----BEGIN PGP SIGNATURE-----
#
# iQJEBAABCAAuFiEEqhtIsKIjJqWkw2TPx5jcdBvsMZ0FAmeBDgkQHGZhcm9zYXNA
# c3VzZS5kZQAKCRDHmNx0G+wxnSlUEACl31wY+77JxWnBva/eDDwnJ9HiCrqsoqaZ
# YIJJXNlk4lYJWNdZRt6p27exzWrQwm+kWKPECeCakgCMlfhnKCvejGq7iV/fJY4o
# D8hjE3t1htQ8mfblY1+bqzg3Rml59KwXxiqAwvlljbNWdkXruv026dq9vgJMzFhi
# ia043fOO1tYULIoawgmwmLEHnztht0v+ZTZ1v5KQbrH655tpxls/8kHc6v5PXEpA
# 3PSmCrCQh1dPtkYRjuJ9yHyfU+/T8tYwIjrU6VR1wQW7MBNkjtqNudaqAFiuyuqn
# P8gh4rAQrMhA9y+aq6xSoJP8XGkuOHxLQtlNutlmtbcQyZ7JqgLmK9ZLdoPf21sK
# //erV63NoyaciYB9Nk3NXflwroc6zyvo8A584kGNPwBznZOJLESP4SPvVm/nlE29
# vbyq8AWHRjFiqqf6P0ttQLAFkusZJzM1Y9UakF51hyVBX70yfqLG20XXZtIq/aZA
# GbBB2Fo0MIlbmWaur3vLsSzn7B8d++Gl9TTGcK/eIXJ1ANCuCxGv9fbXJQlP5F4I
# 3OAoSmAVJ2eqw4v0+2WMiEa8yUA5drNnDSI3VRkG+0K9jRfHKXki466/QQdGrNw7
# 8GuuzLBNai3gEKbavDU0Be73r982KjXeYXj7RuAkQfm0d4H7tiwtg91Cd1dPKfzh
# mhpmOFJDCg==
# =joNM
# -----END PGP SIGNATURE-----
# gpg: Signature made Fri 10 Jan 2025 07:09:45 EST
# gpg: using RSA key AA1B48B0A22326A5A4C364CFC798DC741BEC319D
# gpg: issuer "[email protected]"
# gpg: Good signature from "Fabiano Rosas <[email protected]>" [unknown]
# gpg: aka "Fabiano Almeida Rosas <[email protected]>" [unknown]
# gpg: WARNING: The key's User ID is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: AA1B 48B0 A223 26A5 A4C3 64CF C798 DC74 1BEC 319D
* tag 'migration-20250110-pull-request' of https://gitlab.com/farosas/qemu: (25 commits)
multifd: bugfix for incorrect migration data with qatzip compression
multifd: bugfix for incorrect migration data with QPL compression
multifd: bugfix for migration using compression methods
s390x: Fix CSS migration
migration: Fix arrays of pointers in JSON writer
migration: Dump correct JSON format for nullptr replacement
migration: Rename vmstate_info_nullptr
migration: Fix parsing of s390 stream
migration: Remove unused argument in vmsd_desc_field_end
migration: Add more error handling to analyze-migration.py
migration/block: Rewrite disk activation
migration/block: Fix possible race with block_inactive
migration/block: Apply late-block-active behavior to postcopy
migration/block: Make late-block-active the default
qmp/cont: Only activate disks if migration completed
migration: Add helper to get target runstate
migration/multifd: Fix compat with QEMU < 9.0
migration/multifd: Document the reason to sync for save_setup()
migration/multifd: Cleanup src flushes on condition check
migration/multifd: Remove sync processing on postcopy
...
Signed-off-by: Stefan Hajnoczi <[email protected]>
Compare: https://github.com/qemu/qemu/compare/bc6afa1c711d...3214bec13d8d