Hi,

I would like to add some more points to Gerd's explanation:
On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
   Hi,

Absolutely not.  This is hideously ugly and affects a bunch of code.

Spice is *not* getting a hook in migration where it gets to add
arbitrary amounts of downtime to the migration traffic.  That's a
terrible idea.

I'd like to be more constructive in my response, but you aren't
explaining the problem well enough for me to offer an alternative
solution.  You need to find another way to solve this problem.
Actually, this is not the first time we have raised this issue with you. For example: http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (the first part of that discussion is not directly related to the current one). I'll try to explain in more detail:

As Gerd mentioned, migrating the spice connection smoothly requires the src server to keep running, and to keep sending/receiving data to/from the client, after migration has already completed, until the client has completely transferred over to the target. The suggested patch series only delays the migration state change from ACTIVE to COMPLETED/ERROR/CANCELED until spice signals that it has completed its part of the migration. As I see it, if a spice connection exists, its migration should be treated as an inseparable part of the whole migration process, and thus the migration state shouldn't change from ACTIVE until spice has completed its part. Hence, I don't think we should have a qmp event for signaling libvirt about spice migration.
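To make this concrete, here is a rough sketch of what gating the ACTIVE -> COMPLETED transition on spice could look like on the migration side. All names (migrate_add_completion_blocker, migrate_release_completion_blocker) are invented for illustration; this is not the code of the actual patch series:

/* Sketch only: a hypothetical "completion blocker" inside migration.c.
 * The real patches may be structured differently. */
static int completion_blockers;          /* e.g. one per active spice session */

void migrate_add_completion_blocker(void)
{
    completion_blockers++;
}

/* Called by spice once it has finished (or given up on) the client handover. */
void migrate_release_completion_blocker(MigrationState *s)
{
    if (--completion_blockers == 0 && s->state == MIG_STATE_ACTIVE) {
        /* Only now does the state libvirt is watching change to COMPLETED. */
        s->state = MIG_STATE_COMPLETED;
        notifier_list_notify(&migration_state_notifiers, s);
    }
}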


The second challenge we are facing, which I addressed in the "plans" part of the cover letter, and to which I think you (Anthony) actually replied, is how to transfer spice data from the src server to the target server. Such data can be usb/smartcard packets that were sent from a device connected on the client to the server, but haven't reached the device yet, or partial data that has been read from a guest character device but hasn't been sent to the client yet. Other data can be internal server-client state that we would like to keep on the server in order to avoid establishing the connection to the target from scratch, which would also mean slower responsiveness right after migration.

In the cover letter I suggested transferring the spice migration data via the vmstate infrastructure. The other alternative, which we also discussed in the link above, is to transfer the data via the client. The latter also requires keeping the src process alive after migration completion, in order to finish transferring the data from the src to the client. The vmstate option has the advantages of faster data transfer (src->dst instead of src->client->dst), and of employing an already existing, reliable mechanism for data migration. The disadvantage is that in order to have an up-to-date vmstate we need to communicate with the spice client and collect all in-flight data before saving the vmstate. So we can either busy-wait on the relevant fds during pre_save of the vmstate, or have an async pre_save so that the main loop stays active (but I think that can be risky once the non-live phase has started), or have an async notifier for the change from the live to the non-live phase (spice would then be able to update the vmstate in its notification handler). Of course, we would in any case use a timeout in order to prevent too long a delay.
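For illustration, a minimal sketch of what such a vmstate section with a pre_save hook might look like. Only the VMStateDescription/pre_save machinery is existing qemu infrastructure; the SpiceMigrationState struct, its fields, and the spice_fetch_inflight_data() helper are hypothetical:

/* Hypothetical vmstate section for spice in-flight data (sketch only). */
typedef struct SpiceMigrationState {
    uint32_t inflight_len;
    uint8_t  inflight_buf[4096];
} SpiceMigrationState;

static SpiceMigrationState spice_mig_state;

static void spice_migration_pre_save(void *opaque)
{
    SpiceMigrationState *s = opaque;

    /* This is exactly where the problem above bites: to have up-to-date
     * data here we first have to drain in-flight data from the client,
     * either by (busy-)waiting on the client fds with a timeout, or by
     * having been notified asynchronously before the non-live phase. */
    spice_fetch_inflight_data(s);               /* hypothetical helper */
}

static const VMStateDescription vmstate_spice_migration = {
    .name = "spice/migration-data",
    .version_id = 1,
    .minimum_version_id = 1,
    .pre_save = spice_migration_pre_save,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(inflight_len, SpiceMigrationState),
        VMSTATE_UINT8_ARRAY(inflight_buf, SpiceMigrationState, 4096),
        VMSTATE_END_OF_LIST()
    }
};

/* registered once at init:
 *     vmstate_register(NULL, 0, &vmstate_spice_migration, &spice_mig_state);
 */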

To summarize, since we can still use the client to transfer data from the src to the target (instead of using vmstate), the major requirement of spice is to keep the src running after migration has completed.

Yonit.


Very short version:  The requirement is simply to not kill qemu on the
source side until the source spice-server has finished session handover
to the target spice-server.

Long version:  spice-client connects automatically to the target
machine, so the user ideally doesn't notice that his virtual machine was
just migrated over to another host.

Today this happens via "switch-host", which is a simple message asking
the spice client to connect to the new host.
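For reference, the qemu side of that flow is roughly the migration state notifier in ui/spice-core.c; the sketch below is paraphrased from memory and may differ from the actual tree in its details:

/* Paraphrased sketch of the existing switch-host flow, not verbatim code. */
static SpiceServer *spice_server;        /* set up in qemu_spice_init() */
static Notifier migration_state;

static void migration_state_notifier(Notifier *notifier, void *data)
{
    MigrationState *s = data;

    if (migration_has_finished(s)) {
        /* Ask spice-server to send the client a switch-host message.
         * The message is only queued here; if qemu exits before the
         * socket is flushed, the client never sees it (see the side
         * note below). */
        spice_server_migrate_switch(spice_server);
    }
}

/* in qemu_spice_init():
 *     migration_state.notify = migration_state_notifier;
 *     add_migration_state_change_notifier(&migration_state);
 */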

We want to move to a "seamless migration" model where we don't start
over from scratch, but hand over the session from the source to the
target.
Advantage is that various state cached in spice-client will stay valid
and doesn't need to be retransmitted.  It also requires a handshake
between spice-servers on source and target.  libvirt killing qemu on the
source host before the handshake is done isn't exactly helpful.

[ Side note: In theory this issue exists even today: in case the data
   pipe to the client is full spice-server will queue up the switch-host
   message and qemu might be killed before it is sent out.  In practice
   it doesn't happen though because it goes through the low-traffic main
   channel so the socket buffers usually have enough space. ]

So, the big question is how to tackle the issue?

Option (1): Wait until spice-server is done before signaling completion
to libvirt.  This is what this patch series implements.
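As a rough illustration only (the helper names and the timeout value are invented here, and the actual patches may be structured quite differently), the end of migration would then look something like this:

/* Sketch only: wait briefly for the spice handover before reporting
 * completion.  spice_handover_finished() and the timeout are made up. */
#define SPICE_HANDOVER_TIMEOUT_MS 200

static void migrate_finish_with_spice(MigrationState *s)
{
    int64_t deadline = qemu_get_clock_ms(rt_clock) + SPICE_HANDOVER_TIMEOUT_MS;

    /* Non-live phase: the guest is already stopped at this point. */
    while (using_spice && !spice_handover_finished() &&
           qemu_get_clock_ms(rt_clock) < deadline) {
        g_usleep(1000);
    }
    s->state = MIG_STATE_COMPLETED;      /* only now libvirt sees "completed" */
    notifier_list_notify(&migration_state_notifiers, s);
}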

Advantage is that it is completely transparent for libvirt; that's why I
like it.

Disadvantage is that it indeed adds a small delay for the spice-server
handshake.  The target qemu doesn't process main loop events while the
incoming migration is running, and because of that the spice-server
handshake doesn't run in parallel with the final stage of vm migration,
which it could in theory.

BTW: There will be no "arbitrary amounts of downtime".  Seamless spice
client migration is pretty pointless if it doesn't finish within a
fraction of a second, so we can go with a very short timeout there.

Option (2): Add a new QMP event which is emitted when spice-server is
done, then make libvirt wait for it before killing qemu.
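A sketch of what that could look like on the qemu side, using the existing monitor event mechanism; the event name is made up here and would need a matching libvirt change:

/* Sketch only: hypothetical QMP event for option (2).  The
 * QEVENT_SPICE_MIGRATE_COMPLETED name is invented and would have to be
 * added to the MonitorEvent enum. */
static void spice_report_migration_done(void)
{
    /* NULL payload: the event itself is all libvirt needs to know. */
    monitor_protocol_event(QEVENT_SPICE_MIGRATE_COMPLETED, NULL);
}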

Obvious disadvantage is that it requires libvirt changes.

Option (3): Your suggestion?

thanks,
   Gerd


