On 08.07.2015 at 14:08, Juan Quintela wrote:
> Christian Borntraeger <borntrae...@de.ibm.com> wrote:
>> On 07.07.2015 at 15:08, Juan Quintela wrote:
>>> This includes a new section that for now just stores the current qemu state.
>>>
>>> Right now, there is only one way to control the state of the
>>> target after migration:
>>>
>>> - If you run the target qemu with -S, it starts stopped.
>>> - If you run the target qemu without -S, it starts running as soon as
>>> migration finishes.
>>>
>>> The problem is what happens if we start the target without -S and an
>>> error occurs during migration that leaves the current state as
>>> -EIO.  Migration would end (note that the error happened doing block
>>> I/O, network I/O, i.e. nothing related to migration itself), and when
>>> migration finishes, we would just "continue" running on the destination,
>>> probably hanging the guest, corrupting data, or the like.
>>>
>>> Signed-off-by: Juan Quintela <quint...@redhat.com>
>>> Reviewed-by: Dr. David Alan Gilbert <dgilb...@redhat.com>
>>
>> This is bisected to cause a regression on s390.
>>
>> A guest restarts (booting) after managedsave/start instead of continuing.
>>
>> Do you have any idea what might be wrong?
> 
> Can you check the new patch series that I sent?  There is a fix that
> *could* help there.  *could* because I don't fully understand why it can
> give you problems (and even then only sometimes).  The current guess is
> that some of the devices are testing the guest state on LOAD, hence that
> patch.
> 
> Please, test.

That patch does indeed fix my problem.
I can see that virtio_init uses the runstate to set vm_running of the vdev.
This is used in virtio-net for several aspects.
But I really don't understand why this causes the symptoms.
So I am tempted to add a
Tested-by: Christian Borntraeger <borntrae...@de.ibm.com>

but I have a bad feeling on the "why" :-/



Christian


