On Thu, Aug 21, 2014 at 7:48 PM, Andrey Korolyov <and...@xdel.ru> wrote:
> On Sat, Aug 9, 2014 at 10:35 AM, Paolo Bonzini <pbonz...@redhat.com> wrote:
>>
>>> > Yeah, I need to sit down and look at the code more closely...  Perhaps a
>>> > cpu_mark_all_dirty() is enough.
>>>
>>> Hi Paolo,
>>>
>>> cpu_clean_all_dirty, you mean? Has the same effect.
>>>
>>> Marcin's patch to add cpu_synchronize_state_always() has the same
>>> effect.
>>>
>>> What do you prefer?
>>
>> I'd prefer cpu_clean_all_dirty because you can call it from the APIC
>> load functions.  The bug with your patch is due to the APIC and to
>> migration; it's not in the way your patch touches the kvmclock
>> vmstate_change_handler.
>>
>> Paolo
>
> Hello,
>
>
> JFYI - Windows shows the same behavior after migration on bare 2.1
> (frozen disk with the latest virtio block drivers). Reverting
> agraf's patches and applying Marcin's fix saves the situation, so
> AFAICS the problem is critical for M$ products on KVM.

Sorry, the test series revealed that the problem is still there, just
with a lower hit ratio on a modified 2.1-HEAD with the selected
argument set. The actual root of the issue is the Windows-specific
'-cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000'
addition, which can be traded away for some idle CPU consumption.
Reproducing the bug is quite simple: fire up a Windows VM, migrate it
two or three times, and then try to log in via rdesktop/VNC - if the
disk is frozen, the login progress will freeze too. Blocked I/O is
not easy to detect on Windows in an interactive session, since there
is no equivalent of a soft lockup warning, so this sequence is a
practical way to check.
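
For anyone reproducing this, an invocation along the following lines
should trigger it (the disk image, memory size and VNC display are
placeholders; only the -cpu string is the one from this report):

    qemu-system-x86_64 -enable-kvm -m 2048 \
        -cpu qemu64,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1000 \
        -drive file=windows.img,if=virtio \
        -vnc :0

Then migrate the guest two or three times (e.g. with the 'migrate'
monitor command) and watch for frozen disk I/O at the login screen.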
