Avi Kivity wrote:
> On 10/15/2009 01:27 PM, Jan Kiszka wrote:
>>> Perhaps it makes sense to query about individual states, including
>>> existing ones? That will allow us to deprecate and then phase out
>>> broken states. It's probably not worth it.
>>
>> You may do this already with the given design: Set up a VCPU, then i[...]
Avi Kivity wrote:
> On 10/15/2009 06:22 PM, Jan Kiszka wrote:
>>> Needs a KVM_CAP as well.
>>
>> KVM_CAP_VCPU_STATE will imply KVM_CAP_NMI_STATE, so I skipped the latter
>> (user space code would use the former anyway to avoid yet another #ifdef
>> layer).
>
> OK. New bits will need the KVM_CAP, though.
>
> Perhaps it makes sense to query about individual states, including
> existing ones? That will allow us to deprecate and then phase out
> broken states. It's probably not worth it.
Avi Kivity wrote:
> On 10/14/2009 01:06 AM, Jan Kiszka wrote:
>> This plugs an NMI-related hole in the VCPU synchronization between
>> kernel and user space. So far, neither pending NMIs nor the NMI-inhibit
>> mask was properly read or set, which could cause problems on
>> vmsave/restore, live migration and system reset. Fix it by making use
>> of the new VCPU substate in[...]