Carsten Otte/Germany/[EMAIL PROTECTED] wrote on 10/11/2007 12:57:38 PM:
> [...]
Jim Paris wrote:
> If I stop KVM in the monitor with "stop", wait a minute, and do
> "cont", a Linux guest gives me a "BUG: soft lockup detected on CPU#0".
> Is that expected behavior?

We have the same behavior on s390 when running in a virtual
environment. The issue is that the guest physical [...]
Avi Kivity wrote:
> cpu_save (qemu/hw/pc.c) has this:
>
> #ifdef USE_KVM
>     if (kvm_allowed) {
>         for (i = 0; i < NR_IRQ_WORDS; i++) {
>             qemu_put_be32s(f, &env->kvm_interrupt_bitmap[i]);
>         }
>         qemu_put_be64s(f, &env->tsc);
>     }
> #endif

Mmm, so this is not the ro[...]
Dong, Eddie wrote:

> Or is the timer saved in absolute time? If so you are right, and yes,
> your solution is needed.

Looks like current live migration, and also save/restore, doesn't
migrate guest time. (Am I missing something?) So the new guest will
see a totally different TSC, and the OS will feel strange or see
many lost ticks [...]
Dong, Eddie wrote:

> [EMAIL PROTECTED] wrote:
>
> It may be that the timer correction code detects that zillions of
> timer interrupts have not been serviced by the guest so it floods the
> guest with these interrupts. Does -no-kvm-irqchip help?

In Xen, we decided to freeze the guest time after save/restore. In
this [...]
Jim Paris wrote:
> If I stop KVM in the monitor with "stop", wait a minute, and do
> "cont", a Linux guest gives me a "BUG: soft lockup detected on CPU#0".
> Is that expected behavior?

No.

> What isn't virtualized that allows it to
> detect that? The host is a core 2 duo.

It may be that the timer correction code detects that zillions of
timer interrupts have not been serviced by the guest so it floods
the guest with these interrupts. Does -no-kvm-irqchip help?
If I stop KVM in the monitor with "stop", wait a minute, and do
"cont", a Linux guest gives me a "BUG: soft lockup detected on CPU#0".
Is that expected behavior? What isn't virtualized that allows it to
detect that? The host is a core 2 duo.
I have bigger problems in the guest after migrating to [...]