* Chris Wright <[EMAIL PROTECTED]> [2008-05-15 02:01]:
> * Anthony Liguori ([EMAIL PROTECTED]) wrote:
> > From a quick look, I suspect that the number of wildly off TSC
> > calibrations correspond to the VMs that are misbehaving. I think this
> > may mean that we have to re-examine the tsc delta computation.
Chris Wright wrote:
> * Anthony Liguori ([EMAIL PROTECTED]) wrote:
>
>> From a quick look, I suspect that the number of wildly off TSC
>> calibrations correspond to the VMs that are misbehaving. I think this
>> may mean that we have to re-examine the tsc delta computation.
>>
>> 10_serial.log:time.c: Detected 1995.038 MHz
* Anthony Liguori ([EMAIL PROTECTED]) wrote:
> From a quick look, I suspect that the number of wildly off TSC
> calibrations correspond to the VMs that are misbehaving. I think this
> may mean that we have to re-examine the tsc delta computation.
>
> 10_serial.log:time.c: Detected 1995.038 MHz
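A frequency detection that is wildly off, as in the log line above, is consistent with the vCPU being descheduled inside the guest's calibration window: the TSC keeps counting while the guest is preempted, but time.c divides by the window it believes it timed against the PIT. A minimal sketch of that arithmetic (the numbers and function are illustrative only, not taken from the logs):

```python
def detected_mhz(true_mhz, assumed_window_ms, preempted_ms):
    """Frequency the guest would report from TSC calibration.

    TSC cycles accumulate over the real elapsed time (the calibration
    window plus any time the vCPU spent descheduled), but the guest
    divides by the window it believes it timed with the PIT, so the
    result is skewed by the ratio of real to assumed window.
    """
    real_window_ms = assumed_window_ms + preempted_ms
    cycles = true_mhz * 1000.0 * real_window_ms   # cycles/ms * ms
    return cycles / (assumed_window_ms * 1000.0)
```

With zero preemption the true frequency comes back exactly; 50 ms stolen out of a 50 ms window doubles the detected value, which is the kind of wildly-off result being discussed.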
Marcelo Tosatti wrote:
> On Mon, May 12, 2008 at 02:19:24PM -0500, Ryan Harper wrote:
>
> Hi Ryan,
>
> There are two places that attempt to use delivery mode 7: kexec crash
> and io_apic_64.c::check_timer().
>
> The latter will happen if the guest fails to receive PIT IRQs for 10
> ticks. If you
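For reference, the probe Marcelo is describing works by watching whether the tick counter advances while the CPU spins. A simplified Python analogue of that idea (the real check_timer()/timer_irq_works() code is kernel C, and its exact wait lengths and thresholds differ; `read_jiffies` and `busy_wait` here are stand-ins):

```python
def timer_irq_works(read_jiffies, busy_wait, min_ticks=4):
    """Report whether timer interrupts appear to be arriving.

    Sample the tick counter, spin for a while, then check that the
    counter advanced by at least `min_ticks`. If it did not, the kernel
    falls back to alternative timer routing, which is where the
    delivery-mode-7 path gets exercised.
    """
    start = read_jiffies()
    busy_wait()
    return read_jiffies() - start >= min_ticks
```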
On Mon, May 12, 2008 at 02:19:24PM -0500, Ryan Harper wrote:
> I've been digging into some of the instability we see when running
> larger numbers of guests at the same time. The test I'm currently using
> involves launching 64 1vcpu guests on an 8-way AMD box. With the latest
> kvm-userspace git and kvm.git + Gerd's kvmclock fixes, I can launch all
* Anthony Liguori <[EMAIL PROTECTED]> [2008-05-12 17:00]:
> Ryan Harper wrote:
> >>BTW, what if you don't pace-out the startups? Do we still have issues
> >>with that?
> >>
> >
> >Do you mean without the 1 second delay or with a longer delay? My
> >experience is that delay helps (fewer hangs
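The staggered startup being debated above can be sketched like this. The qemu command line is hypothetical (the actual invocation was not posted in the thread), and the `launch`/`sleep` hooks exist only so the pacing is easy to adjust or stub out:

```python
import subprocess
import time

def start_guest(i):
    # Hypothetical 1-vcpu guest invocation; stand-in for the real test's
    # qemu command line.
    return subprocess.Popen(
        ["qemu-system-x86_64", "-smp", "1", "-name", "guest%d" % i])

def launch_paced(launch, n_guests=64, delay_s=1.0, sleep=time.sleep):
    """Start guests one at a time, pausing between launches.

    delay_s=1.0 mirrors the 1-second stagger discussed above; delay_s=0
    is the unpaced back-to-back case being asked about.
    """
    started = []
    for i in range(n_guests):
        started.append(launch(i))
        if i < n_guests - 1:
            sleep(delay_s)
    return started
```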
Ryan Harper wrote:
> * Anthony Liguori <[EMAIL PROTECTED]> [2008-05-12 15:05]:
>
>> Ryan Harper wrote:
>>
>>> I've been digging into some of the instability we see when running
>>> larger numbers of guests at the same time. The test I'm currently using
>>> involves launching 64 1vcpu guests on an 8-way AMD box.
* Anthony Liguori <[EMAIL PROTECTED]> [2008-05-12 15:05]:
> Ryan Harper wrote:
> >I've been digging into some of the instability we see when running
> >larger numbers of guests at the same time. The test I'm currently using
> >involves launching 64 1vcpu guests on an 8-way AMD box.
>
> Note this is a Barcelona system and therefore should have a fixed-frequency
Ryan Harper wrote:
> I've been digging into some of the instability we see when running
> larger numbers of guests at the same time. The test I'm currently using
> involves launching 64 1vcpu guests on an 8-way AMD box.
Note this is a Barcelona system and therefore should have a fixed-frequency
I've been digging into some of the instability we see when running
larger numbers of guests at the same time. The test I'm currently using
involves launching 64 1vcpu guests on an 8-way AMD box. With the latest
kvm-userspace git and kvm.git + Gerd's kvmclock fixes, I can launch all
64 of these 1