[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca
On 8/22/07, Dor Laor <[EMAIL PROTECTED]> wrote:
> >> > >>> This is QEMU, with dynticks and HPET:
> >> > >>>
> >> > >>> % time     seconds  usecs/call     calls    errors syscall
> >> > >>> ------ ----------- ----------- --------- --------- ----------------
> >> > >>>  52.10    0.002966           0     96840           clock_gettime
> >> > >>>  19.50    0.001110           0     37050           timer_gettime
> >> > >>>  10.66    0.000607           0     20086           timer_settime
> >> > >>>  10.40    0.000592           0      8985      2539 sigreturn
> >> > >>>   4.94    0.000281           0      8361      2485 select
> >> > >>>   2.41    0.000137           0      8362           gettimeofday
> >> > >>> ------ ----------- ----------- --------- --------- ----------------
> >> > >>> 100.00    0.005693                179684      5024 total
> >> > >>>
> >> > >>>
> >> > >> This looks like 250 Hz?
> >> > >>
> >> > >
> >> > > Nope:
> >> > >
> >> > > # CONFIG_NO_HZ is not set
> >> > > # CONFIG_HZ_100 is not set
> >> > > # CONFIG_HZ_250 is not set
> >> > > # CONFIG_HZ_300 is not set
> >> > > CONFIG_HZ_1000=y
> >> > > CONFIG_HZ=1000
> >> > >
> >> > > and I'm reading it from /proc/config.gz on the guest.
> >> > >
> >> >
> >> > Yeah, thought so -- so dyntick is broken at present.
> >>
> >> I see a lot of sub-ms timer_settime() calls. Many of them are the
> >> result of ->expire_time being less than the current qemu_get_clock().
> >> This results in a 250us timer due to MIN_TIMER_REARM_US; this happens
> >> only for the REALTIME timer. Other sub-ms timers are generated by the
> >> VIRTUAL timer.
> >>
> >> The first issue is easily fixed: if expire_time < current time then
> >> the timer has expired and hasn't been reprogrammed (and thus can be
> >> ignored).
> >> VIRTUAL simply becomes more accurate with dynticks; before, multiple
> >> timers were batched together.
> >>
> >> > Or maybe your host kernel can't support such a high rate.
> >>
> >> I don't know... a simple printf tells me that the signal handler is
> >> called about 1050 times per second, which sounds about right.
> >
>...unless strace is attached. ptrace()'ing the process really screws up
> >the timing with dynticks; HPET is also affected but the performance
> >hit is not as severe.
> >
> I couldn't figure out how you use both HPET and dyntick together.

I don't. Only one timer source is active at any time; the selection is
done at startup with the -clock option.
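
For example (assuming the source names from this patch series; check the
-clock help output in your tree, and "disk.img" is just a placeholder):

    qemu -clock dynticks disk.img   # one-shot POSIX timer, rearmed on demand
    qemu -clock hpet disk.img       # periodic interrupt via /dev/hpet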

> HPET has a periodic timer while dyntick is a one-shot timer each time.
> Is there a chance that both are working and that's the source of our
> problems?

No, the various sources are exclusive (though it might be possible to
use HPET in one shot mode).

Luca




[Qemu-devel] RE: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Dor Laor
>> > >>> This is QEMU, with dynticks and HPET:
>> > >>>
>> > >>> % time     seconds  usecs/call     calls    errors syscall
>> > >>> ------ ----------- ----------- --------- --------- ----------------
>> > >>>  52.10    0.002966           0     96840           clock_gettime
>> > >>>  19.50    0.001110           0     37050           timer_gettime
>> > >>>  10.66    0.000607           0     20086           timer_settime
>> > >>>  10.40    0.000592           0      8985      2539 sigreturn
>> > >>>   4.94    0.000281           0      8361      2485 select
>> > >>>   2.41    0.000137           0      8362           gettimeofday
>> > >>> ------ ----------- ----------- --------- --------- ----------------
>> > >>> 100.00    0.005693                179684      5024 total
>> > >>>
>> > >>>
>> > >> This looks like 250 Hz?
>> > >>
>> > >
>> > > Nope:
>> > >
>> > > # CONFIG_NO_HZ is not set
>> > > # CONFIG_HZ_100 is not set
>> > > # CONFIG_HZ_250 is not set
>> > > # CONFIG_HZ_300 is not set
>> > > CONFIG_HZ_1000=y
>> > > CONFIG_HZ=1000
>> > >
>> > > and I'm reading it from /proc/config.gz on the guest.
>> > >
>> >
>> > Yeah, thought so -- so dyntick is broken at present.
>>
>> I see a lot of sub-ms timer_settime() calls. Many of them are the
>> result of ->expire_time being less than the current qemu_get_clock().
>> This results in a 250us timer due to MIN_TIMER_REARM_US; this happens
>> only for the REALTIME timer. Other sub-ms timers are generated by the
>> VIRTUAL timer.
>>
>> The first issue is easily fixed: if expire_time < current time then
>> the timer has expired and hasn't been reprogrammed (and thus can be
>> ignored).
>> VIRTUAL simply becomes more accurate with dynticks; before, multiple
>> timers were batched together.
>>
>> > Or maybe your host kernel can't support such a high rate.
>>
>> I don't know... a simple printf tells me that the signal handler is
>> called about 1050 times per second, which sounds about right.
>
>...unless strace is attached. ptrace()'ing the process really screws up
>the timing with dynticks; HPET is also affected but the performance
>hit is not as severe.
>
>Luca

I couldn't figure out how you use both HPET and dyntick together.
HPET has a periodic timer while dyntick is a one-shot timer each time.
Is there a chance that both are working and that's the source of our
problems?
Dor




Re: [kvm-devel] [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.

2007-08-22 Thread Andi Kleen
> $ dmesg |grep -i hpet
> ACPI: HPET 7D5B6AE0, 0038 (r1 A M I  OEMHPET   5000708 MSFT   97)
> ACPI: HPET id: 0x8086a301 base: 0xfed0
> hpet0: at MMIO 0xfed0, IRQs 2, 8, 0, 0
> hpet0: 4 64-bit timers, 14318180 Hz
> hpet_resources: 0xfed0 is busy

What kernel version was that? There was a bug that caused this pre-2.6.22.

-Andi




Re: [kvm-devel] [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.

2007-08-22 Thread Dan Kenigsberg
On Wed, Aug 22, 2007 at 02:34:24PM +0200, Andi Kleen wrote:
> On Wed, Aug 22, 2007 at 10:03:32AM +0300, Avi Kivity wrote:
> > Maybe the kernel is using the timer, so userspace can't.  Just a guess.
> 
> HPET has multiple timers (variable, but typically 2 or 4). The kernel
> only uses timer 0. It's possible someone else in user space is using
> it though. Try lsof /dev/hpet

Thanks for the ideas; however, even after I made the kernel use the TSC
as its time source and made sure that no one opens /dev/hpet, I still
fail to use HPET (with the same errors as before).
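
For reference, the demo's setup path boils down to roughly this -- my own
sketch from memory of Documentation/hpet.txt, trimmed, so details may
differ:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/hpet.h>

    int main(void)
    {
        int fd = open("/dev/hpet", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Ask for a 1 kHz periodic interrupt on a free comparator... */
        if (ioctl(fd, HPET_IRQFREQ, 1000) < 0)
            perror("HPET_IRQFREQ");

        /* ...and enable it. This is the HPET_IE_ON step that fails for
         * me, presumably because no usable comparator is free. */
        if (ioctl(fd, HPET_IE_ON, 0) < 0)
            perror("HPET_IE_ON");

        close(fd);
        return 0;
    }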

I now have

$ dmesg |grep -i hpet
ACPI: HPET 7D5B6AE0, 0038 (r1 A M I  OEMHPET   5000708 MSFT   97)
ACPI: HPET id: 0x8086a301 base: 0xfed0
hpet0: at MMIO 0xfed0, IRQs 2, 8, 0, 0
hpet0: 4 64-bit timers, 14318180 Hz
hpet_resources: 0xfed0 is busy

Any other ideas?

Dan.




[Qemu-devel] Porting QEMU to Minix - op_goto_tb1 segfaults because tb_next[1] is NULL

2007-08-22 Thread Erik van der Kouwe

Dear all,

I have been attempting to get QEMU to run on the Minix operating system
(running on x86; see http://www.minix3.org/ for more info on the OS) for
some time now. I have gotten the program to compile and have added the
Minix-specific a.out-like format to dyngen. I am quite certain this bit
works, as I have been studying the generated relocated code in the
disassembler at length.


My problem is the following: soon after starting, I get a segmentation
fault while the generated code is running.


This happens in the code generated from op_goto_tb1 and is caused by jumping 
to a NULL pointer. This NULL pointer originates from the tb_next[1] field of 
the translation block data structure passed as a parameter. I have verified 
in the disassembler that the parameter in the generated code is processed 
correctly and the field is indeed tb_next[1].


I would like to know where tb_next[1] would normally be initialized, and
whether anyone has a suggestion as to why that might not be happening in
this case.


I found that (but please correct me if I am wrong) the assignment can only
take place in tb_set_jmp_target, which in turn is called only by tb_add_jump
and tb_reset_jump. When stepping through the code I found that neither of
these functions is ever called.
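
For context, the chaining site I stepped through is in the main loop of
cpu-exec.c; from memory it looks roughly like this (approximate, not a
verbatim quote of the 0.8.2 source):

    /* T0 carries a pointer to the previously executed TB with the jump
     * slot index in its low bits. If the new TB does not span two pages,
     * the previous block is patched to jump straight to it; this is the
     * only path that fills tb_next[n], via tb_add_jump() ->
     * tb_set_jmp_target(). */
    if (T0 != 0 && tb->page_addr[1] == -1) {
        spin_lock(&tb_lock);
        tb_add_jump((TranslationBlock *)(long)(T0 & ~3), T0 & 3, tb);
        spin_unlock(&tb_lock);
    }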


Versions I'm using:
- QEMU 0.8.2 (newest when I started, but from the changelog ISTM upgrading
to 0.9.0 would not help)

- Minix 3.1.2 (current release version)
- GCC 3.4.3 (version that comes with Minix)

Compilation settings:
- Target: i386-softmmu (must use soft MMU, Minix does not support paging)
- target_user_only enabled
- CONFIG_SOFTFLOAT enabled (Minix does not support the FPU; everything is
emulated anyway)
- USE_DIRECT_JUMP disabled (but I had a similar problem before disabling it,
and this seems easier to debug)


Virtual machine:
- The Linux image at http://fabrice.bellard.free.fr/qemu/linux-0.2.img.bz2

If you need any more information to answer my question (or at least to
guide me in the right direction), do not hesitate to ask.


Thanks in advance for any answers, suggestions or other advice you may have.

With kind regards,
Erik van der Kouwe 






[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Dan Kenigsberg
On Wed, Aug 22, 2007 at 06:38:18PM +0200, Luca wrote:
> and I'm reading it from /proc/config.gz on the guest.
> 
> > And a huge number of settime calls?
> 
> Yes, maybe some QEMU timer is using an interval < 1ms?
> Dan, do you have any idea of what's going on?

Not really...




Re: [Qemu-devel] [PATCH][RFC] SVM support

2007-08-22 Thread Alexander Graf
Blue Swirl wrote:
> On 8/22/07, Alexander Graf <[EMAIL PROTECTED]> wrote:
>   
> >> - All interceptions (well, maybe I overlooked one or two)
>> 
>
> Nice work! For better performance, you should do the op.c checks
> statically at translation time (if possible).
>
>
>   
Thanks. I thought about that at first as well, but can't do it. The
information on whether an intercept should occur is defined in the VMCB,
which is passed as an argument to VMRUN (so whenever one enters the VM).
This means that the very same TB can be executed with completely different
intercepts, so I have to fall back to runtime detection in op.c.
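
Roughly what I mean, with invented names (an illustration only, not the
actual patch code -- env->intercept and EXCP_VMEXIT are made up here):

    /* op.c: the intercept bitmap comes from the guest-controlled VMCB
     * loaded at VMRUN, so the generated code can only test it at
     * execution time; PARAM1 selects the intercept bit to check. */
    void OPPROTO op_svm_check_intercept(void)
    {
        if (env->intercept & (1ULL << PARAM1))  /* hypothetical field */
            raise_exception(EXCP_VMEXIT);       /* hypothetical exit path */
        FORCE_RET();
    }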

I thought about moving some functionality from helper.c to op.c.
Does that improve anything?





Re: [Qemu-devel] [PATCH][RFC] SVM support

2007-08-22 Thread Blue Swirl
On 8/22/07, Alexander Graf <[EMAIL PROTECTED]> wrote:
> - All interceptions (well, maybe I overlooked one or two)

Nice work! For better performance, you should do the op.c checks
statically at translation time (if possible).




[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca
On 8/22/07, Luca <[EMAIL PROTECTED]> wrote:
> On 8/22/07, Avi Kivity <[EMAIL PROTECTED]> wrote:
> > Luca wrote:
> > >>> This is QEMU, with dynticks and HPET:
> > >>>
> > >>> % time     seconds  usecs/call     calls    errors syscall
> > >>> ------ ----------- ----------- --------- --------- ----------------
> > >>>  52.10    0.002966           0     96840           clock_gettime
> > >>>  19.50    0.001110           0     37050           timer_gettime
> > >>>  10.66    0.000607           0     20086           timer_settime
> > >>>  10.40    0.000592           0      8985      2539 sigreturn
> > >>>   4.94    0.000281           0      8361      2485 select
> > >>>   2.41    0.000137           0      8362           gettimeofday
> > >>> ------ ----------- ----------- --------- --------- ----------------
> > >>> 100.00    0.005693                179684      5024 total
> > >>>
> > >>>
> > >> This looks like 250 Hz?
> > >>
> > >
> > > Nope:
> > >
> > > # CONFIG_NO_HZ is not set
> > > # CONFIG_HZ_100 is not set
> > > # CONFIG_HZ_250 is not set
> > > # CONFIG_HZ_300 is not set
> > > CONFIG_HZ_1000=y
> > > CONFIG_HZ=1000
> > >
> > > and I'm reading it from /proc/config.gz on the guest.
> > >
> >
> > Yeah, thought so -- so dyntick is broken at present.
>
> I see a lot of sub-ms timer_settime() calls. Many of them are the
> result of ->expire_time being less than the current qemu_get_clock().
> This results in a 250us timer due to MIN_TIMER_REARM_US; this happens
> only for the REALTIME timer. Other sub-ms timers are generated by the
> VIRTUAL timer.
>
> The first issue is easily fixed: if expire_time < current time then
> the timer has expired and hasn't been reprogrammed (and thus can be
> ignored).
> VIRTUAL simply becomes more accurate with dynticks; before, multiple
> timers were batched together.
>
> > Or maybe your host kernel can't support such a high rate.
>
> I don't know... a simple printf tells me that the signal handler is
> called about 1050 times per second, which sounds about right.

...unless strace is attached. ptrace()'ing the process really screws up
the timing with dynticks; HPET is also affected but the performance
hit is not as severe.

Luca




[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca
On 8/22/07, Luca <[EMAIL PROTECTED]> wrote:
> I see a lot of sub-ms timer_settime() calls. Many of them are the result
> of ->expire_time being less than the current qemu_get_clock().

False alarm, this was a bug in the debug code :D

Luca




[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca
On 8/22/07, Avi Kivity <[EMAIL PROTECTED]> wrote:
> Luca wrote:
> >>> This is QEMU, with dynticks and HPET:
> >>>
> >>> % time     seconds  usecs/call     calls    errors syscall
> >>> ------ ----------- ----------- --------- --------- ----------------
> >>>  52.10    0.002966           0     96840           clock_gettime
> >>>  19.50    0.001110           0     37050           timer_gettime
> >>>  10.66    0.000607           0     20086           timer_settime
> >>>  10.40    0.000592           0      8985      2539 sigreturn
> >>>   4.94    0.000281           0      8361      2485 select
> >>>   2.41    0.000137           0      8362           gettimeofday
> >>> ------ ----------- ----------- --------- --------- ----------------
> >>> 100.00    0.005693                179684      5024 total
> >>>
> >>>
> >> This looks like 250 Hz?
> >>
> >
> > Nope:
> >
> > # CONFIG_NO_HZ is not set
> > # CONFIG_HZ_100 is not set
> > # CONFIG_HZ_250 is not set
> > # CONFIG_HZ_300 is not set
> > CONFIG_HZ_1000=y
> > CONFIG_HZ=1000
> >
> > and I'm reading it from /proc/config.gz on the guest.
> >
>
> Yeah, thought so -- so dyntick is broken at present.

I see a lot of sub-ms timer_settime() calls. Many of them are the
result of ->expire_time being less than the current qemu_get_clock().
This results in a 250us timer due to MIN_TIMER_REARM_US; this happens
only for the REALTIME timer. Other sub-ms timers are generated by the
VIRTUAL timer.

The first issue is easily fixed: if expire_time < current time then
the timer has expired and hasn't been reprogrammed (and thus can be
ignored).
VIRTUAL simply becomes more accurate with dynticks; before, multiple
timers were batched together.
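
Something like this in the rearm path should do (a minimal sketch using
the vl.c names from this series -- treat it as pseudocode, not the final
patch):

    /* Don't reprogram the host timer for a deadline that is already in
     * the past: that timer has fired and was never rearmed, so clamping
     * it to MIN_TIMER_REARM_US would only produce a spurious sub-ms
     * wakeup. */
    int64_t now = qemu_get_clock(rt_clock);
    QEMUTimer *ts = active_timers[QEMU_TIMER_REALTIME];

    if (ts && ts->expire_time <= now)
        return;             /* nothing to rearm for this clock */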

> Or maybe your host kernel can't support such a high rate.

I don't know... a simple printf tells me that the signal handler is
called about 1050 times per second, which sounds about right.

Luca




[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Avi Kivity
Luca wrote:
>>> This is QEMU, with dynticks and HPET:
>>>
>>> % time     seconds  usecs/call     calls    errors syscall
>>> ------ ----------- ----------- --------- --------- ----------------
>>>  52.10    0.002966           0     96840           clock_gettime
>>>  19.50    0.001110           0     37050           timer_gettime
>>>  10.66    0.000607           0     20086           timer_settime
>>>  10.40    0.000592           0      8985      2539 sigreturn
>>>   4.94    0.000281           0      8361      2485 select
>>>   2.41    0.000137           0      8362           gettimeofday
>>> ------ ----------- ----------- --------- --------- ----------------
>>> 100.00    0.005693                179684      5024 total
>>>
>>>   
>> This looks like 250 Hz?
>> 
>
> Nope:
>
> # CONFIG_NO_HZ is not set
> # CONFIG_HZ_100 is not set
> # CONFIG_HZ_250 is not set
> # CONFIG_HZ_300 is not set
> CONFIG_HZ_1000=y
> CONFIG_HZ=1000
>
> and I'm reading it from /proc/config.gz on the guest.
>   

Yeah, thought so -- so dyntick is broken at present.

Or maybe your host kernel can't support such a high rate.  Probably
needs hrtimers or qemu dyntick over hpet oneshot support.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.





[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca
On 8/22/07, Avi Kivity <[EMAIL PROTECTED]> wrote:
> Luca Tettamanti wrote:
> > On Wed, Aug 22, 2007 at 08:02:07AM +0300, Avi Kivity wrote:
> >
> >> Luca Tettamanti wrote:
> >>
> >>
> > >>> Actually I'm having trouble with cyclesoak (probably its calibration);
> > >>> numbers are not very stable across multiple runs...
> >>>
> >>>
> >> I've had good results with cyclesoak; maybe you need to run it in
> >> runlevel 3 so the load generated by moving the mouse or breathing
> >> doesn't affect measurements.
> >>
> >
> > That's what I did; I tested with -nographic in a text console.
>
> Okay.  Maybe cpu frequency scaling confused it then. Or something else?

I set it to performance; the frequency was locked at 2.1GHz.

> >>> The guest is an idle kernel with HZ=1000.
> >>>
> >>>
> >> Can you double check this?  The dyntick results show that this is either
> >> a 100Hz kernel, or that there is a serious bug in dynticks.
> >>
> >
> > Oops, I sent the wrong files, sorry.
> >
> > This is QEMU, with dynticks and HPET:
> >
> > % time     seconds  usecs/call     calls    errors syscall
> > ------ ----------- ----------- --------- --------- ----------------
> >  52.10    0.002966           0     96840           clock_gettime
> >  19.50    0.001110           0     37050           timer_gettime
> >  10.66    0.000607           0     20086           timer_settime
> >  10.40    0.000592           0      8985      2539 sigreturn
> >   4.94    0.000281           0      8361      2485 select
> >   2.41    0.000137           0      8362           gettimeofday
> > ------ ----------- ----------- --------- --------- ----------------
> > 100.00    0.005693                179684      5024 total
> >
>
> This looks like 250 Hz?

Nope:

# CONFIG_NO_HZ is not set
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000

and I'm reading it from /proc/config.gz on the guest.

> And a huge number of settime calls?

Yes, maybe some QEMU timer is using an interval < 1ms?
Dan, do you have any idea of what's going on?

Luca




[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Avi Kivity
Luca Tettamanti wrote:
On Wed, Aug 22, 2007 at 08:02:07AM +0300, Avi Kivity wrote:
>   
>> Luca Tettamanti wrote:
>>
>> 
>>> Actually I'm having trouble with cyclesoak (probably its calibration);
>>> numbers are not very stable across multiple runs...
>>>   
>>>   
>> I've had good results with cyclesoak; maybe you need to run it in
>> runlevel 3 so the load generated by moving the mouse or breathing
>> doesn't affect measurements.
>> 
>
> That's what I did; I tested with -nographic in a text console.
>
>   

Okay.  Maybe cpu frequency scaling confused it then.  Or something else?

>>> The guest is an idle kernel with HZ=1000.
>>>   
>>>   
>> Can you double check this?  The dyntick results show that this is either
>> a 100Hz kernel, or that there is a serious bug in dynticks.
>> 
>
> Oops, I sent the wrong files, sorry.
>
> This is QEMU, with dynticks and HPET:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  52.10    0.002966           0     96840           clock_gettime
>  19.50    0.001110           0     37050           timer_gettime
>  10.66    0.000607           0     20086           timer_settime
>  10.40    0.000592           0      8985      2539 sigreturn
>   4.94    0.000281           0      8361      2485 select
>   2.41    0.000137           0      8362           gettimeofday
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.005693                179684      5024 total
>   

This looks like 250 Hz?  And a huge number of settime calls?

Something's broken with dynticks.

> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  93.37    0.025541           3     10194     10193 select
>   4.82    0.001319           0     33259           clock_gettime
>   1.10    0.000301           0     10195           gettimeofday
>   0.71    0.000195           0     10196     10194 sigreturn
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.027356                 63844     20387 total
>   

This is expected and sane.

> And this KVM:
>
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  42.66    0.002885           0     45527        24 ioctl
>  25.62    0.001733           0     89305           clock_gettime
>  13.12    0.000887           0     34894           timer_gettime
>   7.97    0.000539           0     18016           timer_settime
>   4.70    0.000318           0     12224      7270 rt_sigtimedwait
>   2.79    0.000189           0      7271           select
>   1.86    0.000126           0      7271           gettimeofday
>   1.27    0.000086           0      4954           rt_sigaction
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.006763                219462      7294 total
>   

Similarly broken.  The effective frequency is twice qemu's.  I think we
had the same effect last time.

> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  49.41    0.004606           0     59900        27 ioctl
>  24.14    0.002250           0     31252     21082 rt_sigtimedwait
>   9.65    0.000900           0     51856           clock_gettime
>   8.44    0.000787           0     17819           select
>   4.42    0.000412           0     17819           gettimeofday
>   3.94    0.000367           0     10170           rt_sigaction
> ------ ----------- ----------- --------- --------- ----------------
> 100.00    0.009322                188816     21109 total
>
>   

Similarly sane.


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.





[Qemu-devel] Re: [kvm-devel] [PATCH 0/4] Rework alarm timer infrastructure - take2

2007-08-22 Thread Luca Tettamanti
On Wed, Aug 22, 2007 at 08:02:07AM +0300, Avi Kivity wrote:
> Luca Tettamanti wrote:
> 
> > Actually I'm having trouble with cyclesoak (probably its calibration);
> > numbers are not very stable across multiple runs...
> >   
> 
> I've had good results with cyclesoak; maybe you need to run it in
> runlevel 3 so the load generated by moving the mouse or breathing
> doesn't affect measurements.

That's what I did; I tested with -nographic in a text console.

> > The guest is an idle kernel with HZ=1000.
> >   
> 
> Can you double check this?  The dyntick results show that this is either
> a 100Hz kernel, or that there is a serious bug in dynticks.

Oops, I sent the wrong files, sorry.

This is QEMU, with dynticks and HPET:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 52.10    0.002966           0     96840           clock_gettime
 19.50    0.001110           0     37050           timer_gettime
 10.66    0.000607           0     20086           timer_settime
 10.40    0.000592           0      8985      2539 sigreturn
  4.94    0.000281           0      8361      2485 select
  2.41    0.000137           0      8362           gettimeofday
------ ----------- ----------- --------- --------- ----------------
100.00    0.005693                179684      5024 total

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 93.37    0.025541           3     10194     10193 select
  4.82    0.001319           0     33259           clock_gettime
  1.10    0.000301           0     10195           gettimeofday
  0.71    0.000195           0     10196     10194 sigreturn
------ ----------- ----------- --------- --------- ----------------
100.00    0.027356                 63844     20387 total

And this KVM:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 42.66    0.002885           0     45527        24 ioctl
 25.62    0.001733           0     89305           clock_gettime
 13.12    0.000887           0     34894           timer_gettime
  7.97    0.000539           0     18016           timer_settime
  4.70    0.000318           0     12224      7270 rt_sigtimedwait
  2.79    0.000189           0      7271           select
  1.86    0.000126           0      7271           gettimeofday
  1.27    0.000086           0      4954           rt_sigaction
------ ----------- ----------- --------- --------- ----------------
100.00    0.006763                219462      7294 total

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 49.41    0.004606           0     59900        27 ioctl
 24.14    0.002250           0     31252     21082 rt_sigtimedwait
  9.65    0.000900           0     51856           clock_gettime
  8.44    0.000787           0     17819           select
  4.42    0.000412           0     17819           gettimeofday
  3.94    0.000367           0     10170           rt_sigaction
------ ----------- ----------- --------- --------- ----------------
100.00    0.009322                188816     21109 total


Luca
-- 
Runtime error 6D at f000:a12f : incompetent user




Re: [kvm-devel] [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.

2007-08-22 Thread Andi Kleen
On Wed, Aug 22, 2007 at 10:03:32AM +0300, Avi Kivity wrote:
> Maybe the kernel is using the timer, so userspace can't.  Just a guess.

HPET has multiple timers (variable, but typically 2 or 4). The kernel
only uses timer 0. It's possible someone else in user space is using
it though. Try lsof /dev/hpet

-Andi




Re: [kvm-devel] [Qemu-devel] [PATCH 3/4] Add support for HPET periodic timer.

2007-08-22 Thread Avi Kivity
Dan Kenigsberg wrote:
> On Tue, Aug 21, 2007 at 01:15:22PM -0700, Matthew Kent wrote:
>   
>> On Tue, 2007-21-08 at 21:40 +0200, Luca wrote:
>> 
>>> On 8/21/07, Matthew Kent <[EMAIL PROTECTED]> wrote:
>>>   
>>>> On Sat, 2007-18-08 at 01:11 +0200, Luca Tettamanti wrote:
>>>>
>>>>> plain text document attachment (clock-hpet)
>>>>> Linux operates the HPET timer in legacy replacement mode, which means that
>>>>> the periodic interrupt of the CMOS RTC is not delivered (qemu won't be able
>>>>> to use /dev/rtc). Add support for HPET (/dev/hpet) as a replacement for the
>>>>> RTC; the periodic interrupt is delivered via SIGIO and is handled in the
>>>>> same way as the RTC timer.
>>>>>
>>>>> Signed-off-by: Luca Tettamanti <[EMAIL PROTECTED]>
>>>>>
>>>> I must be missing something silly here.. should I be able to open more
>>>> than one instance of qemu with -clock hpet? Because upon invoking a
>>>> second instance of qemu HPET_IE_ON fails.
>>>>
>>> It depends on your hardware. Theoretically it's possible, but I've yet
>>> to see a motherboard with more than one periodic timer.
>>>   
>> Ah thank you, after re-reading the docs I think I better understand
>> this.
>> 
>
> At the risk of being off-topic, maybe you can help me try the HPET support.
> When I try the hpet Documentation demo I get
>
> # ./hpet poll /dev/hpet 1 1000
> -hpet: executing poll
> hpet_poll: info.hi_flags 0x0
> hpet_poll, HPET_IE_ON failed
>
> while I have
>
> $ dmesg|grep -i HPET
> ACPI: HPET 7D5B6AE0, 0038 (r1 A M I  OEMHPET   5000708 MSFT   97)
> ACPI: HPET id: 0x8086a301 base: 0xfed0
> hpet0: at MMIO 0xfed0, IRQs 2, 8, 0, 0
> hpet0: 4 64-bit timers, 14318180 Hz
> hpet_resources: 0xfed0 is busy
> Time: hpet clocksource has been installed.
>
> Any idea what I am misconfiguring?
>   

Maybe the kernel is using the timer, so userspace can't.  Just a guess.


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.