Jamie Lokier wrote:
> Paul Brook wrote:
>   
>>> Yes, good thinking, but this should only be done if it actually impacts
>>> something.  Reducing overhead from 0.1% to 0.05% is not worthwhile if it
>>> introduces extra complexity.
>>>       
>> If the overhead is that small, why are we touching this code in the first 
>> place?
>>     
>
> Insightful.
>
> A benchmark result was posted which is rather interesting:
>
>   
>> [EMAIL PROTECTED] ~]$ time ./hackbench 50
>> x86_64 host                 : real 0m10.845s
>> x86_64 host, bound to 1 cpu : real 0m21.884s
>> i386 guest+unix clock       : real 0m49.206s
>> i386 guest+hpet clock       : real 0m48.292s
>> i386 guest+dynticks clock   : real 0m28.835s
>>
>> Results are repeatable and verified with a stopwatch because I didn't
>> believe them at first :)
>>     
>
> I am surprised if 1000 redundant SIGALRMs per second is really causing
> 70% overhead in normal qemu execution, except on a rather old or slow
> machine where signal delivery is very slow.
>   
No, something else is happening here.  On kvm, delivering a signal takes
maybe 10us, so 1000 SIGALRMs per second cost about 10ms per second, i.e.
roughly 1% overhead.  Respectable, but not more.

I suspect that the increased timer accuracy causes something in the guest
to behave better (or just differently), so this benchmark won't be
representative of normal workloads.
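
For reference, here is a minimal sketch (not qemu's actual code) of the two
host-timer styles being compared, assuming a POSIX/Linux host: a fixed 1 kHz
setitimer() that fires whether or not anything is due, versus a one-shot
timer_settime() re-armed only for the next pending deadline, which is the
dynticks idea.  The function names and the 5 ms deadline are made up for
illustration.

#define _XOPEN_SOURCE 700
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void alarm_handler(int sig)
{
    (void)sig;
    ticks++;                  /* real code would run expired timers here */
}

/* Periodic style: a fixed 1 kHz SIGALRM whether or not a guest timer is
 * due (the source of the "redundant" signals). */
static void arm_periodic_1khz(void)
{
    struct itimerval it = {
        .it_interval = { .tv_sec = 0, .tv_usec = 1000 },
        .it_value    = { .tv_sec = 0, .tv_usec = 1000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);
}

/* Dynticks style: a one-shot timer re-armed for the next pending deadline
 * only, so no signal fires while nothing is due. */
static void arm_oneshot_ns(timer_t t, int64_t delta_ns)
{
    struct itimerspec its = {
        .it_interval = { 0, 0 },                  /* one shot */
        .it_value    = { .tv_sec  = delta_ns / 1000000000,
                         .tv_nsec = delta_ns % 1000000000 },
    };
    timer_settime(t, 0, &its, NULL);
}

int main(void)
{
    struct sigaction sa = { .sa_handler = alarm_handler };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    timer_t t;
    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                            .sigev_signo  = SIGALRM };
    timer_create(CLOCK_MONOTONIC, &sev, &t);   /* may need -lrt on old glibc */

    arm_oneshot_ns(t, 5 * 1000000LL);          /* next deadline in 5 ms */
    pause();                                   /* exactly one wakeup */
    printf("signals delivered: %d\n", (int)ticks);

    /* arm_periodic_1khz() would instead deliver ~1000 signals/sec. */
    (void)arm_periodic_1khz;
    return 0;
}

With the one-shot style the host only takes a SIGALRM when a guest timer is
actually due, so the 1% signal-delivery cost estimated above largely
disappears; whether that alone explains the hackbench numbers is a separate
question.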


-- 
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.


