On 04/26/17 12:40, Peltonen, Janne (Nokia - FI/Espoo) wrote:
>>> odp_sched_latency currently uses clock_gettime. It is my understanding
>>> that clock_gettime does not have the overhead of the system call. Can
>>> you elaborate more on the 'improved significantly' part?
>>>
>>
>> clock_gettime() uses the same TSC, but when you profile it with perf you can 
>> see tens of
>> kernel functions including system call entry, RCU maintenance, etc.
> 
> clock_gettime() does not use the vdso implementation without syscall overhead
> on x86 if clock id is CLOCK_MONOTONIC_RAW as it seems to be in ODP. I think
> new enough kernels do support CLOCK_MONOTONIC_RAW in vdso for arm64 though.
> 
> CLOCK_MONOTONIC is supported in vdso in x86 and would not cause syscall
> overhead provided that the kernel time source is tsc (which it often is,
> but not always (e.g. in some VMs)).
> 
>       Janne
> 
> 

Here we need to be very careful with two things:
1) If the API says nanoseconds should be returned, then it has to be nanoseconds.

2) We need to go in a more generic way. If clock_gettime() shows great
results on fresh kernels, then maybe it's reasonable to stay with it. But
in that case we need to do measurements and define a minimal kernel
version. I think that on any call the kernel does some internal time
store, which might trigger other subsystems to take action (the soft irq
timers and RCU runs that Petri saw).

This post is quite old, but it says that CLOCK_MONOTONIC_RAW has the
worst performance compared to rdtsc:
http://btorpey.github.io/blog/2014/02/18/clock-sources-in-linux/

ps. link also has test code to measure values.

Maxim.
