Digging through LKML, it looks like someone proposed a patch to expose
tsc_khz, but it was rejected because not all platforms have a constant-rate
TSC and exposing it would encourage users to use rdtsc directly.

https://lwn.net/Articles/388188/

I was actually not hoping for help from the kernel, but rather for some
way to measure it from userspace, even without kernel support.


On Wed, Apr 20, 2016 at 1:03 PM, Elazar Leibovich <elaz...@gmail.com> wrote:
> I didn't think of it at first, but since the kernel uses the TSC as a
> clock source, I'd better have a look at what it does.
>
> Note that there are some TSC calibration functions specific to MID
> and Atom platforms, but I assume the generic one is the most
> interesting.
>
> http://lxr.free-electrons.com/source/arch/x86/kernel/tsc.c#L665
>
> Further reading of the kernel source shows that for regular x86
> platforms, the TSC calibration code either reads the frequency from
> specific MSRs, or actually measures it against the HPET, probably on
> older platforms.
>
> I guess I could measure it against the HPET myself, assume kHz
> precision, and round the result down from userspace, but I'm still
> looking for a better way to do that.
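>
> For reference, a minimal sketch of that userspace approach, using
> clock_gettime(CLOCK_MONOTONIC_RAW) as a stand-in for reading the HPET
> directly (so it really measures against whatever clocksource backs
> that clock), with __rdtsc() from x86intrin.h; the ~100ms interval and
> the kHz rounding are arbitrary choices, not anything the kernel does:
>
> #include <stdint.h>
> #include <stdio.h>
> #include <time.h>
> #include <x86intrin.h>   /* __rdtsc() */
>
> static uint64_t now_ns(void) {
>     struct timespec ts;
>     clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
>     return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
> }
>
> int main(void) {
>     uint64_t t0 = now_ns(), c0 = __rdtsc();
>     /* busy-wait ~100ms; a longer interval gives better precision */
>     while (now_ns() - t0 < 100 * 1000 * 1000)
>         ;
>     uint64_t t1 = now_ns(), c1 = __rdtsc();
>     /* cycles per millisecond == kHz */
>     uint64_t khz = (c1 - c0) * 1000000ull / (t1 - t0);
>     printf("estimated tsc_khz: %llu\n", (unsigned long long)khz);
>     return 0;
> }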
>
> I'm not a kernel expert; maybe tsc_khz is exported to userspace somehow.
>
> Anyone have any idea?
>
> On Wed, Apr 20, 2016 at 8:10 AM, Elazar Leibovich <elaz...@gmail.com> wrote:
>> Hi,
>>
>> On all recent Intel hardware, rdtsc returns the number of ticks since
>> boot, increments at a constant rate, and is synchronized across CPUs.
>>
>> Vol III 17.14:
>> "For Pentium 4 processors, (...): the time-stamp counter increments
>> at a constant rate. That rate may be set by the maximum core-clock to
>> bus-clock ratio of the processor or may be set by the maximum
>> resolved frequency at which the processor is booted."
>>
>> Vol III 17.14.1:
>> "On processors with invariant TSC support, the OS may use the TSC for
>> wall clock timer services
>> (instead of ACPI or HPET timers). TSC reads are much more efficient
>> and do not incur the overhead associated with
>> a ring transition or access to a platform resource."
>>
>>
>> This patch by Andi Kleen, who should know a thing or two about
>> processor architecture, seems to imply that the sysfs
>> cpuinfo_max_freq value is rdtsc's rate:
>>
>> https://sourceware.org/ml/libc-alpha/2009-08/msg00001.html
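>>
>> (For what it's worth, a minimal sketch that just reads that sysfs
>> value; the path below is the usual cpufreq location for cpu0, the
>> value is reported in kHz, and it assumes cpufreq is available at all:)
>>
>> #include <stdio.h>
>>
>> int main(void) {
>>     unsigned long khz = 0;
>>     FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq", "r");
>>     if (!f || fscanf(f, "%lu", &khz) != 1) {
>>         fprintf(stderr, "could not read cpuinfo_max_freq\n");
>>         return 1;
>>     }
>>     fclose(f);
>>     printf("cpuinfo_max_freq: %lu kHz\n", khz);
>>     return 0;
>> }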
>>
>> However, when running the following, I get a constant drift from the
>> expected result:
>>
>> https://gist.github.com/elazarl/140ccc8ebe8c98fc5050a4fdb7545df8
>>
>> The gist of the code:
>>
>> /* measure how many TSC ticks elapse across a nanosleep of `req` */
>> uint64_t nanosleep_jitter(struct timespec req) {
>>     uint64_t now = __real_tsc();
>>     nanosleep_must(req);
>>     return __real_tsc() - now;
>> }
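>>
>> (__real_tsc() and nanosleep_must() are defined in the gist; a
>> plausible stand-in for __real_tsc, assuming it is nothing more than a
>> fenced rdtsc, would be something like:)
>>
>> #include <stdint.h>
>> #include <x86intrin.h>   /* __rdtsc(), _mm_lfence() */
>>
>> /* hypothetical stand-in: read the TSC, with a fence so earlier
>>  * instructions cannot be reordered past the read */
>> static inline uint64_t __real_tsc(void) {
>>     _mm_lfence();
>>     return __rdtsc();
>> }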
>>
>> When running it, I get:
>>
>> $ ./jitter -s 10ms -n 1
>> -3.5615ms jitter actual 6.4385ms expected 10.0000ms
>>
>>
>> What am I missing?
>>
>> How can I get rdtsc's frequency on Linux, preferably from userland?
>>
>> PS:
>> Yes, this is not correct for all processors, but it is correct enough
>> to work on virtually all recent server hardware, and the constant_tsc
>> flag can be verified in /proc/cpuinfo. So while not perfect for all
>> use cases, it is good enough.
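>>
>> (To check the flag programmatically rather than eyeballing it, here
>> is a small sketch that just scans /proc/cpuinfo for the constant_tsc
>> string; grep constant_tsc /proc/cpuinfo does the same:)
>>
>> #include <stdio.h>
>> #include <string.h>
>>
>> int main(void) {
>>     char line[4096];
>>     FILE *f = fopen("/proc/cpuinfo", "r");
>>     if (!f) { perror("/proc/cpuinfo"); return 2; }
>>     while (fgets(line, sizeof line, f)) {
>>         if (strstr(line, "constant_tsc")) {
>>             puts("constant_tsc present");
>>             fclose(f);
>>             return 0;
>>         }
>>     }
>>     fclose(f);
>>     puts("constant_tsc NOT present");
>>     return 1;
>> }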

_______________________________________________
Linux-il mailing list
Linux-il@cs.huji.ac.il
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
