On Tue, 15 Mar 2011, Jung-uk Kim wrote:
> Log:
>   Unconditionally use binuptime(9) for get_cyclecount(9) on i386.  Since
>   this function is almost exclusively used for random harvesting, there
>   is no need for micro-optimization.  Adjust the manual page accordingly.
That's what I said when it was being committed, but it isn't clear that there is _no_ need for micro-optimization in random harvesting. IIRC, random harvesting originally was too active, so it benefited more from micro-optimizations. The timecounter is fast enough if the timecounter hardware is the TSC: TSC reads used to take 12 cycles on Athlons and now take 40+, and the timecounter software overhead adds only about 30 cycles to that. But now the timecounter hardware is even more rarely the TSC, and most timecounter hardware is very slow -- about 3000 cycles for ACPI-fast, 9000 for ACPI and 15000 for i8254 at 3 GHz.
Modified: head/sys/i386/include/cpu.h
==============================================================================
--- head/sys/i386/include/cpu.h	Tue Mar 15 16:50:17 2011	(r219671)
+++ head/sys/i386/include/cpu.h	Tue Mar 15 17:14:26 2011	(r219672)
@@ -70,15 +70,10 @@ void swi_vm(void *);
 static __inline uint64_t
 get_cyclecount(void)
 {
-#if defined(I486_CPU) || defined(KLD_MODULE)
 	struct bintime bt;
 
-	if (!tsc_present) {
-		binuptime(&bt);
-		return ((uint64_t)bt.sec << 56 | bt.frac >> 8);
-	}
-#endif
-	return (rdtsc());
+	binuptime(&bt);
+	return ((uint64_t)bt.sec << 56 | bt.frac >> 8);
 }
You should pessimize all arches to use binuptime() to get enough test coverage to see if anyone notices. Then get_cyclecount() can be removed. The correct function to use for fast and possibly-wrong times is clock_binuptime(CLOCK_FASTEST, tsp), where CLOCK_FASTEST maps to CLOCK_TSC on x86 if there is a TSC and nothing faster.

Bruce
_______________________________________________
svn-src-all@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-all
To unsubscribe, send any mail to "svn-src-all-unsubscr...@freebsd.org"