Bruce Momjian wrote:
> In fact, the kernel doesn't even have a way
> to measure microsecond timings.
Linux has patches available to do microsecond timings, but they're
nonportable, of course.
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11
Bruce Momjian [EMAIL PROTECTED] writes:
> A comment on microsecond delays using select(). Most Unix kernels run
> at 100 Hz, meaning that they have a programmable timer that interrupts
> the CPU every 10 milliseconds.
Right --- this probably also explains my observation that some kernels
I have been thinking some more about the s_lock() delay loop in
connection with this. We currently have
/*
* Each time we busy spin we select the next element of this array as the
* number of microseconds to wait. This accomplishes pseudo random back-off.
* Values are not critical but
Bruce Momjian [EMAIL PROTECTED] writes:
Having read the select(2) man page more closely, I now realize that
it is *defined* not to yield the processor when the requested delay
is zero: it just checks the file ready status and returns immediately.
Actually, a kernel call is something. On
So *if* some I/O just completed, the call *might* do what we need,
which is yield the CPU. Otherwise we're just wasting cycles, and
will continue to waste them until we do a select with a nonzero
delay. I propose we cut out the spinning and just do a nonzero delay
immediately.
Well, any
On Sat, Feb 17, 2001 at 12:26:31PM -0500, Tom Lane wrote:
> Bruce Momjian [EMAIL PROTECTED] writes:
> > A comment on microsecond delays using select(). Most Unix kernels run
> > at 100 Hz, meaning that they have a programmable timer that interrupts
> > the CPU every 10 milliseconds.
>
> Right --- this
[EMAIL PROTECTED] (Nathan Myers) writes:
> Certainly there are machines and kernels that count time more precisely
> (isn't PG ported to QNX?). We do users of such kernels no favors by
> pretending they only count clock ticks. Furthermore, a 1 ms clock
> tick is pretty common, e.g. on Alpha boxes.