On Wed, 3 Aug 2005, Alex Williamson wrote:
> Ok, I can see the scenario where that could produce jitter. However,
> that implies that any exit through that path could produce jitter as it
> is. For instance:
Well, what is the difference between this approach and booting with
"nojitter"? The ITC
> Think about a threaded process that gets time on multiple processors
> and then compares the times. This means that the time value obtained later
> on one thread may indicate a time earlier than that obtained on another
> thread. An essential requirement for time values is that they are
> monotonic
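For reference, a minimal userspace sketch of that kind of check might look
like the following (not code from this thread; the thread count, iteration
count and mutex-protected last-seen value are arbitrary illustrative
choices):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

#define NTHREADS   4
#define ITERATIONS 1000000

static uint64_t last_usec;      /* last time value seen by any thread */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
        int cpu = (int)(long)arg;
        cpu_set_t set;
        struct timeval tv;
        uint64_t now;
        long i;

        /* Pin this thread to its own CPU so every CPU's view is exercised. */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (i = 0; i < ITERATIONS; i++) {
                pthread_mutex_lock(&lock);
                /* Read inside the lock so the cross-thread comparison is
                 * well ordered: a smaller value here really means the
                 * reported time went backwards. */
                gettimeofday(&tv, NULL);
                now = (uint64_t)tv.tv_sec * 1000000 + tv.tv_usec;
                if (now < last_usec)
                        fprintf(stderr,
                                "cpu%d: time went backwards by %llu us\n",
                                cpu, (unsigned long long)(last_usec - now));
                else
                        last_usec = now;
                pthread_mutex_unlock(&lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, (void *)(long)i);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}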
On Wed, 2005-08-03 at 09:10 -0700, Christoph Lameter wrote:
> On Wed, 3 Aug 2005, Alex Williamson wrote:
>
> > be a reasonable performance vs absolute accuracy trade-off. What
> > happens to your worst case time if you (just for a test) hard code a
> > min_delta of something around 20-50? There
On Wed, 3 Aug 2005, Alex Williamson wrote:
> be a reasonable performance vs absolute accuracy trade-off. What
> happens to your worst case time if you (just for a test) hard code a
> min_delta of something around 20-50? There could be some kind of
Think about a threaded process that gets time on
On Tue, 2005-08-02 at 11:37 -0700, [EMAIL PROTECTED] wrote:
> Sadly, running my test case (running 1-4 tasks, each bound to a cpu,
> each pounding on gettimeofday(2)) I'm still seeing significant time
> spent spinning in this loop.
> Things are better: worst case time was down to just over 2ms
On Tue, 2 Aug 2005, Luck, Tony wrote:
> Yes, this is an SMP system (Intel Tiger4). Cpu0 is the boot cpu, and is
> indeed the one that takes the write lock, and thus the fast-return from
> the get_counter() code. I'm just very confused as to why I only see these
> 10X worse outliers on cpu3. There
> > I'm still seeing the asymmetric behavior where cpu3 sees the really
> > high times, while cpu0,1,2 are seeing peaks of 170us, which is still
> > not pretty.
>
> Is this an SMP system? Updates are performed by cpu0 and therefore the
> cacheline is mostly exclusively owned by that processor and then
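For what it's worth, the contended pattern being discussed looks roughly
like the sketch below (simplified userspace C using stdatomic, not the
actual time-interpolator code; read_hw_clock() is a stand-in for the real
counter read). Every CPU's cmpxchg pulls the cacheline holding the shared
guard word exclusive to itself, so the CPU that also performs the updates
tends to win quickly while the others retry and bounce the line:

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static _Atomic uint64_t last_time;      /* shared monotonicity guard */

/* Stand-in for the raw (possibly jittery) hardware counter read. */
static uint64_t read_hw_clock(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Reader path: never return a value older than what was last published. */
uint64_t monotonic_read(void)
{
        uint64_t now, prev;

        do {
                prev = atomic_load(&last_time);
                now = read_hw_clock();
                if (now < prev)
                        now = prev;     /* clamp instead of going backwards */
                /* cmpxchg: publish only if nobody stored a newer value */
        } while (!atomic_compare_exchange_weak(&last_time, &prev, now));

        return now;
}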
On Tue, 2 Aug 2005 [EMAIL PROTECTED] wrote:
> Sadly, running my test case (running 1-4 tasks, each bound to a cpu,
> each pounding on gettimeofday(2)) I'm still seeing significant time
> spent spinning in this loop.
> Things are better: worst case time was down to just over 2ms from 34ms ...
+ /* When holding the xtime write lock, there's no need
+  * to add the overhead of the cmpxchg. Readers are
+  * forced to retry until the write lock is released.
+  */
+ if (writelock) {
Could we remove some code duplication?
--
When using a time interpolator that is susceptible to jitter there is
potential contention over the cmpxchg used to prevent time from going
backwards. This is unnecessary when the caller holds the xtime write
seqlock, as all readers will be blocked from completing until the write
lock is released.
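In other words, the shape being proposed is roughly the following (a sketch
only; writelock, last_time and read_hw_clock() are illustrative names
standing in for the actual time-interpolator code). With the xtime write
seqlock held, readers cannot complete anyway, so the writer can use a plain
compare-and-store instead of the cmpxchg loop:

#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

static _Atomic uint64_t last_time;      /* shared monotonicity guard */

/* Stand-in for the raw hardware counter read. */
static uint64_t read_hw_clock(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

uint64_t get_counter_sketch(int writelock)
{
        uint64_t now, prev;

        now = read_hw_clock();

        if (writelock) {
                /* Readers are held off by the xtime write seqlock until it
                 * is released, so a plain load/compare/store is enough and
                 * the cmpxchg overhead can be skipped. */
                prev = atomic_load(&last_time);
                if (now < prev)
                        now = prev;
                atomic_store(&last_time, now);
                return now;
        }

        /* Reader path: retry the cmpxchg until our value is published. */
        do {
                prev = atomic_load(&last_time);
                if (now < prev)
                        now = prev;
        } while (!atomic_compare_exchange_weak(&last_time, &prev, now));

        return now;
}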