On Tue, Jan 28, 2014 at 01:19:10PM -0500, Waiman Long wrote:
> For single-thread performance (no contention), a 256K lock/unlock
> loop was run on a 2.4GHz Westmere x86-64 CPU.  The following table
> shows the average time (in ns) for a single lock/unlock sequence
> (including the looping and timing overhead):
> 
>   Lock Type                   Time (ns)
>   ---------                   ---------
>   Ticket spinlock               14.1
>   Queue spinlock (Normal)        8.8*

What CONFIG_NR_CPUS?

Because for CONFIG_NR_CPUS < 128 (or 256 if you got !PARAVIRT), the fast
path code should be:

ticket:

  mov $0x100,%eax
  lock xadd %ax,(%rbx)
  cmp %al,%ah
  jne ...

although my GCC is being silly and writes:

  mov $0x100,%eax
  lock xadd %ax,(%rbx)
  movzbl %ah,%edx
  cmp %al,%dl
  jne ...

Which seems rather like a waste of a perfectly good cycle.
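In C, that fast path is roughly the following (my own sketch using GCC's __atomic builtins, not the kernel's actual arch_spinlock code; the xadd grabs a ticket and reads the current owner in a single atomic op):

```c
#include <stdint.h>

/* Sketch of a 2x8-bit ticket lock: low byte = owner (%al), high byte = next (%ah). */
struct tiny_ticket { uint16_t head_tail; };

static void tiny_ticket_lock(struct tiny_ticket *lk)
{
	/* lock xadd %ax,(%rbx): take a ticket, get the old word back */
	uint16_t t = __atomic_fetch_add(&lk->head_tail, 0x100, __ATOMIC_ACQUIRE);
	uint8_t me = t >> 8;			/* %ah: our ticket */
	while ((uint8_t)t != me)		/* cmp %al,%ah; jne ...: spin until owner == ticket */
		t = __atomic_load_n(&lk->head_tail, __ATOMIC_ACQUIRE);
}

static void tiny_ticket_unlock(struct tiny_ticket *lk)
{
	/* bump the owner byte (low byte on little-endian x86) */
	__atomic_fetch_add((uint8_t *)&lk->head_tail, 1, __ATOMIC_RELEASE);
}
```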

With a bigger NR_CPUS you do indeed need more ops:

  mov $0x10000,%edx
  lock xadd %edx,(%rbx)
  mov %edx,%ecx
  shr $0x10,%ecx
  cmp %dx,%cx
  jne ...
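The wide variant is the same thing with 16-bit tickets in a 32-bit word; the extra mov/shr is just to split owner from ticket. Again a sketch of mine, not the kernel code:

```c
#include <stdint.h>

/* Sketch of a 2x16-bit ticket lock for NR_CPUS >= 256: low half = owner, high half = next. */
struct big_ticket { uint32_t head_tail; };

static void big_ticket_lock(struct big_ticket *lk)
{
	/* lock xadd %edx,(%rbx) */
	uint32_t t = __atomic_fetch_add(&lk->head_tail, 0x10000, __ATOMIC_ACQUIRE);
	uint16_t me = t >> 16;			/* shr $0x10,%ecx: our ticket */
	while ((uint16_t)t != me)		/* cmp %dx,%cx; jne ... */
		t = __atomic_load_n(&lk->head_tail, __ATOMIC_ACQUIRE);
}

static void big_ticket_unlock(struct big_ticket *lk)
{
	/* bump the owner half (low half on little-endian x86) */
	__atomic_fetch_add((uint16_t *)&lk->head_tail, 1, __ATOMIC_RELEASE);
}
```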


Whereas for the straight cmpxchg() you'd get something relatively simple
like:

  mov %edx,%eax
  lock cmpxchg %ecx,(%rbx)
  cmp %edx,%eax
  jne ...
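In C that's just a plain compare-and-swap of 0 -> locked (sketch of mine with GCC builtins, uncontended case only; the contended case would fall into the slow path):

```c
#include <stdint.h>

/* Sketch of the uncontended queue-spinlock fast path: 0 = unlocked, 1 = locked. */
static int qlock_trylock(uint32_t *lock)
{
	uint32_t expected = 0;
	/* mov %edx,%eax; lock cmpxchg %ecx,(%rbx); cmp %edx,%eax; jne ... */
	return __atomic_compare_exchange_n(lock, &expected, 1, 0,
					   __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}

static void qlock_unlock(uint32_t *lock)
{
	__atomic_store_n(lock, 0, __ATOMIC_RELEASE);
}
```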



Anyway, as soon as you get some (light) contention you're going to tank
because you have to pull in extra cachelines, which is sad.


I suppose we could restructure the ticket code some more and optimize
the uncontended path, but that'll make the contended path more expensive
again, although probably not as bad as hitting a new cacheline.