> Anyway, I'm attaching my completely mindless test program. It has
> hacky things like "unsigned long count[MAXTHREADS][32]" which are
> purely to just spread out the counts so that they aren't in the same
> cacheline etc.
> 
> Also note that the performance numbers it spits out depend a lot on
> things like how long the dcache hash chains etc are, so they are not
> really reliable. Running the test-program right after reboot when the
> dentries haven't been populated can result in much higher numbers -
> without that having anything to do with contention or locking at all.

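For anyone without the attached program handy, the cacheline-padding
trick described above looks roughly like this.  This is a hypothetical
sketch, not Linus' actual test: the thread body and the 10-second run
time are made up, and the real program hammers lockref operations
rather than a plain counter.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define MAXTHREADS 16

/*
 * One row of 32 unsigned longs (256 bytes on 64-bit) per thread, so
 * that neighbouring threads' counters never land in the same
 * cacheline.
 */
static unsigned long count[MAXTHREADS][32];
static volatile int stop;

static void *thread_fn(void *arg)
{
	long i = (long)arg;

	while (!stop)
		count[i][0]++;	/* the real test does a lockref op here */
	return NULL;
}

int main(void)
{
	pthread_t th[MAXTHREADS];
	unsigned long total = 0;
	long i;

	for (i = 0; i < MAXTHREADS; i++)
		pthread_create(&th[i], NULL, thread_fn, (void *)i);
	sleep(10);
	stop = 1;
	for (i = 0; i < MAXTHREADS; i++) {
		pthread_join(th[i], NULL);
		total += count[i][0];
	}
	printf("Total loops: %lu\n", total);
	return 0;
}
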
Running on a POWER7 here with 32 threads (8 cores x 4 threads), I'm
seeing some good improvements:

  Without patch:
    # ./t
    Total loops: 3730618

  With patch:
    # ./t
    Total loops: 16826271

The numbers vary by about 10% from run to run.  I didn't change your
program at all, so it's still running with MAXTHREADS 16.

powerpc patch below. I'm using arch_spin_is_locked() to implement
arch_spin_value_unlocked().

Mikey

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9cf59816d..4a3f86b 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -139,6 +139,7 @@ config PPC
        select OLD_SIGSUSPEND
        select OLD_SIGACTION if PPC32
        select HAVE_DEBUG_STACKOVERFLOW
+       select ARCH_USE_CMPXCHG_LOCKREF
 
 config EARLY_PRINTK
        bool
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5b23f91..65c25272 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -156,6 +156,11 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
        do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
 #endif
 
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+       return !arch_spin_is_locked(&lock);
+}
+
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
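
For context, arch_spin_value_unlocked() takes the lock by value
because the generic lockref code tests a snapshot of the combined
lock+count word rather than the live lock.  A simplified sketch of
the lib/lockref.c cmpxchg fast path follows (the kernel wraps this
in a CMPXCHG_LOOP() macro; the function name here is made up):

/*
 * Snapshot the 64-bit lock+count word and only attempt the cmpxchg
 * while the snapshot shows the spinlock unlocked.  If the cmpxchg
 * fails, somebody changed the word, so reload and retry; if the
 * lock is seen held, give up and let the caller take the spinlock.
 */
static inline int lockref_get_fast(struct lockref *lockref)
{
	struct lockref old, new;

	old.lock_count = ACCESS_ONCE(lockref->lock_count);
	while (arch_spin_value_unlocked(old.lock.rlock.raw_lock)) {
		new = old;
		new.count++;
		if (cmpxchg64(&lockref->lock_count, old.lock_count,
			      new.lock_count) == old.lock_count)
			return 1;	/* got the reference locklessly */
		old.lock_count = ACCESS_ONCE(lockref->lock_count);
	}
	return 0;	/* contended: caller falls back to the lock */
}

Since the snapshot is just a local copy, implementing the by-value
test as !arch_spin_is_locked(&lock), as the patch above does, is
fine: it only ever reads the copied value.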