* Tim Chen <tim.c.c...@linux.intel.com> wrote:

> For version 8 of the patchset, we included the patch from Waiman to 
> streamline wakeup operations and also optimize the MCS lock used in 
> rwsem and mutex.

I'd be feeling a lot easier about this patch series if you also had 
performance figures that show how mmap_sem is affected.

These:

> Tim got the following improvement for exim mail server 
> workload on 40 core system:
> 
> Alex+Tim's patchset:           +4.8%
> Alex+Tim+Waiman's patchset:    +5.3%

appear to be mostly related to the anon_vma->rwsem. But once that lock is 
changed to an rwlock_t, this measurement falls away.

Peter Zijlstra suggested the following testcase:

===============================>
In fact, try something like this from userspace:

n-threads:

  pthread_mutex_lock(&mutex);
  foo = mmap();
  pthread_mutex_unlock(&mutex);

  /* work */

  pthread_mutex_lock(&mutex);
  munmap(foo);
  pthread_mutex_unlock(&mutex);

vs

n-threads:

  foo = mmap();
  /* work */
  munmap(foo);

I've had reports that the former was significantly faster than the
latter.
<===============================

This could be put into a standalone testcase, or you could add it as a new 
subcommand of 'perf bench', which already has some pthread code; see for 
example tools/perf/bench/sched-messaging.c. Adding:

   perf bench mm threads

or so would be a natural thing to have.
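
Something along these lines would do as a starting point for the standalone 
variant; the thread count, mapping size and iteration count below are 
arbitrary placeholders, so take it as a sketch rather than a finished 
benchmark:

/*
 * mmap-bench.c: N threads doing mmap()/munmap() in a loop, either
 * directly (hammering mmap_sem) or serialized by a pthread mutex.
 *
 *   gcc -O2 -pthread mmap-bench.c -o mmap-bench
 *   ./mmap-bench              # unserialized variant
 *   ./mmap-bench serialized   # mutex around mmap()/munmap()
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define NTHREADS	8
#define ITERATIONS	10000
#define MAP_SIZE	(128 * 1024)

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static int serialize;

static void *worker(void *arg)
{
	int i;

	(void)arg;

	for (i = 0; i < ITERATIONS; i++) {
		void *foo;

		if (serialize)
			pthread_mutex_lock(&mutex);
		foo = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (serialize)
			pthread_mutex_unlock(&mutex);

		if (foo == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}

		/* work: touch the pages so the mapping is populated */
		memset(foo, 0, MAP_SIZE);

		if (serialize)
			pthread_mutex_lock(&mutex);
		munmap(foo, MAP_SIZE);
		if (serialize)
			pthread_mutex_unlock(&mutex);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t threads[NTHREADS];
	int i;

	serialize = argc > 1 && !strcmp(argv[1], "serialized");

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, worker, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);

	return 0;
}

Timing the two invocations against each other on a many-core box would show 
whether the userspace serialization really wins, and wiring the same loop 
into tools/perf/bench/ would give the 'perf bench mm threads' subcommand 
its body.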

Thanks,

        Ingo