On Thu, 10 Apr 2014 09:44:30 -0500
Clark Williams <willi...@redhat.com> wrote:

> I wrote a program named whack_mmap_sem which creates a large (4GB)
> buffer, then creates 2 x ncpus threads that are affined across all the
> available cpus. These threads then randomly write into the buffer,
> which should cause page faults galore.
> 
> I then built the following kernel configs:
> 
>   vanilla-3.13.15  - no RT patches applied

 vanilla-3.*12*.15?

>   rt-3.12.15       - PREEMPT_RT patchset
>   rt-3.12.15-fixes - PREEMPT_RT + rwsem fixes
>   rt-3.12.15-multi - PREEMPT_RT + rwsem fixes + rwsem-multi patch
> 
> My test h/w was a Dell R520 with a 6-core Intel(R) Xeon(R) CPU E5-2430
> 0 @ 2.20GHz (hyperthreaded). So whack_mmap_sem created 24 threads
> which all partied in the 4GB address range.
> 
> I ran whack_mmap_sem with the argument -w 100000 which means each
> thread does 100k writes to random locations inside the buffer and then
> did five runs per each kernel. At the end of the run whack_mmap_sem
> prints out the time of the run in microseconds.
> 
> The means of each group of five test runs are:
> 
>   vanilla.log:  1210117
>        rt.log:  17210953 (14.2 x slower than vanilla)
>  rt-fixes.log:  10062027 (8.3 x slower than vanilla)
>  rt-multi.log:  3179582  (2.6 x slower than vanilla)
> 
> 
> As expected, vanilla kicked RT's butt when hammering on the
> mmap_sem. But somewhat unexpectedly, your fixups helped quite a bit

That doesn't surprise me too much, as I removed the check for
nesting, which also shrunk the size of the rwsem itself (removed the
read_depth from the struct). That by itself can give a bonus boost.

Now the question is, how much will this affect real use case scenarios?

-- Steve


> and the multi+fixups got RT back into being almost respectable.
> 