On Wednesday, February 27, 2019 at 6:34:41 AM UTC-8, Manuel Pöter wrote:
>
> Benchmarked this on 2 systems with `THREADS` set to 1, 2 and 4.
>
> Result 1:
>
> 8x Intel(R) Xeon(R) CPU E7-8850 @ 2.00GHz (80 cores, 2x SMT)
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 160
> writers = 80
> readers = 80
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 186803
> Raw Writes: 1922
> reads_per_tick = 3111
> writes_per_tick = 32
> Ticks = 60.0334
> ___________________________________
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 320
> writers = 160
> readers = 160
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 336265
> Raw Writes: 2559
> reads_per_tick = 5596
> writes_per_tick = 42
> Ticks = 60.0848
> ___________________________________
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 640
> writers = 320
> readers = 320
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 449283
> Raw Writes: 3718
> reads_per_tick = 7471
> writes_per_tick = 61
> Ticks = 60.1302
> ___________________________________
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 160
> writers = 80
> readers = 80
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 191840
> Raw Writes: 784
> reads_per_tick = 3194
> writes_per_tick = 13
> Ticks = 60.0533
> ___________________________________
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 320
> writers = 160
> readers = 160
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 350020
> Raw Writes: 1738
> reads_per_tick = 5826
> writes_per_tick = 28
> Ticks = 60.0688
> ___________________________________
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 160
> threads_n = 640
> writers = 320
> readers = 320
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 706867
> Raw Writes: 1830
> reads_per_tick = 11752
> writes_per_tick = 30
> Ticks = 60.1452
> ___________________________________
>


Okay, thank you. So my work is losing on reads, but performing more 
writes (reads_per_tick and writes_per_tick are just the raw counts 
divided by Ticks: 186803 / 60.0334 ~= 3111). This has to be an aspect of 
my algorithm's strict bakery-style fairness: writers can never starve 
readers, and vice versa. It has no reader or writer preference at all.
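
For illustration, here is a minimal bakery-style sketch in 100% portable 
C++11, basically a ticket lock generalized to readers and writers. To be 
clear, this is NOT my actual algorithm, just a toy showing what strict 
FIFO fairness means here:

#include <condition_variable>
#include <mutex>

// Toy sketch only: a strictly FIFO ("bakery") reader/writer lock.
// Every thread takes a ticket on arrival and tickets are granted in
// order, so writers can never starve readers and vice versa. Runs of
// consecutive readers still execute concurrently.
class bakery_rwlock
{
    std::mutex m_mtx;
    std::condition_variable m_cond;
    unsigned long m_next = 0;    // next ticket to hand out
    unsigned long m_serving = 0; // lowest ticket allowed inside
    unsigned long m_readers = 0; // readers currently inside

public:
    void lock_shared()
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        unsigned long t = m_next++;
        m_cond.wait(lock, [&] { return m_serving == t; });
        ++m_readers;
        ++m_serving;         // admit the next ticket right away so
        m_cond.notify_all(); // consecutive readers overlap
    }

    void unlock_shared()
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        if (--m_readers == 0) m_cond.notify_all(); // a writer may be next
    }

    void lock()
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        unsigned long t = m_next++;
        // wait for our turn _and_ for all earlier readers to drain
        m_cond.wait(lock, [&] { return m_serving == t && m_readers == 0; });
    }

    void unlock()
    {
        std::unique_lock<std::mutex> lock(m_mtx);
        ++m_serving; // retire the write ticket; next in line goes
        m_cond.notify_all();
    }
};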


>
> Result 2:
>
> 4x SPARC-T5-4 (64 cores, 8x SMT)
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 512
> writers = 256
> readers = 256
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 640255
> Raw Writes: 7999
> reads_per_tick = 10650
> writes_per_tick = 133
> Ticks = 60.1149
> ___________________________________
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 1024
> writers = 512
> readers = 512
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 948097
> Raw Writes: 12602
> reads_per_tick = 15746
> writes_per_tick = 209
> Ticks = 60.2094
> ___________________________________
>
> Testing Version 0.1: Chris M. Thomasson's Experimental Read/Write Mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 2048
> writers = 1024
> readers = 1024
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 1718250
> Raw Writes: 23019
> reads_per_tick = 28402
> writes_per_tick = 380
> Ticks = 60.4974
> ___________________________________
>
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 512
> writers = 256
> readers = 256
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 4482
> Raw Writes: 2166488
> reads_per_tick = 74
> writes_per_tick = 36045
> Ticks = 60.1037
> ___________________________________
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 1024
> writers = 512
> readers = 512
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 1536
> Raw Writes: 2093636
> reads_per_tick = 25
> writes_per_tick = 34767
> Ticks = 60.2185
> ___________________________________
>
> Testing Version 0.1: std::shared_mutex
> ___________________________________
> cpu_threads_n = 512
> threads_n = 2048
> writers = 1024
> readers = 1024
> test duration = 60 seconds
> ___________________________________
> ___________________________________
> Raw Reads: 4096
> Raw Writes: 2001034
> reads_per_tick = 67
> writes_per_tick = 33130
> Ticks = 60.3978
> ___________________________________
>
>
Ummm... this is wild! It is hard to believe that std::shared_mutex is 
tanking so badly on read throughput on the SPARC. It must have a heavy 
writer preference there. I will have some more time in a day or two to 
work on this and try to make sense of these results.
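
When I do, a quick probe along these lines should expose the preference. 
This is only a sketch of the kind of test I have in mind, not Manuel's 
harness: hammer one std::shared_mutex with equal reader and writer 
threads and count acquisitions. A heavily writer-preferring 
implementation will show writes dwarfing reads, just like the SPARC run:

#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <shared_mutex>
#include <thread>
#include <vector>

int main()
{
    std::shared_mutex sm; // note: C++17, unlike my pure C++11 work
    std::atomic<bool> run{true};
    std::atomic<unsigned long> reads{0}, writes{0};

    unsigned n = std::thread::hardware_concurrency();
    std::vector<std::thread> pool;

    for (unsigned i = 0; i < n; ++i)
    {
        if (i % 2 == 0)
            pool.emplace_back([&] { // reader
                while (run.load(std::memory_order_relaxed)) {
                    std::shared_lock<std::shared_mutex> l(sm);
                    reads.fetch_add(1, std::memory_order_relaxed);
                }
            });
        else
            pool.emplace_back([&] { // writer
                while (run.load(std::memory_order_relaxed)) {
                    std::unique_lock<std::shared_mutex> l(sm);
                    writes.fetch_add(1, std::memory_order_relaxed);
                }
            });
    }

    std::this_thread::sleep_for(std::chrono::seconds(10));
    run.store(false);
    for (std::thread& t : pool) t.join();

    std::printf("reads: %lu, writes: %lu\n", reads.load(), writes.load());
    return 0;
}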

Imvvho, std::shared_mutex should always be able to beat a rwmutex 
implemented in 100% pure C++11: the standard library is free to sit on 
platform-specific primitives that a portable implementation cannot 
touch. My algorithm is producing some very interesting results on these 
larger systems.

Thanks again, Manuel and Dmitry. :^)
