On Wednesday, February 27, 2019 at 1:33:29 AM UTC-8, Dmitry Vyukov wrote:
>
> On Tue, Feb 26, 2019 at 11:47 PM Chris M. Thomasson <cri...@charter.net> 
> wrote: 
> > 
> > On Tuesday, February 26, 2019 at 12:10:02 AM UTC-8, Dmitry Vyukov wrote: 
> >> 
> >> On Wed, Feb 20, 2019 at 7:51 AM Chris M. Thomasson <cri...@charter.net> 
> wrote: 
> >> > 
> >> > Fwiw, I wrote a crude new benchmark that measures how many reads and 
> writes can be performed in a given amount of time: my algorithm vs 
> std::shared_mutex. So, we are _primarily_ looking at how many reads can be 
> performed in this test over 60 seconds. The number of threads is variable, 
> determined by std::thread::hardware_concurrency() * THREADS, with THREADS 
> set to 8 in the test. So on my system the setup is: 
> >> > ___________________________________ 
> >> > cpu_threads_n = 4 
> >> > threads_n = 32 
> >> > writers = 16 
> >> > readers = 16 
> >> > test duration = 60 seconds 
> >> > ___________________________________ 
> >> [...] 
> >> 
> >> Benchmarked this on 2 systems (3 alternating runs for each mutex): 
> >> 
> >> [...] 
> > 
> > 
> > Thank you! :^) 
> > 
> >> 
> >> 
> >> Now it is your problem to interpret this :) 
> > 
> > 
> > std::shared_mutex might have reader priority? I wonder whether it is using 
> a distributed algorithm on the larger system? 
> > 
> > The strict bakery-style interleaving in my algorithm must be doing 
> something "interesting" on the larger system. Mine seems to allow some more 
> writes in the data; perhaps it is too fair? The mutex aspect of my algorithm 
> might be kicking in here. It uses Alexander Terekhov's algorithm from 
> pthread_mutex_t in pthreads-win32; actually, it can use any mutual exclusion 
> algorithm for writer access: 
> > 
> > https://www.sourceware.org/pthreads-win32/ 
> > 
> > Integrating writer access wrt ct_rwmutex::m_count should be 
> beneficial... 
> > 
> > 
> > 
> >> 
> >> 
> >> Another interesting data point is time output. 
> >> On the large system for your mutex: 
> >> 848.76user 10.83system 1:00.26elapsed 1426%CPU 
> >> 
> >> for std mutex: 
> >> 4238.27user 26.79system 1:00.18elapsed 7086%CPU 
> >> 
> >> So whatever your mutex did, it used 5 times less CPU time for that. 
> > 
> > 
> > Bakery style, with the slow paths boiling down to condvar/mutex? I wonder 
> what std::shared_mutex uses on your end when it has to wait. A futex? 
>
> It's just a wrapper around pthread_rwlock and I have glibc 2.24. 
>

I need to examine the implementation. If it is based solely on mutex/condvar, 
I am going to be really surprised! I would think that pthread_rwlock 
should be using atomics and a futex on Linux.

-- 

--- 
You received this message because you are subscribed to the Google Groups 
"Scalable Synchronization Algorithms" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to lock-free+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/lock-free/b0b0cb6f-3b40-47fa-9d47-4b7ba1b830ca%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
