Hello, first of all thank you for the great work.
If I understood correctly, at compile time we need to provide a
configuration option to build NaviServer with either mutex locks or
rwlocks, so option (b) sounds good and more than enough.
The default is rwlocks.
Correct me if I'm wrong.


On Tue, Jun 30, 2020 at 7:56 PM Zoran Vasiljevic <z...@archiware.com> wrote:

> Excellent work as always!
> I'd go for the b. option.
>
> Am 30.06.2020 um 22:39 schrieb Gustaf Neumann <neum...@wu.ac.at>:
>
>
>
> Dear all,
>
> some of you might have noticed the recent changes in the NaviServer
> repository concerning rwlocks. rwlocks support multiple concurrent readers
> and behave like mutex locks in the case of writers. rwlocks can improve
> concurrency, especially in applications with many parallel threads and
> high load, but they are only better in cases where there are more readers
> than writers.
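>
> As a minimal illustration of the pattern in plain POSIX C (a sketch, not
> the NaviServer implementation): any number of threads may hold the read
> lock at the same time, while the write lock is exclusive, like a mutex.
>
>     #include <pthread.h>
>
>     static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
>     static int counter = 0;    /* shared state, e.g. an nsv-like value */
>
>     int read_counter(void) {
>         int value;
>
>         pthread_rwlock_rdlock(&lock);  /* many readers enter concurrently */
>         value = counter;
>         pthread_rwlock_unlock(&lock);
>         return value;
>     }
>
>     void write_counter(int value) {
>         pthread_rwlock_wrlock(&lock);  /* exclusive, like a mutex lock */
>         counter = value;
>         pthread_rwlock_unlock(&lock);
>     }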
>
> The current version of NaviServer uses rwlocks, e.g., for the URLspace and
> for nsv variables. Here are some statistics for these locks, collected on
> a real-world application (openacs.org):
>
> Name                  Locks   Busy    Read    Write   Write %
> nsv:7:openacs.org     33.14M  4       33.13M  8.41K   0.03%
> nsv:6:openacs.org     16.85M  249     16.4M   453.88K 2.69%
> nsv:3:openacs.org     15.09M  3       15.04M  46.88K  0.31%
> nsv:2:openacs.org     10.26M  5       10.23M  38.17K  0.37%
> nsv:5:openacs.org     9.98M   0       9.98M   1.57K   0.02%
> ns:rw:urlspace:4      4.45M   0       4.45M   86      0.00%
>
>
> As one can see, the vast majority of operations are read operations;
> typically less than one percent are write operations. One can see as well
> the very small number of busy operations, where a lock request has to
> block. With mutex locks, the same site with the same traffic reaches about
> 2K busy locks for nsv:7 (instead of 4) and about 3.5K busy locks for nsv:6
> (instead of 249) for the same number of lock operations. The improvement
> in busy locks is significantly higher on sites with more connection
> threads and more traffic.
>
> However, in some other applications the write ratio might be different,
> and there might not always be improvements, as the tests below show.
>
> In our little test, we create 20 background threads busily reading and
> writing nsv variables, while we measure a foreground task hammering on the
> very same nsv variables (with different mixes of background tasks and
> writer percentages).
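>
> A rough, self-contained sketch of such a micro-benchmark in plain C with
> POSIX threads (the real test uses nsv variables; the thread count,
> iteration count, and writer percentage below are made-up parameters):
>
>     #include <pthread.h>
>     #include <stdatomic.h>
>     #include <stdint.h>
>     #include <stdio.h>
>     #include <time.h>
>
>     #define NUM_BG   20        /* background threads, as in the test */
>     #define FG_ITERS 1000000   /* foreground iterations (made-up number) */
>
>     static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
>     static long shared = 0;            /* stands in for an nsv variable */
>     static atomic_int running = 1;
>     static int write_pct = 10;         /* background writer percentage */
>
>     static void *background(void *arg) {
>         unsigned int seed = (unsigned int)(uintptr_t)arg;
>
>         while (atomic_load(&running)) {
>             seed = seed * 1103515245u + 12345u;    /* simple LCG */
>             if ((seed >> 16) % 100 < (unsigned)write_pct) {
>                 pthread_rwlock_wrlock(&lock);
>                 shared++;
>                 pthread_rwlock_unlock(&lock);
>             } else {
>                 long v;
>
>                 pthread_rwlock_rdlock(&lock);
>                 v = shared;
>                 pthread_rwlock_unlock(&lock);
>                 (void)v;
>             }
>         }
>         return NULL;
>     }
>
>     int main(void) {
>         pthread_t bg[NUM_BG];
>         struct timespec t0, t1;
>
>         for (int i = 0; i < NUM_BG; i++) {
>             pthread_create(&bg[i], NULL, background,
>                            (void *)(uintptr_t)(i + 1));
>         }
>         clock_gettime(CLOCK_MONOTONIC, &t0);
>         for (int i = 0; i < FG_ITERS; i++) {   /* foreground: readers only */
>             long v;
>
>             pthread_rwlock_rdlock(&lock);
>             v = shared;
>             pthread_rwlock_unlock(&lock);
>             (void)v;
>         }
>         clock_gettime(CLOCK_MONOTONIC, &t1);
>         atomic_store(&running, 0);
>         for (int i = 0; i < NUM_BG; i++) {
>             pthread_join(bg[i], NULL);
>         }
>         printf("foreground time: %.3f s\n",
>                (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
>         return 0;
>     }
>
> Compile with "cc -O2 -pthread"; varying write_pct and swapping the rwlock
> for a mutex gives numbers comparable in spirit to the charts below.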
>
> The first chart shows results under macOS, relative to the same task using
> mutex locks instead of rwlocks:
>
> [chart: runtime with rwlocks relative to mutex locks, macOS]
>
> In the first three columns, we see the performance without background
> traffic (i.e., without concurrency). The performance is measured relative
> to the same application using mutex locks (lower numbers are better). We
> see that even without concurrency, rwlocks lead to better results (a
> runtime improvement of about 25%). The next three columns show results
> with 20 background tasks just reading busily from the nsv variables. The
> version with rwlocks is faster by a factor of 10 in the best case (just
> foreground and background readers). But as one can see, the performance
> benefit shrinks as more write operations are performed. The last three
> columns show the situation when the background tasks perform 50% read and
> 50% write operations.
>
> When we consider these values in combination with the statistics from
> openacs.org, we see that we are there essentially in the sweet spot where
> rwlocks shine (with practically no writers).
>
> The situation on Linux is different, with less positive results (macOS
> seems to have a very good implementation of POSIX rwlocks). While in the
> best case the improvement on Linux is just a factor of 5 over mutex locks,
> in the worst cases the performance is nearly 3 times worse.
>
> [chart: runtime with rwlocks relative to mutex locks, Linux]
>
> One should also note that these values were achieved with a conservative
> setting of the rwlocks, which prevents writer starvation (the results
> would have been better without it).
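>
> On Linux with glibc, such a preference can be selected via a non-portable
> lock attribute; the following only illustrates that knob, and is not
> necessarily how NaviServer configures its rwlocks:
>
>     #define _GNU_SOURCE        /* for pthread_rwlockattr_setkind_np() */
>     #include <pthread.h>
>
>     static pthread_rwlock_t lock;
>
>     static void init_writer_friendly_rwlock(void) {
>         pthread_rwlockattr_t attr;
>
>         pthread_rwlockattr_init(&attr);
>         /* Prefer writers, so a steady stream of readers cannot starve
>          * them; the glibc default favors readers, which yields higher
>          * read throughput but can starve writers. */
>         pthread_rwlockattr_setkind_np(&attr,
>                 PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
>         pthread_rwlock_init(&lock, &attr);
>         pthread_rwlockattr_destroy(&attr);
>     }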
>
> Since on sites like openacs.org (and even more on the high-traffic sites
> of our e-learning environment) we are always in the range of the first
> five bars, using rwlocks is a big improvement. The improvement will be
> even higher in situations where the rwlocks protect more compute-intensive
> operations, or when more cores are available, etc.
>
> However, there might be NaviServer applications with nsvs out there for
> which switching to rwlocks for nsv variables might reduce performance. So,
> in general, we have the following options for the forthcoming release:
>
> a) hardwire nsvs to rwlocks
> b) make it a compile-time decision to choose between rwlocks and mutex
> locks for nsvs
> c) provide a configuration variable in the config file to choose between
> rwlocks and mutex locks for nsvs at startup
> d) provide a runtime API for creating nsv arrays with rwlock or mutex
>
> Currently, we have essentially (b) in the repository. (c) seems feasible
> with moderate implementation effort. (d) would be a larger project.
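>
> Just to make (b) concrete, a compile-time switch could look roughly like
> the sketch below; the macro name NSV_USE_RWLOCKS and the wrapper names are
> hypothetical, not the identifiers actually used in the repository:
>
>     #include <pthread.h>
>
>     /* Hypothetical build-time switch, e.g. compile with
>      * -DNSV_USE_RWLOCKS; the actual option in the NaviServer sources
>      * may be named differently. */
>     #ifdef NSV_USE_RWLOCKS
>     typedef pthread_rwlock_t NsvLock;
>     # define NsvLockInit(l)    pthread_rwlock_init((l), NULL)
>     # define NsvLockRead(l)    pthread_rwlock_rdlock(l)
>     # define NsvLockWrite(l)   pthread_rwlock_wrlock(l)
>     # define NsvLockUnlock(l)  pthread_rwlock_unlock(l)
>     #else
>     typedef pthread_mutex_t NsvLock;
>     # define NsvLockInit(l)    pthread_mutex_init((l), NULL)
>     # define NsvLockRead(l)    pthread_mutex_lock(l)  /* readers serialize */
>     # define NsvLockWrite(l)   pthread_mutex_lock(l)
>     # define NsvLockUnlock(l)  pthread_mutex_unlock(l)
>     #endif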
>
> What do you think? My approach would be to leave the code with (b) and
> consider (c) in the future, when necessary ... unless someone convinces me
> that more control is essential now.
>
> -g
>
> PS: Hardware used:
> Linux machine: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz, 10 cores +
> hyperthreading, Fedora release 32.
> macOS machine: 2.4 GHz Intel Core i9, Darwin Kernel Version 18.7.0:
> Mon Apr 27 20:09:39 PDT 2020; root:xnu-4903.278.35~1/RELEASE_X86_64 x86_64
>
>
_______________________________________________
naviserver-devel mailing list
naviserver-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/naviserver-devel
