On 10/8/19 4:55 PM, William Brown wrote:
Hi everyone,

In our previous catch-up (about four or five weeks ago, when I was visiting Matus/Simon), we talked about nunc-stans and getting it at least cleaned up and into the code base.

I've been looking at it again, and really thinking about it and reflecting on 
it and I have a lot of questions and ideas now.

The main question is *why* do we want it merged?

Is it performance? Recently I provided a patch that yielded roughly a 30% speed-up in overall server throughput just by changing our existing connection code.
Is it features? What features do we want from this? We have no complaints about our current threading model and thread allocations.
Is it the maximum number of connections? We can always change the conntable to a better data structure that would help scale this number higher (which would also yield a performance gain).

It was mostly about the c10k problem: trying to figure out a way to use epoll, via an event framework like libevent, libev, or libtevent, but in a multi-threaded way (at the time, none of those were really thread-safe, or suitable for use in the way we do multi-threading in 389).
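For context, here is a minimal sketch (not 389 or nunc-stans code, and only one of several possible shapes) of the kind of multi-threaded epoll usage that work was aiming at: each worker thread runs its own epoll loop over a shared listening socket. It assumes Linux, names like worker_loop are purely illustrative, and error handling is omitted.

#include <sys/epoll.h>
#include <sys/socket.h>
#include <pthread.h>

#define MAX_EVENTS 64

struct worker {
    int epfd;      /* each worker owns its own epoll instance */
    int listen_fd; /* listening socket shared by all workers */
};

static void *worker_loop(void *arg)
{
    struct worker *w = arg;
    struct epoll_event ev = {0};

    /* Every worker watches the same listening socket.  With
     * EPOLLEXCLUSIVE (Linux >= 4.5) only one thread is woken per
     * incoming connection, avoiding the thundering herd. */
    ev.events = EPOLLIN | EPOLLEXCLUSIVE;
    ev.data.fd = w->listen_fd;
    epoll_ctl(w->epfd, EPOLL_CTL_ADD, w->listen_fd, &ev);

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(w->epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == w->listen_fd) {
                /* New connection: keep it on this worker's epoll set. */
                int conn = accept(w->listen_fd, NULL, NULL);
                if (conn >= 0) {
                    struct epoll_event cev = {0};
                    cev.events = EPOLLIN;
                    cev.data.fd = conn;
                    epoll_ctl(w->epfd, EPOLL_CTL_ADD, conn, &cev);
                }
            } else {
                /* Readiness on an accepted connection: read the
                 * operation here, or hand it off to a work queue. */
            }
        }
    }
    return NULL;
}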

It wasn't about performance, although I hoped that using lock-free data structures might solve some of the performance issues around thread contention, and that using a "proper" event framework might give us some performance boost, e.g. doing the idle-connection processing with libevent timeouts. I think poll() is never going to scale as well as epoll() in some cases, e.g. lots of concurrent connections, no matter what sort of data structure you use for the conntable.
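As an illustration of the libevent-timeout idea, here is a minimal sketch (assuming libevent 2.x; not actual nunc-stans code, and the conn struct and function names are made up) of arming a per-connection idle timer instead of sweeping every connection from a poll() loop:

#include <event2/event.h>
#include <sys/time.h>
#include <stdio.h>

struct conn {
    int fd;
    struct event *idle_ev; /* fires if the connection sits idle too long */
};

static void
idle_cb(evutil_socket_t fd, short what, void *arg)
{
    struct conn *c = arg;
    (void)fd;
    (void)what;
    /* Close or flag the connection; a real server would need whatever
     * locking the connection structure requires. */
    printf("connection fd=%d hit its idle timeout\n", c->fd);
}

/* Arm (or re-arm) the idle timer; call when the connection is created
 * and again whenever an operation completes. */
static void
conn_arm_idle_timer(struct event_base *base, struct conn *c, int idle_secs)
{
    struct timeval tv = { idle_secs, 0 };
    if (c->idle_ev == NULL) {
        /* fd -1 and no EV_READ/EV_WRITE: a pure timeout event. */
        c->idle_ev = event_new(base, -1, 0, idle_cb, c);
    }
    /* Re-adding a pending timeout event reschedules it, so activity on
     * the connection simply pushes the deadline out. */
    event_add(c->idle_ev, &tv);
}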

As far as features go, it would be nice to give plugins the ability to inject event requests and receive timeout events using the same framework as the main server engine.
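To make that concrete, a plugin-facing interface could look something like the following purely hypothetical sketch; none of these names exist in 389 today, it is just the shape of API being described:

/* Hypothetical sketch only -- these types and functions do not exist
 * in 389; they just show what a plugin-facing event API might offer. */

typedef void (*ds_plugin_event_fn)(void *arg);
typedef struct ds_plugin_event ds_plugin_event_t; /* opaque handle */

/* Run fn(arg) on the server's event framework after delay_secs seconds. */
ds_plugin_event_t *ds_plugin_event_schedule(ds_plugin_event_fn fn,
                                            void *arg,
                                            int delay_secs);

/* Run fn(arg) every interval_secs seconds until cancelled. */
ds_plugin_event_t *ds_plugin_event_repeat(ds_plugin_event_fn fn,
                                          void *arg,
                                          int interval_secs);

/* Cancel a pending or repeating event. */
void ds_plugin_event_cancel(ds_plugin_event_t *ev);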



The more I have looked at the code (I guess with time and experience), the more hesitant I am to actually commit to merging it. It was designed by people who did not understand low-level concurrency issues and the memory architectures of systems,

I resemble that remark. I suppose you could "turn off" the lock-free code and use mutexes.

so it's had a huge number of (difficult and subtle) safety issues. And while most of those are fixed, what it does is duplicate the connection structure from core 389,

It was supposed to eventually replace the connection code.

leading to weird solutions like lock sharing, monitors, and more. We've tried a few times to push forward with this, but each time we end up with a lot of complexity and fragility.


So I'm currently thinking a better idea is to step back, re-evaluate what problem we are actually trying to solve, and then solve *that*.

The question now is "what is the concern that ns would solve?" Once we know that, we can make a plan and approach it more constructively, I think.

I agree.  There are probably better ways to solve the problems now.


At the end of the day, I'm questioning if we should just rm -r src/nunc-stans 
and rethink this whole approach - there are just too many architectural flaws 
and limitations in ns that are causing us headaches.

Ideas and thoughts?

--
Sincerely,

William
_______________________________________________
389-devel mailing list -- 389-devel@lists.fedoraproject.org
To unsubscribe send an email to 389-devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/389-devel@lists.fedoraproject.org
