On 2013-09-12 12:31, Paul J Stevens wrote:
On 09/12/2013 07:51 AM, Thomas Raschbacher wrote:
Dunno about your change, but 0.2 sec seems like a long time for lots of
concurrent clients.

The 0.2 second interval is not affected by the number of concurrent
clients. No matter how many clients are connected, the main thread checks
at that interval to see whether any worker threads have queued data for
clients. During such a check, all pending messages on the queue are
handled. But I have some ideas to make it faster without putting the main
thread in a CPU-soaking timeout loop.
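
For clarity, a minimal sketch of the polling pattern described above,
using the libevent 2.x API dbmail already builds on; the queue type and
queue_drain() helper are hypothetical stand-ins for dbmail's actual
internals, not its real code:

#include <sys/time.h>
#include <event2/event.h>

struct queue;                        /* hypothetical worker->main queue   */
void queue_drain(struct queue *q);   /* hypothetical: handle pending msgs */

static void poll_queue_cb(evutil_socket_t fd, short what, void *arg)
{
    (void)fd; (void)what;
    /* Drain everything the worker threads queued since the last tick. */
    queue_drain((struct queue *)arg);
}

int run_main_loop(struct queue *q)
{
    struct event_base *base = event_base_new();
    struct timeval tick = { 0, 200000 };  /* the 0.2 second interval */

    /* Persistent timer: fires every 200 ms regardless of client count. */
    struct event *ev = event_new(base, -1, EV_PERSIST, poll_queue_cb, q);
    event_add(ev, &tick);

    event_base_dispatch(base);

    event_free(ev);
    event_base_free(base);
    return 0;
}

One way to avoid the fixed tick entirely is to have workers write a byte
to a pipe whose read end the main loop watches, so the wakeup is
event-driven instead of timed.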

I guess the only way to tell if this is going to work is to do load testing ;)

I just looked at ZeroMQ. Sounds interesting (and parts of it somehow
remind me a bit of the Twisted framework (Python) ^^)

Yep. But then no-one ever really used Twisted because it's too complex.
I know Zope uses it internally, but then besides Plone, who uses Zope
anymore? ZeroMQ on the other hand is very simple and elegant. Wicked stuff.
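
To give a taste of that simplicity, a complete ZeroMQ reply server in C
fits in a screenful; the endpoint, port, and "ack" payload here are
arbitrary examples, not anything from dbmail:

#include <zmq.h>
#include <stdio.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "tcp://*:5555");         /* arbitrary example port */

    for (;;) {
        char buf[256];
        int n = zmq_recv(rep, buf, sizeof(buf) - 1, 0);
        if (n < 0)
            break;                         /* interrupted or context gone */
        if (n > (int)sizeof(buf) - 1)
            n = sizeof(buf) - 1;           /* zmq_recv truncates long msgs */
        buf[n] = '\0';
        printf("received: %s\n", buf);
        zmq_send(rep, "ack", 3, 0);        /* REQ/REP is strict lockstep */
    }

    zmq_close(rep);
    zmq_ctx_destroy(ctx);
    return 0;
}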

Well I did - and do - use Twisted ;)
But I do admit that the learning curve is a bit steep at first (not as bad as Plone though haha)



Are you planning to use that anyway, or is that just a thought you
were playing with?

I did a project using libzdb and ZeroMQ, and loved it. Using it in
dbmail just seems like a good idea, the same way libevent seemed like a
good idea back in 2006.

I am curious how many changes this would require to the code, or whether
the code is currently "modular" enough to replace the recv/send easily.

I have no intention to replace the current client-facing libevent code.
I would mainly use it for 'internal' messaging, where internal
could/would mean multiple instances of a dbmail process. One application
I have been thinking about a lot is user-based sharding.
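
A hedged sketch of what user-based sharding over ZeroMQ might look like,
assuming a fixed list of instance endpoints and a simple hash of the
username; every name here is illustrative, none of it is dbmail code:

#include <zmq.h>
#include <stddef.h>

#define NSHARDS 4

/* Illustrative endpoints, one per dbmail instance. */
static const char *shard_endpoints[NSHARDS] = {
    "tcp://shard0:5555", "tcp://shard1:5555",
    "tcp://shard2:5555", "tcp://shard3:5555",
};

/* djb2 string hash: a stable user -> shard mapping */
static unsigned long hash_user(const char *user)
{
    unsigned long h = 5381;
    while (*user)
        h = h * 33 + (unsigned char)*user++;
    return h;
}

/* Push a message to the instance that owns this user. A real version
 * would keep one connected PUSH socket per shard instead of opening
 * and closing a socket per message. */
int route_to_shard(void *ctx, const char *user, const void *msg, size_t len)
{
    void *push = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(push, shard_endpoints[hash_user(user) % NSHARDS]);
    int rc = zmq_send(push, msg, len, 0);
    zmq_close(push);
    return rc;
}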

But it would entail a massive refactoring of a lot of the code. Really
not a soon-to-happen thing.

ok thought so too ;)