On Nov 4, 2007, at 3:07 PM, Christopher Layne wrote:
The issue in itself is having multiple threads monitor the *same* fd via any kind of wait mechanism. It short-circuits application layers so that a thread (*any* thread in that pool) can immediately process new data. I think it would be more structured, less complex (and likely better performance in the long run anyway), and a cleaner design to have a set number of threads (or even one) handle the "controller" task of tending to new network events, push them onto a per-connection PDU queue (or pre-process them in some form or fashion), signal a condition variable, and let the previously mentioned thread pool handle them in an ordered fashion.
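(For illustration, a minimal sketch of that controller/work-queue model, assuming POSIX threads; the pdu_t type, the queue fields, and process_pdu() are hypothetical names, not anyone's actual code:)

/* Controller thread enqueues PDUs and signals; worker threads wait and
 * process them in order. */
#include <pthread.h>
#include <stdlib.h>

typedef struct pdu {
    struct pdu *next;
    char        data[1500];
    size_t      len;
} pdu_t;

static pdu_t          *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  queue_cond = PTHREAD_COND_INITIALIZER;

/* Controller: called when the listening fd is readable; enqueue and signal. */
void enqueue_pdu(pdu_t *p)
{
    pthread_mutex_lock(&queue_lock);
    p->next = NULL;
    if (queue_tail)
        queue_tail->next = p;
    else
        queue_head = p;
    queue_tail = p;
    pthread_cond_signal(&queue_cond);   /* the "signal a condition variable" step */
    pthread_mutex_unlock(&queue_lock);
}

/* Worker: block until the controller hands over a PDU, then process it. */
void *worker(void *arg)
{
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (queue_head == NULL)
            pthread_cond_wait(&queue_cond, &queue_lock);
        pdu_t *p = queue_head;
        queue_head = p->next;
        if (queue_head == NULL)
            queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);
        /* process_pdu(p);  -- hypothetical handler */
        free(p);
    }
    return NULL;
}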

You've just pretty accurately described my initial implementation of thread support in memcached. It worked, but it was both more CPU-intensive and had higher response latency (yes, I actually measured it) than the model I'm using now. The only practical downside of my current implementation is that when there is only one UDP packet waiting to be processed, some CPU time is wasted on the threads that don't end up winning the race to read it. But those threads were idle at that instant anyway (or they wouldn't have been in a position to wake up) so, according to my benchmarking, there doesn't turn out to be an impact on latency. And though I am wasting CPU cycles, my total CPU consumption still ends up being lower than passing messages around between threads.
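(For comparison, a minimal sketch of the model described in the paragraph above: every worker thread runs its own libevent loop with a read event on the same non-blocking UDP socket, so whichever idle thread wakes first wins the race to read the packet. This assumes libevent's event_set/event_base API; worker_loop(), on_udp_readable(), and handle_request() are hypothetical names, not memcached's actual code. Error handling is omitted.)

#include <event.h>
#include <sys/types.h>
#include <sys/socket.h>

static void on_udp_readable(int fd, short what, void *arg)
{
    char buf[1500];
    struct sockaddr_storage from;
    socklen_t fromlen = sizeof(from);

    /* Non-blocking socket: a losing thread just gets EAGAIN here and goes
     * back to sleep in its own event loop. */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                         (struct sockaddr *)&from, &fromlen);
    if (n > 0) {
        /* handle_request(buf, n, &from, fromlen);  -- hypothetical handler */
    }
}

static void *worker_loop(void *arg)
{
    int udp_fd = *(int *)arg;             /* the shared, non-blocking UDP socket */
    struct event_base *base = event_base_new();
    struct event ev;

    /* Every thread registers a persistent read event on the same fd. */
    event_set(&ev, udp_fd, EV_READ | EV_PERSIST, on_udp_readable, NULL);
    event_base_set(base, &ev);
    event_add(&ev, NULL);

    event_base_dispatch(base);            /* each thread blocks in its own loop */
    return NULL;
}

(Each worker would be started with pthread_create() and handed the same fd; there is no inter-thread message passing at all, which is where the CPU savings come from.)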

It wasn't what I expected; I was fully confident at first that the thread-pool, work-queue model would be the way to go, since it's one I've implemented in many applications in the past. But the numbers said otherwise.

-Steve
