>> Thanks for the thorough explanation. I'm actually developing a fast socket
>> server that will use a storage backend library to store thousands of
>> millions of "small objects" (basically less than 1K each) in a large,
>> optimized hash table. This storage layer maintains a large object cache in
>> memory (so as not to hit the disk again on the next read of the same
>> object), and this memory cache is not shareable across processes.
> 
> If it isn't shareable across processes (using, well, shared memory), then
> it's not shareable between threads either, methinks. The difference is
> simply that threads force shared memory for everything.

It's a simple in-memory malloc-ed cache; no shared memory is involved
(i.e. no shmget()/shm_open() and the like).
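
For context, the cache is roughly shaped like the sketch below -- a
hypothetical, simplified version for illustration, not the storage
library's actual API (eviction, size limits and locking are omitted).
Since it is just malloc-ed memory, all threads of one process see it
automatically, while separate processes would each end up with their own
copy:

/* Hypothetical sketch of a per-process, malloc-based object cache.
 * Threads of one process share this heap for free; separate processes
 * would each hold their own copy unless it were moved into shared
 * memory. Eviction, size limits and locking are left out. */
#include <stdlib.h>
#include <string.h>

#define CACHE_BUCKETS 65536

struct cache_entry {
    char               *key;
    void               *data;   /* the "small object", < 1K */
    size_t              len;
    struct cache_entry *next;   /* chaining on hash collisions */
};

static struct cache_entry *cache[CACHE_BUCKETS];

static unsigned hash_key(const char *key)
{
    unsigned h = 5381;                      /* djb2-style string hash */
    while (*key)
        h = h * 33 + (unsigned char)*key++;
    return h % CACHE_BUCKETS;
}

/* returns the cached object or NULL on a miss (caller then hits the disk) */
static void *cache_get(const char *key, size_t *len)
{
    for (struct cache_entry *e = cache[hash_key(key)]; e; e = e->next)
        if (strcmp(e->key, key) == 0) {
            *len = e->len;
            return e->data;
        }
    return NULL;
}

static void cache_put(const char *key, const void *data, size_t len)
{
    unsigned h = hash_key(key);
    struct cache_entry *e = malloc(sizeof *e);

    e->key  = strdup(key);
    e->data = malloc(len);
    memcpy(e->data, data, len);
    e->len  = len;
    e->next = cache[h];
    cache[h] = e;
}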

> Now, threads will likely be slower, but easier, since pthreads offers
> portable and widely-implemented inter-thread locking, while e.g.
> POSIX/pthread inter-process locking is not as widely implemented.
> 
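For reference, the gap is mostly PTHREAD_PROCESS_SHARED: a plain pthread
mutex locks between threads of one process out of the box, while sharing
one across processes means placing it in shared memory and relying on
pthread_mutexattr_setpshared() being supported on the platform. A rough
sketch, with error handling omitted and the helper name made up:

#include <pthread.h>
#include <sys/mman.h>

/* Inter-thread locking: a plain mutex anywhere in the process is enough. */
static pthread_mutex_t thread_lock = PTHREAD_MUTEX_INITIALIZER;

/* Inter-process locking: the mutex has to live in shared memory and be
 * marked PTHREAD_PROCESS_SHARED; support for that attribute is the part
 * that is not as widely implemented. */
static pthread_mutex_t *make_process_shared_mutex(void)
{
    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);

    return m;   /* visible to children created with fork() after this call */
}
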
>> FYI I followed your previous advice and already migrated to a lightweight
>> in-memory queue + simple pthread mutex + ev_async_send(), instead of
>> using pipes. The performance gain is substantial (I went from 15 kreq/s to
>> 19-20 kreq/s, i.e. more than a 20% improvement), and it seems the whole
> 
> That's great to hear :)
> 
>> server uses less CPU in general (fewer system calls and kernel/user-space
>> switches, I guess). I've tried playing with thread/core affinity, but no
>> conclusive result so far.
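
For anyone following along, the pattern boils down to the sketch below.
Only the libev and pthread calls are real; the job type and function names
are made up for illustration. Worker threads push onto a mutex-protected
list and call ev_async_send(), which is safe to call from other threads;
because libev may coalesce several sends into a single callback, the
callback drains the whole list each time:

#include <ev.h>
#include <pthread.h>
#include <stdlib.h>

/* hypothetical job type -- the real server's request/response data differs */
struct job { struct job *next; /* ... */ };

static struct job      *queue_head;     /* singly-linked list used as the
                                           queue (LIFO in this sketch)    */
static pthread_mutex_t  queue_lock = PTHREAD_MUTEX_INITIALIZER;
static ev_async         queue_notify;   /* wakes the event loop           */
static struct ev_loop  *main_loop;

/* worker thread side: enqueue a job and wake the loop, no pipe involved */
static void queue_push(struct job *j)
{
    pthread_mutex_lock(&queue_lock);
    j->next = queue_head;
    queue_head = j;
    pthread_mutex_unlock(&queue_lock);

    ev_async_send(main_loop, &queue_notify);   /* thread-safe loop wakeup */
}

/* event-loop side: runs after one or more ev_async_send() calls */
static void queue_drain_cb(struct ev_loop *loop, ev_async *w, int revents)
{
    pthread_mutex_lock(&queue_lock);
    struct job *j = queue_head;                /* grab the whole list ... */
    queue_head = NULL;
    pthread_mutex_unlock(&queue_lock);

    while (j) {                                /* ... and drain it here   */
        struct job *next = j->next;
        /* process the job, write the response, etc. */
        free(j);
        j = next;
    }
}

int main(void)
{
    main_loop = EV_DEFAULT;
    ev_async_init(&queue_notify, queue_drain_cb);
    ev_async_start(main_loop, &queue_notify);
    /* ... spawn worker threads and set up socket watchers here ... */
    ev_run(main_loop, 0);
    return 0;
}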
> 
> As long as your workers do little to shared (or unshared) data structures
> and do not change machine state too often, threads shouldn't do badly at
> all, and will be about as difficult as, or easier than, using processes.


Pierre-Yves





