On Wed 13 Aug 2014 04:19:19 PM EDT, Grant Edwards wrote:
> On 2014-08-13, Alec Ten Harmsel <a...@alectenharmsel.com> wrote:
>
>>> I may have to stick with sockets when I want to block until some event
>>> happens.
>>
>> To be clear, do you want to block or sleep/yield until an event
>> happens?
>
> I don't see the difference -- isn't that what a blocking call does:
> sleep/yield until some event happens?

My bad, got a little confused. Trying to think a little too hard ;) 
Also, like Mr. McKinnon mentioned above, I'm a little out of my depth as 
well, although I have written some fancy multi-threaded computer vision 
code before.

>> I'm sorry for not being too helpful. Just one last question: Can you
>> describe what exactly your code is supposed to do, or is it something
>> that you can't talk about because it's a work thing? I don't care
>> either way, but I'm just curious because it seems you need to
>> optimize quite a bit.
>
> One process implements a communications protocol that is maintaining
> communications links with a handful of slave devices connected to
> serial ports running at baud rates up to 230400 baud (38400 is the
> most common).  There are typically one (or maybe two) hundred messages
> per second being exchanged with each slave device.  I'll call that
> process the server.

Alright, following you so far.

> There are other client processes that want to access the slaves and
> the information being received from them. Some of the clients just
> want to do low-frequency transactions for configuration/diagnostic
> purposes, and Unix domain sockets work fine for that.

Seems legit.

> Other clients may want to wake up every time a certain high frequency
> event happens.  That's where I'm trying to use condition variables.

I think you should step away from the fancy decoupling and process the 
high-frequency events in a separate thread in the server; if passing 
them over sockets is too much overhead, I don't see any other way to do 
this. I don't want to be overly critical, but if the whole point of the 
server process is *only* message passing, what does that gain you? It 
might as well do some of the processing too.
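
To make that concrete, here's a rough sketch of what I mean by handling 
the high-frequency events in a worker thread inside the server. The 
event struct, queue size and handler are all invented for illustration, 
so adjust to taste:

/* Sketch: in-process event queue drained by a worker thread. */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_LEN 256

struct event { int type; long payload; };

static struct event queue[QUEUE_LEN];
static int head, tail;                      /* ring buffer indices */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;

/* Called from the serial-port reader when a high-frequency event arrives. */
void push_event(struct event ev)
{
    pthread_mutex_lock(&lock);
    queue[tail] = ev;
    tail = (tail + 1) % QUEUE_LEN;          /* sketch: ignores overflow */
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Worker thread living inside the server process. */
static void *event_worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        struct event ev = queue[head];
        head = (head + 1) % QUEUE_LEN;
        pthread_mutex_unlock(&lock);

        /* process in-process instead of shipping it to a client */
        printf("event type %d\n", ev.type);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, event_worker, NULL);
    push_event((struct event){ .type = 1, .payload = 42 });
    pthread_join(tid, NULL);                /* never returns in this sketch */
    return 0;
}

The serial reader just calls push_event() and the worker wakes up on the 
condition variable; nothing ever crosses a process boundary.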

> Other clients will periodically (typically once every 5-50 ms) want to
> see the most recent copy of a particular received message.  I'm
> thinking about using shared memory and rwlocks for that, but I haven't
> figured out how to deal with the case where a process aborts while
> holding a lock.

Assuming there aren't too many different message types, you could cache 
the most recent message of each type in a hash map in the server 
process. Retrieving the most recent copy then becomes a plain query to 
the server, and the clients never have to touch a lock.
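
Something like this is what I had in mind; all the names and sizes are 
invented, and an array indexed by type stands in for the hash map, but 
the idea is the same:

/* Sketch: server-side cache of the latest message per type. */
#include <pthread.h>
#include <stddef.h>

#define MSG_TYPES 64
#define MSG_LEN   128

struct msg { int type; size_t len; unsigned char data[MSG_LEN]; };

static struct msg latest[MSG_TYPES];        /* one slot per message type */
static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Called by the serial reader whenever a message arrives.
 * Sketch: assumes 0 <= m->type < MSG_TYPES. */
void cache_update(const struct msg *m)
{
    pthread_rwlock_wrlock(&cache_lock);
    latest[m->type] = *m;
    pthread_rwlock_unlock(&cache_lock);
}

/* Called by the socket handler when a client asks for the latest copy. */
int cache_lookup(int type, struct msg *out)
{
    pthread_rwlock_rdlock(&cache_lock);
    *out = latest[type];
    pthread_rwlock_unlock(&cache_lock);
    return out->len != 0;                   /* 0 => nothing cached yet */
}

Since the rwlock lives entirely inside the server, a client that dies 
mid-request can never leave it held, which sidesteps the 
abort-while-holding-a-lock problem you mentioned.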

In summary, I think you should process the high-frequency messages in 
the server process, cache the most recent message of each type there, 
and keep one client for polling the most recent messages and another 
for the occasional debugging/configuration tasks.

IIRC, that's roughly what made nginx so much more performant than 
Apache initially; instead of launching a new thread for every request 
(Apache did/does this) or doing some sort of fancy message passing, 
nginx pushes requests into a queue in shared memory and a thread pool 
processes the queue.

Alec
