Fwiw, with edge-triggered mode, you don't need to do any such futzing.

On May 2, 2009 7:34 PM, "Clint Webb" <webb.cl...@gmail.com> wrote:

I am using libevent for a high performance messaging system and am currently
doing some cleanup of the code.
In looking at the way I use libevent I'm wondering if I am doing it right.

Essentially I have a daemon that is a managed message queue, and it receives
connections from a large number of nodes.  Once a node is connected, it
generally stays connected.  The nodes send messages that need to be
delivered to either one of a load-balanced set, a single node, or all nodes
of a group.  So therefore, all nodes are set with EV_READ | EV_PERSIST.

All nodes are in non-blocking mode.  When data needs to be sent to a node,
it does a send straight away.  If it manages to send everything, all is
good; but if it sends only part of the data, or the send returns
EWOULDBLOCK, it then needs to change the event to add EV_WRITE, so that it
can send the rest of the data when the socket is ready for more writes.

I have been doing this by using event_del to delete the event, then I was
using event_set to change the event to EV_READ | EV_WRITE | EV_PERSIST, and
then I event_add it back into the loop.

When all data has been sent, it does the same thing again, but removes
EV_WRITE.

Is this how other people do it, or am I doing some unnecessary things?

-- 
"Be excellent to each other"

_______________________________________________
Libevent-users mailing list
Libevent-users@monkey.org
http://monkeymail.org/mailman/listinfo/libevent-users