Hi,

I'm trying to build a pub/sub engine on top of jzmq for performing distributed 
computation in cloud environments. Systems have a relatively high chance of 
failure, so I want to ensure reliability.

I was wondering if there was any advice on how to do this in an efficient and 
reliable manner. By efficient, I mean that I don't want to rely on a central 
broker for federating events, because whatever system is running the broker could 
easily get its network card saturated. By reliable, I mean that I want the 
engine to be resilient to individual system failure. So if process P publishes 
an event on a channel that process Q is subscribed to, and Q dies before it 
finishes processing the event, then P should persist the event. At some point 
in time, a process R will be created to take Q's place. It will notify P that 
it is the substitute, and P will send it the event.
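
Concretely, the bookkeeping I have in mind on P's side is something like the
sketch below. All names here are made up (this isn't code I have, just the
persist-until-acknowledged idea), and events are keyed by a stable subscriber
identity rather than a transport address, since R will come up somewhere else:

import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Publisher-side bookkeeping: hold on to every event until the subscriber
// that should process it has acknowledged it.
class PendingEvents {
    // Unacknowledged events per subscriber identity, keyed by a sequence
    // number so an ACK can name exactly which event it covers.
    private final Map<String, Map<Long, byte[]>> pending = new HashMap<>();
    private long nextSeq = 0;

    // P calls this when it sends an event to the subscriber with this identity.
    long recordUnacked(String subscriberId, byte[] event) {
        long seq = nextSeq++;
        pending.computeIfAbsent(subscriberId, k -> new HashMap<>()).put(seq, event);
        return seq;
    }

    // The subscriber (Q) reports the sequence number once it has finished
    // processing; only then does P drop the event.
    void acknowledge(String subscriberId, long seq) {
        Map<Long, byte[]> events = pending.get(subscriberId);
        if (events != null) {
            events.remove(seq);
        }
    }

    // When R announces itself as the substitute for this identity,
    // P replays everything that is still pending.
    Collection<byte[]> eventsToReplay(String subscriberId) {
        Map<Long, byte[]> events = pending.get(subscriberId);
        return events == null ? Collections.<byte[]>emptyList() : events.values();
    }
}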

I could use zmq's PUB/SUB socket types, but they seem to fail the reliability 
clause. From what I'm reading, it appears that I could do many-to-many socket 
connections with them, which means they would not fail the efficiency clause. 
However, unless the Publisher Side Messaging Filtering topic 
(http://www.zeromq.org/topics:new-topics) is out of date, events are filtered 
on the subscriber side, which could be a bottleneck.
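
To make sure I'm describing the same thing everyone else means, here's the
shape of the PUB/SUB setup I'm picturing with jzmq. The endpoint, port, and
channel name are made up, and the two sides would of course live in separate
processes:

import org.zeromq.ZMQ;

public class PubSubSketch {

    // Publisher process P: fire-and-forget. No acknowledgement comes back,
    // so P never learns whether Q processed (or even received) the event.
    static void publisherSide() {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
        pub.bind("tcp://*:5556");
        pub.send("channel.A some-event-payload".getBytes(), 0);
        pub.close();
        ctx.term();
    }

    // Subscriber process Q: subscribes by channel prefix. If filtering is
    // done on the subscriber side, every event still crosses the wire to Q.
    static void subscriberSide() {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket sub = ctx.socket(ZMQ.SUB);
        sub.connect("tcp://localhost:5556");
        sub.subscribe("channel.A".getBytes());
        byte[] event = sub.recv(0);    // blocks until an event arrives
        sub.close();
        ctx.term();
    }
}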

I could also do REP/REQ, where each process runs a REP socket. When a process 
wants to publish, it queries a broker to see who is subscribed to the relevant 
channel, then connects to each of those processes' REP sockets, sends the 
event, and waits for an acknowledgement before moving on. This seems like a 
bad solution because, as far as I can see, events have to be processed 
sequentially, and a process might take a while to handle each one.
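
Again, a rough jzmq sketch of what I mean (endpoints are made up); the
blocking recv() on the REQ socket is the sequential part that worries me:

import org.zeromq.ZMQ;

public class ReqRepSketch {

    // Publisher P: for each subscriber returned by the broker lookup,
    // connect, send the event, and block until that subscriber ACKs.
    static void publishTo(String[] subscriberEndpoints, byte[] event) {
        ZMQ.Context ctx = ZMQ.context(1);
        for (String endpoint : subscriberEndpoints) {
            ZMQ.Socket req = ctx.socket(ZMQ.REQ);
            req.connect(endpoint);
            req.send(event, 0);
            byte[] ack = req.recv(0);  // blocks until this subscriber is done
            req.close();
        }
        ctx.term();
    }

    // Subscriber Q: one REP socket, handles one event at a time.
    static void subscriberLoop(String bindEndpoint) {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket rep = ctx.socket(ZMQ.REP);
        rep.bind(bindEndpoint);
        while (!Thread.currentThread().isInterrupted()) {
            byte[] event = rep.recv(0);        // wait for an event
            // ... process the event (which could take a while) ...
            rep.send("ACK".getBytes(), 0);
        }
        rep.close();
        ctx.term();
    }
}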

Is there anything I'm overlooking or getting wrong? Any advice on what I 
should do?

Thank you for any help!
- yas