Hi,

I'll try to address the question about the proxy process.

AFAIK there is no way yet in zmq to bind more than one socket to the same wildcard port (e.g. tcp://*:9501).

Apparently we can:

socket1.bind('tcp://node1:9501')
socket2.bind('tcp://node2:9501')

but we cannot:

socket1.bind('tcp://*:9501')
socket2.bind('tcp://*:9501')
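
For illustration, here is a minimal pyzmq sketch of the failure (the port number and socket types are just examples, not the driver's actual ones):

import zmq

ctx = zmq.Context.instance()

socket1 = ctx.socket(zmq.ROUTER)
socket1.bind('tcp://*:9501')  # the first wildcard bind succeeds

socket2 = ctx.socket(zmq.ROUTER)
socket2.bind('tcp://*:9501')  # raises zmq.ZMQError: Address already in use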

So if we want a fixed, well-known port assigned to the driver, we need a proxy that receives on a single socket and redirects to a number of sockets.

It is normal practice in zmq to do so; there are even some helpers implemented in the library, the so-called 'devices'.
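
As a rough sketch of such a device in pyzmq (the socket types and the backend endpoint here are illustrative, not what the driver actually uses):

import zmq

ctx = zmq.Context.instance()

# the frontend owns the single well-known port that clients connect to
frontend = ctx.socket(zmq.ROUTER)
frontend.bind('tcp://*:9501')

# the backend fans messages out to the local servers over IPC
backend = ctx.socket(zmq.DEALER)
backend.bind('ipc:///tmp/zmq-proxy-backend')

# blocks forever, shuttling messages in both directions
zmq.proxy(frontend, backend)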

Here the performance question is relevant. According to the ZeroMQ documentation [1], "the basic heuristic is to allocate 1 I/O thread in the context for every gigabit per second of data that will be sent and received (aggregated)".
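
In pyzmq that heuristic maps to the io_threads argument of the context; e.g., for roughly 2 Gbps of aggregate traffic:

import zmq

# the default is 1 I/O thread, which is enough for most workloads;
# allocate one thread per gigabit/s of aggregate send+receive traffic
ctx = zmq.Context(io_threads=2)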

The other way is to use bind_to_random_port, but then we need some mechanism to notify clients of the port we are listening on, so it is a more complicated solution.
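
For reference, this is what it looks like in pyzmq (the port would still have to be published to clients by some out-of-band mechanism):

import zmq

ctx = zmq.Context.instance()
socket = ctx.socket(zmq.ROUTER)

# binds to a free port in [min_port, max_port) and returns it
port = socket.bind_to_random_port('tcp://*', min_port=49152, max_port=65536)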

Why run the proxy in a separate process? For the zmq API it doesn't matter whether we communicate between threads (INPROC), between processes (IPC) or between nodes (TCP, PGM and others). Since the proxy needs to run exactly once per node, it is easiest to do so in a separate process: if we put it in a thread of some service, how would we track whether the proxy is already running?
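
A nice side effect is that the well-known port itself can act as the "single proxy per node" lock: a second proxy process simply fails to start. A minimal sketch (the error handling shown is illustrative):

import sys
import zmq

ctx = zmq.Context.instance()
frontend = ctx.socket(zmq.ROUTER)
try:
    frontend.bind('tcp://*:9501')
except zmq.ZMQError as e:
    if e.errno == zmq.EADDRINUSE:
        sys.exit('zmq proxy is already running on this node')
    raise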

Despite having a broker-like instance locally, we still stay brokerless, because there is no central broker node with a queue that we would need to replicate and keep alive. Each node is actually a peer. The broker is not a standalone node, so we cannot say it is a 'single point of failure'; we can consider the local broker part of the server. It is worth noting that IPC communication is much more reliable than real network communication. One more benefit is that the proxy is stateless, so we don't have to bother with managing state (syncing it or having enough memory to keep it).

I'll cite the zmq guide on broker vs. brokerless messaging (4.14 Brokerless Reliability, p. 221):

"It might seem ironic to focus so much on broker-based reliability, when we often explain ØMQ as "brokerless messaging". However, in messaging, as in real life, the middleman is both a burden and a benefit. In practice, *_most messaging architectures benefit from a mix of distributed and brokered messaging_*. "


Thanks,
Oleksii


1 - http://zeromq.org/area:faq#toc7


On 5/26/15 18:57, Davanum Srinivas wrote:
Alec,

Here are the slides:
http://www.slideshare.net/davanum/oslomessaging-new-0mq-driver-proposal

All the 0mq patches to date should be either already merged in trunk
or waiting for review on trunk.

Oleksii, Li Ma,
Can you please address the other questions?

thanks,
Dims

On Tue, May 26, 2015 at 11:43 AM, Alec Hothan (ahothan)
<ahot...@cisco.com> wrote:
Looking at what the next step is following the design summit meeting on
0MQ, as the etherpad does not provide much information.
A few questions:
- would it be possible to make the slides presented (showing the proposed
changes in the 0MQ driver design) available somewhere?
- is there a particular branch in the oslo messaging repo that contains
0MQ-related patches? I'm particularly interested in James Page's
patch to pool the 0MQ connections, but there might be others
- question for Li Ma: are you deploying with the straight upstream 0MQ
driver or with some additional patches?

The per-node proxy process (which is itself some form of broker) needs to
be removed completely if the new solution is to be made really
broker-less. This will also eliminate the only single point of failure in
the path and reduce the number of 0MQ sockets (and hops per message) by
half.

I think it was proposed that we go on with the first draft of the new
driver (which still keeps the proxy server but reduces the number of
sockets) before eventually tackling the removal of the proxy server?



Thanks

   Alec





