Joe,

>>> I think both could be desirable depending on what use cases you are
>>> considering.  I am coming at it from the perspective of having a
>>> "fluid service bus".  Let's say I have one node that is pushing
>>> capacity and I need to bring another node online that offers the same
>>> services to help out.  With a dynamic registration (e.g. DNS-SD),
>>> clients could discover the new node and begin to load balance
>>> immediately (again in a gentlemanly fashion).  Whereas, with a static
>>> DNS model you are at the mercy of TTLs and of having the tools to
>>> publish, which may be just fine for certain applications.
>> Hm. I would assume that "worker" applications would connect rather than
>> bind and thus wouldn't need to register their addresses, no?
> 
> Well, perhaps I have been reading too much into the RPC cookbook
> examples, but this is my current understanding...
> 
> To me, the "service provider" is what binds and once the client has
> resolved that endpoint (either statically or dynamically) it would
> send messages directly to the service provider.  If the service
> provider has "workers" (that connect), that would be an implementation
> detail hidden from view of the clients.
> 
> In the configuration I have in mind, it's less of a problem for
> such workers to discover that binding because I'd most likely
> co-locate them in the same node/process as the service provider (bind)
> and be able to infer/assume an ipc/inproc endpoint for them.
> 
> But if the workers are remote to the service provider, then that could
> make it more difficult to infer the binding so you could have the same
> discovery problem on that end as a client would have.
> 
> For a concrete RPC use case, let's say I have a Catalog service and an
> Order service.  I have a client application that needs to query the
> catalog and place orders in real time, maybe to support a call center
> or something.  I want to be able to deploy the Catalog service and the
> Order service independently of one another.  I also want them to be
> fault tolerant such that I can distribute the services across physical
> nodes.  Taking down one node that offers the Catalog service
> shouldn't, in theory, affect the service level of the Catalog service
> when I have other nodes offering it.
> 
> When I "bring up" a node offering either service, I would like the
> client application to eventually learn of its availability and be able
> to use it.  Maybe the client load balances, maybe it just looks for a
> new service provider when it has lost its current one (perhaps it
> timed out or stopped responding to a heartbeat message or something).
> 
> So back to my conceptual understanding, I could bind the Catalog
> service on a couple of nodes and the Order service on a couple of
> nodes.  My client application would query some registry for the
> current endpoint(s) of the Catalog service and Order service.
> 
> It sounds like you may be suggesting that I'd have a more abstract
> message queue in between the clients and workers and perhaps use some
> detail of the application layer to dispatch the messages instead of
> having a "direct" link between client and service provider?
> 
> (I realize I'm probably overloading/misusing some terms so feel free
> to correct me if it helps get on the same page)

No, you are right. We were just speaking about different scenarios.

You were speaking of the use case where clients connect directly to the 
services, while I had in mind the case where there's a node (a shared 
queue) in the middle.
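
Roughly, in pyzmq, the two setups might look like this (the hostnames, 
ports and registry results below are made-up placeholders).

In your scenario the service binds a REP socket and publishes that 
endpoint, and the client's REQ socket connects to every endpoint the 
registry returned; ZeroMQ then round-robins requests across the 
connected providers:

    import sys
    import zmq

    ctx = zmq.Context.instance()

    if sys.argv[1:] == ["service"]:
        # Service provider: bind, advertise this endpoint, answer forever.
        rep = ctx.socket(zmq.REP)
        rep.bind("tcp://*:5555")
        while True:
            rep.send_string("catalog reply: " + rep.recv_string())
    else:
        # Client: connect to each Catalog endpoint the registry handed
        # back (hypothetical node names). REQ round-robins requests
        # across the peers it is connected to.
        req = ctx.socket(zmq.REQ)
        for ep in ("tcp://node-a:5555", "tcp://node-b:5555"):
            req.connect(ep)
        req.send_string("GET item 42")
        print(req.recv_string())

(For the failover you describe you'd still want timeouts and retries on 
the client side, along the lines of the Lazy Pirate pattern in the guide.)

In the scenario I had in mind there's a small broker in the middle, so 
clients only ever know one stable address, and a new worker node just 
connects to the backend when you need more capacity:

    import zmq

    ctx = zmq.Context.instance()

    # Clients (REQ) connect to this one well-known address.
    frontend = ctx.socket(zmq.ROUTER)
    frontend.bind("tcp://*:5559")

    # Workers (REP) connect here; adding capacity is just starting
    # another worker process, with nothing new to publish to the clients.
    backend = ctx.socket(zmq.DEALER)
    backend.bind("tcp://*:5560")

    # Shuttle messages between the two sockets (blocks forever).
    zmq.proxy(frontend, backend)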

Martin
_______________________________________________
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
