Hi Martin,

>>>> The part that confuses me about this is in a traditional RPC scenario
>>>> you may be calling services that have side effects or that lack
>>>> idempotence, per se.  You'd want the message to be handled by the
>>>> first reachable service provider and no others.  Unless I
>>>> misunderstand what you're saying, it seems counter-intuitive that
>>>> RPC-style messages would queue up rather than failover to alternate
>>>> instances or fail fast when there are no reachable instances?
>>> The question is: how do you know the instance is reachable? You can
>>> never tell before you actually get a reply from it. To get a reply you
>>> have to send a request. That means that at least one message (the
>>> request) is "queued" for delivery to a service of unknown
>>> availability. There's no way to avoid that.
>>
>> I suppose you have the same problem if you are trying to implement
>> RPC-style transactions on top of any queuing technology?  I think I
>> still get confused because the API appears to be socket-oriented as if
>> you are making a direct socket connection, but then there's really a
>> queuing layer in the middle.
>
> The problem exists in any RPC solution, whether there's an explicit
> queue or not.
>
> The issue is that you never know whether a service is available until it
> responds to your request. Thus there's always at least one request "in
> the air". The same would happen with a simple TCP connection.

I think I am missing a subtlety, but if you are using a direct TCP
connection for RPC, wouldn't you at least be able to distinguish the
scenario where you can't connect the socket from the scenario where
the socket breaks after you've started exchanging messages?  In the
scenario where the socket can't be connected to begin with, a client
would be able to fail over to another service provider and attempt a
new connection.  All this assumes there aren't proxies and
what-have-you in between the client and server.  Connection pooling
makes it a little trickier, but I think you would still have a good
idea of when to evict a connection from the pool, because you should
know when the socket is broken.
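
To make what I'm picturing concrete, here's a rough sketch using
plain BSD sockets (not 0MQ); the function name is invented and I'm
passing IPv4 addresses just to keep it short:

/*  Sketch: with raw TCP a failed connect () tells you up front that
    a provider is unreachable, before any request has been sent, so
    you can fail over to the next candidate or fail fast.  Once
    connected, though, a request you've sent is still "in the air"
    if the socket breaks later.  */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int connect_first_reachable (const char **addrs, int naddrs,
    unsigned short port)
{
    int i;
    for (i = 0; i != naddrs; i++) {
        struct sockaddr_in sa;
        int fd = socket (AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        memset (&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons (port);
        if (inet_pton (AF_INET, addrs [i], &sa.sin_addr) == 1 &&
            connect (fd, (struct sockaddr *) &sa, sizeof sa) == 0)
            return fd;     /*  reachable; no request sent yet  */
        close (fd);        /*  can't connect; try the next one  */
    }
    return -1;             /*  fail fast: nothing reachable  */
}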

I guess this is an academic conversation since it's not how 0MQ works,
but I'm just trying to understand why you say there's always at least
one request up in the air.  Are we talking about different layers of
the stack?

>> Specifically as it relates to RFC 2782, there are a couple items off
>> the top of my head that could be addressed (just brainstorming):
>>
>> 1) A translation scheme between RFC 2782-style names (e.g.
>> _zeromq._tcp.example.com 5555) and 0MQ-style address strings (e.g.
>> tcp://example.com:5555)
>
> Ack.
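
Just to make (1) concrete, the mapping I have in mind is a one-liner;
srv_to_zmq_addr is an invented name, not anything that exists today:

#include <stdio.h>

/*  Format an RFC 2782 SRV result (target host plus port) as a
    0MQ-style address string, e.g. ("example.com", 5555) becomes
    "tcp://example.com:5555".  */
static int srv_to_zmq_addr (const char *target, unsigned short port,
    char *buf, size_t size)
{
    int rc = snprintf (buf, size, "tcp://%s:%u", target,
        (unsigned) port);
    return (rc > 0 && (size_t) rc < size) ? 0 : -1;
}
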
>
>> 2) Pros and cons of static registration (RFC 2782) vs. dynamic
>> registration (DNS-SD).
>
> I would say the static registration should be strongly preferred. The
> rationale is that you want to know the location of an entity even though
> it may not be running/online at the moment. In such a case messages can
> be queued and sent once it becomes available.

I think both could be desirable depending on what use cases you are
considering.  I am coming at it from the perspective of having a
"fluid service bus".  Let's say I have one node that is running at
capacity and I need to bring another node online that offers the same
services to help out.  With dynamic registration (e.g. DNS-SD),
clients could discover the new node and begin to load balance
immediately (again in a gentlemanly fashion).  Whereas with a static
DNS model you are at the mercy of TTLs and of having the tools to
publish updated records, which may be just fine for certain
applications.
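
For instance, if I understand the REQ socket semantics right (sketch
below is the 0MQ 2.x C API from memory, with invented endpoints), a
requester connected to several providers already spreads requests
across them, so discovery could boil down to another zmq_connect():

#include <zmq.h>

int main (void)
{
    void *ctx = zmq_init (1);
    void *req = zmq_socket (ctx, ZMQ_REQ);

    /*  Connect to every provider we know about at startup.  */
    zmq_connect (req, "tcp://node1.example.com:5555");

    /*  Later, when DNS-SD (or whatever) announces a new node,
        just add it to the same socket; subsequent requests are
        balanced across all connected peers.  */
    zmq_connect (req, "tcp://node2.example.com:5555");

    zmq_close (req);
    zmq_term (ctx);
    return 0;
}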

There's also perhaps a hybrid model where certain infrastructure
services are statically registered in DNS, but application services
use an application-level registration system.  For example, go to DNS
to locate the "service directory", but the service directory is really
just its own 0MQ application that knows the current topology of the
service bus.  The DNS-SD idea feels a bit more distributed, since
every node potentially replicates the knowledge of the current
topology, but it could be built as a 0MQ application as well.
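
As a straw man (again the 2.x message API from memory; the "where-is:"
query convention is invented purely for illustration), talking to such
a directory would be an ordinary request/reply:

#include <stdio.h>
#include <string.h>
#include <zmq.h>

/*  Ask a hypothetical 0MQ-based "service directory" where a named
    service lives.  'dir' is a REQ socket already connected to the
    directory's well-known (DNS-registered) endpoint.  */
static void lookup_service (void *dir, const char *name)
{
    char text [256];
    zmq_msg_t query, reply;

    snprintf (text, sizeof text, "where-is:%s", name);
    zmq_msg_init_size (&query, strlen (text));
    memcpy (zmq_msg_data (&query), text, strlen (text));
    zmq_send (dir, &query, 0);
    zmq_msg_close (&query);

    zmq_msg_init (&reply);
    zmq_recv (dir, &reply, 0);
    /*  Reply body would be something like "tcp://10.0.0.7:5555",
        ready to hand straight to zmq_connect ().  */
    zmq_msg_close (&reply);
}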

Regardless, my opinion is that all of this sits in a library of
toolkits above 0MQ and perhaps only needs minimal support from the
kernel.

>> 3) Can a client detect that a service is local and switch over to
>> inproc/ipc transport for optimization (or does the 0MQ kernel already
>> attempt this?)
>
> That's an interesting question. No, 0MQ does not do that at the moment.
> But it would be nice if it could. How should it be done? Once again,
> more research is needed.

Yeah, I don't know either; I'm really just throwing it out there.  I
can imagine a scenario where decoupled services need to transact with
one another, but they don't realize they are deployed together on the
same physical node or perhaps even in the same process space.  Perhaps
this is an application/deployment problem to solve, not a kernel
problem, since the basic protocols are already there.
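
If it were solved at the application level, I imagine it could be as
dumb as this hypothetical helper (invented name, and the hostname
comparison is naive on purpose):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/*  Use ipc:// when the advertised host is ourselves, tcp://
    otherwise.  0MQ itself does nothing like this today (per
    Martin above); this only shows the shape of the idea.  */
static void choose_addr (const char *host, unsigned short port,
    const char *ipc_path, char *buf, size_t size)
{
    char self [256];
    if (gethostname (self, sizeof self) == 0 &&
        strcmp (host, self) == 0)
        snprintf (buf, size, "ipc://%s", ipc_path);
    else
        snprintf (buf, size, "tcp://%s:%u", host, (unsigned) port);
}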

Thanks