The traditional way to deal with this issue is to have your schedulers share
state between themselves. It works something like this:

1) The worker talks to a scheduler and performs an operation. The scheduler
hands back a request ID (or similar) and writes it to the database.
    worker -> scheduler.doSomething()
    scheduler -> worker: "ok! This is request ID 5."
    scheduler now writes "doing something for request ID 5" to the database.
2) When the worker needs to talk to a scheduler again, it hands the request
ID back.
    worker -> scheduler.doMore(request ID = 5)
    scheduler looks up request ID 5 in the database and finds the
information it needs
    scheduler -> worker: "ok!"

It doesn't matter if you get different schedulers in steps 1 and 2, because
the state is shared between them via the database. The database becomes
your bottleneck, but databases are fast, and for most real services this
won't be a problem. You can use Redis, Postgres, etcd, whatever, as long as
all of your scheduler instances share the same state.
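A minimal sketch of the two steps above, with an in-memory dict standing in
for the shared database (the class and method names are just illustrations
of the pattern; in a real deployment these would be network calls to Redis,
Postgres, etc.):

```python
import itertools

# Stand-in for the shared database (Redis, Postgres, etcd, ...).
# In a real deployment, every scheduler instance talks to the same store.
shared_store = {}
request_ids = itertools.count(1)

class Scheduler:
    """Any instance can serve any request, because all per-request
    state lives in the shared store rather than in the instance."""

    def do_something(self, payload):
        # Step 1: allocate a request ID and persist the state.
        request_id = next(request_ids)
        shared_store[request_id] = {"payload": payload,
                                    "status": "doing something"}
        return request_id  # "ok! This is request ID 5."

    def do_more(self, request_id):
        # Step 2: a *different* instance looks the request up by ID.
        state = shared_store[request_id]
        state["status"] = "done"
        return "ok!"

# The worker hits two different scheduler instances, and it still works.
request_id = Scheduler().do_something("build job")
result = Scheduler().do_more(request_id)
```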

If you want persistent connections to specific schedulers, just use DNS.
Have "scheduler" resolve to a set of different IP addresses; each of your
workers will then resolve that name to a random scheduler. If a worker
holds onto the IP address after resolving, it can consistently talk to the
same scheduler. However, this approach adds more failure modes and ends up
being more painful to manage long term.
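A sketch of the DNS approach: resolve the service name once, pin one of the
returned addresses, and reuse it instead of re-resolving. Here "localhost"
stands in for the "scheduler" service name, which in a real deployment
would resolve to several scheduler IPs:

```python
import socket

def pick_scheduler(hostname, port):
    """Resolve the service name and pin a single address.

    DNS hands back the full set of records for the name; holding on to
    one of them keeps the worker talking to the same scheduler instead
    of landing on a random instance at every reconnect."""
    infos = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    addresses = [info[4][0] for info in infos]
    # Pin the first address returned; reconnects reuse this value
    # rather than re-resolving the name.
    return addresses[0]

# "localhost" stands in for the "scheduler" service name.
pinned = pick_scheduler("localhost", 5000)
```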

On Mon, Jul 3, 2017 at 6:57 AM, <picfl...@web.de> wrote:

> Hi,
>
> my setup looks like this, where the number of instances is set up using
> docker swarm scaling:
> https://docs.docker.com/engine/swarm/swarm-tutorial/scale-service/
> Redis <-> db_instances <-> scheduler <-> worker
>
> Every instance runs inside a docker container, and at the moment there is
> only one redis instance. I intend to move to a redis cluster setup at some
> point. I am using a
> db_instances:router <-> scheduler:dealer, scheduler:router
> <-> worker:dealer socket setup.
>
> Currently I connect the dealer sockets like this: tcp://scheduler:5000, and
> docker swarm automatically directs the message to one out of the set of
> scaled scheduler services.
> It is possible to connect directly to one instance of the scheduler,
> tcp://scheduler.1:5000. With that, would it make sense to have one socket
> per connected service instance, and then use an LRU scheduling approach to
> direct my requests?
>
> I hope that my setup became clearer.
>
> Thanks
>
_______________________________________________
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev
