[zeromq-dev] "Identity" concept
Hi all, See https://github.com/zeromq/libzmq/issues/805 The "identity" concept has always been confusing and misnamed. This is especially visible when we do authentication. What this concept actually does is define a routing name. I'm not sure what a better name than "identity" would be, but would like to discuss this. -Pieter ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Heartbeating using TCP keepalives
02.01.2014 23:48, Pieter Hintjens kirjoitti: > Seconds is fine for this case but surprising overall since all other > durations in the API are in msec. > > I'm not sure what you mean about backwards compatibility. As it stands, the TCP keepalive intervals are given in seconds on the vast majority of operating systems. If we change it so the values are given in milliseconds instead (meaning that we divide the given value by 1000 before calling setsockopt()), this will break existing apps that set the keepalive intervals as seconds. > > On Thu, Jan 2, 2014 at 7:55 PM, Alex Grönholm > wrote: >> 02.01.2014 15:59, Pieter Hintjens kirjoitti: >>> It makes sense, and I'd try this; the timeout should be in msec, to be >>> consistent with other duration arguments. You can take any of the >>> existing socket options like ZMQ_SNDBUF as a template, and make a pull >>> request. >> Wouldn't it be enough to document that the values are expressed in >> seconds and not ms? >> Who needs sub-second accuracy with keepalives? >> Besides, converting the values on non-Windows systems would break >> backwards compatibility. >> Are you fine with that? This definitely should not be done in a micro >> release. >>> On Mon, Dec 30, 2013 at 11:29 PM, Alex Grönholm >>> wrote: This isn't directly related to ZeroMQ, but it is somewhat relevant now given A) the addition of the (yet unimplemented) heartbeating feature in ZMTP/3.0 and B) the Windows TCP keepalive parameters fix I committed recently. The question is: has someone here used TCP keepalives as a substitute for application level heartbeating? Given the operating model of ZeroMQ, using TCP keepalives for this purpose would transparently shield the user from stale connections. Are there any downsides to this? TCP keepalives, when turned on, use a 2 hour interval by default (this is a de facto standard). This makes them impractical unless the values are adjusted. I've done some research on that. 
From what I've gathered, it seems that setting TCP keepalive parameters on a per-socket level is supported at least on the following operating systems:

Linux
FreeBSD
Windows (since Windows 2000; set only, read not supported; number of keepalive probes is fixed at 10; must be set before connecting; values in milliseconds, not seconds)
Mac OS X (since Mountain Lion)
AIX
Solaris (values in milliseconds, not seconds)

It seems that both iOS and Android support sending TCP keepalives, but setting keepalive parameters is not supported. Note that the Windows TCP keepalive parameters patch takes the time intervals in seconds and multiplies by 1000 on Windows for cross-platform compatibility. There is no similar fix for Solaris yet, so Solaris users need to do it at the application level for now. Setting the keepalive idle and retransmission delay to values like 10 and 5 seconds would make a lot of sense to me. If the peer fails to respond to the probes, zmq will just see a disconnection.
[zeromq-dev] test_filter_ipc failure
Hi, I’m seeing the test_filter_ipc test fail on Mac OS X and SmartOS (Solaris). I see it passes on Linux in the Travis tests. Other tests are passing on both platforms. After digging into the code, I can see that the ipc:// endpoint starts with “@”, which on Linux is converted to a NUL char and has a special meaning: the socket is “abstract” and the name does not appear in the file system. I assume this is Linux-specific, as I haven’t found any equivalent on Mac, BSD or Solaris. If I change the endpoint to “ipc://test_filter_ipc.sock” it works. I also notice that it’s using ZMQ_PAIR sockets, which according to the docs are only for inproc transports, so I’ve changed this to ZMQ_DEALER, which also matches the usage of the “bounce()” test function. I’ve updated the test to not use an abstract socket, in https://github.com/zeromq/libzmq/pull/804 Was there a reason it was done on an abstract socket? Cheers, Matt
Re: [zeromq-dev] Reading queued messages after disconnect
On 3 Jan 2014, at 12:16 am, Pieter Hintjens wrote: > On Wed, Jan 1, 2014 at 11:48 PM, Matt Connolly wrote: > >> Alternately, poll and while readable recv the messages. This is how I’m >> doing it at present. (But the ruby bindings doesn’t let me recv with no >> endpoints, whereas a C program can). > > Interesting. I'm also curious why you would want to do this. It seems > wrong. Instead, you'd terminate a protocol properly with whatever > handshake, and then destroy the socket. One use case I have for this is a simple logging service. It has a PULL socket and receives messages which it writes to disk (among other things). At shutdown time I want to disconnect the socket so that subsequent messages are queued at the sender's PUSH socket (and will be delivered when the machine/service restarts). I want to consume all messages that have been received before shutting down. Certainly, in more complex scenarios where duplex communication between services is important, the services can handshake a shutdown. I’m doing this already with a ROUTER-based service and it works well. Cheers, Matt
Re: [zeromq-dev] Heartbeating using TCP keepalives
On 3 Jan 2014, at 7:48 am, Pieter Hintjens wrote: > Seconds is fine for this case but surprising overall since all other > durations in the API are in msec. > > I'm not sure what you mean about backwards compatibility. > > On Thu, Jan 2, 2014 at 7:55 PM, Alex Grönholm > wrote: >> 02.01.2014 15:59, Pieter Hintjens kirjoitti: >>> It makes sense, and I'd try this; the timeout should be in msec, to be >>> consistent with other duration arguments. You can take any of the >>> existing socket options like ZMQ_SNDBUF as a template, and make a pull >>> request. >> Wouldn't it be enough to document that the values are expressed in >> seconds and not ms? For my two cents, I’d prefer all “time” measurements in an API to be in consistent units. -Matt
Re: [zeromq-dev] libzmq crash closing socket with pending messages
Done. Thanks! AJ On Thu, Jan 02, 2014 at 11:26:36PM +0100, Pieter Hintjens wrote: > For the backports you can do this now. For the master release I'll do > that all at once for the next release. > > On Thu, Jan 2, 2014 at 11:16 PM, AJ Lewis wrote: > > On Thu, Jan 02, 2014 at 10:16:40PM +0100, Pieter Hintjens wrote: > >> Thanks for those pull requests. I merged them. You can update the NEWS > >> as you go along (didn't check if you did that), particularly for > >> backports. > > > > Cool - thanks. I didn't adjust the NEWS. Should I make another pull > > request for that, or will it get adjusted by someone else later? > > > > AJ > > > >> On Thu, Jan 2, 2014 at 5:34 PM, AJ Lewis wrote: > >> > Just a heads up that I'm going to submit pull requests to libzmq, > >> > zeromq3-x, > >> > and zeromq4-x to revert the fix for LIBZMQ-497 in order to fix > >> > LIBZMQ-576. > >> > This means some other solution needs to be found for that problem though > >> > - I > >> > don't have a clear idea of how to do that, but I do know that crashing on > >> > socket close isn't acceptable behavior. > >> > > >> > Thanks, > >> > AJ > >> > > >> > On Wed, Nov 13, 2013 at 06:55:25PM -0600, AJ Lewis wrote: > >> >> Check out https://zeromq.jira.com/browse/LIBZMQ-576 for more info. It > >> >> looks like a previous fix for trying to ensure messages in the encoder > >> >> were sent out before socket close is causing issues. Reverting that fix > >> >> (for libzmq, it's commit f27eb67e) seems to clear this up. But we still > >> >> probably want something to fix what that commit was attempting to fix > >> >> (for > >> >> details on that, see https://zeromq.jira.com/browse/LIBZMQ-497). 
> >> >> > >> >> AJ > >> >> > >> >> On Wed, Nov 13, 2013 at 11:06:35PM +, Bill M wrote: > >> >> > AJ Lewis quantum.com> writes: > >> >> > > >> >> > > > >> >> > > I've recently seen the same thing in 3.2.3, but hadn't been able to > >> >> > > pinpoint > >> >> > > whether the problem was in zmq proper, or in the application using > >> >> > > it. I > >> >> > > look forward to the results of this question. > >> >> > > > >> >> > > On Wed, Nov 06, 2013 at 09:47:55AM -0800, Andy Tucker wrote: > >> >> > > > Hi, I have a program that sends messages on a ZMQ_DEALER socket > >> >> > > > with with > >> >> > > > ZMQ_DONTWAIT. If it gets back EAGAIN (perhaps because the other > >> >> > > > end is > >> >> > > > responding slowly or has gone away) it calls zmq_close to close > >> >> > > > the socket > >> >> > > > and then re-establish the connection (possibly to a new endpoint) > >> >> > > > with a > >> >> > > > new socket. ZMQ_LINGER is set to 0 (this doesn't appear to happen > >> >> > > > if > >> >> > > > ZMQ_LINGER isn't set, but that can cause other issues). > >> >> > > > > >> >> > > > I'm occasionally seeing crashes in the libzmq epoll_t thread with > >> >> > > > either > >> >> > > > "pure virtual method called" or a segmentation fault. 
The stack > >> > > > looks like > >> > > > (this is with libzmq 3.2.4 but others are similar): > >> > > > > >> > > > #4 0x7f8928939ca3 in std::terminate() () from > >> > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 > >> > > > #5 0x7f892893a77f in __cxa_pure_virtual () from > >> > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 > >> > > > #6 0x7f8929649db1 in zmq::v1_encoder_t::message_ready > >> > > > (this=0x7f8918000b90) at v1_encoder.cpp:66 > >> > > > #7 0x7f892964a2a4 in > >> > > > zmq::encoder_base_t::get_data > >> > > > (this=0x7f8918000b90, data_=0x7f8918000928, size_=0x7f8918000930, > >> > > > offset_=0x0) at encoder.hpp:93 > >> > > > #8 0x7f892963fb42 in zmq::stream_engine_t::out_event > >> > > > (this=0x7f89180008e0) at stream_engine.cpp:261 > >> > > > #9 0x7f8929627d1a in zmq::epoll_t::loop (this=0x8eace0) at > >> > > > epoll.cpp:158 > >> > > > #10 0x7f8929644996 in thread_routine (arg_=0x8ead50) at > >> > > > thread.cpp:83 > >> > > > #11 0x7f8928be6e9a in start_thread (arg=0x7f89271b9700) at > >> > > > pthread_create.c:308 > >> > > > #12 0x7f89293453fd in clone () at > >> > > > ../sysdeps/unix/sysv/linux/x86_64/clone.S:112 > >> > > > > >> > > > Looking at the core, it appears that the memory pointed to by the > >> > > > msg_source field in the encoder has been freed (the "pure virtual > >> > > > method > >> > > > called" is because the vtbl pointer has been munged by something > >> > > > that > >> > > > re-allocated the buffer). The msg_source field points to the > >> > > > session_base_t, but that was freed by the zmq_close. The > >> > > > session_base_t > >> > > > destructor calls engine->terminate(), which would normally free > >> > > > the engine > >> > > > state but doesn't do anything if the encoder still has data left > >> > > > to be sent. > >> > > > > >> > > > I've reproduced this with 3.2.4, 4.0.1, and master (as of a few > >> > > > days ago). I filed LIBZMQ-576 and attached a small test program to the issue. > >> > > > This looks like a libzmq bug to me, though if I'm misusing the API in some > >> > > > way (or if there's a reasonable workaround) please let me know. > >> > > > Andy
Re: [zeromq-dev] libzmq crash closing socket with pending messages
For the backports you can do this now. For the master release I'll do that all at once for the next release. On Thu, Jan 2, 2014 at 11:16 PM, AJ Lewis wrote: > On Thu, Jan 02, 2014 at 10:16:40PM +0100, Pieter Hintjens wrote: >> Thanks for those pull requests. I merged them. You can update the NEWS >> as you go along (didn't check if you did that), particularly for >> backports. > > Cool - thanks. I didn't adjust the NEWS. Should I make another pull > request for that, or will it get adjusted by someone else later? > > AJ > [...]
Re: [zeromq-dev] libzmq crash closing socket with pending messages
On Thu, Jan 02, 2014 at 10:16:40PM +0100, Pieter Hintjens wrote: > Thanks for those pull requests. I merged them. You can update the NEWS > as you go along (didn't check if you did that), particularly for > backports. Cool - thanks. I didn't adjust the NEWS. Should I make another pull request for that, or will it get adjusted by someone else later? AJ > [...]
Re: [zeromq-dev] Async access to a single socket - send and recv without reducing throughput
Actually, if the inproc sockets are push/pull rather than pair, then you can now send and receive from arbitrary threads so long as you store the "public" endpoints of sendsock and recvsock in thread-local storage and are willing to construct new sockets whenever you try to send or receive from a new thread. (It's possible this is a bad idea. I don't really know.) On Thu, Jan 2, 2014 at 4:52 PM, Lindley French wrote: > Here's my thoughts. > > It's a lot easier to deal with asynchronous sends and receives if we can > use a different thread for each. Even if you can't send on multiple threads > at once, and can't receive on multiple threads at once, you should be able > to use different threads for each simultaneously. This is, for instance, > the guarantee offered by Java SocketChannels. > > So how can we arrange to allow this with optimal efficiency? Here's one > plan. Let me know if there's any problems with it. > 1) Have three ZMQ sockets: one the "normal" socket (probably TCP) called > commsock, and two inproc pair sockets. Call these sendsock and recvsock. > 2) Have a private thread which polls sendsock and commsock. If a message > is available on sendsock, forward it to commsock. If a message is available > on commsock, forward it to recvsock. > 3) One thread can now talk to the other endpoint of sendsock for sending, > while a second thread blocks on the other endpoint of recvsock for > receiving. > > > On Thu, Jan 2, 2014 at 4:27 PM, Lindley French wrote: > >> Did the inproc solution work well for you? >> >> I believe this is a very common problem (how do I send and receive >> asynchronously on the same zmq socket without touching it from multiple >> threads?), and there ought to be a best practice defined for it. >> >> >> On Tue, Dec 31, 2013 at 11:26 AM, Amir Taaki wrote: >> >>> thanks! I went with inproc sockets... hope performance is good. will >>> benchmark this. 
>>> >>> >>> https://github.com/spesmilo/obelisk/commit/c0882a9b74bcce3cca41e0cf6dab4cb93552ad39 >>> >>> >>> >>> >>> >>> On Tuesday, December 31, 2013 12:44 PM, Bruno D. Rodrigues < >>> bruno.rodrig...@litux.org> wrote: >>> poll on an additional pair of zmq inproc:// sockets ;) >>> >>> >>> On Dec 31, 2013, at 12:41, Matt Connolly wrote: >>> >>> > Zmq poller can also wake on standard file descriptors (eg unix >>> socket). If your custom event can write to a unix socket or pipe you might >>> be in luck. >>> > >>> > Cheers >>> > Matt. >>> > >>> >> On 31 Dec 2013, at 4:35 pm, Amir Taaki wrote: >>> >> >>> >> >>> >> >>> >> I was reading the docs and saw: >>> >> >>> >> "How can I integrate ØMQ sockets with normal sockets? Or with a GUI >>> event loop? >>> >> You can use the zmq_poll() function to poll for events on both ØMQ >>> and normal sockets. The zmq_poll() function accepts a timeout so if you >>> need to poll and process GUI >>> >> events in the same application thread you can set a timeout and >>> >> periodically poll for GUI events. See also the reference >>> documentation." >>> >> >>> >> >>> >> How can I wake up zmq_poll() with a custom event? This seems to solve >>> my problems if possible. Then I can create a wakeup to process sends in the >>> polling loop. >>> >> >>> >> >>> >> On Tuesday, December 31, 2013 6:14 AM, Amir Taaki >>> wrote: >>> >> >>> >> Hi! >>> >> >>> >> I have a bit of a design issue. I want to achieve this: >>> >> >>> >> - Software receives a request on a DEALER socket from a ROUTER socket >>> (think the worker in paranoid pirate pattern). >>> >> - Request is executed asynchronously in the software. Result is ready >>> inside another thread. >>> >> >>> >> - Send it back over the same socket as the receive. >>> >> >>> >> In an ideal world, I'd be polling the socket to receive in one >>> thread, and then performing the sends from another. But we cannot use >>> sockets from multiple threads - sockets can only be used from one thread. 
>>> >> >>> >> Currently I have a multi-producer single-consumer lockless queue for >>> sends, and then there's an std::condition_variable that's waiting for a >>> signal to wake-up and batch process the send queue. Also there's a polling >>> loop for receives. All the options I can think of have downsides: >>> >> >>> >> * Use a separate receive and send socket. I'd need some mechanism so >>> that the broker (ppqueue) is aware of receive/send socket pairs. Maybe a >>> special message for the broker that isn't relayed, but indicates the >>> identity of the receive socket. >>> >> >>> >> * Polling then sending reduces the throughput of sends. I've >>> benchmarked performance and it's a significant hit. You're essentially >>> penalising sends by T microsecs. Sleeping for 0 is not a good idea since >>> CPU usage hits 100%. >>> >> >>> >> * Using the same socket but synchronising access from the send/recv >>> threads - ZMQ crashes infrequently when I do this because of triggered >>> asserts. It'd be great if I know how to do this safely (maybe by calling >>> some method on the soc
Re: [zeromq-dev] Async access to a single socket - send and recv without reducing throughput
Here are my thoughts. It's a lot easier to deal with asynchronous sends and receives if we can use a different thread for each. Even if you can't send on multiple threads at once, and can't receive on multiple threads at once, you should be able to use different threads for each simultaneously. This is, for instance, the guarantee offered by Java SocketChannels. So how can we arrange to allow this with optimal efficiency? Here's one plan. Let me know if there are any problems with it. 1) Have three ZMQ sockets: one the "normal" socket (probably TCP) called commsock, and two inproc pair sockets. Call these sendsock and recvsock. 2) Have a private thread which polls sendsock and commsock. If a message is available on sendsock, forward it to commsock. If a message is available on commsock, forward it to recvsock. 3) One thread can now talk to the other endpoint of sendsock for sending, while a second thread blocks on the other endpoint of recvsock for receiving. On Thu, Jan 2, 2014 at 4:27 PM, Lindley French wrote: > Did the inproc solution work well for you? > > I believe this is a very common problem (how do I send and receive > asynchronously on the same zmq socket without touching it from multiple > threads?), and there ought to be a best practice defined for it. > [...]
Re: [zeromq-dev] Heartbeating using TCP keepalives
Seconds is fine for this case but surprising overall since all other durations in the API are in msec. I'm not sure what you mean about backwards compatibility. On Thu, Jan 2, 2014 at 7:55 PM, Alex Grönholm wrote: > 02.01.2014 15:59, Pieter Hintjens kirjoitti: >> It makes sense, and I'd try this; the timeout should be in msec, to be >> consistent with other duration arguments. You can take any of the >> existing socket options like ZMQ_SNDBUF as a template, and make a pull >> request. > Wouldn't it be enough to document that the values are expressed in > seconds and not ms? > Who needs sub-second accuracy with keepalives? > Besides, converting the values on non-Windows systems would break > backwards compatibility. > Are you fine with that? This definitely should not be done in a micro > release. >> >> On Mon, Dec 30, 2013 at 11:29 PM, Alex Grönholm >> wrote: >>> This isn't directly related to ZeroMQ, but it is somewhat relevant now given >>> A) the addition of the (yet unimplemented) heartbeating feature in ZMTP/3.0 >>> and B) the Windows TCP keepalive parameters fix I committed recently. >>> The question is: has someone here used TCP keepalives as a substitute for >>> application level heartbeating? Given the operating model of ZeroMQ, using >>> TCP keepalives for this purpose would transparently shield the user from >>> stale connections. Are there any downsides to this? >>> TCP keepalives, when turned on, use a 2 hour interval by default (this is a >>> de facto standard). This makes them impractical unless the values are >>> adjusted. >>> I've done some research on that. 
From what I've gathered, it seems that >>> setting TCP keepalive parameters on a per-socket level is supported at least >>> on the following operating systems: >>> >>> Linux >>> FreeBSD >>> Windows (since Windows 2000; set only, read not supported; number of >>> keepalive probes is fixed on 10; must be set before connecting; values in >>> milliseconds, not seconds) >>> Mac OS X (since Mountain Lion) >>> AIX >>> Solaris (values in milliseconds, not seconds) >>> >>> It seems that both iOS and Android support sending TCP keepalives, but >>> setting keepalive parameters is not supported. >>> Note that the Windows TCP keepalive parameters patch takes the time >>> intervals in seconds and multiplies by 1000 on Windows for cross platform >>> compatibility. There is no similar fix for Solaris yet so Solaris users need >>> to do it on the application level for now. >>> >>> Setting the keepalive idle and retransmission delay to values like 10 and 5 >>> seconds would make a lot of sense to me. If the peer fails to respond to the >>> probes, zmq will just see a disconnection. >>> >>> >>> ___ >>> zeromq-dev mailing list >>> zeromq-dev@lists.zeromq.org >>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> >> ___ >> zeromq-dev mailing list >> zeromq-dev@lists.zeromq.org >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Async access to a single socket - send and recv without reducing throughput
Did the inproc solution work well for you? I believe this is a very common problem (how do I send and receive asynchronously on the same zmq socket without touching it from multiple threads?), and there ought to be a best practice defined for it. On Tue, Dec 31, 2013 at 11:26 AM, Amir Taaki wrote: > thanks! I went with inproc sockets... hope performance is good. will > benchmark this. > > > https://github.com/spesmilo/obelisk/commit/c0882a9b74bcce3cca41e0cf6dab4cb93552ad39 > > > > > > On Tuesday, December 31, 2013 12:44 PM, Bruno D. Rodrigues < > bruno.rodrig...@litux.org> wrote: > poll on an additional pair of zmq inproc:// sockets ;) > > > On Dec 31, 2013, at 12:41, Matt Connolly wrote: > > > Zmq poller can also wake on standard file descriptors (eg unix socket). > If your custom event can write to a unix socket or pipe you might be in > luck. > > > > Cheers > > Matt. > > > >> On 31 Dec 2013, at 4:35 pm, Amir Taaki wrote: > >> > >> > >> > >> I was reading the docs and saw: > >> > >> "How can I integrate ØMQ sockets with normal sockets? Or with a GUI > event loop? > >> You can use the zmq_poll() function to poll for events on both ØMQ and > normal sockets. The zmq_poll() function accepts a timeout so if you need to > poll and process GUI > >> events in the same application thread you can set a timeout and > >> periodically poll for GUI events. See also the reference documentation." > >> > >> > >> How can I wake up zmq_poll() with a custom event? This seems to solve > my problems if possible. Then I can create a wakeup to process sends in the > polling loop. > >> > >> > >> On Tuesday, December 31, 2013 6:14 AM, Amir Taaki > wrote: > >> > >> Hi! > >> > >> I have a bit of a design issue. I want to achieve this: > >> > >> - Software receives a request on a DEALER socket from a ROUTER socket > (think the worker in paranoid pirate pattern). > >> - Request is executed asynchronously in the software. Result is ready > inside another thread. 
> >> > >> - Send it back over the same socket as the receive. > >> > >> In an ideal world, I'd be polling the socket to receive in one thread, > and then performing the sends from another. But we cannot use sockets from > multiple threads - sockets can only be used from one thread. > >> > >> Currently I have a multi-producer single-consumer lockless queue for > sends, and then there's an std::condition_variable that's waiting for a > signal to wake-up and batch process the send queue. Also there's a polling > loop for receives. All the options I can think of have downsides: > >> > >> * Use a separate receive and send socket. I'd need some mechanism so > that the broker (ppqueue) is aware of receive/send socket pairs. Maybe a > special message for the broker that isn't relayed, but indicates the > identity of the receive socket. > >> > >> * Polling then sending reduces the throughput of sends. I've > benchmarked performance and it's a significant hit. You're essentially > penalising sends by T microsecs. Sleeping for 0 is not a good idea since > CPU usage hits 100%. > >> > >> * Using the same socket but synchronising access from the send/recv > threads - ZMQ crashes infrequently when I do this because of triggered > asserts. It'd be great if I know how to do this safely (maybe by calling > some method on the socket). > >> > >> How do I achieve this scenario of 50% random receives and 50% random > sends? It's not like the classic scenario of a receive, followed by some > synchronous code path, and then a send (within the same thread). It's not > an option to wait for requests to finish as I dropped Thrift to use ZMQ > because of its async ability. > >> > >> Thanks! 
> >> ___ > >> zeromq-dev mailing list > >> zeromq-dev@lists.zeromq.org > >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > >> ___ > >> zeromq-dev mailing list > >> zeromq-dev@lists.zeromq.org > >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > ___ > > zeromq-dev mailing list > > zeromq-dev@lists.zeromq.org > > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Zyre ipaddress in Hello message
Yes. This is the way to go. I think we need a lower-level zsys method that returns a zlist of interfaces with some properties (wifi, lan, ipv4/ipv6 etc.). On Thu, Jan 2, 2014 at 7:12 PM, Lindley French wrote: > There's another advantage to doing one socket per address---it makes it easy > to pick and choose which interfaces you really want to listen/beacon on. On > a phone, for instance, it might make a lot of sense to beacon on wlan0 > (wifi) and bnep0 (bluetooth), but less sense to beacon on rmnet0 (4G). > Alternatively, if you are using a network simulator like CORE or EMANE, you > might need to make sure you *only* beacon over a particular interface. > > How these interfaces are specified is, of course, a difficult problem to get > right. Nonetheless, the ability to be selective is useful. > > > On Thu, Jan 2, 2014 at 8:51 AM, Pieter Hintjens wrote: >> >> On Thu, Jan 2, 2014 at 1:06 PM, Arnaud Loonstra >> wrote: >> >> > IMHO the easiest way to solve this is to get the ipaddress through the >> > 0mq socket. Pieter you said this was available in 0mq4+. Are there any >> > examples or docs? >> >> It's not yet available at the libzmq API, only internally. I'm not >> happy making the ZRE protocol depend on a specific version of ZeroMQ >> either. >> >> The beacon isn't a problem afaics: it is trivial to get the sender >> address for a beacon and we already do that. I don't see there's any >> requirement for endpoints except tcp:// at the moment. So a beacon >> with port number is fine. >> >> I think we can get the socket interface address for each received >> beacon, and deliver that as a 3rd frame. >> >> -Pieter >> ___ >> zeromq-dev mailing list >> zeromq-dev@lists.zeromq.org >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > > > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
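Such a zsys method does not exist yet; a hypothetical sketch of what it might gather, built on POSIX getifaddrs() (the function name and the printed properties are invented for illustration, and a real zlist-based API would collect entries rather than print them):

```c
#include <ifaddrs.h>
#include <stdio.h>
#include <sys/socket.h>

/* Walk the system's network interfaces and report name plus address
   family, returning how many ipv4/ipv6 addresses were found (or -1 on
   error). Wifi/lan/4G classification would need extra, OS-specific
   lookups and is omitted here. */
int count_interfaces (void)
{
    struct ifaddrs *ifaddr, *ifa;
    int count = 0;
    if (getifaddrs (&ifaddr) != 0)
        return -1;
    for (ifa = ifaddr; ifa; ifa = ifa->ifa_next) {
        if (!ifa->ifa_addr)
            continue;
        int family = ifa->ifa_addr->sa_family;
        if (family == AF_INET || family == AF_INET6) {
            printf ("%s: %s\n", ifa->ifa_name,
                family == AF_INET ? "ipv4" : "ipv6");
            count++;
        }
    }
    freeifaddrs (ifaddr);
    return count;
}
```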
Re: [zeromq-dev] libzmq crash closing socket with pending messages
Thanks for those pull requests. I merged them. You can update the NEWS as you go along (didn't check if you did that), particularly for backports. On Thu, Jan 2, 2014 at 5:34 PM, AJ Lewis wrote: > Just a heads up that I'm going to submit pull requests to libzmq, zeromq3-x, > and zeromq4-x to revert the fix for LIBZMQ-497 in order to fix LIBZMQ-576. > This means some other solution needs to be found for that problem though - I > don't have a clear idea of how to do that, but I do know that crashing on > socket close isn't acceptable behavior. > > Thanks, > AJ > > On Wed, Nov 13, 2013 at 06:55:25PM -0600, AJ Lewis wrote: >> Check out https://zeromq.jira.com/browse/LIBZMQ-576 for more info. It >> looks like a previous fix for trying to ensure messages in the encoder >> were sent out before socket close is causing issues. Reverting that fix >> (for libzmq, it's commit f27eb67e) seems to clear this up. But we still >> probably want something to fix what that commit was attempting to fix (for >> details on that, see https://zeromq.jira.com/browse/LIBZMQ-497). >> >> AJ >> >> On Wed, Nov 13, 2013 at 11:06:35PM +, Bill M wrote: >> > AJ Lewis quantum.com> writes: >> > >> > > >> > > I've recently seen the same thing in 3.2.3, but hadn't been able to >> > > pinpoint >> > > whether the problem was in zmq proper, or in the application using it. I >> > > look forward to the results of this question. >> > > >> > > On Wed, Nov 06, 2013 at 09:47:55AM -0800, Andy Tucker wrote: >> > > > Hi, I have a program that sends messages on a ZMQ_DEALER socket with >> > > > with >> > > > ZMQ_DONTWAIT. If it gets back EAGAIN (perhaps because the other end is >> > > > responding slowly or has gone away) it calls zmq_close to close the >> > > > socket >> > > > and then re-establish the connection (possibly to a new endpoint) with >> > > > a >> > > > new socket. ZMQ_LINGER is set to 0 (this doesn't appear to happen if >> > > > ZMQ_LINGER isn't set, but that can cause other issues). 
>> > > > >> > > > I'm occasionally seeing crashes in the libzmq epoll_t thread with >> > > > either >> > > > "pure virtual method called" or a segmentation fault. The stack looks >> > > > like >> > > > (this is with libzmq 3.2.4 but others are similar): >> > > > >> > > > #4 0x7f8928939ca3 in std::terminate() () from >> > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 >> > > > #5 0x7f892893a77f in __cxa_pure_virtual () from >> > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 >> > > > #6 0x7f8929649db1 in zmq::v1_encoder_t::message_ready >> > > > (this=0x7f8918000b90) at v1_encoder.cpp:66 >> > > > #7 0x7f892964a2a4 in >> > > > zmq::encoder_base_t::get_data >> > > > (this=0x7f8918000b90, data_=0x7f8918000928, size_=0x7f8918000930, >> > > > offset_=0x0) at encoder.hpp:93 >> > > > #8 0x7f892963fb42 in zmq::stream_engine_t::out_event >> > > > (this=0x7f89180008e0) at stream_engine.cpp:261 >> > > > #9 0x7f8929627d1a in zmq::epoll_t::loop (this=0x8eace0) at >> > > > epoll.cpp:158 >> > > > #10 0x7f8929644996 in thread_routine (arg_=0x8ead50) at >> > > > thread.cpp:83 >> > > > #11 0x7f8928be6e9a in start_thread (arg=0x7f89271b9700) at >> > > > pthread_create.c:308 >> > > > #12 0x7f89293453fd in clone () at >> > > > ../sysdeps/unix/sysv/linux/x86_64/clone.S:112 >> > > > >> > > > Looking at the core, it appears that the memory pointed to by the >> > > > msg_source field in the encoder has been freed (the "pure virtual >> > > > method >> > > > called" is because the vtbl pointer has been munged by something that >> > > > re-allocated the buffer). The msg_source field points to the >> > > > session_base_t, but that was freed by the zmq_close. The session_base_t >> > > > destructor calls engine->terminate(), which would normally free the >> > > > engine >> > > > state but doesn't do anything if the encoder still has data left to be >> > > > sent. >> > > > >> > > > I've reproduced this with 3.2.4, 4.0.1, and master (as of a few days >> > > > ago). 
>> > > > I filed LIBZMQ-576 and attached a small test program to the issue. >> > > > >> > > > This looks like a libzmq bug to me, though if I'm misusing the API in >> > > > some >> > > > way (or if there's a reasonable workaround) please let me know. >> > > > >> > > > Andy >> > > >> > > > ___ >> > > > zeromq-dev mailing list >> > > > zeromq-dev lists.zeromq.org >> > > > http://lists.zeromq.org/mailman/listinfo/zeromq-dev >> > > >> > >> > >> > I'm seeing something similar too, using zmq 3.2.3 through PHP. >> > The segfault is killing the apache process with the following stack trace: >> > >> > #0 0x7f4ae573ab65 in raise () from /lib/libc.so.6 >> > #1 0x7f4ae573e6b0 in abort () from /lib/libc.so.6 >> > #2 0x7f4adbaaa8c5 in __gnu_cxx::__verbose_terminate_handler() () from >> > /usr/lib/libstdc++.so.6 >> > #3 0x7f4adbaa8cf6 in ?? () from /usr/lib/libstdc++.so.6 >> > #4 0x7f4adbaa8d23 in std::terminate() () fro
Re: [zeromq-dev] Possible non conformance of libzmq with RFC/26 ?
On Tue, Dec 31, 2013 at 5:01 PM, Laurent Alebarde wrote: > In StreamQ-Proxy, I test that in the handshake, I have "CURVE", then > "READY". From RFC/26, I SHOULD test: > !memcmp(content + 1, "READY", 5) > Instead, I have to use: > !memcmp(content + 3, "READY", 5) OK, RFC 26 assumes you have some mechanism to send/recv frames. So the command comes at the start of the frame, and consists of a 1-octet length plus the name, so [5]READY. If you are reading raw ZeroMQ frames then you also get the frame header, two bytes containing the command size. So, content + 3. -Pieter ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
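Pieter's byte layout can be checked directly. A short sketch, assuming a raw ZMTP command frame of the shape [flags octet][size octet][name-length octet][name...]:

```c
#include <string.h>

/* Check whether a raw ZMTP command frame carries the given command
   name. frame[0] = flags, frame[1] = frame size, frame[2] = RFC 26's
   one-octet command-name length; the name itself therefore starts at
   frame + 3 -- hence "content + 3" when parsing raw frames. */
int frame_has_command (const unsigned char *frame, const char *name)
{
    size_t len = strlen (name);
    return frame[2] == len && memcmp (frame + 3, name, len) == 0;
}
```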
Re: [zeromq-dev] Application loop methods
On 01/02/2014 04:22 PM, Arnaud Loonstra wrote: > Hi all, > > I was reading through czmq's zloop class and wondering what methods > exist for creating application loops. I'm used to using frameworks > which handle these so I'm not really 'under the hood' often. From what I > can tell ZeroMQ uses file-descriptor polling a lot > (select/poll/epoll/kqueue?) zloop also uses timed events. > > Qt/GTK have their own event system; Qt uses signals but I think they are > the same as events. I've also read about using (Unix) signals for > handling loops. > > It seems select, poll, epoll and kqueue are very efficient. What other > methods exist? What do people advise or use most frequently? If anybody is interested, I've implemented a ZeroMQ event loop for Urwid (a curses-based UI library for Python). Since Urwid uses select polling, it was quite easy to make a ZeroMQ version. Urwid also ships with Glib and Twisted event loops. https://gist.github.com/sphaero/8225315 Tested on Linux, Python 3. Rg, Arnaud -- w: http://www.sphaero.org t: http://twitter.com/sphaero g: http://github.com/sphaero i: freenode: sphaero_z25 ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Heartbeating using TCP keepalives
On 02.01.2014 15:59, Pieter Hintjens wrote: > It makes sense, and I'd try this; the timeout should be in msec, to be > consistent with other duration arguments. You can take any of the > existing socket options like ZMQ_SNDBUF as a template, and make a pull > request. Wouldn't it be enough to document that the values are expressed in seconds and not ms? Who needs sub-second accuracy with keepalives? Besides, converting the values on non-Windows systems would break backwards compatibility. Are you fine with that? This definitely should not be done in a micro release. > > On Mon, Dec 30, 2013 at 11:29 PM, Alex Grönholm > wrote: >> This isn't directly related to ZeroMQ, but it is somewhat relevant now given >> A) the addition of the (yet unimplemented) heartbeating feature in ZMTP/3.0 >> and B) the Windows TCP keepalive parameters fix I committed recently. >> The question is: has someone here used TCP keepalives as a substitute for >> application-level heartbeating? Given the operating model of ZeroMQ, using >> TCP keepalives for this purpose would transparently shield the user from >> stale connections. Are there any downsides to this? >> TCP keepalives, when turned on, use a 2-hour interval by default (this is a >> de facto standard). This makes them impractical unless the values are >> adjusted. >> I've done some research on that. From what I've gathered, it seems that >> setting TCP keepalive parameters on a per-socket level is supported at least >> on the following operating systems: >> >> Linux >> FreeBSD >> Windows (since Windows 2000; set only, read not supported; number of >> keepalive probes is fixed at 10; must be set before connecting; values in >> milliseconds, not seconds) >> Mac OS X (since Mountain Lion) >> AIX >> Solaris (values in milliseconds, not seconds) >> >> It seems that both iOS and Android support sending TCP keepalives, but >> setting keepalive parameters is not supported. 
>> Note that the Windows TCP keepalive parameters patch takes the time >> intervals in seconds and multiplies by 1000 on Windows for cross platform >> compatibility. There is no similar fix for Solaris yet so Solaris users need >> to do it on the application level for now. >> >> Setting the keepalive idle and retransmission delay to values like 10 and 5 >> seconds would make a lot of sense to me. If the peer fails to respond to the >> probes, zmq will just see a disconnection. >> >> >> ___ >> zeromq-dev mailing list >> zeromq-dev@lists.zeromq.org >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >> > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
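At the OS level, the per-socket parameters under discussion look like this on Linux (option names differ elsewhere, and as noted Windows and Solaris expect milliseconds rather than seconds; this sketch assumes the Linux names):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Apply the 10 s idle / 5 s retransmission settings suggested in the
   thread, plus a probe count. Values are in seconds here, which is the
   compatibility point being debated. Returns 0 on success. */
int set_keepalive (int fd, int idle_s, int intvl_s, int cnt)
{
    int on = 1;
    if (setsockopt (fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) != 0)
        return -1;
    if (setsockopt (fd, IPPROTO_TCP, TCP_KEEPIDLE,
            &idle_s, sizeof idle_s) != 0)
        return -1;
    if (setsockopt (fd, IPPROTO_TCP, TCP_KEEPINTVL,
            &intvl_s, sizeof intvl_s) != 0)
        return -1;
    return setsockopt (fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt);
}
```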
Re: [zeromq-dev] An interesting use-case for EdgeNet : Asynchronous IRC?
As far as async twitter goes, it isn't a public product. It was a sample application to demonstrate the merits of DARPA CBMEN technology. On Thu, Jan 2, 2014 at 12:53 AM, Lindley French wrote: > Thanks for the offer but that probably isn't a great way to debug. I may > be getting an Android device in a month or so. > > > On Wed, Jan 1, 2014 at 9:02 PM, crocket wrote: > >> "Sent from my iPhone" tells you have an iPhone. >> Do you need to buy an android device? I already have nexus 7 and just >> ordered a nexus 5 yesterday. >> I can test your programs if you give .apk to me or upload your app to >> play store. >> >> >> On Thu, Jan 2, 2014 at 7:14 AM, Lindley French wrote: >> >>> Maybe. I have some practical experience but I don't have an Android >>> device right now, and the emulators don't behave exactly like the devices >>> in all cases. >>> >>> Sent from my iPhone >>> >>> > On Jan 1, 2014, at 4:37 PM, Pieter Hintjens wrote: >>> > >>> > Lindley, would you be able to help get Zyre et all working on Android? >>> > >>> >> On Wed, Jan 1, 2014 at 8:44 PM, Lindley French >>> wrote: >>> >> Oh---and some network functionality shuts down on Android when the >>> device is >>> >> inactive if you don't take the appropriate lock. This is a critical >>> >> consideration when designing edge networking services. >>> >> >>> >> On Jan 1, 2014, at 1:17 PM, Lindley French >>> wrote: >>> >> >>> >> On Android at least, if you have any trouble with UDP broadcast or >>> >> multicast, you should trying using the IPv6 all-hosts address. >>> Android's >>> >> built-in filtering doesn't seem to affect IPv6 the same way as IPv4. >>> >> >>> >> >>> >> On Wed, Jan 1, 2014 at 12:10 AM, Sean Robertson < >>> sprobert...@gmail.com> >>> >> wrote: >>> >>> >>> >>> I have something like this in the works, in the form of an iOS >>> application >>> >>> that I hope to soon port to Android. 
It doesn't properly use Zyre >>> but rather >>> >>> my own haphazard reimplementation, due to some silliness with >>> Apple's UDP >>> >>> broadcast (https://github.com/zeromq/czmq/issues/297). The UI works >>> decently >>> >>> though. I'll send the code to this list later this week. >>> >>> >>> On Dec 31, 2013 6:38 PM, "Lindley French" >>> wrote: >>> >>> Asych twitter is a good idea and will work well. I've seen it done. >>> Another fun application is async push to talk. >>> >>> On Dec 31, 2013, at 9:32 PM, crocket >>> wrote: >>> >>> May asynchronous twitter be more appropriate for my idea? >>> Asynchronous twitter, asynchronous IRC, whatever. >>> >>> >>> > On Wed, Jan 1, 2014 at 11:19 AM, crocket >>> wrote: >>> > >>> > With asynchronous IRC software, you can choose your nickname and a >>> > topic. >>> > You send messages that belong to a topic. >>> > People who subscribed to that topic receive your message. >>> > Or they might choose to receive messages from every topic. >>> > >>> > This becomes very interesting when population density goes up very >>> high >>> > in a small area. >>> > Imagine that you went to comiket. Wikipedia says "Comiket (コミケット >>> > Komiketto?), otherwise known as the Comic Market (コミックマーケット Komikku >>> > Māketto?), is the world's largest dōjinshi fair, held twice a year >>> in Tokyo, >>> > Japan." >>> > >>> > ~590,000 people attended comiket last summer. It basically looks >>> like >>> > http://en.wikipedia.org/wiki/File:Comiket77.jpg >>> > >>> > With hundreds of thousands of people in a small area, asynchronous >>> IRC >>> > becomes fun. >>> > Not as fun as the near-synchronous one we have now, but still. >>> > >>> > I think asynchronous IRC may entice people to adopt EdgeNet >>> starting >>> > from big meetups. 
>>> >>> >>> ___ >>> zeromq-dev mailing list >>> zeromq-dev@lists.zeromq.org >>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> >>> >>> ___ >>> zeromq-dev mailing list >>> zeromq-dev@lists.zeromq.org >>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> >>> >>> >>> ___ >>> >>> zeromq-dev mailing list >>> >>> zeromq-dev@lists.zeromq.org >>> >>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> >> >>> >> >>> >> ___ >>> >> zeromq-dev mailing list >>> >> zeromq-dev@lists.zeromq.org >>> >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> > ___ >>> > zeromq-dev mailing list >>> > zeromq-dev@lists.zeromq.org >>> > http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> ___ >>> zeromq-dev mailing list >>> zeromq-dev@lists.zeromq.org >>> http://lists.zeromq.org/mailman/listinfo/zeromq-dev >>> >> >> >> __
Re: [zeromq-dev] Zyre ipaddress in Hello message
There's another advantage to doing one socket per address---it makes it easy to pick and choose which interfaces you really want to listen/beacon on. On a phone, for instance, it might make a lot of sense to beacon on wlan0 (wifi) and bnep0 (bluetooth), but less sense to beacon on rmnet0 (4G). Alternatively, if you are using a network simulator like CORE or EMANE, you might need to make sure you *only* beacon over a particular interface. How these interfaces are specified is, of course, a difficult problem to get right. Nonetheless, the ability to be selective is useful. On Thu, Jan 2, 2014 at 8:51 AM, Pieter Hintjens wrote: > On Thu, Jan 2, 2014 at 1:06 PM, Arnaud Loonstra > wrote: > > > IMHO the easiest way to solve this is to get the ipaddress through the > > 0mq socket. Pieter you said this was available in 0mq4+. Are there any > > examples or docs? > > It's not yet available at the libzmq API, only internally. I'm not > happy making the ZRE protocol depend on a specific version of ZeroMQ > either. > > The beacon isn't a problem afaics: it is trivial to get the sender > address for a beacon and we already do that. I don't see there's any > requirement for endpoints except tcp:// at the moment. So a beacon > with port number is fine. > > I think we can get the socket interface address for each received > beacon, and deliver that as a 3rd frame. > > -Pieter > ___ > zeromq-dev mailing list > zeromq-dev@lists.zeromq.org > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] libzmq crash closing socket with pending messages
Just a heads up that I'm going to submit pull requests to libzmq, zeromq3-x, and zeromq4-x to revert the fix for LIBZMQ-497 in order to fix LIBZMQ-576. This means some other solution needs to be found for that problem though - I don't have a clear idea of how to do that, but I do know that crashing on socket close isn't acceptable behavior. Thanks, AJ On Wed, Nov 13, 2013 at 06:55:25PM -0600, AJ Lewis wrote: > Check out https://zeromq.jira.com/browse/LIBZMQ-576 for more info. It > looks like a previous fix for trying to ensure messages in the encoder > were sent out before socket close is causing issues. Reverting that fix > (for libzmq, it's commit f27eb67e) seems to clear this up. But we still > probably want something to fix what that commit was attempting to fix (for > details on that, see https://zeromq.jira.com/browse/LIBZMQ-497). > > AJ > > On Wed, Nov 13, 2013 at 11:06:35PM +, Bill M wrote: > > AJ Lewis quantum.com> writes: > > > > > > > > I've recently seen the same thing in 3.2.3, but hadn't been able to > > > pinpoint > > > whether the problem was in zmq proper, or in the application using it. I > > > look forward to the results of this question. > > > > > > On Wed, Nov 06, 2013 at 09:47:55AM -0800, Andy Tucker wrote: > > > > Hi, I have a program that sends messages on a ZMQ_DEALER socket with > > > > with > > > > ZMQ_DONTWAIT. If it gets back EAGAIN (perhaps because the other end is > > > > responding slowly or has gone away) it calls zmq_close to close the > > > > socket > > > > and then re-establish the connection (possibly to a new endpoint) with a > > > > new socket. ZMQ_LINGER is set to 0 (this doesn't appear to happen if > > > > ZMQ_LINGER isn't set, but that can cause other issues). > > > > > > > > I'm occasionally seeing crashes in the libzmq epoll_t thread with either > > > > "pure virtual method called" or a segmentation fault. 
The stack looks > > > > like > > > > (this is with libzmq 3.2.4 but others are similar): > > > > > > > > #4 0x7f8928939ca3 in std::terminate() () from > > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 > > > > #5 0x7f892893a77f in __cxa_pure_virtual () from > > > > /usr/lib/x86_64-linux-gnu/libstdc++.so.6 > > > > #6 0x7f8929649db1 in zmq::v1_encoder_t::message_ready > > > > (this=0x7f8918000b90) at v1_encoder.cpp:66 > > > > #7 0x7f892964a2a4 in > > > > zmq::encoder_base_t::get_data > > > > (this=0x7f8918000b90, data_=0x7f8918000928, size_=0x7f8918000930, > > > > offset_=0x0) at encoder.hpp:93 > > > > #8 0x7f892963fb42 in zmq::stream_engine_t::out_event > > > > (this=0x7f89180008e0) at stream_engine.cpp:261 > > > > #9 0x7f8929627d1a in zmq::epoll_t::loop (this=0x8eace0) at > > > > epoll.cpp:158 > > > > #10 0x7f8929644996 in thread_routine (arg_=0x8ead50) at > > > > thread.cpp:83 > > > > #11 0x7f8928be6e9a in start_thread (arg=0x7f89271b9700) at > > > > pthread_create.c:308 > > > > #12 0x7f89293453fd in clone () at > > > > ../sysdeps/unix/sysv/linux/x86_64/clone.S:112 > > > > > > > > Looking at the core, it appears that the memory pointed to by the > > > > msg_source field in the encoder has been freed (the "pure virtual method > > > > called" is because the vtbl pointer has been munged by something that > > > > re-allocated the buffer). The msg_source field points to the > > > > session_base_t, but that was freed by the zmq_close. The session_base_t > > > > destructor calls engine->terminate(), which would normally free the > > > > engine > > > > state but doesn't do anything if the encoder still has data left to be > > > > sent. > > > > > > > > I've reproduced this with 3.2.4, 4.0.1, and master (as of a few days > > > > ago). > > > > I filed LIBZMQ-576 and attached a small test program to the issue. > > > > > > > > This looks like a libzmq bug to me, though if I'm misusing the API in > > > > some > > > > way (or if there's a reasonable workaround) please let me know. 
> > > > > > > > Andy > > > > > > > ___ > > > > zeromq-dev mailing list > > > > zeromq-dev lists.zeromq.org > > > > http://lists.zeromq.org/mailman/listinfo/zeromq-dev > > > > > > > > > I'm seeing something similar too, using zmq 3.2.3 through PHP. > > The segfault is killing the apache process with the following stack trace: > > > > #0 0x7f4ae573ab65 in raise () from /lib/libc.so.6 > > #1 0x7f4ae573e6b0 in abort () from /lib/libc.so.6 > > #2 0x7f4adbaaa8c5 in __gnu_cxx::__verbose_terminate_handler() () from > > /usr/lib/libstdc++.so.6 > > #3 0x7f4adbaa8cf6 in ?? () from /usr/lib/libstdc++.so.6 > > #4 0x7f4adbaa8d23 in std::terminate() () from /usr/lib/libstdc++.so.6 > > #5 0x7f4adbaa95ff in __cxa_pure_virtual () from /usr/lib/libstdc++.so.6 > > #6 0x7f4ad92267d7 in ?? () from /usr/local/lib/libzmq.so.3 > > #7 0x7f4ad92271af in ?? () from /usr/local/lib/libzmq.so.3 > > #8 0x7f4ad921a0f5 in ?? () from /usr/local/lib/libzmq.so.3 > > #9
[zeromq-dev] Application loop methods
Hi all, I was reading through czmq's zloop class and wondering what methods exist for creating application loops. I'm used to using frameworks which handle these so I'm not really 'under the hood' often. From what I can tell ZeroMQ uses file-descriptor polling a lot (select/poll/epoll/kqueue?) zloop also uses timed events. Qt/GTK have their own event system; Qt uses signals but I think they are the same as events. I've also read about using (Unix) signals for handling loops. It seems select, poll, epoll and kqueue are very efficient. What other methods exist? What do people advise or use most frequently? Rg, Arnaud -- w: http://www.sphaero.org t: http://twitter.com/sphaero g: http://github.com/sphaero i: freenode: sphaero_z25 ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Possible non conformance of libzmq with RFC/26 ?
Here it is: https://github.com/lalebarde/streamq-proxy/commit/17ba132cc444a08e78fb9c6e78f1f2436953e4eb#commitcomment-4969395 On 02/01/2014 15:09, Pieter Hintjens wrote: Can you get a dump of the whole handshake sent by libzmq? Thanks. On Tue, Dec 31, 2013 at 5:01 PM, Laurent Alebarde wrote: Hi Devs, In StreamQ-Proxy, I test that in the handshake, I have "CURVE", then "READY". From RFC/26, I SHOULD test: !memcmp(content + 1, "READY", 5) Instead, I have to use: !memcmp(content + 3, "READY", 5) to have it work. Is it a misunderstanding of mine or a discrepancy between the RFC and the libzmq implementation? It is here: https://github.com/lalebarde/streamq-proxy Cheers, Laurent ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev ___ zeromq-dev mailing list zeromq-dev@lists.zeromq.org http://lists.zeromq.org/mailman/listinfo/zeromq-dev
Re: [zeromq-dev] Reading queued messages after disconnect
On Wed, Jan 1, 2014 at 11:48 PM, Matt Connolly wrote: > disconnect socket, receive messages until you get an ENOTCONN error. Sure, if you connect and then disconnect, you might conceivably still get messages coming in. It's unclear whether this makes sense: a disconnect should perhaps destroy the pipe created during connect. > Alternately, poll and while readable recv the messages. This is how I'm doing > it at present. (But the Ruby bindings don't let me recv with no endpoints, > whereas a C program can.) Interesting. I'm also curious why you would want to do this. It seems wrong. Instead, you'd terminate a protocol properly with whatever handshake, and then destroy the socket. -Pieter
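[For readers of the archive: the "poll and while readable recv" pattern Matt describes looks like the sketch below. This uses plain stdlib sockets rather than a ZeroMQ socket (the same shape applies with zmq_poll and a non-blocking recv); the helper name `drain` is invented for the example.]

```python
import selectors
import socket

def drain(sock, timeout=0.2):
    """Keep receiving while the socket polls readable; stop when the peer has
    closed (recv returns b'') or nothing arrives within `timeout` seconds."""
    sel = selectors.DefaultSelector()
    sel.register(sock, selectors.EVENT_READ)
    chunks = []
    while sel.select(timeout):
        data = sock.recv(4096)
        if not data:          # peer closed: the queue is fully drained
            break
        chunks.append(data)
    sel.close()
    return chunks
```

[Queued data survives the peer's close, so everything sent before the disconnect is still recoverable, which mirrors the behaviour being discussed.]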
Re: [zeromq-dev] ZMQ fails to start on ARM target
It sounds like there's some system-dependent code that's not working properly on uClibc. You'll have to work down to find what that is. My advice is to test fragments of code in isolation rather than in the whole libzmq+application stack. On Wed, Jan 1, 2014 at 2:16 PM, Mike Smith wrote: > Follow-up to the problem: > > I've tracked the failure as far as "ctx.cpp", reaper->start(); This call > never returns, i.e., "poller" does not start? I have dug no further yet. > > This is a peculiar condition that occurs in some apparently specific > instances. Otherwise, it all works correctly. Any ideas? This must have > something to do, perhaps, with uClibc, as I don't find the problem on a > desktop system. > > Thanks, > > Mike
Re: [zeromq-dev] Possible non conformance of libzmq with RFC/26 ?
Can you get a dump of the whole handshake sent by libzmq? Thanks. On Tue, Dec 31, 2013 at 5:01 PM, Laurent Alebarde wrote: > Hi Devs, > > In StreamQ-Proxy, I test that in the handshake I have "CURVE", then > "READY". From RFC/26, I SHOULD test: > > !memcmp(content + 1, "READY", 5) > > Instead, I have to use: > > !memcmp(content + 3, "READY", 5) > > to have it work. > > Is it a misunderstanding of mine or a discrepancy between the RFC and the > libzmq implementation? > > It is here: https://github.com/lalebarde/streamq-proxy > > Cheers, > > Laurent
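[For readers of the archive: the two offsets are consistent with a framing mix-up rather than an RFC error. In ZMTP/3.0 framing, a command body starts with a one-octet command-name length followed by the name, so "READY" sits at offset 1 of the body; if the buffer still carries the frame header of a short frame (a flags octet plus a one-octet size), the name shifts to offset 3. The sketch below is illustrative only, with an assumed short-frame encoding; it is not Laurent's actual code.]

```python
from typing import Optional

def find_command_name(body: bytes) -> Optional[str]:
    """Parse a command name assuming `body` starts at the ZMTP/3.0 command
    body: one octet of name length, then the name itself."""
    if not body:
        return None
    name_len = body[0]
    if len(body) < 1 + name_len:
        return None
    return body[1:1 + name_len].decode("ascii", "replace")

# Assumed short-frame layout: flags octet, one-octet body size, then the body.
frame = bytes([0x04, 0x06, 0x05]) + b"READY"

# Pointing at the body finds "READY" at offset 1; pointing at the raw frame,
# the name lands two bytes later, at offset 3 -- matching the observation.
assert find_command_name(frame[2:]) == "READY"
assert frame[3:8] == b"READY"
```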
Re: [zeromq-dev] Heartbeating using TCP keepalives
It makes sense, and I'd try this; the timeout should be in msec, to be consistent with other duration arguments. You can take any of the existing socket options like ZMQ_SNDBUF as a template, and make a pull request. On Mon, Dec 30, 2013 at 11:29 PM, Alex Grönholm wrote: > This isn't directly related to ZeroMQ, but it is somewhat relevant now given > A) the addition of the (yet unimplemented) heartbeating feature in ZMTP/3.0 > and B) the Windows TCP keepalive parameters fix I committed recently. > The question is: has someone here used TCP keepalives as a substitute for > application-level heartbeating? Given the operating model of ZeroMQ, using > TCP keepalives for this purpose would transparently shield the user from > stale connections. Are there any downsides to this? > TCP keepalives, when turned on, use a 2-hour interval by default (this is a > de facto standard), which makes them impractical unless the values are > adjusted. > I've done some research on that. From what I've gathered, setting TCP > keepalive parameters at the per-socket level is supported at least > on the following operating systems: > > Linux > FreeBSD > Windows (since Windows 2000; set only, read not supported; the number of > keepalive probes is fixed at 10; must be set before connecting; values in > milliseconds, not seconds) > Mac OS X (since Mountain Lion) > AIX > Solaris (values in milliseconds, not seconds) > > It seems that both iOS and Android support sending TCP keepalives, but > setting keepalive parameters is not supported. > Note that the Windows TCP keepalive parameters patch takes the time > intervals in seconds and multiplies them by 1000 on Windows for cross-platform > compatibility. There is no similar fix for Solaris yet, so Solaris users need > to do it at the application level for now. > > Setting the keepalive idle and retransmission delays to values like 10 and 5 > seconds would make a lot of sense to me. If the peer fails to respond to the > probes, zmq will just see a disconnection.
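[For readers of the archive: the per-socket knobs under discussion look like this at the plain-socket level, using the Linux option names (TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT take seconds there; as noted above, Windows and Solaris take milliseconds). The `hasattr` guards reflect that these constants are platform-specific. libzmq wraps the same knobs as the ZMQ_TCP_KEEPALIVE, ZMQ_TCP_KEEPALIVE_IDLE, ZMQ_TCP_KEEPALIVE_INTVL and ZMQ_TCP_KEEPALIVE_CNT socket options. The helper name is invented for the example.]

```python
import socket

def enable_keepalive(sock, idle=10, interval=5, probes=3):
    """Turn on TCP keepalives with aggressive timings (seconds, Linux names).
    Illustrative sketch of the values suggested in the thread (idle 10s,
    retransmission 5s); must be called before connect() on Windows."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Per-socket tuning constants only exist on some platforms; guard each one.
    if hasattr(socket, "TCP_KEEPIDLE"):    # seconds before the first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):   # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):     # probes before declaring the peer dead
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
```

[With idle=10 and interval=5, a dead peer is detected after roughly idle + probes × interval seconds, after which the application simply sees a disconnection.]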
Re: [zeromq-dev] Zyre ipaddress in Hello message
On Thu, Jan 2, 2014 at 1:06 PM, Arnaud Loonstra wrote: > IMHO the easiest way to solve this is to get the ipaddress through the > 0mq socket. Pieter you said this was available in 0mq4+. Are there any > examples or docs? It's not yet available in the libzmq API, only internally. I'm also not happy making the ZRE protocol depend on a specific version of ZeroMQ. The beacon isn't a problem afaics: it is trivial to get the sender address of a beacon, and we already do that. I don't see any requirement for endpoints except tcp:// at the moment, so a beacon with a port number is fine. I think we can get the socket interface address for each received beacon, and deliver that as a 3rd frame. -Pieter
Re: [zeromq-dev] Zyre ipaddress in Hello message
On 01/01/2014 10:49 PM, Pieter Hintjens wrote: > Yes, that seems like the simplest stupid solution. It would let us > also do IPv4 and IPv6 at the same time, as you say. > > On Wed, Jan 1, 2014 at 7:27 PM, Lindley French wrote: >> Binding a separate socket to each interface (in fact, binding separately to >> IPv6 and IPv4 addresses) has worked well for me in the past. Then just >> select() on all of them. >> [snip] Alternatively, as you say, it could get the originating IP address of each HELLO message. That is more work. The libzmq API doesn't provide that directly (we could extract it at authentication time, from ZMQ/4.0 and later). So option 1 then. The flow is: A gets a beacon from B, then connects to B and sends HELLO. B receives HELLO from A, and connects back to A. So A knows what address it received a beacon on. It seems we need to use recvmsg() instead of recvfrom(). There's an example here: I'm not sure about the recvmsg call. It has only been available in Python since version 3.3: http://docs.python.org/3.3/library/socket.html#socket.socket.recvmsg Is it available on other platforms? I've been playing with it, but I'm not sure it gives the info we need: https://gist.github.com/sphaero/8218025 My output is:

Setting up a broadcast beacon on 255.255.255.255:1200
b'hoi\n' 255.255.255.255 192.168.12.224 2
b'hoi\n' 255.255.255.255 192.168.12.224 3

The last number is the interface index. That's the only usable data, as the IP addresses of both packets are the same even though they were received on different interfaces. So then, zbeacon would update its hostname property after each recv, and the caller could use this to construct an accurate HELLO message. If zbeacon were updating its own IP address, we could run into a conflicting situation, as it is asynchronous (where its IP address gets set while receiving a beacon just as it is sending a HELLO to a node on another interface). The HELLO message should contain the IP address of the interface it is sent out from. Hence, IP-wise, it's easiest to get that address on the receiving side... Polling on multiple interfaces should be done anyway, e.g. for IPv6. But how would the solution scale to other transports... a beacon could just as well broadcast other transport possibilities? The beacon now only broadcasts its port number. But how do we deal with situations of mixed IPv4 and IPv6, and possibly others? The essence is that the beacon announces a node, and the beacon contains the minimal information on how one could connect to it. IMHO the easiest way to solve this is to get the IP address through the 0mq socket. Pieter, you said this was available in 0mq4+. Are there any examples or docs? Rg, Arnaud -- w: http://www.sphaero.org t: http://twitter.com/sphaero g: http://github.com/sphaero i: freenode: sphaero_z25
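[For readers of the archive: the recvmsg() approach that Arnaud's gist explores can be sketched with the stdlib alone. On Linux, enabling IP_PKTINFO makes each datagram carry ancillary data with the arrival interface index and the local address it was received on, which is exactly what the HELLO construction needs. The `IP_PKTINFO` constant is missing from some Python builds, hence the numeric fallback (8 is the Linux value); the helper name is invented for the example, and the in_pktinfo layout assumed is the Linux one.]

```python
import socket
import struct

IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)  # 8 is the Linux value

def recv_with_dst(sock, bufsize=1024):
    """Receive a UDP datagram plus the interface index and local address it
    arrived on, via IP_PKTINFO ancillary data (Linux)."""
    data, ancdata, _flags, src = sock.recvmsg(bufsize, socket.CMSG_SPACE(12))
    ifindex, local = None, None
    for level, ctype, cdata in ancdata:
        if level == socket.IPPROTO_IP and ctype == IP_PKTINFO:
            # struct in_pktinfo { int ipi_ifindex;
            #                     struct in_addr ipi_spec_dst;  /* local addr */
            #                     struct in_addr ipi_addr; };   /* header dst */
            ifindex, spec_dst, _hdr_dst = struct.unpack("I4s4s", cdata[:12])
            local = socket.inet_ntoa(spec_dst)
    return data, src, local, ifindex

# The receiving socket must opt in to the ancillary data before use:
#   sock.setsockopt(socket.IPPROTO_IP, IP_PKTINFO, 1)
```

[Unlike plain recvfrom(), this distinguishes two broadcasts that arrive on different interfaces even when the packet contents are identical, which matches the ifindex-only difference in the gist's output.]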