On 16.02.2017 16:26, Luca Boccassi wrote:
What's the file limit on the 2 systems? (With the user that runs the
program)
ulimit -n
on both 6.8 and 7.3:
development environment: ulimit -n = 1024
installed environment: ulimit -n = 4096
With a basic sampling of file descriptor
Are you building your own binaries in both cases?
yes
What polling mechanism was RHEL 6 using? You can see it in
the ./configure output: "Using 'epoll' polling system"
from config.log:
Using 'epoll' polling system with CLOEXEC
Hello,
I could use some advice to diagnose the following issue.
I have a program that has been running without problems for a couple of
years on Red Hat Enterprise Linux 6 at various sites.
On RHEL7, the program triggers the assertion
Bad file descriptor (src/epoll.cpp:131)
in about
On 02.12.2016 15:00, Luca Boccassi wrote:
Makes sense, we already set the CLOEXEC flag in the sockets.
Given it's causing you issues, would you be able to test it and send a
PR to fix it? Thanks!
it's going to take a few days until I get to it.
While investigating a problem involving fork() and zeromq, I found some
file descriptor leaks.
1. The function
zmq::epoll_t::epoll_t (const zmq::ctx_t &ctx_)
in src/epoll.cpp creates an epoll instance with
epoll_fd = epoll_create(1);
SUGGESTION: replace with
epoll_fd = epoll_create1 (EPOLL_CLOEXEC);
On 25.11.2016 11:50, Luca Boccassi wrote:
What I can say is that we have a unit test for this situation:
https://github.com/zeromq/libzmq/blob/master/tests/test_fork.cpp
And the child closes the (TCP) socket explicitly before the context.
Which is in fact what should happen in all cases.
The p
* Background
I have a service that starts workers on demand with fork+exec.
The requests arrive over zeromq sockets.
After the fork, before the exec, I close all file descriptors > 2,
keeping only stdin/out/err. I then exec the requested program.
* Problem
It works. Except that I get some
Hi Doron,
I tried out your latest repo
https://github.com/somdoron/libzmq/commit/3775d0853a8c1f1c3854a94c7fe12e78046faeca
with the changes to src/socket_base.cpp, src/pipe.cpp and src/pipe.hpp.
I confirm that the problem reported at
https://github.com/zeromq/libzmq/issues/216
On 20.10.2016 15:46, Doron Somech wrote:
Actually, those are different issues. If you suffer from the pubsub
issue, I think I traced the bug and have a solution. Take a look at the
issue. If you suffer from the 100K issue, I think that is a different
one; anyway, you can try the solution as well.
On 20.10.2016 14:31, Doron Somech wrote:
Also I think it smells like using a socket from multiple threads...
Unfortunately no, the assertion strikes in a single-threaded application.
See also the test case at
https://github.com/zeromq/libzmq/issues/2163
On Thu, Oct 20, 2016 at 3:28 PM,
Hi,
Ranjeet Kumar described his troubles with the assertion in
http://lists.zeromq.org/pipermail/zeromq-dev/2016-September/030839.html
According to
http://lists.zeromq.org/pipermail/zeromq-dev/2016-September/030851.html
he apparently solved his problem by commenting out the assert
On 07.10.2016 10:14, Laughing wrote:
>>> Does that mean that socket.disconnect does not disconnect from
all endpoints connected before?
see http://api.zeromq.org/4-1:zmq-disconnect
int zmq_disconnect (void *socket, const char *endpoint);
zmq_disconnect disconnects the socket from the endpoint specified by the
endpoint argument.
On 07.10.2016 08:04, Laughing wrote:
I think that the socket cannot recv message any more after disconnect.
Not quite correct: the SUB socket could still be connected to several
other PUB sockets.
The abort is still present when you modify the test case accordingly.
The first frame is als