On Thu, 02 Apr 2015 09:41:24 -0500, Thomas Rodgers wrote:
> can you get a stacktrace?
>
> On Thu, Apr 2, 2015 at 9:27 AM, Dario Meloni wrote:
>
>
>>
>> On Tue, 27 May 2014 10:55:57 +0100, Jon Gjengset wrote:
>>
>> I have this problem constantly in software I am developing at my
>> company with ZMQ 4.0.4.
can you get a stacktrace?
On Thu, Apr 2, 2015 at 9:27 AM, Dario Meloni wrote:
>
>
> On Tue, 27 May 2014 10:55:57 +0100, Jon Gjengset wrote:
>
> I have this problem constantly in software I am developing at my
> company with ZMQ 4.0.4.
> The issue arises in the same situation: one thread reading and one
> writing on inproc socket pairs.
On Tue, 27 May 2014 10:55:57 +0100, Jon Gjengset wrote:
I have this problem constantly in software I am developing at my
company with ZMQ 4.0.4.
The issue arises in the same situation: one thread reading and one writing
on inproc socket pairs. When stress-testing the application, the assertion
fails.
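For reference, a minimal sketch of that setup (the endpoint name, message
count, and file name are illustrative, not from the original report):

    // One thread writing, one reading, over an inproc PAIR, with the
    // context torn down at the end.
    // Build: g++ -std=c++11 pair_stress.cpp -lzmq
    #include <zmq.h>
    #include <cassert>
    #include <thread>

    int main ()
    {
        void *ctx = zmq_ctx_new ();
        void *reader = zmq_socket (ctx, ZMQ_PAIR);
        int rc = zmq_bind (reader, "inproc://pipe");
        assert (rc == 0);

        std::thread writer_thread ([ctx] () {
            void *writer = zmq_socket (ctx, ZMQ_PAIR);
            int rc = zmq_connect (writer, "inproc://pipe");
            assert (rc == 0);
            for (int i = 0; i < 100000; i++)
                zmq_send (writer, "x", 1, 0);
            zmq_close (writer);
        });

        char buf [1];
        for (int i = 0; i < 100000; i++)
            zmq_recv (reader, buf, sizeof buf, 0);

        writer_thread.join ();
        zmq_close (reader);
        zmq_ctx_term (ctx);   // the reported crashes cluster around shutdown
        return 0;
    }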
Hi Jon,
You can get a list of the changed files by executing patch --dry-run:
marikm@linux-lwt7:~/Downloads/zeromq-3.2.4> patch --dry-run -p1 -i
~/workspace/odm/patches/zeromq-3.2.4-01.patch
checking file src/mailbox.cpp
checking file src/mailbox.hpp
checking file src/mutex.hpp
And what was changed?
> I take it you mean this one?
> http://lists.zeromq.org/pipermail/zeromq-dev/2014-March/025705.html
>
> I notice that the patch is for 3.2. Do you have it working on 4.x?
> Also, could you please regenerate the patch using "diff -u" or "git
> diff" so that it is clearer exactly what has changed?
I take it you mean this one?
http://lists.zeromq.org/pipermail/zeromq-dev/2014-March/025705.html
Looks like you were having the same issue as I am, especially since you
say you occasionally get segfaults and "Resource not available" too.
I'll try the patch when I get back to work tomorrow.
I notice that the patch is for 3.2. Do you have it working on 4.x?
Also, could you please regenerate the patch using "diff -u" or "git
diff" so that it is clearer exactly what has changed?
This might be the equivalent of the "Assertion failed: ok (mailbox.cpp:84)"
issue. I have sent a patch for it, and the patched ZeroMQ works well for us.
Please try the patched version before investigating further.
Regards,
Slavek
I had roughly similar problems that drove me crazy,
but was able to solve most of them with a call to
zclock_sleep(x) placed in just the right spot, to
allow the asynchronous nature of things to calm down.
That or handshaking. Sometimes, as little as 8 msec
was enough to solve the problem. Usually
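With CZMQ, that looks something like the following (a sketch assuming a
zsock-era CZMQ; the 8 msec figure is taken from the message above, and
where the sleep actually helps depends on the application):

    #include <czmq.h>

    int main (void)
    {
        zsock_t *reader = zsock_new_pair ("@inproc://pipe");
        zsock_t *writer = zsock_new_pair (">inproc://pipe");
        zclock_sleep (8);    // give the async plumbing a moment to settle
        zstr_send (writer, "ready");
        char *msg = zstr_recv (reader);
        zstr_free (&msg);
        zsock_destroy (&writer);
        zsock_destroy (&reader);
        return 0;
    }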
> > Could there be a race on the message queue inside inproc?
>
> Could well be, at shutdown time.
>
> I can't reproduce the problem using your example. It might be easier
> to make an example that creates inproc pairs, uses them briefly, and
> then shuts down the context, and does this rapidly in a loop.
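Such a reproducer might look like this (a sketch; the iteration count and
endpoint are arbitrary):

    // Churn inproc pairs and context shutdown in a tight loop.
    // Build: g++ -std=c++11 churn.cpp -lzmq
    #include <zmq.h>
    #include <cassert>
    #include <thread>

    int main ()
    {
        for (int iter = 0; iter < 10000; iter++) {
            void *ctx = zmq_ctx_new ();
            void *a = zmq_socket (ctx, ZMQ_PAIR);
            int rc = zmq_bind (a, "inproc://churn");
            assert (rc == 0);

            std::thread t ([ctx] () {
                void *b = zmq_socket (ctx, ZMQ_PAIR);
                int rc = zmq_connect (b, "inproc://churn");
                assert (rc == 0);
                zmq_send (b, "x", 1, 0);
                zmq_close (b);
            });

            char buf [1];
            zmq_recv (a, buf, sizeof buf, 0);
            t.join ();
            zmq_close (a);
            zmq_ctx_term (ctx);   // shutdown is where the assertion fires
        }
        return 0;
    }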
On Tue, May 20, 2014 at 1:29 PM, Jon Gjengset wrote:
> Just to make absolutely sure, sending a pointer this way with ØMQ is
> completely legal even if p == NULL, right?
This is valid, yes.
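For concreteness, passing a raw pointer this way looks like the following
(send_ptr and recv_ptr are hypothetical helpers, not libzmq API; this is
only safe between threads of one process, over inproc):

    #include <zmq.h>
    #include <cassert>

    // Send the pointer value itself as the message body; p may be NULL.
    static void send_ptr (void *socket, void *p)
    {
        int rc = zmq_send (socket, &p, sizeof p, 0);
        assert (rc == (int) sizeof p);
    }

    // Receive a pointer value sent by send_ptr.
    static void *recv_ptr (void *socket)
    {
        void *p = NULL;
        int rc = zmq_recv (socket, &p, sizeof p, 0);
        assert (rc == (int) sizeof p);
        return p;
    }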
> Could there be a race on the message queue inside inproc?
Could well be, at shutdown time.
I can't reproduce the problem using your example. It might be easier
to make an example that creates inproc pairs, uses them briefly, and
then shuts down the context, and does this rapidly in a loop.
> > Otherwise, you're going to have to hunt down the issue by adding
> > tracing into the code, starting where it's asserting. E.g. in
> > signaler.cpp, what's the value of dummy when it asserts?
>
> That's what I'm in the process of doing now.
I just tried to comment out all my calls to zmq_close.
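For anyone following along: in the 4.0.x source, signaler_t::recv () reads
a dummy value and asserts on it, so the tracing suggested above amounts to
something like this (a sketch against the eventfd code path; exact types
and member names vary by platform and version):

    // src/signaler.cpp, in signaler_t::recv (), before the assertions
    // (add #include <cstdio> if the file doesn't already have it):
    uint64_t dummy;
    ssize_t sz = read (r, &dummy, sizeof (dummy));
    fprintf (stderr, "signaler_t::recv: sz=%zd dummy=%llu errno=%d\n",
             sz, (unsigned long long) dummy, errno);
    errno_assert (sz == sizeof (dummy));
    zmq_assert (dummy == 1);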
> Your design is correct afaics, it's very much what we do in CZMQ apps
> and I've not seen that crash. A global context, and sockets per thread
> is fine.
>
> Best would be a repeatable test case to diagnose this.
I'll keep working on that. Not being able to create a test case is what
prevents me from debugging this further.
> > I tried adding a mutex around every call to zmq_close now, and the
> > assertion still fails, so it seems like this is not a close-close race.
> > Since the assertion fail happens in a recv call, it looks more like a
> > close-recv problem?
>
> Another thought: Have you tried setting the linger option?
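That is, something along these lines (close_now is a hypothetical helper;
ZMQ_LINGER itself is a standard socket option):

    #include <zmq.h>

    // Close a socket without lingering on undelivered messages.
    static void close_now (void *socket)
    {
        int linger = 0;   // 0 = drop pending messages at close
        zmq_setsockopt (socket, ZMQ_LINGER, &linger, sizeof linger);
        zmq_close (socket);
    }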
Hi Jon,
Thanks for reporting this error.
Your design is correct afaics, it's very much what we do in CZMQ apps
and I've not seen that crash. A global context, and sockets per thread
is fine.
Best would be a repeatable test case to diagnose this.
Otherwise, you're going to have to hunt down the issue by adding
tracing into the code, starting where it's asserting. E.g. in
signaler.cpp, what's the value of dummy when it asserts?
On 19 May 2014, at 11:51 pm, Jon Gjengset wrote:
>>> In the rbczmq gem, I decided to put a mutex around any calls to close
>>> a socket so only one could happen at a time. This incurs a performance
>>> hit for applications opening and closing many sockets - but I
>>> understand that is not a recommended usage pattern.
> > In the rbczmq gem, I decided to put a mutex around any calls to close
> > a socket so only one could happen at a time. This incurs a performance
> > hit for applications opening and closing many sockets - but I
> > understand that is not a recommended usage pattern.
>
> If there is a race condition
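In C++ terms, the serialization described in the quoted text amounts to the
following sketch (close_serialized is a hypothetical wrapper; rbczmq itself
implements this inside its Ruby extension):

    #include <zmq.h>
    #include <mutex>

    // Serialize every socket close behind one process-wide mutex,
    // trading close throughput for safety.
    static std::mutex close_mutex;

    static int close_serialized (void *socket)
    {
        std::lock_guard<std::mutex> guard (close_mutex);
        return zmq_close (socket);
    }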
> > > Nevertheless, my application will sometimes (although not always),
> > > crash with an assertion failure at mailbox.cpp:82:
> >
> > It might be worth noting that I have only ever observed the crash during
> > the shutdown procedure after the second stage has closed its outgoing
> > socket.
>
On 19 May 2014, at 11:26 pm, Jon Gjengset wrote:
>> Nevertheless, my application will sometimes (although not always),
>> crash with an assertion failure at mailbox.cpp:82:
>
> It might be worth noting that I have only ever observed the crash during
> the shutdown procedure after the second stage has closed its outgoing
> socket.
> Nevertheless, my application will sometimes (although not always),
> crash with an assertion failure at mailbox.cpp:82:
It might be worth noting that I have only ever observed the crash during
the shutdown procedure after the second stage has closed its outgoing
socket.
Hi all!
I'm fairly new to ØMQ, but have read most of The Guide and think I
understand how my particular problem should be built on top of ØMQ.
I'm building a multithreaded application with staged threads, similar to
the "Signaling Between Threads (PAIR Sockets)" chapter in The Guide.
My application will sometimes (although not always) crash with an
assertion failure at mailbox.cpp:82.