Thanks! Prefixing with 'transport.' worked.
It would be helpful if we were allowed to configure the TCP_KEEPIDLE,
TCP_KEEPCNT, and TCP_KEEPINTVL on the underlying socket as well.
Otherwise one would need to rely on reasonable sysctl settings.
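For reference, the Linux system-wide defaults live in sysctl
(net.ipv4.tcp_keepalive_time, tcp_keepalive_intvl and tcp_keepalive_probes);
plain Java only gained per-socket control over these with JDK 11's
jdk.net.ExtendedSocketOptions, long after this thread. A minimal sketch of
what that looks like (host, port and values are made up):

    import java.net.Socket;
    import jdk.net.ExtendedSocketOptions;

    public class KeepAliveTuning {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("mmq1", 61613);
            socket.setKeepAlive(true); // turn keepalive probing on
            // Override the kernel defaults for this socket only.
            socket.setOption(ExtendedSocketOptions.TCP_KEEPIDLE, 60);     // seconds idle before the first probe
            socket.setOption(ExtendedSocketOptions.TCP_KEEPINTERVAL, 10); // seconds between probes
            socket.setOption(ExtendedSocketOptions.TCP_KEEPCOUNT, 6);     // failed probes before the connection drops
        }
    }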
Thanks for the help.
Cheers,
-Josh
On 04/15/2
Josh, thanks for completing the loop with your reply.
I think the keepAlive option needs to be prefixed with transport. to make it
work for the accepted socket:
stomp://mmq1:61613?transport.keepAlive=true
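On the broker side the same option goes on the transportConnector URI in
activemq.xml; a sketch (connector name and bind address are just examples):

    <transportConnector name="stomp"
        uri="stomp://0.0.0.0:61613?transport.keepAlive=true"/>

The transport. prefix tells the connector to apply the option to each
accepted socket rather than to the connector itself.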
There is a bit too much going on in the transport configuration with the
TcpTransport options
Hi Josh,
that config should be
can you try it out and see if it works for you?
Cheers
--
Dejan Bosanac - http://twitter.com/dejanb
Open Source Integration - http://fusesource.com/
ActiveMQ in Action - http://www.manning.com/snyder/
Blog - http://www.nighttale.net
On Wed, Apr 14, 2010 at 11
Folks ... just because I hate nothing more than coming across a post
without a solution, I thought I'd post what I did. After discovering
the same problem on Solaris as Linux I decided that TCP keepalive might
be the answer.
ActiveMQ does appear to allow you to set this:
http://activem
Hi Dejan,
I don't think it would be practical or correct for us to do that client
side. The thing that gets me, though, is that killing the client *process*
causes the TCP connection to get closed on the other end, but killing the
client *host* keeps the TCP connection established on the other end.
Hi Josh,
that's the job of the inactivity monitor when using OpenWire. Unfortunately,
Stomp doesn't support that in version 1.0; it is something we want to add in
the next version of the spec. Maybe implementing something like that at the
application level would help in your case?
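(For the record, STOMP 1.1 later added a heart-beat header for exactly
this.) At the application level it could be as simple as a scheduled SEND to
a dedicated destination, with the other side treating a missed interval as a
dead peer; a rough sketch over a raw socket (host, port, interval and
destination are all made up):

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class StompHeartbeat {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("mmq1", 61613);
            OutputStream out = socket.getOutputStream();
            // STOMP 1.0 frames are command, headers, blank line, body, NUL.
            out.write("CONNECT\n\n\0".getBytes(StandardCharsets.UTF_8));
            out.flush();

            ScheduledExecutorService timer =
                    Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(() -> {
                try {
                    out.write("SEND\ndestination:/topic/heartbeat\n\nping\0"
                            .getBytes(StandardCharsets.UTF_8));
                    out.flush();
                } catch (Exception e) {
                    // a failed write is itself a liveness signal
                    throw new RuntimeException(e);
                }
            }, 0, 10, TimeUnit.SECONDS);
        }
    }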
Cheers
--
Dejan
Hmm. If a timeout were the solution to this problem, how would you be able
to tell the difference between something being wrong and the client just
being slow?
I did an strace on the server and discovered how the timeout is being
used, as a parameter to poll:
6805 10:31:15 poll([{fd=94, events
Thanks, Gary, for the, as usual, helpful information.
It looks like the broker may be suffering from exactly the same problem
we encountered when implementing client-side failover. Namely that when
the master broker went down a subsequent read on the socket by the
client would hang (well actually
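A read timeout is the usual guard against that kind of hang: the blocking
read fails with SocketTimeoutException instead of waiting forever. On a
plain socket that is one call (host, port and the one-minute value are
arbitrary):

    import java.net.Socket;

    public class BoundedRead {
        public static void main(String[] args) throws Exception {
            Socket socket = new Socket("mmq1", 61613);
            socket.setSoTimeout(60_000); // milliseconds
            // This read now throws SocketTimeoutException if the peer
            // goes silent, instead of hanging indefinitely.
            int first = socket.getInputStream().read();
            System.out.println("read: " + first);
        }
    }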
The re-dispatch is triggered by the TCP connection dying; netstat can help
with the diagnosis here. Check the connection state of the broker port after
the client host is rebooted. If the connection is still active, possibly in
a TIME_WAIT state, you may need to configure some additional timeout
options.
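Something along these lines, assuming the STOMP port from earlier in the
thread:

    netstat -tan | grep 61613

A connection that is still listed as ESTABLISHED long after the client host
has gone away is the symptom described above.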
Hi Josh,
can you create a reproducible test case for this and open a Jira?
Cheers
--
Dejan Bosanac - http://twitter.com/dejanb
Open Source Integration - http://fusesource.com/
ActiveMQ in Action - http://www.manning.com/snyder/
Blog - http://www.nighttale.net
On Tue, Apr 13, 2010 at 8:43 PM,
I am using client acknowledgements with a prefetch size of 1 and no
message expiration policy. When a consumer subscribes to a queue I can
see that the message gets dispatched correctly. If the process gets
killed before retrieving and acknowledging the message I see the message
getting re-dis
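For context, in Stomp terms that setup corresponds roughly to a
subscription like this (queue name is a placeholder; activemq.prefetchSize
is ActiveMQ's Stomp header for the prefetch, and ^@ marks the NUL frame
terminator):

    SUBSCRIBE
    destination:/queue/TEST.QUEUE
    ack:client
    activemq.prefetchSize:1

    ^@

Until the client sends an ACK frame for the received message-id, killing
the process (or losing the connection) makes the broker re-dispatch the
message.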