We are unable to view the queues when clicking on the queues tab in the AMQ
console. When we go to the tab it says "Exception occurred while processing
this request, check the log for more information". The logging errors are
below. We are using version 5.5.1. Any tips on what the problem might be?
We are using a shared file system master/slave configuration. We have
schedulerSupport enabled. When we encounter a failover all 'scheduled' messages
are lost. This is because it is using the local disk for the data store for the
Job Scheduler. How can I set this directory to our shared file system?
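One approach, assuming your 5.5.x broker supports a schedulerDirectory attribute on the broker element (worth verifying against the release docs for your exact version), is to point the job scheduler store at the same shared mount the message store uses; the path below is a placeholder:

```xml
<!-- Sketch: keep the job scheduler store on the shared file system so
     the slave can recover scheduled messages after failover.
     /mnt/shared/activemq is a placeholder path. -->
<broker xmlns="http://activemq.apache.org/schema/core"
        schedulerSupport="true"
        schedulerDirectory="/mnt/shared/activemq/scheduler">
  <!-- ... rest of broker configuration ... -->
</broker>
```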
We have been using ActiveMQ in production for about a year now. We use
persistent messaging and shared file system master/slave for high availability
(don't forget to enable tcp_keepalive) and use STOMP. Sparse documentation is
my largest complaint. I purchased a $25 doc from some private company
STOMP does support prefetch.
>
> activemq.prefetchSize = n during the subscription stage.
>
> From http://activemq.apache.org/stomp.html
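A SUBSCRIBE frame carrying that header can be sketched in Python (raw frame construction only, no broker connection; the queue name is a made-up example):

```python
def subscribe_frame(destination, prefetch, sub_id="sub-0"):
    """Build a raw STOMP SUBSCRIBE frame asking ActiveMQ to dispatch
    at most `prefetch` unacknowledged messages at a time."""
    headers = {
        "id": sub_id,
        "destination": destination,
        "ack": "client",
        # ActiveMQ-specific header documented at activemq.apache.org/stomp.html
        "activemq.prefetchSize": str(prefetch),
    }
    lines = ["SUBSCRIBE"] + [f"{k}:{v}" for k, v in headers.items()]
    # A STOMP frame is: command line, header lines, blank line, body, NUL.
    return ("\n".join(lines) + "\n\n\x00").encode("utf-8")

frame = subscribe_frame("/queue/work", prefetch=1)
```

The bytes would be written to the broker socket as-is; with prefetch 1 the broker holds back further dispatch until the outstanding message is acknowledged.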
>
> On 5 May 2011 21:20, Josh Carlson wrote:
>
> > We are using the STOMP protocol which doesn't support that. I was
> curious
> >> composite destination so that you can subscribe to all destinations at
> >> once.
> >> Also, there is a delay between new consumer registration and async
> >> dispatch, so waiting a few seconds before unsubscribe is necessary.
> >>
> >>
> http://activemq.apache.org/composite-destinations.html
>
> On 28 April 2011 23:41, Josh Carlson wrote:
> > We are using a shared file system Master/Slave for the broker. Version
> 5.4.2. Our clients use the STOMP protocol. We use client
> acknowledgements and communicate synchronously with the broker (using receipts).
If I have prefetch set to 1 and I retrieve a message, is another one dispatched
before it is ack'd? I'm trying to scale the number of consumers. My benchmark
test pre-produces 50 messages for each consumer (all in one queue). Then it
sets N consumers active and I measure the time it takes to pull t
We are using a shared file system Master/Slave for the broker. Version 5.4.2.
Our clients use the STOMP protocol. We use client acknowledgements and
communicate synchronously with the broker (using receipts). We set prefetch to
1 in our subscriptions. Our clients iterate over several queues, subscribing
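At the frame level, the client-acknowledge-with-receipt cycle described here looks roughly like the sketch below (frame construction only; the message-id and receipt id are placeholders):

```python
def stomp_frame(command, headers):
    """Serialize a STOMP frame: command line, header lines, blank line, NUL."""
    head = "\n".join([command] + [f"{k}:{v}" for k, v in headers.items()])
    return (head + "\n\n\x00").encode("utf-8")

# Acknowledge a dispatched message and request a RECEIPT frame, so the
# client can block until the broker confirms the ack (synchronous style).
ack = stomp_frame("ACK", {"message-id": "ID:broker-1:1", "receipt": "r-1"})
```

After sending this, a synchronous client reads frames until it sees a RECEIPT whose receipt-id matches, which is what "communicate synchronously with the broker" amounts to in practice.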
We are using version 5.3.0 with a shared file system master/slave configuration
and using persistent messaging with client acknowledgements. An NFSv4 mount
point is used for both the lock file and the persistent storage. KahaDB is
being used as the persistence adaptor.
We have encountered issue
en the JVM well more than a GB.
Thanks for the help
> -----Original Message-----
> From: Bruce Snyder [mailto:bruce.sny...@gmail.com]
> Sent: Tuesday, November 09, 2010 1:41 PM
> To: users@activemq.apache.org
> Subject: Re: storeUsage with kahaDB which files
>
> On Tue, Nov 9
We are running the ActiveMQ broker version 5.3.0 and using STOMP clients for
producers and consumers. We ran into an issue this morning where the store
usage had been exceeded and producers were blocked on sends. I noticed that we
had the following configured for systemUsage:
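For comparison, a systemUsage block typically looks like the sketch below. The limits are illustrative only, and the sendFailIfNoSpace attribute is an assumption worth verifying against the 5.3.0 docs:

```xml
<!-- Sketch: illustrative limits, not recommendations. When storeUsage
     fills up, persistent producers block on send by default; setting
     sendFailIfNoSpace="true" is meant to make sends fail fast instead. -->
<systemUsage>
  <systemUsage sendFailIfNoSpace="false">
    <memoryUsage>
      <memoryUsage limit="64 mb"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="10 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="1 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
```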
transport configuration with TcpTransport options, 'socket.'-prefixed options
and 'transport.'-prefixed options. It is easy to get confused, but almost all
options are settable in some way.
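As an illustration of the prefix convention (option names taken from the TCP transport reference; verify against your version): on a broker transportConnector, `socket.` options are applied to the underlying java.net.Socket and `transport.` options to the TcpTransport itself:

```xml
<!-- Sketch: socket.keepAlive enables TCP keepalive on accepted
     connections (the tcp_keepalive advice above); transport.soTimeout
     is a socket read timeout in milliseconds. -->
<transportConnector name="stomp"
    uri="stomp://0.0.0.0:61613?socket.keepAlive=true&amp;transport.soTimeout=30000"/>
```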
On 14 April 2010 22:32, Josh Carlson <jcarl...@e-dialog.com> wrote:
Folks ... just because I hate nothing more than
0 11:58 AM, Josh Carlson wrote:
Hi Dejan,
I don't think it would be practical or correct for us to do that
client side. The thing that gets me though is that killing the client
*process* causes the tcp connection to get closed on the other end.
But killing the client *host* keeps the tcp connection
.manning.com/snyder/
Blog - http://www.nighttale.net
On Wed, Apr 14, 2010 at 5:41 PM, Josh Carlson <jcarl...@e-dialog.com> wrote:
Hmm. If a timeout was the solution to this problem how would you
be able to tell the difference between something being wrong and
the client
ne. Maybe this is more of a kernel issue. I would think that
when the poll is done that it would trigger the connection to move from
the ESTABLISHED state and get closed.
We are using Linux, kernel version 2.6.18, but I've seen this same issue
on a range of different 2.6 versions.
-Josh
timeout options on the broker side.
On 13 April 2010 19:43, Josh Carlson <jcarl...@e-dialog.com> wrote:
I am using client acknowledgements with a prefetch size of 1 with
no message expiration policy. When a consumer subscribes to a
queue I can see that the message gets dispatched correctly.
I am using client acknowledgements with a prefetch size of 1 with no
message expiration policy. When a consumer subscribes to a queue I can
see that the message gets dispatched correctly. If the process gets
killed before retrieving and acknowledging the message I see the message
getting re-dispatched.
I am using a Stomp client with ActiveMQ 5.3.0. I have some slow
consumers and find that they wind up locking up messages in the dispatch
queue even when there are other consumers available to consume the messages. I
believe prefetch is the cause of this problem.
What I would like to do is set prefetch
Is your client hung? Is your client running under Linux? I have a
problem where if the actual host machine serving as the master goes down
(as opposed to just the process) the client processes hang trying to
read/write from the socket connection. I've yet to investigate the issue
but was just curious
replayed. This should only occur if the ack did not actually get to
the broker.
If previously acked messages are getting replayed there is something
wrong.
On 24 February 2010 19:02, Josh Carlson <jcarl...@e-dialog.com> wrote:
When using a shared file system master/slave
When using a shared file system master/slave activemq configuration and
client acknowledgements we run into a problem when
our clients fail over to a new server. The problem is that the new
server does not appear to have any knowledge of the pending
messages that the old server had dispatched to clients
Actually I still have a problem. If another consumer asks for a message
before the original consumer acks the message (the ack that spans across
the broker) the message can be delivered to the other consumer.
On 02/23/2010 01:22 PM, Josh Carlson wrote:
I answered my own question. In order for
I answered my own question. In order for this to work my client has to
re-subscribe to the queues. Doing this solved the issue.
-Josh
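Assuming a plain-socket STOMP client, the re-subscribe fix described above amounts to replaying a CONNECT and one SUBSCRIBE per queue after reconnecting to the new master. A sketch of the frames involved (queue names and credentials are placeholders):

```python
def stomp_frame(command, headers):
    """Serialize a STOMP frame: command, headers, blank line, NUL."""
    head = "\n".join([command] + [f"{k}:{v}" for k, v in headers.items()])
    return (head + "\n\n\x00").encode("utf-8")

def reconnect_frames(queues, login="user", passcode="pass"):
    """Frames a client replays after failing over to a new master:
    CONNECT first, then one SUBSCRIBE per queue, so the new broker
    re-dispatches any pending (unacknowledged) messages."""
    frames = [stomp_frame("CONNECT", {"login": login, "passcode": passcode})]
    for i, q in enumerate(queues):
        frames.append(stomp_frame("SUBSCRIBE", {
            "id": f"sub-{i}",
            "destination": q,
            "ack": "client",
            "activemq.prefetchSize": "1",
        }))
    return frames

frames = reconnect_frames(["/queue/a", "/queue/b"])
```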
On 02/23/2010 01:00 PM, Josh Carlson wrote:
I've been prototyping a 'shared file system master/slave'
implementation. I'm using the Stomp protocol
I've been prototyping a 'shared file system master/slave'
implementation. I'm using the Stomp protocol in my client and am trying
to get failover to work properly. Currently when I failover to a new
master, pending messages are persisted in the new master. However,
message state seems to be lost