I’ve been thinking about how messages are stored in the broker and ways to
improve the storage in memory.
First, right now, messages are stored in the same heap, and if you’re using
the memory store, that’s going to add up. This will increase GC
latency, and you actually need 2x more
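For context, with a purely in-memory broker every pending message sits on the same JVM heap as the application, so a backlog competes with your own objects and drives up GC pauses. A minimal sketch of capping that usage on an embedded broker (the broker name and limits here are made up, not the configuration discussed in this thread):

    import org.apache.activemq.broker.BrokerService;

    public class EmbeddedMemoryBroker {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("memory-only");   // hypothetical name
            broker.setPersistent(false);           // messages stay on the JVM heap

            // Cap the heap the broker may use for pending messages; once the
            // limit is hit, producer flow control or paging takes over.
            broker.getSystemUsage().getMemoryUsage().setLimit(512L * 1024 * 1024); // 512 MB, arbitrary

            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
            broker.waitUntilStopped();
        }
    }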
Are you reaching your diskUsage threshold (see systemUsage)?
On Sunday, April 19, 2015, Kevin Burton bur...@spinn3r.com wrote:
Interesting. It’s already 1 in the connection configuration. I assume you
mean queuePrefetch as it’s named differently in the destination policy.
On Sun, Apr 19,
Nowhere near it… we’re running in memory and the values are much higher
than this… I might try to disable it though… just as an experiment.
On Sun, Apr 19, 2015 at 7:39 PM, Geoffrey Arnold geoffrey.arn...@gmail.com
wrote:
Are you reaching your diskUsage threshold (see systemUsage)?
On
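The usage counters behind that question can also be inspected at runtime on an embedded broker; a rough sketch, assuming a BrokerService instance is already available (method name and output format are illustrative):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.usage.SystemUsage;

    public class UsageCheck {
        // 'broker' is assumed to be an already-started embedded BrokerService
        static void logUsage(BrokerService broker) {
            SystemUsage usage = broker.getSystemUsage();
            System.out.printf("memory: %d%% of %d bytes%n",
                    usage.getMemoryUsage().getPercentUsage(), usage.getMemoryUsage().getLimit());
            System.out.printf("store:  %d%% of %d bytes%n",
                    usage.getStoreUsage().getPercentUsage(), usage.getStoreUsage().getLimit());
            System.out.printf("temp:   %d%% of %d bytes%n",
                    usage.getTempUsage().getPercentUsage(), usage.getTempUsage().getLimit());
        }
    }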
Also, I’ve run with and without producer flow control and that also doesn’t
impact the situation.
On Sun, Apr 19, 2015 at 8:01 PM, Kevin Burton bur...@spinn3r.com wrote:
Here’s the public gist of our XML config. (it needs some comment cleanup
but that’s what we’re running with).
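Producer flow control itself is toggled per destination in the broker’s destination policy; the programmatic equivalent of that policyEntry looks roughly like this (a sketch for an embedded broker, not taken from the gist above):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class FlowControlPolicy {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();

            // Default policy for every destination unless overridden.
            PolicyEntry defaults = new PolicyEntry();
            defaults.setProducerFlowControl(false);   // producers run ahead; systemUsage limits still apply

            PolicyMap policyMap = new PolicyMap();
            policyMap.setDefaultEntry(defaults);
            broker.setDestinationPolicy(policyMap);

            broker.start();
        }
    }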
What version of ActiveMQ are you using? Please send the contents of your
activemq.xml file plus details of your producer and consumer and how they're
implemented.
Have you set the broker's logging to TRACE level prior to running your
experiments? If so, please attach or use pastebin.
Thanks,
Paul
On
Interesting. It’s already 1 in the connection configuration. I assume you
mean queuePrefetch as it’s named differently in the destination policy.
On Sun, Apr 19, 2015 at 5:42 PM, Justin Reock justin.re...@roguewave.com
wrote:
Have you tried forcing prefetch to 1 as a destination policy?
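Since the thread keeps switching between the two places prefetch can be set, here is an illustrative sketch of both: on the client’s connection factory and as a broker-side destination policy (the broker URL and values are assumptions, not the actual setup):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class PrefetchBothWays {
        public static void main(String[] args) throws Exception {
            // Client side: same effect as ?jms.prefetchPolicy.queuePrefetch=1 on the broker URL.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
            factory.getPrefetchPolicy().setQueuePrefetch(1);

            // Broker side: the same limit expressed as a destination policy for all queues.
            BrokerService broker = new BrokerService();
            PolicyEntry entry = new PolicyEntry();
            entry.setQueuePrefetch(1);
            PolicyMap map = new PolicyMap();
            map.setDefaultEntry(entry);
            broker.setDestinationPolicy(map);
        }
    }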
Here’s the public gist of our XML config. (it needs some comment cleanup
but that’s what we’re running with).
https://gist.github.com/burtonator/b5f4228b0f0acbf05b4e
We’re running 5.10.2. I’ve reviewed the bugs fixed since then and nothing
seems to apply to our situation. I would upgrade but
Also, I was thinking that this MIGHT be a bug with unfair scheduling.
synchronized and read/write locks aren’t fair.
So it’s entirely possible that the client is scheduling work on the faster
queue because they reply quicker and thus win the lock race.
This would explain why I can read from
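To make the fairness point concrete: synchronized blocks and the default ReentrantLock/ReentrantReadWriteLock allow barging, so a thread that releases and immediately re-requests a lock can keep beating threads that are already waiting, while a fair lock hands it to the longest waiter instead. A minimal JDK-only sketch (nothing ActiveMQ-specific):

    import java.util.concurrent.locks.ReentrantLock;

    public class LockFairness {
        // Default (unfair): a fast consumer that re-acquires immediately can
        // repeatedly win the race against consumers already queued up.
        static final ReentrantLock unfair = new ReentrantLock();

        // Fair: the lock is granted in roughly FIFO order, so the consumer of
        // the slow queue eventually gets a turn instead of being starved.
        static final ReentrantLock fair = new ReentrantLock(true);

        static void drain(ReentrantLock lock, Runnable work) {
            lock.lock();
            try {
                work.run();
            } finally {
                lock.unlock();
            }
        }
    }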
Have you tried forcing prefetch to 1 as a destination policy?
-Justin
On Apr 19, 2015 8:15 PM, Kevin Burton bur...@spinn3r.com wrote:
I’m totally stumped on this bug…
Essentially, I have a queue that locks up and consumers in my main daemon
no longer consume messages from it.
It’s basically
boekhold wrote
Hi all,
Does anybody know which jars from ActiveMQ 5.11 I need to use with IBM
Websphere Application Server 8.5.5 in order to create a new ActiveMQ JMS
Provider?
Figured it out. It's really simple actually. In the WAS console, go to
Resources > JMS > JMS Providers, and create