I suspect you may be running into the problem that the client keeps track of
known temporary destinations on the broker, and immediately rejects attempts
to produce to temporary destinations that the client doesn't know about,
even if they exist on the broker (although that's not the intent).
At this point, I am fairly open to ideas on the path forward. And I agree
with above statements that the PMC decides, but that it's best to have this
discussion in a transparent/open manner (i.e. on a public discussion board
like this one).
Jeff makes a good point about adoption.
Another thought
One possible (and common) cause of slowness on queues with message groups is
the Max Page Size limitation. If there are a number of messages at the head
of the queue that cannot be dispatched for any reason (no consumers
available, all consumers' prefetch buffers full, ...), then no subsequent
messages beyond that page will be dispatched.
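If tuning that limit turns out to be warranted, maxPageSize can be raised via a per-destination policy. Here's a minimal sketch using the embedded-broker API; the ">" queue pattern and the value 1000 are placeholders, not recommendations, and production setups usually set this in activemq.xml instead:

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class MaxPageSizeSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);

        // Allow more messages at the head of the queue to be paged in
        // and considered for dispatch; 1000 is a placeholder value.
        PolicyEntry policy = new PolicyEntry();
        policy.setQueue(">");          // ">" matches all queues
        policy.setMaxPageSize(1000);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);
        broker.setDestinationPolicy(policyMap);

        broker.start();
        broker.stop();
        broker.waitUntilStopped();
    }
}
```

In activemq.xml the equivalent is a policyEntry with a maxPageSize attribute inside the destinationPolicy element.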
Couple of thoughts on tracking this down.
First off, how large are the messages? 70,000 messages doesn't sound too
high in general, but if the messages are very large, it could be an issue.
If all 70,000 are waiting to go to a single consumer, that raises the
concern of a possible "slow consumer".
OK, I expect persistent messages to a queue to always be redelivered after
a missing acknowledgement on a dropped connection.
If a repeatable missing message redelivery in this case can be
demonstrated, I would be interested to look at the code.
As mentioned above, timing comes into play, and it
Normally, the operation of message acknowledgement and redelivery should
have nothing to do with publisher settings. With that said, there are some
broker configurations that change operation in non-obvious ways, such as
the "optimized dispatch" option.
Are these persistent messages delivered to
A couple of thoughts on this front.
First, how was the conclusion reached that different client versions are
involved? I'm curious because I can't think of any way that should be an issue.
Second, is the consumer using the failover transport?
This JMS Exception can happen normally without indicating a problem.
For the JMX: there is a newer setting in the activemq configuration file
that may override the command-line settings; here's a fairly stock example:
Check for these and any output during broker starting indicating the
management context failed to start -- if both
Is the AMQ broker embedded? If so, perhaps another application is leaking
memory.
Or, are there any custom plugins?
Bottom line here - there is something taking up all of the memory, causing
the OOM condition. Taking heap dumps and/or histograms may be the only
quick way to find out just what's consuming the memory.
Hmm, why is it not ideal to cast to an ActiveMQMessage when validating an
internal operation of ActiveMQ? Is there another messaging provider that
gives the exact same information and, therefore, a need to make this code
reusable across both?
BTW, I suspect you'll find validating the path message
BTW, once the JVM reaches the out-of-memory condition, it can become
impossible to perform any operations on it - such as obtaining stack dumps
and heap dumps. With that said, a heap dump, or at least histogram, can
help determine the precise cause of the out-of-memory condition.
The jmap program
OutOfMemory most commonly occurs with slow consumption.
The broker can run out of memory if messages are produced to it too quickly
and consumers don't keep up. It will also run out of memory if massive
transactions are used and not committed or rolled back before the broker's
memory is exhausted.
OK, since this works in one container (WebLogic) and not another (WildFly) -
my suspicion is with the container handling of the exception.
Turning up the logging on org.apache.activemq.ActiveMQSession to DEBUG on
the JMS client should cause the following to get logged:
* ... Transaction Commit :.
Hey Tim, the demand forwarding network connector should automatically attempt
to reconnect. At one point, years ago, I tried removing the failover
transport and sticking to the retries there because of other issues, and
found that without the failover transport, the reconnects are not as reliable.
Agreed - the critical thing here is to figure out what's happening with the
rollback and/or message state and get it fixed.
Actually, while trying to track down semantics of EJB and JTA, I opened the
API docs for createSession() here:
https://docs.oracle.com/javaee/7/api/javax/jms/Connection.html
Ahh, here it is from the original post:
uri="static://(tcp://10.102.44.181:61616)"
try this instead:
uri="static://(failover://(tcp://10.102.44.181:61616))"
--
View this message in context:
http://activemq.2283324.n4.nabble.com/Failed-network-connector-can-not-be-re-established-tp4709054p47
Also - network connectors typically automatically reconnect.
Is the network connector configured to use the failover transport? If not,
try adding it to see if that helps to eliminate the problem.
Any idea what is causing "Borrow prepareStatement from pool failed"?
The only possible cause coming to mind right now is a pool that is exhausted
- meaning there are long-running DB operations. If that's the case, then
tuning the DB, moving to another store, or possibly tuning the pool
settings would be the options to consider.
Do I understand correctly - messages go into the DLQ when "container managed
transaction handling is used" and do not do so when the same is not used?
If so, what is managing the transactions? Is it possible that the
transactions just never rollback after the MDB throws an exception? In
other wo
When you say TempStore, are you talking about storing non-persistent messages
specifically?
To answer your question - YES, the hub broker, and all brokers in the
network can hold messages during short periods of disconnects between
clients. That's kinda the main point of JMS and messaging middleware.
By late, I mean out-of-order.
What happens in the test if a message arrives late?
Does the shutdown of the broker complete cleanly in both cases?
A slow shutdown will lead to the shutdown script using kill -9, which
eliminates any possibility of the process continuing to ensure a clean
shutdown. In that case, the contents of the file may be left in a dirty
state, which may lead to problems on the next startup.
So 15 seconds sounds really low, although I'm not sure of all the various
timeout settings in NFS.
Specifically here, the timeout of concern is the release of a lock held by a
client. The higher the timeout, the less likelihood of two clients
obtaining the same lock, but the slower failover becomes.
Clarifying Virtual Topics...
When using Virtual Topics, there are both Topics and Queues involved. The
Topic name starts with "VirtualTopic." (this can be changed via
configuration). The Queue names start with "Consumer."
The easiest way to visualize Virtual Topics is to think of an interceptor
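To make the naming concrete, here's a minimal sketch using an embedded vm:// broker. The names "VirtualTopic.Orders" and "Consumer.A" are illustrative, following the default prefixes described above, and the example assumes the default virtual-topic interceptor is active:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicSketch {
    public static void main(String[] args) throws Exception {
        // vm:// starts an embedded, non-persistent broker for the demo
        ConnectionFactory cf =
            new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Consumers read from a per-application queue; the broker copies
        // each topic message into every matching "Consumer.*" queue.
        MessageConsumer consumerA = session.createConsumer(
                session.createQueue("Consumer.A.VirtualTopic.Orders"));

        // Producers publish to the virtual Topic itself.
        MessageProducer producer = session.createProducer(
                session.createTopic("VirtualTopic.Orders"));
        producer.send(session.createTextMessage("order-123"));

        TextMessage received = (TextMessage) consumerA.receive(5000);
        System.out.println(received.getText());
        conn.close();
    }
}
```

Note the consumer is created before the send; like any topic semantics, copies are only made for Consumer.* queues that already exist.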
Thanks for posting your findings.
That's an interesting fix given the fact that the JMS spec states that a JMS
connection created within a JTA transaction participates in that transaction
automatically (although I'm sure it's fine for a provider to allow a means
to pull it out of the transaction).
I missed something critical here - Topics.
Topic flow across a network of brokers is not reliable when only
non-durable subscriptions are in use. Brokers subscribe to one another in
the same manner that
end-clients subscribe to a broker. So, for topic subscriptions, that means
that while the bridge between
As far as a broker properly cleaning up when active and losing the lock --
first off, that's a very rare scenario. With that said, there's no way to
guarantee a completely clean hand-off at that point. The cause of such a
scenario will be a drop in network communication between that broker and
the shared store.
So something is very wrong then. NFS should *not* allow two NFS clients to
obtain the same lock.
Three possible explanations come to mind:
* The lock file is getting incorrectly removed (I've never seen ActiveMQ
cause this)
* There is a flaw in the NFS locking implementation itself
* The NFSv4 t
Something sounds very wrong there. The NFS lock file should prevent more
than one broker writing to the store at a time.
Is all of /var/log/activemq/activemq-data/ shared across all of the brokers?
It may be possible to build a facade, if you don't need all of the features
of the 2.0 spec.
For example, ActiveMQ has an async send feature that producers can use,
which may meet your needs.
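For reference, async sends can be switched on at the connection factory; the broker URL below is a placeholder:

```java
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncSendSketch {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Sends return without waiting for a broker acknowledgement,
        // trading some delivery feedback for throughput.
        factory.setUseAsyncSend(true);

        System.out.println(factory.isUseAsyncSend()); // true
    }
}
```

The same option can also be set directly in the broker URL, e.g. `tcp://localhost:61616?jms.useAsyncSend=true`.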
Huh, that looks like quoting in the script got messed up. Were any changes
made to the script?
Try running the script with "bash -x bin/activemq console" and save the
entire output (running within a "script" session is great for that). Then
look for the last few lines before the failure. That'll point to where
things went wrong.
Can you provide a minimal test that reproduces the problem and can be shared?
As far as ActiveMQ is concerned, a Queue is a Queue. The only time there is
special handling with a queue is with virtual topics and the queues starting
with "Consumer." in the name.
If the application is publishing messages into the "Delivered" queue,
ActiveMQ has no idea why they are there. It
64GB is a very large server in my experience. Many use-cases do not require
this much memory, although some do. In fact, I've seen 2GB servers perform
very well - again, for specific use-cases.
As far as swapping - most Linux servers I've seen in the last 5 years
(longer really) are configured w
No idea on the mount settings - that was a while back. But again, I suspect
even the default NFS mounts settings would work.
My recommendation here - create a test setup, and perform some load tests.
Tweak settings as desired and try again. If that feels inadequate (for
example, there are conce
That's too little information to judge really.
4,000 queues is a lot, but ActiveMQ can be tuned to handle that many well.
As for "static" vs "dynamic", I'm not sure what that means. If it means
that the queue names may change for "dynamic" and not change for "static",
then there are many considerations.
So, slow consumption is a major anti-pattern for ActiveMQ.
By definition, a queue with no consumers and with messages stored has a
slow-consumer problem, since it has 0 consumption. I see in that table many
queues with no consumers and many messages pending.
Message size is rarely the biggest factor.
There isn't quite enough information here to be sure, but it sounds like
you're hitting the slow-consumer problem. ActiveMQ isn't a message store and
doesn't work well as such.
If producers are overloading the broker's store and/or memory, the only true
solution is to consume off those messages.
So there are a few concerns with the code.
First off, to your question of consumption stopping - based on the fact that
the code uses ActiveMQResourceAdapter, I assume this is running in the
context of a JTA Transaction. If so, all operations are transacted, and the
client acknowledge mode is ignored.
Hey Tim - even if the messages are sent asynchronously from the client, their
order should be maintained. In fact, the message is sent to the transport
in its entirety either way. The big difference is that the client code does
not wait for the server to send back a response indicating that it was
successfully received.
How many producers are being used by the application?
To determine the answer to that question, you need to look at the connection
settings used by the "activemq" component and related beans.
It is also possible to check the same by looking at Message IDs, as each
message ID contains the ID of the producer that created it.
I agree with Robbie's comments here. The statement that the application only
creates the connections once got me confused at first, but looking at the
original post, the pseudo-code there clearly creates the connection once on
each producer.
To verify, start up the application and use one of the
I've used it successfully more than once without any specific tuning to NFS.
With that said, systems groups maintained the filesystem, so I may simply be
unaware of the same.
Note that you'll need NFSv4 for full H/A; NFSv3 clients hold locks
indefinitely when they drop off the server's network (i
Christopher is correct here.
Is there any reason why this larger value is a concern?
So there's no easy button to get those stats - it will take some work.
However, looking at the stated concern of filling store space. You have a
slow consumer problem - somewhere. Note that with store space, it's
entirely possible that only a small number of messages are causing the
entire problem.
By the way, where in the processing is the check for missed messages?
Are there any messages in the application logs?
Messages from camel indicating failures on messages (aka exchanges)?
Messages from ActiveMQ (or camel's JMS component) indicating lost connection
and need to reconnect?
Connection pooling and consumer caching may be coming into play. Although
if t
Are the missing messages from non-durable consumers of a Topic? If so, there
are two perfectly normal causes of message loss that may be affecting the
test:
1. dropped connection between the client and the broker
2. dropped connection between brokers in a network of brokers
When messages are always delivered to consumers, the performance scales
linearly. However, when there are messages sitting in a queue that don't
match any consumers, performance degrades sharply, as the broker checks
every single message on every attempt to dispatch even a single message.
So,
Ah, thanks Tim - I missed that as I skimmed at the end to see if the problem
had been resolved.
Check first for slow consumption.
Is producer-flow-control enabled on the broker? And, if it is, what are the
per-queue memory limit and the overall system limit?
The most common cause of Out Of Memory on ActiveMQ is a broker without
producer-flow-control and slow consumption, which mimics a memory leak.
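As a sketch of the knobs in question, using the embedded-broker API (the limits below are placeholders; production setups usually configure these via systemUsage and policyEntry elements in activemq.xml):

```java
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class FlowControlSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);

        // Overall broker memory limit (placeholder: 256 MB).
        broker.getSystemUsage().getMemoryUsage()
              .setLimit(256L * 1024 * 1024);

        // Per-queue policy: with producer flow control on, fast
        // producers block instead of exhausting broker memory.
        PolicyEntry policy = new PolicyEntry();
        policy.setQueue(">");                     // all queues
        policy.setProducerFlowControl(true);
        policy.setMemoryLimit(16L * 1024 * 1024); // placeholder: 16 MB/queue

        PolicyMap map = new PolicyMap();
        map.setDefaultEntry(policy);
        broker.setDestinationPolicy(map);

        broker.start();
        broker.stop();
        broker.waitUntilStopped();
    }
}
```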
For the consumer-specific messages, why not use a queue for each? That seems
like the more natural approach.
Selectors introduce a few concerns. First, the processing of each selector
can add a lot of overhead, and there is no optimization; matching a message
to a consumer involves iterating over all of the candidate selectors.
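The two approaches look like this side by side, as a sketch with an embedded broker (the queue names and the "consumerId" property are hypothetical):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SelectorVsQueueSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf =
            new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection conn = cf.createConnection();
        conn.start();
        Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Selector approach: one shared queue; the broker evaluates the
        // selector against candidate messages when dispatching.
        MessageConsumer withSelector = session.createConsumer(
                session.createQueue("WORK.ALL"), "consumerId = 'A'");

        // Per-consumer queue approach: routing is decided at send time,
        // so there is no selector evaluation on dispatch.
        MessageConsumer dedicated = session.createConsumer(
                session.createQueue("WORK.A"));

        MessageProducer producer = session.createProducer(
                session.createQueue("WORK.ALL"));
        TextMessage msg = session.createTextMessage("task-1");
        msg.setStringProperty("consumerId", "A");
        producer.send(msg);

        TextMessage received = (TextMessage) withSelector.receive(5000);
        System.out.println(received.getText());
        conn.close();
    }
}
```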
Loop detection is per-message. As messages traverse the network of brokers,
an internal property on the message remembers which brokers were visited,
and each broker will refuse to send it to that broker again.
Note that topics with non-durable subscriptions in a network of brokers have
reliability limitations.
Let's go back to some basics (I hope I read the thread correctly and the
current issue is messages showing as dequeued on the broker, but not
processed by the application).
One thing to note with ActiveMQ is that the prefetch buffer comes into play
and can hold a number of messages on one consumer.
Searching git history, it appears the following commit introduced the change
back in 2013:
468e69765145ddad199963260e4774d179ad
That first appears in 5.9.0. So, it was longer ago than I realized ;-).
Cheers!
Good finding - I commented on the Jira ticket.
Note, by the way, that asyncSend=false only affects the send from the
producer to the broker - not from the broker to the consumer.
There is no way to notify a message producer if consumption of the message
fails through JMS semantics. That defeats the major aspect of JMS -
asynchronous processing and loose coupling of producers and consumers.
In order to inform the producer of success or failure, one of the following
would d
The default Message Group Map implementation was recently changed to use an
LRU Cache of message groups.
Here's the issue with message groups - the broker does not know the set of
message group IDs ahead of time and must allow for any number of group IDs
to be used. If the total set of possible m
To achieve the split between "high priority" and "low priority" without
starving the low-priority messages, I recommend:
1. use separate queues to split the priorities
2. either use dedicated consumers for the low priorities that do not take
resources away from the higher priorities, or use some f
There are two things that happen with message groups that make them a poor
match for use with selectors.
First, the default implementation of message groups uses "buckets" of
assignments for groups-to-consumers. This means that multiple groups
actually get assigned to the same consumer because t
Note that I have personally seen the HTTP transport consuming significant CPU
resources in the past. Be sure to measure performance and watch CPU
utilization. IIRC that's because the implementation frequently polls.
I recommend looking to an alternate approach. One I've implemented with
great r
Two things to look at. As mentioned above, not ack'ing is one of them. If
the session is set up to auto-ack, then that's not the issue. If the session
is set to client-ack or transacted, then it needs to ack the messages or the
queue will eventually fill up.
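A minimal sketch of the client-ack case (the queue name is hypothetical); without the acknowledge() call, the message stays unacknowledged and the queue backs up:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClientAckSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf =
            new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection conn = cf.createConnection();
        conn.start();

        // CLIENT_ACKNOWLEDGE: the application must ack explicitly.
        Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("EXAMPLE.Q");

        session.createProducer(queue)
               .send(session.createTextMessage("payload"));

        Message msg = session.createConsumer(queue).receive(5000);
        // ... process the message ...
        msg.acknowledge(); // acks this and prior messages on the session
        conn.close();
    }
}
```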
Another possibility - look at the que
Jmeter has been used with much success as well. It comes with support for
JMS Topics and Queues. Note though that it tends to have a steep learning
curve.
Interesting point.
Jetty is a technology used by the current webconsole, so yes disabling the
same disables the webconsole. That leads to the question, "is there any
issue with changing that dependency to jolokia?"
Anyway, are we even talking about replacing the existing webconsole, or did
I rea
I like the look-and-feel of this console, based on the screen-shots at
https://github.com/snuids/AMQC/wiki.
It's very interesting that this tool works entirely out of the browser as
well.
All of the information displayed is obtained via jolokia, is that correct?
And does all administration also operate through jolokia?
It is hard to determine based on that message and
https://issues.apache.org/jira/browse/COLLECTIONS-580. Based on my
searching so far, it looks like that feature of collections is not used in
ActiveMQ.
Specifically, I searched on InvokerTransformer and did not find any
occurrence in the code.
It
Only with the broker shutdown, of course.
If you don't need the persisted messages, then go ahead and delete the entire
contents of the kahadb folder, not just the *.log files.
Are there messages in the queue that do not match any of the consumer
selectors? If so, only "MaxPageSize" messages (300 by default) can reach
the head of the queue before dispatching stops. I believe that number
increases while browsing, hence the reason browsing temporarily alleviates
the problem.
Note that the db-#.log files are not general-purpose error logging files,
they are the KahaDB data files.
If KahaDB is holding onto a large number of files, there's a good chance
there are old messages around. The first thing to check is that ActiveMQ is
not being used as a message store. Old me
Can you clarify the network configuration? It would help to see the network
connector settings for every individual broker. Something like the
following format would be great:
Server1
- static:failover:(...)
- static:failover:(...)
Server2
- static:failover:(...)
- static:failover:(...)
...
If the messages are being dequeued, they are getting consumed. Is there any
other client connecting to server3 that might be consuming the messages?
Are enqueue and dequeue counts on servers 1 and 2 changing?
There are no set messaging rates, and ActiveMQ doesn't lose messages in the
normal case, so something else must be wrong.
It's not entirely clear, but it sounds like your clients are disconnecting
from the broker and reconnecting frequently. Is that true? If so, that's
an anti-pattern that will lead to problems.
The problem with helping on this one is it's very complex. 1000s of clients
and it sounds like there are issues with losing connections, although that's
not clear.
Do you have a minimal use-case that demonstrates the problem?
JMX is a good tool here. I would also start with some basics.
First, determine whether the client is actually connected to the broker box
- the "netstat" command is a good one for that. You'll need to know the
client's IP address.
Second, if the client connection is confirmed at the network level
First off, there are no ordering guarantees for messages across more than one
destination.
The closest to a workable solution I believe you'll find for needing to
serialize the work from messages across multiple destinations, without some
form of internal queueing in the application, would involve
Hey Rosy - ActiveMQ is a messaging solution, primarily focused on getting
messages from producers to consumers. Advanced logic of that type really
belongs in an application. There's no straight-forward way to add that to
ActiveMQ itself, and it would be difficult to diagnose if such functionality
Without digging into the code, it appears to be waiting to grab a "monitor"
(aka lock) on an object. If that's the case, then you need to find the
thread holding that lock to determine why it's held. If no other threads
hold that lock, then perhaps the lock was left in the wrong state by a
thread
There are other concerns as well. For example, when using a network of
brokers, topic messages can be lost if a network connection between brokers
goes down, even just for an instant, and there's no way for the applications
to detect this loss (unless your messages have something that can be used
Could the data files be getting corrupted? One way that can happen is to
have two brokers sharing the same data directory and either manually
removing the lock file or using NFSv4 and timing out the lock.
Also, have you tried KahaDB? If so, does anything similar happen?
Yup - make sure the broker is actually listening on 8161. And attempting to
connect to 61616 will get some garbled output from the initial handshake of
the OpenWire protocol, so it sounds like the broker is listening properly on
61616 based on the OP's description.
@Andy - all queues used by JMS clients must start with jms.queue?
Can the libraries ensure this is always captured properly? What if, for
example, a client sends a JMS Message that's not an Artemis message into an
Artemis message producer?
Also, does this mean that queues for JMS cannot interoperate?
Great! Thank you for letting me know.
Another thing here - what else is running in the same JVM as ActiveMQ?
Also, are the messages being consumed, and are consumers keeping up with
producers?
Monitoring is very important for ActiveMQ, so it's good to hear. I've used
JMX and Jolokia for that purpose. The advisories are useful for that as
well.
It can be done, but may be tricky. If the client is using the clientId
setting, then unsubscribe should work - just pass the same clientId.
Otherwise, it's a little harder - the consumer is connected to the HTTP
session. As long as the session matches (I believe that's a cookie and
possibly comb
That's right, there will always be a chance of duplicate messages - hence the
JMSRedelivered flag in the JMS specification itself.
Specifically responding to the use-case in question, the failover transport
re-attempts message sends if they have not completed successfully on a prior
attempt. If t
Interesting - I don't get them. I may have gotten one or two a long time ago
- not sure now.
I wonder if it could be anti-spam or something similar causing the failures.
The timeout setting only works when the pending outbound request over the
transport is a message send, and only when the transport is disconnected and
stays disconnected.
If any request other than a message send is pending (e.g. consumer or
producer creation), the timeout is not applied. Also, if
I agree on the versions - first match the client and broker version and see
if that helps. Running with the latest too is a good idea to make sure this
isn't a bug that was already fixed.
With that said, one workaround that may help here - disable the cache
feature in the protocol (it helps by re
Every topic subscription should be getting the message, as long as the
message matches the selector.
Did you confirm the selector on the subscription using JConsole, VisualVM,
or perhaps the web console?
Also, does the subscription that appears to be missing messages receive any
messages at all?
First thing I would look at here is diagnostics from the network level
itself. WireShark or tcpdump can be used to get a better understanding of
why the connections are dropping.
If the network between the client and brokers is unreliable, this will
happen a lot and it will significantly interfere with messaging.
What does the transportConnector configuration look like?
I believe for MQTT and SSL, you need "ssl+mqtt". Or is it "mqtt+ssl"?
Sorry - JMeter or custom Java code are only examples. Other contributions
will be considered as well.
This is a great effort here.
Please consider sharing test tools. JMeter profiles or custom java code.
One thing I'm considering is capturing performance tests in another repo for
ActiveMQ, together with baselines (i.e. captured runs - preferably with a
solid capture of hardware configuration details).
First off, 5.4.3 is an old version that had many issues; I highly recommend
upgrading.
With that said, Topic flow across a network of brokers is prone to message
loss. For example, if the network connection between two brokers is lost,
then non-durable Topic subscriptions between the two brokers
Hmm, the unknown data type is a concern.
Can you try without client certificates?
Also, please provide the details of the java version in use.