Got it! :-)
This is the bit that I was missing: I didn't know that I needed to actually
acknowledge the message before rolling it back!
Thank you for pointing me in the right direction. :-)
Thank you for your responses, Justin. Also thank you for noting the journal
backup feature. That's great news. :)
Meanwhile, I've developed a simple scripted solution that works for
statically configured clusters such as mine. It simply extracts the broker
configuration and state (the last time it
Hi
If this is occurring, it sounds like a bug.
This is a community project, so if you are able to create a unit test case that
replicates the issue, it would certainly help the community validate whether it
is a bug, and also validate any fix the community may make.
If you look here, for curren
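That last sentence is cut off, but in practice a reproducer for a project like
this is usually a small, self-contained JUnit test that starts an embedded
broker and drives the failing behaviour against it. A minimal sketch, assuming
JUnit 4 and a test-scoped broker.xml on the classpath (the class and resource
names here are just placeholders):

import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class ReproducerTest {
    private EmbeddedActiveMQ broker;

    @Before
    public void startBroker() throws Exception {
        // start an embedded broker from a minimal, test-scoped broker.xml
        broker = new EmbeddedActiveMQ();
        broker.setConfigResourcePath("broker.xml");
        broker.start();
    }

    @After
    public void stopBroker() throws Exception {
        broker.stop();
    }

    @Test
    public void reproducesTheIssue() throws Exception {
        // drive the client behaviour that triggers the problem here
        // and assert on the broker state you expect to see
    }
}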
I have an ActiveMQ server set up where I have multiple (and unbounded) queues
which differ only in the last component. e.g. myqueue.1, myqueue.2,
myqueue.4, etc. The last component comes from a database that will have a
varying set of customers defined, and I want to set up one queue for each
custo
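The excerpt is cut off, but the setup it describes (one queue per customer,
with the customer id as the last component of the queue name) would look
roughly like this with the JMS API; the broker URL and the customer ids are
assumptions for illustration, and this relies on the broker's default
auto-create-queues behaviour:

import java.util.List;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class PerCustomerQueues {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        try (Connection connection = cf.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // customer ids would normally come from the database; hard-coded here
            for (String customerId : List.of("1", "2", "4")) {
                Queue queue = session.createQueue("myqueue." + customerId);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("hello customer " + customerId));
                producer.close();
            }
        }
    }
}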
Thanks for the quick response, Justin.
I've configured Artemis to use replication as the infrastructure for
shared-storage isn't... great.
So for my situation at work, the hypervisors tend to randomly die on us
(taking the VMs with them). We have 3 zones/hypervisors.
I wanted a single ma
In a non-automated use-case I'd recommend an administrator take a look at
each of the brokers' log files to see which one had been active most
recently and then restart that broker first (and if that server happened to
be a slave its configuration would need to be changed to be a master so it
would
I assume you were using replication with your master/slave/slave setup. If
that assumption is correct, then this isn't a recommended option due to the
risk of split-brain, which apparently you ran into. Split-brain is a
scenario where two brokers are live with the same data. This can occur when
usin
This is a valid setup even though failback won't work as expected. There
should be no more risk of data loss in this setup than there is in any other.
Justin
On Thu, Jun 6, 2019 at 2:49 AM Bummer wrote:
> This isn't a valid setup. Only one slave per master can work as expected.
> You're about to
At this point, using multiple backups will preclude fail-back from working
as generally expected, so the behavior you're seeing is expected.
Out of curiosity, are you using shared-storage or replication? If you're
using replication, keep in mind that you'll want at least 3 master/slave
pairs to achie
Whatever Groovy code you pasted didn't make it through, so it's hard to say
specifically what you're doing wrong. However, in general you should create
the session with autoCommitAcks = false, then ack the message, and lastly
roll back the session.
Here's an example from the Artemis test-suite [1].
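The referenced test-suite example isn't reproduced in this excerpt, but the
ack-then-rollback pattern it demonstrates looks roughly like the following
with the core client; the broker URL, the queue name, and the assumption that
max-delivery-attempts is left at its default of 10 are illustrative only:

import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientConsumer;
import org.apache.activemq.artemis.api.core.client.ClientMessage;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class DlqRedelivery {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
        ClientSessionFactory factory = locator.createSessionFactory();
        // xa = false, autoCommitSends = true, autoCommitAcks = false
        // so that the ack can be rolled back
        try (ClientSession session = factory.createSession(false, true, false)) {
            session.start();
            ClientConsumer consumer = session.createConsumer("myqueue");
            // each ack + rollback increments the delivery count; once it exceeds
            // max-delivery-attempts the broker moves the message to the DLQ
            for (int i = 0; i < 10; i++) {
                ClientMessage message = consumer.receive(1000);
                if (message == null) {
                    break; // nothing left to receive; the message has moved on
                }
                message.acknowledge();
                session.rollback();
            }
        } finally {
            factory.close();
            locator.close();
        }
    }
}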
Hello.
How do I make my message end up in the DLQ while using the Core protocol?
My code in Groovy:
The assertion should be true, as the message is supposed to end up in the
default DLQ after 10 receive attempts as per [1]. But even on a freshly
spawned Artemis 2.8.1 instance this does not work.
I
Apologies! Preview was working but... don't know what happened.
Java client application exception:
Caught exception! javax.jms.JMSException: AMQ219014: Timed out after waiting
30,000 ms for response when sending packet 71
javax.jms.JMSException: AMQ219014: Timed out after waiting 30,000 ms for
This isn't a valid setup. Only one slave per master can work as expected.
You're about to lose data if you continue this way. I was there recently.
Look this topic up on the forums to get more information about the reasons.
This setup is surprisingly common.