jbertram wrote
> Do you have any metrics on the network utilization between
> the live and the backup?
In terms of the network utilization between the live and the backup, using
iperf3 I get these results:
[ec2-user@ip-172-31-30-42 ~]$ iperf3 -c 172.31.10.32 -p 5672
Connecting to host 172.31.10. …
Is there any reason why, running some Artemis performance tests, I cannot get
more than ~200 MB/s (1.6 Gbit/s) through an Artemis v2.6.3 broker running on
an Amazon Linux VM with 16 vCPUs and 32 GB RAM?
artemis.profile:
broker.xml:
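The attached profile and broker.xml aren't reproduced in the archive, but when throughput caps well below what iperf3 shows, two broker.xml areas are worth comparing against: journal flush behaviour and the acceptor's TCP buffer sizes. A hypothetical sketch of the relevant fragments (the values are placeholders for illustration, not tuning recommendations):

```xml
<!-- broker.xml fragments (Artemis 2.x); values are illustrative only -->
<core xmlns="urn:activemq:core">
  <!-- ASYNCIO uses libaio when available, otherwise falls back to NIO -->
  <journal-type>ASYNCIO</journal-type>
  <!-- batches syncs instead of flushing every write; value in nanoseconds -->
  <journal-buffer-timeout>500000</journal-buffer-timeout>

  <acceptors>
    <!-- larger socket buffers can matter on high-bandwidth EC2 links -->
    <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576</acceptor>
  </acceptors>
</core>
```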
--
Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ
A trace of a message exchange with Artemis where this has occurred is below.
The lines show opening and then closing two AMQP links, but Artemis doesn’t
respond after that with its own attach/detach frames, which acknowledge the
opening/closing of those links. Question: does anyone expect this behav…
When running a continual test against a broker set up as master/slave, after
an arbitrary length of time, the test client disconnects and stops working.
The only message in the artemis.log file is:
2018-10-31 09:32:07,368 WARN [org.apache.activemq.artemis.core.client]
AMQ212037: Connecti…
nigro_franz wrote
> FYI a MAPPED journal with datasync off protects you only against application
> failures, and considering that you're in a cloud environment (+ replication
> if needed) it could be enough.
That's _exactly_ what we plan on doing.
I'm just in the process of figuring out, using the cl…
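For reference, the MAPPED-journal-with-datasync-off choice quoted above maps to two journal settings in broker.xml. A minimal sketch, assuming the standard Artemis 2.x schema:

```xml
<!-- broker.xml fragment: memory-mapped journal without fsync.
     Data survives an application crash but NOT an OS/VM failure,
     which is why replication is suggested as the safety net. -->
<core xmlns="urn:activemq:core">
  <journal-type>MAPPED</journal-type>
  <journal-datasync>false</journal-datasync>
</core>
```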
Thanks Tim & Justin - appreciate the comments.
I think where I'm going to land is running a master/slave pair, but then using
AWS to notify me when a master dies and a slave becomes the master, and
orchestrating the spin-up of another slave. But that's on me to figure out.
:)
I'll kick off a separate [D…
Hi Mike,
I'm not looking at getting improved performance by having multiple slaves.
The use case I have is master-multiple backups as per
https://activemq.apache.org/artemis/docs/latest/ha.html
Our architecture is complex and we're using Qpid Dispatch routers at other
points within it. What I n…
jbertram wrote
> The master/slave/slave triplet architecture complicates fail-back quite a
> bit and it's not something the broker handles gracefully at this point.
> I'd recommend against using it for that reason.
Would it be desirable for Artemis to support this functionality in the
future, though…
I'm not sure I understand your question(s) @clebertsuconic?
We are building highly scalable, highly distributed systems, so the need for
multiple backups is there to ensure that in the unlikely event of a server or
AZ failure, our systems still run at the maximum available performance…
I am using AWS and paying for three EC2 instances (the 'servers'). I am
deploying a server in each AWS Availability Zone (AZ), and in the region I am
using there are 3 AZs. I am running three servers with a master (as part of a
cluster) on each, to maximize the performance of the applications connecting t…
I want to set up a three node cluster as follows:
• Node1: Master-1, Slave-3, Backup-2
• Node2: Master-2, Slave-1, Backup-3
• Node3: Master-3, Slave-2, Backup-1
But it appears that you can't have a master with multiple backups and still
have fail-back, according to this post:
ht…
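In broker.xml terms, the master/backup pairing laid out above would be expressed with replication group names, so each backup attaches to the right master. A sketch under that assumption (instance and group names are hypothetical):

```xml
<!-- Node1, broker instance "master-1" -->
<ha-policy>
  <replication>
    <master>
      <group-name>group-1</group-name>
      <!-- needed so the original master can reclaim its role on fail-back -->
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- Node2, broker instance "slave-1", paired to master-1 via the group name -->
<ha-policy>
  <replication>
    <slave>
      <group-name>group-1</group-name>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
```

The open question in the thread is precisely whether this scheme still fails back cleanly once a master has more than one backup in the group.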
As per our IRC conversation, JIRA raised @
https://issues.apache.org/jira/projects/ARTEMIS/issues/ARTEMIS-1928?filter=allopenissues
With regard to the timeout issues: those are caused by the AWS ELB connecting
to the nodes to check that they are alive, so please disregard them.
Changes made:
- Installed and created new brokers using Artemis v2.6.1 from
https://github.com/apache/activemq-artemis/archive/2.6.1.tar.gz, then ran
mvn package and used the zip from apache-distribution/target
- Modified broker.xml to fix up the cluster-connector (example broker.xml
attached b…
I'm having some issues with my 3 node Artemis cluster (v2.6.0).
Here is the snippet from my broker.xml on node1:
tcp://0.0.0.0:61616
tcp://10.0.201.97:61616
tcp://10.0.202.250:61616
tcp://0.0.0.0:61616
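The XML tags around those addresses were stripped by the archive. Based on the addresses shown and the element names in a stock Artemis broker.xml, the snippet was presumably along these lines (connector names are guesses; only the URLs come from the post):

```xml
<connectors>
  <!-- this broker's own connector, advertised to the cluster -->
  <connector name="netty-connector">tcp://0.0.0.0:61616</connector>
  <connector name="node2">tcp://10.0.201.97:61616</connector>
  <connector name="node3">tcp://10.0.202.250:61616</connector>
</connectors>

<acceptors>
  <acceptor name="netty-acceptor">tcp://0.0.0.0:61616</acceptor>
</acceptors>

<cluster-connections>
  <cluster-connection name="my-cluster">
    <connector-ref>netty-connector</connector-ref>
    <static-connectors>
      <connector-ref>node2</connector-ref>
      <connector-ref>node3</connector-ref>
    </static-connectors>
  </cluster-connection>
</cluster-connections>
```

Note that advertising `0.0.0.0` as a connector is itself a common source of cluster trouble, since other nodes cannot dial that address.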
Hi Justin,
Is it possible to have a 3 node cluster with no failover? i.e.
live-live-live?
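The thread's answer is cut off here, but my reading of the Artemis HA docs is that this is the default shape of a cluster: three clustered brokers where none takes a backup role. One way to make that explicit, assuming the standard ha-policy element:

```xml
<!-- on each of the three brokers: clustered, but no master/slave role,
     so no failover -- a failed node's messages wait until it returns -->
<ha-policy>
  <live-only/>
</ha-policy>
```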