nigro_franz wrote
> FYI MAPPED journal with datasync off protects you just against application
> failures, and considering that you're in a cloud environment (+ replication
> if needed) it could be enough.
That's _exactly_ what we plan on doing.
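For anyone following along, the journal setup being discussed looks roughly like this in `broker.xml`. This is a sketch: `journal-type` and `journal-datasync` are real Artemis settings, but the surrounding element layout is abbreviated.

```xml
<core xmlns="urn:activemq:core">
  <!-- Memory-mapped journal: fast writes, but with datasync (per-write
       fsync) disabled, an OS or hardware crash can lose recent writes.
       Only application-level failures are fully protected against. -->
  <journal-type>MAPPED</journal-type>
  <journal-datasync>false</journal-datasync>
</core>
```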
I'm just in the process of figuring out, using the cl
> Thus RAID and possibly replication on storage level sounds as better
> option.
>
> Thanks,
> Mirek
>
> ----- Original Message -----
> > From: "Tim Bain"
> > To: "ActiveMQ Users"
> > Sent: Thursday, 4 October, 2018 3:01:52 PM
> Subject: Re: Designing for maximum Artemis performance
>
> Justin,
>
> That approach will work, to a point, but it has (at least) two failure
> cases that would be problematic.
Justin,
That approach will work, to a point, but it has (at least) two failure
cases that would be problematic.
First, spinning up a replacement host is not instantaneous, so there will
be a period of at least a minute but possibly several where the messages on
that broker and storage volume will
Thanks Tim & Justin - appreciate the comments.
I think where I'm going to land is run a master/slave, but then utilise AWS
to notify me when a master dies and a slave becomes the master, and
orchestrate spinning up another slave. But that's on me to figure that out.
:)
I'll kick off a separate [D
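A minimal sketch of that "AWS notifies me, then I orchestrate a new slave" idea, assuming a CloudWatch alarm publishes to SNS which triggers a Lambda. The AMI id, instance type, tag values, and function names are all placeholders, not anything from this thread.

```python
# Hypothetical Lambda handler: on an ALARM notification (master gone),
# launch a replacement EC2 instance to become the new Artemis slave.
import json


def launch_replacement_slave(ec2, ami_id, instance_type="t3.medium"):
    """Start one EC2 instance tagged as the new Artemis slave."""
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "artemis-slave"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]


def handler(event, context, ec2=None):
    """SNS-triggered entry point; parses the alarm and spins up a slave."""
    if ec2 is None:
        import boto3  # real AWS client in production; injectable for tests
        ec2 = boto3.client("ec2")
    alarm = json.loads(event["Records"][0]["Sns"]["Message"])
    if alarm.get("NewStateValue") == "ALARM":  # master reported dead
        return launch_replacement_slave(ec2, ami_id="ami-placeholder")
    return None
```

The `ec2` parameter is injectable purely so the logic can be exercised without AWS credentials; in a real deployment you would also need to wire the new instance into the existing cluster configuration, which is the part the broker itself does not automate.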
> Would it be desirable for Artemis to support this functionality in the
future though, i.e. if we raised it as a feature request?
All things being equal I'd say probably so, but I suspect the effort to
implement the feature might outweigh the benefits.
> The cloud can manage spinning up another
Although some concept of either AZ affinity or preference for active
masters to live on different physical hosts/AZs could be helpful and might
be worth considering (Kafka has the concept of rack awareness to avoid
putting all copies of any particular message on the same hardware, which
can be used
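For comparison, Kafka's rack awareness mentioned above is a one-line broker setting. `broker.rack` is the real property name; the zone value here is illustrative.

```properties
# server.properties: tag this broker with its AZ so the replica
# assignment spreads copies of a partition across racks/zones.
broker.rack=us-east-1a
```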
Hi Mike,
I'm not looking at getting improved performance by having multiple slaves.
The use case I have is master-multiple backups as per
https://activemq.apache.org/artemis/docs/latest/ha.html
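For context, the master/backup pairing from the HA docs linked above is configured per broker in `broker.xml`; this is an abbreviated sketch using real Artemis elements, with the backup count and surrounding cluster configuration omitted.

```xml
<!-- Master broker: replicate the journal to backups over the network. -->
<ha-policy>
  <replication>
    <master>
      <check-for-live-server>true</check-for-live-server>
    </master>
  </replication>
</ha-policy>

<!-- Backup broker: take over when the master dies, hand back on return. -->
<ha-policy>
  <replication>
    <slave>
      <allow-failback>true</allow-failback>
    </slave>
  </replication>
</ha-policy>
```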
Our architecture is complex and we're using QPID dispatch routers at other
points within that. What I n
jbertram wrote
> The master/slave/slave triplet architecture complicates fail-back quite a
> bit and it's not something the broker handles gracefully at this point.
> I'd recommend against using it for that reason.
Would it be desirable for Artemis to support this functionality in the
future though, i.e. if we raised it as a feature request?
The master/slave/slave triplet architecture complicates fail-back quite a
bit and it's not something the broker handles gracefully at this point.
I'd recommend against using it for that reason.
To Clebert's point...I also don't understand why you wouldn't let the cloud
infrastructure deal with spinning up a replacement.
I'm not sure I understand your question(s) @clebertsuconic?
We are building highly scalable systems and highly distributed systems, so
the need for the multiple backups is there to ensure that in the unlikely
event of a server or AZ failure, our systems still run at the maximum
available performance.
Since you are on EC2, why do you need a backup? Wouldn't a storage volume
give you what you need in terms of cloud? If the server is gone, you
just start it again with the same cloud storage?
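Clebert's suggestion amounts to reattaching the persistent volume to a replacement instance. With EBS that would look roughly like this; the volume and instance ids are placeholders.

```shell
# Detach the journal volume from the dead instance, attach it to the
# replacement, then start the broker pointing at the same data directory.
aws ec2 detach-volume --volume-id vol-0abc
aws ec2 attach-volume --volume-id vol-0abc --instance-id i-0def --device /dev/sdf
```

One caveat worth noting: an EBS volume is tied to a single AZ, so this approach alone does not cover the AZ-failure scenario the thread keeps returning to.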
On Tue, Oct 2, 2018 at 3:22 AM schalmers
wrote:
I am using AWS and paying for three EC2 instances (the 'servers'). I am
deploying a server in each AWS Availability Zone (AZ) and in the region I am
using there are 3 AZs. I am running three servers with a master (as part of a
cluster) on each, to maximise performance of the applications connecting to
them.