Ohhh.. you're talking about rsync... perhaps if we disabled reclaim
it would always be a new file... as right now we reuse files.
If we disabled reclaim for your case.. it wouldn't need to reuse
files... and it wouldn't have a case of corruption.
On Mon, Jun 24, 2019 at 2:40 PM Clebert Suconic wrote:
I don't understand what you're talking about with small corruptions of
the journal?
We write on the backup.. and wait for confirmation.. so clients are
blocked until the backup has a copy of the data.
On Sun, Jun 23, 2019 at 8:24 PM warm-sun wrote:
>
> >>> Technically speaking, replication is asynchronous.
> So when the network is down between master and slave (e.g. the slave's
network card fails)... the master will keep ACK-ing messages it receives in
this case?
In general, yes. Like I said before, the master can be configured to
initiate a quorum vote and will shut itself down if it's isolated.
> If so
>>> Technically speaking, replication is asynchronous. However, the broker
will not send a response to the client until it has received a reply from
the slave that the data has been received.
...
>>> If the network between the master and slave goes down then by default
>>> the master continues like
> RedHat AMQ 7 (which is using Artemis under the hood) in their
"configuring broker" documentation recommend NOT using [HA replication]
across data centers. What is the Artemis position (not on AMQ, but if using
Artemis)?
Replication was designed to be used across a low-latency, high-performance
network.
I have a very similar scenario to the original post. (Multi data center
replication is required)
I have read all the documentation -- but am unclear about a couple of
points:
1) RedHat AMQ 7 (which is using Artemis under the hood) in their
"configuring broker" documentation recommend NOT using [HA
>
> > What if any file/directory management would need to be done on the
> secondary DC file structure if doing a straight file copy. I wouldn't want
> to copy the whole directory every 15 minutes for example, but rather just
> reflect the changes.
>
> There is a challenge with just copying changes.
> What config, if any, would the node in the secondary DC need to have in
common with the nodes in the primary DC?
It would need to have the same essential configuration (e.g. same
addresses, queues, etc.), but it wouldn't necessarily need to be clustered
or have its own HA config, etc.
> I ass
Thanks Justin. Do you know what sort of considerations I would need to make
if doing shipping, e.g.
1) What config, if any, would the node in the secondary DC need to have in
common with the nodes in the primary DC? e.g. see the warning box regarding
copying data directories and unique node id
There's nothing built in to Artemis at this point specifically for the DR
use case. However, I believe the "data" directory (where persistent data
is stored by default) can be replicated (e.g. via a block-level storage
replication solution) or "shipped" via an external process (e.g. rsync) to
a DR site.
I am looking at options for handling local failover and disaster recovery.
Our existing primary and secondary data centre hosted services run mainly
active-passive and have SAN storage but do not support SAN replication.
There is a dedicated network between the two DCs, so we have fast, reliable
connectivity.