Tomislav,
Now, I have a specific use case I'd like your opinions on...
I (will) have 2 databases, one local and one on a dedicated server on
the Internet.
The local database is connected to the Internet with a 2Mbit
undedicated symmetric link with a fixed IP address.
I want to set things up so that they work as RAIDb-1 with the
controller on the local db instance, propagating changes to both
servers.
I would load-balance read requests so that local requests never go to
the Internet server.
This is doable with a weighted round-robin load balancer, for example.
I would then recommend having a single controller on the local database
and using the remote database purely as a backup for disaster recovery purposes.
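For reference, a weighted round-robin RAIDb-1 setup is declared in the virtual database configuration file. The fragment below is only a sketch from memory of the C-JDBC/Sequoia DTD; the element names, the backend names (localdb, remotedb) and the weights are assumptions, so check them against the DTD shipped with your distribution:

```xml
<!-- Sketch only: element names assumed from the C-JDBC/Sequoia DTD,
     backend names and weights are illustrative. -->
<RequestManager>
  <LoadBalancer>
    <RAIDb-1>
      <WaitForCompletion policy="all"/>
      <RAIDb-1-WeightedRoundRobin>
        <!-- weight 0 keeps reads off the remote backend -->
        <backendWeight name="localdb" weight="10"/>
        <backendWeight name="remotedb" weight="0"/>
      </RAIDb-1-WeightedRoundRobin>
    </RAIDb-1>
  </LoadBalancer>
</RequestManager>
```

Writes still go to both backends under RAIDb-1; the weights only steer the read load.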
What I'm worried about is link failures and bandwidth capacity.
I would like Sequoia to register a link failure (via a timeout) and,
when the link is up again (presumably between a minute and an hour
later), to send the missing changes to the Internet database to bring
it up to date.
Sequoia does not proactively ping database links, so failures are
detected lazily at query execution time. If you have TCP connections
(which is what JDBC drivers usually use), this might hide temporary link
failures if you have no write requests during the failure. As RAIDb-1
provides synchronous replication, it cannot really provide the feature
you require. However, when a failure is detected, a checkpoint is
automatically inserted in the recovery log. If you are sure that the
failure did not happen in the middle of a transaction, you can force the
controller to recover the node from the failure checkpoint and replay
the missing writes.
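The lazy detection and checkpoint/replay behaviour can be sketched as follows. This is an illustrative model only, not Sequoia's actual implementation; all class and method names (RemoteBackend, LazyReplicator, etc.) are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model: a link failure is only noticed when a write is
// executed, and a checkpoint marks where replay must later start.
class RemoteBackend {
    private final List<String> applied = new ArrayList<>();
    private boolean linkUp = true;

    void setLinkUp(boolean up) { linkUp = up; }

    // The failure only surfaces at execution time (lazy detection).
    void execute(String sql) {
        if (!linkUp) throw new RuntimeException("link timeout");
        applied.add(sql);
    }

    List<String> applied() { return applied; }
}

class LazyReplicator {
    private final List<String> recoveryLog = new ArrayList<>();
    private int failureCheckpoint = -1; // index into recoveryLog

    void write(RemoteBackend remote, String sql) {
        recoveryLog.add(sql);
        if (failureCheckpoint >= 0) return; // backend disabled, log only
        try {
            remote.execute(sql);
        } catch (RuntimeException timeout) {
            // Failure detected lazily: checkpoint the failed write.
            failureCheckpoint = recoveryLog.size() - 1;
        }
    }

    // Once the link is back, replay everything from the checkpoint on.
    void recover(RemoteBackend remote) {
        for (int i = failureCheckpoint; i < recoveryLog.size(); i++) {
            remote.execute(recoveryLog.get(i));
        }
        failureCheckpoint = -1;
    }
}

public class LazyFailureDemo {
    public static void main(String[] args) {
        RemoteBackend remote = new RemoteBackend();
        LazyReplicator rep = new LazyReplicator();
        rep.write(remote, "INSERT 1");
        remote.setLinkUp(false);
        rep.write(remote, "INSERT 2"); // fails, checkpoint set here
        rep.write(remote, "INSERT 3"); // logged only
        remote.setLinkUp(true);
        rep.recover(remote);
        System.out.println(remote.applied()); // all three writes applied
    }
}
```

Note the caveat from above still applies: if the failure hits mid-transaction, replaying blindly from the checkpoint is not safe.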
The only possibility I know of so far is manually backing up the local
instance, restoring it on the Internet instance and plugging the
Internet instance back in. Is that the only way, and can it be automated?
Secondly, is using Sequoia to pass low-volume writes (several
records/minute) over a 2Mbit link feasible?
The way you describe it is probably the safest, but it might be
cumbersome if your database is big (the backup might take some time to
transfer over your link). Regarding the volume of writes, only the
SQL traffic goes through the connection, so 2Mbit should be plenty.
If you want to read from the remote node, it will depend on how much
data you fetch (assuming you want to fail back to that remote node
in case the local one is unavailable).
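A quick back-of-the-envelope check supports "2Mbit should be plenty". The write rate and statement size below are assumptions for illustration (10 writes/minute, ~1 KB of SQL text per write, doubled for protocol overhead):

```java
public class BandwidthEstimate {
    // Average write traffic in bit/s for a given statement rate and size.
    static double bitsPerSecond(double writesPerMinute, double bytesPerWrite) {
        return writesPerMinute * bytesPerWrite * 8 / 60.0;
    }

    public static void main(String[] args) {
        // Assumptions: 10 writes/minute, ~1 KB SQL text each,
        // roughly doubled for protocol overhead.
        double traffic = bitsPerSecond(10, 2 * 1024);
        double link = 2_000_000; // 2 Mbit/s
        System.out.printf("write traffic: %.0f bit/s (%.4f%% of the link)%n",
                traffic, 100 * traffic / link);
    }
}
```

Even with generous assumptions the write traffic is well under a percent of the link, so the bandwidth concern really only matters for reads or for transferring a full backup.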
The other option is to consider asynchronous replication, as Robert
mentioned. This is not supported by Sequoia but might be supported
natively by your database, such as MySQL replication.
Thanks for your interest in Sequoia,
Emmanuel
_______________________________________________
Sequoia mailing list
[email protected]
https://forge.continuent.org/mailman/listinfo/sequoia