> > I am no expert on playing around with IP addresses, but I would think
> > this a rather dodgy option. Wouldn't connections which you appear to
> > have open still get through, and connect to something unexpected?
> > Dynamic DNS would probably work. I cannot guarantee access to the DNS
> > system I (or rather, my customers) are using, so this is not an
> > option. I have therefore had to implement failover at the application
> > level.
> If the connection is open on a machine which is then downed, then any
> further reads/writes would fail, and you would need to reconnect.
But would they fail if they were connecting to the same IP address and
socket? Or would they pick up on each other's "conversation"? If the IP
address changes, as in dynamic DNS, they will certainly fail. But if the
same IP address persists, messages from the old connections might go into
the new machine, at the risk of confusion.
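The question above can be checked in miniature. In the sketch below (a
generic demonstration, not Alec's setup; all names are illustrative), a
client is connected to a server that then "goes down". The client does not
silently continue a conversation with anything else: its next read sees the
connection terminated, so it knows it must reconnect.

```python
import socket

def read_after_server_death():
    """Connect a client, kill the server side, and see what the client reads."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))       # ephemeral port on loopback
    listener.listen(1)
    port = listener.getsockname()[1]

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    server_side, _ = listener.accept()

    # "Machine goes down": close the server's half of the conversation.
    server_side.close()
    listener.close()

    # The old conversation is dead: the client gets end-of-stream (b""),
    # not data from some phantom new peer on the same address.
    client.settimeout(2)
    data = client.recv(1024)
    client.close()
    return data

print(read_after_server_death())
```

Note that a *new* machine answering on the same IP would have no record of
the old TCP connection, so stray segments from old conversations are
rejected by its TCP stack rather than delivered to the application.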
> Is your failover a manual or automatic process? I.e., if one DB goes
> down, do you need to run a script manually to fail over, or does this
> happen transparently?
Automatic. I need unmanned 24/7 operation, but can accept a 10 second
outage at failover (which should be rare). I keep a replicated MySQL
database and a hot-spare of my main application.
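The failover logic described above can be sketched roughly as a polling
loop: probe the primary, and after a few consecutive failed health checks
promote the hot spare. This is an assumption-laden illustration, not
Alec's actual code; `check_alive`, `PRIMARY`, and `SPARE` are hypothetical
stand-ins.

```python
# Hypothetical server names; in practice these would be host addresses.
PRIMARY, SPARE = "db-primary", "db-spare"

def choose_server(check_alive, failures_needed=3):
    """Return PRIMARY while it answers health checks; fall back to SPARE
    after `failures_needed` consecutive failures."""
    failures = 0
    while failures < failures_needed:
        if check_alive(PRIMARY):
            return PRIMARY
        failures += 1
        # In production you would sleep between probes; with a ~1 second
        # interval and 3 probes, the outage stays well inside a 10 second
        # failover budget.
    return SPARE

# Example: a primary that has gone down.
print(choose_server(lambda host: False))
```

The application would then reconnect to whichever server this returns,
which is the "failover at the application level" mentioned earlier.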
> I assume that once the deceased machine returns and drops/reloads its
> databases, it sets the sync levels to be the same?
The action of reloading the database reloads the magic sync table, which
therefore automagically assumes the correct sync level.
> If Master A and Master B are up to date with each other, then should one
> fail, surely a 'SWITCH MASTER TO xxxx' command on each of the readers
> would let them continue to be up to date?
There is no exact "SYNC MASTER TO" command in MySQL. In general, switching
slaveship from one master to another is a far from trivial problem, because
of the indeterminate relationship between the binlog files of the two
masters. MySQL used to have a command which was billed as the first stage
of this process. This command was withdrawn, and I can understand why. I
tried to work out how to do it myself, and only succeeded in getting dizzy.
I would guess that the MySQL team have been this way as well, and decided
that the swamp was too deep for the moment.
The only way I have found of restarting a slave is the cold start method -
stop slaving, drop the database, load from the master, restart slaving.
One could say that this is a shortcoming of MySQL's replication scheme.
However, I regard MySQL's replication as a good 90%/10% solution: it
covers 90% of requirements at 10% of the cost of a perfect solution. I
solve it with a row of relatively inexpensive PCs: another PC or two is
cheaper than the development effort of trying to develop (and particularly
to test) a failsafe hot changeover. I actually cut costs because I have
failover: I don't RAID the database disks unless I need the performance,
because the second and optional third copy of MySQL on separate machines
act as a RAID for me.
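The cold start method above can be written out as the ordered statements
one would feed to the slave. This is a hedged sketch of my understanding,
not a tested script; the database name is illustrative, and the dump step
runs in the shell rather than the SQL session (`mysqldump --master-data`
embeds the `CHANGE MASTER TO` coordinates in the dump).

```python
def cold_start_statements(db="mydb"):
    """Return the ordered steps to cold-start a MySQL slave:
    stop slaving, drop the database, reload from the master, restart."""
    return [
        "STOP SLAVE;",
        f"DROP DATABASE {db};",
        f"CREATE DATABASE {db};",
        # Shell step, not SQL (illustrative host and db names):
        #   mysqldump --master-data -h master_host mydb | mysql mydb
        "START SLAVE;",
    ]

for stmt in cold_start_statements():
    print(stmt)
```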
Alec
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]