Hi everyone,
I've been browsing the online archives for a while and haven't found as much information as I'd like to feel really comfortable about replication. I've done a lot of development work with MySQL, but very little administration. I'd also only ever dealt with a single machine until about a month ago, when I was asked to extend our database to multiple machines for security and redundancy.
As it stands right now, I have 3 machines running MySQL, set up as hosts db1, db2 and db3.
db1 is the master
db2 and db3 are set up as slaves and replicate data just fine. The premise is that we'd like to be able to add more slave machines at any time.
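The replication setup itself is straight out of the manual; roughly the following, with the server-ids, account name and password as placeholders:

    # my.cnf on db1 (the master):
    [mysqld]
    server-id = 1
    log-bin            # binary logging, so the slaves have something to read

    # my.cnf on db2 and db3 (each slave gets its own server-id):
    [mysqld]
    server-id = 2
    master-host     = db1
    master-user     = repl     # placeholder account
    master-password = secret   # placeholder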
Problems I'm having:
1. If a new database is created on db1, it is not replicated to db2 and db3. Should my replication user's permissions be "GRANT FILE on *.* ..." or is there a better way to automate duplicating new databases on the slaves?
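For what it's worth, the grant I issued on db1 for the replication account looks roughly like this (account name, host and password are placeholders):

    GRANT FILE ON *.* TO 'repl'@'%' IDENTIFIED BY 'secret';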
2. If a database already exists on all three machines and I create a table in it on db1, the table is likewise not replicated to db2 or db3.
3. I haven't yet checked whether "ALTER TABLE ..." statements replicate.
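When I do check items 2 and 3, my plan is something like this ("testdb" is a placeholder name):

    -- On db1, after the CREATE/ALTER TABLE:
    SHOW MASTER STATUS;      -- note the current binary log file and position

    -- On db2 and db3:
    SHOW SLAVE STATUS;       -- compare the log file/position, check Last_error
    SHOW TABLES FROM testdb; -- did the new/changed table actually arrive?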
4. Darn RedHat and their logrotate utility: logrotate was restarting MySQL on db1 once a week, and each restart created a new binary log (db1.001, db1.002, db1.003, etc.). Meanwhile db2 and db3, which also restart MySQL under logrotate, were not updating their "master.info" files to point at the new ".002"/".003"/".004" log whenever db1 restarted. The same thing happens when db1 reboots.
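So far the only recovery I've found is repointing each slave by hand, along these lines (the log name and position are just examples from my last go-round):

    SLAVE STOP;
    CHANGE MASTER TO
        MASTER_LOG_FILE='db1.003',
        MASTER_LOG_POS=4;      -- 4 being the start of a fresh binary log
    SLAVE START;

Surely there's a way to avoid doing this every week?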
5. In the event of a power or system failure that takes db1 offline, how can I set things up so that either db2 or db3 becomes the master? And if db1 comes back online later, can I make it a slave of whichever machine was promoted? Can this be automated, or will it always require manual intervention?
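My current (completely untested) notion of a manual failover, assuming db2 already has binary logging enabled in its my.cnf, is something like this:

    -- On db3, repoint at the new master (account details are placeholders):
    SLAVE STOP;
    CHANGE MASTER TO
        MASTER_HOST='db2',
        MASTER_USER='repl',
        MASTER_PASSWORD='secret';
    SLAVE START;

    -- Later, when db1 comes back, issue the same CHANGE MASTER TO on db1
    -- to make it a slave of db2.

Am I on the right track?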
6a. I rewrote my Perl applications to connect to db1 and return a handle I call $dbh_w for any SELECT/INSERT/UPDATE/DELETE query, and to connect to any of db1/db2/db3 and return a handle $dbh_r for SELECT-only queries. However, we want to use some third-party software that consists of a few MB of PHP code and opens only a single database connection. I'm not sure how much time to quote to my CTO/CEO for rewriting that entire application the way I rewrote my Perl code. Any advice?
6b. My Perl scripts are 90% reads; the PHP scripts are more like 80% writes, so being able to load-balance writes across the database farm would be ideal. In my current setup the PHP application can only write to db1, which will ultimately max out that machine. Surely there's a better way than opening two connections ($dbh_w/$dbh_r) for every running copy of my Perl or PHP processes?
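For reference, the pattern my Perl scripts use now is roughly this (database name, credentials and the random read-host choice are simplified placeholders):

    use DBI;

    my @readers = qw(db1 db2 db3);

    # Writes, and any queries that must see the latest data, use the master.
    my $dbh_w = DBI->connect('DBI:mysql:database=mydb;host=db1',
                             'appuser', 'secret', { RaiseError => 1 });

    # Plain reads go to a randomly chosen machine.
    my $host  = $readers[ int rand @readers ];
    my $dbh_r = DBI->connect("DBI:mysql:database=mydb;host=$host",
                             'appuser', 'secret', { RaiseError => 1 });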
7. Finally, would 'fake replication' work, where the MySQL data files sit on a RAID array and are NFS-mounted on several machines that each run their own MySQL server? The tables we use have a lot of AUTO_INCREMENT columns, and I'd be worried about data being corrupted or lost.
Thanks,
Ian Douglas