Re: Replication basics

2004-11-03 Thread Gleb Paharenko
Hi.

You should really read the FAQ:
http://mysql.ensita.net/doc/mysql/en/Replication_FAQ.html

If you want us to fine-tune replication for you, you may want to order
commercial support; go to: https://order.mysql.com/?ref=ensita.

ian douglas <[EMAIL PROTECTED]> wrote:

> Hi everyone,
>
> I've been browsing the online archives for a while and haven't found as
> much data as I'd like to feel really comfortable about replication. I've
> been a pretty busy developer in terms of using MySQL, but never so much
> in the way of administration. Also, I've only ever had to deal with a
> single machine until about a month ago when I was asked to extend our
> database to multiple machines for security and redundancy.
>
> As it stands right now, I have 3 machines running MySQL, set up as hosts
> db1, db2 and db3.
>
> db1 is a master
> db2 and db3 are set as slaves, and replicate data just fine with the
> premise that we'd like to add numerous slave machines at any time.
>
> Problems I'm having:
>
> 1. If a new database is created on db1, this is not replicated on db2
> and db3. Should my permissions be "GRANT FILE on *.* ..." or is there a
> better way to automate duplicating this database on the slaves?
>
> 2. If a database exists on all three machines already and I create a
> table on db1, this is also not replicated on db2 or db3.
>
> 3. I haven't checked yet whether "ALTER TABLE ..." commands have worked
> across the replication.
>
> 4. Darn RedHat and their logrotate utility: db1 was having MySQL restart
> once a week, which created db1.001, db1.002, db1.003, etc., every time
> it restarted, yet db2 and db3 also running logrotate and having MySQL
> restart because of it, were not updating their "master.info" files to
> point at the ".002" or ".003" or ".004" files whenever db1 was
> restarted. This also happens if db1 reboots.
>
> 5. In the event of power failure, or system failure, if db1 is offline,
> how can I set the systems so either db2 or db3 becomes a master? And if
> db1 comes back online later, could I set it as a slave to whichever
> other machine became a master? Can this be automated, or will it always
> require manual intervention?
>
> 6a. I rewrote my Perl applications to connect to db1 and return a handle
> I call $dbh_w for any SELECT/INSERT/UPDATE/DELETE queries, and to
> connect to any of db1/db2/db3 for any SELECT queries as $dbh_r. However,
> we want to use some third-party software that contains a few MB of PHP
> code, and only connects to the database one time - I'm not sure how much
> time to allocate (to tell my CTO/CEO) to rewrite the entire application
> the same way I did my Perl applications. Any advice?
> 6b. My Perl scripts are 90% reading data back from the database, the PHP
> scripts are more like 80% writing to the database so being able to
> load-balance the writing to the database farm is ideal. In my current
> scenario, the PHP application can only write to db1, which will
> ultimately cap out the machine. Surely there's a better way than making
> two connections ($dbh_w/$dbh_r) to the database for each copy of my Perl
> or PHP processes running?
>
> 7. Finally, would 'fake replication' work if the MySQL database files
> were on a RAID system and mounted via NFS to multiple machines to
> actually run the MySQL engine? The tables we use have a lot of
> auto_increment fields, and I'd be worried about data being corrupted or
> lost.
>
> Thanks,
> Ian Douglas

-- 
For technical support contracts, go to https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Replication basics

2004-11-02 Thread ian douglas
>> 1. If a new database is created on db1, this is not replicated on db2
>> and db3. ... is there a better way to automate duplicating this
>> database on the slaves?
> Yes, put binlog-ignore-db=mysql,test in your my.cnf on the master; this
> means the master will replicate any event to any database other than
> mysql and test.
So, by your statement:
binlog-ignore-db=
... and leaving it blank will tell MySQL to replicate EVERY database?
That's the behavior that I want. I will not be ignoring any databases on 
the master - all data on the master (databases, tables, data) must exist 
on every system, at least for the time being. Later on, we may replicate 
only certain tables onto certain slaves depending on our user load and 
which databases are more frequently used. However, for the foreseeable 
future, all data must exist on all machines.
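
If I'm reading the docs right, that is what happens by default as long as there
are no binlog-do-db / binlog-ignore-db lines on the master and no
replicate-do-* lines on the slaves, i.e. something roughly like this (the repl
user and its password are placeholders for whatever replication account is
used):

   # master (db1) my.cnf
   [mysqld]
   server-id = 1
   # binary logging; this is what produces the db1.001, db1.002, ... files
   log-bin   = db1

   # slave (db2, db3) my.cnf; each slave needs its own unique server-id
   [mysqld]
   server-id       = 2
   master-host     = db1
   master-user     = repl
   master-password = secret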

>> 2. If a database exists on all three machines already and I create a
>> table on db1, this is also not replicated on db2 or db3.
> Why not?
That's my question too. I haven't seen anything on how to allow this 
behavior yet. I've seen some text about "LOAD TABLE tablename FROM 
MASTER" but I need a way to automate this so there is less manual 
intervention required.
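
Before going down the LOAD TABLE route, the first thing I plan to check is
whether the slaves are even connected and applying statements at all, by
comparing the master's current position with what each slave reports:

   on db1:
       mysql> SHOW MASTER STATUS;
   on db2 and db3:
       mysql> SHOW SLAVE STATUS;

As I understand it, if the slave's log file/position isn't keeping up with the
master, or the slave thread columns (Slave_IO_Running/Slave_SQL_Running, or
just Slave_Running on older servers) say No, then the CREATE DATABASE and
CREATE TABLE statements never reach the slaves in the first place.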

> Are you explicitly replicating a set of tables from your slaves via
> replicate-do-table=foo.bar?
I need to replicate everything, and I'm not setting any options like 
this in the slave's my.cnf. Each slave must be a complete mirror copy of 
the master.
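
(For the later, partial-replication setup, my understanding is that it would
just be per-slave lines along these lines, with made-up table names; nothing
like that is in my configs today:)

   # only on a slave that should carry a subset of tables
   replicate-do-table = somedb.users
   replicate-do-table = somedb.orders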

>> 3. I haven't checked yet whether "ALTER TABLE ..." commands have worked
>> across the replication.
> Yes they do.
To finish my statement: "... in my implementation" - that is, I haven't 
tested my implementation to know if an "ALTER TABLE ..." query works or not.

>> 4. Darn RedHat and their logrotate utility: ... created db1.001,
>> db1.002, db1.003, etc., ... This also happens if db1 reboots.
> Stop the logrotate script.
Which I did last week when I realized this was why my slaves hadn't 
sync'd up in a week... But this past weekend a few transformers blew in 
the neighborhood around our office and a power surge shut off db1 
(master) - when it came back online, it created a db1.002 file and 
updated its own db1.index file, yet the two slave machines did not 
update their master.info records. How can I automate the slaves to 
detect this change, and use the db1.002 file? (db2 and db3 were not shut 
off, they kept running just fine)
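
Right now the only fix I know of is to repoint each slave by hand, using the
file name and position that SHOW MASTER STATUS reports on db1, roughly:

   mysql> STOP SLAVE;
   mysql> CHANGE MASTER TO MASTER_LOG_FILE='db1.002', MASTER_LOG_POS=4;
   mysql> START SLAVE;

(position 4 being the start of a freshly created binary log, if I've read the
manual correctly). What I'd really like is for the slaves to pick this up
without me logging in to each one.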

>> 5. In the event of power failure, or system failure, if db1 is offline,
>> how can I set the systems so either db2 or db3 becomes a master?
> write your own load / failover software.
Thanks for the pointer. Was hoping there was some sort of round-robin 
scenario.
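
From what I've read so far, the manual version of that promotion would be
roughly the steps below (untested on my end; db2 stands in for whichever slave
gets promoted, and it needs log-bin enabled ahead of time):

   on db2, once it has applied everything it received from db1:
       mysql> STOP SLAVE;
       mysql> RESET MASTER;
   on db3 (and any other remaining slave):
       mysql> STOP SLAVE;
       mysql> CHANGE MASTER TO MASTER_HOST='db2';
       mysql> START SLAVE;

... and then the applications have to be pointed at db2 for writes. When db1
comes back up it should be able to rejoin as a slave of db2 with the same
CHANGE MASTER TO statement. It's the automating-all-of-this part that I'm
stuck on.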

>> 6. [paraphrase] Rewriting Perl and PHP code
> Look at DBI::Multiplex
... anyone know of a PHP implementation of this? It would sure save me 
some time.

>> 7. Finally, would 'fake replication' work if the MySQL database files
>> were on a RAID system and mounted via NFS
> No, not reliable.
Noted, thank you for saving me some grief.
-id
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


RE: Replication basics

2004-11-02 Thread Dathan Vance Pattishall


DVP

Dathan Vance Pattishall http://www.friendster.com


> -Original Message-
> From: ian douglas [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, November 02, 2004 9:30 AM
> To: [EMAIL PROTECTED]
> Subject: Replication basics
> 
> Hi everyone,
> 
> I've been browsing the online archives for a while and haven't found as
> much data as I'd like to feel really comfortable about replication. I've
> been a pretty busy developer in terms of using MySQL, but never so much
> in the way of administration. Also, I've only ever had to deal with a
> single machine until about a month ago when I was asked to extend our
> database to multiple machines for security and redundancy.
> 
> As it stands right now, I have 3 machines running MySQL, set up as hosts
> db1, db2 and db3.
> 
> db1 is a master
> db2 and db3 are set as slaves, and replicate data just fine with the
> premise that we'd like to add numerous slave machines at any time.
> 
> Problems I'm having:
> 
> 1. If a new database is created on db1, this is not replicated on db2
> and db3. Should my permissions be "GRANT FILE on *.* ..." or is there a
> better way to automate duplicating this database on the slaves?

Yes, put binlog-ignore-db=mysql,test in your my.cnf on the master; this
means the master will replicate any event to any database other than
mysql and test.


> 
> 2. If a database exists on all three machines already and I create a
> table on db1, this is also not replicated on db2 or db3.

Why not? Are you explicitly replicating a set of tables from your slaves via
replicate-do-table=foo.bar?


> 
> 3. I haven't checked yet whether "ALTER TABLE ..." commands have worked
> across the replication.

Yes they do.

> 
> 4. Darn RedHat and their logrotate utility: db1 was having MySQL restart
> once a week, which created db1.001, db1.002, db1.003, etc., every time
> it restarted, yet db2 and db3 also running logrotate and having MySQL
> restart because of it, were not updating their "master.info" files to
> point at the ".002" or ".003" or ".004" files whenever db1 was
> restarted. This also happens if db1 reboots.

Stop the logrotate script.


> 
> 5. In the event of power failure, or system failure, if db1 is offline,
> how can I set the systems so either db2 or db3 becomes a master? And if
> db1 comes back online later, could I set it as a slave to whichever
> other machine became a master? Can this be automated, or will it always
> require manual intervention?

Use NDB or write your own load / failover software.


> 
> 6a. I rewrote my Perl applications to connect to db1 and return a handle
> I call $dbh_w for any SELECT/INSERT/UPDATE/DELETE queries, and to
> connect to any of db1/db2/db3 for any SELECT queries as $dbh_r. However,
> we want to use some third-party software that contains a few MB of PHP
> code, and only connects to the database one time - I'm not sure how much
> time to allocate (to tell my CTO/CEO) to rewrite the entire application
> the same way I did my Perl applications. Any advice?
> 6b. My Perl scripts are 90% reading data back from the database, the PHP
> scripts are more like 80% writing to the database so being able to
> load-balance the writing to the database farm is ideal. In my current
> scenario, the PHP application can only write to db1, which will
> ultimately cap out the machine. Surely there's a better way than making
> two connections ($dbh_w/$dbh_r) to the database for each copy of my Perl
> or PHP processes running?

Look at DBI::Multiplex
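
(I think the CPAN name is actually DBD::Multiplex.) A hand-rolled version of
the read/write split you already do is only a few lines of plain DBI, something
like this (the host names, database name and credentials are placeholders):

   use DBI;

   my @read_hosts = qw(db1 db2 db3);   # any of these can serve SELECTs
   my $write_host = 'db1';             # all writes go to the master

   # one handle for writes, one for reads picked at random per process
   my $dbh_w = DBI->connect("DBI:mysql:database=mydb;host=$write_host",
                            'user', 'password', { RaiseError => 1 });
   my $read_host = $read_hosts[ int rand @read_hosts ];
   my $dbh_r = DBI->connect("DBI:mysql:database=mydb;host=$read_host",
                            'user', 'password', { RaiseError => 1 });

   # writes through $dbh_w, reads through $dbh_r
   $dbh_w->do("INSERT INTO t (col) VALUES (?)", undef, 'some value');
   my $rows = $dbh_r->selectall_arrayref("SELECT col FROM t");

For PHP you would have to wrap mysql_connect() the same way and route queries
through the right handle yourself.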


> 
> 7. Finally, would 'fake replication' work if the MySQL database files
> were on a RAID system and mounted via NFS to multiple machines to
> actually run the MySQL engine? The tables we use have a lot of
> auto_increment fields, and I'd be worried about data being corrupted or
> lost.

No, not reliable. Locks would be ignored over NFS, and the MySQL server on one
machine will detect that the index file was changed on server B and mark the
record or index file as crashed.


> 
> Thanks,
> Ian Douglas
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]