RE: table copying/replication
This sounds like the best idea yet :)  I have a couple of questions:

- I need to keep the data in x_shadow while still creating table x..
  will copying accomplish the same thing?  I'm guessing it will be
  slower, but keeping the 'shadow' table around is important.

- Will either / both of these (rename and copy) preserve indexes?
  Each table has ~5 indexes, and I don't want to reindex.

Thanks for the ideas,
Ross

On Wed, 2003-06-04 at 02:43, Martin Waite wrote:
> Run a MySQL replication chain from stage to prod to replicas.
>
> On stage, prepare the data in shadow tables with different names to
> those used in prod.  Say, if your real tables are one, two, three,
> four, five, then create shadow tables one_shadow, two_shadow,
> three_shadow...
>
> Once the data is ready in the shadow tables, do a rename:
>
>   rename table one to one_old, one_shadow to one,
>                two to two_old, two_shadow to two, ...
>
> Replication will apply the rename to all your replicas and you should
> get a fairly snappy switchover.  The only drawback is that you have
> two copies of your tables, which might be impractical depending on
> the amount of data involved.
>
> regards,
> Martin

--
Ross Simpson [EMAIL PROTECTED]
MapQuest.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]
RE: table copying/replication
I have looked at this idea as well, and I am unsure what problems
issuing a 'flush tables' could cause:

- will it take a long time to complete with lots of tables and
  indexes?
- does any locking happen while this is going on, or is each table
  available after it's been re-read from disk?
- iirc, mysql caches indexes.. what happens when a table's datafile
  has been changed but hasn't been re-read yet (i.e., the cached index
  no longer corresponds to the table)?  Does mysql take this into
  account, or will a query against this table yield incorrect data?

Getting there.. thanks for the help :)
Ross

On Tue, 2003-06-03 at 17:24, Andrew Braithwaite wrote:
> OK - in that case there's not much you can do with replication...
>
> But with mysql you could write some bash to copy the raw files over.
> Run this from /datadir (on the server you want to copy from):
>
>   tar cf - databasedirname | ssh server_you_want_it_to_go_to 'cd /datadir; tar xf -'
>
> ...and supply the password (I think you can do that with an ssh
> option, or set up some ssh keys).  Then (if you have the right
> permissions set up) issue a flush tables on all the slaves from the
> master - that will do the job with a simple file transfer.
>
> If you don't have ssh then you could set up an nfs mount or something
> to let your scripts do the copying (I wouldn't recommend nfs for
> large amounts of fast data transfer personally...)
>
> Hope this helps...
>
> Andrew
RE: table copying/replication
Hi Ross,

On Wed, 2003-06-04 at 16:35, Ross Simpson wrote:
> This sounds like the best idea yet :)  I have a couple of questions:
>
> - I need to keep the data in x_shadow while still creating table x..
>   will copying accomplish the same thing?  I'm guessing it will be
>   slower, but keeping the 'shadow' table around is important.

I'm not sure I understand the question, but I think you're saying you
want to keep a shadow table after the rename.  Erm, you could create a
shadow of the shadow, or create a new shadow after the switchover.

You have two options for creating the shadow tables:

* inside MySQL:

    create table x_shadow ( ... );
    insert into x_shadow select * from x;

  This SQL will be replicated to your slaves (ie. prod and replicas),
  but depending on the amount of data involved it might hammer your
  slaves and make your production system temporarily unusable.

* outside MySQL, using an OS-level copy (only works for MyISAM
  tables):

  - in mysql, flush tables and possibly lock them to prevent anyone
    else updating them
  - cd mysql-datadir
  - cp x.MYD x_shadow.MYD
  - cp x.MYI x_shadow.MYI
  - cp x.frm x_shadow.frm

  But note that this will have to be repeated on each replica.  You
  could tar up the x_shadow files on stage, copy the tarball to the
  replicas and unpack it in the correct directory.  mysqlhotcopy can
  be used to do the locking and copying for you.

Once the copied shadow tables are in place, a rename issued on the
stage server should replicate fine, swapping x and x_shadow on all
replicas.

> - Will either / both of these (rename and copy) preserve indexes?
>   Each table has ~5 indexes, and I don't want to reindex.

Rename will correctly handle the indexes.  An OS-level copy of MyISAM
files will also preserve the indexes (as long as you get all the files
belonging to a table).

regards,
Martin
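Martin's OS-level copy steps can be sketched as a small script.  This
is a sketch only: the data directory and the MyISAM files (x.MYD,
x.MYI, x.frm) are simulated here with placeholder files so it can run
standalone, and it assumes flush tables (with a lock) has already been
issued on a real server.

```shell
#!/bin/sh
# Sketch of the OS-level shadow-table copy for one MyISAM table.
# DATADIR and the table files are placeholders created here; on a
# real server you would cd into the MySQL data directory instead.
DATADIR=$(mktemp -d)
cd "$DATADIR"
touch x.MYD x.MYI x.frm      # stand-ins for the real MyISAM files

# The copy itself: all three files must be copied together, or the
# index file (.MYI) will no longer match the data file (.MYD).
for ext in MYD MYI frm; do
  cp "x.$ext" "x_shadow.$ext"
done

ls x_shadow.*
```

The same three-file rule applies when tarring the shadow files up for
the replicas: ship .MYD, .MYI and .frm as a unit.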
RE: table copying/replication
---Original Message-
--From: Ross Simpson [mailto:[EMAIL PROTECTED]
--Sent: Tuesday, June 03, 2003 10:31 AM
--To: [EMAIL PROTECTED]
--Subject: table copying/replication
--
--Hello,
--
--I have a need for fast copying of a specific table from a master
--mysql server to a number of slave servers (say 5).

Create the table on the master and, if the master and slave config is
working, then the same table will be on the slave.

--The database in question could potentially have up to 2000 tables,
--and at any time, one of those tables would need to be copied to all
--5 of the slaves, upon command of the master.

Make sure you have enough inodes - that's 6000 files that will be
opened.  Also set your ulimit high enough to open all the files.

Replication will perform the same action on the slaves as initiated by
the master.  There is no need for a copy.

--I also looked at doing table copies (insert into .. select * from
--..), but these seem pretty slow.

It's building the index on the fly as well, if there are indexes on
the dst table.  It also has to scan the src table and insert every row
into the dst table.  You can tweak your my.cnf values to make that
operation happen faster.

--Has anyone solved this problem before?  Any ideas that would help
--out?

Yes, the mysql team - with replication.

--Ross Simpson [EMAIL PROTECTED]
--MapQuest.com
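Dathan's inode/ulimit point can be checked up front: 2000 MyISAM
tables mean up to three files each (.frm, .MYD, .MYI), so roughly 6000
file handles.  A sketch of comparing that against the shell's
open-files limit (the 2000-table figure comes from the thread; the
my.cnf note in the comment is a suggestion, not a tested setting):

```shell
#!/bin/sh
# Compare the open-files limit against the worst case for 2000
# MyISAM tables (three files per table).
TABLES=2000
NEEDED=$((TABLES * 3))
LIMIT=$(ulimit -n)

echo "needed=$NEEDED limit=$LIMIT"
if [ "$LIMIT" != "unlimited" ] && [ "$LIMIT" -lt "$NEEDED" ]; then
  # On the MySQL side, open_files_limit / table_cache in my.cnf
  # would likely need raising as well.
  echo "raise the limit, e.g. ulimit -n $NEEDED"
fi
```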
RE: table copying/replication
Thanks for the reply.  I didn't explain properly :)  A diagram should
help:

             1            2
   |stage| ----> |prod| ----> |replicas|

'stage' is a staging mysql instance, where changes are made all the
time.  When the data is ready for production, it needs to be pushed to
'prod', at which time it will be replicated out to all the slaves.

Step 2 is covered by the answer to my previous question.  Step 1 is
really my question.  My need is that somehow a table already existing
on stage can be copied/replicated/etc over to prod, but _only_ when
requested, and then immediately.

Any thoughts?

Thanks,
Ross

--
Ross Simpson [EMAIL PROTECTED]
MapQuest.com
RE: table copying/replication
Hi,

Assuming that this system runs on *nix and that prod is set up to
replicate to all the replicas, you could write a small bash script to
push the data from stage to prod, which would then replicate as
normal.  I would author the script something like this:

  [bash]# mysqldump -e -hstage -usomeuser somedatabasename | mysql -hprod -usomeuser

This would dump the entire somedatabasename to the prod server, which
would then replicate it to all the slaves, using the -e option for
faster inserts.  If you want to overwrite the existing data then use
the --add-drop-table option to mysqldump, like this:

  mysqldump --add-drop-table -e -hstage -usomeuser somedatabasename | mysql -hprod -usomeuser

This will then replicate as fast as your network/hardware will allow.

Hope this helps..  (I'm all for open source but it's a bit weird that
I'm helping out our state-side mapping competitors here - at least
it's not microsoft - I'll check tomorrow to make sure I don't get
sacked :)

Cheers,

Andrew
multimap.com
RE: table copying/replication
I looked at that type of idea, and the reason I didn't mention it was
the indexes -- each table involved has ~5 indexes, and on a large
table that equals a lot of work that's already been done on 'stage'
and would have to be redone on 'prod'.  I'd really like to preserve
the indexes from stage if at all possible.  Maybe there's a way to
dump and reload the indexes as well as the data..

Thanks for the help, and I hope you don't get fired.. I won't tell
anyone ;)

Ross

--
Ross Simpson [EMAIL PROTECTED]
MapQuest.com
RE: table copying/replication
OK - in that case there's not much you can do with replication...

But with mysql you could write some bash to copy the raw files over.
Run this from /datadir (on the server you want to copy from):

  tar cf - databasedirname | ssh server_you_want_it_to_go_to 'cd /datadir; tar xf -'

...and supply the password (I think you can do that with an ssh
option, or set up some ssh keys).  Then (if you have the right
permissions set up) issue a flush tables on all the slaves from the
master - that will do the job with a simple file transfer.

If you don't have ssh then you could set up an nfs mount or something
to let your scripts do the copying (I wouldn't recommend nfs for large
amounts of fast data transfer personally...)

Hope this helps...

Andrew
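The tar-pipe pattern above can be demonstrated locally without ssh;
the remote form simply inserts "ssh host" between the two sides of the
pipe.  A sketch, using throwaway temp directories and a placeholder
databasedirname in place of a real MySQL data directory:

```shell
#!/bin/sh
# Local demonstration of "tar cf - dir | tar xf -": stream a
# directory tree from SRC and unpack it under DST.  Over the network
# the right-hand side would be wrapped in ssh, e.g.
#   tar cf - databasedirname | ssh host 'cd /datadir; tar xf -'
SRC=$(mktemp -d)
DST=$(mktemp -d)
mkdir -p "$SRC/databasedirname"
echo "row data" > "$SRC/databasedirname/x.MYD"

# The subshells keep the cd local to each side of the pipe.
( cd "$SRC" && tar cf - databasedirname ) | ( cd "$DST" && tar xf - )

cat "$DST/databasedirname/x.MYD"
```

On a live server the tables would need flushing (and ideally locking)
before the copy, as discussed elsewhere in the thread.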
RE: table copying/replication
On Tue, 2003-06-03 at 22:44, Ross Simpson wrote:
> My need is that somehow a table already existing on stage can be
> copied/replicated/etc over to prod, but _only_ when requested, and
> then immediately.

Run a MySQL replication chain from stage to prod to replicas.

On stage, prepare the data in shadow tables with different names to
those used in prod.  Say, if your real tables are one, two, three,
four, five, then create shadow tables one_shadow, two_shadow,
three_shadow...

Once the data is ready in the shadow tables, do a rename:

  rename table one to one_old, one_shadow to one,
               two to two_old, two_shadow to two, ...

Replication will apply the rename to all your replicas and you should
get a fairly snappy switchover.  The only drawback is that you have
two copies of your tables, which might be impractical depending on the
amount of data involved.

regards,
Martin
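With many tables, the single RENAME statement can be generated with a
little shell, in the spirit of the bash snippets earlier in the
thread.  A sketch only: the table names are placeholders, and the
mysql invocation in the comment is illustrative, not a tested command.

```shell
#!/bin/sh
# Build one atomic RENAME TABLE statement for a list of tables,
# swapping each table x with its x_shadow and keeping x as x_old.
TABLES="one two three"

SQL="rename table"
sep=" "
for t in $TABLES; do
  SQL="$SQL$sep$t to ${t}_old, ${t}_shadow to $t"
  sep=", "
done

echo "$SQL;"
# The result could then be sent to the stage server, e.g.:
#   echo "$SQL;" | mysql -hstage -usomeuser somedatabasename
```

Because MySQL executes the whole RENAME as one operation, generating a
single statement (rather than one per table) is what makes the
switchover atomic.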