I have looked at this idea as well, and I am unsure what problems
issuing a 'flush tables' could cause:

 - Will it take a long time to complete with lots of tables and indexes?
 - Does any locking happen while this is going on, or is each table
available again as soon as it has been re-read from disk?
 - IIRC, MySQL caches indexes.. what happens when a table's datafile has
been changed but not yet re-read (i.e., the cached index no longer
corresponds to the table)?  Does MySQL take this into account, or will a
query against that table yield incorrect data?

Getting there.. thanks for the help :)
Ross




On Tue, 2003-06-03 at 17:24, Andrew Braithwaite wrote:
> OK - in that case there's not much you can do with replication...  But with
> mysql you could write some bash to copy the raw files over like:
> 
> Run this from /datadir (on the server you want to copy from):
> 
> tar cf - databasedirname | ssh server_you_want_it_to_go_to 'cd /datadir; tar
> xf -'
> 
> And supply the password (I think you can do that with an ssh option, or
> set up some ssh keys), then - assuming you have the right permissions set
> up - issue a "flush tables" on all the slaves from the master.  That will
> do the job with a simple file transfer.
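> 
> Putting the two steps together, a rough sketch (hostnames, user and
> password are made up, and it assumes the tables aren't being written to
> while the files are copied):
> 
> #!/bin/bash
> DB=databasedirname
> cd /datadir
> # copy the raw table files for the database over to the target's datadir
> tar cf - $DB | ssh prod 'cd /datadir; tar xf -'
> # make the server close and re-open its table files so it sees the new ones
> mysql -hprod -usomeuser -psomepassword -e 'FLUSH TABLES'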
> 
> If you don't have ssh then you could set up an nfs mount or something to
> let your scripts do the copying (I wouldn't recommend nfs for large
> amounts of fast data transfer, personally...)
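> 
> (Over nfs it would just be a local copy - something like
> "cp -a databasedirname /mnt/prod-datadir/" if prod's datadir were mounted
> on /mnt/prod-datadir; the mount point is made up - followed by the same
> "flush tables".)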
> 
> Hope this helps...
> 
> Andrew
> 
> 
> 
> -----Original Message-----
> From: Ross Simpson [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday 03 June 2003 23:57
> To: Andrew Braithwaite
> Cc: Dathan Vance Pattishall; [EMAIL PROTECTED]
> Subject: RE: table copying/replication
> 
> 
> I looked at that type of idea, and the reason I didn't mention it was the
> indexes -- each table involved has ~5 indexes, and on a large table, that
> equals a lot of work that's already done on 'stage' that must be redone on
> 'prod'.  I'd really like to preserve the indexes from stage if at all
> possible.  Maybe  there's a way to dump and reload the indexes as well as
> the data..
> 
> Thanks for the help, and I hope you don't get fired.. I won't tell anyone ;)
> 
> Ross
> 
> 
> On Tue, 2003-06-03 at 16:53, Andrew Braithwaite wrote:
> > Hi,
> > 
> > Assuming that this system runs on *nix and that "prod" is set up to 
> > replicate to all the "replicas", you could write a small bash script to 
> > push the data from "stage" to "prod", which would then 
> > replicate as normal.
> > 
> > I would author the script something like this...
> > 
> > [bash]# mysqldump -e -hstage -usomeuser somedatabasename | mysql 
> > -hprod -usomeuser somedatabasename
> > 
> > This would dump the entire "somedatabasename" database to the "prod" 
> > server, which would then replicate it to all the slaves.  The -e option 
> > gives extended (multi-row) inserts, which load faster.
> > 
> > If you wanted to overwrite the existing data then use the 
> > --add-drop-table option to mysqldump like this:
> > 
> > mysqldump --add-drop-table -e -hstage -usomeuser somedatabasename | 
> > mysql -hprod -usomeuser somedatabasename
> > 
> > This will then replicate as fast as your network/hardware will allow.
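> > 
> > And if it's only one table at a time, mysqldump will also take a table 
> > name after the database - as above, mysql needs the database named on 
> > its side, since the dump doesn't emit a USE statement ("sometable" is 
> > just a placeholder):
> > 
> > mysqldump --add-drop-table -e -hstage -usomeuser somedatabasename 
> > sometable | mysql -hprod -usomeuser somedatabasename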
> > 
> > Hope this helps.. (I'm all for open source but it's a bit weird that 
> > I'm helping out our state-side mapping competitors here - at least 
> > it's not microsoft - I'll check tomorrow to make sure I don't get 
> > sacked :)
> > 
> > Cheers,
> > 
> > Andrew
> > multimap.com
> > 
> > 
> > -----Original Message-----
> > From: Ross Simpson [mailto:[EMAIL PROTECTED]
> > Sent: Tuesday 03 June 2003 22:44
> > To: Dathan Vance Pattishall
> > Cc: [EMAIL PROTECTED]
> > Subject: RE: table copying/replication
> > 
> > 
> > Thanks for the reply.
> > 
> > I didn't explain properly :)
> > 
> > A diagram should help:
> > 
> > |-----|  1  |----|  2  |--------|
> > |stage| --> |prod| --> |replicas|
> > |-----|     |----|     |--------|
> > 
> > 'stage' is a staging mysql instance, where changes are made all the 
> > time. When the data is ready for production, it needs to be pushed to 
> > 'prod', at which time it will be replicated out to all the slaves.
> > 
> > Step 2 is covered by the answer to my previous question.
> > 
> > Step 1 is really my question.  What I need is a way for a table that 
> > already exists on stage to be copied/replicated/etc. over to prod, but 
> > _only_ when requested, and then immediately.
> > 
> > Any thoughts?
> > 
> > Thanks,
> > Ross
> > 
> > 
> > On Tue, 2003-06-03 at 12:07, Dathan Vance Pattishall wrote:
> > > -->-----Original Message-----
> > > -->From: Ross Simpson [mailto:[EMAIL PROTECTED]
> > > -->Sent: Tuesday, June 03, 2003 10:31 AM
> > > -->To: [EMAIL PROTECTED]
> > > -->Subject: table copying/replication
> > > -->
> > > -->Hello,
> > > -->
> > > -->I have a need for fast copying of a specific table from a master
> > > -->mysql server to a number of slave servers (say 5).
> > > Create the table on the master, and if the master/slave config is
> > > working, the same table will appear on the slave.
> > > 
> > > 
> > > -->
> > > -->The database in question could potentially have up to 2000 tables,
> > > -->and at any time, one of those tables would need to be copied to
> > > -->all 5 of the slaves, upon command of the master.
> > > Make sure you have enough file descriptors - that's up to 6000 files
> > > (three per MyISAM table) that may need to be open. Also set your
> > > ulimit high enough to open all the files.
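> > > 
> > > Something along these lines, for example (the numbers are only
> > > illustrative - size them to your setup):
> > > 
> > > # in the shell / init script, before mysqld starts
> > > ulimit -n 8192
> > > 
> > > # in my.cnf, under [mysqld]
> > > open_files_limit = 8192
> > > table_cache      = 2048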
> > > 
> > > Replication will perform the same action on the slaves as initiated 
> > > by
> > > the master. There is no need for a copy.
> > > 
> > > -->I also looked at doing table copies (insert into .. select * from
> > > -->..), but these seem pretty slow.
> > > It's building the indexes on the fly as well, if there are indexes on
> > > the dst table. It also has to scan the src table and insert every row
> > > into the dst table. You can tweak your my.cnf values to make that
> > > operation happen faster.
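> > > 
> > > For example, the usual suspects for a bulk INSERT ... SELECT would be
> > > something like this (sizes are only examples - tune them to your RAM,
> > > and bulk_insert_buffer_size only exists if you're on 4.0):
> > > 
> > > # in my.cnf, under [mysqld]
> > > key_buffer              = 256M
> > > myisam_sort_buffer_size = 64M
> > > bulk_insert_buffer_size = 64M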
> > > 
> > > -->
> > > -->Has anyone solved this problem before?  Any ideas that would help
> > > -->out?
> > > -->
> > > 
> > > Yes, the mysql team with replication.
> > > 
> > > -->
> > > -->--
> > > -->Ross Simpson <[EMAIL PROTECTED]>
> > > -->MapQuest.com
> > > -->
> > > 
> > --
> > Ross Simpson <[EMAIL PROTECTED]>
> > MapQuest.com
> > 
> > 
> -- 
> Ross Simpson <[EMAIL PROTECTED]>
> MapQuest.com
-- 
Ross Simpson <[EMAIL PROTECTED]>
MapQuest.com


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
