On Jan 30, 2008, at 10:44 AM, Jerry Schwartz wrote:
> >> mysqldump -A > file.dump
> >> tar -jcf file.dump
> >> rsync
> >
> > [JS] You could also just pipe the output of mysqldump through gzip.
> > tar buys
> > you nothing, since it is a single file.
>
> -j is the bzip2 compression option. :)
[JS] Yes, but tar is just extra baggage.
Regards,
Jerry
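Jerry's point as a concrete pipeline (the file name is just an example): mysqldump already emits a single stream, so compressing it directly replaces the tar step entirely.

```shell
# Dump all databases and compress in one pipeline -- no intermediate
# uncompressed dump file on disk, and no tar archive needed.
mysqldump -A | gzip > file.dump.gz
```

On the receiving side, `gunzip -c file.dump.gz | mysql` restores it without ever unpacking the file.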
Is there a reason this wouldn't work with InnoDB? (I understand
there's usually a single ibdata file, but so?)
On Jan 24, 2008, at 8:08 AM, Matthias Witte wrote:
On Thu, Jan 24, 2008 at 01:42:38PM +0200, Ivan Levchenko wrote:
Hi All,
What would be the best way to transfer a 20 gig db from one host to another?
On Jan 29, 2008, at 10:02 AM, Jerry Schwartz wrote:
mysqldump -A > file.dump
tar -jcf file.dump
rsync
[JS] You could also just pipe the output of mysqldump through gzip.
tar buys
you nothing, since it is a single file.
-j is the bzip2 compression option. :)
--
MySQL General Mailing List
All,
InnoDB and MyISAM tables are platform-independent, so is there a reason
why transferring the actual binary files from one system (or mount
point) to another is not advised? From like OS to like OS this seems
like it should be okay, and you'll only get into trouble with Float/Double columns.
MySQL Document
> -----Original Message-----
> From: Chris [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, January 29, 2008 2:02 AM
> To: Ivan Levchenko
> Cc: mysql@lists.mysql.com
> Subject: Re: transfer huge mysql db
>
> Ivan Levchenko wrote:
> > Hi All,
> >
> > What would be the best way to transfer a 20 gig db from one host to another?
Ivan Levchenko wrote:
Hi All,
What would be the best way to transfer a 20 gig db from one host to another?
mysqldump -A > file.dump
tar -jcf file.dump.tar.bz2 file.dump
rsync
or use replication to do it (might take a bit longer this way though).
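For the replication route, a rough sketch of the 2008-era setup (the host, user, password, and binlog coordinates below are all hypothetical placeholders): load a snapshot on the new host first, then point it at the old host and let it catch up while the old server stays live.

```shell
# On the new host, after importing a consistent snapshot (e.g. a dump
# taken with --master-data), start replicating from the old host.
# Every value here is a placeholder, not a real coordinate:
mysql -e "CHANGE MASTER TO
    MASTER_HOST='oldhost',
    MASTER_USER='repl',
    MASTER_PASSWORD='replpass',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;"
```

Once the replica has caught up you can cut client traffic over to it, which avoids a long outage at the cost of the extra setup.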
Thanks Everybody for your help!
I'll think over what would be the best in my situation...
/me hoping to add a success story to this thread later
On Jan 24, 2008 3:08 PM, Matthias Witte <[EMAIL PROTECTED]> wrote:
On Thu, Jan 24, 2008 at 01:42:38PM +0200, Ivan Levchenko wrote:
> Hi All,
>
> What would be the best way to transfer a 20 gig db from one host to another?
If it consists of MyISAM tables, you can do a pre-rsync with everything
up and running.
Then you would lock all tables and do the real sync.
A plain binary copy would require shutting down the db entirely, so just
locking some tables for a while may be more desirable.
I've always found the mysql compression to be a bit weak over a slow
link. The way I tend to do this sort of thing is:
mysqldump --opt -B dbname | bzip2 -9c | ssh
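The truncated pipeline above might be completed along these lines (the host and database names are placeholders): dump, compress hard for the slow link, and load on the far side in a single stream, with nothing written to disk on either end.

```shell
# -B adds CREATE DATABASE / USE statements to the dump, so the remote
# mysql needs no database argument; bzip2 -9 trades CPU for bandwidth.
mysqldump --opt -B dbname | bzip2 -9c | ssh user@newhost 'bunzip2 -c | mysql'
```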
Do a binary copy; an SQL dump will be slow.
Saravanan
--- On Thu, 1/24/08, Ivan Levchenko <[EMAIL PROTECTED]> wrote:
> From: Ivan Levchenko <[EMAIL PROTECTED]>
> Subject: transfer huge mysql db
> To: mysql@lists.mysql.com
> Date: Thursday, January 24, 2008, 6:12 PM
> Hi All,
>
> What would be the best way to transfer a 20 gig db from one host to another?
Heh.. a little problem with this is that I'm going to do all of this remotely.
I was thinking of doing a mysqldump with the --compress flag.
Anything to say about the MySQL migration tools? Can they be used to do
the job more efficiently?
It's just that the db is in high use and I don't want to lock the tables.
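One caveat about the --compress idea: that flag only compresses the client/server protocol traffic between mysqldump and the remote server, not the dump file it writes, so for a remote dump the two kinds of compression combine (the host name here is an example):

```shell
# --compress shrinks what crosses the wire from the remote server;
# gzip shrinks what lands on the local disk.
mysqldump --compress -h oldhost -A | gzip > file.dump.gz
```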