Hi,

tedd wrote on Wednesday, September 12, 2007 4:33 PM:
> Hi:
> 
> I'm developing php scripts for a client that will eventually use his
> database. However, I don't want to touch his database during
> development and instead want to use a copy.
> 
> While I thought it was going to be easy to make a copy (i.e., just
> dump the database and reload it via phpMyAdmin) the database turns
> out to be too large.
> 
> So, I resorted to saving individual tables with data and then
> reloading these one at a time. However, even some of those tables are
> too large. At 1.3 Meg (not compressed and 132k gz compressed), the
> phpMyAdmin 2.6.0-pl3, after a considerable delay, reports "Service
> Unavailable" and fails.
> 
> The total database size is around 13 Meg and phpMyAdmin reports that
> it can handle uploads up-to 2 Meg, but craters at far less.
> 
> So, what are my options?  Any quick one line solutions? Nothing I've
> read addresses the problem I'm facing.

You'll need to use the mysql command line tools, mysqldump and mysql.
Something like this:

mysqldump -u username -p --databases yourdatabase > yourdatabase.sql

Then copy the .sql file to where you want it, and do something like:

mysql -u username -p
mysql> source yourdatabase.sql
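Since size is the issue here, it may also be worth piping the dump through
gzip (the 1.3 MB table compressed to 132 KB, so the whole 13 MB database
should shrink a lot).  A sketch along the same lines -- database and file
names are placeholders:

```shell
# Dump and compress in one pass; no uncompressed copy ever hits disk
mysqldump -u username -p yourdatabase | gzip > yourdatabase.sql.gz

# ...copy the .sql.gz file to the development machine, then restore
# by decompressing straight into the mysql client:
gunzip < yourdatabase.sql.gz | mysql -u username -p yourdatabase
```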

Of course, read a bit about mysqldump before just doing this.  There are
flags you may want to set, depending on how your databases are named, etc.
In particular, --databases embeds the database name in the dump, and
--add-drop-database adds a DROP DATABASE on top of that -- so double-check
that reloading the dump can't drop your customer's database :)
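One way to be sure the client's database is never touched is to dump
without --databases (so the dump contains no CREATE DATABASE or USE
statements) and load it into a freshly created copy.  A sketch, with
devcopy as a placeholder name for the development copy:

```shell
# Create an empty scratch database for development
mysqladmin -u username -p create devcopy

# Dump only the tables; with no --databases flag, the dump carries
# no reference to the original database name
mysqldump -u username -p yourdatabase > yourdatabase.sql

# Load the dump into the copy; the original is never written to
mysql -u username -p devcopy < yourdatabase.sql
```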

Another option: fairly recent versions of phpMyAdmin have a "Copy database
to" section under the Operations tab.  The copy is done with SQL statements
on the server side, so the data never passes through a browser upload and
the 2 Meg limit doesn't apply.

H

_______________________________________________
New York PHP Community MySQL SIG
http://lists.nyphp.org/mailman/listinfo/mysql
