Ingvar,

I have not used mysqldump output directly to restore my databases; I
take the mysqldump output, massage it a bit, and then use it for
restores.

Basically, the "mysqldump" output is missing the "use xxx" command to
change to the target database.  I have a perl script that does that (and
extracts the table defs to another file for documentation).
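The "massage" step doesn't need Perl; the core of it is just prepending
the "use" statement to the dump. A minimal shell sketch (the database
name "mydb" and the file names are placeholders, not my actual setup):

```shell
#!/bin/sh
# Prepend a "use" statement to a mysqldump output file so that a
# plain "mysql < file" restores into the right database.
# "mydb" and the file names below are placeholders.
DB=mydb
DUMP=backup.mysql
OUT=CurrentDbms.mysql

# Simulate a dump file for this sketch:
printf 'CREATE TABLE t (id INT);\n' > "$DUMP"

# Put "use mydb;" on line 1, then the original dump after it:
{ printf 'use %s;\n' "$DB"; cat "$DUMP"; } > "$OUT"

head -n 1 "$OUT"
```

The resulting file is then fed straight to the mysql client, as shown
further down.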

I have database mysqldump outputs that were 15 megabytes (now 10 megs,
as I removed some report storage from the system).  I have restored my
system from these dumps many, many times.  I have 4 systems in use at
present: a Sun E250 as the production server, two SPARC 20s as
devel/test servers, and a W2K system as primary development.  I have an
automated backup running every day on the E250 (just a script to run
mysqldump with crontab), and I FTP these to the devel machine and use
them to rebuild the development databases every couple of days.
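For anyone wanting the same setup, the whole "automated backup" is just
a crontab entry driving a short script. A sketch only - the paths, user
name, and database name here are assumptions, not my actual
configuration, and the password handling is deliberately naive:

```shell
#!/bin/sh
# Nightly mysqldump backup (sketch; names and paths are made up).
DB=mydb
DEST=/var/backups/$DB-$(date +%Y%m%d).mysql

mysqldump -u backup -p"$BACKUP_PW" "$DB" > "$DEST"

# Installed with a crontab entry like (runs at 02:30 every day):
#   30 2 * * * /usr/local/bin/backup-db.sh
```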

I restore from the modified dump by:

mysql -u root -p <CurrentDbms.mysql

where CurrentDbms.mysql is the output of a "mysqldump" with the "use
xxx" (use database) command added as line 1.

I have never had any problem.  Now, perhaps 10-15 megs is not considered
"large".  Well, we started in August with a 2 meg database, and we're
adding users every day. We currently have 775 users, but may expand to
over 10000 before this time next year.

I'll keep this list posted of any problems, but so far the performance
has been exceptional.

I chose MySQL based on industry "hearsay" about its speed.  I was
looking for "anything" at the time - as Oracle 8i has a known bug in the
Solaris/Sparc install scripts that essentially prevent it from being
installed by anyone except an Oracle certified DBA (turns out you have
to interrupt the install, manually "hack" some config files, and then
restart the install scripts).

However, once I had downloaded and installed MySQL, I became a convert.
MySQL is very robust, VERY fast, and has a very tiny footprint compared
to other database products.

I do miss nested sub-queries and all that, but I've found a good
workaround: run the sub-query as a normal query and capture the
output to a file, then edit the file into the outer query
(usually some form of update).
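That two-pass trick is easy to script as well. A sketch with made-up
table and column names ("customers", "orders", "overdue" - purely for
illustration); pass 1 would normally be a "mysql -N -e" call, which is
simulated here with a plain file:

```shell
#!/bin/sh
# Stand-in for a nested sub-query, done in two passes.
# Pass 1 (the "inner" query) would be something like:
#   mysql -N -e "SELECT id FROM customers WHERE overdue = 1" mydb > ids.txt
# For this sketch we simulate its output:
printf '3\n7\n12\n' > ids.txt

# Pass 2: turn each captured id into the "outer" UPDATE statement,
# ready to feed back into the mysql client.
awk '{ printf "UPDATE orders SET overdue = 1 WHERE customer_id = %s;\n", $1 }' \
    ids.txt > outer.sql

cat outer.sql
```

The generated outer.sql then goes back through "mysql -u root -p mydb
<outer.sql" in the usual way.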

Sorry for the long (gradually off topic) reply - but I've had no
problems with large mysqldump restores.

Cheers,

-Richard
================== quote ====================
I was just reading an annoying article from a user who was moving from
MySql to Postgres because of stability problems and because of problems
restoring "large" MySql data dumps. I checked his site and it does not
have the appearance of a very heavily loaded site, so I am not sure
what he means by large.

This is the article: http://webmasterbase.com/article/529

I was wondering what size of mysqldump file creates problems when
importing the data dumps (is it equipment dependent -> size of memory,
speed of processor, etc.?), also under what circumstances does MySql
become unstable (how many concurrent users? what kind of queries?
etc.), or is the stability question something that has changed a lot
with version 3.23 and is just not relevant any more?

I used Oracle8i before (under very heavy load) and did not have any
problems there. I am now using MySql under not so heavy load and have
not yet had any problems with it, but I just want to be prepared when
and if they come along.

I would be grateful to hear your opinions and experiences, good or bad.

Regards,

Ingvar G.
us.logiledger.com
Web accounting & CRM

======================================
