RE: How to verify mysqldump files

2012-11-07 Thread Rick James
A variant on that (Benjamin's process, below)...
1. Pre-validate the slave's consistency using pt-table-checksum.
2. Dump the slave, wipe it clean, restore.
3. RE-validate the slave's consistency using pt-table-checksum.
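A sketch of that cycle (host names and credentials are placeholders; pt-table-checksum runs against the master and reports per-slave differences through the replication stream):

# 1. pre-validate
pt-table-checksum h=master.example.com,u=checker,p=...
# 2. dump the slave, wipe it clean, restore
mysqldump --all-databases -h slave.example.com > slave.sql
# (drop the slave's databases here)
mysql -h slave.example.com < slave.sql
# 3. re-validate
pt-table-checksum h=master.example.com,u=checker,p=...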




RE: How to verify mysqldump files

2012-11-07 Thread Stillman, Benjamin
In the past when I used mysqldump, I used a slave database for backups and
periodically tested restores.

My process for testing:
- Stop the slave process (so the db doesn't get updated).
- Run the backup.
- Create restore_test database.
- Restore the backup to the restore_test database.
- Use mysqldbcompare to compare the two databases.
- Drop restore_test database.
- Start the slave process.

I have this scripted so it just runs and emails me the results.
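A stripped-down sketch of what the script does (database name, connection details, and the mail step are placeholders, not the actual script):

#!/bin/sh
DB=mydb
DUMP=/backups/$DB.sql
mysql -e "STOP SLAVE"
mysqldump "$DB" > "$DUMP"                    # run the backup
mysql -e "CREATE DATABASE restore_test"
mysql restore_test < "$DUMP"                 # restore into the test db
mysqldbcompare --server1=root@localhost \
    "$DB:restore_test" --run-all-tests > /tmp/compare.txt 2>&1
mysql -e "DROP DATABASE restore_test"
mysql -e "START SLAVE"
mailx -s "restore test: $DB" dba@example.com < /tmp/compare.txt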

Useful link:
http://dev.mysql.com/doc/workbench//en/mysqldbcompare.html




-Original Message-
From: Gary [mailto:listgj-my...@yahoo.co.uk]
Sent: Wednesday, November 07, 2012 7:52 AM
To: mysql@lists.mysql.com
Subject: How to verify mysqldump files

Can anyone suggest how I could verify that the files created by mysqldump
are "okay"? They are being created for backup purposes, and the last thing
I want to do is find out that the backups themselves are in some way
corrupt.

I know I can check the output of the command itself, but what if... I
don't know... there are problems with the disc it writes to, or something
like that? Is there any way to check whether the output file is "valid" in
the sense that it is complete and syntactically correct?

--
Gary
Please do NOT send me 'courtesy' replies off-list.





Re: How to verify mysqldump files

2012-11-07 Thread Manuel Arostegui
2012/11/7 Ananda Kumar 

> You can use a checksum to make sure there is no corruption in the file.



That would work for the file integrity itself, not for the data integrity
_in_ the file.

As Claudio suggested, going through the whole recovery process from time
to time is probably the best way to make sure the backed-up data is correct.

Manuel.


Re: How to verify mysqldump files

2012-11-07 Thread Ananda Kumar
You can use a checksum to make sure there is no corruption in the file.
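For example (md5sum is just one choice of tool, and mydb is a placeholder):

mysqldump mydb > mydb.sql
md5sum mydb.sql > mydb.sql.md5     # record the checksum at backup time
# later, before trusting or restoring the file:
md5sum -c mydb.sql.md5             # confirms the file is bit-for-bit intact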

>


Re: How to verify mysqldump files

2012-11-07 Thread Claudio Nanni
Gary,

It is always good practice to test the whole backup/restore solution,
so nothing is better than actually testing a restore; it should really
be a periodic procedure.
As for the validity of the file, that is usually delegated to the
operating system.
If you want to check it yourself, you could write a check that looks for
known patterns in the dump file to recognize that it is correct, perhaps
starting from a known-good dump as a 'valid' sample.
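A minimal sketch of such a pattern check (it assumes the dump was made without --skip-comments, since mysqldump then ends a successful run with a '-- Dump completed on ...' comment):

tail -n 3 mydb.sql | grep -q 'Dump completed on' \
  && echo "dump looks complete" \
  || echo "dump looks truncated"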

Cheers

Claudio





Re: Mysqldump Files

2007-01-30 Thread Alex Arul Lurthu

On a different note, just running a slave will not help when there is an
accidental deletion of data, tables, or databases. So have a slave as
Mikhail mentioned, and also run regular backups on the slave: preferably
a tar of your MySQL data directory taken after shutting the server down.
Also ensure you have log-bin and log-slave-updates enabled on the slave,
and push the tar to more than one location and host so that you can
survive disk corruption. This way you can recover from most database
corruption scenarios.
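A rough sketch of that cycle (paths, host names, and the data directory location are placeholders):

# in the slave's my.cnf (needs a server restart):
#   log-bin
#   log-slave-updates
mysqladmin shutdown                               # clean shutdown
tar czf /backups/mysql-$(date +%F).tar.gz /var/lib/mysql
mysqld_safe &                                     # bring the slave back
scp /backups/mysql-*.tar.gz host1:/backups/       # push to more than one host
scp /backups/mysql-*.tar.gz host2:/backups/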

Thanks
Alex





RE: Mysqldump Files

2007-01-29 Thread Mikhail Berman
Hi David,

Is hard-drive space your major concern, or the ability to recover from a
crash?

Backups are usually taken to be able to recover from a crash, which in
turn means that the faster you can recover, the better. Having a slave
that is constantly updated gives you a very quick way of recovering if
the master goes down: just point your PHP scripts at the slave and be happy.

If you need additional backups, take them from the "full" slave. Stop it
for a while, do the backup, and then restart the slave.
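In its simplest form, that is just (connection options omitted; the file name is a placeholder):

mysql -e "STOP SLAVE"
mysqldump --all-databases > full-slave-backup.sql
mysql -e "START SLAVE"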

For huge backups in our office we use old, and I mean old, Dells with
huge 500GB drives running one of the *nixes.


Regards,

Mikhail Berman




RE: Mysqldump Files

2007-01-29 Thread dpgirago
Hi Mikhail,

I don't think that would save much space, in terms of file size. The
tables that are actively getting inserts are large and growing larger
(~750,000 records), and those that have no activity are either currently
empty or have fewer than a hundred records in them. So just dumping the
active tables would comprise, I'd guess, 99% or more of the database size.

David


RE: Mysqldump Files

2007-01-29 Thread Mikhail Berman
Hi David,

Let me point you in a bit different direction. You are already running
replication, as it seems from your e-mail.

So, why not just run chained replication from the second to the third
server and use "replicate-do-table = [table_name]" in the my.cnf of the
third server to limit the set of tables used by the web server? Or do
full replication to another server from the first one for a full backup?
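In the third server's my.cnf, that would look something like this sketch (the database and table names are placeholders, one line per table to replicate):

replicate-do-table = mydb.active_table_1
replicate-do-table = mydb.active_table_2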

Regards,

Mikhail Berman




Mysqldump Files

2007-01-29 Thread dpgirago
Howdy Guys and Gals,

We are acquiring data on background radiation in a master-slave server 
environment (RH9.0, MySQL 4.0.24) at the rate of approximately 19,000 
records per day.  The data are insert-only into about 25 of 31 tables in 
the database -- no updates are ever applied to the data. Information from 
the database is used via select statements for graphical display and 
report generation amongst other uses. 

A PHP backup script using mysqldump runs as a cron job each night from a 
third server which also functions as an intranet webserver. After 1 1/2 
years of operation, the mysqldump file of the entire database is roughly 
760 MB, and this takes under 2 minutes to create.  Once bzipped and 
tarred, the entire file is 31.7 MB in size, and this part of the backup 
process now takes 46-47 minutes. 

The rate of acquisition of data will be fairly constant, and up to 3 years 
of data will be kept on the live master-slave, so simply doubling all 
these values seems a realistic expectation for a full backup of the 
database after 3 years.  Data older than 3 years would be deleted from the 
master-slave system.

How long would it be reasonable to keep doing a full dump of the
database, versus using mysqldump with a WHERE clause, i.e., doing a
daily incremental backup, say of the last 24 hours?
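Something like this sketch is what I have in mind (the table names and the timestamp column are placeholders):

mysqldump --where="collected_at >= DATE_SUB(NOW(), INTERVAL 1 DAY)" \
    mydb active_table_1 active_table_2 > daily-incremental.sql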

Also, what are the key mysqldump and/or server variables to pay attention 
to in order to process these large, maybe multi-gigabyte dump files?

Thanks,

David

Re: Managing big mysqldump files

2006-08-19 Thread Steve Edberg


Short answer: Yes -

mysqldump | gzip > outputfile.gz

Other alternatives:

You could direct output to a filesystem that is larger than the 90GB 
filesystem you mention (perhaps NFS mounted?).


You could pipe the output of gzip through ssh to a remote server.

You could use bzip2, which compresses substantially better than gzip, 
but with a significant performance/speed penalty (that is, do 
mysqldump | bzip2 > outputfile.bz2).
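Combining a couple of those, for example (the host and path are placeholders):

mysqldump | gzip | ssh backuphost 'cat > /bigdisk/dump.sql.gz'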


Try 'man gzip' and 'man bzip2' for more info.

steve

--
+--- my people are the people of the dessert, ---+
| Steve Edberghttp://pgfsun.ucdavis.edu/ |
| UC Davis Genome Center[EMAIL PROTECTED] |
| Bioinformatics programming/database/sysadmin (530)754-9127 |
+ said t e lawrence, picking up his fork +




RE: Managing big mysqldump files

2006-08-19 Thread Logan, David (SST - Adelaide)
Hi Anil,

Why not pipe the mysqldump output directly into gzip?

e.g.: mysqldump ... | gzip -c > dumpfile.sql.gz

Regards 



---
David Logan
ITO Delivery Specialist - Database
Hewlett-Packard Australia Ltd
E-Mail: [EMAIL PROTECTED]
Desk: +618 8408 4273
Mobile: 0417 268 665
Postal: 148 Frome Street, Adelaide SA 5001, Australia
---




Managing big mysqldump files

2006-08-19 Thread Anil
Hi List,

We are facing a problem managing our mysqldump output file, which is
currently 80 GB in size and growing by 2-3 GB daily, but we have a Linux
partition of only 90 GB. Our backup process is: first generate the
mysqldump file of the whole database, then compress the dump file and
remove the original. Is there any way to get a compressed dump file
directly, instead of generating the dump file and then compressing it?
Any ideas or suggestions, please.

Thanks

Anil

 



Re: Too slow recovering mysqldump files

2004-01-21 Thread Mikel -
Thanks for your fast response. I'll adjust the autocommit parameter too,
and I will let you know if this improves the restore of my database.

Greetings

Mikel


From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: "Mikel -" <[EMAIL PROTECTED]>
CC: <[EMAIL PROTECTED]>
Subject: Re: Too slow recovering mysqldump files
Date: Wed, 21 Jan 2004 21:21:18 +0200
Mikel,

- Original Message -
From: "Mikel -" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, January 21, 2004 7:19 PM
Subject: Re: Too slow recovering mysqldump files
> I'll set up those parameters in my.cnf and try again the recovery from 
the
> dump file. I'll hope these changes make faster the recovery.

Dr. Ullrich suggested that you should set

innodb_flush_log_at_trx_commit=0

during the big import, or wrap the big import inside a transaction.
Otherwise, the speed may be restricted to 1 row / disk rotation = 100
rows/second.
> Thanks again Heikki,
>
> Mikel
>
> P.S. Do you recommend the innodb hot backup tool, does it do faster than
the
> other options, or is a combination of both?
InnoDB Hot Backup takes a BINARY backups of your database. If you need to
restore from a backup taken with InnoDB Hot Backup, you do not need to
import table dumps. Thus restoration is much faster than if you use table
dumps as a backup (table dump = LOGICAL backup).
Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
Order MySQL technical support from https://order.mysql.com/



> >From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> >To: "Mikel -" <[EMAIL PROTECTED]>
> >CC: <[EMAIL PROTECTED]>
> >Subject: Re: Too slow recovering mysqldump files
> >Date: Wed, 21 Jan 2004 05:10:27 +0200
> >
> >Mikel,
> >
> >it is apparently disk-bound.
> >
> >I recommend setting innodb_buffer_pool_size as high as 1.4 GB during 
such
> >big import. Adjust innodb_log_file_size accordingly, and make 
key_buffer
> >smaller during the big import.
> >
> >Help is coming: Marko Mäkelä is writing a compressed InnoDB table 
format,
> >which can squeeze a typical table to 1 / 4 the size of a normal InnoDB
> >table. I believe the compressed format will be available in October 
2004.

> >
> >Best regards,
> >
> >Heikki Tuuri
> >Innobase Oy
> >http://www.innodb.com
> >Foreign keys, transactions, and row level locking for MySQL
> >InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM
> >tables
> >
> >Order MySQL technical support from https://order.mysql.com/
> >
> >
> >- Original Message -
> >From: "Mikel -" <[EMAIL PROTECTED]>
> >To: <[EMAIL PROTECTED]>
> >Cc: <[EMAIL PROTECTED]>
> >Sent: Wednesday, January 21, 2004 12:32 AM
> >Subject: Re: Too slow recovering mysqldump files
> >
> >
> > > Here we are my options:
> > >
> > > set-variable = innodb_log_file_size=150M
> > > set-variable = innodb_log_buffer_size=8M
> > >
> > > I follow the example that came here:
> > > "http://www.mysql.com/doc/en/InnoDB_start.html";
> > >
> > > Greetings and best regards
> > >
> > > Mikel
> > >
> > >
> > > >From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> > > >To: <[EMAIL PROTECTED]>
> > > >Subject: Re: Too slow recovering mysqldump files
> > > >Date: Mon, 19 Jan 2004 22:44:50 +0200
> > > >
> > > >Mikel,
> > > >
> > > >have you set the size of the InnoDB log files as recommended in the
> >manual?
> > > >
> > > >Best regards,
> > > >
> > > >Heikki Tuuri
> > > >Innobase Oy
> > > >http://www.innodb.com
> > > >Foreign keys, transactions, and row level locking for MySQL
> > > >InnoDB Hot Backup - a hot backup tool for InnoDB which also backs 
up
> >MyISAM
> > > >tables
> > > >
> > > >Order MySQL technical support from https://order.mysql.com/
> > > >
> > > >- Original Message -
> > > >From: ""Mikel -"" <[EMAIL PROTECTED]>
> > > >Newsgroups: mailing.database.myodbc
> > > >Sent: Monday, January 19, 2004 7:25 PM
> > > >Subject: Too slow recovering mysqldump files
> > > >
> > > >
> > > > &

Re: Too slow recovering mysqldump files

2004-01-21 Thread Heikki Tuuri
Mikel,

- Original Message - 
From: "Mikel -" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, January 21, 2004 7:19 PM
Subject: Re: Too slow recovering mysqldump files


> I'll set up those parameters in my.cnf and try the recovery from the
> dump file again. I hope these changes make the recovery faster.

Dr. Ullrich suggested that you should set

innodb_flush_log_at_trx_commit=0

during the big import, or wrap the big import inside a transaction.
Otherwise, the speed may be restricted to 1 row / disk rotation = 100
rows/second.
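In practice that means either setting, for the duration of the import (3.23-era my.cnf syntax):

set-variable = innodb_flush_log_at_trx_commit=0

or wrapping the import roughly like this (dump.sql and mydb are placeholders; note each CREATE TABLE still commits implicitly, so this effectively gives one transaction per table's rows):

( echo "SET AUTOCOMMIT=0;"; cat dump.sql; echo "COMMIT;" ) | mysql mydb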

> Thanks again Heikki,
>
> Mikel
>
> P.S. Do you recommend the InnoDB Hot Backup tool? Is it faster than the
> other options, or is a combination of both best?

InnoDB Hot Backup takes a BINARY backup of your database. If you need to
restore from a backup taken with InnoDB Hot Backup, you do not need to
import table dumps. Thus restoration is much faster than if you use table
dumps as a backup (a table dump is a LOGICAL backup).

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL technical support from https://order.mysql.com/




> >From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> >To: "Mikel -" <[EMAIL PROTECTED]>
> >CC: <[EMAIL PROTECTED]>
> >Subject: Re: Too slow recovering mysqldump files
> >Date: Wed, 21 Jan 2004 05:10:27 +0200
> >
> >Mikel,
> >
> >it is apparently disk-bound.
> >
> >I recommend setting innodb_buffer_pool_size as high as 1.4 GB during such
> >big import. Adjust innodb_log_file_size accordingly, and make key_buffer
> >smaller during the big import.
> >
> >Help is coming: Marko Mäkelä is writing a compressed InnoDB table format,
> >which can squeeze a typical table to 1 / 4 the size of a normal InnoDB
> >table. I believe the compressed format will be available in October 2004.

> >
> >Best regards,
> >
> >Heikki Tuuri
> >Innobase Oy
> >http://www.innodb.com
> >Foreign keys, transactions, and row level locking for MySQL
> >InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM
> >tables
> >
> >Order MySQL technical support from https://order.mysql.com/
> >
> >
> >- Original Message -
> >From: "Mikel -" <[EMAIL PROTECTED]>
> >To: <[EMAIL PROTECTED]>
> >Cc: <[EMAIL PROTECTED]>
> >Sent: Wednesday, January 21, 2004 12:32 AM
> >Subject: Re: Too slow recovering mysqldump files
> >
> >
> > > Here we are my options:
> > >
> > > set-variable = innodb_log_file_size=150M
> > > set-variable = innodb_log_buffer_size=8M
> > >
> > > I follow the example that came here:
> > > "http://www.mysql.com/doc/en/InnoDB_start.html";
> > >
> > > Greetings and best regards
> > >
> > > Mikel
> > >
> > >
> > > >From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> > > >To: <[EMAIL PROTECTED]>
> > > >Subject: Re: Too slow recovering mysqldump files
> > > >Date: Mon, 19 Jan 2004 22:44:50 +0200
> > > >
> > > >Mikel,
> > > >
> > > >have you set the size of the InnoDB log files as recommended in the
> >manual?
> > > >
> > > >Best regards,
> > > >
> > > >Heikki Tuuri
> > > >Innobase Oy
> > > >http://www.innodb.com
> > > >Foreign keys, transactions, and row level locking for MySQL
> > > >InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
> >MyISAM
> > > >tables
> > > >
> > > >Order MySQL technical support from https://order.mysql.com/
> > > >
> > > >- Original Message -
> > > >From: ""Mikel -"" <[EMAIL PROTECTED]>
> > > >Newsgroups: mailing.database.myodbc
> > > >Sent: Monday, January 19, 2004 7:25 PM
> > > >Subject: Too slow recovering mysqldump files
> > > >
> > > >
> > > > > Hi list,  does anyone know a faster way to recover a mysqldump
file
> > > >cause
> > > > > When I recovered one dump file it took 26 hours ! to finish, I
think
> > > >it's
> > > > > too slow.
> > > > > Thnx in advanced, greetings
> > > > >
> > > > > MySQL server 3.23.58
&g

Re: Too slow recovering mysqldump files

2004-01-21 Thread Mikel -
I'll set up those parameters in my.cnf and try the recovery from the
dump file again. I hope these changes make the recovery faster.

Thanks again Heikki,

Mikel

P.S. Do you recommend the InnoDB Hot Backup tool? Is it faster than the
other options, or is a combination of both best?

From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: "Mikel -" <[EMAIL PROTECTED]>
CC: <[EMAIL PROTECTED]>
Subject: Re: Too slow recovering mysqldump files
Date: Wed, 21 Jan 2004 05:10:27 +0200
Mikel,

it is apparently disk-bound.

I recommend setting innodb_buffer_pool_size as high as 1.4 GB during such
big import. Adjust innodb_log_file_size accordingly, and make key_buffer
smaller during the big import.
Help is coming: Marko Mäkelä is writing a compressed InnoDB table format,
which can squeeze a typical table to 1 / 4 the size of a normal InnoDB
table. I believe the compressed format will be available in October 2004.
Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
Order MySQL technical support from https://order.mysql.com/

- Original Message -
From: "Mikel -" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Wednesday, January 21, 2004 12:32 AM
Subject: Re: Too slow recovering mysqldump files
> Here we are my options:
>
> set-variable = innodb_log_file_size=150M
> set-variable = innodb_log_buffer_size=8M
>
> I follow the example that came here:
> "http://www.mysql.com/doc/en/InnoDB_start.html";
>
> Greetings and best regards
>
> Mikel
>
>
> >From: "Heikki Tuuri" <[EMAIL PROTECTED]>
> >To: <[EMAIL PROTECTED]>
> >Subject: Re: Too slow recovering mysqldump files
> >Date: Mon, 19 Jan 2004 22:44:50 +0200
> >
> >Mikel,
> >
> >have you set the size of the InnoDB log files as recommended in the
manual?
> >
> >Best regards,
> >
> >Heikki Tuuri
> >Innobase Oy
> >http://www.innodb.com
> >Foreign keys, transactions, and row level locking for MySQL
> >InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up
MyISAM
> >tables
> >
> >Order MySQL technical support from https://order.mysql.com/
> >
> >- Original Message -
> >From: ""Mikel -"" <[EMAIL PROTECTED]>
> >Newsgroups: mailing.database.myodbc
> >Sent: Monday, January 19, 2004 7:25 PM
> >Subject: Too slow recovering mysqldump files
> >
> >
> > > Hi list,  does anyone know a faster way to recover a mysqldump file
> >cause
> > > When I recovered one dump file it took 26 hours ! to finish, I think
> >it's
> > > too slow.
> > > Thnx in advanced, greetings
> > >
> > > MySQL server 3.23.58
> > > RedHat 7.3
> > > 4GB RAM
> > > 2 scsi disk via fiber channel (333GB each)
> > > 2 processor Xeon 1.6GHZ
> > >
> > > dump file size: 2.5 GB
> > > ibdata: 11GB
> > > innodb tables
> > > key_buffer=850M
> > > innodb_buffer_pool_size=850M
> > > table_cache=1500
> > >
> > > _
> > > MSN. Más Útil Cada Día  http://www.msn.es/intmap/
> > >
> > >
> > > --
> > > MySQL General Mailing List
> > > For list archives: http://lists.mysql.com/mysql
> > > To unsubscribe:
> >http://lists.mysql.com/[EMAIL PROTECTED]
> > >
> >
> >
> >--
> >MySQL General Mailing List
> >For list archives: http://lists.mysql.com/mysql
> >To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
> >
>
> _
> MSN. Más Útil Cada Día  http://www.msn.es/intmap/
>
_
MSN Fotos: la forma más fácil de compartir e imprimir fotos.  
http://photos.msn.es/support/worldwide.aspx

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Too slow recovering mysqldump files

2004-01-21 Thread Dr. Frank Ullrich
Mikel,

What about playing with the autocommit setting?
I guess you use the mysql client for the restore, so you could turn
autocommit off and perform a COMMIT yourself every few thousand rows or so.

In our case this boosts recovery by a factor of 5 to 10.
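A sketch of that idea (the batch size of 5000 and the names dump.sql/mydb are arbitrary placeholders; old mysqldump writes one INSERT per row, so counting INSERT lines approximates counting rows):

awk 'BEGIN { print "SET AUTOCOMMIT=0;" }
     { print }
     /^INSERT/ && ++n % 5000 == 0 { print "COMMIT;" }
     END { print "COMMIT;" }' dump.sql | mysql mydb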

Regards,
 Frank.



-- 
Dr. Frank Ullrich, DBA Network Administration
Heise Zeitschriften Verlag GmbH & Co KG, Helstorfer Str. 7, D-30625
Hannover
E-Mail: [EMAIL PROTECTED]
Phone: +49 511 5352 587; FAX: +49 511 5352 538




Re: Too slow recovering mysqldump files

2004-01-20 Thread Heikki Tuuri
Mikel,

it is apparently disk-bound.

I recommend setting innodb_buffer_pool_size as high as 1.4 GB during such
a big import. Adjust innodb_log_file_size accordingly, and make key_buffer
smaller for the duration.
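As a sketch in the my.cnf syntax of that era (the log file and key_buffer sizes here are placeholder choices; after changing innodb_log_file_size, shut down cleanly and move the old ib_logfile* out of the way before restarting):

set-variable = innodb_buffer_pool_size=1400M
set-variable = innodb_log_file_size=350M
set-variable = key_buffer=64M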

Help is coming: Marko Mäkelä is writing a compressed InnoDB table format,
which can squeeze a typical table to 1 / 4 the size of a normal InnoDB
table. I believe the compressed format will be available in October 2004.

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL technical support from https://order.mysql.com/





Re: Too slow recovering mysqldump files

2004-01-20 Thread Mikel -
Here are my options:

set-variable = innodb_log_file_size=150M
set-variable = innodb_log_buffer_size=8M

I followed the example given here:
http://www.mysql.com/doc/en/InnoDB_start.html

Greetings and best regards

Mikel


From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: Re: Too slow recovering mysqldump files
Date: Mon, 19 Jan 2004 22:44:50 +0200
Mikel,

have you set the size of the InnoDB log files as recommended in the manual?

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
Order MySQL technical support from https://order.mysql.com/

- Original Message -
From: ""Mikel -"" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Monday, January 19, 2004 7:25 PM
Subject: Too slow recovering mysqldump files
> Hi list,  does anyone know a faster way to recover a mysqldump file 
cause
> When I recovered one dump file it took 26 hours ! to finish, I think 
it's
> too slow.
> Thnx in advanced, greetings
>
> MySQL server 3.23.58
> RedHat 7.3
> 4GB RAM
> 2 scsi disk via fiber channel (333GB each)
> 2 processor Xeon 1.6GHZ
>
> dump file size: 2.5 GB
> ibdata: 11GB
> innodb tables
> key_buffer=850M
> innodb_buffer_pool_size=850M
> table_cache=1500
>
> _
> MSN. Más Útil Cada Día  http://www.msn.es/intmap/
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
_
MSN. Más Útil Cada Día  http://www.msn.es/intmap/
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Too slow recovering mysqldump files

2004-01-20 Thread Mikel -
I will review the log files. I guess I set the correct size for my
InnoDB log files, but I'll check it out again.

Best Regards

Mikel

P.S. How about InnoDB Hot Backup? Is it much faster?


From: "Heikki Tuuri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Subject: Re: Too slow recovering mysqldump files
Date: Mon, 19 Jan 2004 22:44:50 +0200
Mikel,

have you set the size of the InnoDB log files as recommended in the manual?

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables
Order MySQL technical support from https://order.mysql.com/

- Original Message -
From: ""Mikel -"" <[EMAIL PROTECTED]>
Newsgroups: mailing.database.myodbc
Sent: Monday, January 19, 2004 7:25 PM
Subject: Too slow recovering mysqldump files
> Hi list,  does anyone know a faster way to recover a mysqldump file 
cause
> When I recovered one dump file it took 26 hours ! to finish, I think 
it's
> too slow.
> Thnx in advanced, greetings
>
> MySQL server 3.23.58
> RedHat 7.3
> 4GB RAM
> 2 scsi disk via fiber channel (333GB each)
> 2 processor Xeon 1.6GHZ
>
> dump file size: 2.5 GB
> ibdata: 11GB
> innodb tables
> key_buffer=850M
> innodb_buffer_pool_size=850M
> table_cache=1500
>
> _
> MSN. Más Útil Cada Día  http://www.msn.es/intmap/
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]
>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
_
Charla con tus amigos en línea mediante MSN Messenger:  
http://messenger.microsoft.com/es

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Too slow recovering mysqldump files

2004-01-19 Thread Heikki Tuuri
Mikel,

have you set the size of the InnoDB log files as recommended in the manual?

Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Foreign keys, transactions, and row level locking for MySQL
InnoDB Hot Backup - a hot backup tool for InnoDB which also backs up MyISAM
tables

Order MySQL technical support from https://order.mysql.com/




Too slow recovering mysqldump files

2004-01-19 Thread Mikel -
Hi list, does anyone know a faster way to restore a mysqldump file?
When I restored one dump file it took 26 hours(!) to finish, which I
think is too slow.
Thanks in advance, greetings

MySQL server 3.23.58
RedHat 7.3
4 GB RAM
2 SCSI disks via Fibre Channel (333 GB each)
2 Xeon 1.6 GHz processors

dump file size: 2.5 GB
ibdata: 11 GB
InnoDB tables
key_buffer=850M
innodb_buffer_pool_size=850M
table_cache=1500