Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

MyISAM does table-level locking, which is to say that reads (SELECT) and
writes (INSERT/UPDATE/DELETE) cannot happen at the same time.  One will wait
for the other.

If your select takes 10 seconds, then any write operations will block for
those 10 seconds.  Other read processes should be unaffected, though perhaps
slightly slower depending on resources you are consuming.

InnoDB avoids the problem described above by implementing transactions and
row-level locking, so that in many cases reads can proceed while writes are
happening.  InnoDB does have disadvantages compared to MyISAM, though, so
it's not always a no-brainer to switch.

Dan



On 5/2/07, Brian Dunning <[EMAIL PROTECTED]> wrote:


The table is MyISAM, does that matter?






Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Brian Dunning

The table is MyISAM, does that matter?







Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Dan Buettner

A few observations:

1 - if the table uses the InnoDB storage engine, you aren't going to lock up
their server, as InnoDB doesn't do table locking.  SHOW TABLE STATUS LIKE
'tablename' will tell you.

2 - Ryan's mysqldump script looks useful - also, there's a little-used
mysqldump option (--where) that lets you specify a WHERE clause to get just
the records you want into the SQL file.

3 - since you're not operating on the server itself, but transferring over
the net, the time for the transfer could become a problem, especially if
you're not using InnoDB.  You could copy the data into a temp table and then
work with that to your heart's content, without tying up production tables.
Something like this:

DROP TABLE IF EXISTS _tmp_tablename;
CREATE TABLE _tmp_tablename LIKE tablename;
INSERT INTO _tmp_tablename SELECT * FROM tablename WHERE whatiwant blah blah;

then use SELECT ... INTO OUTFILE, mysqldump, PHP, etc. on the _tmp_tablename
table.
While this does involve copying lots of records, in my experience, this sort
of thing can be very fast, since it's all self-contained in the database
software.  Not having any indices on your temp table will help too.
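As a concrete illustration of the copy-then-export pattern, here is a
minimal, self-contained sketch using Python's built-in sqlite3 in place of
MySQL (table names and data are invented; MySQL would use CREATE TABLE ...
LIKE plus INSERT ... SELECT rather than sqlite's CREATE TABLE ... AS SELECT):

```python
import sqlite3

# Stand-in data: an in-memory SQLite database plays the role of the
# production MySQL server; table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "open"), (2, "shipped"), (3, "open")])

# Copy only the wanted rows into a scratch table, then export from that,
# so the production table is touched just once, briefly.
conn.execute("DROP TABLE IF EXISTS _tmp_orders")
conn.execute(
    "CREATE TABLE _tmp_orders AS SELECT * FROM orders WHERE status = 'open'"
)

rows = conn.execute("SELECT * FROM _tmp_orders ORDER BY id").fetchall()
# rows -> [(1, 'open'), (3, 'open')]
```

The export step (CSV, mysqldump, whatever) then reads only _tmp_orders and
never blocks writers on the real table.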

HTH,
Dan






Re: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Ryan Stille

I use a script like this:

  #!/bin/bash
  DATE=`date +%A`

  DESTFILE="/home/mysql-backups/mysql-dump-$DATE"

  /usr/bin/mysqldump --skip-extended-insert -uroot -p"password" \
    mydatabase > "$DESTFILE.sql"

  /usr/bin/zip -qjTm "$DESTFILE.zip" "$DESTFILE.sql"

I end up with:
mysql-dump-Friday.zip
mysql-dump-Saturday.zip
etc.

Modify to suit your needs.
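The same rotate-by-weekday-and-zip scheme can be sketched in Python for
anyone without zip(1) handy; the backup directory is a placeholder, and the
step that actually produces the .sql file (mysqldump) would still run
separately first:

```python
import datetime
import os
import zipfile

def dump_path(base_dir="/home/mysql-backups"):
    # One file per weekday (e.g. mysql-dump-Friday), so a week of
    # backups rotates automatically -- the same trick as `date +%A`.
    day = datetime.date.today().strftime("%A")
    return os.path.join(base_dir, "mysql-dump-%s" % day)

def zip_and_remove(sql_path):
    # Rough equivalent of `zip -qjTm`: store the file without its
    # directory path (-j), then delete the original (-m).
    zip_path = os.path.splitext(sql_path)[0] + ".zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(sql_path, os.path.basename(sql_path))
    os.remove(sql_path)
    return zip_path
```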

-Ryan

Brian Dunning wrote:
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I 
need to dump it in CSV format and zip the file.


This is not on my server, and it's in production, so I don't want to 
risk testing different methods and possibly hanging up their server 
for a period of time, so I wanted to seek advice here first to find 
what's the best way to proceed.


I can easily use PHP to query the table for the results I want and 
write a file line by line and then zip it, but I'm worried that might 
take too long and hang up the machine. The other way to go is some 
kind of sql dump command, which I guess would be faster, but not sure 
how much control I'd have over the exact format of the file. Any 
suggestions which way I should proceed? Not hanging up their server is 
my prime concern.











RE: [X-POST] Fastest way to dump this huge table

2007-05-02 Thread Jay Blanchard
[snip]
I have a huge MySQL table, 2.1 million records, 200MB. Once a week I  
need to dump it in CSV format and zip the file.

This is not on my server, and it's in production, so I don't want to  
risk testing different methods and possibly hanging up their server  
for a period of time, so I wanted to seek advice here first to find  
what's the best way to proceed.

I can easily use PHP to query the table for the results I want and  
write a file line by line and then zip it, but I'm worried that might  
take too long and hang up the machine. The other way to go is some  
kind of sql dump command, which I guess would be faster, but not sure  
how much control I'd have over the exact format of the file. Any  
suggestions which way I should proceed? Not hanging up their server  
is my prime concern.
[/snip]

SELECT * INTO OUTFILE "/directory/myfile.csv"
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  FROM table;
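One caveat: INTO OUTFILE needs the FILE privilege and writes the file on the
database server's own filesystem.  With only remote access, the same
comma-separated, optionally-quoted format can be produced client-side and
zipped in one pass.  A minimal Python sketch (the rows here are stand-ins
for what a DB-API cursor would return):

```python
import csv
import io
import zipfile

def rows_to_csv_zip(rows, csv_name, zip_path):
    # Match the INTO OUTFILE format above: fields terminated by ',',
    # optionally enclosed by '"', lines terminated by '\n'.
    buf = io.StringIO()
    writer = csv.writer(buf, quoting=csv.QUOTE_MINIMAL, lineterminator="\n")
    writer.writerows(rows)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(csv_name, buf.getvalue())

# In real use, rows would come from the database, e.g.:
#   cursor.execute("SELECT * FROM table")
#   rows_to_csv_zip(cursor.fetchall(), "myfile.csv", "myfile.zip")
```

For a 200 MB table you would fetch and write in chunks (cursor.fetchmany)
rather than buffering everything in memory, but the shape is the same.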

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]