Mathieu Bruneau wrote:
> I've never experienced a dump that was slow due to the indexes. The indexes
> aren't dumped anyway; they are recreated when you import the data back, so
> they shouldn't matter. (And dropping them will cause problems if the db is
> running.) So I wouldn't drop the indexes on your table if I were you...

Good point.


> You're getting quite a compression ratio: 2.7G => 270 megs.

Oops, I wasn't clear: I killed the dump when it was < 10% done.  It never
would've finished.


> Is it possible
> that your dump is CPU-bound? I have seen this quite often when using
> bzip2, for example, which makes the dump take very long! You can see that
> from top while the dump is running. If that's the case, you could try gzip,
> which takes much less CPU (but will give a bigger dump file).

I am using gzip ... the CPU utilization is at 0%.  The dump runs on a
different server than the DB.
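For what it's worth, here is roughly how I checked (the dump command in the comments is a placeholder; host, database, and table names are made up):

```shell
# While the real dump runs, e.g.
#   mysqldump -h dbhost bigdb bigtable | gzip > bigtable.sql.gz
# `top` shows gzip pegged near 100% CPU if the pipeline is CPU-bound,
# and both mysqldump and gzip near 0% if they are waiting on the server.
#
# A rough local sanity check of gzip's raw speed: compress a few MB and
# compare how quickly this finishes against the dump's actual write rate.
bytes=$(head -c 5000000 /dev/zero | gzip -c | wc -c)
echo "5000000 bytes -> $bytes bytes compressed"
```

If gzip chews through a few MB near-instantly while the dump file grows at a crawl, the bottleneck is clearly not compression.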


> Also, about using the mysqldump 5.0 on a mysql 4.1 server... hmm, not
> sure what side effects that may have! I usually use the version
> that comes with the server...

I guess I could copy the binary and libs to another server to test this.
However, strace suggests that mysqldump is waiting for the server to
send data (it's reading from the socket).

I just checked my latest dump attempt: it has now spent 128077 seconds
(about 35 hours) trying to dump the 29GB table and is making almost no
progress (1 row every 30 seconds, as estimated by strace).  I guess the
MVCC implementation is being pushed to its limits, because I can see
other queries not finishing in a timely manner. :(
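One workaround I'm considering (untested; the host, database, table, and `id` key below are placeholders, and this assumes an integer primary key with known bounds) is to dump the table in primary-key ranges with mysqldump's --where option, so no single dump holds a snapshot open for days:

```shell
# Print one mysqldump command per 1M-row id range; each short dump only
# holds its snapshot briefly. gunzip handles the concatenated gzip
# members produced by ">>". Review the commands, then pipe to sh to run.
step=1000000
max=30000000
i=0
while [ "$i" -lt "$max" ]; do
  echo "mysqldump -h dbhost bigdb bigtable --where=\"id >= $i AND id < $((i+step))\" | gzip >> bigtable.sql.gz"
  i=$((i+step))
done
```

Whether chunking actually relieves the MVCC pressure here is an open question, but it would at least bound how long any one snapshot lives.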

Anyone have any other ideas?

ds

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
