Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server
This sounds like you need to raise max_allowed_packet for mysqldump (and possibly mysqld) - these are separate settings for the client and the server. You can do this via my.cnf (or ~/.my.cnf) or specify it as an option on the command line:

    mysqldump --opt ... --max_allowed_packet=1G dbname > backup-file

On Tue, Jan 13, 2009 at 2:58 PM, Dan d...@entropy.homelinux.org wrote:
> On Tue, 2009-01-13 at 12:19 +0530, Chandru wrote:
>> Hi,
>>
>> Did you try using this command:
>>
>>     mysqldump --opt db_name > db_name.sql -p 2>bkp.err
>
> Not quite. Firstly, I had to alter the normal backup cron job, and that
> doesn't happen until late at night. Secondly, yes, I added the redirection
> to capture errors. There were none (the file was empty this time).
> Thirdly, I didn't use '--opt'.
>
> I had no other suggestions yesterday (before I went to bed, anyway - there's
> one in my inbox this morning), so I did some experimenting of my own and
> changed the dump command to:
>
>     mysqldump --skip-opt --add-drop-table --add-locks --create-options \
>         --quick --lock-tables --set-charset --disable-keys \
>         dbmail > dbmail.sql -pSOME_PASSWORD 2>bkp.err
>
> This made mysqldump do one INSERT per record. The backup *appears* to have
> completed successfully. At least the end of the dump file looks valid. It
> finishes dumping the last table, then a view, then I get:
>
>     -- Dump completed on 2009-01-13 17:23:13
>
> Previously it just stopped part-way through dumping a blob. I have yet to
> do extensive testing on it. I suppose I should try importing the dump file
> into another server and see if I get the correct number of rows in each
> table ...
>
> The only issue now is that the dump file is much smaller than I would have
> expected. When using --opt, I was getting 30GB dump files. I would have
> expected the current format (one INSERT statement per record) to be much
> bigger, but it's 23GB.
> Now, having said that, I did email the current DB administrator and ask
> him to get people to archive all emails with huge attachments somewhere
> on a network share (people have some pretty big attachments). I also
> asked him to get people to clean out their Trash (which happens only when
> we tell them to). So I suppose it's not completely infeasible that this
> alone is responsible for the difference.
>
> Anyway, it's been a very disconcerting experience. It goes without saying
> that people would expect anything that gets into a MySQL database to be
> backed up by mysqldump. And it's worrying that the default --opt can't do
> that. When I get some time I'll file a bug ...
>
> Thanks for your help Chandru.
>
> Dan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql?unsub=arch...@jab.org
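[Editor's note] Dan's verification plan - restoring the dump elsewhere and comparing row counts - can be partly sanity-checked offline first: in a dump produced without --extended-insert, each data row is its own `INSERT INTO `table` VALUES (...);` statement, so counting INSERTs per table gives the expected row count to compare against `SELECT COUNT(*)` after a restore. A minimal sketch; `dump.sql` is a placeholder path:

```shell
#!/bin/sh
# Count INSERT statements per table in a one-insert-per-row dump.
# Each data row appears as: INSERT INTO `table` VALUES (...);
# so the per-table INSERT count should match SELECT COUNT(*) after
# restoring the dump. "dump.sql" is a placeholder filename.
awk '/^INSERT INTO/ { gsub(/`/, "", $3); rows[$3]++ }
     END { for (t in rows) printf "%s\t%d\n", t, rows[t] }' dump.sql | sort
```

Incidentally, the long option list in Dan's command appears to be exactly the options that --opt enables, minus --extended-insert, so `mysqldump --opt --skip-extended-insert ...` should be an equivalent, shorter spelling.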
Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server
On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner andrew.b.gar...@gmail.com wrote:
> This sounds like you need to raise max_allowed_packet for mysqldump (and
> possibly mysqld) - these are separate settings for the client and the
> server. You can do this via my.cnf (or ~/.my.cnf) or specify it as an
> option on the command line:
>
>     mysqldump --opt ... --max_allowed_packet=1G dbname > backup-file

This is certainly the most common advice for this error, yes. I increased the max_allowed_packet size from 1M to 128M when the problem initially occurred. This didn't fix anything.

Since dbmail splits all email bodies / attachments into small chunks and inserts those chunks as separate records, I really don't see how a max_allowed_packet size of 128M would fail ... especially since the data got in there with a max_allowed_packet size of 1M to begin with. The biggest email in the database is 50M. So even if dbmail *hadn't* split the email into separate records, a max_allowed_packet size of 128M should be *easily* big enough, shouldn't it?

As for a max_allowed_packet size of 1G, that just sounds dangerous. The server has 900MB or so of physical RAM and 512MB of swap. It's also running a LOT of other services. I don't want something stupid happening, like Linux's out-of-memory killer coming along and killing MySQL, causing database corruption. Can someone please comment on this? If it's not dangerous, I will try it.

As noted in a prior post, I 'successfully' completed a backup last night, and I'm testing it now, but it took 10 hours to complete and was still running when people came in this morning, which is obviously not desirable. So if I can somehow still use the --opt option of mysqldump by raising max_allowed_packet to some absolutely astronomical level without endangering things, maybe that's the way to go. Maybe ...

Anyway, thanks for the comments Andrew.
Dan
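[Editor's note] On the memory question: as I understand the MySQL documentation, max_allowed_packet is only an upper bound on a per-connection communication buffer; the buffer starts at net_buffer_length and grows on demand, so a large cap does not preallocate that much RAM (though one genuinely huge row could transiently use up to the cap on each side). A hypothetical my.cnf fragment setting both sides independently, since the server and the mysqldump client each enforce their own limit:

```ini
# Hypothetical my.cnf fragment - values are illustrative, not a
# recommendation for this particular 900MB-RAM server.
[mysqld]
max_allowed_packet = 256M

[mysqldump]
max_allowed_packet = 256M
```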
Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server
On Tue, Jan 13, 2009 at 6:06 PM, Dan d...@entropy.homelinux.org wrote:
> On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
> andrew.b.gar...@gmail.com wrote:
>> This sounds like you need to raise max_allowed_packet for mysqldump (and
>> possibly mysqld) - these are separate settings for the client and the
>> server. You can do this via my.cnf (or ~/.my.cnf) or specify it as an
>> option on the command line:
>>
>>     mysqldump --opt ... --max_allowed_packet=1G dbname > backup-file
>
> This is certainly the most common advice for this error, yes. I increased
> the max_allowed_packet size from 1M to 128M when the problem initially
> occurred. This didn't fix anything.

My apologies. I hadn't read up-thread where this was discussed; given that, max_allowed_packet is almost certainly not the problem. Sorry for the noise.