Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
This sounds like you need to raise max_allowed_packet for mysqldump
(and possibly mysqld) - these are separate settings for both the
client and the server.  You can do this via the my.cnf (or ~/.my.cnf)
or specify it as an option on the command line: mysqldump --opt ...
--max_allowed_packet=1G dbname > backup-file.
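
For example, in my.cnf (the 1G figure is only an illustration, not a
recommendation):

    [mysqld]
    max_allowed_packet = 1G

    [mysqldump]
    max_allowed_packet = 1G

The [mysqldump] group only affects the client; the server reads [mysqld]
and needs a restart (or SET GLOBAL) to pick up the new value.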

On Tue, Jan 13, 2009 at 2:58 PM, Dan <d...@entropy.homelinux.org> wrote:
 On Tue, 2009-01-13 at 12:19 +0530, Chandru wrote:

 Hi,

   Did you try using this command:


 mysqldump --opt db_name > db_name.sql -p 2>bkp.err

 Not quite. Firstly, I had to alter the normal backup cron job, and that
 doesn't happen until late at night.

 Secondly, yes I added the redirection to capture errors. There were none
 ( empty file this time ).

 Thirdly, I didn't use '--opt'. I had no other suggestions yesterday
 ( before I went to bed anyway - there's 1 in my inbox this morning ), so
 I did some experimenting of my own and changed the dump command to:

 mysqldump --skip-opt --add-drop-table --add-locks --create-options
 --quick --lock-tables --set-charset --disable-keys dbmail > dbmail.sql
 -pSOME_PASSWORD 2>bkp.err

 This made mysqldump emit one INSERT statement per record.

 The backup *appears* to have completed successfully. At least the end of
 the dump file looks valid. It ends dumping the last table, then a view,
 then I get:

 -- Dump completed on 2009-01-13 17:23:13

 Previously it just finished part-way through dumping a blob.

 I have yet to do extensive testing on it. I suppose I should try
 importing the dump file into another server and see if I get the correct
 number of rows in each table ...
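
 Something like this would do it ( host and schema names are made up;
 note TABLE_ROWS is only an estimate for InnoDB, so a COUNT(*) per
 table is the authoritative check ):

 mysql -h testbox -e "CREATE DATABASE dbmail_check"
 mysql -h testbox dbmail_check < dbmail.sql
 mysql -h testbox -e "SELECT table_name, table_rows
     FROM information_schema.tables
     WHERE table_schema = 'dbmail_check'"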

 The only issue now is that the dump file is much smaller than I would
 have expected. When using --opt, I was getting 30GB dump files. I would
 have expected the current format ( 1 insert statement per record ) to be
 much bigger, but it's 23GB. Now having said that, I did email the
 current DB administrator and ask him to get people to archive all emails
 with huge attachments somewhere on a network share ( people have some
 pretty big attachments ). Also I asked him to get people to clean out
 their Trash ( which happens only when we tell them to ). So I suppose
 it's not completely infeasible that this alone is responsible for the
 difference.

 Anyway, it's been a very disconcerting experience. It goes without
 saying that people would expect that anything that gets into a MySQL
 database should be able to be backed up by mysqldump. And it's worrying
 that the default --opt can't do that. When I get some time I'll enter a
 bug ...

 Thanks for your help, Chandru.

 Dan






Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Dan
On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
<andrew.b.gar...@gmail.com> wrote:

 This sounds like you need to raise max_allowed_packet for mysqldump
 (and possibly mysqld) - these are separate settings for both the
 client and the server.  You can do this via the my.cnf (or ~/.my.cnf)
 or specify it as an option on the command line: mysqldump --opt ...
 --max_allowed_packet=1G dbname > backup-file.

This is certainly the most common advice for this error, yes. I increased
the max_allowed_packet size from 1M to 128M when the problem initially
occurred. This didn't fix anything.
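
(For anyone checking their own setup: the server's live value can be
confirmed with

mysql> SHOW VARIABLES LIKE 'max_allowed_packet';

- the mysqldump client's limit is separate.)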

Since dbmail splits up all email body / attachments into small chunks and
inserts these chunks in separate records, I really don't see how a
max_allowed_packet size of 128M would fail ... especially since the data
got in there with a max_allowed_packet size of 1M to begin with. The
biggest email in the database is 50M. So even if dbmail *hadn't* split the
email into separate records, a max_allowed_packet size of 128M should be
*easily* big enough, shouldn't it?

As for a max_allowed_packet size of 1G, that just sounds dangerous. The
server has 900MB or so of physical RAM and 512MB of swap. It's also running
a LOT of other services. I don't want something stupid happening like
Linux's out-of-memory killer coming along and killing MySQL, causing
database corruption. Can someone please comment on this? If it's not
dangerous, I will try it. As noted in a prior post, I 'successfully'
completed a backup last night, and I'm testing it now. But it took 10 hours
to complete and was still running when people came in this morning, which
is obviously not desirable. So if I can somehow still use the --opt option
of mysqldump by raising max_allowed_packet to some absolutely astronomical
level without endangering things, maybe that's the way to go. Maybe ...

Anyway, thanks for the comments Andrew.

Dan





Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
On Tue, Jan 13, 2009 at 6:06 PM, Dan <d...@entropy.homelinux.org> wrote:
 On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
 <andrew.b.gar...@gmail.com> wrote:

 This sounds like you need to raise max_allowed_packet for mysqldump
 (and possibly mysqld) - these are separate settings for both the
 client and the server.  You can do this via the my.cnf (or ~/.my.cnf)
 or specify it as an option on the command line: mysqldump --opt ...
 --max_allowed_packet=1G dbname > backup-file.

 This is certainly the most common advice for this error, yes. I increased
 the max_allowed_packet size from 1M to 128M when the problem initially
 occurred. This didn't fix anything.

My apologies.  I hadn't read up-thread where this was discussed, and
given that, max_allowed_packet is almost certainly not the problem.
Sorry for the noise.




Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Chandru
Hi,

 please increase your interactive_timeout variable to some big number, and
also try to log the errors, if there are any, with this command:

mysqldump --opt db_name > db_name.sql -p 2>bkp.err

Check if you get something in the bkp.err file.
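
For example (86400 is just an illustrative value; both settings take
effect for connections opened after the change):

mysql> SET GLOBAL interactive_timeout = 86400;
mysql> SET GLOBAL wait_timeout = 86400;

Note that mysqldump connects as a non-interactive client, so wait_timeout
is the one it actually runs into.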

Regards,

Chandru,

www.mafiree.com

On Mon, Jan 12, 2009 at 9:07 AM, Daniel Kasak <d...@entropy.homelinux.org> wrote:

 Hi all. I have a 30GB innodb-only database in mysql-5.0.54. I have
 always done nightly backups with:

 mysqldump --opt db_name > db_name.sql -p

 Recently this started failing with:
 Error 2013: Lost connection to MySQL server

 I have checked all tables for corruption - nothing found. Also as far as
 I can tell there are no issues with clients using the database. There
 have been no crashes since I did a full restore. So I assume we can rule
 out corruption.

 I have searched around for the error message, and found people
 discussing the max_allowed_packet option. I've tried increasing the
 server's max_allowed_packet to many different values. Currently it's at
 128M, which is *way* over the default. I have also used the
 --max_allowed_packet option simultaneously with mysqldump. And lastly, I
 have been restarting the server after each my.cnf change.

 The data was inserted via the 'dbmail' application
 ( http://www.dbmail.org ), while the server was set up with the default
 max_allowed_packet size. DBMail breaks up messages into chunks, and
 stores these chunks in individual records. I'm not sure what the default
 size of these chunks is, but I believe it's a reasonable value anyway.

 What next? I *must* get regular backups working again ...

 Dan






Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Dan
On Mon, 12 Jan 2009 16:25:12 +0530, Chandru <chandru@gmail.com> wrote:

 Hi,
 
  please increase your interactive_timeout variable to some big number, and
 also try to log the errors, if there are any, with this command:
 
 mysqldump --opt db_name > db_name.sql -p 2>bkp.err
 
 Check if you get something in the bkp.err file.

Thanks for responding :)

Unfortunately I don't think this is the problem for us. This value is
already at 28800 seconds ( equals 8 hours ). The backup certainly never
used to take that long. The mysql portion of the backup used to take about
90 minutes.

I will retry with your suggestion anyway tonight and post back if something
new happens.

Here are our server variables, which I should have posted the first time
( minus version_bdb, as it will cause horrible text wrapping ):

mysql> show variables
    -> where Variable_name != 'version_bdb';
+-+-+
| Variable_name   | Value   |
+-+-+
| auto_increment_increment| 1   | 
| auto_increment_offset   | 1   | 
| automatic_sp_privileges | ON  | 
| back_log| 50  | 
| basedir | /usr/   | 
| bdb_cache_size  | 8384512 | 
| bdb_home| | 
| bdb_log_buffer_size | 262144  | 
| bdb_logdir  | | 
| bdb_max_lock| 1   | 
| bdb_shared_data | OFF | 
| bdb_tmpdir  | | 
| binlog_cache_size   | 32768   | 
| bulk_insert_buffer_size | 8388608 | 
| character_set_client| latin1  | 
| character_set_connection| latin1  | 
| character_set_database  | latin1  | 
| character_set_filesystem| binary  | 
| character_set_results   | latin1  | 
| character_set_server| latin1  | 
| character_set_system| utf8| 
| character_sets_dir  | /usr/share/mysql/charsets/  | 
| collation_connection| latin1_swedish_ci   | 
| collation_database  | latin1_swedish_ci   | 
| collation_server| latin1_swedish_ci   | 
| completion_type | 0   | 
| concurrent_insert   | 1   | 
| connect_timeout | 10  | 
| datadir | /mnt/stuff/mysql/   | 
| date_format | %Y-%m-%d| 
| datetime_format | %Y-%m-%d %H:%i:%s   | 
| default_week_format | 0   | 
| delay_key_write | ON  | 
| delayed_insert_limit| 100 | 
| delayed_insert_timeout  | 300 | 
| delayed_queue_size  | 1000| 
| div_precision_increment | 4   | 
| keep_files_on_create| OFF |
| engine_condition_pushdown   | OFF | 
| expire_logs_days| 0   | 
| flush   | OFF | 
| flush_time  | 0   | 
| ft_boolean_syntax   | + -><()~*:""&|  | 
| ft_max_word_len | 84  | 
| ft_min_word_len | 4   | 
| ft_query_expansion_limit| 20  | 
| ft_stopword_file| (built-in)  | 
| group_concat_max_len| 1024| 
| have_archive| NO  | 
| have_bdb| DISABLED| 
| have_blackhole_engine   | NO  | 
| have_compress   | YES | 
| have_crypt  | YES | 
| have_csv| NO  | 
| have_dynamic_loading| YES | 
| have_example_engine | NO  | 
| have_federated_engine   | NO  | 
| have_geometry   | YES | 
| have_innodb | YES 

Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Aaron Blew
I'm also having a similar issue with some tables I've been trying to dump
(total data set is around 3TB).  I'm dumping directly from one host to
another (mysqldump -hSOURCE DATABASE | mysql -hLOCALHOST DATABASE) using
mysql 4.1.22.  One system is Solaris 10 SPARC, while the other is Solaris 10
x64 (64bit MySQL as well).

I wrote a script that starts a mysqldump process for each table within a
database, which shouldn't be a problem since the host currently has around
12G unused memory.  Midway through the dump I seem to lose the connection as
Dan described.  After attempting to drop/re-import (using a single process),
the larger tables continue to fail (though at different points) while some
of the small-medium sized tables made it across.
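
A minimal sketch of that per-table approach (not the actual script - no
error handling, and SOURCE, DEST and DB are placeholders):

#!/bin/sh
# one mysqldump per table, each piped straight into the destination server
for t in $(mysql -hSOURCE -N -e 'SHOW TABLES' DB); do
    mysqldump -hSOURCE DB "$t" | mysql -hDEST DB &
done
wait  # block until every per-table dump has finished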

Anyone else run into this before? Ideas?

Thanks,
-Aaron


RE: mysqldump: Error 2013

2005-09-02 Thread Gustafson, Tim
Hello everyone!

I just wanted to give everyone an update.  I'm still getting this error
when I try to back up this database table.  I don't get it at the same
row each time - today was at row 1,618, yesterday it was at row 24,566.
Just a reminder of my symptoms:

1. mysqldump is the only thing reporting any errors
2. the database server itself is not crashing
3. the timeouts on the database server are all set to 86,400 seconds
4. there is plenty of disk space on both the database server and the
backup media
5. max_allowed_packet is 100MB
6. the maximum row size is less than 50MB

I have run the backup by hand a few times (not as part of a cron job,
but rather from my session instead) and it does complete (after about
4-5 hours).  That would be fine, except that the backup slows the entire
system down, so I can't run it during the day - that's why it's usually
part of a cron job that runs at 1AM UTC.

Can anyone offer some suggestions as to what's causing this, and what I
might be able to do to fix it?  Is there any way to maybe split the
backups into 3 or 4 pieces so that no one .sql file is so big and no one
run against the database is so long?
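
(One way to split it, assuming the big table has an integer primary key -
the column name and boundaries below are made up - is mysqldump's --where
option:

mysqldump --opt DB DocumentVariants --where="ID < 30000" > dv-1.sql
mysqldump --opt DB DocumentVariants --where="ID >= 30000 AND ID < 60000" > dv-2.sql
mysqldump --opt DB DocumentVariants --where="ID >= 60000" > dv-3.sql

Each run is shorter and each .sql file stays smaller.)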

Thanks in advance!

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 




RE: mysqldump: Error 2013

2005-08-30 Thread Gustafson, Tim
 Have a look here:
 http://dev.mysql.com/doc/mysql/en/gone-away.html

Gleb,

Thanks for the response.  The only one that seems to apply is this one:

 You may also see the MySQL server has gone away error if
 MySQL is started with the --skip-networking option.

I do start mySQL without networking enabled - it's only accessible from
the local machine (for security reasons).

I can tell you for certain that the mySQL server is definitely not
crashing itself - it chugs along happily without incident.

Interestingly, I ran the backup command from my shell yesterday during
the day (when the server is actually much more active) and the backup
completed successfully.  That one table took about 5 hours to back up
though - I'm not sure if that is normal or not.  Then last night's
automated (unattended) backup completed successfully for the first time
in a few days.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 





Re: mysqldump: Error 2013

2005-08-29 Thread SGreen
Gustafson, Tim [EMAIL PROTECTED] wrote on 08/29/2005 09:24:36 AM:

 Hello
 
 I am using mysqldump to back up my entire database (about 40GB total)
 each night.  I dump each table separately, so that if mysqldump crashes
 in the middle somewhere, the rest of the database still gets backed up.
 
 Most of the tables are fairly small.  About 20GB of the database is
 spread across more than a hundred tables.  However, one table is very
 large - it accounts for the other 20GB of the dataset.
 
 When backing up this table, I get this error message every night:
 
 /usr/local/bin/mysqldump: Error 2013: Lost connection to MySQL server
 during query when dumping table `DocumentVariants` at row: 13456
 
 The table actually has 94,916 rows in it.  There are no entries in the
 mySQL server log and nothing in /var/log/messages.  There is plenty of
 disk space available on the backup drive.  The file is about 4.5GB when
 this happens, which is about 1/5 of the total table size.  The table
 itself contains a lot of binary data in a longblob field, in case that
 makes any difference.  wait_timeout on my server is set to 86400, and
 the whole backup takes less than an hour, so the timeout is not the
 problem.
 
 Has anyone else had similar problems?  Can anyone shed some light on how
 to successfully back up this database?
 
 Thanks!
 
 Tim Gustafson
 MEI Technology Consulting, Inc
 [EMAIL PROTECTED]
 (516) 379-0001 Office
 (516) 908-4185 Fax
 http://www.meitech.com/ 

The one thing I can think of is to check that you are not trying to buffer
your output. Use the --quick option when you start mysqldump (to skip
memory-buffering the dump file) and write the data straight to disk as it
arrives. With a 20GB file it will be very easy to exceed available system
memory allocation limits. With the buffering turned off, you shouldn't hit
that limit.
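
That is, something like this (database and table names are placeholders):

mysqldump --quick DB DocumentVariants > DocumentVariants.sql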

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
Shawn,

Thanks.  I should have included the switches I was using to make the
backup.

I'm using --opt --quote-names, and according to the manual, --opt
includes --quick.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 





Re: mysqldump: Error 2013

2005-08-29 Thread Hassan Schroeder

Gustafson, Tim wrote:


When backing up this table, I get this error message every night:

/usr/local/bin/mysqldump: Error 2013: Lost connection to MySQL server
during query when dumping table `DocumentVariants` at row: 13456

The table actually has 94,916 rows in it.  There are no entries in the
mySQL server log and nothing in /var/log/messages.  There is plenty of
disk space available on the backup drive.  The file is about 4.5GB when
this happens, which is about 1/5 of the total table size.  The table
itself contains a lot of binary data in a longblob field, in case that
makes any difference.  


Does the size of the contents of that field exceed your defined
max_allowed_packet size?

--
Hassan Schroeder - [EMAIL PROTECTED]
Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com

  dream.  code.






RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
No, max_allowed_packet is 100 megabytes, and the maximum data field in the 
database is 50MB right now, and most are well below 10MB.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 





Re: mysqldump: Error 2013

2005-08-29 Thread Hassan Schroeder

Gustafson, Tim wrote:
 No, max_allowed_packet is 100 megabytes, and the maximum data field
 in the database is 50MB right now, and most are well below 10MB.

mmm. OK, not having any more bright ideas :-) I would try dumping
it using an explicit `--where=` clause to guarantee fixed ordering,
and see if it fails on the same row every time.
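
Since mysqldump splices the --where text into the SELECT it issues, one
(admittedly hacky) way to force an order is along these lines - the key
column name is hypothetical:

mysqldump --where="1=1 ORDER BY ID" DB DocumentVariants > dv.sql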

HTH!
--
Hassan Schroeder - [EMAIL PROTECTED]
Webtuitive Design ===  (+1) 408-938-0567   === http://webtuitive.com

  dream.  code.






Re: mysqldump: Error 2013

2005-08-29 Thread Gleb Paharenko
Hello.

 Has anyone else had similar problems?  Can anyone shed some light on how
 to successfully back up this database?

Have a look here:

  http://dev.mysql.com/doc/mysql/en/gone-away.html




-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







Re: mysqldump: Error 2013

2005-08-29 Thread Michael Stassen

Hassan Schroeder wrote:
 Does the size of the contents of that field exceed your defined
 max_allowed_packet size?


Gustafson, Tim wrote:

No, max_allowed_packet is 100 megabytes, and the maximum data field in
the database is 50MB right now, and most are well below 10MB.

Tim Gustafson
MEI Technology Consulting, Inc
[EMAIL PROTECTED]
(516) 379-0001 Office
(516) 908-4185 Fax
http://www.meitech.com/ 


I believe it's the size of the row, not the size of a single field, that 
matters.  Is it possible you have a row which exceeds max_allowed_packet size?
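
A quick way to check (the blob column name is hypothetical; for a table
that is one blob plus a few integers, the blob length dominates the row
size):

SELECT MAX(LENGTH(DocumentData)) FROM DocumentVariants;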


Michael




RE: mysqldump: Error 2013

2005-08-29 Thread Gustafson, Tim
 I believe it's the size of the row, not the size of a
 single field, that matters.  Is it possible you have a
 row which exceeds max_allowed_packet size?

No.  There is one blob field (always less than 50MB) and about 10 other
fields, all integers.

