Re: stuck commits

2009-01-12 Thread Krishna Chandra Prajapati
Hi Scott,

I believe something is wrong with your InnoDB parameters; they should be set to
optimal values, and in your case they may be too high or too low. Take a look at
the log file size. Please send your SHOW VARIABLES and SHOW STATUS output so we
can reach a conclusion.
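For example, something along these lines would capture most of what matters
(just a sketch of what to collect; adjust the patterns if you want other
settings too):

SHOW VARIABLES LIKE 'innodb%';
SHOW GLOBAL STATUS LIKE 'Innodb%';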


On Tue, Jan 13, 2009 at 3:35 AM, Scott Edwards  wrote:

> All too frequently, I see commits stuck in this database.  What can I do to
> speed that up? Or, abort if it takes more than 40 seconds?  This query here
> for example appears to take 443 seconds so far.
>
> From mysqladmin processlist:
>
> Id| User | Host | db | Command |Time | State | Info
> 14010 | amavis | mx:53008 | amavis | Query   | 443  | | commit
>
> mysqld  Ver 5.0.32-Debian_7etch8-log for pc-linux-gnu on x86_64 (Debian
> etch
> distribution)
>
> I recompiled it once, but the debug symbols are still missing.  The build
> transcript didn't include -g during compile.  I'm looking into redoing that
> now.
>
> Thanks in advance,
>
>
> Scott Edwards
>
> ---
>


-- 
Krishna Chandra Prajapati
MySQL DBA,
Ed Ventures e-Learning Pvt.Ltd.
1-8-303/48/15, Sindhi Colony
P.G.Road, Secunderabad.
Pin Code: 53
Office Number: 040-66489771
Mob: 9912924044
URL: ed-ventures-online.com
Email-id: prajapat...@gmail.com


Re: Why does changing a table property rebuild the table?

2009-01-12 Thread Baron Schwartz
>> Why would delay_key_writes require a table rebuild? It's not
>> modifying the data. Reloading tens of millions of rows for several
>> hours seems to be a waste of time.

It probably flips a bit in the .frm file or something like that, but I
have not investigated it myself.

My guess is that you can "hack" this to do what you want.  We wrote
about this in our book -- you can alter ENUM lists without a table
rebuild, for example.  I'm betting you can do the same thing here.
Rather than describe the whole thing, let me show you the blog post
Aurimas wrote about it:

http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/
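In rough outline, the trick (a from-memory sketch, not a substitute for the
post's exact steps -- test it on a scratch copy first; mytable_new below is
just a throwaway name) looks like this:

-- create an empty shadow table with the option you want
CREATE TABLE mytable_new LIKE mytable;
ALTER TABLE mytable_new DELAY_KEY_WRITE = 1;  -- fast, since the shadow table is empty

-- stop writes and make sure nothing holds the table open
FLUSH TABLES WITH READ LOCK;

-- at the OS level, copy mytable_new.frm over mytable.frm in the database
-- directory (leave the .MYD/.MYI files alone), then release the lock
UNLOCK TABLES;

-- clean up the shadow table afterwards
DROP TABLE mytable_new;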

Baron

-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Aaron Blew
I'm also having a similar issue with some tables I've been trying to dump
(total data set is around 3TB).  I'm dumping directly from one host to
another (mysqldump -hSOURCE DATABASE | mysql -hLOCALHOST DATABASE) using
mysql 4.1.22.  One system is Solaris 10 SPARC, while the other is Solaris 10
x64 (64bit MySQL as well).

I wrote a script that starts a mysqldump process for each table within a
database, which shouldn't be a problem since the host currently has around
12G unused memory.  Midway through the dump I seem to lose the connection as
Dan described.  After attempting to drop/re-import (using a single process),
the larger tables continue to fail (though at different points) while some
of the small-medium sized tables made it across.

Anyone else run into this before? Ideas?

Thanks,
-Aaron


Re: stuck commits

2009-01-12 Thread Baron Schwartz
You didn't say much about your workload, tuning or table, but...

Looks like you have a configuration problem, or slow disks, or InnoDB
contention problems.

You can get faster hardware, or make your log files or log buffer
bigger (but first figure out whether they're too small!), or figure
out what the contention problem is.  If you can figure it out
yourself, that might be a fun exercise; otherwise let me know and I
can suggest someone who can help. ;-)  You might just need a RAID
controller with a battery-backed write cache set to a write-back
policy.
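If you want a first pass at the "too small?" question yourself, one rough
check (a sketch, not a full diagnosis) is to compare the log settings with
how often InnoDB has had to wait on the log buffer:

SHOW VARIABLES LIKE 'innodb_log_file_size';
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
SHOW GLOBAL STATUS LIKE 'Innodb_log_waits';  -- a steadily climbing value suggests the log buffer is too small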

Aborting isn't possible once it gets to this stage, AFAIK.

On Mon, Jan 12, 2009 at 5:05 PM, Scott Edwards  wrote:
> All too frequently, I see commits stuck in this database.  What can I do to
> speed that up? Or, abort if it takes more than 40 seconds?  This query here
> for example appears to take 443 seconds so far.
>
> From mysqladmin processlist:
>
> Id| User | Host | db | Command |Time | State | Info
> 14010 | amavis | mx:53008 | amavis | Query   | 443  | | commit
>
> mysqld  Ver 5.0.32-Debian_7etch8-log for pc-linux-gnu on x86_64 (Debian etch
> distribution)
>
> I recompiled it once, but the debug symbols are still missing.  The build
> transcript didn't include -g during compile.  I'm looking into redoing that
> now.
>
> Thanks in advance,
>
>
> Scott Edwards
>
> ---
>



-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Dan
On Mon, 12 Jan 2009 16:25:12 +0530, Chandru  wrote:

> Hi,
> 
> Please increase your interactive_timeout variable to some big number, and
> also try to log any errors by using the command:
> 
> mysqldump --opt db_name > db_name.sql -p 2>>bkp.err
> 
> Check if you get something in the bkp.err file.

Thanks for responding :)

Unfortunately I don't think this is the problem for us. This value is
already at 28800 seconds (which equals 8 hours). The backup certainly never
used to take that long; the MySQL portion of the backup used to take about
90 minutes.

I will retry with your suggestion anyway tonight and post back if something
new happens.

Here are our server variables, which I should have posted the first time
(minus version_bdb, as it causes horrible text wrapping):

mysql> show variables 
-> where Variable_name != 'version_bdb';
+-+-+
| Variable_name   | Value   |
+-+-+
| auto_increment_increment| 1   | 
| auto_increment_offset   | 1   | 
| automatic_sp_privileges | ON  | 
| back_log| 50  | 
| basedir | /usr/   | 
| bdb_cache_size  | 8384512 | 
| bdb_home| | 
| bdb_log_buffer_size | 262144  | 
| bdb_logdir  | | 
| bdb_max_lock| 1   | 
| bdb_shared_data | OFF | 
| bdb_tmpdir  | | 
| binlog_cache_size   | 32768   | 
| bulk_insert_buffer_size | 8388608 | 
| character_set_client| latin1  | 
| character_set_connection| latin1  | 
| character_set_database  | latin1  | 
| character_set_filesystem| binary  | 
| character_set_results   | latin1  | 
| character_set_server| latin1  | 
| character_set_system| utf8| 
| character_sets_dir  | /usr/share/mysql/charsets/  | 
| collation_connection| latin1_swedish_ci   | 
| collation_database  | latin1_swedish_ci   | 
| collation_server| latin1_swedish_ci   | 
| completion_type | 0   | 
| concurrent_insert   | 1   | 
| connect_timeout | 10  | 
| datadir | /mnt/stuff/mysql/   | 
| date_format | %Y-%m-%d| 
| datetime_format | %Y-%m-%d %H:%i:%s   | 
| default_week_format | 0   | 
| delay_key_write | ON  | 
| delayed_insert_limit| 100 | 
| delayed_insert_timeout  | 300 | 
| delayed_queue_size  | 1000| 
| div_precision_increment | 4   | 
| keep_files_on_create| OFF |
| engine_condition_pushdown   | OFF | 
| expire_logs_days| 0   | 
| flush   | OFF | 
| flush_time  | 0   | 
| ft_boolean_syntax   | + -><()~*:""&|  | 
| ft_max_word_len | 84  | 
| ft_min_word_len | 4   | 
| ft_query_expansion_limit| 20  | 
| ft_stopword_file| (built-in)  | 
| group_concat_max_len| 1024| 
| have_archive| NO  | 
| have_bdb| DISABLED| 
| have_blackhole_engine   | NO  | 
| have_compress   | YES | 
| have_crypt  | YES | 
| have_csv| NO  | 
| have_dynamic_loading| YES | 
| have_example_engine | NO  | 
| have_federated_engine   | NO  | 
| have_geometry   | YES | 
| have_innodb | YES

stuck commits

2009-01-12 Thread Scott Edwards
All too frequently, I see commits stuck in this database.  What can I do to 
speed that up? Or, abort if it takes more than 40 seconds?  This query here 
for example appears to take 443 seconds so far.

From mysqladmin processlist:

Id| User | Host | db | Command |Time | State | Info
14010 | amavis | mx:53008 | amavis | Query   | 443  | | commit

mysqld  Ver 5.0.32-Debian_7etch8-log for pc-linux-gnu on x86_64 (Debian etch 
distribution)

I recompiled it once, but the debug symbols are still missing.  The build 
transcript didn't include -g during compile.  I'm looking into redoing that 
now.

Thanks in advance,


Scott Edwards

---




Re: Why does changing a table property rebuild the table?

2009-01-12 Thread Dan Nelson
In the last episode (Jan 12), mos said:
> At 12:14 PM 1/12/2009, Dan Nelson wrote:
> >In the last episode (Jan 12), mos said:
> > > I'm using MySQL 5.1 and if I execute:
> > >
> > > alter table mytable delay_key_write=1;
> > >
> > > it takes about an hour to rebuild the table. Why? As far as I
> > > know it is not changing the table structure. So why does it have
> > > to make a copy of the table and reload all the data?
> >
> >Mysql plays it safe and assumes that any table modification requires
> >a full rebuild.  5.1 knows that certain settings don't require a
> >full rebuild, but delay_key_writes isn't one of them (and some that
> >are marked as fast shouldn't be - see
> >http://bugs.mysql.com/bug.php?id=39372 ).
> 
> Why would delay_key_writes require a table rebuild? It's not
> modifying the data. Reloading tens of millions of rows for several
> hours seems to be a waste of time.

It shouldn't require one; at worst it would require flushing all dirty
key blocks.  Historically, all "ALTER TABLE" commands always did a full
table rebuild, and only recently has the ability to do quick ALTERs
appeared.  Maybe they're adding one flag at a time to the quick list,
or something.
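As a rough point of comparison (a sketch, not what the server actually does
for this ALTER), flushing one table's dirty key blocks is just:

FLUSH TABLES mytable;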

-- 
Dan Nelson
dnel...@allantgroup.com




Re: Why does changing a table property rebuild the table?

2009-01-12 Thread mos

At 12:14 PM 1/12/2009, Dan Nelson wrote:

In the last episode (Jan 12), mos said:
> I'm using MySQL 5.1 and if I execute:
>
> alter table mytable delay_key_write=1;
>
> it takes about an hour to rebuild the table. Why? As far as I know it
> is not changing the table structure. So why does it have to make a
> copy of the table and reload all the data?

Mysql plays it safe and assumes that any table modification requires a
full rebuild.  5.1 knows that certain settings don't require a full
rebuild, but delay_key_writes isn't one of them (and some that are
marked as fast shouldn't be - see
http://bugs.mysql.com/bug.php?id=39372 ).


Dan,
  Why would delay_key_writes require a table rebuild? It's not 
modifying the data. Reloading tens of millions of rows for several hours 
seems to be a waste of time.


Mike 






Re: Why does changing a table property rebuild the table?

2009-01-12 Thread Dan Nelson
In the last episode (Jan 12), mos said:
> I'm using MySQL 5.1 and if I execute:
> 
> alter table mytable delay_key_write=1;
> 
> it takes about an hour to rebuild the table. Why? As far as I know it
> is not changing the table structure. So why does it have to make a
> copy of the table and reload all the data?

Mysql plays it safe and assumes that any table modification requires a
full rebuild.  5.1 knows that certain settings don't require a full
rebuild, but delay_key_writes isn't one of them (and some that are
marked as fast shouldn't be - see
http://bugs.mysql.com/bug.php?id=39372 ).

-- 
Dan Nelson
dnel...@allantgroup.com




Re: help on join

2009-01-12 Thread Johan De Meersman
The error is not in the join, but in the fact that you have two invoices
with the same invoicecode. The items are retrieved and displayed for both
invoices.

If this is correct, SELECT DISTINCT should solve your problem.
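For example (just a sketch, assuming you only need the item rows back;
DISTINCT over the full joined row would not collapse anything, because the
two invoice rows differ in their date columns):

SELECT DISTINCT items.*
FROM invoice
JOIN items ON invoice.invoicecode = items.invoicecode
WHERE invoice.accountcode = '103';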


On Mon, Jan 12, 2009 at 5:59 PM, Ron  wrote:

> Hi All,
>
> I got the following tables:
>
> table items
>
>
> +-+-++-+--+--+--+
> | accountcode | invoicecode | invitemqty | packagecode | itemcode |
> packagename  | packagedesc  |
>
> +-+-++-+--+--+--+
> | 103 |  2009011301 |  1 |   1 |0 | Closed
> Trial Package | Closed Trial Package |
> | 103 |  2009011301 |  1 |   1 |0 |
> carryover| Previous Balance |
>
> +-+-++-+--+--+--+
>
> table invoice
>
>
> +-+++-+-+---++
> | accountcode | refno  | status | invoicecode | invoicedatefrom |
> invoicedateto | billdate   |
>
> +-+++-+-+---++
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-13 |
> 2009-01-12| 2009-01-13 |
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-08 |
> 2008-12-13| 2009-01-13 |
>
> +-+++-+-+---++
>
> select * from invoice join items on invoice.invoicecode = items.invoicecode
> where invoice.accountcode='103';
>
>
> +-+++-+-+---++-+-++-+--+--+--+
> | accountcode | refno  | status | invoicecode | invoicedatefrom |
> invoicedateto | billdate   | accountcode | invoicecode | invitemqty |
> packagecode | itemcode | packagename  | packagedesc  |
>
> +-+++-+-+---++-+-++-+--+--+--+
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-13 |
> 2009-01-12| 2009-01-13 | 103 |  2009011301 |  1 |
>1 |0 | Closed Trial Package | Closed Trial Package |
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-08 |
> 2008-12-13| 2009-01-13 | 103 |  2009011301 |  1 |
>1 |0 | Closed Trial Package | Closed Trial Package |
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-13 |
> 2009-01-12| 2009-01-13 | 103 |  2009011301 |  1 |
>1 |0 | carryover| Previous Balance |
> | 103 | 103A2009011301 | unpaid |  2009011301 | 2008-12-08 |
> 2008-12-13| 2009-01-13 | 103 |  2009011301 |  1 |
>1 |0 | carryover| Previous Balance |
>
> +-+++-+-+---++-+-++-+--+--+--+
>
> What was my mistake in the join that made it return four rows (duplicate
> results)? How can I make it return the result without duplicates, which in
> this case should be 2 rows? TIA.
>
> Regards
>
> Ron
>


-- 
Celsius is based on water temperature.
Fahrenheit is based on alcohol temperature.
Ergo, Fahrenheit is better than Celsius. QED.


Why does changing a table property rebuild the table?

2009-01-12 Thread mos

I'm using MySQL 5.1 and if I execute:

alter table mytable delay_key_write=1;

it takes about an hour to rebuild the table. Why? As far as I know it is 
not changing the table structure. So why does it have to make a copy of the 
table and reload all the data?


TIA
Mike





help on join

2009-01-12 Thread Ron

Hi All,

I got the following tables:

table items

+-------------+-------------+------------+-------------+----------+----------------------+----------------------+
| accountcode | invoicecode | invitemqty | packagecode | itemcode | packagename          | packagedesc          |
+-------------+-------------+------------+-------------+----------+----------------------+----------------------+
| 103         | 2009011301  | 1          | 1           | 0        | Closed Trial Package | Closed Trial Package |
| 103         | 2009011301  | 1          | 1           | 0        | carryover            | Previous Balance     |
+-------------+-------------+------------+-------------+----------+----------------------+----------------------+

table invoice

+-------------+----------------+--------+-------------+-----------------+---------------+------------+
| accountcode | refno          | status | invoicecode | invoicedatefrom | invoicedateto | billdate   |
+-------------+----------------+--------+-------------+-----------------+---------------+------------+
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-13      | 2009-01-12    | 2009-01-13 |
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-08      | 2008-12-13    | 2009-01-13 |
+-------------+----------------+--------+-------------+-----------------+---------------+------------+

select * from invoice join items on invoice.invoicecode = 
items.invoicecode where invoice.accountcode='103';


+-------------+----------------+--------+-------------+-----------------+---------------+------------+-------------+-------------+------------+-------------+----------+----------------------+----------------------+
| accountcode | refno          | status | invoicecode | invoicedatefrom | invoicedateto | billdate   | accountcode | invoicecode | invitemqty | packagecode | itemcode | packagename          | packagedesc          |
+-------------+----------------+--------+-------------+-----------------+---------------+------------+-------------+-------------+------------+-------------+----------+----------------------+----------------------+
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-13      | 2009-01-12    | 2009-01-13 | 103         | 2009011301  | 1          | 1           | 0        | Closed Trial Package | Closed Trial Package |
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-08      | 2008-12-13    | 2009-01-13 | 103         | 2009011301  | 1          | 1           | 0        | Closed Trial Package | Closed Trial Package |
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-13      | 2009-01-12    | 2009-01-13 | 103         | 2009011301  | 1          | 1           | 0        | carryover            | Previous Balance     |
| 103         | 103A2009011301 | unpaid | 2009011301  | 2008-12-08      | 2008-12-13    | 2009-01-13 | 103         | 2009011301  | 1          | 1           | 0        | carryover            | Previous Balance     |
+-------------+----------------+--------+-------------+-----------------+---------------+------------+-------------+-------------+------------+-------------+----------+----------------------+----------------------+

What was my mistake in the join that made it return four rows (duplicate
results)? How can I make it return the result without duplicates, which in
this case should be 2 rows? TIA.


Regards

Ron




Query Optimization

2009-01-12 Thread Johnny Withers
I have the following tables:

Customer: id,ssn
Customer_Id: id,customer_id,id_num

The customer table holds customers along with their SSN and the customer_id
table holds identifications for each customer (Driver's License, State
Issued ID, Student ID, etc). The SSN column from the customer table is
VARCHAR(9) and the id_num column from the customer_id table is VARCHAR(32).
Both of these columns have an index on them.

The following query uses the index on customer.ssn and executes in 0ms:

SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
FROM customer USE INDEX(idx_ssn)
LEFT JOIN customer_id ON customer.id=customer_id.customer_id
WHERE ssn='123456789';

Explain output:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: customer
 type: ref
possible_keys: idx_ssn
  key: idx_ssn
  key_len: 35
  ref: const
 rows: 1
Extra: Using where; Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: customer_id
 type: ref
possible_keys: customer_key
  key: customer_key
  key_len: 5
  ref: aca_ecash.customer.id
 rows: 1
Extra:

Now, this is the query I have trouble with, it does not use the index (or
says it does but doesn't?) and on a busy system (200+ queries per sec) can
take up to 20 seconds or more to execute:

SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
FROM customer USE INDEX(idx_ssn)
LEFT JOIN customer_id ON customer.id=customer_id.customer_id
WHERE ssn='123456789' OR id_num='123456789';

Explain output:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: customer
 type: index
possible_keys: idx_ssn
  key: idx_ssn
  key_len: 35
  ref: NULL
 rows: 165843
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: customer_id
 type: ref
possible_keys: customer_key
  key: customer_key
  key_len: 5
  ref: aca_ecash.customer.id
 rows: 1
Extra: Using where


Is there some way I can make it use the index? I've thought about
redesigning the query to select from the customer_id table first and, if a
row is found, just return the matching customer_id from the customer table,
but I wanted to see if maybe I'm going about this the wrong way before I
"engineer" some way around this.

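One workaround that often helps (a sketch only, not tested against your
schema) is to split the OR into two queries that can each use their own
index and UNION them together, since an OR that mixes columns from two
different tables tends to force exactly the index scan your EXPLAIN shows:

SELECT customer.id, customer.ssn, customer_id.customer_id, customer_id.id_num
FROM customer
LEFT JOIN customer_id ON customer.id = customer_id.customer_id
WHERE customer.ssn = '123456789'          -- this half can use idx_ssn
UNION
SELECT customer.id, customer.ssn, customer_id.customer_id, customer_id.id_num
FROM customer
JOIN customer_id ON customer.id = customer_id.customer_id
WHERE customer_id.id_num = '123456789';   -- this half can use the index on id_num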
Thanks in advance,

-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: mk-slave-restart

2009-01-12 Thread Baron Schwartz
Hi,

On Mon, Jan 12, 2009 at 4:28 AM, Krishna Chandra Prajapati
 wrote:
> Hi Baron,
>
> I want to use mk-slave-restart (maatkit tool) to restart the slave if 1048
> errors comes up.
>
> [r...@linux18 ~]# mk-slave-restart --always --daemonize
> --defaults-file=/etc/my1.cnf --error-numbers=1048 --host=localhost --port
> 3307 --user=root
> [r...@linux18 ~]# ps aux | grep mk-slave-restart
> root 22006  0.0  0.0   4004   700 pts/2S+   14:51   0:00 grep
> mk-slave-restart
>
> Can you tell me what's wrong with the above syntax? It's not working.
> Please tell me the complete syntax.

It's great that you want to use it, but just as a note -- if this
becomes a long thread, please move it to the Maatkit mailing list.

I would remove the --daemonize argument first so you can see standard
output and standard error easily.

Baron

-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Chandru
Hi,

Please increase your interactive_timeout variable to some big number, and
also try to log any errors by using the command:

mysqldump --opt db_name > db_name.sql -p 2>>bkp.err

Check if you get something in the bkp.err file.
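For example (a sketch; 86400 is just an arbitrarily large value, and a SET
GLOBAL change only lasts until the server restarts unless you also put it in
my.cnf):

SET GLOBAL interactive_timeout = 86400;
SHOW GLOBAL VARIABLES LIKE 'interactive_timeout';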

Regards,

Chandru,

www.mafiree.com

On Mon, Jan 12, 2009 at 9:07 AM, Daniel Kasak wrote:

> Hi all. I have a 30GB innodb-only database in mysql-5.0.54. I have
> always done nightly backups with:
>
> mysqldump --opt db_name > db_name.sql -p
>
> Recently this started failing with:
> Error 2013: Lost connection to MySQL server
>
> I have checked all tables for corruption - nothing found. Also as far as
> I can tell there are no issues with clients using the database. There
> have been no crashes since I did a full restore. So I assume we can rule
> out corruption.
>
> I have searched around for the error message, and found people
> discussing the max_allowed_packet option. I've tried increasing the
> server's max_allowed_packet to many different values. Currently it's at
> 128M, which is *way* over the default. I have also used the
> --max_allowed_packet option simultaneously with mysqldump. And lastly, I
> have been restarting the server after each my.cnf change.
>
> The data was inserted via the 'dbmail' application
> ( http://www.dbmail.org ), while the server was set up with the default
> max_allowed_packet size. DBMail breaks up messages into chunks, and
> stores these chunks in individual records. I'm not sure what the default
> size of these chunks is, but I believe it's a reasonable value anyway.
>
> What next? I *must* get regular backups working again ...
>
> Dan
>
>


mk-slave-restart

2009-01-12 Thread Krishna Chandra Prajapati
Hi Baron,

I want to use mk-slave-restart (maatkit tool) to restart the slave if 1048
errors comes up.

[r...@linux18 ~]# mk-slave-restart --always --daemonize
--defaults-file=/etc/my1.cnf --error-numbers=1048 --host=localhost --port
3307 --user=root
[r...@linux18 ~]# ps aux | grep mk-slave-restart
root 22006  0.0  0.0   4004   700 pts/2S+   14:51   0:00 grep
mk-slave-restart

Can you tell me what's wrong with the above syntax? It's not working.
Please tell me the complete syntax.


-- 
Krishna Chandra Prajapati


mysqldump: Error 2013: Lost connection to MySQL server

2009-01-12 Thread Daniel Kasak
Hi all. I have a 30GB innodb-only database in mysql-5.0.54. I have
always done nightly backups with:

mysqldump --opt db_name > db_name.sql -p

Recently this started failing with:
Error 2013: Lost connection to MySQL server

I have checked all tables for corruption - nothing found. Also as far as
I can tell there are no issues with clients using the database. There
have been no crashes since I did a full restore. So I assume we can rule
out corruption.

I have searched around for the error message, and found people
discussing the max_allowed_packet option. I've tried increasing the
server's max_allowed_packet to many different values. Currently it's at
128M, which is *way* over the default. I have also used the
--max_allowed_packet option simultaneously with mysqldump. And lastly, I
have been restarting the server after each my.cnf change.

The data was inserted via the 'dbmail' application
( http://www.dbmail.org ), while the server was set up with the default
max_allowed_packet size. DBMail breaks up messages into chunks, and
stores these chunks in individual records. I'm not sure what the default
size of these chunks is, but I believe it's a reasonable value anyway.

What next? I *must* get regular backups working again ...

Dan

