Re: Lost connection to MySQL server - need help.

2013-10-12 Thread nixofortune

You might want to comment out the line

bind-address= 127.0.0.1

in your my.cnf and restart the MySQL server.
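Concretely, the suggestion amounts to this in my.cnf (a sketch; the bind-address line is the one from the poster's config further down, and commenting it out makes mysqld listen on all interfaces, so only do this on a properly firewalled host):

```ini
[mysqld]
# Commented out so mysqld listens on all interfaces rather than
# only the loopback address:
# bind-address = 127.0.0.1
```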



On 12/10/13 10:49, Jørn Dahl-Stamnes wrote:

Hello,

I've got a strange problem on a production server. It had been working OK
for months, but yesterday it started to fail. Several batch scripts use the
database, in addition to a web application.

The php scripts running in batch mode began to get:

mysql_connect(): Lost connection to MySQL server at 'reading initial
communication packet', system error: 111
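For what it's worth, "system error: 111" in that client message is the Linux errno ECONNREFUSED: the TCP connection attempt was refused before MySQL could even send its handshake, typically because mysqld is down, bound to a different address/port, or its listen backlog is overflowing under load. A small illustration (not part of the original thread):

```python
import errno

# System error 111 in the client message is the OS errno ECONNREFUSED:
# the connection attempt was actively refused (or the listen queue was
# full) before MySQL could send its initial handshake packet.
print(errno.ECONNREFUSED)      # 111 on Linux
print(errno.errorcode[111])    # 'ECONNREFUSED'
```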

I stopped the server and restarted it, and everything seemed to work OK for
hours, but when the load started to increase the errors began to appear again.

Today I noticed that after I started phpMyAdmin and selected one of the
databases, phpMyAdmin was hanging and the batch scripts began to fail again.
It seems the server cannot handle much load anymore.


What's strange is the memory usage. The server has a quad-core CPU and 48 GB of
memory, of which 28 GB is allocated to InnoDB (we mostly use InnoDB). But when
using the top command, I noticed this:

VIRT: 33.9g
RES: 9.4g
SWAP: 23g

At this time over 11 GB of memory is free, and vm.swappiness is set to 0. I find
it strange that the server does not use physical memory but uses swap instead.
The amount of CPU time spent swapping is rather high during SQL queries.
Resident memory does increase over time, but very slowly (it can take hours
before it grows to 20+ GB).

[PS: I also have a MySQL server running on a dedicated host at home, where
it seems to use memory as I expect it to:

   PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  SWAP DATA COMMAND
  1462 mysql 20   0 30.0g  27g 3900 S  0.3 87.3   2633:14 844m  29g mysqld
]


I would like some suggestions for how to solve this problem.
I have googled it but found nothing that seems to match my case.

Server:
   OS: Debian 6
   MySQL: 5.1.61-0+squeeze1

my.cnf:
#
# The MySQL database server configuration file.
#

[client]
port= 3306
socket  = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld].
[mysqld_safe]
socket  = /var/run/mysqld/mysqld.sock
nice= 0

[mysqld]
#
# * Basic Settings
#
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port= 3306
basedir = /usr
datadir = /database/mysql
tmpdir  = /tmp
language= /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address= 127.0.0.1
## All applications use 127.0.0.1 when connecting to the db.

#
# * Fine Tuning
#
#key_buffer = 16M
max_allowed_packet  = 64M
thread_stack= 192K
#thread_cache_size   = 8

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover = BACKUP
#
# * Query Cache Configuration
#
query_cache_limit   = 1M

#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
#   other settings you may need to change.
#server-id  = 1
#log_bin= /var/log/mysql/mysql-bin.log
expire_logs_days= 10
max_binlog_size = 100M
#binlog_do_db   = include_database_name
#binlog_ignore_db   = include_database_name
#
# * InnoDB
#
# InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
# Read the manual for more InnoDB related options. There are many!

thread_cache_size = 192
table_cache = 768
## key_buffer = 64M
## sort_buffer_size = 256K
## read_buffer_size = 256K
## read_rnd_buffer_size = 256K
tmp_table_size=32M
max_heap_table_size=32M
query_cache_size=128M
query_cache_type=2

innodb_open_files=1000
innodb_buffer_pool_size = 28G
innodb_additional_mem_pool_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_support_xa = 0
innodb_lock_wait_timeout = 50
## innodb_flush_method=O_DIRECT
innodb_log_files_in_group = 2
## innodb_log_file_size = 128M
innodb_log_buffer_size = 8M
innodb_thread_concurrency = 14
innodb_file_per_table

max_connections = 100
binlog_cache_size   = 1M
sort_buffer_size= 16M
join_buffer_size= 16M
ft_min_word_len = 1
ft_max_word_len = 84
ft_stopword_file= ''
default_table_type  = InnoDB
key_buffer  = 2G
read_buffer_size= 2M
read_rnd_buffer_size= 16M
bulk_insert_buffer_size = 64M
myisam_sort_buffer_size = 128M
myisam_max_sort_file_size   = 10G
myisam_max_extra_sort_file_size = 10G
myisam_repair_threads   

mysql on zfs

2013-06-28 Thread nixofortune

Hi guys,
Do you have any experience running MySQL, or in my case MariaDB 5.5.31,
on FreeBSD on top of ZFS?

We are using Samsung Pro 840 SSD drives and experiencing temporary stalls.
Our workload is very much skewed towards inserts into big InnoDB tables
(70-100 GB); the overall dataset is 1.5 TB.
I have a feeling that ZFS is not mature enough to be used in production.
The speed is not great either: 2k-6k inserts/s.
I set innodb_checksums = 0 and innodb_doublewrite = 0, but stalls of
up to 8 minutes are still there.
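For what it's worth, the commonly cited starting points for InnoDB on ZFS are to match the dataset recordsize to InnoDB's page size and to avoid double-caching against the buffer pool. A sketch (the dataset names are placeholders, and these are general recommendations from outside this thread, not a guaranteed fix for the stalls):

```shell
# Common starting points for InnoDB data on ZFS (assumptions, verify
# against your own workload before applying in production):
zfs set recordsize=16K tank/mysql/data        # match InnoDB's 16 KB pages
zfs set recordsize=128K tank/mysql/log        # redo log writes are sequential
zfs set primarycache=metadata tank/mysql/data # buffer pool already caches data
zfs set logbias=throughput tank/mysql/data
```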

Would it be a better option to move to ext4? We need filesystem snapshots for
backups. Your thoughts, guys?
Many thanks.
Igor


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql



Re: space gone after MyISAM REPAIR TABLE

2013-06-26 Thread nixofortune

On 25/06/13 23:55, Rick James wrote:

Switch to InnoDB so you won't have to repair after crashes.
Caution:  InnoDB takes 2x-3x the disk space per table.  Be sure to use 
innodb_file_per_table=1.
'Repair by sort' is usually much faster than 'repair by keycache'; you probably
got 'sort' because myisam_sort_buffer_size = 526M was big enough.


-Original Message-
From: nixofortune [mailto:nixofort...@gmail.com]
Sent: Monday, June 24, 2013 12:35 PM
To: mysql@lists.mysql.com
Subject: Re: space gone after MyISAM REPAIR TABLE

On 24/06/13 19:57, Reindl Harald wrote:

On 24.06.2013 18:47, Johan De Meersman wrote:

- Original Message -

From: nixofortune nixofort...@gmail.com

Hi guys,
any suggestions? I just repaired a 90 GB MyISAM table with the REPAIR TABLE
command. The space on the hard drive went down from 165 GB to 70
GB. I understand that during the repair process MySQL creates a temp file
and removes it after the job is done. Or does removal happen on
server restart? How can I get that space back? I can't check the
table directory as I don't have root permissions on that box.

Oops... Can you run [show global variables like 'innodb_file_per_table';]?

I kind of expect it to be OFF, which means that the temp table would have
been created in the main tablespace. If that's the case, that space
has been permanently assimilated by the global tablespace; the only way to
get it back would be a full dump of all your (innodb) tables, stop server,
delete tablespace, start server and import the data again. Be sure to read
the documentation carefully before doing such an intrusive operation.

While you're doing that, use the opportunity to set
innodb_file_per_table to ON :-p

he spoke about a MyISAM table


the space on the hard drive gone down from 165 Gig to 70 Gig how can
I get that space back?
I can't check the table directory as I don't have root perm

well, someone should look at the datadir and the error log; it is not
uncommon for a repair of such large tables to fail due to a too-small
myisam_sort_buffer_size, and I suspect the operation failed and some
temp file is lying around


Thanks Reindl. It looks like the repair operation completed successfully.
Overall it took 2 hours to complete, with an OK message and some other message
related to the index size. The repair process went through 'Repair by sort';
myisam_sort_buffer_size = 526M.
The provider runs MySQL on FreeBSD + a ZFS file system. Could it be due to
snapshots as well?
I will ask them to look inside the datadir, as we migrated this DB from
Solaris just the day before. This is a new DB for me and I have never worked
with MyISAM tables of that size.



Hi Rick,
many thanks for the 2x-3x space tip. I'm working on that, trying to find a
way to shrink extremely inefficient tables.
Switched to innodb_file_per_table already. I'm gradually converting HUGE
(70-100 GB) MyISAM tables to InnoDB.

The way I do it is:
1. create a CSV file to keep the original data
2. CREATE TABLE new_innodb LIKE old_myisam;
3. ALTER TABLE new_innodb ADD id bigint UNSIGNED AUTO_INCREMENT
PRIMARY KEY FIRST; (yes, no primary keys :( )

4. LOAD DATA INFILE '/storage/mysql/dump/old_myisam.csv'
INTO TABLE new_innodb
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY ''
LINES TERMINATED BY '\n'
(`site_id`,`index_date`,`index_month`,`index_year`,`keyword`,`source`,`visits`,`bounced_visits`,`transactions`,`revenue`,`value_per_click`,`conversions`,`goal_value`);

Our operations allow us to do that, but it takes a long time to load: 7-8 hours
for 250,000,000 rows.
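A few session settings are commonly used to speed up a big LOAD DATA into InnoDB. A sketch in SQL (not from the thread; note that sql_log_bin = 0 means the load will not be written to the binlog, so skip it if you rely on replication or point-in-time recovery):

```sql
-- Per-session speedups for the bulk load (sketch; test on a copy first):
SET unique_checks = 0;       -- skip unique-index checks during the load
SET foreign_key_checks = 0;  -- skip foreign-key checks during the load
SET sql_log_bin = 0;         -- do not log the load to the binlog

LOAD DATA INFILE '/storage/mysql/dump/old_myisam.csv' INTO TABLE new_innodb
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY ''
  LINES TERMINATED BY '\n';

SET unique_checks = 1;
SET foreign_key_checks = 1;
```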
I tried to DROP the indexes on new_innodb first; LOAD DATA then runs quicker,
but when I then do

ALTER TABLE `new_innodb`
ADD KEY `idx1` (`col1`,`col2`),
ADD  KEY `idx2` (`col1`,`col2`,`col3`);

the server becomes unresponsive, with 15-20% I/O wait, and I had to kill the process.

What would be the best way to convert a BIG MyISAM table into InnoDB? We
do not have a slave.


Thanks,
Igor




Re: space gone after MyISAM REPAIR TABLE

2013-06-26 Thread nixofortune

This is my table:
CREATE TABLE `ga_monthly_keyword_visits` (
  `site_id` int(11) DEFAULT NULL,
  `index_date` int(11) DEFAULT NULL,
  `index_month` int(11) NOT NULL,
  `index_year` int(11) NOT NULL,
  `keyword` varchar(128) DEFAULT NULL,
  `source` varchar(30) DEFAULT NULL,
  `visits` int(11) DEFAULT NULL,
  `bounced_visits` int(11) DEFAULT NULL,
  `transactions` int(11) DEFAULT NULL,
  `revenue` float(10,2) DEFAULT NULL,
  `value_per_click` float(10,2) DEFAULT NULL,
  `conversions` int(11) DEFAULT NULL,
  `goal_value` float(10,2) DEFAULT NULL,
  KEY `idx_bounced_visits` (`site_id`,`index_date`),
  KEY `idx_visits_month` (`site_id`,`keyword`,`index_month`,`index_year`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Was changed into:

CREATE TABLE `ga_monthly_keyword_visits` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `site_id` int(11) DEFAULT NULL,
  `index_date` int(11) DEFAULT NULL,
  `index_month` int(11) NOT NULL,
  `index_year` int(11) NOT NULL,
  `keyword` varchar(128) DEFAULT NULL,
  `source` varchar(30) DEFAULT NULL,
  `visits` int(11) DEFAULT NULL,
  `bounced_visits` int(11) DEFAULT NULL,
  `transactions` int(11) DEFAULT NULL,
  `revenue` float(10,2) DEFAULT NULL,
  `value_per_click` float(10,2) DEFAULT NULL,
  `conversions` int(11) DEFAULT NULL,
  `goal_value` float(10,2) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_bounced_visits` (`site_id`,`index_date`),
  KEY `idx_visits_month` (`site_id`,`keyword`,`index_month`,`index_year`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

I tried Converting like this:

   CREATE TABLE new LIKE old;
   ALTER TABLE new DROP (all indexes), ADD PRIMARY KEY (either autoincrement or 
a 'natural' compound PK), ENGINE=InnoDB;
   INSERT INTO new SELECT site_id, ..., goal_value FROM old;
   ALTER TABLE new ADD INDEX (...);

The only difference is that the original MyISAM table had crashed, so I took it
from backup, loading via LOAD DATA INFILE.
The problem: yes, it loaded much quicker into the database, 4h 16m to be
precise, but ALTER TABLE new ADD INDEX (...)
put the server into meditation mode. High I/O wait rendered the box unusable.
I had to interrupt the ALTER ... ADD KEY process after 5 hours of struggle.


Now I am importing with the keys in place. It takes longer, much longer, but at
least the server keeps working and customers do not complain.
The schema design is awful, I agree. I am trying to understand the process and
will redesign it soon, but any suggestions are welcome.

I'm not a MySQL super guru, so I will be glad to hear your thoughts, guys.
Thanks


On 27/06/13 00:04, Rick James wrote:

(`site_id`,`index_date`,`index_month`,`index_year`,`keyword`,`source`,`visits`, 
`bounced_visits`,`transactions`,`revenue`,`value_per_click`,`conversions`,`goal_value`);

May we see the SHOW CREATE TABLE?  Some of this smells bad.
* It is almost always bad to split day/month/year into multiple fields.
* Often a fact table, which this sounds like, should not have extra indexes.
* Is each datatype as small as is practical?
* Are any of the fields VARCHAR, and could be 'normalized'?

I would expect this to be the fastest way to convert (assuming you have the disk
space):
CREATE TABLE new LIKE old;
ALTER TABLE new DROP (all indexes), ADD PRIMARY KEY (either autoincrement 
or a 'natural' compound PK), ENGINE=InnoDB;
INSERT INTO new SELECT site_id, ..., goal_value FROM old;
ALTER TABLE new ADD INDEX (...);

What version of MySQL are you running?  Newer versions do the ALTER TABLE 
faster (online??), and may require you to do one add at a time.

Another issue...
If the data in `old` is in the same order as the PRIMARY KEY of `new`, then 
INSERT..SELECT will run fast.  (No need to jump around to find where to put 
each row.)
Case 1:  You are adding an AUTO_INC -- it will be in the 'right' order.
Case 2:  The new PK is approximately the order of the insertions into `old` --
probably runs fast.  (However, I do not see a likely natural PK that would allow
this.)

INSERT ... SELECT...ORDER BY (new PK) -- This would make the INSERT part fast, but the 
SELECT part would be slow.  (You can't win)

Your task is all about disk hits.  By understanding what MySQL has to do, you can 'predict' whether 
a plan will be slow or slower.

Back to the secondary indexes...
What are the SELECTs that will benefit from them?  (Sometimes discussing this 
can lead to fewer/better INDEXes.  Often it leads to suggesting Summary 
Table(s).)



-Original Message-
From: Johan De Meersman [mailto:vegiv...@tuxera.be]
Sent: Wednesday, June 26, 2013 11:46 AM
To: li...@netrogenic.com; Jay Ess; mysql@lists.mysql.com
Subject: Re: space gone after MyISAM REPAIR TABLE

You can't actually move InnoDB tables around until 5.6, where you have
transportable tablespaces.

I suggest having a good hard look at pt-online-schema-change or
whatsitcalled.

Jay Ess li...@netrogenic.com wrote:

On 2013-06-26 18:31, nixofortune wrote:

What would be the best way to convert BIG MyISAM table into InnoDB?

We do not

have SLAVE.

I would do it on another computer

space gone after MyISAM REPAIR TABLE

2013-06-24 Thread nixofortune

Hi guys,
any suggestions? I just repaired a 90 GB MyISAM table with the REPAIR TABLE
command. The space on the hard drive went down from 165 GB to 70 GB. I
understand that during the repair process MySQL creates a temp file and removes
it after the job is done. Or does removal happen on server
restart? How can I get that space back? I can't check the table
directory as I don't have root permissions on that box.

Thanks
Igor




Re: space gone after MyISAM REPAIR TABLE

2013-06-24 Thread nixofortune

On 24/06/13 19:57, Reindl Harald wrote:


On 24.06.2013 18:47, Johan De Meersman wrote:

- Original Message -

From: nixofortune nixofort...@gmail.com

Hi guys,
any suggestions? I just repaired a 90 GB MyISAM table with the REPAIR TABLE
command. The space on the hard drive went down from 165 GB to 70
GB. I understand that during the repair process MySQL creates a temp file and
removes it after the job is done. Or does removal happen on server
restart? How can I get that space back? I can't check the table
directory as I don't have root permissions on that box.

Oops... Can you run [show global variables like 'innodb_file_per_table';] ?

I kind of expect it to be OFF, which means that the temp table would have been 
created in the main tablespace. If that's the case, that space has been 
permanently assimilated by the global tablespace; the only way to get it back 
would be a full dump of all your (innodb) tables, stop server, delete 
tablespace, start server and import the data again. Be sure to read the 
documentation carefully before doing such an intrusive operation.
While you're doing that, use the opportunity to set innodb_file_per_table to ON 
:-p

he spoke about a MyISAM table


the space on the hard drive gone down from 165 Gig to 70 Gig
how can I get that space back?
I can't check the table directory as I don't have root perm

well, someone should look at the datadir and the error log;
it is not uncommon for a repair of such large tables to
fail due to a too-small myisam_sort_buffer_size, and I
suspect the operation failed and some temp file is
lying around

Thanks Reindl. It looks like the repair operation completed successfully.
Overall it took 2 hours to complete, with an OK message and some other
message related to the index size. The repair process went through 'Repair by
sort'; myisam_sort_buffer_size = 526M.
The provider runs MySQL on FreeBSD + a ZFS file system. Could it be due to
snapshots as well?
I will ask them to look inside the datadir, as we migrated this DB from
Solaris just the day before. This is a new DB for me and I have never worked
with MyISAM tables of that size.






pt-query-digest --processlist

2013-06-06 Thread nixofortune

Hi guys,
I'm trying to use pt-query-digest on one of our prod servers.
The problem: our slow log is disabled and no tcpdump is installed, so I
decided to use the --processlist parameter, which I have never tried before.


pt-query-digest version 2.2.2
The command I run against the remote node:

pt-query-digest --processlist h=192.168.1.111,u=user -p PASSWORD --print
--no-report --run-time 60 > SERVER-`date +%Y%m%d-%H`


I get:
Unknown option: print

Without the --print parameter there are no errors, but I do not get any
output into the file.


The prod server is very busy.
How can I make this work?
Thanks,
Igor







Re: pt-query-digest --processlist

2013-06-06 Thread nixofortune

Hi Claudio,
Thanks for the reply. Great. I was sure you couldn't enable the slow log
dynamically on 5.0 and 5.1.

That solves my issue for 5.1 at least.
Many thanks,
Igor



On 06/06/13 13:06, Claudio Nanni wrote:

Hi,

First of all, I discourage you from using --processlist; I don't think it
is an accurate enough method for analysing queries.
You can dynamically enable the slow query log with long_query_time=0
and get far better data.

Check the syntax for SET GLOBAL variables.
Remember to disable it afterwards.

Now to your question,

pt-query-digest version 2.2.2
The command I run against the remote node:

pt-query-digest --processlist h=192.168.1.111,u=user -p PASSWORD
--print --no-report --run-time 60 > SERVER-`date +%Y%m%d-%H`

I get:
Unknown option: print

Maybe because such option does not exist?
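Indeed, if memory serves, pt-query-digest 2.2 removed --print in favour of --output; --output slowlog is believed to reproduce the old behaviour. A sketch of the corrected invocation (the DSN, run time, and output file are from the original post; the option change is the point):

```shell
# pt-query-digest 2.2 removed --print; --output slowlog prints events
# in slow-log format instead. The password is also best passed inside
# the DSN (p=...) rather than as a bare -p flag.
pt-query-digest --processlist h=192.168.1.111,u=user,p=PASSWORD \
    --output slowlog --no-report --run-time 60 > SERVER-`date +%Y%m%d-%H`
```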

Cheers

Claudio




mysqldump warning

2012-09-18 Thread nixofortune
Hello everybody.
I'm trying to create a backup of a MySQL database:

mysqldump --all-databases --routines --master-data=2 > all_databases_`date
+'%y%m%d-%H%M'`.sql

It looks like the backup has been created, but I've got this warning:

Warning: mysqldump: ignoring option '--databases' due to invalid value
'temp_fwd'

Nothing in the error logs; I'm just curious what this warning means.
Has anybody had a similar thing?

Many thanks.


Re: One table gets locked by itself

2012-05-08 Thread nixofortune
You might have run out of file descriptors. Check your open-file limits and
open-table limit variables, and the corresponding status values.
On 8 May 2012 15:05, abhishek jain abhishek.netj...@gmail.com wrote:

 Hi

 I am facing a strange problem. For the last few days, in one of my projects
 in production, I find that one of my tables fails to retrieve or insert
 records.

 I think it gets locked somehow; my code certainly doesn't do so
 explicitly. All the rest of the tables are fine; only one table causes
 problems. All is well after I restart mysqld.



 Don't know what to check!



 Details are:

 Mysqld version: 5.0.x

 Linux - Centos 5

 Table : MyISAM



 Please help me asap,

 Thanks,

 Abhi






Re: One table gets locked by itself

2012-05-08 Thread nixofortune
A few more things. You can't have a deadlock on a MyISAM table. You can check
the status of the tables in use with SHOW OPEN TABLES WHERE In_use != 0. You
might check the mysqld error log as well.
 On 8 May 2012 18:42, nixofortune nixofort...@gmail.com wrote:

 You might have run out of file descriptors. Check your open-file limits and
 open-table limit variables, and the corresponding status values.
 On 8 May 2012 15:05, abhishek jain abhishek.netj...@gmail.com wrote:

 Hi

 I am facing a strange problem, from the last few days in one of my
 projects
 in production, i find that one of my table fails to retrieve or insert
 records,

 I think it gets locked somehow, certainly my code doesn't have code to do
 so
 explicitly. All / rest of tables are fine, only one table creates problem.
 All is well after i restart mysqld.



 Dont know what to check!



 Details are:

 Mysqld version: 5.0.x

 Linux - Centos 5

 Table : MyISAM



 Please help me asap,

 Thanks,

 Abhi






Master-Slave replication is playing up

2012-04-20 Thread nixofortune
Hi guys,
I am experiencing strange behaviour with replication.
Our replication Setup looks like this:

   Master1(5.1.46-log) = Master2(5.1.46-log)
              /          |          \
   Slave1(5.1.46-log)  Slave2(5.1.52)  Slave3(5.1.52)



Yesterday I noticed that 2 of the slaves had started lagging behind the master.
The load on the slave machines was about 1.0; top showed mysqld at 100%.

SHOW SLAVE STATUS:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Relay_Log_Pos: 670375858  without any progress for 4 hours
Exec_Master_Log_Pos: without any progress for 4 hours
Seconds_Behind_Master: steady growing
No Errors.

The relay log at that relay log position looked something like this:

# at 670375858
#120419  6:22:57 server id 5  end_log_pos 670375922  Query
thread_id=48477609 exec_time=8 error_code=0
SET TIMESTAMP=1334830977/*!*/;
SET @@session.auto_increment_increment=10,
@@session.auto_increment_offset=5/*!*/;
BEGIN
/*!*/;
# at 670375922
# at 670376015
# at 670377033
# at 670378042
# at 670379050
# at 670380055
..
..
# at 678710787
# at 678711799
(8,257 rows like that..)

#120419  6:22:57 server id 5  end_log_pos 670376015  Table_map:
`fw4`.`newsletter_campaigns_recipients_type_regular` mapped to number
10591074
#120419  6:22:57 server id 5  end_log_pos 670377033  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 670378042  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 670379050  Delete_rows: table id
10591074


#120419  6:22:57 server id 5  end_log_pos 678711799  Delete_rows: table id
10591074
#120419  6:22:57 server id 5  end_log_pos 678712260  Delete_rows: table id
10591074 flags: STMT_END_F
(8,257 rows like that..)

BINLOG '
geePTxMFXQAAAE8g9ScAAGKboQEAA2Z3NAAsbmV3c2xldHRlcl9jYW1wYWlnbnNfcmVj
aXBpZW50c190eXBlX3JlZ3VsYXIABgMDAw8PDwYQAHwAIAAI
geePTxkF+gMAAEkk9ScAAGKboQAABv/AlWdgIAEAAABnNAUAA01ycwRGVzI0IDkwODcx
ZGIxMDEwOTUyNWM3NTJmODYwYjY4YmNkMjdlwItnYCABZzQFAAJEcgRGVzI0IDYxOTQxZTMw
ZGU2MDNlYWRmOTBmZTMyMGZiODA5OGNhwIFnYCABZzQFAAJNcwRGVzI0IGMxMTIzYWZlNThh
NDczMmQ0ODE1Yzk3ZDUwNmEyMTdhwHdnYCABZzQFAAJNcwRGVzI0IGQ5YmY3YTJjZDAyNzNl
M2Y0MmNmYzI3MDliOTJmNzc0wG1nYCABZzQFAAJNcgRGVzI0IGJmZjY4OTlhNjUwZjdlYmE3
NjY2YzZjMjkyNzJkZGIzwGNnYCABZzQFAAJNcwlFeGVjdXRpdmUgMTE0ZDlmNWY4ZDU2ZGIy
ZmJiNmRiMWY0OTExZmZkNjTAWWdgIAEAAABnNAUAAkRyBEZXMjQgMGMxMGMzMTA3Y2MxMGJhNmE1
ZjhkYzlmMGI1NTM1MDHAT2dgIAEAAABnNAUAAk1zBEZXMjQgNzBkOGFjNmE5MmU3ZDE3MDc5OTEx
NmVmOTE3OTg2OTHARWdgIAEAAABnNAUAAkRyCUV4ZWN1dGl2ZSA4ZjlmMmZiMzc3ZWYwOTFjZDc0
ZWJkNGZmOTdmYzVkZsA7Z2AgAQAAAGc0BQACRHIERlcyNCA2OTQwMGZhNzUyNTg5NmM2Mjc4ZDI1

(1 686 969: 125Meg Rows of Rubbish like that )

After 4 hours the slaves started processing the relay logs and
gradually caught up with the masters.
No slave errors yet, by the way.
Those deletes haven't been processed on the slaves, and we now have a
500,000-row difference from the masters.
I'm trying to understand what caused this issue and what actually
happened here.
So far I have found:
binlog_format differs: Masters (MIXED)
 Slaves (STATEMENT)

sync_binlog differs on one of the slaves: 1024 versus 0 on the rest.

Have any of you experienced similar problems?
I would really appreciate any suggestions on this issue.
Many thanks,

Egor


Re: Maser-Slave replication is playing up

2012-04-20 Thread nixofortune
Thanks,
So the actions should be:
1. Stop slaves
2. On Slaves: SET GLOBAL binlog_format=MIXED
3. Start slaves.

What does "restart the replication from scratch with a binary ident copy
(rsync)" mean?
Is it to use rsync to copy the MySQL data from the masters to the slaves, and
then run CHANGE MASTER TO to start from zero?
Many thanks

On 20 April 2012 12:31, Reindl Harald h.rei...@thelounge.net wrote:



 On 20.04.2012 13:17, nixofortune wrote:
  So far  I found :
  binlog_format different:
  Masters (MIXED)
  Slaves (STATEMENT)
 
  sync_binlog different on one of the slaves: 1024 against 0 on the rest

 the statement binlog format is practically unusable;
 there are way too many possible queries which cannot
 work this way in the long run

 if i were you i would fix the configurations and restart
 the replication from scratch with a binary-identical copy (rsync)




Re: Maser-Slave replication is playing up

2012-04-20 Thread nixofortune
Hi Shawn,
Great! This is most likely what happened.
Yesterday, as the drama developed, I saw that MySQL was doing a huge amount of
select queries, but I couldn't identify the source. There were no long-running
threads and no slow log entries either.
Now I understand what happened.

This is a table definition:
+-+--+--+-+-+---+
| Field   | Type | Null | Key | Default | Extra |
+-+--+--+-+-+---+
| recipient_id| int(11)  | NO   | | NULL|   |
| newsletter_type_id  | int(11)  | NO   | | NULL|   |
| recipient_newsletter_id | int(11)  | NO   | MUL | NULL|   |
| recipient_salutation| varchar(16)  | YES  | | NULL|   |
| recipient_role  | varchar(124) | NO   | | NULL|   |
| recipient_userhash  | varchar(32)  | NO   | | NULL|   |
+-+--+--+-+-+---+
No PRI key!

Now the next step will be to alter the table and create a primary key
on recipient_id; luckily all the IDs there are distinct.
ALTER TABLE tb_name ADD PRIMARY KEY (recipient_id);
That should fix it.
Many thanks,
Egor






On 20 April 2012 12:51, shawn green shawn.l.gr...@oracle.com wrote:

 Hello Egor,


 On 4/20/2012 7:17 AM, nixofortune wrote:

 Hi guys,
 I'm am experiencing strange behaviour with the replication.
 Our replication Setup looks like this:

    Master1(5.1.46-log) = Master2(5.1.46-log)
               /          |          \
    Slave1(5.1.46-log)  Slave2(5.1.52)  Slave3(5.1.52)



 Yesterday I noticed that 2 of the slaves start lagging behind the master.
 The load on the slaves Machines was about 1.0, top = mysqld 100%

 SHOW SLAVE STATUS:
 Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 Relay_Log_Pos: 670375858  without any progress for 4 hours
 Exec_Master_Log_Pos: without any progress for 4 hours
 Seconds_Behind_Master: steady growing
 No Errors.

 Relay log at the Relay log position was something like this:

 # at 670375858
 #120419  6:22:57 server id 5  end_log_pos 670375922  Query
 thread_id=48477609 exec_time=8 error_code=0
 SET TIMESTAMP=1334830977/*!*/;
 SET @@session.auto_increment_increment=10,
 @@session.auto_increment_offset=5/*!*/;
 BEGIN
 /*!*/;
 # at 670375922
 # at 670376015
 # at 670377033
 # at 670378042
 # at 670379050
 # at 670380055
 ..
 ..
 # at 678710787
 # at 678711799
 (8,257 rows like that..)


 This is a representation of the replication stream using ROW formatting.


  #120419  6:22:57 server id 5  end_log_pos 670376015  Table_map:
 `fw4`.`newsletter_campaigns_recipients_type_regular` mapped to number
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670377033  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670378042  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670379050  Delete_rows: table id
 10591074
 
 
 #120419  6:22:57 server id 5  end_log_pos 678711799  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 678712260  Delete_rows: table id
 10591074 flags: STMT_END_F
 (8,257 rows like that..)


 Those are descriptions of the actual ROW events being executed. It appears
 you are trying to delete 8257 rows from the table 10591074 which has been
 mapped to the name `fw4`.`newsletter_campaigns_recipients_type_regular`


 BINLOG '
 geePTxMFXQAAAE8g9ScAAGKboQEAA2Z3NAAsbmV3c2xldHRlcl9jYW1wYWlnbnNfcmVj
 aXBpZW50c190eXBlX3JlZ3VsYXIABgMDAw8PDwYQAHwAIAAI
 geePTxkF+gMAAEkk9ScAAGKboQAABv/AlWdgIAEAAABnNAUAA01ycwRGVzI0IDkwODcx
 ZGIxMDEwOTUyNWM3NTJmODYwYjY4YmNkMjdlwItnYCABZzQFAAJEcgRGVzI0IDYxOTQxZTMw
 ZGU2MDNlYWRmOTBmZTMyMGZiODA5OGNhwIFnYCABZzQFAAJNcwRGVzI0IGMxMTIzYWZlNThh
 NDczMmQ0ODE1Yzk3ZDUwNmEyMTdhwHdnYCABZzQFAAJNcwRGVzI0IGQ5YmY3YTJjZDAyNzNl
 M2Y0MmNmYzI3MDliOTJmNzc0wG1nYCABZzQFAAJNcgRGVzI0IGJmZjY4OTlhNjUwZjdlYmE3
 NjY2YzZjMjkyNzJkZGIzwGNnYCABZzQFAAJNcwlFeGVjdXRpdmUgMTE0ZDlmNWY4ZDU2ZGIy
 ZmJiNmRiMWY0OTExZmZkNjTAWWdgIAEAAABnNAUAAkRyBEZXMjQgMGMxMGMzMTA3Y2MxMGJhNmE1
 ZjhkYzlmMGI1NTM1MDHAT2dgIAEAAABnNAUAAk1zBEZXMjQgNzBkOGFjNmE5MmU3ZDE3MDc5OTEx
 NmVmOTE3OTg2OTHARWdgIAEAAABnNAUAAkRyCUV4ZWN1dGl2ZSA4ZjlmMmZiMzc3ZWYwOTFjZDc0
 ZWJkNGZmOTdmYzVkZsA7Z2AgAQAAAGc0BQACRHIERlcyNCA2OTQwMGZhNzUyNTg5NmM2Mjc4ZDI1
 (1 686 969: 125Meg Rows of Rubbish like that )


 This is an actual ROW event. It is a base64 encoded representation of the
 binary information that represents both the values of the original row and
 the values that you want that row to become.



  After 4Hours time the Slaves started processing the Replication logs and
 gradually catching up with the masters.
 No Slave errors btw yet.
 Those deletes haven't been processed on slaves and we have now 500 000
 rows

Re: Maser-Slave replication is playing up

2012-04-20 Thread nixofortune
Hi  Reindl,
Many thanks for your contribution.
I remember we discussed the rsync backup method in the past, and I was able
to do this successfully on our staging 100 GB+ MASTER=MASTER replication
setup.
The downtime was really minimal: minutes, not hours.
The replication used to fail on a regular basis as we tried to fix errors by
skipping them or by other means; now it has been solid for 2 months with no
major issues.
Thanks,
Egor

On 20 April 2012 13:09, Reindl Harald h.rei...@thelounge.net wrote:



 On 20.04.2012 13:44, nixofortune wrote:
  Thanks,
  So the actions should be:
  1. Stop slaves
  2. On Slaves: SET GLOBAL binlog_format=MIXED
  3. Start slaves.

 not SET GLOBAL; put the settings you want in my.cnf,
 since you want them to stay active after a restart :-)
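A minimal sketch of what that my.cnf change might look like on the slaves (the section header is standard; verify MIXED is what you want before rolling it out):

```ini
[mysqld]
# Persist the format change so it survives restarts,
# matching the masters' setting:
binlog_format = MIXED
```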

  What does "restart the replication from scratch with a binary ident copy
 (rsync)" mean?
  Is it to use rsync to copy the MySQL data from the masters to the slaves,
 and then run CHANGE MASTER TO to start from

 yes, that's the way I have been initializing replication for years

 * stop slave
 * hot rsync from the running master
 * stop master
 * remove binlog files
 * rsync again
 * start master
 * start slave
 * change master to

 this way has several benefits

 * if you are using de-duplicating backups it may save space
 * fewer possible errors because of a binary-identical start

 the downtime on the master can be reduced to a minimum if
 you do the rsync to a local drive on the
 master server, start the master, and after that you
 have all the time you need to rsync the copy to the slave
 while the master happily writes new changes to its binlog;
 after starting the slave, it will fetch the changes

 usually I have a shell script for all the actions
 on the master machines; it needs the parameter "really",
 or it will stop after the hot rsync and not stop
 mysqld





Re: Master-Slave replication is playing up

2012-04-20 Thread nixofortune
Hi Shawn,
I have two more questions here.
1. Why haven't those rows been deleted on the slaves?
2. Why were no slow log entries created?
Many thanks for your time and effort.
Egor


On 20 April 2012 12:51, shawn green shawn.l.gr...@oracle.com wrote:

 Hello Egor,


 On 4/20/2012 7:17 AM, nixofortune wrote:

 Hi guys,
 I am experiencing strange behaviour with replication.
 Our replication Setup looks like this:

     Master1(5.1.46-log) <=> Master2(5.1.46-log)
                /            |            \
  Slave1(5.1.46-log)  Slave2(5.1.52)  Slave3(5.1.52)



 Yesterday I noticed that 2 of the slaves had started lagging behind the master.
 The load on the slave machines was about 1.0; top showed mysqld at 100%.

 SHOW SLAVE STATUS:
 Slave_IO_Running: Yes
 Slave_SQL_Running: Yes
 Relay_Log_Pos: 670375858  without any progress for 4 hours
 Exec_Master_Log_Pos: without any progress for 4 hours
 Seconds_Behind_Master: steadily growing
 No Errors.

 Relay log at the Relay log position was something like this:

 # at 670375858
 #120419  6:22:57 server id 5  end_log_pos 670375922  Query
 thread_id=48477609 exec_time=8 error_code=0
 SET TIMESTAMP=1334830977/*!*/;
 SET @@session.auto_increment_increment=10,
 @@session.auto_increment_offset=5/*!*/;
 BEGIN
 /*!*/;
 # at 670375922
 # at 670376015
 # at 670377033
 # at 670378042
 # at 670379050
 # at 670380055
 ..
 ..
 # at 678710787
 # at 678711799
 (8,257 rows like that..)


 This is a representation of the replication stream using ROW formatting.


  #120419  6:22:57 server id 5  end_log_pos 670376015  Table_map:
 `fw4`.`newsletter_campaigns_recipients_type_regular` mapped to number
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670377033  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670378042  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 670379050  Delete_rows: table id
 10591074
 
 
 #120419  6:22:57 server id 5  end_log_pos 678711799  Delete_rows: table id
 10591074
 #120419  6:22:57 server id 5  end_log_pos 678712260  Delete_rows: table id
 10591074 flags: STMT_END_F
 (8,257 rows like that..)


 Those are descriptions of the actual ROW events being executed. It appears
 you are trying to delete 8257 rows from the table 10591074 which has been
 mapped to the name `fw4`.`newsletter_campaigns_recipients_type_regular`


  BINLOG '
 geePTxMFXQAAAE8g9ScAAGKboQEAA2Z3NAAsbmV3c2xldHRlcl9jYW1wYWlnbnNfcmVj
 aXBpZW50c190eXBlX3JlZ3VsYXIABgMDAw8PDwYQAHwAIAAI
 ...

 (1,686,969 lines -- roughly 125 MB of base64 like that)


 This is an actual ROW event. It is a base64 encoded representation of the
 binary information that represents both the values of the original row and
 the values that you want that row to become.



  After about 4 hours the slaves started processing the replication logs and
 gradually caught up with the masters.
 No slave errors yet, btw.
 Those deletes haven't been processed on the slaves, and we now have a
 500,000-row difference with the masters.
 I'm trying to understand what caused this issue and what actually
 happened here.

  ...

 The most common mistake when using ROW or MIXED is the failure to verify
 that every table you want to replicate has a PRIMARY KEY on it. This is a
 mistake because when a ROW event (such as the one documented above) is sent
 to the slave and neither the master's copy nor the slave's copy of the
 table has a PRIMARY KEY on the table, there is no way to easily identify
 which unique row you want replication to change.

 Replication solves this problem by scanning your entire table and choosing
 an appropriately matching row to the 'before' image embedded in the ROW
 event.  Multiply the number of rows you are trying to change by the number
 of rows you need to scan and this can quickly become a process that takes a
 long time to complete. A numerical example would look like this:

 * 5 rows in a table without a PRIMARY KEY
 * You delete 5000 of those on the master.
 * The 5000 deletion events are written to the Binary Log in ROW formatting.

 When

Re: Swap data in columns

2010-09-23 Thread nixofortune
Hi Kevin,
It works in MySQL, but not exactly as I need:
in my case it copied the content of column2 into column1.
So, not exactly what I intended to achieve.
Thanks.
Igor


update mydata set column1 = column2, column2 = column1
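
That behaviour comes from MySQL applying SET assignments left to right
(unlike standard SQL, which uses the row's old values for every assignment):

```sql
UPDATE mydata SET column1 = column2, column2 = column1;
-- column1 is overwritten first, so the second assignment copies
-- column2's new value back into itself: both columns end up equal.
```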

On Thu, Sep 23, 2010 at 12:03 AM, Kevin (Gmail) kfoneil...@gmail.com wrote:

 update mydata set column1 = column2, column2 = column1
 (works in sqlserver, can't try mysql at the moment)
 You can select which rows by adding a where clause obviously.
 I suppose that the field values are copied to a buffer which is then written
 to the table at the end of the update (or row by row?)

 - Original Message - From: nixofortune 
 nixofort...@googlemail.com

 To: mysql@lists.mysql.com
 Sent: Wednesday, September 22, 2010 5:29 PM

 Subject: Swap data in columns


  Hi all.

 Sorry for very simple question, just can't figure out the solution.
 I need to swap data in column1 with data in column2.


 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | a       | z       |
 |  2 | b       | y       |
 |  3 | c       | x       |
 |  4 | d       | w       |
 |  5 | e       | v       |
 +----+---------+---------+

 Can you achieve this with a simple query?
 so for id 1 column1 = 'z' and column2 = 'a' and so on.

 Thanks guys,
 Igor





Swap data in columns

2010-09-22 Thread nixofortune
Hi all.

Sorry for very simple question, just can't figure out the solution.
I need to swap data in column1 with data in column2.


+----+---------+---------+
| id | column1 | column2 |
+----+---------+---------+
|  1 | a       | z       |
|  2 | b       | y       |
|  3 | c       | x       |
|  4 | d       | w       |
|  5 | e       | v       |
+----+---------+---------+

Can you achieve this with a simple query?
so for id 1 column1 = 'z' and column2 = 'a' and so on.

Thanks guys,
Igor


Re: Swap data in columns

2010-09-22 Thread nixofortune
Hi Rolando,
This is the perfect solution I was looking for.
Why do you use a LEFT JOIN here? It looks like an INNER JOIN works fine as well.
Thanks.
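
For what it's worth, with a self-join on the primary key every row matches
itself, so an INNER JOIN variant should behave the same here:

```sql
UPDATE mydata A INNER JOIN mydata B USING (id)
SET A.column1 = B.column2, A.column2 = B.column1;
```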








 Rolando Edwards wrote:

 I ran these commands:

 use test
 DROP TABLE IF EXISTS mydata;
 CREATE TABLE mydata (id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,column1 
 CHAR(1),column2 CHAR(2));
 INSERT INTO mydata (column1,column2) VALUES ('a','z'), ('b','y'), ('c','x'), 
 ('d','w'), ('e','v');
 SELECT * FROM mydata;
 UPDATE mydata A LEFT JOIN mydata B USING (id) SET 
 A.column1=B.column2,A.column2=B.column1;
 SELECT * FROM mydata;

 I got this output:

 lwdba@ (DB test) :: use test
 Database changed
 lwdba@ (DB test) :: DROP TABLE IF EXISTS mydata;
 Query OK, 0 rows affected (0.00 sec)

 lwdba@ (DB test) :: CREATE TABLE mydata (id INT NOT NULL AUTO_INCREMENT 
 PRIMARY KEY,column1 CHAR(1),column2 CHAR(2));
 Query OK, 0 rows affected (0.05 sec)

 lwdba@ (DB test) :: INSERT INTO mydata (column1,column2) VALUES ('a','z'), 
 ('b','y'), ('c','x'), ('d','w'), ('e','v');
 Query OK, 5 rows affected (0.00 sec)
 Records: 5  Duplicates: 0  Warnings: 0

 lwdba@ (DB test) :: SELECT * FROM mydata;
 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | a       | z       |
 |  2 | b       | y       |
 |  3 | c       | x       |
 |  4 | d       | w       |
 |  5 | e       | v       |
 +----+---------+---------+
 5 rows in set (0.00 sec)

 lwdba@ (DB test) :: UPDATE mydata A LEFT JOIN mydata B USING (id) SET 
 A.column1=B.column2,A.column2=B.column1;
 Query OK, 5 rows affected (0.03 sec)
 Rows matched: 5  Changed: 5  Warnings: 0

 lwdba@ (DB test) :: SELECT * FROM mydata;
 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | z       | a       |
 |  2 | y       | b       |
 |  3 | x       | c       |
 |  4 | w       | d       |
 |  5 | v       | e       |
 +----+---------+---------+
 5 rows in set (0.00 sec)

 GIVE IT A TRY !!!

 Rolando A. Edwards
 MySQL DBA (CMDBA)

 155 Avenue of the Americas, Fifth Floor
 New York, NY 10013
 212-625-5307 (Work)
 AIM  Skype : 
 rolandologicworxredwa...@logicworks.nethttp://www.linkedin.com/in/rolandoedwards


 -Original Message-
 From: nixofortune [mailto:nixofort...@googlemail.com 
 nixofort...@googlemail.com]
 Sent: Wednesday, September 22, 2010 12:30 PM
 To: mysql@lists.mysql.com
 Subject: Swap data in columns

 Hi all.

 Sorry for very simple question, just can't figure out the solution.
 I need to swap data in column1 with data in column2.


 +----+---------+---------+
 | id | column1 | column2 |
 +----+---------+---------+
 |  1 | a       | z       |
 |  2 | b       | y       |
 |  3 | c       | x       |
 |  4 | d       | w       |
 |  5 | e       | v       |
 +----+---------+---------+

 Can you achieve this with a simple query?
 so for id 1 column1 = 'z' and column2 = 'a' and so on.

 Thanks guys,
 Igor






Workbench strange behavior

2010-08-19 Thread nixofortune
Hi ALL,
I just started using Workbench 5.2.26 CE and this is the problem I have.
When I try to run a query with a CASE statement, columns with a datetime type
are shown as BLOB in the output window.
To see the output data I have to right-click inside the cell, choose
Open Value in Viewer, and read the text.
Example:
case
when dda.cancelled_on is null then ''
when dda.cancelled_on is not null then dda.cancelled_on
end as 'Cancelled On',

This should produce cells with the date of the cancelled operation, but it
returns blob icons where the dates should be.
If I try to export the data as a CSV file, the fields with the 'blob' icon
instead of the real datetime data are empty.
The code works nicely in the MySQL monitor or phpMyAdmin, with properly
formatted CSV exports.
It could be some view option that I missed, or a bug in Workbench.
Has anybody experienced similar Workbench behavior, any ideas?
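
A workaround I'm considering (untested, just an assumption: force both CASE
branches to the same string type so the result isn't treated as binary):

```sql
case
when dda.cancelled_on is null then ''
else date_format(dda.cancelled_on, '%Y-%m-%d %H:%i:%s')
end as 'Cancelled On',
```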
Thanks.
Igor


Re: Remove 'debian-sys-maint' Account?

2010-03-03 Thread nixofortune
Hi ALL,

You will find all the details you need to set up the debian-sys-maint account
in /etc/mysql/debian.cnf:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN,
PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE
TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION
CLIENT ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY PASSWORD
'password' WITH GRANT OPTION;
Regards,
Igor

BTW, this is my First post.

On Tue, Mar 2, 2010 at 6:15 PM, Carlos Williams carlosw...@gmail.com wrote:

 I am using Debian 'Squeeze' / Testing with MySQL 5.1.41-3
 installed. It is a fresh install and I was checking all the system
 accounts and noticed that Debian has a 'debian-sys-maint' account on
 'localhost'. Has anyone ever removed this account? Do I need it or can
 I safely remove this account? I don't understand why it's there. I
 don't want to break MySQL even though there is no data or databases on
 this machine but I would like to keep this as clean as possible.

 Thanks for any input.

 -Carlos

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=nixofort...@googlemail.com