failed to decrypt log block

2016-04-04 Thread lejeczek

hi everybody..

would you know how to recover from such a problem:

InnoDB: ok header, but checksum field contains 792537472, 
should be 1776874443
2016-04-04 12:41:15 140333716928640 [ERROR] InnoDB: Redo log 
crypto: failed to decrypt log block. Reason could be that 
requested key version is not found, required encryption key 
management plugin is not found or configured encryption 
algorithm and/or method does not match.
2016-04-04 12:41:15 140333716928640 [ERROR] Plugin 'InnoDB' 
init function returned error.
2016-04-04 12:41:15 140333716928640 [ERROR] Plugin 'InnoDB' 
registration as a STORAGE ENGINE failed.


gee, I really have no clue what happened there.
for sharing thoughts & suggestions,
many thanks,
L.
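
For later readers: this error typically means the InnoDB redo log was encrypted (MariaDB-style encryption), but the server can no longer load the key management plugin or find the key version the log was written with. A minimal sketch of the pieces that must line up, assuming the file_key_management plugin; the key file path and algorithm below are hypothetical placeholders, not taken from the original setup:

```ini
# my.cnf sketch -- plugin, key file and algorithm must match whatever
# originally encrypted the redo log, or startup fails as above
[mysqld]
plugin-load-add = file_key_management
file_key_management_filename = /etc/mysql/keys.enc       # hypothetical path
file_key_management_encryption_algorithm = aes_cbc       # must match original
innodb_encrypt_log = ON
```

If the key file (or the required key version) was lost, the encrypted redo log is generally unrecoverable; restoring the correct key material is the first thing to check.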

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



Re: innodb log sequence problem

2015-08-06 Thread Csepregi Árpád
Thank you for the answer. The problem is, as I wrote in my previous 
message, that there is no SQL backup, just the files for a binary backup. 
The hardware we are using is a simple laptop with Windows 7 that runs a 
5.1 server, since the originally installed files are in use. It also runs 
a 5.5 server in parallel without any problems.


On 2015-08-05 17:17, Reindl Harald wrote:



On 2015-08-05 17:06, Csepregi Árpád wrote:

150805 17:02:31  InnoDB: Page dump in ascii and hex (16384 bytes):
hex...
150805 17:02:31  InnoDB: Page checksum 1094951825, prior-to-4.0.14-form
checksum 1449969277
InnoDB: stored checksum 1467223489, prior-to-4.0.14-form stored checksum
87759728
InnoDB: Page lsn 1054562273 1692468334, low 4 bytes of lsn at page end
3304862103
InnoDB: Page number (if stored to page already) 544833488,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already)
960999566
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 7.
InnoDB: You may have to recover from a backup

Does anyone have any idea how to recover?


check hardware, especially memory and restore your backup!



--
Csepregi Árpád

Integ Rendszerház Kft.
At your service, every day.

06-70-629-2114
www.integ.hu

Online cash register log file readout system
WWW.KONTROLLSZALAG.HU





innodb log sequence problem

2015-08-05 Thread Csepregi Árpád

Hello,

We are facing a strange InnoDB-related problem. Our client ran MySQL 5.1 
on WinXP with file_per_table disabled. The OS crashed after 5 years of 
continuous running, and our client of course does not have any backup 
(big company with its own IT department, so we have no access to their 
system and IT policy).

We received the mysql folders to see whether we can recover data somehow.

We installed a new MySQL 5.1 instance. We changed my.ini in Program 
Files\MySQL\MySQL Server 5.1 and changed the bin folder as well. We also 
changed the data folder to the crashed one containing all database 
folders with all .frm files, ib_logfile0, ib_logfile1 and ibdata1. 
Trying to start the MySQL service, the log says the following:


150805 16:58:28 [Note] Plugin 'FEDERATED' is disabled.
150805 16:58:28  InnoDB: Initializing buffer pool, size = 47.0M
150805 16:58:28  InnoDB: Completed initialization of buffer pool
InnoDB: Error: log file .\ib_logfile0 is of different size 0 10485808 bytes
InnoDB: than specified in the .cnf file 0 25165824 bytes!
150805 16:58:28 [ERROR] Plugin 'InnoDB' init function returned error.
150805 16:58:28 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE 
failed.

150805 16:58:28 [ERROR] Unknown/unsupported table type: INNODB
150805 16:58:28 [ERROR] Aborting

What is strange is that ib_logfile0 is 10 MB + 48 bytes; however, as far 
as I know, the innodb_log_file_size system variable can only be specified 
in megabytes.

We tried to remove ib_logfile0 and ib_logfile1 from the data folder and 
tried to start the instance again. The log files were created, but we got 
the following error message in the error log:


150805 17:02:29 [Note] Plugin 'FEDERATED' is disabled.
150805 17:02:30  InnoDB: Initializing buffer pool, size = 47.0M
150805 17:02:30  InnoDB: Completed initialization of buffer pool
150805 17:02:30  InnoDB: Log file .\ib_logfile0 did not exist: new to be 
created

InnoDB: Setting log file .\ib_logfile0 size to 24 MB
InnoDB: Database physically writes the file full: wait...
150805 17:02:30  InnoDB: Log file .\ib_logfile1 did not exist: new to be 
created

InnoDB: Setting log file .\ib_logfile1 size to 24 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: The log sequence number in ibdata files does not match
InnoDB: the log sequence number in the ib_logfiles!
150805 17:02:31  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
150805 17:02:31  InnoDB: Error: space id and page n:o stored in the page
InnoDB: read in are 960999566:544833488, should be 0:7!
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 7.
InnoDB: You may have to recover from a backup.
150805 17:02:31  InnoDB: Page dump in ascii and hex (16384 bytes):
hex...
150805 17:02:31  InnoDB: Page checksum 1094951825, prior-to-4.0.14-form 
checksum 1449969277
InnoDB: stored checksum 1467223489, prior-to-4.0.14-form stored checksum 
87759728
InnoDB: Page lsn 1054562273 1692468334, low 4 bytes of lsn at page end 
3304862103

InnoDB: Page number (if stored to page already) 544833488,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 
960999566

InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 7.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also 
http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html

InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.

We tried to start with the system variable innodb_force_recovery set to 
each value from 1 to 6; none of them helped.
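
For the archives, the usual salvage sequence when innodb_force_recovery does allow the server to start is roughly the following (a generic sketch, not a fix for this particular case; the dump file name is a placeholder):

```
# my.cnf: escalate only as far as needed -- higher values are more invasive
#   [mysqld]
#   innodb_force_recovery = 4

# then dump whatever is still readable and rebuild from the dump
mysqldump --all-databases --force > salvage.sql
```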


Does anyone have any idea how to recover?

Many thanks in advance.

Arpad Csepregi





Re: innodb log sequence problem

2015-08-05 Thread Reindl Harald



On 2015-08-05 17:06, Csepregi Árpád wrote:

150805 17:02:31  InnoDB: Page dump in ascii and hex (16384 bytes):
hex...
150805 17:02:31  InnoDB: Page checksum 1094951825, prior-to-4.0.14-form
checksum 1449969277
InnoDB: stored checksum 1467223489, prior-to-4.0.14-form stored checksum
87759728
InnoDB: Page lsn 1054562273 1692468334, low 4 bytes of lsn at page end
3304862103
InnoDB: Page number (if stored to page already) 544833488,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already)
960999566
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 7.
InnoDB: You may have to recover from a backup

Does anyone have any idea how to recover?


check hardware, especially memory and restore your backup!



signature.asc
Description: OpenPGP digital signature


Binary log in 5.6

2015-05-12 Thread geetanjali mehra
Got a very strange situation, where I received two similar DELETE
statements at the same binary log position, due to which the replication
slave stopped with the following error:


Could not execute Delete_rows event on table db1.xyz; Can't find record
in 'xyz', error code: 1032.

Following entry is found in the binary log:

# at 28651234
#150423 12:21:03 server id 5170  end_log_pos 28651303 CRC32 0x0b496431
Delete_rows: table id 73 flags: STMT_END_F
### DELETE FROM `db1`.`xyz`
### WHERE
###   @1=1040103520
###   @2='new'
### DELETE FROM `db1`.`xyz`
### WHERE
###   @1=1040103520
###   @2='new'

But the above record does exist in the db1.xyz table on the slave.
Still, the replication error on the slave says: can't find record.

Any idea on this?
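
If the event itself is judged bogus, one common (and lossy) way to get replication running again is to skip that single event on the slave. This sidesteps the question of why the duplicate DELETE was logged, so the tables should be compared afterwards (e.g. with pt-table-checksum):

```sql
-- on the slave: skip exactly one event group, then resume
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
```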



Best Regards,
Geetanjali Mehra
Senior Database Administrator


Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Ajay Garg
Hi all.

I am a newbie to MySQL, and have been going through several online
resources.

I usually come across the terms flushing and syncing the log-buffer.
In particular, these two terms hold great significance when selecting the
value of innodb_flush_log_at_trx_commit
(http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit).

So, I will be grateful if I could gain some light on the differences
between the two terms.


Thanks and Regards,
Ajay


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Reindl Harald

On 17.04.2014 10:37, Ajay Garg wrote:
 I am a newbie to MySQL, and have been going through several online
 resources.
 
 I usually come across the terms - flushing and syncing the log-buffer.
 In particular, these two terms hold great significance while selecting the
 value of 
 innodb_flush_log_at_trx_commit
 (http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit).
 
 So, I will be grateful if I could gain some light on the differences
 between the two terms


SYNC(1)                        User Commands                        SYNC(1)

NAME
   sync - flush file system buffers

SYNOPSIS
   sync [OPTION]

DESCRIPTION
   Force changed blocks to disk, update the super block.




signature.asc
Description: OpenPGP digital signature


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Ajay Garg
Reindl,

I do understand the meaning of Unix sync function.

So, you mean to say that flushing and syncing are the same, in the context
of MySQL?


On Thu, Apr 17, 2014 at 2:15 PM, Reindl Harald h.rei...@thelounge.net wrote:


 Am 17.04.2014 10:37, schrieb Ajay Garg:
  I am a newbie to MySQL, and have been going through several online
  resources.
 
  I usually come across the terms - flushing and syncing the
 log-buffer.
  In particular, these two terms hold great significance while selecting
 the
  value of innodb_flush_log_at_trx_commit
 http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
 
  .
 
  So, I will be grateful if I could gain some light on the differences
  between the two terms


 SYNC(1)User Commands
  SYNC(1)

 NAME
sync - flush file system buffers

 SYNOPSIS
sync [OPTION]

 DESCRIPTION
Force changed blocks to disk, update the super block.





-- 
Regards,
Ajay


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Reindl Harald


On 17.04.2014 10:55, Ajay Garg wrote:
 I do understand the meaning of Unix sync function.
 
 So, you mean to say that flushing and syncing are same, in the context of 
 MySQL?

please get rid of top-posting and reply-all

a flush without a sync is "dear operating system, may I ask you to write
that to disk if you find the time to do so", while a sync is "write that
to disk", without the nice asking
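
In InnoDB terms, that distinction is exactly what innodb_flush_log_at_trx_commit selects between; a sketch of the three documented values:

```sql
-- 1 (default): write the log buffer AND sync it to disk at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
-- 2: write to the OS at every commit, but sync only about once per second
SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- 0: write and sync only about once per second (fastest, least durable)
SET GLOBAL innodb_flush_log_at_trx_commit = 0;
```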

 On Thu, Apr 17, 2014 at 2:15 PM, Reindl Harald h.rei...@thelounge.net 
 mailto:h.rei...@thelounge.net wrote:
 
 
 Am 17.04.2014 10:37, schrieb Ajay Garg:
  I am a newbie to MySQL, and have been going through several online
  resources.
 
  I usually come across the terms - flushing and syncing the 
 log-buffer.
  In particular, these two terms hold great significance while selecting 
 the
  value of
 
 innodb_flush_log_at_trx_commit
 (http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit).
 
  So, I will be grateful if I could gain some light on the differences
  between the two terms
 
 
 SYNC(1)User Commands
  SYNC(1)
 
 NAME
sync - flush file system buffers
 
 SYNOPSIS
sync [OPTION]
 
 DESCRIPTION
Force changed blocks to disk, update the super block



signature.asc
Description: OpenPGP digital signature


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Ajay Garg
On Thu, Apr 17, 2014 at 2:28 PM, Reindl Harald h.rei...@thelounge.net wrote:



 Am 17.04.2014 10:55, schrieb Ajay Garg:
  I do understand the meaning of Unix sync function.
 
  So, you mean to say that flushing and syncing are same, in the
 context of MySQL?

 please get rid of top-posting and reply-all

 a flush without a sync is dear operating system, may i ask you to write
 that to disk if you find the time to do so while a sync is write that
 to disk without a nice asking


Thanks Reindl, that clears it up !!





  On Thu, Apr 17, 2014 at 2:15 PM, Reindl Harald 
  h.rei...@thelounge.netmailto:
 h.rei...@thelounge.net wrote:
 
 
  Am 17.04.2014 10:37, schrieb Ajay Garg:
   I am a newbie to MySQL, and have been going through several online
   resources.
  
   I usually come across the terms - flushing and syncing the
 log-buffer.
   In particular, these two terms hold great significance while
 selecting the
   value of
  innodb_flush_log_at_trx_commit
 http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
 
   .
  
   So, I will be grateful if I could gain some light on the
 differences
   between the two terms
 
 
  SYNC(1)User Commands
   SYNC(1)
 
  NAME
 sync - flush file system buffers
 
  SYNOPSIS
 sync [OPTION]
 
  DESCRIPTION
 Force changed blocks to disk, update the super block




-- 
Regards,
Ajay


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Manuel Arostegui
2014-04-17 11:11 GMT+02:00 Ajay Garg ajaygargn...@gmail.com:

 On Thu, Apr 17, 2014 at 2:28 PM, Reindl Harald h.rei...@thelounge.net
 wrote:

 
 
  Am 17.04.2014 10:55, schrieb Ajay Garg:
   I do understand the meaning of Unix sync function.
  
   So, you mean to say that flushing and syncing are same, in the
  context of MySQL?
 
  please get rid of top-posting and reply-all
 
  a flush without a sync is dear operating system, may i ask you to write
  that to disk if you find the time to do so while a sync is write that
  to disk without a nice asking
 

 Thanks Reindl, that clears it up !!


Keep in mind that if you are using a HW RAID controller (and a BBU), the
concepts of write-back and write-through are important to have in mind too
whilst thinking about how and when your data is written to disk.
If you set sync_binlog and innodb_flush_log_at_trx_commit to 1, it might
not necessarily be committing every transaction to disk, as it might
depend on how you've configured your controller.

If you have write-through configured, the data will be written to disk and
cache at the same time (this has a performance drawback).
If you have write-back configured, the data will be written to cache (and
then to disk, but _not_ at the same time) so you might lose data if there's
a power failure. When you have write-back, at an OS level your data has
been sync'ed to disk even though this is not totally true.
write-back is better for performance, but you have to keep in mind that you
can lose data (which is not common, but you're not 100% safe).

Hope this helps
Manuel.


Re: Difference between log-buffer flushing and log-buffer syncing?

2014-04-17 Thread Ajay Garg
On Thu, Apr 17, 2014 at 3:03 PM, Manuel Arostegui man...@tuenti.com wrote:




 2014-04-17 11:11 GMT+02:00 Ajay Garg ajaygargn...@gmail.com:

 On Thu, Apr 17, 2014 at 2:28 PM, Reindl Harald h.rei...@thelounge.net
 wrote:

 
 
  Am 17.04.2014 10:55, schrieb Ajay Garg:
   I do understand the meaning of Unix sync function.
  
   So, you mean to say that flushing and syncing are same, in the
  context of MySQL?
 
  please get rid of top-posting and reply-all
 
  a flush without a sync is dear operating system, may i ask you to write
  that to disk if you find the time to do so while a sync is write that
  to disk without a nice asking
 

 Thanks Reindl, that clears it up !!


 Keep in mind that if you are using HW RAID controller (and a BBU) the
 concept of write-back and write-through are important to have in mind too
 whilst thinking about how and when your data is written to disk.
 If you enable sync_binlog and trx_commit to 1, it might not necessarily be
 committing on every transaction to disk, as it might depend on how you've
 configured your controller.

 If you have write-through configured, the data will be written to disk and
 cache at the same time (this has a performance drawback).
 If you have write-back configured, the data will be written to cache (and
 then to disk, but _not_ at the same time) so you might lose data if there's
 a power failure. When you have write-back, at an OS level your data has
 been sync'ed to disk even though this is not totally true.
 write-back is better for performance, but you have to keep in mind that
 you can lose data (which is not common, but you're not 100% safe).

 Hope this helps



Thanks Manuel for the note !!
It certainly is something to keep in mind !!



 Manuel.





-- 
Regards,
Ajay


Re: Master not creating new binary log.

2013-07-04 Thread Machiel Richards - Gmail

Hi,

in short what we did was the following:

    - binary logs were written to a 20GB filesystem, and due to company 
policies we kept expire_logs_days at 7.
    - the system got quite busy over the years, so space was becoming a 
problem and thus we had to move to another directory.

    - the setting that was changed is: log_bin = <new directory>
    - old binary logs were moved to the new directory after shutting 
down the database
    - the database started up and continued as normal, however it stopped 
at the last binary log when it filled up and complained about a corrupted 
binary log.
    - a flush logs and reset master was done, and a new binary log was 
created: mysql-bin.1
    - however, the same thing happens here: the binlog file fills up to 
100Mb as configured, then stops without creating a new binary log.

    - this is then the point where the replication crashes as well.


Output of the queries:


mysql> SHOW VARIABLES LIKE '%bin%';
+---------------------------------+----------------------+
| Variable_name                   | Value                |
+---------------------------------+----------------------+
| binlog_cache_size               | 1048576              |
| innodb_locks_unsafe_for_binlog  | OFF                  |
| log_bin                         | ON                   |
| log_bin_trust_function_creators | OFF                  |
| max_binlog_cache_size           | 18446744073709547520 |
| max_binlog_size                 | 104857600            |
| sync_binlog                     | 0                    |
+---------------------------------+----------------------+


mysql> SHOW VARIABLES LIKE '%dir%';
+----------------------------+----------------------------+
| Variable_name              | Value                      |
+----------------------------+----------------------------+
| basedir                    | /usr/                      |
| character_sets_dir         | /usr/share/mysql/charsets/ |
| datadir                    | /var/lib/mysql/            |
| innodb_data_home_dir       |                            |
| innodb_log_arch_dir        |                            |
| innodb_log_group_home_dir  | ./                         |
| innodb_max_dirty_pages_pct | 90                         |
| plugin_dir                 |                            |
| slave_load_tmpdir          | /var/lib/mysql/tmp/        |
| tmpdir                     | /var/lib/mysql/tmp         |
+----------------------------+----------------------------+
10 rows in set (0.00 sec)
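
One thing the output above does not show is where the binlog index file ended up. When log_bin is pointed at a new directory but the server is still using an index file with entries for the old path, rotation can fail in exactly this way. This is a guess at the cause, not a confirmed diagnosis; a sketch of what to compare:

```sql
-- compare what the server believes exists against the files on disk
SHOW MASTER STATUS;   -- current binlog file and position
SHOW BINARY LOGS;     -- this list is read from the mysql-bin.index file
```

Also check that the .index file next to the binlogs (or the log_bin_index setting, if used) references only paths in the new directory.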






Regards


On 07/03/2013 08:29 PM, Rick James wrote:

What setting(s) did you change to move to the separate partition?
SHOW VARIABLES LIKE '%bin%';
SHOW VARIABLES LIKE '%dir%';
(there may be other VARIABLES worth checking)
What steps did you take for the move?  (Actually move bin.1?  Start over?  etc?)

Consider using expire_logs_days.

5.0 -- I don't think anything relevant has changed during 4.0 thru 5.6.


-Original Message-
From: Machiel Richards - Gmail [mailto:machiel.richa...@gmail.com]
Sent: Wednesday, July 03, 2013 3:20 AM
To: mysql list
Subject: Master not creating new binary log.

Hi all

I hope all are well.

I would like some assistance with something really strange.

We currently have a problem with a master slave setup running mysql
5.0.

 This is one of our legacy servers which are in the planning to be
upgraded, however in order for this to be done the replication needs to be
up and running.

 The problem we have currently however is that the binary logs on
the master was moved to a seperate partition due to disc space
restrictions.

 A new binlog file called mysql-bin.1 was created and everything
seemed to work fine.

  However, the moment the file reached the file size of 100Mb, it
does not go on to create a new binlog file called mysql-bin.2 and the
replication fails stating that it is unable to read the binary log file.

  Thus far we have done a flush logs and reset master , but the
same problem occurs, where it creates mysql-bin.1 and the moment it
reaches it's max size and suppose to create a new file, it stops and does
not create the new one.

  I really hope this makes sense, and that someone can perhaps
point us in the correct direction.

  Any help would be appreciated.

Regards













RE: Master not creating new binary log.

2013-07-04 Thread Rick James
What setting(s) did you change to move to the separate partition?
   SHOW VARIABLES LIKE '%bin%';
   SHOW VARIABLES LIKE '%dir%';
   (there may be other VARIABLES worth checking)
What steps did you take for the move?  (Actually move bin.1?  Start over?  etc?)

Consider using expire_logs_days.

5.0 -- I don't think anything relevant has changed during 4.0 thru 5.6.

 -Original Message-
 From: Machiel Richards - Gmail [mailto:machiel.richa...@gmail.com]
 Sent: Wednesday, July 03, 2013 3:20 AM
 To: mysql list
 Subject: Master not creating new binary log.
 
 Hi all
 
I hope all are well.
 
I would like some assistance with something really strange.
 
We currently have a problem with a master slave setup running mysql
 5.0.
 
 This is one of our legacy servers which are in the planning to be
 upgraded, however in order for this to be done the replication needs to be
 up and running.
 
 The problem we have currently however is that the binary logs on
 the master was moved to a seperate partition due to disc space
 restrictions.
 
 A new binlog file called mysql-bin.1 was created and everything
 seemed to work fine.
 
  However, the moment the file reached the file size of 100Mb, it
 does not go on to create a new binlog file called mysql-bin.2 and the
 replication fails stating that it is unable to read the binary log file.
 
  Thus far we have done a flush logs and reset master , but the
 same problem occurs, where it creates mysql-bin.1 and the moment it
 reaches it's max size and suppose to create a new file, it stops and does
 not create the new one.
 
  I really hope this makes sense, and that someone can perhaps
 point us in the correct direction.
 
  Any help would be appreciated.
 
 Regards
 
 
 
 
 





Re: Master not creating new binary log.

2013-07-04 Thread spameden
issue on the slave:

SHOW SLAVE STATUS\G and post here

most likely after you reset the master, your slave can't sync anymore,
because it is missing the next file in the sequence of replication logs.

why don't you back up your master with mysqldump and re-import it on the
new setup (i.e. on a MySQL 5.5 instance)?


2013/7/4 Machiel Richards - Gmail machiel.richa...@gmail.com

 Hi,

 in short what we did was the following:

 - binary logs was written to a 20GB filesystem and due to company
 policies we kept expire logs at 7 days.
 - system got quite busy over the years so the space was becoming a
 problem and thus we had to move to another directory.
 - setting that was changed is : log_bin =  new directory
 - old binary logs were moved to the new directory after shutting down
 the database
 - database started up and continued as normal, however stopped at the
 last binary log when it filled up and complained about a corrupted binary
 log.
 - a flush logs and reset master was done and a new binary log was
 created mysql-bin.1
 - however same thing happening here, the binlog file fills up to 100Mb
 as configured, then stops without creating a new binary log.
 - this is then the point where the replication crashes as well.


 Output of the queries:


 mysql> SHOW VARIABLES LIKE '%bin%';
 +---------------------------------+----------------------+
 | Variable_name                   | Value                |
 +---------------------------------+----------------------+
 | binlog_cache_size               | 1048576              |
 | innodb_locks_unsafe_for_binlog  | OFF                  |
 | log_bin                         | ON                   |
 | log_bin_trust_function_creators | OFF                  |
 | max_binlog_cache_size           | 18446744073709547520 |
 | max_binlog_size                 | 104857600            |
 | sync_binlog                     | 0                    |
 +---------------------------------+----------------------+


 mysql> SHOW VARIABLES LIKE '%dir%';
 +----------------------------+----------------------------+
 | Variable_name              | Value                      |
 +----------------------------+----------------------------+
 | basedir                    | /usr/                      |
 | character_sets_dir         | /usr/share/mysql/charsets/ |
 | datadir                    | /var/lib/mysql/            |
 | innodb_data_home_dir       |                            |
 | innodb_log_arch_dir        |                            |
 | innodb_log_group_home_dir  | ./                         |
 | innodb_max_dirty_pages_pct | 90                         |
 | plugin_dir                 |                            |
 | slave_load_tmpdir          | /var/lib/mysql/tmp/        |
 | tmpdir                     | /var/lib/mysql/tmp         |
 +----------------------------+----------------------------+
 10 rows in set (0.00 sec)






 Regards



 On 07/03/2013 08:29 PM, Rick James wrote:

 What setting(s) did you change to move to the separate partition?
 SHOW VARIABLES LIKE '%bin%';
 SHOW VARIABLES LIKE '%dir%';
 (there may be other VARIABLES worth checking)
 What steps did you take for the move?  (Actually move bin.1?  Start over?
  etc?)

 Consider using expire_logs_days.

 5.0 -- I don't think anything relevant has changed during 4.0 thru 5.6.

  -Original Message-
 From: Machiel Richards - Gmail [mailto:machiel.richa...@gmail.com]
 Sent: Wednesday, July 03, 2013 3:20 AM
 To: mysql list
 Subject: Master not creating new binary log.

 Hi all

 I hope all are well.

 I would like some assistance with something really strange.

 We currently have a problem with a master slave setup running
 mysql
 5.0.

  This is one of our legacy servers which are in the planning to
 be
 upgraded, however in order for this to be done the replication needs to
 be
 up and running.

  The problem we have currently however is that the binary logs on
 the master was moved to a seperate partition due to disc space
 restrictions.

  A new binlog file called mysql-bin.1 was created and everything
 seemed to work fine.

   However, the moment the file reached the file size of 100Mb, it
 does not go on to create a new binlog file called mysql-bin.2 and the
 replication fails stating that it is unable to read the binary log file.

   Thus far we have done a flush logs and reset master , but the
 same problem occurs, where it creates mysql-bin.1 and the moment it
 reaches it's max size and suppose to create a new file, it stops and does
 not create the new one.

   I really hope this makes sense, and that someone can perhaps
 point us in the correct direction.

   Any help would be appreciated.

 Regards










Master not creating new binary log.

2013-07-03 Thread Machiel Richards - Gmail

Hi all

  I hope all are well.

  I would like some assistance with something really strange.

  We currently have a problem with a master slave setup running 
mysql 5.0.


   This is one of our legacy servers which we are planning to 
upgrade; however, in order for this to be done the replication needs 
to be up and running.


   The problem we currently have is that the binary logs on the master 
were moved to a separate partition due to disk space restrictions.


   A new binlog file called mysql-bin.1 was created and everything 
seemed to work fine.


However, the moment the file reached the file size of 100Mb, it 
did not go on to create a new binlog file called mysql-bin.2, and the 
replication failed stating that it is unable to read the binary log file.


Thus far we have done a flush logs and reset master, but the 
same problem occurs: it creates mysql-bin.1, and the moment it 
reaches its max size and is supposed to create a new file, it stops and 
does not create the new one.


I really hope this makes sense, and that someone can perhaps 
point us in the correct direction.


Any help would be appreciated.

Regards








Re: SLAVE aware of binary log file switch?

2013-06-17 Thread Denis Jedig

On 17.06.2013 13:11, Mihamina Rakotomandimby wrote:


Say the binary log file (on the master) has reached its maximum
size, so that it has to switch to a +1 binary log file: does he
inform the SLAVE of that switch so that the SLAVE updates its
information about the MASTER status?


The master does not inform the slave via an immediate 
communication channel, but the slave knows how to keep up because 
the end of the binary log file contains continuation information 
- i.e. the name of the next log file to fetch.
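
That continuation record can be inspected with mysqlbinlog: the final entry of a completed binlog is a Rotate event naming the successor file (file names here are illustrative):

```
# last events of a finished binlog; expect a Rotate event at the end
mysqlbinlog mysql-bin.000001 | tail -n 5
#   ... Rotate to mysql-bin.000002  pos: 4
```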


Denis




SLAVE aware of binary log file switch?

2013-06-17 Thread Mihamina Rakotomandimby

Hi all,

Given a MASTER and a SLAVE.

When launching the SLAVE, it knows about the binary log file used by the 
MASTER and the position in that log file.


Say the binary log file (on the master) has reached its maximum size, so 
that it has to switch to a +1 binary log file: does it inform the 
SLAVE of that switch, so that the SLAVE updates its information about the 
MASTER status?


I'm reading the documentation 
http://dev.mysql.com/doc/refman/5.1/en/binary-log.html and don't see what 
happens during replication.


Thank you.

--
RMA.



Re: SLAVE aware of binary log file switch?

2013-06-17 Thread Mihamina Rakotomandimby

On 2013-06-17 14:43, Denis Jedig wrote:

Say the binary log file (on the master) has reached its maximum
size, so that it has to switch to a +1 binary log file: does he
inform the SLAVE of that switch so that the SLAVE updates its
information about the MASTER status?
The master does not inform the slave via an immediate communication 
channel, but the slave knows how to keep up because the end of the 
binary log file contains continuation information - i.e. the name of 
the next log file to fetch.


OK.

I'm sorry, I was misled by the output of:
[mihamina@prod-ebidual ~]$ echo "SHOW SLAVE STATUS \G;" | mysql -uroot 
-px | grep 'Slave_'
   Slave_IO_State: Waiting for master to send event -- 
this

 Slave_IO_Running: Yes
Slave_SQL_Running: Yes

--
RMA.



RE: SLAVE aware of binary log file switch?

2013-06-17 Thread Rick James
"Waiting for master to send event" -- just means that nothing is being 
replicated at the moment.

The Yes+Yes says that things are running.

Seconds_behind_master = 0 says that the Slave is essentially caught up. NULL 
means something is broken. >0 _may_ indicate a problem, or it may indicate a 
brief delay.

 -Original Message-
 From: Mihamina Rakotomandimby [mailto:miham...@rktmb.org]
 Sent: Monday, June 17, 2013 5:35 AM
 To: mysql@lists.mysql.com
 Subject: Re: SLAVE aware of binary log file switch?
 
 On 2013-06-17 14:43, Denis Jedig wrote:
  Say the binary log file (on the master) has reached its maximum size,
  so that it has to switch to a +1 binary log file: does he inform
  the SLAVE of that switch so that the SLAVE updates its information
  about the MASTER status?
  The master does not inform the slave via an immediate communication
  channel, but the slave knows how to keep up because the end of the
  binary log file contains continuation information - i.e. the name of
  the next log file to fetch.
 
 OK.
 
 I'm sorry I was mislead by the output of:
 [mihamina@prod-ebidual ~]$ echo SHOW SLAVE STATUS \G; | mysql -uroot -
 px | grep 'Slave_'
 Slave_IO_State: Waiting for master to send event --
 this
   Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
 
 --
 RMA.



Re: error-log aging

2013-04-15 Thread hsv
 2013/04/05 11:16 +0200, Johan De Meersman 
Half and half - rename the file, then issue flush logs in mysql to close and 
reopen the logs, which will cause a new log with the configured name to be 
created.

That being said, I'm not much aware of Windows' idiosyncracies - I hope the 
damn thing allows you to rename a file that's being held open by a program. If 
not, well... see above. 

No, as Reindl answered, but in 5.5.8 there is this:


If you flush the logs using FLUSH LOGS or mysqladmin flush-logs and mysqld 
is writing the error log to a file (for example, if it was started with the 
--log-error option), the effect is version dependent: 
* As of MySQL 5.5.7, the server closes and reopens the log file. To rename 
the file, you can do so manually before flushing. Then flushing the logs 
reopens a new file with the original file name. For example, you can rename the 
file and create a new one using the following commands: 

shell> mv host_name.err host_name.err-old

shell> mysqladmin flush-logs

shell> mv host_name.err-old backup-directory

On Windows, use rename rather than mv. 

* Prior to MySQL 5.5.7, the server renames the current log file with the 
suffix -old, then creates a new empty log file. Be aware that a second 
log-flushing operation thus causes the original error log file to be lost 
unless you save it under a different name. On Windows, you cannot rename the 
error log while the server has it open before MySQL 5.5.7. To avoid a restart, 
flush the logs first to cause the server to rename the original file and create 
a new one, then save the renamed file. That also works on Unix, or you can use 
the commands shown earlier. 
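The >= 5.5.7 procedure quoted above can be rehearsed end to end on plain files. In this sketch the server's close-and-reopen (normally triggered by FLUSH LOGS or mysqladmin flush-logs) is simulated with touch, since no mysqld is involved:

```shell
dir=$(mktemp -d)                       # stand-in for the datadir
echo "old error output" > "$dir/host_name.err"
mkdir "$dir/backup-directory"

# 1. rename the live log; the server keeps writing to the same open fd
mv "$dir/host_name.err" "$dir/host_name.err-old"

# 2. FLUSH LOGS / mysqladmin flush-logs would make mysqld close and reopen
#    the log, recreating the original name; simulated here with touch
touch "$dir/host_name.err"

# 3. move the renamed old log out of the way
mv "$dir/host_name.err-old" "$dir/backup-directory/"
```

The ordering matters: rename first, flush second, because it is the flush that makes the server let go of the renamed file and create a fresh one.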


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: error-log aging

2013-04-05 Thread Johan De Meersman
- Original Message -
 From: h...@tbbs.net
 Subject: Re: error-log aging
 
 man logrotate
 
 Not Unix!

So get unix :-)

 In any case, I take this to mean that this is not done within MySQL,
 right?

Half and half - rename the file, then issue flush logs in mysql to close and 
reopen the logs, which will cause a new log with the configured name to be 
created.

That being said, I'm not much aware of Windows' idiosyncrasies - I hope the 
damn thing allows you to rename a file that's being held open by a program. If 
not, well... see above.

-- 
Linux Kriek Wanderung
April 19-21, 2013
http://www.tuxera.be/lkw




Re: error-log aging

2013-04-05 Thread Reindl Harald


On 05.04.2013 11:16, Johan De Meersman wrote:
 - Original Message -
 From: h...@tbbs.net
 Subject: Re: error-log aging

 man logrotate
 
 Not Unix!
 
 So get unix :-)
 
 In any case, I take this to mean that this is not done within MySQL,
 right?
 
 Half and half - rename the file, then issue flush logs in mysql to close 
 and reopen the logs, which will cause a new log with the configured name to 
 be created.
 
 That being said, I'm not much aware of Windows' idiosyncrasies - I hope the 
 damn thing allows you to rename a file that's being held open by a program. 
 If not, well... see above.

no, windows does not allow this

but what will also work without touching mysqld is to copy the existing file 
and then empty it:
on Unix, > /path/mysqld.log would do this; on Windows, maybe redirect an echo 
of an empty string to the file - at least whatever makes > /path/file.ext work 
on Windows too

I have been working this way for years with Apache access logs, emptying them 
at the beginning of each month EXACTLY after webalizer has processed the 
access log of the specific vhost, all controlled by a bash script generated 
by a PHP script with access to the config data :-)
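A sketch of this copy-and-truncate approach on a throwaway file (with a live log, any lines written between the cp and the truncation are lost - that race is the price of not touching mysqld):

```shell
log=$(mktemp)
printf 'line1\nline2\n' > "$log"
bak="$log.bak"

# copy the current contents aside...
cp "$log" "$bak"

# ...then truncate the original in place; the writing process keeps its open
# file descriptor, and with O_APPEND (how mysqld opens its error log on most
# builds) new lines simply land at the start of the now-empty file
: > "$log"
```

Unlike the rename-and-flush recipe, this works even on Windows-style systems where the open file cannot be renamed, which is exactly why Reindl uses it.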



signature.asc
Description: OpenPGP digital signature


Re: error-log aging

2013-04-05 Thread Reindl Harald


On 04.04.2013 23:08, h...@tbbs.net wrote:
 Is there somewhere within MySQL a means of aging the error log, so that it 
 does not grow indefinitely big, or is that done through the OS and filesystem 
 on which mysqld runs?

man logrotate
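On Unix, the logrotate route might look like the following drop-in; the log path, schedule, and mysqladmin invocation are illustrative (credentials would normally come from /root/.my.cnf), and on a real system this would live under /etc/logrotate.d/:

```
/var/log/mysql/mysqld.err {
    monthly
    rotate 12
    missingok
    compress
    postrotate
        # ask mysqld to close and reopen its logs (MySQL >= 5.5.7 semantics)
        /usr/bin/mysqladmin flush-logs
    endscript
}
```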





error-log aging

2013-04-04 Thread hsv
Is there somewhere within MySQL a means of aging the error log, so that it 
does not grow indefinitely big, or is that done through the OS and filesystem 
on which mysqld runs?





Re: error-log aging

2013-04-04 Thread hsv
 2013/04/04 23:18 +0200, Reindl Harald 
 Is there somewhere within MySQL a means of aging the error log, so that it 
 does not grow indefinitely big, or is that done through the OS and filesystem 
 on which mysqld runs?

man logrotate

Not Unix!

In any case, I take this to mean that this is not done within MySQL, right?





Re: log sequence number InnoDB: is in the future!?

2013-02-04 Thread Manuel Arostegui
2013/2/3 Larry Martell larry.mart...@gmail.com


 We also ended up dropping the database and restoring from dumps.
 However all recent dumps ended up having a similar corruption and we
 were still getting the same errors. We had to go back to an October
 dump before it would come up cleanly. And our db is fairly large, and
 it takes around 4 hours to load a dump. We were working on this Friday
 from 8:30 AM until 4AM Saturday before we got it going. And now we're
 trying to recall all the alters we did since then, and reload all the
 data since then, most of which is in files we can import. The only
 thing we don't have is the manual updates done. All in all a total
 disaster and something that will make us rethink our procedures.
 Perhaps we'll look at replication, although I don't know if that would
 have helped in this case.


Hi Larry,

I am sorry to read this. I hope you guys recovered everything already.
I would like to suggest something though.

From my point of view it is always good to backup just schemas (without
data) aside from regular data backups, that's to say, combine both. If
something like this happens, you can always do a diff and get the schemas
recovered in a matter of minutes.
Generally, schemas are pretty light and they won't use any significant disk
space.
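Schema-only backups of the kind Manuel suggests would typically come from mysqldump --no-data, and the diff he mentions works on any two such dumps. A sketch using two tiny hand-written schema files in place of real dumps, since a real dump needs a live server and credentials:

```shell
# With a live server the schema-only dump would be something like:
#   mysqldump --no-data --all-databases > schema-$(date +%F).sql

dir=$(mktemp -d)
cat > "$dir/schema-old.sql" <<'EOF'
CREATE TABLE t1 (id INT PRIMARY KEY);
EOF
cat > "$dir/schema-new.sql" <<'EOF'
CREATE TABLE t1 (id INT PRIMARY KEY, name VARCHAR(20));
EOF

# a plain diff pinpoints exactly which DDL changed between two schema dumps
diff -u "$dir/schema-old.sql" "$dir/schema-new.sql" > "$dir/schema.diff" || true
```

After a disaster, replaying just the +/- lines of that diff as ALTER statements is what gets the schemas "recovered in a matter of minutes".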

About the replication solution... I would strongly recommend using it if
possible in your scenario.
Clearly it won't prevent any data loss generated by a bad statement (UPDATE
without WHERE, DELETE * FROM, etc.). However, if you're thinking of having a
dedicated slave for backups you might want to use pt-slave-delay (
http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html) so you
can have your slave delayed XX minutes/hours and you can prevent disasters
coming from bad statements such as the ones I described earlier.

Anyways, as I was saying, if it's possible to have a server just acting as
a slave as a backup, that would help you to recover faster in corruption
due to HW problems. It would be a matter of setting it up as a master,
which generally takes minutes.

Hope you guys fixed everything already!
Manuel.


RE: log sequence number InnoDB: is in the future!?

2013-02-04 Thread Stillman, Benjamin
I definitely agree with using replication. As for delayed replication, this is 
actually a built in feature of MySQL 5.6 (coming soon).  5.6 has numerous 
improvements to replication. Definitely worth checking out: 
http://dev.mysql.com/tech-resources/articles/whats-new-in-mysql-5.6.html  
Scroll down to Replication Improvements.

Lastly, I've heard good things about Percona's Data Recovery Tool for InnoDB: 
https://launchpad.net/percona-data-recovery-tool-for-innodb. It might be worth 
a try.



-Original Message-
From: Manuel Arostegui [mailto:man...@tuenti.com]
Sent: Monday, February 04, 2013 4:35 AM
To: Larry Martell
Cc: wha...@bfs.de; mysql
Subject: Re: log sequence number InnoDB: is in the future!?

2013/2/3 Larry Martell larry.mart...@gmail.com


 We also ended up dropping the database and restoring from dumps.
 However all recent dumps ended up having a similar corruption and we
 were still getting the same errors. We had to go back to an October
 dump before it would come up cleanly. And our db is fairly large, and
 it takes around 4 hours to load a dump. We were working on this Friday
 from 8:30 AM until 4AM Saturday before we got it going. And now we're
 trying to recall all the alters we did since then, and reload all the
 data since then, most of which is in files we can import. The only
 thing we don't have is the manual updates done. All in all a total
 disaster and something that will make us rethink our procedures.
 Perhaps we'll look at replication, although I don't know if that would
 have helped in this case.


Hi Larry,

I am sorry to read this. I hope you guys recovered everything already.
I would like to suggest something though.

From my point of view it is always good to backup just schemas (without
data) aside from regular data backups, that's to say, combine both. If 
something like this happens, you can always do a diff and get the schemas 
recovered in a matter of minutes.
Generally, schemas are pretty light and they won't use any significant disk 
space.

About the replication solution... I would strongly recommend using it if 
possible in your scenario.
Clearly it won't prevent any data loss generated by a bad statement (UPDATE 
without WHERE, DELETE * FROM, etc.). However, if you're thinking of having a 
dedicated slave for backups you might want to use pt-slave-delay (
http://www.percona.com/doc/percona-toolkit/2.1/pt-slave-delay.html) so you can 
have your slave delayed XX minutes/hours and you can prevent disasters coming 
from bad statements such as the ones I described earlier.

Anyways, as I was saying, if it's possible to have a server just acting as a 
slave as a backup, that would help you to recover faster in corruption due to 
HW problems. It would be a matter of setting it up as a master, which generally 
takes minutes.

Hope you guys fixed everything already!
Manuel.



Notice: This communication may contain privileged and/or confidential 
information. If you are not the intended recipient, please notify the sender by 
email, and immediately delete the message and any attachments without copying 
or disclosing them. LBI may, for any reason, intercept, access, use, and 
disclose any information that is communicated by or through, or which is stored 
on, its networks, applications, services, and devices.




Re: log sequence number InnoDB: is in the future!?

2013-02-03 Thread walter harms


On 02.02.2013 01:34, Larry Martell wrote:
 On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at 
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not 
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all tables 
 are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ? 
 (Having fun with forgein keys)

 Or is there a better way to handle this ?
 
 
 We had the same thing happen to us today. We had a power hit and when
 the server came back we got the log sequences numbers in the future
 message. We were able to dump the
 affected tables, but when we tried to restore them we were not able to
 drop the old tables. When we tried the server crashed with:
 
 InnoDB: Failing assertion not_full_n_used = descr_n_used
 
 We did try booting with innodb_force_recovery at all levels from 1 to
 6 with the same results.
 
 We still have not figured out what to do. Pretty big disaster.
 

Yep, a serious problem.
I tried several things that came to my mind but it was all useless;
I had to drop the database and manually rm ib_datalog0/1 (?).

Did you already get the funny error message about raw partitions?

I must admit that we made several tests before using InnoDB and we
never had such a problem; actually we are happy with it, but that
kind of problem cost me three days of backup replay.

re,
 wh






Re: log sequence number InnoDB: is in the future!?

2013-02-03 Thread Larry Martell
On Sun, Feb 3, 2013 at 7:23 AM, walter harms wha...@bfs.de wrote:


 On 02.02.2013 01:34, Larry Martell wrote:
 On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at 
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not 
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all tables 
 are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ? 
 (Having fun with forgein keys)

 Or is there a better way to handle this ?


 We had the same thing happen to us today. We had a power hit and when
 the server came back we got the log sequences numbers in the future
 message. We were able to dump the
 affected tables, but when we tried to restore them we were not able to
 drop the old tables. When we tried the server crashed with:

 InnoDB: Failing assertion not_full_n_used = descr_n_used

 We did try booting with innodb_force_recovery at all levels from 1 to
 6 with the same results.

 We still have not figured out what to do. Pretty big disaster.


 Yep, a serious problem.
 I tried several thinks that came to my mind but this was all useless
 i had to drop the database and manualy rm ib_datalog0/1 (?).

 Did you already got the funny errormsg about rawpartions ?

 I must admit that we made several test before using innodb but we
 never had such problem, actualy we are happy with that but that
 kind of problems cost me three days of backup replay.

We also ended up dropping the database and restoring from dumps.
However all recent dumps ended up having a similar corruption and we
were still getting the same errors. We had to go back to an October
dump before it would come up cleanly. And our db is fairly large, and
it takes around 4 hours to load a dump. We were working on this Friday
from 8:30 AM until 4AM Saturday before we got it going. And now we're
trying to recall all the alters we did since then, and reload all the
data since then, most of which is in files we can import. The only
thing we don't have is the manual updates done. All in all a total
disaster and something that will make us rethink our procedures.
Perhaps we'll look at replication, although I don't know if that would
have helped in this case.




Re: log sequence number InnoDB: is in the future!?

2013-02-03 Thread Reindl Harald


On 03.02.2013 15:30, Larry Martell wrote:
 We also ended up dropping the database and restoring from dumps.
 However all recent dumps ended up having a similar corruption and we
 were still getting the same errors. We had to go back to an October
 dump before it would come up cleanly. And our db is fairly large, and
 it takes around 4 hours to load a dump. We were working on this Friday
 from 8:30 AM until 4AM Saturday before we got it going. And now we're
 trying to recall all the alters we did since then, and reload all the
 data since then, most of which is in files we can import. The only
 thing we don't have is the manual updates done. All in all a total
 disaster and something that will make us rethink our procedures.
 Perhaps we'll look at replication, although I don't know if that would
 have helped in this case

replication is the way to go

 We had to go back to an October dump

well, with a replication slave you have

a) good chances that not both are corrupt
b) the option to stop the slave to back it up and skip useless dumps
c) restoring a backup of a slave with rsync, which is MUCH faster







Re: log sequence number InnoDB: is in the future!?

2013-02-01 Thread Larry Martell
On Mon, Jan 28, 2013 at 5:01 AM, walter harms wha...@bfs.de wrote:
 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at 
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not 
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all tables 
 are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ? 
 (Having fun with forgein keys)

 Or is there a better way to handle this ?


We had the same thing happen to us today. We had a power hit and when
the server came back we got the log sequences numbers in the future
message. We were able to dump the
affected tables, but when we tried to restore them we were not able to
drop the old tables. When we tried the server crashed with:

InnoDB: Failing assertion not_full_n_used = descr_n_used

We did try booting with innodb_force_recovery at all levels from 1 to
6 with the same results.

We still have not figured out what to do. Pretty big disaster.




Re: log sequence number InnoDB: is in the future!?

2013-01-29 Thread walter harms


On 28.01.2013 16:18, Andrew Moore wrote:
 So this isn't production - well just rebuild it from a backup? It's a pain
 in the rear to get the lsn aligned again through data creation/removal but
 if it's a system critical instance without possible downtime you've got
 some work to do...
 

To be fair, my main concern is to understand what is going on.
Last time we had this in production, we loaded the backup but it
took some serious time.
This time I hoped to find a faster solution.

What exactly belongs to the InnoDB side of a database (besides the tables) -
only the ibdata1 file, or is there more?

re,
 wh


 
 On Mon, Jan 28, 2013 at 2:21 PM, walter harms wha...@bfs.de wrote:
 


 On 28.01.2013 15:01, Manuel Arostegui wrote:
 2013/1/28 walter harms wha...@bfs.de

 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0
 2871649158
 InnoDB: is in the future! Current system log sequence number 0
 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do
 not
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
 tables are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables
 ?
 (Having fun with forgein keys)

 Or is there a better way to handle this ?



 Hello,

 I reckon you really need to think of what caused your MySQL to crash. If
 there's not a clear reason (HW problem) you might want to dig into that
 to
 prevent this happening again. I am saying this because it is not the
 first
 time I see someone fixing a corruption (re-building the database or
 fixing
 corrupted tables) and then getting it corrupted again within some hours.

 very simple: power outage
 Our Production server are on UPS but i was making tests on this one and to
 be
 fair power outages are very seldom

 The problem itself has a solution: increasing the log sequence counter. I
 wouldn't do it if it's not totally necessary (ie: you don't have another
 machine to copy the data from). If you can get the data copied again from
 some other server, that is probably the safest solution here to make sure
 your data isn't corrupted. If not, I would suggest to run
 pt-table-checksum
 to make sure the data is okay. Once your DB is recovered from this crash.


  pt-table-checksum means this tool ? [
 http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
 I would need to run it once, from the description i had the impression it
 is
 intended for monitoring. Could you please explain ?

 re,
  wh

 Cheers
 Manuel.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


 




Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Andrew Moore
Dump and reload or use some scripting to create and drop some fake data to
increase the lsn towards the 'future' value.

http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future
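The "fake data" trick from that answer boils down to generating enough InnoDB redo activity to push the current LSN past the page LSNs. A sketch that only generates the churn SQL - the table name and row count are made up for illustration, and you would pipe the output into mysql against a scratch database, repeating until SHOW ENGINE INNODB STATUS reports an LSN past the bad pages:

```shell
# generate churn SQL: create a scratch table, then insert and delete rows;
# "lsn_churn" and the row count are illustrative names, not a real recipe
gen_churn_sql() {
    rows=$1
    echo 'CREATE TABLE IF NOT EXISTS lsn_churn (id INT, pad CHAR(255)) ENGINE=InnoDB;'
    i=0
    while [ "$i" -lt "$rows" ]; do
        echo "INSERT INTO lsn_churn VALUES ($i, REPEAT('x', 255));"
        i=$((i + 1))
    done
    echo 'DELETE FROM lsn_churn;'
}

# e.g.  gen_churn_sql 10000 | mysql scratch_db   (repeat as needed)
gen_churn_sql 3
```

This is the slow, hacky path; as the thread notes, a dump-and-reload or a copy from a healthy replica is usually the safer fix.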


On Mon, Jan 28, 2013 at 12:01 PM, walter harms wha...@bfs.de wrote:

 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
 tables are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ?
 (Having fun with forgein keys)

 Or is there a better way to handle this ?

 re,
  wh

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Manuel Arostegui
2013/1/28 walter harms wha...@bfs.de

 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
 tables are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ?
 (Having fun with forgein keys)

 Or is there a better way to handle this ?



Hello,

I reckon you really need to think of what caused your MySQL to crash. If
there's not a clear reason (HW problem) you might want to dig into that to
prevent this happening again. I am saying this because it is not the first
time I see someone fixing a corruption (re-building the database or fixing
corrupted tables) and then getting it corrupted again within some hours.

The problem itself has a solution: increasing the log sequence counter. I
wouldn't do it if it's not totally necessary (ie: you don't have another
machine to copy the data from). If you can get the data copied again from
some other server, that is probably the safest solution here to make sure
your data isn't corrupted. If not, I would suggest to run pt-table-checksum
to make sure the data is okay. Once your DB is recovered from this crash.

Cheers
Manuel.


Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread walter harms


On 28.01.2013 15:01, Manuel Arostegui wrote:
 2013/1/28 walter harms wha...@bfs.de
 
 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What i do not
 understand is what
 exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
 tables are ok).

 Not all tables are INNODB. Is is possible to restore only immodb tables ?
 (Having fun with forgein keys)

 Or is there a better way to handle this ?



 Hello,
 
 I reckon you really need to think of what caused your MySQL to crash. If
 there's not a clear reason (HW problem) you might want to dig into that to
 prevent this happening again. I am saying this because it is not the first
 time I see someone fixing a corruption (re-building the database or fixing
 corrupted tables) and then getting it corrupted again within some hours.
 
Very simple: a power outage.
Our production servers are on UPS, but I was making tests on this one and, to 
be fair, power outages are very seldom.

 The problem itself has a solution: increasing the log sequence counter. I
 wouldn't do it if it's not totally necessary (ie: you don't have another
 machine to copy the data from). If you can get the data copied again from
 some other server, that is probably the safest solution here to make sure
 your data isn't corrupted. If not, I would suggest to run pt-table-checksum
 to make sure the data is okay. Once your DB is recovered from this crash.
 

pt-table-checksum - you mean this tool? 
[http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
I would need to run it only once; from the description I had the impression it 
is intended for monitoring. Could you please explain?

re,
 wh

 Cheers
 Manuel.
 




Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Andrew Moore
So this isn't production - well just rebuild it from a backup? It's a pain
in the rear to get the lsn aligned again through data creation/removal but
if it's a system critical instance without possible downtime you've got
some work to do...


On Mon, Jan 28, 2013 at 2:21 PM, walter harms wha...@bfs.de wrote:



 On 28.01.2013 15:01, Manuel Arostegui wrote:
  2013/1/28 walter harms wha...@bfs.de
 
  hi list,
 
  i am using mysql 5.1.53.
  after a crash i have the follwing error in my log:
 
  130128 10:45:25  InnoDB: Error: page 61 log sequence number 0
 2871649158
  InnoDB: is in the future! Current system log sequence number 0
 2494349480.
  InnoDB: Your database may be corrupt or you may have copied the InnoDB
  InnoDB: tablespace but not the InnoDB log files. See
  InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
  InnoDB: for more information.
 
  according to the doc's at
  http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
  I need to restore the database from scratch (short version). What i do
 not
  understand is what
  exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
  tables are ok).
 
  Not all tables are INNODB. Is is possible to restore only immodb tables
 ?
  (Having fun with forgein keys)
 
  Or is there a better way to handle this ?
 
 
 
  Hello,
 
  I reckon you really need to think of what caused your MySQL to crash. If
  there's not a clear reason (HW problem) you might want to dig into that
 to
  prevent this happening again. I am saying this because it is not the
 first
  time I see someone fixing a corruption (re-building the database or
 fixing
  corrupted tables) and then getting it corrupted again within some hours.
 
 very simple: power outage
 Our Production server are on UPS but i was making tests on this one and to
 be
 fair power outages are very seldom

  The problem itself has a solution: increasing the log sequence counter. I
  wouldn't do it if it's not totally necessary (ie: you don't have another
  machine to copy the data from). If you can get the data copied again from
  some other server, that is probably the safest solution here to make sure
  your data isn't corrupted. If not, I would suggest to run
 pt-table-checksum
  to make sure the data is okay. Once your DB is recovered from this crash.
 

  pt-table-checksum means this tool ? [
 http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
 I would need to run it once, from the description i had the impression it
 is
 intended for monitoring. Could you please explain ?

 re,
  wh

  Cheers
  Manuel.
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread walter harms


On 28.01.2013 14:40, Andrew Moore wrote:
 Dump and reload or use some scripting to create and drop some fake data to
 increase the lsn towards the 'future' value.
 
 http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future
 

For now I tend toward solution 3, rsync.
Do you know if it is possible to sync only certain files?

re,
 wh



 
 On Mon, Jan 28, 2013 at 12:01 PM, walter harms wha...@bfs.de wrote:
 
 hi list,

 i am using mysql 5.1.53.
 after a crash i have the follwing error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What I do not
 understand is what exactly is broken: the whole DBMS? One instance?
 (mysqlcheck says all tables are ok.)

 Not all tables are InnoDB. Is it possible to restore only the InnoDB tables?
 (Having fun with foreign keys.)

 Or is there a better way to handle this ?

 re,
  wh

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Reindl Harald


Am 28.01.2013 16:01, schrieb walter harms:
 
 
 Am 28.01.2013 14:40, schrieb Andrew Moore:
 Dump and reload or use some scripting to create and drop some fake data to
 increase the lsn towards the 'future' value.

 http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future

 
 For now I tend toward solution 3, rsync.
 Do you know whether it is possible to rsync only certain files?

No way:

InnoDB has a global tablespace even with innodb_file_per_table.



signature.asc
Description: OpenPGP digital signature


Re: Relay log Question

2013-01-17 Thread Nitin Mehta
Hi,

This was sort of a bug, fixed in MySQL 5.5 by the replication heartbeat. Before 
the heartbeat existed, a new relay log file would be created after every 
slave_net_timeout interval. It doesn't have any negative impact, though.

Hope that helps.
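One way to confirm the slave_net_timeout connection is to look at the spacing of the relay-log files' modification times: on a quiet replica, reconnect-driven rotation shows up as a new file almost exactly every timeout interval (the default is 3600 seconds in these versions). A minimal sketch, with file names and mtimes invented for illustration:

```python
from statistics import median

# (relay log file, mtime as unix seconds) -- hypothetical sample
relay_logs = [
    ("mysqld-relay-bin.000101", 1357660800),
    ("mysqld-relay-bin.000102", 1357664401),
    ("mysqld-relay-bin.000103", 1357668003),
    ("mysqld-relay-bin.000104", 1357671600),
]

mtimes = sorted(t for _, t in relay_logs)
intervals = [b - a for a, b in zip(mtimes, mtimes[1:])]
typical = median(intervals)

slave_net_timeout = 3600  # assumed default for this sketch
suspicious = abs(typical - slave_net_timeout) < 60
print(f"typical rotation interval: {typical}s, matches timeout: {suspicious}")
```

If the typical interval tracks slave_net_timeout, the rotation is driven by reconnects rather than by log volume.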



 From: Akshay Suryavanshi akshay.suryavansh...@gmail.com
To: Wayne Leutwyler wleut...@columbus.rr.com 
Cc: mysql@lists.mysql.com 
Sent: Wednesday, January 9, 2013 1:42 AM
Subject: Re: Relay log Question
 
Also, you may want to check whether a new file really is created every hour
exactly, and whether any cron'd script runs FLUSH LOGS on the
slave server. That will also rotate the relay log.

Cheers

On Wed, Jan 9, 2013 at 1:35 AM, Akshay Suryavanshi 
akshay.suryavansh...@gmail.com wrote:

 Hi,

 Please re-phrase your question. The relay logs are created as and when
 required by the Slave_SQL thread. Once all the events in a relay log have
 been executed, the relay log is purged by the Slave_SQL thread.

 By setting relay_log_purge=0 you are disabling this automatic purge
 option, so new relay log files will still be created but the older ones
 will not be deleted.

 Creation or rotation of relay or binary logs is not time-based: in some
 situations a file might be created within an hour, in many others only
 after a day, a week, or even months.

 The relay logs in your case should normally be of a definite size; check
 the settings max_relay_log_size and max_binlog_size. The latter comes
 into play when the former is disabled (set to 0).

 Thanks,
 Akshay S


 On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler 
 wleut...@columbus.rr.comwrote:

 Hello List,

 Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
 my slave is creating a new relay log. I added this:

 relay_log_purge = 0

 to the my.cnf and it did not solve the issue.

 What am I missing? Never seen this issue before.

 Thanks,

 Walter

 Courage is being scared to death, but saddling up anyway. --John Wayne




Re: Relay log Question

2013-01-08 Thread Akshay Suryavanshi
Hi,

Please re-phrase your question. The relay logs are created as and when
required by the Slave_SQL thread. Once all the events in a relay log have been
executed, the relay log is purged by the Slave_SQL thread.

By setting relay_log_purge=0 you are disabling this automatic purge option,
so new relay log files will still be created but the older ones will not
be deleted.

Creation or rotation of relay or binary logs is not time-based: in some
situations a file might be created within an hour, in many others only after a
day, a week, or even months.

The relay logs in your case should normally be of a definite size; check the
settings max_relay_log_size and max_binlog_size. The latter comes
into play when the former is disabled (set to 0).
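The fallback between those two settings can be stated as a one-line rule (a sketch of the documented behavior, not MySQL source):

```python
def effective_max_relay_log_size(max_relay_log_size, max_binlog_size):
    """The documented fallback: relay logs rotate at max_relay_log_size,
    unless it is 0 (disabled), in which case max_binlog_size applies."""
    return max_relay_log_size if max_relay_log_size > 0 else max_binlog_size

# 0 means "disabled", so a 100 MiB max_binlog_size takes over
print(effective_max_relay_log_size(0, 104857600))
# an explicit relay-log limit wins when set
print(effective_max_relay_log_size(536870912, 104857600))
```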

Thanks,
Akshay S

On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler wleut...@columbus.rr.comwrote:

 Hello List,

 Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
 my slave is creating a new relay log. I added this:

 relay_log_purge = 0

 to the my.cnf and it did not solve the issue.

 What am I missing? Never seen this issue before.

 Thanks,

 Walter

 Courage is being scared to death, but saddling up anyway. --John Wayne




Re: Relay log Question

2013-01-08 Thread Akshay Suryavanshi
Also, you may want to check whether a new file really is created every hour
exactly, and whether any cron'd script runs FLUSH LOGS on the
slave server. That will also rotate the relay log.

Cheers

On Wed, Jan 9, 2013 at 1:35 AM, Akshay Suryavanshi 
akshay.suryavansh...@gmail.com wrote:

 Hi,

 Please re-phrase your question. The relay logs are created as and when
 required by the Slave_SQL thread. Once all the events in a relay log have
 been executed, the relay log is purged by the Slave_SQL thread.

 By setting relay_log_purge=0 you are disabling this automatic purge
 option, so new relay log files will still be created but the older ones
 will not be deleted.

 Creation or rotation of relay or binary logs is not time-based: in some
 situations a file might be created within an hour, in many others only
 after a day, a week, or even months.

 The relay logs in your case should normally be of a definite size; check
 the settings max_relay_log_size and max_binlog_size. The latter comes
 into play when the former is disabled (set to 0).

 Thanks,
 Akshay S


 On Wed, Jan 9, 2013 at 1:21 AM, Wayne Leutwyler 
 wleut...@columbus.rr.comwrote:

 Hello List,

 Quick question. I am running MySQL 5.1.66, and I noticed that every hour,
 my slave is creating a new relay log. I added this:

 relay_log_purge = 0

 to the my.cnf and it did not solve the issue.

 What am I missing? Never seen this issue before.

 Thanks,

 Walter

 Courage is being scared to death, but saddling up anyway. --John Wayne





RE: Unexpected gradual replication log size increase.

2012-10-18 Thread Kent Ho
Thanks for the replies.

 After examining the logs carefully, we found several devices sending snmp 
traps to the application, causing it to compose very large sql statements -- 
statements over a megabyte in size on a single line. We disabled those devices 
and the problems have gone away.

 Thanks.

 Kent.
- Original Message -
From: Rick James
Sent: 10/17/12 04:50 PM
To: Kent Ho, mysql@lists.mysql.com, replicat...@lists.mysql.com
Subject: RE: Unexpected gradual replication log size increase.

 Check that server_id is different between Master and Slave(s). Check other
 settings relating to replication.

 [...]


Re: Unexpected gradual replication log size increase.

2012-10-18 Thread Dehua Yang
Hi Kent

How did you find the snmp traps?

Would you give me some tips on how you located this problem?



On Thu, Oct 18, 2012 at 5:37 PM, Kent Ho k...@graffiti.net wrote:

 Thanks for the replies.

  After examining the logs carefully. We found several devices sending snmp
 traps to the application making it composing large sql statements to mysql.
 Statment over a meg on size on 1 line. We disabled those devices and the
 problems have gone away.

  Thanks.

  Kent.
 - Original Message -
 From: Rick James
 Sent: 10/17/12 04:50 PM
 To: Kent Ho, mysql@lists.mysql.com, replicat...@lists.mysql.com
 Subject: RE: Unexpected gradual replication log size increase.

  Check that server_id is different between Master and Slave(s). Check
 other settings relating to replication.

 [...]




-- 
My Gtalk : meflyingf...@gmail.com
My skype name : meflyingfish
Twitter: http://twitter.com/whitepoplar


Unexpected gradual replication log size increase.

2012-10-17 Thread Kent Ho
Hi,

I have a MySQL replication setup that has been running for over 6 months, and 
recently we had an outage.  We fixed it, brought the server back up, and we 
spotted something peculiar and worrying.   The replication logs are growing in 
size, all of a sudden, since Tuesday 9th Oct, based on clues from monitoring 
graphs.  No changes were recorded at the time that might have triggered it.

The application using this database is Zenoss.  There is no surge of events or 
activity AFAIK in Zenoss, and it is working as normal.  There is no extra load 
or work added to the server.  CPU and load average are normal.  It's just that 
the IO writes to disk increased suddenly and keep creeping up slowly.

Next, to try to find out why (I'm not a mysql guru): I found a mysql bin log 
analyser here:
http://scale-out-blog.blogspot.co.uk/2010/01/whats-in-your-binlog.html
and ran it against the logs.

We noticed Max. Event Bytes increasing by almost exactly 58,590 
bytes per hour.

for i in `ls -1 mysql-bin.*` ; do ls -l $i ; /opt/zends/bin/mysqlbinlog $i | 
./binlog-analyze.pl -q | grep 'Max. Event Bytes' ; done
-rw-rw 1 zenoss zenoss 1076965696 Oct 17 01:07 mysql-bin.000210
Max. Event Bytes  :        10129066
-rw-rw 1 zenoss zenoss 1082897654 Oct 17 01:34 mysql-bin.000211
Max. Event Bytes  :        10156066
.
.
-rw-rw 1 zenoss zenoss 1075538619 Oct 17 07:14 mysql-bin.000224
Max. Event Bytes  :        10495366

On 15th Oct 22:05 GMT mysql-bin.000155 Max. Event Bytes reading was 8,553,166.
40 hours later today 17th Oct 14:00 GMT mysql-bin.000240 Max. Event Bytes 
reading is 10,896,766.
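The 58,590 bytes/hour figure is a straightforward difference of the two readings above (pure arithmetic, no server needed):

```python
first_reading = 8_553_166    # Max. Event Bytes, 15th Oct 22:05 GMT
second_reading = 10_896_766  # Max. Event Bytes, ~40 hours later
hours = 40

growth_per_hour = (second_reading - first_reading) / hours
print(f"Max. Event Bytes growth: {growth_per_hour:.0f} bytes/hour")
```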

On the 15th the bin log growth rate was around 16MB per hour; today it is 50MB 
per hour and increasing.  Before the 9th the bin log size was constant; it took 
several days to fill a 1GB bin log file.  Now I get about 72 files, or 72GB of 
logs, per day.   I've now put in a cron job to purge anything older than 12 
hours, and at the current rate I will soon need to reduce that further.

What could be causing this?   All other attributes of the system are the same 
as they were weeks or months ago.

I'm running Mysql Ver 8.42 Distrib 5.5.15, for linux2.6 on x86_64 on RHEL5.6 
Linux xxx 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 2010 x86_64 x86_64 
x86_64 GNU/Linux.

Please let me know if you need any further information; I'll grab it and post.

Any help/info are highly appreciated.

Thanks in advance.

Kent.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Unexpected gradual replication log size increase.

2012-10-17 Thread Rick James
Check that server_id is different between Master and Slave(s).

Check other settings relating to replication.

 -Original Message-
 From: Kent Ho [mailto:k...@graffiti.net]
 Sent: Wednesday, October 17, 2012 6:45 AM
 To: mysql@lists.mysql.com; replicat...@lists.mysql.com
 Subject: Unexpected gradual replication log size increase.
 
 Hi,
 
 I have a MySQL replication setup that has been running for over 6 months,
 and recently we had an outage.  We fixed it, brought the server back up,
 and we spotted something peculiar and worrying.   The replication logs are
 growing in size, all of a sudden, since Tuesday 9th Oct, based on clues
 from monitoring graphs.  No changes were recorded at the time that might
 have triggered it.
 
 The application using this database is Zenoss.  There is no surge of
 events or activity AFAIK in Zenoss, and it is working as normal.  There is
 no extra load or work added to the server.  CPU and load average are
 normal.  It's just that the IO writes to disk increased suddenly and keep
 creeping up slowly.
 
 Next try find out why?  I'm not a mysql guru.  I've found a mysql bin
 log analyser here:- http://scale-out-blog.blogspot.co.uk/2010/01/whats-
 in-your-binlog.html
 run it against the logs.
 
 We noticed Max. Event Bytes are increasing approx. almost exactly by
 58,590 bytes per hour.
 
 for i in `ls -1 mysql-bin.*` ; do ls -l $i ; /opt/zends/bin/mysqlbinlog
 $i | ./binlog-analyze.pl -q | grep Max. Event Bytes ; done
 -rw-rw 1 zenoss zenoss 1076965696 Oct 17 01:07 mysql-bin.000210
 Max. Event Bytes  :        10129066
 -rw-rw 1 zenoss zenoss 1082897654 Oct 17 01:34 mysql-bin.000211
 Max. Event Bytes  :        10156066 .
 .
 -rw-rw 1 zenoss zenoss 1075538619 Oct 17 07:14 mysql-bin.000224
 Max. Event Bytes  :        10495366
 
 On 15th Oct 22:05 GMT mysql-bin.000155 Max. Event Bytes reading was
 8,553,166.
 40 hours later today 17th Oct 14:00 GMT mysql-bin.000240 Max. Event
 Bytes reading is 10,896,766.
 
 On 15th bin log growth rate was around 16MB per hour, today 50Mb per
 hour and increasing.  Before 9th the bin log was constant size, it take
 several days to fill a 1Gb bin log file.  Now I get about 72 files or
 72Gb logs per day.   Now I've put in a cron job to purge anything older
 then 12 hours and soon need to reduce that further at the current rate.
 
 What could be causing this?   All other attributes of the system are
 the same as they were weeks or months ago.
 
 I'm running Mysql Ver 8.42 Distrib 5.5.15, for linux2.6 on x86_64 on
 RHEL5.6 Linux xxx 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 2010
 x86_64 x86_64 x86_64 GNU/Linux.
 
 Please let me know if you need any further information; I'll grab it and post.
 
 Any help/info are highly appreciated.
 
 Thanks in advance.
 
 Kent.
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql



Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread Shawn Green

On 10/15/2012 7:15 PM, spameden wrote:

Thanks a lot for all your comments!

I did disable Query cache before testing with

set query_cache_type=OFF

for the current session.

I will report this to the MySQL bugs site later.




First. What are all of your logging settings?

SHOW GLOBAL VARIABLES LIKE '%log%';

Next. When you physically look in the slow query log, how long does it 
say that it took this command to execute?


And last, before you can ask MySQL to fix a bug, you must first ensure 
it's a MySQL bug. Please try to reproduce your results using official 
binaries, not those constructed by a third party.  If the problem exists 
in our packages, do tell us about it. If the problem is not reproducible 
using official MySQL products, then please report it to the appropriate 
channel for the product you are using.


MySQL Bugs -
http://bugs.mysql.com/

Thanks!
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
Will do.

mysql  SHOW GLOBAL VARIABLES LIKE '%log%';
+-------------------------------------------+---------------------------------+
| Variable_name                             | Value                           |
+-------------------------------------------+---------------------------------+
| back_log                                  | 50                              |
| binlog_cache_size                         | 32768                           |
| binlog_direct_non_transactional_updates   | OFF                             |
| binlog_format                             | MIXED                           |
| expire_logs_days                          | 5                               |
| general_log                               | OFF                             |
| general_log_file                          | /var/run/mysqld/mysqld.log      |
| innodb_flush_log_at_trx_commit            | 2                               |
| innodb_flush_log_at_trx_commit_session    | 3                               |
| innodb_locks_unsafe_for_binlog            | OFF                             |
| innodb_log_block_size                     | 512                             |
| innodb_log_buffer_size                    | 8388608                         |
| innodb_log_file_size                      | 2145386496                      |
| innodb_log_files_in_group                 | 2                               |
| innodb_log_group_home_dir                 | ./                              |
| innodb_mirrored_log_groups                | 1                               |
| innodb_overwrite_relay_log_info           | OFF                             |
| log                                       | OFF                             |
| log_bin                                   | ON                              |
| log_bin_trust_function_creators           | ON                              |
| log_bin_trust_routine_creators            | ON                              |
| log_error                                 | /var/log/mysql-error.log        |
| log_output                                | FILE                            |
| log_queries_not_using_indexes             | ON                              |
| log_slave_updates                         | OFF                             |
| log_slow_admin_statements                 | OFF                             |
| log_slow_filter                           |                                 |
| log_slow_queries                          | ON                              |
| log_slow_rate_limit                       | 1                               |
| log_slow_slave_statements                 | OFF                             |
| log_slow_sp_statements                    | ON                              |
| log_slow_timestamp_every                  | OFF                             |
| log_slow_verbosity                        | microtime                       |
| log_warnings                              | 1                               |
| max_binlog_cache_size                     | 18446744073709547520            |
| max_binlog_size                           | 104857600                       |
| max_relay_log_size                        | 0                               |
| relay_log                                 | /var/log/mysql/mysqld-relay-bin |
| relay_log_index                           |                                 |
| relay_log_info_file                       | relay-log.info                  |
| relay_log_purge                           | ON                              |
| relay_log_space_limit                     | 0                               |
| slow_query_log                            | ON                              |
| slow_query_log_file                       | /var/log/mysql/mysql-slow.log   |
| slow_query_log_microseconds_timestamp     | OFF                             |
| sql_log_bin                               | ON                              |
| sql_log_off                               | OFF                             |
| sql_log_update                            | ON                              |
| suppress_log_warning_1592                 | OFF                             |
| sync_binlog                               | 0                               |
| use_global_log_slow_control               | none                            |
+-------------------------------------------+---------------------------------+
51 rows in set (0.01 sec)

Here is the full output. Writing to the slow log happens ONLY if
log_queries_not_using_indexes is turned ON.

Query takes:
# Query_time: 0.291280  Lock_time: 0.50  Rows_sent: 0  Rows_examined:
133876  Rows_affected: 0  Rows_read: 1
# Bytes_sent: 1775  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: F229398
SET timestamp=1350389078;
SELECT sql_id, momt, sender, receiver, udhdata, msgdata, time, smsc_id,
service, account, id, sms_type, mclass, mwi, coding, compress, validity,
deferred, dlr_mask, dlr_url, pid, alt_dcs, rpi, charset, boxc_id, binfo,
meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;


2012/10/16 Shawn Green shawn.l.gr...@oracle.com

 On 10/15/2012 7:15 PM, spameden wrote:

 Thanks a lot for all your comments!

 I did disable Query cache before testing with

 set query_cache_type=OFF

 for the current session.

 I will report this to the MySQL bugs site later.



 First. What are all of your logging settings?

 SHOW GLOBAL VARIABLES LIKE '%log%';

 Next. When you physically look in the slow query log, how long does it say
 that it took this command to execute?

 And last, before you can ask MySQL to fix a bug, you must first ensure
 it's a MySQL bug. Please try to reproduce your results using official
 binaries, not those constructed by a third party.  If the problem exists in
 our packages, do tell us about it. If the problem is not reproducible using
 official MySQL products, then please report it to the appropriate channel
 for the product you are using.

 MySQL Bugs -
 http://bugs.mysql.com/

 Thanks!
 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread Michael Dykman
Your now() statement is getting executed for every row of the select.  Try
putting the phrase up front,
as in:
set @ut = unix_timestamp(now());
and then use that in your statement.

On 2012-10-16 8:42 AM, spameden spame...@gmail.com wrote:

Will do.

mysql  SHOW GLOBAL VARIABLES LIKE '%log%';
+-+-+
| Variable_name   | Value
|
+-+-+
| back_log| 50
|
| binlog_cache_size   | 32768
|
| binlog_direct_non_transactional_updates | OFF
|
| binlog_format   | MIXED
|
| expire_logs_days| 5
|
| general_log | OFF
|
| general_log_file| /var/run/mysqld/mysqld.log
|
| innodb_flush_log_at_trx_commit  | 2
|
| innodb_flush_log_at_trx_commit_session  | 3
|
| innodb_locks_unsafe_for_binlog  | OFF
|
| innodb_log_block_size   | 512
|
| innodb_log_buffer_size  | 8388608
|
| innodb_log_file_size| 2145386496
|
| innodb_log_files_in_group   | 2
|
| innodb_log_group_home_dir   | ./
|
| innodb_mirrored_log_groups  | 1
|
| innodb_overwrite_relay_log_info | OFF
|
| log | OFF
|
| log_bin | ON
|
| log_bin_trust_function_creators | ON
|
| log_bin_trust_routine_creators  | ON
|
| log_error   | /var/log/mysql-error.log
|
| log_output  | FILE
|
| log_queries_not_using_indexes   | ON
|
| log_slave_updates   | OFF
|
| log_slow_admin_statements   | OFF
|
| log_slow_filter |
|
| log_slow_queries| ON
|
| log_slow_rate_limit | 1
|
| log_slow_slave_statements   | OFF
|
| log_slow_sp_statements  | ON
|
| log_slow_timestamp_every| OFF
|
| log_slow_verbosity  | microtime
|
| log_warnings| 1
|
| max_binlog_cache_size   | 18446744073709547520
|
| max_binlog_size | 104857600
|
| max_relay_log_size  | 0
|
| relay_log   | /var/log/mysql/mysqld-relay-bin
|
| relay_log_index |
|
| relay_log_info_file | relay-log.info
|
| relay_log_purge | ON
|
| relay_log_space_limit   | 0
|
| slow_query_log  | ON
|
| slow_query_log_file | /var/log/mysql/mysql-slow.log
|
| slow_query_log_microseconds_timestamp   | OFF
|
| sql_log_bin | ON
|
| sql_log_off | OFF
|
| sql_log_update  | ON
|
| suppress_log_warning_1592   | OFF
|
| sync_binlog | 0
|
| use_global_log_slow_control | none
|
+-+-+
51 rows in set (0.01 sec)

Here is full output, but writing happens ONLY if
log_queries_not_using_indexes turned ON.

Query takes:
# Query_time: 0.291280  Lock_time: 0.50  Rows_sent: 0  Rows_examined:
133876  Rows_affected: 0  Rows_read: 1
# Bytes_sent: 1775  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: F229398
SET timestamp=1350389078;
SELECT sql_id, momt, sender, receiver, udhdata, msgdata, time, smsc_id,
service, account, id, sms_type, mclass, mwi, coding, compress, validity,
deferred, dlr_mask, dlr_url, pid, alt_dcs, rpi, charset, boxc_id, binfo,
meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
WHERE time = UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;



2012/10/16 Shawn Green shawn.l.gr...@oracle.com

 On 10/15/2012 7:15 PM, spameden wrote:

 T...


Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
Interesting thought, but I get the same result.

# Query_time: 0.001769  Lock_time: 0.001236 Rows_sent: 0  Rows_examined: 0
use kannel;
SET timestamp=1350413592;
select * from send_sms FORCE INDEX (priority_time) where time <= @ut order by
priority limit 0,11;

The MySQL I'm using is 5.5.28 from dotdeb.org; I'm pretty sure it's close to
the original except for the packaging scripts.

I will check this on the release from MySQL site and report back.

Thanks to all.

2012/10/16 Michael Dykman mdyk...@gmail.com

 your now() statement is getting executed for every row on the select.  try
 putting the phrase up front
 as in:
 set @ut= unix_timestamp(now())
 and then use that in your statement.

 On 2012-10-16 8:42 AM, spameden spame...@gmail.com wrote:

 Will do.

 mysql  SHOW GLOBAL VARIABLES LIKE '%log%';

 +-+-+
 | Variable_name   | Value
 |

 +-+-+
 | back_log| 50
 |
 | binlog_cache_size   | 32768
 |
 | binlog_direct_non_transactional_updates | OFF
 |
 | binlog_format   | MIXED
 |
 | expire_logs_days| 5
 |
 | general_log | OFF
 |
 | general_log_file| /var/run/mysqld/mysqld.log
 |
 | innodb_flush_log_at_trx_commit  | 2
 |
 | innodb_flush_log_at_trx_commit_session  | 3
 |
 | innodb_locks_unsafe_for_binlog  | OFF
 |
 | innodb_log_block_size   | 512
 |
 | innodb_log_buffer_size  | 8388608
 |
 | innodb_log_file_size| 2145386496
 |
 | innodb_log_files_in_group   | 2
 |
 | innodb_log_group_home_dir   | ./
 |
 | innodb_mirrored_log_groups  | 1
 |
 | innodb_overwrite_relay_log_info | OFF
 |
 | log | OFF
 |
 | log_bin | ON
 |
 | log_bin_trust_function_creators | ON
 |
 | log_bin_trust_routine_creators  | ON
 |
 | log_error   | /var/log/mysql-error.log
 |
 | log_output  | FILE
 |
 | log_queries_not_using_indexes   | ON
 |
 | log_slave_updates   | OFF
 |
 | log_slow_admin_statements   | OFF
 |
 | log_slow_filter |
 |
 | log_slow_queries| ON
 |
 | log_slow_rate_limit | 1
 |
 | log_slow_slave_statements   | OFF
 |
 | log_slow_sp_statements  | ON
 |
 | log_slow_timestamp_every| OFF
 |
 | log_slow_verbosity  | microtime
 |
 | log_warnings| 1
 |
 | max_binlog_cache_size   | 18446744073709547520
 |
 | max_binlog_size | 104857600
 |
 | max_relay_log_size  | 0
 |
 | relay_log   | /var/log/mysql/mysqld-relay-bin
 |
 | relay_log_index |
 |
 | relay_log_info_file | relay-log.info
 |
 | relay_log_purge | ON
 |
 | relay_log_space_limit   | 0
 |
 | slow_query_log  | ON
 |
 | slow_query_log_file | /var/log/mysql/mysql-slow.log
 |
 | slow_query_log_microseconds_timestamp   | OFF
 |
 | sql_log_bin | ON
 |
 | sql_log_off | OFF
 |
 | sql_log_update  | ON
 |
 | suppress_log_warning_1592   | OFF
 |
 | sync_binlog | 0
 |
 | use_global_log_slow_control | none
 |

 +-+-+
 51 rows in set (0.01 sec)

 Here is full output, but writing happens ONLY if
 log_queries_not_using_indexes turned ON.

 Query takes:
 # Query_time: 0.291280  Lock_time: 0.50  Rows_sent: 0  Rows_examined:
 133876  Rows_affected: 0  Rows_read: 1
 # Bytes_sent: 1775  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
 # InnoDB_trx_id: F229398
 SET timestamp=1350389078;
 SELECT sql_id, momt, sender, receiver, udhdata, msgdata, time, smsc_id,
 service, account, id, sms_type, mclass, mwi, coding, compress, validity,
 deferred, dlr_mask, dlr_url, pid, alt_dcs, rpi, charset, boxc_id, binfo,
 meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
 WHERE time = UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;



 2012/10/16 Shawn Green shawn.l.gr...@oracle.com

  On 10/15/2012 7:15 PM, spameden wrote:
 
  T...




Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread hsv
 2012/10/16 12:57 -0400, Michael Dykman 
your now() statement is getting executed for every row on the select.  try
putting the phrase up front
as in:
set @ut= unix_timestamp(now())
and then use that in your statement.

Quote:

Functions that return the current date or time are each evaluated only once
per query, at the start of query execution. This means that multiple
references to a function such as NOW() within a single query always produce
the same result. (For our purposes, a single query also includes a call to a
stored program (stored routine, trigger, or event) and all subprograms called
by that program.) This principle also applies to CURDATE(), CURTIME(),
UTC_DATE(), UTC_TIME(), UTC_TIMESTAMP(), and to any of their synonyms.
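
For reference, the rewrite Michael suggests would look like the sketch below
(table and column names are taken from the thread; the <= on `time` follows
the original queue query). Per the manual text quoted above, though, the user
variable is unnecessary: NOW() is already evaluated only once per query.

```sql
-- Hypothetical rewrite: capture the timestamp once in a user variable,
-- then reference the variable in the query.
SET @ut = UNIX_TIMESTAMP(NOW());

SELECT *
FROM send_sms_test
WHERE time <= @ut        -- constant for the whole statement
ORDER BY priority
LIMIT 0, 50;
```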



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
That's exactly what I thought when reading Michael's email, but tried
anyways, thanks for clarification :)

2012/10/16 h...@tbbs.net

  2012/10/16 12:57 -0400, Michael Dykman 
 your now() statement is getting executed for every row on the select.  try
 putting the phrase up front
 as in:
 set @ut= unix_timestamp(now())
 and then use that in your statement.
 
 Quote:

 Functions that return the current date or time are each evaluated only
 once per query, at the start of query execution. This means that multiple
 references to a function such as NOW() within a single query always
 produce the same result. (For our purposes, a single query also includes
 a call to a stored program (stored routine, trigger, or event) and all
 subprograms called by that program.) This principle also applies to
 CURDATE(), CURTIME(), UTC_DATE(), UTC_TIME(), UTC_TIMESTAMP(), and to any
 of their synonyms.



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
 | SIMPLE  | send_sms_test | index | NULL  | priority_time
| 12  | NULL |  * 11* | Using where |
++-+---+---+---+---+-+--+--+-+
1 row in set (0.00 sec)

But if I issue the query, I see it in the mysql-slow.log:
select * from send_sms_test where time <= UNIX_TIMESTAMP(NOW()) order by
priority limit 0,11;

If I create the INDEX as (time, priority) (in reverse order instead of
(priority, time)), I still get the same usage of the priority_time key with
the same key length, but the row estimate is now doubled:
mysql> create index time_priority ON send_sms_test (time,priority);
Query OK, 0 rows affected (0.67 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> desc select * from send_sms_test where time <= UNIX_TIMESTAMP(NOW())
order by priority limit 0,11;
++-+---+---+---+---+-+--+--+-+
| id | select_type | table | type  | possible_keys | key
| key_len | ref  | rows | Extra   |
++-+---+---+---+---+-+--+--+-+
|  1 | SIMPLE  | send_sms_test | index | time_priority | priority_time
| 12  | NULL |   *22* | Using where |
++-+---+---+---+---+-+--+--+-+

And if both indexes are created, I no longer see this query in the
slow log at all.

Of course, if I disable log_queries_not_using_indexes, I get none of the
queries.

So is it a bug in Percona's implementation, or is it general MySQL
behavior?

Thanks


Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
` bigint(20) DEFAULT NULL,
   `rpi` bigint(20) DEFAULT NULL,
   `charset` varchar(255) DEFAULT NULL,
   `boxc_id` varchar(255) DEFAULT NULL,
   `binfo` varchar(255) DEFAULT NULL,
   `meta_data` text,
   `task_id` bigint(20) DEFAULT NULL,
   `msgid` bigint(20) DEFAULT NULL,
   `priority` int(3) unsigned NOT NULL DEFAULT '500',
   PRIMARY KEY (`sql_id`),
   KEY `task_id` (`task_id`),
   KEY `receiver` (`receiver`),
   KEY `msgid` (`msgid`),
   KEY `priority_time` (`priority`,`time`)
 ) ENGINE=InnoDB AUTO_INCREMENT=7806318 DEFAULT CHARSET=utf8

 Slow-queries turned on with an option:
 | log_queries_not_using_indexes | ON|

 mysqld --version
 mysqld  Ver 5.1.65-rel14.0 for debian-linux-gnu on x86_64 ((Percona Server
 (GPL), 14.0, Revision 475))

 If I check with EXPLAIN, MySQL says it would use the index:
 mysql> desc select * from send_sms_test where
 time <= UNIX_TIMESTAMP(NOW()) order by priority limit 0,11;

 ++-+---+---+---+---+-+--+--+-+
 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows | Extra   |

 ++-+---+---+---+---+-+--+--+-+
 |  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time
 | 12  | NULL |  * 11* | Using where |

 ++-+---+---+---+---+-+--+--+-+
 1 row in set (0.00 sec)

 But if I issue the query, I see it in the mysql-slow.log:
 select * from send_sms_test where time <= UNIX_TIMESTAMP(NOW()) order by
 priority limit 0,11;

 If I create the INDEX as (time, priority) (in reverse order instead of
 (priority, time)), I still get the same usage of the priority_time key with
 the same key length, but the row estimate is now doubled:
 mysql> create index time_priority ON send_sms_test (time,priority);
 Query OK, 0 rows affected (0.67 sec)
 Records: 0  Duplicates: 0  Warnings: 0

 mysql> desc select * from send_sms_test where
 time <= UNIX_TIMESTAMP(NOW()) order by priority limit 0,11;

 ++-+---+---+---+---+-+--+--+-+
 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows | Extra   |

 ++-+---+---+---+---+-+--+--+-+
 |  1 | SIMPLE  | send_sms_test | index | time_priority | priority_time
 | 12  | NULL |   *22* | Using where |

 ++-+---+---+---+---+-+--+--+-+

 And if both indexes created I do not have anymore this query in the
 slow-log.

 Of course If I disable log_queries_not_using_indexes I get none of the
 queries.

 So is it a bug inside Percona's implementation or it's generally MySQL
 behavior?

 Thanks



RE: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread Rick James
* Rows = 11 / 22 -- don't take the numbers too seriously; they are crude 
approximations based on estimated cardinality.

* The 11 comes from the LIMIT -- therefore useless in judging the efficiency.  
(The 22 may be 2*11; I don't know.)

* Run the EXPLAINs without LIMIT -- that will avoid the bogus 11/22.

* If the CREATE INDEX took only 0.67 sec, I surmise that you have very few rows 
in the table??  So this discussion is not necessarily valid in general cases.

* What percentage of time values meet the WHERE?  This has a big impact on the 
choice of explain plan and performance.

* Set long_query_time = 0; to get it in the slowlog even if it is fast.  Then 
look at the various extra values (such as filesort, on disk, temp table used, 
etc).

* Do this (with each index):
SHOW SESSION STATUS LIKE 'Handler_read%';
SELECT ... FORCE INDEX(...) ...;
SHOW SESSION STATUS LIKE 'Handler_read%';
Then take the diffs of the handler counts.  This will give you a pretty 
detailed idea of what is going on; better than the SlowLog.

* INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes).  
Perhaps you should have SMALLINT UNSIGNED (2 bytes).

* BIGINT takes 8 bytes -- usually over-sized.
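
Filling in Rick's skeleton with the table and index from this thread, the
handler-diff procedure would look something like the sketch below (the <=
predicate mirrors the earlier queries; Handler_read_* counters are cumulative
per session, so the cost of the query is the difference between the two
snapshots):

```sql
-- Snapshot the session handler counters before the query.
SHOW SESSION STATUS LIKE 'Handler_read%';

-- Run the query under test with the index pinned.
SELECT * FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= UNIX_TIMESTAMP(NOW())
ORDER BY priority
LIMIT 0, 100;

-- Snapshot again and subtract the first snapshot from this one.
-- A large delta in Handler_read_next means many index entries were
-- scanned in order; large Handler_read_rnd* deltas point at sorting
-- and random row fetches.
SHOW SESSION STATUS LIKE 'Handler_read%';
```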


 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Monday, October 15, 2012 1:42 PM
 To: mysql@lists.mysql.com
 Subject: mysql logs query with indexes used to the slow-log and not
 logging if there is index in reverse order
 
 Hi, list.
 
 Sorry for the long subject, but I'm really interested in solving this
 and need a help:
 
 I've got a table:
 
  mysql> show create table send_sms_test;
  +---------------+--------------+
  | Table         | Create Table |
  +---------------+--------------+
 | send_sms_test | CREATE TABLE `send_sms_test` (
   `sql_id` bigint(20) NOT NULL AUTO_INCREMENT,
   `momt` enum('MO','MT') DEFAULT NULL,
   `sender` varchar(20) DEFAULT NULL,
   `receiver` varchar(20) DEFAULT NULL,
   `udhdata` blob,
   `msgdata` text,
   `time` bigint(20) NOT NULL,
   `smsc_id` varchar(255) DEFAULT 'main',
   `service` varchar(255) DEFAULT NULL,
   `account` varchar(255) DEFAULT NULL,
   `id` bigint(20) DEFAULT NULL,
   `sms_type` tinyint(1) DEFAULT '2',
   `mclass

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
|
| Handler_read_next | 576090 |
| Handler_read_prev | 0  |
| Handler_read_rnd  | 126|
| Handler_read_rnd_next | 223|
+---++
6 rows in set (0.00 sec)

mysql> select * from send_sms_test FORCE INDEX (time_priority) where
time <= UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.09 sec)

mysql> SHOW SESSION STATUS LIKE 'Handler_read%';
+---++
| Variable_name | Value  |
+---++
| Handler_read_first| 18 |
| Handler_read_key  | 244|
| Handler_read_next | 719969 |
| Handler_read_prev | 0  |
| Handler_read_rnd  | 226|
| Handler_read_rnd_next | 223|
+---++
6 rows in set (0.00 sec)

I don't understand much about the Handler values; could you please explain
more, based on the results I've posted? In which case does it work better,
and how does it use the index?

About BIGINT(20) and INT(3): I will look into this later. I understand they
might be oversized, but my main question is about the index and why it is
being used so strangely.

Many thanks for your quick answer!
2012/10/16 Rick James rja...@yahoo-inc.com

 * Rows = 11 / 22 -- don't take the numbers too seriously; they are crude
 approximations based on estimated cardinality.

 * The 11 comes from the LIMIT -- therefore useless in judging the
 efficiency.  (The 22 may be 2*11; I don't know.)

 * Run the EXPLAINs without LIMIT -- that will avoid the bogus 11/22.

 * If the CREATE INDEX took only 0.67 sec, I surmise that you have very few
 rows in the table??  So this discussion is not necessarily valid in general
 cases.

 * What percentage of time values meet the WHERE?  This has a big impact on
 the choice of explain plan and performance.

 * Set long_query_time = 0; to get it in the slowlog even if it is fast.
  Then look at the various extra values (such as filesort, on disk, temp
 table used, etc).

 * Do this (with each index):
 SHOW SESSION STATUS LIKE 'Handler_read%';
 SELECT ... FORCE INDEX(...) ...;
 SHOW SESSION STATUS LIKE 'Handler_read%';
 Then take the diffs of the handler counts.  This will give you a pretty
 detailed idea of what is going on; better than the SlowLog.

 * INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes).
  Perhaps you should have SMALLINT UNSIGNED (2 bytes).

 * BIGINT takes 8 bytes -- usually over-sized.


  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Monday, October 15, 2012 1:42 PM
  To: mysql@lists.mysql.com
  Subject: mysql logs query with indexes used to the slow-log and not
  logging if there is index in reverse order
 
  Hi, list.
 
  Sorry for the long subject, but I'm really interested in solving this
  and need a help:
 
  I've got a table:
 
   mysql> show create table send_sms_test;
   +---------------+--------------+
   | Table         | Create Table |

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Sorry, forgot to say:

mysql show variables like 'long_query_time%';
+-+---+
| Variable_name   | Value |
+-+---+
| long_query_time | 10.00 |
+-+---+
1 row in set (0.00 sec)

It's getting into the log only due to:

mysql show variables like '%indexes%';
+---+---+
| Variable_name | Value |
+---+---+
| log_queries_not_using_indexes | ON|
+---+---+
1 row in set (0.00 sec)

If I turn it off - it's all fine

My initial question was: why does MySQL log the query in the slow log if the
query uses an INDEX?

And why does it stop logging if I create an INDEX (time, priority)? The
query specifies FORCE INDEX (priority_time), so MySQL shouldn't use the
newly created INDEX (time, priority) at all.

2012/10/16 spameden spame...@gmail.com

 Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

 Here is the MySQL-5.1 Percona testing table:

 mysql select count(*) from send_sms_test;
 +--+
 | count(*) |
 +--+
 |   143879 |
 +--+
 1 row in set (0.03 sec)

 Without LIMIT:
 mysql> desc select * from send_sms_test FORCE INDEX (time_priority) where
 time <= UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--+---+-+

 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows  | Extra   |

 ++-+---+---+---+---+-+--+---+-+
 |  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority
 | 8   | NULL | 73920 | Using where; Using filesort |

 ++-+---+---+---+---+-+--+---+-+
 1 row in set (0.00 sec)

 mysql> desc select * from send_sms_test FORCE INDEX (priority_time) where
 time <= UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--++-+

 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows   | Extra   |

 ++-+---+---+---+---+-+--++-+
 |  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time
 | 12  | NULL | 147840 | Using where |

 ++-+---+---+---+---+-+--++-+

 1 row in set (0.00 sec)

 But I actually need to use LIMIT, because the client uses it to limit the
 number of records returned for processing.

 mysql> select * from send_sms_test FORCE INDEX (priority_time) where
 time <= UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.00 sec)

 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.53 |
 | Opening tables | 0.09 |
 | System lock| 0.05 |
 | Table lock | 0.04 |
 | init   | 0.37 |
 | optimizing | 0.05 |
 | statistics | 0.07 |
 | preparing  | 0.05 |
 | executing  | 0.01 |
 | Sorting result | 0.03 |
 | Sending data   | 0.000856 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.15 |
 | logging slow query | 0.01 |
 | logging slow query | 0.47 |
 | cleaning up| 0.02 |
 ++--+
 17 rows in set (0.00 sec)

 mysql> select * from send_sms_test FORCE INDEX (time_priority) where
 time <= UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.08 sec)
 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.48 |
 | Opening tables | 0.09 |
 | System lock| 0.02 |
 | Table lock | 0.04 |
 | init   | 0.47 |
 | optimizing | 0.06 |
 | statistics | 0.43 |
 | preparing  | 0.18 |
 | executing  | 0.01 |
 | Sorting result | 0.076725 |
 | Sending data   | 0.001406 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.12 |
 | logging slow query | 0.01 |
 | cleaning up| 0.02 |
 ++--+
 16 rows in set (0.00 sec)

 As you can see, the latter query takes more time because it uses filesort
 as well.

 Now, handler:
 mysql SHOW SESSION STATUS LIKE 'Handler_read%';select * from
 send_sms_test FORCE INDEX (priority_time) where time=UNIX_TIMESTAMP(NOW())
 order by priority LIMIT 0,100;SHOW SESSION STATUS LIKE

RE: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread Rick James
I don't fully understand Handler numbers, either.  But note the vast difference 
in Handler_read_next, as if the second test had to read (sequentially scan) a 
lot more stuff (in the index or the data).

Summary:
   INDEX(time, priority) -- slower; bigger Handler numbers; shorter key_len; 
filesort
   INDEX(priority, time) -- faster; smaller; seems to use both keys of the 
index (key_len=12); avoids filesort (because INDEX(priority, ...) agrees with 
ORDER BY priority).

The Optimizer has (at some level) two choices:

* Start with the WHERE

* Start with the ORDER BY
Since the ORDER BY matches one of the indexes, it can avoid the sort and stop 
with the LIMIT.  However, if most of the rows failed the WHERE clause, this 
could be the wrong choice.
That is, it is hard for the optimizer to get a query like this right every 
time.

To see what I mean, flip the inequality in WHERE time = ... around;  I think 
the results will be disappointing.

If you had more than a million rows, I would bring up PARTITIONing as an
assist for this 2-dimensional type of problem.

From: spameden [mailto:spame...@gmail.com]
Sent: Monday, October 15, 2012 3:23 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: mysql logs query with indexes used to the slow-log and not logging 
if there is index in reverse order

Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

Here is the MySQL-5.1 Percona testing table:

mysql select count(*) from send_sms_test;
+--+
| count(*) |
+--+
|   143879 |
+--+
1 row in set (0.03 sec)

Without LIMIT:
mysql desc select * from send_sms_test FORCE INDEX (time_priority) where 
time=UNIX_TIMESTAMP(NOW()) order by priority;
++-+---+---+---+---+-+--+---+-+
| id | select_type | table | type  | possible_keys | key   | 
key_len | ref  | rows  | Extra   |
++-+---+---+---+---+-+--+---+-+
|  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority | 8  
 | NULL | 73920 | Using where; Using filesort |
++-+---+---+---+---+-+--+---+-+
1 row in set (0.00 sec)

mysql desc select * from send_sms_test FORCE INDEX (priority_time) where 
time=UNIX_TIMESTAMP(NOW()) order by priority;
++-+---+---+---+---+-+--++-+
| id | select_type | table | type  | possible_keys | key   | 
key_len | ref  | rows   | Extra   |
++-+---+---+---+---+-+--++-+
|  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time | 12 
 | NULL | 147840 | Using where |
++-+---+---+---+---+-+--++-+
1 row in set (0.00 sec)

But I actually need to use LIMIT, because client uses this to limit the number 
of records returned to process.

mysql select * from send_sms_test FORCE INDEX (priority_time) where 
time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.00 sec)

mysql show profile;
++--+
| Status | Duration |
++--+
| starting   | 0.53 |
| Opening tables | 0.09 |
| System lock| 0.05 |
| Table lock | 0.04 |
| init   | 0.37 |
| optimizing | 0.05 |
| statistics | 0.07 |
| preparing  | 0.05 |
| executing  | 0.01 |
| Sorting result | 0.03 |
| Sending data   | 0.000856 |
| end| 0.03 |
| query end  | 0.01 |
| freeing items  | 0.15 |
| logging slow query | 0.01 |
| logging slow query | 0.47 |
| cleaning up| 0.02 |
++--+
17 rows in set (0.00 sec)

mysql select * from send_sms_test FORCE INDEX (time_priority) where 
time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.08 sec)
mysql show profile;
++--+
| Status | Duration |
++--+
| starting   | 0.48 |
| Opening tables | 0.09 |
| System lock| 0.02 |
| Table lock | 0.04 |
| init   | 0.47 |
| optimizing | 0.06 |
| statistics | 0.43 |
| preparing  | 0.18 |
| executing  | 0.01 |
| Sorting result | 0.076725 |
| Sending data   | 0.001406 |
| end| 0.03 |
| query end  | 0.01 |
| freeing items  | 0.12 |
| logging slow query | 0.01 |
| cleaning up| 0.02

RE: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread Rick James
> My initial question was why MySQL logs it in the slow log if the query uses
> an INDEX?

That _may_ be worth a bug report.

A _possible_ answer...  EXPLAIN presents what the optimizer is in the mood for 
at that moment.  It does not necessarily reflect what it was in the mood for 
when it ran the query.

When timing things, run them twice (and be sure not to hit the Query cache).  
The first time freshens the cache (buffer_pool, etc); the second time gives you 
a 'reproducible' time.  I believe (without proof) that the cache contents can 
affect the optimizer's choice.
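
A minimal way to follow that advice when timing by hand (SQL_NO_CACHE is a
standard MySQL 5.x hint; the query shape is the one from this thread):

```sql
-- Per-statement: skip the query cache so the second run measures real work.
SELECT SQL_NO_CACHE * FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= UNIX_TIMESTAMP(NOW())
ORDER BY priority
LIMIT 0, 100;

-- Or per-session, as spameden does later in the thread:
SET SESSION query_cache_type = OFF;
```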

From: spameden [mailto:spame...@gmail.com]
Sent: Monday, October 15, 2012 3:29 PM
To: Rick James
Cc: mysql@lists.mysql.com
Subject: Re: mysql logs query with indexes used to the slow-log and not logging 
if there is index in reverse order

Sorry, forgot to say:

mysql show variables like 'long_query_time%';
+-+---+
| Variable_name   | Value |
+-+---+
| long_query_time | 10.00 |
+-+---+
1 row in set (0.00 sec)

It's getting into the log only due to:

mysql show variables like '%indexes%';
+---+---+
| Variable_name | Value |
+---+---+
| log_queries_not_using_indexes | ON|
+---+---+
1 row in set (0.00 sec)

If I turn it off - it's all fine

My initial question was: why does MySQL log the query in the slow log if the
query uses an INDEX?

And why does it stop logging if I create an INDEX (time, priority)? The query
specifies FORCE INDEX (priority_time), so MySQL shouldn't use the newly
created INDEX (time, priority) at all.
2012/10/16 spameden spame...@gmail.commailto:spame...@gmail.com
Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

Here is the MySQL-5.1 Percona testing table:

mysql select count(*) from send_sms_test;
+--+
| count(*) |
+--+
|   143879 |
+--+
1 row in set (0.03 sec)

Without LIMIT:
mysql desc select * from send_sms_test FORCE INDEX (time_priority) where 
time=UNIX_TIMESTAMP(NOW()) order by priority;
++-+---+---+---+---+-+--+---+-+

| id | select_type | table | type  | possible_keys | key   | 
key_len | ref  | rows  | Extra   |
++-+---+---+---+---+-+--+---+-+
|  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority | 8  
 | NULL | 73920 | Using where; Using filesort |
++-+---+---+---+---+-+--+---+-+
1 row in set (0.00 sec)
mysql desc select * from send_sms_test FORCE INDEX (priority_time) where 
time=UNIX_TIMESTAMP(NOW()) order by priority;
++-+---+---+---+---+-+--++-+

| id | select_type | table | type  | possible_keys | key   | 
key_len | ref  | rows   | Extra   |
++-+---+---+---+---+-+--++-+
|  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time | 12 
 | NULL | 147840 | Using where |
++-+---+---+---+---+-+--++-+

1 row in set (0.00 sec)
But I actually need to use LIMIT, because client uses this to limit the number 
of records returned to process.

mysql select * from send_sms_test FORCE INDEX (priority_time) where 
time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.00 sec)

mysql show profile;
++--+
| Status | Duration |
++--+
| starting   | 0.53 |
| Opening tables | 0.09 |
| System lock| 0.05 |
| Table lock | 0.04 |
| init   | 0.37 |
| optimizing | 0.05 |
| statistics | 0.07 |
| preparing  | 0.05 |
| executing  | 0.01 |
| Sorting result | 0.03 |
| Sending data   | 0.000856 |
| end| 0.03 |
| query end  | 0.01 |
| freeing items  | 0.15 |
| logging slow query | 0.01 |
| logging slow query | 0.47 |
| cleaning up| 0.02 |
++--+
17 rows in set (0.00 sec)

mysql select * from send_sms_test FORCE INDEX (time_priority) where 
time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.08 sec)
mysql show profile;
++--+
| Status | Duration |
++--+
| starting   | 0.48 |
| Opening tables | 0.09 |
| System lock| 0.02 |
| Table lock

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Thanks a lot for all your comments!

I did disable Query cache before testing with

set query_cache_type=OFF

for the current session.

I will report this to the MySQL bugs site later.



2012/10/16 Rick James rja...@yahoo-inc.com

 > My initial question was why MySQL logs it in the slow log if the
 > query uses an INDEX?

 That _may_ be worth a bug report.

 A _possible_ answer...  EXPLAIN presents what the optimizer is in the mood
 for at that moment.  It does not necessarily reflect what it was in the
 mood for when it ran the query.


 When timing things, run them twice (and be sure not to hit the Query
 cache).  The first time freshens the cache (buffer_pool, etc); the second
 time gives you a 'reproducible' time.  I believe (without proof) that the
 cache contents can affect the optimizer's choice.


 From: spameden [mailto:spame...@gmail.com]
 Sent: Monday, October 15, 2012 3:29 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: mysql logs query with indexes used to the slow-log and not
 logging if there is index in reverse order


 Sorry, forgot to say:

 mysql show variables like 'long_query_time%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | long_query_time | 10.00 |
 +-+---+
 1 row in set (0.00 sec)

 It's getting in the log only due:

 mysql show variables like '%indexes%';
 +---+---+
 | Variable_name | Value |
 +---+---+
 | log_queries_not_using_indexes | ON|
 +---+---+
 1 row in set (0.00 sec)

 If I turn it off - it's all fine

 My initial question was why MySQL logs it in the slow log if the query
 uses an INDEX?

 And why it's not logging if I create an INDEX (time, priority) (but in the
 query there is FORCE INDEX (priority,time) specified, so MySQL shouldn't
 use newly created INDEX (time, priority) at all).

 2012/10/16 spameden spame...@gmail.com

 Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

 Here is the MySQL-5.1 Percona testing table:

 mysql select count(*) from send_sms_test;
 +--+
 | count(*) |
 +--+
 |   143879 |
 +--+
 1 row in set (0.03 sec)

 Without LIMIT:
 mysql desc select * from send_sms_test FORCE INDEX (time_priority) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--+---+-+
 


 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows  | Extra   |


 ++-+---+---+---+---+-+--+---+-+
 |  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority
 | 8   | NULL | 73920 | Using where; Using filesort |

 ++-+---+---+---+---+-+--+---+-+
 

 1 row in set (0.00 sec)

 mysql desc select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--++-+
 


 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows   | Extra   |


 ++-+---+---+---+---+-+--++-+
 |  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time
 | 12  | NULL | 147840 | Using where |

 ++-+---+---+---+---+-+--++-+
 


 1 row in set (0.00 sec)

 But I actually need to use LIMIT, because client uses this to limit the
 number of records returned to process.

 mysql select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.00 sec)

 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.53 |
 | Opening tables | 0.09 |
 | System lock| 0.05 |
 | Table lock | 0.04 |
 | init   | 0.37 |
 | optimizing | 0.05 |
 | statistics | 0.07 |
 | preparing  | 0.05 |
 | executing  | 0.01 |
 | Sorting result | 0.03 |
 | Sending data   | 0.000856 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.15 |
 | logging slow query | 0.01 |
 | logging slow query | 0.47 |
 | cleaning up| 0.02 |
 ++--+
 17 rows in set (0.00

Re: user last activity and log in

2012-10-05 Thread Singer Wang
Mellow greetings,

Enhance your calm. Let's get our facts straight and not go off our
rockers. MySQL 5.6 Enterprise Edition will be able to do this natively (
https://blogs.oracle.com/MySQL/entry/new_in_mysql_enterprise_edition), but
otherwise you cannot do it natively. That does not mean
it's impossible, though, as there are a few easy workarounds.

1) Force all logins to use the PAM or AD authentication plugin -- if the
authentication succeeds, then log it in AD or PAM
2) Use init_connect to log logins, but that doesn't work for users with
SUPER privileges, as Keith mentioned below (thanks Keith for actually trying
to help!)
3) Write your own plugin using the MySQL Plugin APIs
4) use the McAfee Audit Plugin for MySQL (Free:
http://www.devshed.com/c/a/MySQL/McAfee-Releases-Audit-Plugin-for-MySQL-Users-86548/
)
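
Option 2 above can be sketched as follows. This is illustrative only: the
`audit` schema and table are hypothetical names, every non-SUPER account needs
INSERT privilege on the log table (otherwise its connections fail with an
error), and, as noted, init_connect is simply skipped for SUPER users. Setting
it with SET GLOBAL takes effect for new connections without a restart; putting
it in my.cnf, as in the write-up Keith linked, requires one.

```sql
-- Hypothetical setup: assumes an `audit` database already exists, and
-- creates a table that receives one row per new connection.
CREATE TABLE IF NOT EXISTS audit.connection_log (
  connect_time  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  user_host     VARCHAR(255),
  connection_id BIGINT
);

-- Executed by the server for each new non-SUPER connection.
SET GLOBAL init_connect =
  'INSERT INTO audit.connection_log (user_host, connection_id)
   VALUES (CURRENT_USER(), CONNECTION_ID())';
```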


Singer Wang
(yes, I just watched Demolition Man)



On Thu, Oct 4, 2012 at 11:29 PM, Keith Murphy bmur...@paragon-cs.comwrote:

 My friend Dave Holoboff wrote this up some time ago:


 http://mysqlhints.blogspot.com/2011/01/how-to-log-user-connections-in-mysql.html

 You know you people sound like children.

 Really  unprofessional.

 Go ahead --- call me names. I left middle school almost 30 years ago. It
 won't bother me.

 Can we knock off the name calling and actually offer advice and possible
 solutions? I thought that was what this list was for.

 For those of us out in the field doing things ... this might be your
 ticket. It requires a restart of MySQL (which may or may not be acceptable),
 but it's a fairly clean solution.

 Minimal load, easy to query for your last connection time and how often
 connections are made by a user.

 Again, requires a restart to enable (and disable) . Oh, and users with
 super privileges won't be logged.

 Thanks,

 Keith

 --

 Keith Murphy
 Senior MySQL DBA
 Principal Trainer
 Paragon Consulting Services
 http://www.paragon-cs.com
 850-637-3877







 --

 (c) 850-637-3877



Re: user last activity and log in

2012-10-05 Thread Singer Wang
My bad actually, the MySQL Enterprise Audit is available in MySQL 5.5

S

On Fri, Oct 5, 2012 at 2:14 AM, Singer Wang w...@singerwang.com wrote:

 Mellow greetings,

 Enhance your calm. Let's get our facts straight and not go off our
 rockers. MySQL 5.6 Enterprise Edition will be able to do this natively (
 https://blogs.oracle.com/MySQL/entry/new_in_mysql_enterprise_edition),
 but otherwise you cannot do it natively. That does not mean
 it's impossible, though, as there are a few easy workarounds.

 1) Force all logins to use the PAM or AD authentication plugin -- if the
 authentication succeeds, then log it in AD or PAM
 2) Use init_connect to log logins, but that doesn't work for users with
 SUPER privileges, as Keith mentioned below (thanks Keith for actually trying
 to help!)
 3) Write your own plugin using the MySQL Plugin APIs
 4) use the McAfee Audit Plugin for MySQL (Free:
 http://www.devshed.com/c/a/MySQL/McAfee-Releases-Audit-Plugin-for-MySQL-Users-86548/
 )


 Singer Wang
 (yes, I just watched Demolition Man)



 On Thu, Oct 4, 2012 at 11:29 PM, Keith Murphy bmur...@paragon-cs.comwrote:

 My friend Dave Holoboff wrote this up some time ago:


 http://mysqlhints.blogspot.com/2011/01/how-to-log-user-connections-in-mysql.html

 You know you people sound like children.

 Really  unprofessional.

 Go ahead --- call me names. i left middle school almost 30 years ago. It
 won't bother me.

 Can we knock off the name calling and actually offer advice and possible
 solutions? I thought that was what this list was for.

 For those of us out in the field doing things ... this might be your
 ticket. It requires a restart of MySQL (which may or may not be
 acceptable), but it's a fairly clean solution.

 Minimal load, easy to query for your last connection time and how often
 connections are made by a user.

 Again, requires a restart to enable (and disable) . Oh, and users with
 super privileges won't be logged.

 Thanks,

 Keith

 --

 Keith Murphy
 Senior MySQL DBA
 Principal Trainer
 Paragon Consulting Services
 http://www.paragon-cs.com
 850-637-3877







 --

 (c) 850-637-3877





Re: user last activity and log in

2012-10-05 Thread Johan De Meersman
- Original Message -
 From: Singer Wang w...@singerwang.com
 
 2) use a init-connect to log logins but that doesn't work for users
 with super privileges as Keith mentioned below (thanks Keith for actually
 trying to help!)

That is indeed quite the nifty trick. Thanks, Keith :-)

 3) Write your own plugin using the MySQL Plugin APIs
 4) use the McAfee Audit Plugin for MySQL (Free:

Hmm. Smells of McAfee, but looks interesting. Not sure what to think :-)


-- 
Linux Bier Wanderung 2012, now also available in Belgium!
August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be




user last activity and log in

2012-10-04 Thread Aastha
Hello,

I want to find the last time the given list of users logged in.
Is there any MySQL table from where I can retrieve the data, or any
specific SQL?


Aastha Gupta


Re: user last activity and log in

2012-10-04 Thread List Man
There is no such thing.  Your application has to deal with such info.

LS


On Oct 4, 2012, at 11:28 AM, Aastha wrote:

 Hello,
 
 I want to find the last time the given list of users logged in.
 Is there any mysql table from where i can retrieve the data or anyt
 specific sql
 
 
 Aastha Gupta





Re: user last activity and log in

2012-10-04 Thread Reindl Harald


Am 04.10.2012 17:28, schrieb Aastha:
 I want to find the last time the given list of users logged in.
 Is there any mysql table from where i can retrieve the data or any
 specific sql

no - because this would mean a WRITE QUERY in the mysql-database
for every connection - having a web-application with hundreds
of calls per second would kill the performance

this makes pretty much no sense and is NOT the job of an RDBMS
implement it in your application / db-abstraction-layer



signature.asc
Description: OpenPGP digital signature


Re: user last activity and log in

2012-10-04 Thread Singer Wang
It is possible in MySQL 5.6

S


On Thu, Oct 4, 2012 at 11:30 AM, List Man list@bluejeantime.com wrote:

 There is no such thing.  Your application has to deal with such info.

 LS


 On Oct 4, 2012, at 11:28 AM, Aastha wrote:

  Hello,
 
  I want to find the last time the given list of users logged in.
  Is there any mysql table from where i can retrieve the data or anyt
  specific sql
 
 
  Aastha Gupta


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: user last activity and log in

2012-10-04 Thread Johan De Meersman

- Original Message -
 From: Reindl Harald h.rei...@thelounge.net
 
 this makes pretty no sense and is NOT the job of a RDBMS
 implement it in your application / db-abstraction-layer

I notice no specification of what kind of users, so I'm assuming DB users. 
There *is* such a thing: you can find it in the general query log. Turning that 
on is a considerable performance overhead, though, and so is firmly discouraged 
on production systems.
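As a sketch of that option (the general query log and `log_output` are dynamic in 5.1+; treat this as an illustration of the idea, not a production recommendation, given the overhead noted above):

```sql
-- Route the general log to a table so it can be queried, then enable it:
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Connection events show up with command_type = 'Connect':
SELECT user_host, MAX(event_time) AS last_login
FROM mysql.general_log
WHERE command_type = 'Connect'
GROUP BY user_host;

-- Turn it off again once done; the overhead is considerable:
SET GLOBAL general_log = 'OFF';
```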


-- 
Linux Bier Wanderung 2012, now also available in Belgium!
August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be




Re: user last activity and log in

2012-10-04 Thread Aastha
Yes, i meant DB users.

On Thu, Oct 4, 2012 at 10:57 AM, Johan De Meersman vegiv...@tuxera.bewrote:


 - Original Message -
  From: Reindl Harald h.rei...@thelounge.net
 
  this makes pretty no sense and is NOT the job of a RDBMS
  implement it in your application / db-abstraction-layer

 I notice no specification of what kind of users, so I'm assuming DB users.
 There *is* such a thing: you can find it in the general query log. Turning
 that on is a considerable performance overhead, though, and so is firmly
 discouraged on production systems.


 --
 Linux Bier Wanderung 2012, now also available in Belgium!
 August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: user last activity and log in

2012-10-04 Thread Reindl Harald
it does not matter what kind of users

usually each application has its own database and its
own user, the application makes the connection and
can at this point log whatever you want

using the general query log can only be a bad joke
you will log EVERY query and not only logins

again: it is not the job of a RDBMS to waste I/O and
performance with such things - the application as
example could refresh it only once per user-session

the RDBMS would write blindly for each connection

Am 04.10.2012 18:18, schrieb Aastha:
 Yes, i meant DB users.
 
 On Thu, Oct 4, 2012 at 10:57 AM, Johan De Meersman vegiv...@tuxera.be 
 mailto:vegiv...@tuxera.be wrote:
 
 
 - Original Message -
  From: Reindl Harald h.rei...@thelounge.net 
 mailto:h.rei...@thelounge.net
 
  this makes pretty no sense and is NOT the job of a RDBMS
  implement it in your application / db-abstraction-layer
 
 I notice no specification of what kind of users, so I'm assuming DB 
 users. There *is* such a thing: you can
 find it in the general query log. Turning that on is a considerable 
 performance overhead, though, and so is
 firmly discouraged on production systems.



signature.asc
Description: OpenPGP digital signature


Re: user last activity and log in

2012-10-04 Thread Johan De Meersman
- Original Message -
 From: Reindl Harald rei...@thelounge.net
 
 it does not matter what kind of users

I'm happy for you that you still have all the answers anyone could ever want, 
Harald. Regardless of having any background knowledge on the circumstance of 
the question, even. You truly are a gifted individual.

 using the general query log can only be a bad joke
 you will log EVERY query and not only log-ins

Yes, which is why I specified explicitly that it is very much discouraged for 
production use.

However, it can be useful at times. I recently turned it on to investigate 
sudden, unpredictable and above all annoyingly brief peaks in the number of 
connections, and I needed to know what APPLICATION INSTANCE was responsible, 
not which particular user - as well as have a good view of what the offending 
sessions did. A tcpdump would have been an option, but given that wireshark 
still isn't too good at decoding MySQL traffic I still opted for the full query 
log. There was some more tomfoolery involved, but after almost a week of 
logging we successfully identified the culprit.

Now you may do things differently, and you may also reach a satisfactory 
solution; but I am absolutely sick and tired of hearing how your way is the 
only valid way, casually implying that the rest of the world are all bloody 
idiots that should just shut up and listen while you tell them every ridiculous 
way in which they are wrong and inferior. 

PLEASE, for your own sake - not to mention the nerves of the people around you 
- learn to accept that there are a lot of different ways to do things, and that 
sometimes people pick their optimal solution on quite different criteria than 
the ones you use. That does not necessarily make them wrong, merely different 
from you. As the Perl mantra goes, There's More Than One Way To Do It.

/Johan

-- 
Linux Bier Wanderung 2012, now also available in Belgium!
August, 12 to 19, Diksmuide, Belgium - http://lbw2012.tuxera.be




Re: user last activity and log in

2012-10-04 Thread Reindl Harald


Am 04.10.2012 23:12, schrieb Johan De Meersman:
 - Original Message -
 From: Reindl Harald rei...@thelounge.net

 it does not matter what kind of users
 
 I'm happy for you that you still have all the answers anyone could ever want, 
 Harald.

not all but the one to the topic

IT IS IMPOSSIBLE
MYSQL CANNOT DO WHAT THE OP WANTS

 Regardless of having any background knowledge on the circumstance of the 
 question, even.

MySQL cannot and will not log user logins

 You truly are a gifted individual.

your opinion, but the answer to the question of the OP
is simply NO you can't

 using the general query log can only be a bad joke
 you will log EVERY query and not only log-ins
 
 Yes, which is why I specified explicitly that it is very much discouraged for 
 production use.

it is NOT the answer to the question, damn it
it does NOT log the last login of a MySQL user in a USABLE form

 Now you may do things differently, and you may also reach a satisfactory 
 solution; 
 but I am absolutely sick and tired of hearing how your way is the only valid 
 way

i don't give a damn about what you are tired of

the answer to "I want to find the last time the given list of users
logged in. Is there any mysql table from where i can retrieve the data"
is SIMPLY NO, and not a useless full query log

 casually implying that the rest of the world are all bloody idiots 

maybe in this case your conclusion, that I would have liked to call you
a bloody idiot for bringing the full query log as an answer,
comes because you realized how useless the idea is

 that should just shut up and listen while you tell them every ridiculous 
 way in which they are wrong and inferior

maybe you should shut up yourself as long
as you are hypersensitive

 learn to accept that there are a lot of different ways to do things

again: your solution full query log is not one
if you can't face the truth this is your problem

 and that sometimes people pick their optimal solution on quite different 
 criteria than the ones you use. 

if someone does not like answers he should not ask questions

 There's More Than One Way To Do It.

the full query log is none of them

if it takes SIX hours for your reply in the way you did here
my conclusion is that you recently came home drunk and should
go to bed



signature.asc
Description: OpenPGP digital signature


Re: user last activity and log in

2012-10-04 Thread Claudio Nanni
Hi,

2012/10/4 Reindl Harald h.rei...@thelounge.net



 Am 04.10.2012 17:28, schrieb Aastha:
  I want to find the last time the given list of users logged in.
  Is there any mysql table from where i can retrieve the data or any
  specific sql

 no - because this would mean a WRITE QUERY in the mysql-database
 for every connection - having a web-application with hundrets
 of calls per second would kill the performance

 No, because MySQL does not have this facility (until 5.6).
Saying that a feature is not present because the hypothetical
implementation would impact performance doesn't make much sense in my
opinion.


 this makes pretty no sense and is NOT the job of a RDBMS
 implement it in your application / db-abstraction-layer


I can suggest a reading here:
http://www.amazon.com/Implementing-Database-Security-Auditing-Examples/dp/183342

Regards
-- 
Claudio


RE: user last activity and log in

2012-10-04 Thread Rick James
In looking at a couple hundred machines, I see that
  Connections / Uptime
has a median of about 0.5 (one connection every 2 seconds)
and a max of about 140.

140 writes to some audit table _might_ have a small impact on the system.

 -Original Message-
 From: Claudio Nanni [mailto:claudio.na...@gmail.com]
 Sent: Thursday, October 04, 2012 3:51 PM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: user last activity and log in
 
 Hi,
 
 2012/10/4 Reindl Harald h.rei...@thelounge.net
 
 
 
  Am 04.10.2012 17:28, schrieb Aastha:
   I want to find the last time the given list of users logged in.
   Is there any mysql table from where i can retrieve the data or any
   specific sql
 
  no - because this would mean a WRITE QUERY in the mysql-database for
  every connection - having a web-application with hundrets of calls
 per
  second would kill the performance
 
  No because MySQL does not have this facility. (5.6)
 Saying that a feature is not present because the hypothetical
 implementation would impact performance doesn't make much sense in my
 opinion.
 
 
  this makes pretty no sense and is NOT the job of a RDBMS implement it
  in your application / db-abstraction-layer
 
 
 I can suggest a reading here:
 http://www.amazon.com/Implementing-Database-Security-Auditing-
 Examples/dp/183342
 
 Regards
 --
 Claudio




Re: user last activity and log in

2012-10-04 Thread Reindl Harald
besides the fact that MySQL CANNOT do this at all

the median is not really relevant:
the median also includes night hours with zero load

on a typical webserver under load you have much more

* a CMS system
* many page requests per second
* no, you cannot use persistent connections if you have,
  let's say, 100 databases and 100 domains with 500 prefork
  processes, because that would mean in the worst case 50,000
  connections
* enabling the query log on machines with some hundred queries
  per second would be a self-DoS and fill your disks

Am 05.10.2012 01:26, schrieb Rick James:
 In looking at a couple hundred machine, I see that
 Connections / Uptime
 has a median of about 0.5 (one connection every 2 seconds)
 and a max of about 140.
 
 140 writes to some audit table _might_ have a small impact on the system.
 
 -Original Message-
 From: Claudio Nanni [mailto:claudio.na...@gmail.com]
 Sent: Thursday, October 04, 2012 3:51 PM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: user last activity and log in

 Hi,

 2012/10/4 Reindl Harald h.rei...@thelounge.net



 Am 04.10.2012 17:28, schrieb Aastha:
 I want to find the last time the given list of users logged in.
 Is there any mysql table from where i can retrieve the data or any
 specific sql

 no - because this would mean a WRITE QUERY in the mysql-database for
 every connection - having a web-application with hundrets of calls
 per
 second would kill the performance

 No because MySQL does not have this facility. (5.6)
 Saying that a feature is not present because the hypothetical
 implementation would impact performance doesn't make much sense in my
 opinion.


 this makes pretty no sense and is NOT the job of a RDBMS implement it
 in your application / db-abstraction-layer


 I can suggest a reading here:
 http://www.amazon.com/Implementing-Database-Security-Auditing-
 Examples/dp/183342



signature.asc
Description: OpenPGP digital signature


Re: user last activity and log in

2012-10-04 Thread Keith Murphy
My friend Dave Holoboff wrote this up some time ago:

http://mysqlhints.blogspot.com/2011/01/how-to-log-user-connections-in-mysql.html

You know you people sound like children.

Really  unprofessional.

Go ahead --- call me names. i left middle school almost 30 years ago. It
won't bother me.

Can we knock off the name calling and actually offer advice and possible
solutions? I thought that was what this list was for.

For those of us out in the field doing things ... This might be your
ticket. It requires a restart of MySQL (which may or may not be acceptable)
but it's a fairly clean solution.

Minimal load, easy to query for your last connection time and how often
connections are made by a user.

Again, requires a restart to enable (and disable) . Oh, and users with
super privileges won't be logged.

Thanks,

Keith

--

Keith Murphy
Senior MySQL DBA
Principal Trainer
Paragon Consulting Services
http://www.paragon-cs.com
850-637-3877









Re: user last activity and log in

2012-10-04 Thread Keith Murphy
One small correction. Init-connect doesn't require a restart of MySQL.  I
was thinking of init-file. So that's even better.
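A minimal sketch of the init-connect trick from the linked post (the schema and table names here are illustrative, not taken from the post):

```sql
-- One-time setup: a table that receives one row per new connection
CREATE DATABASE IF NOT EXISTS audit;
CREATE TABLE audit.connection_log (
  connect_time  DATETIME     NOT NULL,
  user_host     VARCHAR(128) NOT NULL,
  connection_id BIGINT       NOT NULL
);

-- Executed by the server for every new connection (SUPER users excepted):
SET GLOBAL init_connect =
  "INSERT INTO audit.connection_log VALUES (NOW(), CURRENT_USER(), CONNECTION_ID())";

-- Last login per account:
SELECT user_host, MAX(connect_time) AS last_login
FROM audit.connection_log
GROUP BY user_host;
```

Note that each connecting account needs the INSERT privilege on the audit table: a failing init_connect statement causes the connection to be dropped.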



On Thursday, October 4, 2012, Keith Murphy wrote:

 My friend Dave Holoboff wrote this up some time ago:


 http://mysqlhints.blogspot.com/2011/01/how-to-log-user-connections-in-mysql.html

 You know you people sound like children.

 Really  unprofessional.

 Go ahead --- call me names. i left middle school almost 30 years ago. It
 won't bother me.

 Can we knock off the name calling and actually offer advice and possible
 solutions? I thought that was what this list was for.

 For those of us out in the field doing things ... This might be your
 ticket. It requires a restart of MySQL (which may or may not be acceptable)
 but it's a fairly clean solution.

 Minimal load, easy to query for your last connection time and how often
 connections are made by a user.

 Again, requires a restart to enable (and disable) . Oh, and users with
 super privileges won't be logged.

 Thanks,

 Keith

 --

 Keith Murphy
 Senior MySQL DBA
 Principal Trainer
 Paragon Consulting Services
 http://www.paragon-cs.com
 850-637-3877










-- 

Keith Murphy
Senior MySQL DBA
Principal Trainer
Paragon Consulting Services
http://www.paragon-cs.com
850-637-3877


Re: Understanding Slow Query Log

2012-09-05 Thread Manuel Arostegui
2012/9/5 Adarsh Sharma eddy.ada...@gmail.com

 Actually that query is not my concern :

 I have a query that is taking a lot of time.
 Slow Log Output:
 # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
 # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
 # Attribute  total min max avg 95%  stddev  median
 #  === === === === === === ===
 # Exec time 80887s   192us   2520s415s   1732s612s 80s
 # Lock time   13ms   0   133us68us   103us23us69us
 # Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
 # Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
 # Query size65.45k   6 577  343.70  563.87  171.06  246.02

 In the logs output :
 # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
  Rows_examined: 471150
 SET timestamp=1346655789;
 SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
 t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
 t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
 ORDER BY t0.created_time DESC LIMIT 0, 50;

 The table is near about 30 GB and growing day by day.


Just out of curiosity, is that table too fragmented? 471k rows is quite a
lot, but 488 seconds of query time is insane. It seems you're reading from
disk too much!



 Attaching the table definition & indexes output. I have an index on the
 bean_type column but can't understand why it
 examined all the rows of the table.


Where's the table's schema so we can give it a try?

Manu


Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
I already attached it to the list.

Attaching one more time, thanks for the interest.

Cheers

On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui man...@tuenti.com wrote:



 2012/9/5 Adarsh Sharma eddy.ada...@gmail.com

 Actually that query is not my concern :

 i have a query that is taking so much time :
 Slow Log Output :
 # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
 # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
 # Attribute  total min max avg 95%  stddev  median
 #  === === === === === === ===
 # Exec time 80887s   192us   2520s415s   1732s612s 80s
 # Lock time   13ms   0   133us68us   103us23us69us
 # Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
 # Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
 # Query size65.45k   6 577  343.70  563.87  171.06  246.02

 In the logs output :
 # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
  Rows_examined: 471150
 SET timestamp=1346655789;
 SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
 t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
 t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
 ORDER BY t0.created_time DESC LIMIT 0, 50;

 The table is near about 30 GB and growing day by day.


 Just out curiosity, is that table too fragmented? 471k rows are quite a
 lot, but 488 of query time is insane. Seems you're reading from disk too
 much!



 Attaching the table definition  indexes output. I have a index on bean
 type column but cann't understand why it
 examined the all rows of table.


 Where's the table's schema so we can give it a try?

 Manu



Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
true Michael, pasting the output :

CREATE TABLE `WF_1` (
  `id` varchar(255) NOT NULL,
  `app_name` varchar(255) DEFAULT NULL,
  `app_path` varchar(255) DEFAULT NULL,
  `conf` text,
  `group_name` varchar(255) DEFAULT NULL,
  `parent_id` varchar(255) DEFAULT NULL,
  `run` int(11) DEFAULT NULL,
  `user_name` varchar(255) DEFAULT NULL,
  `bean_type` varchar(31) DEFAULT NULL,
  `auth_token` text,
  `created_time` datetime DEFAULT NULL,
  `end_time` datetime DEFAULT NULL,
  `external_id` varchar(255) DEFAULT NULL,
  `last_modified_time` datetime DEFAULT NULL,
  `log_token` varchar(255) DEFAULT NULL,
  `proto_action_conf` text,
  `sla_xml` text,
  `start_time` datetime DEFAULT NULL,
  `status` varchar(255) DEFAULT NULL,
  `wf_instance` mediumblob,
  PRIMARY KEY (`id`),
  KEY `I_WF_1_DTYPE` (`bean_type`),
  KEY `I_WF_1_END_TIME` (`end_time`),
  KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
  KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
  KEY `I_WF_1_STATUS` (`status`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |


show indexes from WF_1;
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
| WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+


Thanks

On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman mdyk...@gmail.com wrote:

 The attachments do not appear to be coming through.

 I am more curious what the cardinality of bean_type is.  What is the
 result of
 select count(*) as cnt, bean_type from WF_1 group by bean_type ?

 Low cardinality can render an index useless.

 On 2012-09-05 5:19 AM, Adarsh Sharma eddy.ada...@gmail.com wrote:

 I already attached the list.

 Attaching one more time  thanks for the interest.

 Cheers



 On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui man...@tuenti.com
 wrote:
 
 
 
  2012/9/5 Adar...



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
* SHOW VARIABLES LIKE 'innodb%';  -- some of them may be hurting performance.

* More than 20% of the table has bean_type = 'Workflow'?  -- if so, it is more 
efficient to do a table scan than to use the index.

* KEY `I_WF_1_DTYPE` (`bean_type`) --> KEY `bean_time` (`bean_type`, `created_time`):
a compound index may be your cure.

* Fields with low cardinality (bean_type, status) make very poor INDEXes.

* Consider using an ENUM instead of VARCHAR for status and bean_type, (and 
others?)

* VARCHAR(255) is an awful PRIMARY KEY.  The PK is included implicitly (in 
InnoDB) in every secondary key.

* LIMIT 0, 50 -- are you doing pagination via OFFSET?  Bad idea.

Lots more about these topics is discussed in similar questions in
http://forums.mysql.com/list.php?24
Lots more tips here:
http://mysql.rjweb.org/doc.php/ricksrots
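Rick's compound-index suggestion, sketched against the WF_1 table posted earlier in the thread (the index name follows his proposal; treat this as a sketch, not a tested fix):

```sql
-- Covers the WHERE (bean_type) and the ORDER BY (created_time) together:
ALTER TABLE WF_1 ADD INDEX bean_time (bean_type, created_time);

-- The slow query can then read the index in created_time order
-- and stop after 50 rows instead of examining ~471k:
SELECT id, app_name, status, run, user_name, group_name, created_time,
       start_time, last_modified_time, end_time, external_id
FROM WF_1
WHERE bean_type = 'Workflow'
ORDER BY created_time DESC
LIMIT 0, 50;
```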


 -Original Message-
 From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
 Sent: Wednesday, September 05, 2012 11:27 AM
 To: Michael Dykman
 Cc: mysql@lists.mysql.com
 Subject: Re: Understanding Slow Query Log
 
 true Michael, pasting the output :
 
 CREATE TABLE `WF_1` (
   `id` varchar(255) NOT NULL,
   `app_name` varchar(255) DEFAULT NULL,
   `app_path` varchar(255) DEFAULT NULL,
   `conf` text,
   `group_name` varchar(255) DEFAULT NULL,
   `parent_id` varchar(255) DEFAULT NULL,
   `run` int(11) DEFAULT NULL,
   `user_name` varchar(255) DEFAULT NULL,
   `bean_type` varchar(31) DEFAULT NULL,
   `auth_token` text,
   `created_time` datetime DEFAULT NULL,
   `end_time` datetime DEFAULT NULL,
   `external_id` varchar(255) DEFAULT NULL,
   `last_modified_time` datetime DEFAULT NULL,
   `log_token` varchar(255) DEFAULT NULL,
   `proto_action_conf` text,
   `sla_xml` text,
   `start_time` datetime DEFAULT NULL,
   `status` varchar(255) DEFAULT NULL,
   `wf_instance` mediumblob,
   PRIMARY KEY (`id`),
   KEY `I_WF_1_DTYPE` (`bean_type`),
   KEY `I_WF_1_END_TIME` (`end_time`),
   KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
   KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
   KEY `I_WF_1_STATUS` (`status`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
 
 
 show indexes from WF_1;
 +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
 | Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
 +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
 | WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
 | WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
 | WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
 | WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
 | WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
 | WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
 +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
 
 
 Thanks
 
 On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman mdyk...@gmail.com
 wrote:
 
  The attachments do not appear to be coming through.
 
  I am more curious what the cardinality of bean_type is.  What is the
  result of select count(*) as cnt, bean_type from WS_1 group by
  bean_type  ?
 
  Low cardinality can render an index useless.
 
  On 2012-09-05 5:19 AM, Adarsh Sharma eddy.ada...@gmail.com wrote:
 
  I already attached the list.
 
  Attaching one more time  thanks for the interest.
 
  Cheers
 
 
 
  On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui man...@tuenti.com
  wrote:
  
  
  
   2012/9/5 Adar...
 
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 




Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Ok, this raises a question for me - what's a better way to do pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing pagination via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code.
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
Remember where you left off.

Your Next button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 
250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a Next page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
   GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a crawler (eg, search engine) brought 
a site to its knees because of pagination via OFFSET.

Pagination via OFFSET is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  
By the time the entire list has been paged through, about N*N/2 units of work 
have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !
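The two approaches side by side, as a sketch (table and column names are illustrative):

```sql
-- OFFSET pagination: the server builds and discards 250 rows
-- before returning the 50 requested; Order(N) per page:
SELECT id, title FROM items ORDER BY id LIMIT 250, 50;

-- Keyset pagination: seek directly past the last id already shown;
-- Order(1) per page:
SELECT id, title
FROM items
WHERE id > 12345          -- last id from the previous page
ORDER BY id
LIMIT 51;                 -- 50 for the page + 1 to know there is a Next page
```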

 -Original Message-
 From: Andy Wallace [mailto:awall...@ihouseweb.com]
 Sent: Wednesday, September 05, 2012 2:05 PM
 To: mysql@lists.mysql.com
 Subject: Re: Understanding Slow Query Log
 
 Ok, this raises a question for me - what's a better way to do
 pagination?
 
 On 9/5/12 2:02 PM, Rick James wrote:
  * LIMIT 0, 50 -- are you doing pagination via OFFSET?  Bad idea.
 
 
 
 --
 Andy Wallace
 iHOUSEweb, Inc.
 awall...@ihouseweb.com
 (866) 645-7700 ext 219
 --
 Sometimes it pays to stay in bed on Monday, rather than spending the
 rest of the week debugging Monday's code.
 - Christopher Thompson
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Thanks, Rick - definitely something to think about. I've been
troubled by the pagination stuff in our code. This looks like
something I can definitely use!

andy

On 9/5/12 2:40 PM, Rick James wrote:

Remember where you left off.

Your Next button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 
250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a Next page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a crawler (eg, search engine) brought a site to 
its knees because of pagination via OFFSET.

Pagination via OFFSET is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  By the 
time the entire list has been paged through, about N*N/2 units of work have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Wednesday, September 05, 2012 2:05 PM
To: mysql@lists.mysql.com
Subject: Re: Understanding Slow Query Log

Ok, this raises a question for me - what's a better way to do
pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing pagination via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
Sometimes it pays to stay in bed on Monday, rather than spending the
rest of the week debugging Monday's code.
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code.
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-04 Thread Rick James
100 is tantamount to turning off the log.  I prefer 2.
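As a sketch, the threshold can be lowered without a restart (long_query_time is a dynamic variable; the new value applies to subsequent connections):

```sql
SET GLOBAL long_query_time = 2;
-- and persist it in my.cnf under [mysqld]:
--   long_query_time = 2
```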

select  count(ENTITY_NAME)
from  ALERT_EVENTS
where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
  and  status=upper('failed')
  and  ENTITY_NAME='FETL-ImpressionRC-conversion';
begs for the _compound_ index
INDEX(ENTITY_NAME, EVENT_TIME)
This would be even better:
INDEX(ENTITY_NAME, status, EVENT_TIME)

COUNT(*) should be used if you don't need to check the column for being NULL.

 Rows_examined: 141145
That is probably the entire table.  Will the table grow?  If so, the query will 
get slower.  Meanwhile, the index I suggested will (probably) be much faster.
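The suggested compound index could be added like this (a sketch; the index name is made up, the columns come from the query quoted above):

```sql
ALTER TABLE ALERT_EVENTS
  ADD INDEX idx_entity_status_time (ENTITY_NAME, status, EVENT_TIME);

-- COUNT(*) avoids the per-row NULL test on ENTITY_NAME, and the two
-- equality columns plus the range column match the index order
SELECT COUNT(*)
FROM ALERT_EVENTS
WHERE ENTITY_NAME = 'FETL-ImpressionRC-conversion'
  AND status = UPPER('failed')
  AND EVENT_TIME > DATE_SUB(NOW(), INTERVAL 60 MINUTE);
```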


 -Original Message-
 From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
 Sent: Saturday, September 01, 2012 1:03 AM
 To: Adarsh Sharma
 Cc: mysql@lists.mysql.com
 Subject: Re: Understanding Slow Query Log
 
 Disable log-queries-not-using-indexes to log only queries > 100 sec.
 
 Just do > /var/lib/mysql/slow-queries.log; it will clear the log.
 
 On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
 eddy.ada...@gmail.comwrote:
 
  Hi all,
 
  I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow
  query log by setting below parameters in my.cnf :
 
  log-slow-queries=/usr/local/mysql/slow-query.log
  long_query_time=100
  log-queries-not-using-indexes
 
  I am assuming from the inf. from the internet that long_query_time is
  in seconds , but i see the slow query log ,  there are lots of
  statements ( queries ) :
 
  # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
  Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1
 Rows_examined:
  141145
  SET timestamp=1346409734;
  select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
  date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
  ENTITY_NAME='FETL-ImpressionRC-conversion';
  # Time: 120831 10:43:14
  # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
  Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1
 Rows_examined:
  141145
  SET timestamp=1346409794;
  select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
  date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
  ENTITY_NAME='FETL-click-enhancer-deferred';
  # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111] #
  Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1
 Rows_examined:
  141145
  SET timestamp=1346409794;
  select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
  date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
  ENTITY_NAME='FETL-ImpressionRC-conversion';
  # Time: 120831 10:43:22
  # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111] #
  Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1
 Rows_examined:
  13
 
 
 
  I don't  understand the query time unit in slow query log because i
  expect queries to be logged that takes > 100 s. I tested with sleep
  command for 60s , it doesn't logged in slow query log and when i
 sleep
  for 120 s it logged but i don't why the other queries are logging in
 slow log.
 
  # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1
 Rows_examined:
  0
  SET timestamp=1346443103;
  SELECT SLEEP(120);
 
  And also my slow log is increasing and decided to purge thorogh below
  command :
 
  cat /dev/null > /var/lib/mysql/slow-queries.log
 
 
  Anyone any ideas about this.
 
 
  Thanks
 
 
 
 
 --
 Thanks
 Suresh Kuna
 MySQL DBA

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Understanding Slow Query Log

2012-09-04 Thread Adarsh Sharma
Actually that query is not my concern:

I have a query that is taking a lot of time.
Slow log output:
# Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
# Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
# Attribute  total min max avg 95%  stddev  median
#  === === === === === === ===
# Exec time 80887s   192us   2520s415s   1732s612s 80s
# Lock time   13ms   0   133us68us   103us23us69us
# Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
# Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
# Query size65.45k   6 577  343.70  563.87  171.06  246.02

In the logs output :
# Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50  Rows_examined:
471150
SET timestamp=1346655789;
SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name, t0.group_name,
t0.created_time, t0.start_time, t0.last_modified_time, t0.end_time,
t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow' ORDER BY
t0.created_time DESC LIMIT 0, 50;

The table is nearly 30 GB and growing day by day.

Attaching the table definition & indexes output. I have an index on the
bean_type column but can't understand why it
examined all the rows of the table.

Thanks



On Wed, Sep 5, 2012 at 12:24 AM, Rick James rja...@yahoo-inc.com wrote:

 100 is tantamount to turning off the log.  I prefer 2.

 select  count(ENTITY_NAME)
 from  ALERT_EVENTS
 where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
   and  status=upper('failed')
   and  ENTITY_NAME='FETL-ImpressionRC-conversion';
 begs for the _compound_ index
 INDEX(ENTITY_NAME, EVENT_TIME)
 This would be even better:
 INDEX(ENTITY_NAME, status, EVENT_TIME)

 COUNT(*) should be used if you don't need to check the column for being
 NULL.

  Rows_examined: 141145
 That is probably the entire table.  Will the table grow?  If so, the query
 will get slower.  Meanwhile, the index I suggested will (probably) be much
 faster.


  -Original Message-
  From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
  Sent: Saturday, September 01, 2012 1:03 AM
  To: Adarsh Sharma
  Cc: mysql@lists.mysql.com
  Subject: Re: Understanding Slow Query Log
 
  Disable log-queries-not-using-indexes to log only queries > 100 sec.
 
  Just do > /var/lib/mysql/slow-queries.log; it will clear the log.
 
  On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
  eddy.ada...@gmail.comwrote:
 
   Hi all,
  
   I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow
   query log by setting below parameters in my.cnf :
  
   log-slow-queries=/usr/local/mysql/slow-query.log
   long_query_time=100
   log-queries-not-using-indexes
  
   I am assuming from the inf. from the internet that long_query_time is
   in seconds , but i see the slow query log ,  there are lots of
   statements ( queries ) :
  
   # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
   Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1
  Rows_examined:
   141145
   SET timestamp=1346409734;
   select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
   date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
   ENTITY_NAME='FETL-ImpressionRC-conversion';
   # Time: 120831 10:43:14
   # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
   Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1
  Rows_examined:
   141145
   SET timestamp=1346409794;
   select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
   date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
   ENTITY_NAME='FETL-click-enhancer-deferred';
   # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111] #
   Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1
  Rows_examined:
   141145
   SET timestamp=1346409794;
   select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
   date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
   ENTITY_NAME='FETL-ImpressionRC-conversion';
   # Time: 120831 10:43:22
   # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111] #
   Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1
  Rows_examined:
   13
  
  
  
   I don't  understand the query time unit in slow query log because i
   expect queries to be logged that takes > 100 s. I tested with sleep
   command for 60s , it doesn't logged in slow query log and when i
  sleep
   for 120 s it logged but i don't why the other queries are logging in
  slow log.
  
   # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1
  Rows_examined:
   0
   SET timestamp=1346443103;
   SELECT SLEEP(120);
  
   And also my slow log is increasing and decided to purge thorogh below
   command :
  
   cat /dev/null > /var/lib/mysql/slow-queries.log
  
  
   Anyone any ideas about this.
  
  
   Thanks
  
 
 
 
  --
  Thanks
  Suresh Kuna
  MySQL DBA



Understanding Slow Query Log

2012-09-01 Thread Adarsh Sharma
Hi all,

I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the slow query log
by setting the below parameters in my.cnf:

log-slow-queries=/usr/local/mysql/slow-query.log
long_query_time=100
log-queries-not-using-indexes

I am assuming, from info found on the internet, that long_query_time is in
seconds, but when I look at the slow query log, there are lots of statements
(queries):

# User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409734;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-ImpressionRC-conversion';
# Time: 120831 10:43:14
# User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409794;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-click-enhancer-deferred';
# User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409794;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-ImpressionRC-conversion';
# Time: 120831 10:43:22
# User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13



I don't understand the query time unit in the slow query log, because I expect
only queries that take > 100 s to be logged. I tested with the sleep command:
for 60 s it wasn't logged in the slow query log, and for 120 s it was, but I
don't know why the other queries are being logged in the slow log.

# Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
SET timestamp=1346443103;
SELECT SLEEP(120);

Also, my slow log keeps growing, so I decided to purge it with the below
command:

cat /dev/null > /var/lib/mysql/slow-queries.log


Anyone have any ideas about this?


Thanks


Re: Understanding Slow Query Log

2012-09-01 Thread yoku ts
Hi

That's because those queries don't use an index.
log-queries-not-using-indexes logs them even if the query time is less than
long_query_time.

http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_log-queries-not-using-indexes
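In other words, with the settings from the original post, dropping that one option leaves only genuinely slow queries in the log. A sketch of the my.cnf section, keeping the original long_query_time:

```ini
# my.cnf -- slow query log without the not-using-indexes noise
log-slow-queries = /usr/local/mysql/slow-query.log
long_query_time  = 100
# log-queries-not-using-indexes   # disabled: it logs fast queries too
```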

regards,

yoku

2012/9/1 Adarsh Sharma eddy.ada...@gmail.com:
 Hi all,

 I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow query log
 by setting below parameters in my.cnf :

 log-slow-queries=/usr/local/mysql/slow-query.log
 long_query_time=100
 log-queries-not-using-indexes

 I am assuming from the inf. from the internet that long_query_time is in
 seconds , but i see the slow query log ,  there are lots of statements (
 queries ) :

 # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409734;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-ImpressionRC-conversion';
 # Time: 120831 10:43:14
 # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409794;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-click-enhancer-deferred';
 # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409794;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-ImpressionRC-conversion';
 # Time: 120831 10:43:22
 # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13



 I don't  understand the query time unit in slow query log because i expect
 queries to be logged that takes > 100 s. I tested with sleep command for
 60s , it doesn't logged in slow query log and when i sleep for 120 s it
 logged but i don't why the other queries are logging in slow log.

 # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
 SET timestamp=1346443103;
 SELECT SLEEP(120);

 And also my slow log is increasing and decided to purge thorogh below
 command :

 cat /dev/null > /var/lib/mysql/slow-queries.log


 Anyone any ideas about this.


 Thanks




Re: Understanding Slow Query Log

2012-09-01 Thread Suresh Kuna
Disable log-queries-not-using-indexes to log only queries > 100 sec.

Just do > /var/lib/mysql/slow-queries.log; it will clear the log.

On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma eddy.ada...@gmail.comwrote:

 Hi all,

 I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow query log
 by setting below parameters in my.cnf :

 log-slow-queries=/usr/local/mysql/slow-query.log
 long_query_time=100
 log-queries-not-using-indexes

 I am assuming from the inf. from the internet that long_query_time is in
 seconds , but i see the slow query log ,  there are lots of statements (
 queries ) :

 # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409734;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-ImpressionRC-conversion';
 # Time: 120831 10:43:14
 # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409794;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-click-enhancer-deferred';
 # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
 141145
 SET timestamp=1346409794;
 select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
 date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
 ENTITY_NAME='FETL-ImpressionRC-conversion';
 # Time: 120831 10:43:22
 # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
 # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13



 I don't  understand the query time unit in slow query log because i expect
 queries to be logged that takes > 100 s. I tested with sleep command for
 60s , it doesn't logged in slow query log and when i sleep for 120 s it
 logged but i don't why the other queries are logging in slow log.

 # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined:
 0
 SET timestamp=1346443103;
 SELECT SLEEP(120);

 And also my slow log is increasing and decided to purge thorogh below
 command :

 cat /dev/null > /var/lib/mysql/slow-queries.log


 Anyone any ideas about this.


 Thanks




-- 
Thanks
Suresh Kuna
MySQL DBA


RE: using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-03 Thread Brown, Charles
Hello Nitin,

Please give Nitin a prize. What a quiet genius she is. Now, I get it. Now, I 
can see clearly.
I’ve tried it and it worked.

Thanks so much.

From: Nitin Mehta [mailto:ntn...@yahoo.com]
Sent: Wednesday, May 02, 2012 11:25 PM
To: Brown, Charles
Cc: mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

Hi Charles,
I believe you would already have bin-log configured, is that right? If not, you 
need to.
Secondly, If you think the bin-log generated for the entire stack of 
databases/schemas is too big, you may want to restrict it using binlog-do-db 
BUT that may create problem if you have any DMLs which run in the below manner:
mysql> use db1;
mysql> update db2.tb2 ...
You may be surprised to find that this update will not be recorded in master's 
bin-log if you have set binlog-do-db=db2.
To sum it up, if you're not very sure, simply enable bin-log on master and use 
replicate-do-table on slave. So in one word response to your question, no!
Hope that helps!
Also, please reply to all so the people, helping you, know about the status of 
your problem.


Regards,

From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com
Sent: Thursday, May 3, 2012 9:27 AM
Subject: RE: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

Hello Nitin,
Help Me!  Using your approach, do I have to put anything on  the master 
“my.cnf”.

Thanks


From: Nitin Mehta [mailto:ntn...@yahoo.com]
Sent: Wednesday, May 02, 2012 10:51 PM
To: Brown, Charles; Rick James; a.sm...@ukgrid.net;
mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

I guess it only makes things even simpler.

You should use replicate-do-table=db1.tb1 and so on in the slave's cnf to ensure
that only this particular table gets replicated. However, all the databases on 
master will get bin-logged and you may or may not want to do that.

So making it even simpler:

on the slave:

replicate-do-table=db1.tb1
replicate-do-table=db2.tb2
replicate-do-table=db3.tb3
You might want to have a look at :
http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_replicate-do-table
http://dev.mysql.com/doc/refman/5.5/en/replication-rules.html

Hope this helps!

From: Brown, Charles cbr...@bmi.com
To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net;
mysql@lists.mysql.com
Sent: Thursday, May 3, 2012 8:51 AM
Subject: using the bin-log approach on the master side, how can I accomplish my 
replication objectives

Hello Rick,

I think I'll go with your recommendations - but help me here a little bit. 
Because all look fuzzy like a brain surgery. So make it simple and clean for me.



For the sake of simplicity, here is what my master has:

Database: db1

Tables: db1tb1, db1tb2, db1tb3



Database: db2

Tables: db2tb1, db2tb2, db2tb3



Database: db3

Tables: db3tb1, db3tb2, db3tb3



Now, I would like to replicate only these tables that belong to respective 
databases:

db1tb1, db2tb2, and db3tb3



My question is: using the bin-log approach on the master side, how can I 
accomplish my replication objectives? Help me



Best regards and thanks








This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.






master BIN-LOG maintenance

2012-05-03 Thread Brown, Charles
Hi  Gurus,

I would like to set up a BIN-LOG maintenance procedure for my master. The master is
on a Windows platform. I'm all for making it simple and clean, therefore I've been
leaning toward the automatic BIN-LOG removal “expire-logs-days=7”. The
problem is that for this option to work, it should be preceded by a “BIN-LOG FLUSH”
command. Okay, where do I go from here in order to make this work? Please help
me.

Thanks






Re: master BIN-LOG maintenance

2012-05-03 Thread Nitin Mehta
Hi Charles,
 
I guess your application doesn't generate too much of binary logs. The 
parameter expire-logs-days kicks in at the flush but does not necessarily 
require a manual flush logs command. You can reduce the value of 
max_binlog_size to make sure that at least one new file is created daily and 
this will force mysql to check and delete old bin-logs. The default (and 
maximum) value for this variable is 1G. If I'm not wrong, it is a dynamic 
variable, which means you can try different values without a restart of the 
database until you find a suitable one.
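Since max_binlog_size is dynamic, a new value can be tried without restarting the server (the 256 MB value here is just an example):

```sql
SET GLOBAL max_binlog_size = 268435456;  -- 256 MB, applies to new binlogs
SHOW VARIABLES LIKE 'max_binlog_size';
```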
 
Hope that helps!
 
Regards,
 


 From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com 
Cc: mysql@lists.mysql.com mysql@lists.mysql.com 
Sent: Thursday, May 3, 2012 4:24 PM
Subject: master BIN-LOG maintenance
  
Hi  Gurus,

I would like to set BIN-LOG maintenance procedure for my master. The master is 
on a windows platform. I’m all for make it simple and clean therefore I’ve been 
leaning toward   the automatic BIN-LOG removal “expire-logs-days=7”. The 
problem is for this option to work, it should be preceded by a “BIN-LOG FLUSH” 
command. Okay, where do I go from here in order to make this work. Please help 
me.

Thanks





Re: using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-03 Thread Nitin Mehta
Thanks for your kind words, Charles! It comes easily to you when you have 
experienced it. I'm glad it worked for you.

 


 From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com 
Cc: mysql@lists.mysql.com mysql@lists.mysql.com 
Sent: Thursday, May 3, 2012 3:17 PM
Subject: RE: using the bin-log approach on the master side, how can I 
accomplish my replication objectives
  
Hello Nitin,

Please give Nitin a prize. What a quiet genius she is. Now, I get it. Now, I 
can see clearly.
I’ve tried it and it worked.

Thanks so much.

From: Nitin Mehta [mailto:ntn...@yahoo.com]
Sent: Wednesday, May 02, 2012 11:25 PM
To: Brown, Charles
Cc: mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

Hi Charles,
I believe you would already have bin-log configured, is that right? If not, you 
need to.
Secondly, If you think the bin-log generated for the entire stack of 
databases/schemas is too big, you may want to restrict it using binlog-do-db 
BUT that may create problem if you have any DMLs which run in the below manner:
mysql> use db1;
mysql> update db2.tb2 ...
You may be surprised to find that this update will not be recorded in master's 
bin-log if you have set binlog-do-db=db2.
To sum it up, if you're not very sure, simply enable bin-log on master and use 
replicate-do-table on slave. So in one word response to your question, no!
Hope that helps!
Also, please reply to all so the people, helping you, know about the status of 
your problem.


Regards,

From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com
Sent: Thursday, May 3, 2012 9:27 AM
Subject: RE: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

Hello Nitin,
Help Me!  Using your approach, do I have to put anything on  the master 
“my.cnf”.

Thanks


From: Nitin Mehta [mailto:ntn...@yahoo.com]
Sent: Wednesday, May 02, 2012 10:51 PM
To: Brown, Charles; Rick James; a.sm...@ukgrid.net;
mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I 
accomplish my replication objectives

I guess, it only makes things even more simple.

You should use replicate-do-table=db1.tb1 and so one in slave's cnf to ensure 
that only this particular table gets replicated. However, all the databases on 
master will get bin-logged and you may or may not want to do that.

So making it even simpler:

on the slave:

replicate-do-table=db1.tb1
replicate-do-table=db2.tb2
replicate-do-table=db3.tb3
You might want to have a look at :
http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_replicate-do-table
 
http://dev.mysql.com/doc/refman/5.5/en/replication-rules.html 

Hope this helps!

From: Brown, Charles cbr...@bmi.com
To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net;
mysql@lists.mysql.com
Sent: Thursday, May 3, 2012 8:51 AM
Subject: using the bin-log approach on the master side, how can I accomplish my 
replication objectives

Hello Rick,

I think I'll go with your recommendations - but help me here a little bit. 
Because all look fuzzy like a brain surgery. So make it simple and clean for me.



For the sake of simplicity, here is what my master has:

Database: db1

Tables: db1tb1, db1tb2, db1tb3



Database: db2

Tables: db2tb1, db2tb2, db2tb3



Database: db3

Tables: db3tb1, db3tb2, db3tb3



Now, I would like to replicate only these tables that belong to respective 
databases:

db1tb1, db2tb2, and db3tb3



My question is: using the bin-log approach on the master side, how can I 
accomplish my replication objectives? Help me



Best regards and thanks









RE: master BIN-LOG maintenance

2012-05-03 Thread Rick James
Or look at how old your one binlog is, and how big it is.  Do a little 
arithmetic to see the avg bytes per day; set max_binlog_size to that.
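For example, if the current binlog grew to about 3.5 GB over 14 days, that is roughly 250 MB/day, suggesting (numbers hypothetical):

```ini
# my.cnf -- rotate roughly once a day, keep about a week of history
max_binlog_size  = 256M
expire-logs-days = 7
```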

Expire-logs probably only deletes old binlogs, and does not split any binlogs.  
So what you have now will not really do anything until you have 2 binlogs, and 
the older one is > 7 days old.  That may not happen for months.  Even with the 
FLUSH you asked about, the expire may not kick in until 14 or more days.

 -Original Message-
 From: Nitin Mehta [mailto:ntn...@yahoo.com]
 Sent: Thursday, May 03, 2012 4:20 AM
 To: mysql@lists.mysql.com
 Subject: Re: master BIN-LOG maintenance
 
 Hi Charles,
 
 I guess your application doesn't generate too much of binary logs. The
 parameter expire-logs-days kicks in at the flush but does not
 necessarily require a manual flush logs command. You can reduce the
 value of max_binlog_size to make sure that at least one new file is
 created daily and this will force mysql to check and delete old bin-
 logs. The default (and maximum) value for this variable is 1G. If I'm
 not wrong, it is a dynamic variable, which means you can try different
 values without a restart of the database until you find a suitable one.
 
 Hope that helps!
 
 Regards,
 
 
 
  From: Brown, Charles cbr...@bmi.com
 To: Nitin Mehta ntn...@yahoo.com
 Cc: mysql@lists.mysql.com mysql@lists.mysql.com
 Sent: Thursday, May 3, 2012 4:24 PM
 Subject: master BIN-LOG maintenance
 
 Hi  Gurus,
 
 I would like to set BIN-LOG maintenance procedure for my master. The
 master is on a windows platform. I’m all for make it simple and clean
 therefore I’ve been leaning toward   the automatic BIN-LOG removal
 “expire-logs-days=7”. The problem is for this option to work, it should
 be preceded by a “BIN-LOG FLUSH” command. Okay, where do I go from here
 in order to make this work. Please help me.
 
 Thanks
 
 
 
 




RE: using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-03 Thread Rick James
That surprise happens whether you filter it on the master or on the slave.  The 
problem is that some tables got updated; some did not.

The only viable workaround (that I know of) is to use the wild filtering -- 
this forces the filtering to actually look at the query, not simply use the USE.
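The wild variants go in the slave's my.cnf and, unlike binlog-do-db on the master, are matched against the table each statement actually updates rather than the default database set by USE. Using the table names from Nitin's earlier example:

```ini
# slave my.cnf
replicate-wild-do-table = db1.tb1
replicate-wild-do-table = db2.tb2
replicate-wild-do-table = db3.tb3
```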

 -Original Message-
 From: Nitin Mehta [mailto:ntn...@yahoo.com]
 Sent: Wednesday, May 02, 2012 9:25 PM
 To: Brown, Charles
 Cc: mysql@lists.mysql.com
 Subject: Re: using the bin-log approach on the master side, how can I
 accomplish my replication objectives
 
 Hi Charles,
 I believe you would already have bin-log configured, is that right? If
 not, you need to.
 Secondly, If you think the bin-log generated for the entire stack of
 databases/schemas is too big, you may want to restrict it using
 binlog-do-db BUT that may create problem if you have any DMLs which
 run in the below manner:
 mysql> use db1;
 mysql> update db2.tb2 ...
 You may be surprised to find that this update will not be recorded in
 master's bin-log if you have set binlog-do-db=db2.
 To sum it up, if you're not very sure, simply enable bin-log on master
 and use replicate-do-table on slave. So in one word response to your
 question, no!
 Hope that helps!
 Also, please reply to all so the people, helping you, know about the
 status of your problem.
 
 
 Regards,
 
 
 
 
  From: Brown, Charles cbr...@bmi.com
 To: Nitin Mehta ntn...@yahoo.com
 Sent: Thursday, May 3, 2012 9:27 AM
 Subject: RE: using the bin-log approach on the master side, how can I
 accomplish my replication objectives
 
 
 Hello Nitin,
  Help me! Using your approach, do I have to put anything in the master's
  my.cnf?
 
 Thanks
 
 
  From: Nitin Mehta [mailto:ntn...@yahoo.com]
 Sent: Wednesday, May 02, 2012 10:51 PM
 To: Brown, Charles; Rick James; a.sm...@ukgrid.net;
 mysql@lists.mysql.com
 Subject: Re: using the bin-log approach on the master side, how can I
 accomplish my replication objectives
 
  I guess it only makes things even simpler.
 
  You should use replicate-do-table=db1.tb1 and so on in the slave's cnf
  to ensure that only those particular tables get replicated. However, all
  the databases on the master will get bin-logged, which you may or may
  not want.
 
 So making it even simpler:
 
 on the slave:
 
 replicate-do-table=db1.tb1
 replicate-do-table=db2.tb2
 replicate-do-table=db3.tb3
 You might want to have a look at :
 http://dev.mysql.com/doc/refman/5.5/en/replication-options-
 slave.html#option_mysqld_replicate-do-table
 http://dev.mysql.com/doc/refman/5.5/en/replication-rules.html
 
 Hope this helps!
 
  From: Brown, Charles cbr...@bmi.com
 To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net
 a.sm...@ukgrid.net; mysql@lists.mysql.com mysql@lists.mysql.com
 Sent: Thursday, May 3, 2012 8:51 AM
 Subject: using the bin-log approach on the master side, how can I
 accomplish my replication objectives
 
 Hello Rick,
 
  I think I'll go with your recommendations - but help me here a little
  bit, because it all looks fuzzy, like brain surgery. So make it simple
  and clean for me.
 
 
 
 For the sake of simplicity, here is what my master has:
 
 Database: db1
 
 Tables: db1tb1, db1tb2, db1tb3
 
 
 
 Database: db2
 
 Tables: db2tb1, db2tb2, db2tb3
 
 
 
 Database: db3
 
 Tables: db3tb1, db3tb2, db3tb3
 
 
 
 Now, I would like to replicate only these tables that belong to
 respective databases:
 
 db1tb1, db2tb2, and db3tb3
 
 
 
 My question is: using the bin-log approach on the master side, how can
 I accomplish my replication objectives? Help me
 
 
 
 Best regards and thanks
 
 
 
 
 
 
 
 
 
 
 
 


using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-02 Thread Brown, Charles
Hello Rick,

I think I'll go with your recommendations - but help me here a little bit, 
because it all looks fuzzy, like brain surgery. So make it simple and clean for me.



For the sake of simplicity, here is what my master has:

Database: db1

Tables: db1tb1, db1tb2, db1tb3



Database: db2

Tables: db2tb1, db2tb2, db2tb3



Database: db3

Tables: db3tb1, db3tb2, db3tb3



Now, I would like to replicate only these tables that belong to respective 
databases:

db1tb1, db2tb2, and db3tb3



My question is: using the bin-log approach on the master side, how can I 
accomplish my replication objectives? Help me



Best regards and thanks











Re: using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-02 Thread Nitin Mehta
I guess it only makes things even simpler.
 
You should use replicate-do-table=db1.tb1 and so on in the slave's cnf to ensure 
that only those particular tables get replicated. However, all the databases on 
the master will get bin-logged, which you may or may not want.
 
So making it even simpler:
 
on the slave:
 
replicate-do-table=db1.tb1
replicate-do-table=db2.tb2
replicate-do-table=db3.tb3

You might want to have a look at :
http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_replicate-do-table
http://dev.mysql.com/doc/refman/5.5/en/replication-rules.html
 
Hope this helps!
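Nitin's slave-side settings can be sketched in full my.cnf form; the server-id values and the bin-log base name below are assumptions, not from the thread:

```ini
# Master my.cnf sketch: bin-log everything, do the filtering on the slave.
[mysqld]
server-id=1          # any unique id (assumed)
log-bin=mysql-bin    # base name is an assumption

# Slave my.cnf sketch (a separate file on the slave host):
# [mysqld]
# server-id=2
# replicate-do-table=db1.tb1
# replicate-do-table=db2.tb2
# replicate-do-table=db3.tb3
```

After restarting both servers, SHOW SLAVE STATUS\G on the slave should list the filters in effect under Replicate_Do_Table.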
  


 From: Brown, Charles cbr...@bmi.com
To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net 
a.sm...@ukgrid.net; mysql@lists.mysql.com mysql@lists.mysql.com 
Sent: Thursday, May 3, 2012 8:51 AM
Subject: using the bin-log approach on the master side, how can I accomplish my 
replication objectives
  
Hello Rick,

I think I'll go with your recommendations - but help me here a little bit, 
because it all looks fuzzy, like brain surgery. So make it simple and clean for me.



For the sake of simplicity, here is what my master has:

Database: db1

Tables: db1tb1, db1tb2, db1tb3



Database: db2

Tables: db2tb1, db2tb2, db2tb3



Database: db3

Tables: db3tb1, db3tb2, db3tb3



Now, I would like to replicate only these tables that belong to respective 
databases:

db1tb1, db2tb2, and db3tb3



My question is: using the bin-log approach on the master side, how can I 
accomplish my replication objectives? Help me



Best regards and thanks










Re: using the bin-log approach on the master side, how can I accomplish my replication objectives

2012-05-02 Thread Nitin Mehta
Hi Charles, 
I believe you already have the bin-log configured, is that right? If not, you 
need to. 
Secondly, if you think the bin-log generated for the entire stack of 
databases/schemas is too big, you may want to restrict it using binlog-do-db, 
BUT that may create problems if you have any DML that runs in the following manner: 
mysql> use db1;
mysql> update db2.tb2 ...
You may be surprised to find that this update will not be recorded in the 
master's bin-log if you have set binlog-do-db=db2.
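The gotcha described above can be made concrete with a sketch of the problematic master configuration (statement-based logging assumed):

```ini
# Master my.cnf sketch. With statement-based logging, binlog-do-db
# filters on the *default* database (set by USE), not the database a
# statement actually modifies:
#
#   mysql> USE db1;
#   mysql> UPDATE db2.tb2 SET ... ;  -- skipped: default db is db1, not db2
[mysqld]
binlog-do-db=db2
```

This is why filtering on the slave with replicate-do-table (or wild filtering) is the safer choice here.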
To sum it up, if you're not very sure, simply enable the bin-log on the master 
and use replicate-do-table on the slave. So, as a one-word answer to your 
question: no! 
Hope that helps! 
Also, please reply to all so the people helping you know the status of 
your problem.
 
 
Regards,

 


 From: Brown, Charles cbr...@bmi.com
To: Nitin Mehta ntn...@yahoo.com 
Sent: Thursday, May 3, 2012 9:27 AM
Subject: RE: using the bin-log approach on the master side, how can I 
accomplish my replication objectives
  

Hello Nitin,
Help me! Using your approach, do I have to put anything in the master's 
my.cnf?
 
Thanks
 
 
From: Nitin Mehta [mailto:ntn...@yahoo.com] 
Sent: Wednesday, May 02, 2012 10:51 PM
To: Brown, Charles; Rick James; a.sm...@ukgrid.net; mysql@lists.mysql.com
Subject: Re: using the bin-log approach on the master side, how can I 
accomplish my replication objectives
 
I guess it only makes things even simpler.
 
You should use replicate-do-table=db1.tb1 and so on in the slave's cnf to ensure 
that only those particular tables get replicated. However, all the databases on 
the master will get bin-logged, which you may or may not want.
 
So making it even simpler:
 
on the slave:
 
replicate-do-table=db1.tb1
replicate-do-table=db2.tb2
replicate-do-table=db3.tb3
You might want to have a look at :
http://dev.mysql.com/doc/refman/5.5/en/replication-options-slave.html#option_mysqld_replicate-do-table
http://dev.mysql.com/doc/refman/5.5/en/replication-rules.html
 
Hope this helps!
  
From: Brown, Charles cbr...@bmi.com
To: Rick James rja...@yahoo-inc.com; a.sm...@ukgrid.net 
a.sm...@ukgrid.net; mysql@lists.mysql.com mysql@lists.mysql.com 
Sent: Thursday, May 3, 2012 8:51 AM
Subject: using the bin-log approach on the master side, how can I accomplish my 
replication objectives

Hello Rick,

I think I'll go with your recommendations - but help me here a little bit, 
because it all looks fuzzy, like brain surgery. So make it simple and clean for me.



For the sake of simplicity, here is what my master has:

Database: db1

Tables: db1tb1, db1tb2, db1tb3



Database: db2

Tables: db2tb1, db2tb2, db2tb3



Database: db3

Tables: db3tb1, db3tb2, db3tb3



Now, I would like to replicate only these tables that belong to respective 
databases:

db1tb1, db2tb2, and db3tb3



My question is: using the bin-log approach on the master side, how can I 
accomplish my replication objectives? Help me



Best regards and thanks










 

  

Re: logrotate script doesn't create a new log file

2012-02-23 Thread Honza Horak

On 02/22/2012 07:58 PM, Johan De Meersman wrote:

With this line commented out, we have to rely on logrotate.conf to have
something similar defined, and we see an error when using mysql-rotate


Then something else is wrong. Does your MySQL daemon happen to run as a user 
who normally cannot create files in the directory where the log files are?


This seems to be the reason. MySQL runs as the mysql user and the log 
file is located under /var/log in Fedora, so the daemon doesn't have 
enough privileges. It's clear now; we'd need to un-comment the line in 
such a configuration.
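A sketch of what the un-commented configuration might look like; the log path and the postrotate command are assumptions based on the stock mysql-log-rotate script, not from this thread:

```conf
# /etc/logrotate.d/mysqld sketch: the create directive makes logrotate
# recreate the log file as mysql:mysql, so a daemon running as the
# unprivileged mysql user can keep writing under /var/log.
/var/log/mysqld.log {
    create 600 mysql mysql
    notifempty
    daily
    rotate 5
    missingok
    postrotate
        # tell mysqld to reopen its log files
        /usr/bin/mysqladmin flush-logs
    endscript
}
```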


Thanks for the tip.

Honza

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



logrotate script doesn't create a new log file

2012-02-22 Thread Honza Horak

Hi all,

I'm looking at the logrotate script that is shipped in the MySQL tarball 
(e.g. mysql-5.5.20/support-files/mysql-log-rotate.sh). There is a 
commented-out line, "# create 600 mysql mysql", that should ensure the 
logrotate utility creates a new log file after rotating. Is there any 
particular reason why the line is not enabled by default?


With this line commented out, we have to rely on logrotate.conf to have 
something similar defined, and we see an error when running the mysql-rotate 
script alone, such as:


 logrotate -f /etc/logrotate.d/mysqld

I think the line shouldn't be commented out by default.

Any thoughts appreciated.

Thanks.

Honza

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql


