Re: Undo Changes

2013-07-06 Thread spameden
2013/7/6 Rafael Ribeiro - iPhone rafaelribeiro...@gmail.com

 Dear Colleagues,

 I would like to hear your opinion about a situation.

 Is there a function that is able to REMOVE all data from a specific date?


are you talking about removing all data from the tables, or just specific
rows inserted at some point in time?

you can do this in different ways (see the cron sketch below):

First way:
1) add a field recording the time of insertion (for example:
insert_date)
2) DELETE FROM table WHERE insert_date >= DATE_SUB(NOW(), INTERVAL 3 HOUR)
3) set up a cronjob to run that statement (or a PHP script doing the same)
every 3 hours

Second way (wipes all data):
1) TRUNCATE table
2) also via a cronjob running every 3 hours
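
A minimal sketch of the cron approach, assuming the demo data lives in a
database called `demo` and that a baseline dump (the default data) was saved
once to /var/backups/demo_baseline.sql -- both names are made up here, and
credentials are expected to come from ~/.my.cnf:

  # /etc/cron.d/demo-reset -- drop and reload the demo database every 3 hours
  0 */3 * * * root mysql -e "DROP DATABASE IF EXISTS demo; CREATE DATABASE demo;" && mysql demo < /var/backups/demo_baseline.sql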



 I mean ... We are developing a demo script website, where users can join
 and test our system.

 We need a database reset every 3 hours ... delete all new data and roll
 all changes back to a default point.

 Is it better to do that with PHP, or does MySQL have an internal function that handles this?

 Rafael Ribeiro
 Sent by iPhone
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: restore question

2013-07-05 Thread spameden
Hi


2013/7/5 Jim Sheffer j...@higherpowered.com

 Hi everyone-

 This is probably a no brainer (I'm new to Navicat) but I have a backup of
 a database from Navicat.

 I want to be able to see if a certain field has changed since this morning
 in the backup (We are having problems with an order that somehow
 duplicated the items.  I need to see if there was only 1 of each item or
 two removed from inventory).  I don't need to do a restore into the
 database, just have a look at the backup.


First, dump the current schema:

 mysqldump --no-data database > database_schema.sql

Second, extract the schema from the Navicat backup (this might need additional
filtering, I'm not sure):

  grep -v 'INSERT INTO' backup.dump.sql > navicat_schema.sql

Third, compare the two:

  diff -u database_schema.sql navicat_schema.sql
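
If you also need to look at the actual row data inside the backup without
restoring it, you can usually pull out just the INSERT statements for the
table in question (the table name `orders` here is only a placeholder):

  grep "INSERT INTO \`orders\`" backup.dump.sql > orders_rows.sql

Keep in mind that dumps often use multi-row extended INSERTs, so each matching
line can be very long.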


 Is this possible without going through the hoops of creating a copy of the
 database and restoring to the copy (I assume this is possible) - I DO NOT
 want to restore into the currently running database :-)

 Any suggestions would be greatly appreciated!


 James Sheffer,

 The HigherPowered Team!

 supp...@higherpowered.com  sa...@higherpowered.com
 Web Design  Development http://www.higherpowered.com
 phone:  469-256-0268
  We help businesses succeed on the web!
  ---


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Master not creating new binary log.

2013-07-04 Thread spameden
On the slave, issue:

SHOW SLAVE STATUS\G and post the output here.

Most likely, after you reset the master your slave can't sync anymore,
because it is missing the next file in the binary log sequence.

Why don't you back up your master with mysqldump and re-seed it on the new
setup (i.e. on the MySQL 5.5 instance)? For example:
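
A rough sketch of the dump/re-seed, assuming mostly InnoDB tables (the
--master-data=2 option records the master's binlog coordinates in the dump as
a comment):

  # on the old master
  mysqldump --single-transaction --master-data=2 --all-databases > master_full.sql
  # on the new 5.5 instance
  mysql < master_full.sql
  # then point the slave at the master with CHANGE MASTER TO, using the
  # coordinates written at the top of master_full.sql, and START SLAVE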


2013/7/4 Machiel Richards - Gmail machiel.richa...@gmail.com

 Hi,

 in short what we did was the following:

 - binary logs was written to a 20GB filesystem and due to company
 policies we kept expire logs at 7 days.
 - system got quite busy over the years so the space was becoming a
 problem and thus we had to move to another directory.
 - setting that was changed is : log_bin =  new directory
 - old binary logs were moved to the new directory after shutting down
 the database
 - database started up and continued as normal, however stopped at the
 last binary log when it filled up and complained about a corrupted binary
 log.
 - a flush logs and reset master was done and a new binary log was
 created mysql-bin.1
 - however same thing happening here, the binlog file fills up to 100Mb
 as configured, then stops without creating a new binary log.
 - this is then the point where the replication crashes as well.


 Output of the queries:


 mysql> SHOW VARIABLES LIKE '%bin%';
 +---------------------------------+----------------------+
 | Variable_name                   | Value                |
 +---------------------------------+----------------------+
 | binlog_cache_size               | 1048576              |
 | innodb_locks_unsafe_for_binlog  | OFF                  |
 | log_bin                         | ON                   |
 | log_bin_trust_function_creators | OFF                  |
 | max_binlog_cache_size           | 18446744073709547520 |
 | max_binlog_size                 | 104857600            |
 | sync_binlog                     | 0                    |
 +---------------------------------+----------------------+


 mysql> SHOW VARIABLES LIKE '%dir%';
 +----------------------------+----------------------------+
 | Variable_name              | Value                      |
 +----------------------------+----------------------------+
 | basedir                    | /usr/                      |
 | character_sets_dir         | /usr/share/mysql/charsets/ |
 | datadir                    | /var/lib/mysql/            |
 | innodb_data_home_dir       |                            |
 | innodb_log_arch_dir        |                            |
 | innodb_log_group_home_dir  | ./                         |
 | innodb_max_dirty_pages_pct | 90                         |
 | plugin_dir                 |                            |
 | slave_load_tmpdir          | /var/lib/mysql/tmp/        |
 | tmpdir                     | /var/lib/mysql/tmp         |
 +----------------------------+----------------------------+
 10 rows in set (0.00 sec)






 Regards



 On 07/03/2013 08:29 PM, Rick James wrote:

 What setting(s) did you change to move to the separate partition?
 SHOW VARIABLES LIKE '%bin%';
 SHOW VARIABLES LIKE '%dir%';
 (there may be other VARIABLES worth checking)
 What steps did you take for the move?  (Actually move bin.1?  Start over?
  etc?)

 Consider using expire_logs_days.

 5.0 -- I don't think anything relevant has changed during 4.0 thru 5.6.

  -Original Message-
 From: Machiel Richards - Gmail 
 [mailto:machiel.richa...@gmail.com]
 Sent: Wednesday, July 03, 2013 3:20 AM
 To: mysql list
 Subject: Master not creating new binary log.

 Hi all

 I hope all are well.

 I would like some assistance with something really strange.

 We currently have a problem with a master slave setup running
 mysql
 5.0.

  This is one of our legacy servers which are in the planning to
 be
 upgraded, however in order for this to be done the replication needs to
 be
 up and running.

  The problem we have currently, however, is that the binary logs on
 the master were moved to a separate partition due to disk space
 restrictions.

  A new binlog file called mysql-bin.1 was created and everything
 seemed to work fine.

   However, the moment the file reached the file size of 100Mb, it
 does not go on to create a new binlog file called mysql-bin.2 and the
 replication fails stating that it is unable to read the binary log file.

   Thus far we have done a flush logs and reset master, but the
 same problem occurs, where it creates mysql-bin.1 and the moment it
 reaches its max size and is supposed to create a new file, it stops and does
 not create the new one.

   I really hope this makes sense, and that someone can perhaps
 point us in the correct direction.

   Any help would be appreciated.

 Regards





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql





Re: database perfomance worries

2013-07-02 Thread spameden

  We are on a quest to improve the overall performance of our database.
 It's
  generally
  working pretty well, but we periodically get big slowdowns for no
 apparent
  reason. A
  prime example today - in the command line interface to the DB, I tried to
  update one
  record, and got:
 
  mysql update agent set number_of_emails = 5 where acnt = 'AR287416';
  Query OK, 1 row affected (36.35 sec)
  Rows matched: 1  Changed: 1  Warnings: 0
 
  36 seconds to update one table? The primary key is `acnt`. If I run the
  same (basic)


Check whether there is a DELETE running against that table.

A DELETE involving the whole table may be locking the rows (or the table) that
your UPDATE needs.

Look into mysql-slow.log.

Try to optimize your application queries with EXPLAIN, for example:
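
A few quick checks (here `yourdb` is just a placeholder for your schema name;
EXPLAIN on an UPDATE only works in 5.6+, so rewrite the WHERE clause as a
SELECT):

  mysql -e "SHOW FULL PROCESSLIST"          # look for long-running DELETE/UPDATE statements
  mysql -e "SHOW ENGINE INNODB STATUS\G"    # lock waits show up in the TRANSACTIONS section
  mysql yourdb -e "EXPLAIN SELECT * FROM agent WHERE acnt = 'AR287416'\G"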


[!!] InnoDB data size / buffer pool: 7.8G/5.5G
Variables to adjust: innodb_buffer_pool_size (>= 7G)

2 GB innodb_buffer_pool is a joke for a dataset of 33 GB

that leads to permanent I/O on mixed load, and the chances
are high that there are times when nothing needed to
operate is in the buffer_pool; under concurrent load
mysqld ends up repeatedly swapping data in and out of the pool

at least all repeatedly accessed tables should fit
permanently in the buffer

it depends on the load and how much data you're actually touching.

if you have 33GB in total, but at any given time only use a few tables whose
combined size is less than 2GB, it will work just fine.

for example I have 136GB of data, but my buffer is only about 10GB, and
most of the queries work just fine (I'm using it for mostly read-only
things).

but of course you need to check your system usage: if mysqld swaps, that is a
bad thing, and most likely you need to either upgrade your hardware or rethink
your data architecture (i.e. use LIMIT for queries, add more indexes, split
large tables into smaller ones that you actually update, or store large data
in mongodb, etc.). A quick way to compare data size with the buffer pool:
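
The information_schema numbers below are estimates, and changing
innodb_buffer_pool_size on 5.1/5.5 means editing my.cnf and restarting:

  mysql -e "SELECT engine, ROUND(SUM(data_length+index_length)/1024/1024/1024,1) AS gb FROM information_schema.tables GROUP BY engine"
  mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
  mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads'"   # reads that had to go to disk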



  command again a few seconds later, I get:
 
  mysql update agent set number_of_emails = 15 where acnt =
 'AR287416';
  Query OK, 1 row affected (0.00 sec)
  Rows matched: 1  Changed: 1  Warnings: 0
 
  Why would we be getting such huge variations? We're running Solaris 10 on
  i386, with
  4 processors and 16GB of memory, MySQL 5.1.46-log. We are working out a
  plan to upgrade
  to MySQL 5.6, but I certainly don't want to depend on that upgrade to
  solve all performance
  problems.
 
  CREATE TABLE `agent`
  (
`acnt` varchar(20) NOT NULL,
`passwd` varchar(20) NOT NULL,
`package` char(2) DEFAULT NULL,
`data_template` varchar(20) DEFAULT 'NULL',
`default_search_type` enum('1','2','3') NOT NULL DEFAULT '1',
  `status` enum('A','T','P','C','D','X','S') NOT NULL,
`flags` varchar(10) DEFAULT NULL COMMENT 'pipe-separated flags',
`aliases` varchar(4000) NOT NULL DEFAULT '',
`offices` varchar(4000) NOT NULL DEFAULT '',
`license_no` varchar(40) NOT NULL DEFAULT '',
`agent_code` varchar(20) DEFAULT NULL,
`office_code` varchar(20) DEFAULT NULL,
`parent_acnt` varchar(20) DEFAULT NULL,
`number_of_agentlinks` int(11) DEFAULT NULL,
`number_of_emails` int(11) DEFAULT NULL,
`fname` varchar(30) DEFAULT NULL,
`lname` varchar(30) DEFAULT NULL,
 
  whole bunch of other fields
 
PRIMARY KEY (`acnt`),
KEY `parent_acnt` (`parent_acnt`),
KEY `status` (`status`),
KEY `email` (`email`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8
 
 
  --
  Andy Wallace
  iHOUSEweb, Inc.
  awall...@ihouseweb.com
  (866) 645-7700 ext 219
  --
  Sometimes it pays to stay in bed on Monday, rather than spending the
 rest
  of the week debugging Monday's code.
  - Christopher Thompson
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 



Re: how to list record in column (instead of a row)

2013-04-28 Thread spameden
you can do:

select * from table\G
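
For illustration, this is roughly what the vertical (\G) output looks like;
the two columns here are just made up:

  $ mysql -e "SELECT 1 AS col_a, 'x' AS col_b\G"
  *************************** 1. row ***************************
  col_a: 1
  col_b: x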


2013/4/24 h...@tbbs.net

  2013/04/24 09:06 -0700, Rajeev Prasad 
 this table has many columns and only 1 record. select * from table;
 generates an unreadable list. how can i list the record as in two columns?
 (column name and its value)? i looked at UNPIVOT, but could not get it to
 work.
 SQL select * from table UNPIVOTE INCLUDE NULLS;
  select * from table UNPIVOTE INCLUDE NULLS
  * ERROR at line 1: ORA-00933: SQL command not properly ended
 
 From MySQL client, if started with flag '-G':
 select * from table
 ego

 For the same program there is flag '--auto-vertical-output'.

 But it seems you are using Oracle; this is MySQL list.
 In any case, you wrote both 'UNPIVOT' and 'UNPIVOTE'; I suspect the former
 is better.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Reg: MYSQL Mail Agent

2013-04-08 Thread spameden
2013/4/8 Reindl Harald h.rei...@thelounge.net

 do not top-post

 Am 08.04.2013 12:40, schrieb Bharani Kumar:
  On Mon, Apr 8, 2013 at 4:02 PM, Johan De Meersman vegiv...@tuxera.be
 wrote:
 
  - Original Message -
  From: Bharani Kumar bharanikumariyer...@gmail.com
 
  How to enable mail agent service in MYSQL. and what are the necessary
  steps to send mail.
 
  I suspect you're in the wrong place - the Mail Agent is an MS SQL
 service,
  iirc. MySQL is a different database entirely, and has no built-in
  provisions for mail
 
  When i use mssql, i used the mail agent, so similar one expecting in
 MYSQL

 no, you can not expect that different software has the same feature
 set because we would not need different software if all is the same

 mssql is a big and fat software, mysql is a tiny software and because


I love your rants, keep them coming :)


 it's running mostly on unix systems it does not need an MTA




 unix philosophy: one tool for one task


Almost every system needs an MTA or mail daemon anyway; couldn't agree more
with you about the philosophy, though.

OP: take a look at postfix or any other daemon suited for mailing purposes, and
don't forget to configure the SPF / TXT records of the mail domains used on your
systems to avoid messages being marked as spam.
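
For example, you can check what SPF record a domain currently publishes with
dig (example.com is just a placeholder):

  dig +short TXT example.com        # look for a "v=spf1 ..." record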


Re: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread spameden
2013/3/24 Reindl Harald h.rei...@thelounge.net



 Am 24.03.2013 05:20, schrieb spameden:
  2013/3/19 Rick James rja...@yahoo-inc.com:
  you never have hosted a large site
  Check my email address before saying that.
 
  :D

 as said, big company does not have only geniusses


I do not judge on just one parameter; Rick has been constantly helping here
and I'm pretty sure he has more knowledge of MySQL than you.



  20 may be low, but 100 is rather high.
  Never use apache2 it has so many problems under load..

 if you are too stupid to configure it, yes


Ever heard of Slow HTTP DoS attacks?



  The best combo is php5-fpm+nginx.
  Handles loads of users at once if well tuned

 Apache 2.4 handles the load of 600 parallel executed
 php-scripts from our own CMS-system


Nginx serves static content far better than apache2 (I did a few benchmarks
already).

nginx+php5-fpm handles load better than apache2-prefork+mod_php.

You can google benchmarks if you don't trust me.

Also, nginx uses much less memory than apache2.

php5-fpm can be tuned as well to suit your needs if you have lots of
dynamic content.


 maybe you guys should learn what a opcode-cache is and
 how to compile and optimize software (binaries and config)

 o'rly?


Re: How to change max simultaneous connection parameter in mysql.

2013-04-02 Thread spameden
2013/4/3 Rick James rja...@yahoo-inc.com

 SELECT is not performed in the same thread as nginx; it is performed in
 another process, or even (in big web setups) in a different host.
  Therefore, nginx would be in some form of wait state, thereby not really
 using the CPU.


Of course the SELECT is not performed in an nginx thread; nginx acts as a
proxying server and just passes the request on to the backend.

How fast a given SELECT gets processed depends entirely on your backend, and
of course on what kind of database you've got.

If the backend takes too long to respond, nginx just shows a 502 error after
the backend timeout (by default: 30s).

Good practice is to have multiple backends behind a load balancer, so that
under huge load no single request is lost.



  -Original Message-
  From: Reindl Harald [mailto:h.rei...@thelounge.net]
  Sent: Tuesday, April 02, 2013 2:00 PM
  To: mysql@lists.mysql.com
  Subject: Re: How to change max simultaneous connection parameter in
  mysql.
 
 
 
  Am 02.04.2013 22:56, schrieb Rick James:
   I hear that nginx is very fast for a certain class of web serving.
 
  yes
 
   But what happens if a web page needs to do a SELECT?
 
  what should happen?
 
   Is nginx single-threaded, thereby sitting idle waiting for the
  SELECT?


nginx runs a master process plus worker processes (so it takes advantage of
SMP); the master process controls everything, and the nginx configuration can
be reloaded with zero downtime.

I saw multiple tests where, under load (a simple DDoS simulation with
40k bots hitting the site at once), nginx+php5-fpm dropped far fewer
requests than apache2 + mod_php.

apache2 is far worse at conserving memory and system resources.


 
  why should it do that?
 
   And, should you run 8 nginx web servers on an 8-core box?


no, you just tune

worker_processes 8;


  why should you do that?
 
  http://en.wikipedia.org/wiki/Nginx
  nginx uses an asynchronous event-driven approach to handling requests
 
   -Original Message-
   From: spameden [mailto:spame...@gmail.com]
   Sent: Tuesday, April 02, 2013 7:10 AM
   To: Reindl Harald
   Cc: mysql@lists.mysql.com
   Subject: Re: How to change max simultaneous connection parameter in
   mysql.
  
   2013/3/24 Reindl Harald h.rei...@thelounge.net
  
  
  
   Am 24.03.2013 05:20, schrieb spameden:
   2013/3/19 Rick James rja...@yahoo-inc.com:
   you never have hosted a large site
   Check my email address before saying that.
  
   :D
  
   as said, big company does not have only geniusses
  
  
   I do not judge only on 1 parameter, Rick has been constantly helping
   here and I'm pretty sure he has more knowledge on MySQL than you.
  
  
  
   20 may be low, but 100 is rather high.
   Never use apache2 it has so many problems under load..
  
   if you are too supid to configure it yes
  
  
   Ever heard about Slow HTTP DoS attack?
  
  
  
   The best combo is php5-fpm+nginx.
   Handles loads of users at once if well tuned
  
   Apache 2.4 handles the load of 600 parallel executed php-scripts
   from our own CMS-system
  
  
   Nginx serves static content way better than apache2 (did few
   benchmarks already).
  
   nginx+php5-fpm handles better load than apache2-prefork+mod_php
  
   you can google benchmarks if you dont trust me
  
   also nginx eats much less memory than apache2
  
   php5-fpm can be tuned as well to suit your needs if you have lots of
   dynamic content
  
  
   maybe you guys should learn what a opcode-cache is and how to
   compile and optimize software (binaries and config)
  
   o'rly?


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: All client commands to syslog?

2013-04-02 Thread spameden
2013/3/28 Rafał Radecki radecki.ra...@gmail.com

 Hi All.

 I have a production setup of four databases connected with
 replication. I would like to log every command that clients execute
 for auditing.


Take a look at the general query log; it's exactly what you need:

http://dev.mysql.com/doc/refman/5.5/en/query-log.html
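
A minimal sketch of turning it on at runtime (both variables are dynamic; be
aware that on a busy server the general log grows very quickly):

  mysql -e "SET GLOBAL log_output = 'FILE'; SET GLOBAL general_log = 'ON';"
  # or keep it in the mysql.general_log table instead of a file:
  mysql -e "SET GLOBAL log_output = 'TABLE'; SET GLOBAL general_log = 'ON';"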



 I've read
 http://www.percona.com/doc/percona-server/5.5/diagnostics/mysql_syslog.html?id=percona-server:features:mysql_syslogredirect=1#client-variables
 but despite the fact that I use percona
 Percona-Server-server-55-5.5.28-rel29.3.388.rhel6.x86_64 it does not
 seem to work. I've added syslog/syslog ON to my my.cnf server
 configuration file but no info about executed commands in logs.

 I've also read
 http://www.mysqlperformanceblog.com/2008/07/08/logging-your-mysql-command-line-client-sessions/
 .

 What is the best way to log all client commands?

 Best regards,
 Rafal.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: How to change max simultaneous connection parameter in mysql.

2013-03-23 Thread spameden
2013/3/19 Rick James rja...@yahoo-inc.com:
 you never have hosted a large site
 Check my email address before saying that.

:D


 20 may be low, but 100 is rather high.

Never use apache2; it has so many problems under load.

The best combo is php5-fpm + nginx.

It handles loads of users at once if well tuned.
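
As for the actual subject-line question, a minimal sketch of raising
max_connections (200 is an arbitrary example value):

  mysql -e "SET GLOBAL max_connections = 200"   # takes effect immediately, lost on restart
  # and persist it under [mysqld] in my.cnf:
  #   max_connections = 200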


 -Original Message-
 From: Reindl Harald [mailto:h.rei...@thelounge.net]
 Sent: Monday, March 18, 2013 1:36 PM
 To: mysql@lists.mysql.com
 Subject: Re: How to change max simultaneous connection parameter in
 mysql.



 Am 18.03.2013 21:01, schrieb Rick James:
  20 is plenty if your pages run fast enough

 it is not
 you never have hosted a large site

  Excess clients after MaxClients are queued in Apache

 so what - it does not help you
 been there, done that

 if you have some hundred USERS at the same time and every one of them is
 requesting the same page with a lot of images you are simply DEAD with
 a limit of 20 in your configuration

  If the 20 are consuming resources (eg cpu/disk) it is better to queue
  the excess than to have everybody stumbling over each other.

 if your server can not serve more than 20 simultaneous requests you
 are not doing any serious things

 sorry, 20 can be done with any crappy notebook these days

  In MySQL, the excess clients beyond max_connections are given an
 error.
 
  -Original Message-
  From: Reindl Harald [mailto:h.rei...@thelounge.net]
  Sent: Monday, March 18, 2013 12:15 PM
  To: mysql@lists.mysql.com
  Subject: Re: How to change max simultaneous connection parameter in
  mysql.
 
 
 
  Am 18.03.2013 19:26, schrieb Rick James:
  If you are running Apache with MaxClients set too high, that can
  cause the problem.
 
  too high is relative
 
  That Apache setting should be something like 20.  (Other web
 servers
  have similar settings.)
 
   20 is a laughable value as long as you are not hosting only sites with
  no users at all
 
  i have seen MaxClients 500 be critical while the hardware was not
  overloaded and we had THOUSANDS of users which liked to get the
   website with all its images, no way with stupid settings of 20
 which
  means only ONE USER at the same time can fetch a single page with
  images and stylesheets


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: file level encryption on mysql

2013-03-14 Thread spameden
I'm sorry for top-posting, but I think the best you can achieve here
is to encrypt the user data with a key derived from (part of) each
user's password, i.e. after a user logs in you can keep a personal
decryption key for that user in memory, so you have to know every
user's password (or part of it) to get the info out of these 3
tables.

The password itself should be stored in the database only as a hash
(use bcrypt). All decryption / encryption should be done in your
application. A rough sketch of the idea follows below.

The only disadvantage is that you won't be able to read a user's data if you
don't know that user's password.
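
A conceptual sketch only, not a drop-in recipe: derive a key from the user's
password and encrypt the field before it ever reaches MySQL. This uses the
openssl command line with -pbkdf2 (OpenSSL 1.1.1+); real code should live in
the application and use a vetted crypto library with per-user random salts:

  # encrypt one field value with a key derived from the user's password
  printf '%s' "some private value" \
    | openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:"$USER_PASSWORD" -a > field.enc
  # decrypt it again after login, when the password is available
  openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$USER_PASSWORD" -a < field.enc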

2013/2/5 Rick James rja...@yahoo-inc.com:
 AES encryption is weak because it is too easy for the hacker to get the 
 passphrase.

 If you can somehow hide the passphrase behind 'root', you can at least 
 prevent a non-sudo user from seeing the data.  Your web server starts as 
 root, then degrades itself before taking requests.  If it can grab the 
 passphrase before that, it can keep it in RAM for use, but not otherwise 
 expose it.

 Bottom line:  The problem (of protecting data from hacker/thief/etc) cannot 
 be solved by just MySQL.  (And perhaps MySQL is not even part of the 
 solution.)

 -Original Message-
 From: Mike Franon [mailto:kongfra...@gmail.com]
 Sent: Tuesday, February 05, 2013 6:43 AM
 To: Reindl Harald
 Cc: mysql@lists.mysql.com
 Subject: Re: file level encryption on mysql

 Which is the best way ?

 I see you can do it from PHP itself

 http://coding.smashingmagazine.com/2012/05/20/replicating-mysql-aes-
 encryption-methods-with-php/


 or can use mysql AES?

 http://security.stackexchange.com/questions/16473/how-do-i-protect-
 user-data-at-rest

 From what I understand we need two way and one way encryption.  Is the
 best way what the first article is recommending?



 On Tue, Feb 5, 2013 at 9:20 AM, Reindl Harald h.rei...@thelounge.net
 wrote:
  you have to encrypt them in the application and make the key stored
 as
  safe as possible, however for a full intrution there is no way to
  protect data which can not be only hashed
 
  somewhere you need the information how to encrypt them
 
  Am 05.02.2013 15:18, schrieb Mike Franon:
  I tried all these methods and you are right this is not going to
 work for us.
 
  I am not a developer, does anyone have any good links or reference
 to
  the best way I can share with my developers on best way to encrypt
  and decrypt personal user info.
 
  We do not store credit cards, but want to store 3 tables that have
  email address, ip address, and personal info.
 
  On Sun, Feb 3, 2013 at 12:57 PM, Reindl Harald
 h.rei...@thelounge.net wrote:
 
 
  Am 03.02.2013 18:52, schrieb Mike Franon:
  Hi,
 
  I was wondering what type of encryption for linux would you
  recommend to encrypt the database files on the OS level? I had a
  hard time starting the database after I moved it to a partiton
 with
  encryptFS
 
  I only need 3 tables encrypted and know it is better to do it from
  the application, but unfortunately that cannot happen for a while.
 
  Has anyone done OS file level encryption, and if so which one did
 they use?
 
  https://wiki.archlinux.org/index.php/Dm-crypt_with_LUKS
 
  but this all is useless in case of intrusion because the FS is
  unlocked and you have no gain - FS encryption only matters if your
  notebook or disks get stolen which is unlikely on a server
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: auto_increment field behavior

2013-03-13 Thread spameden
2013/3/13 Reindl Harald h.rei...@thelounge.net:


 Am 12.03.2013 22:34, schrieb spameden:
 NOTE: AUTO_INCREMENT is 32768 instead of 17923 ! So next inserted row
 would have pc_id=32768.

 Please suggest if it's normal behavior or not

 what do you expect if a PRIMARY KEY record get's removed?
 re-use the same primary key?
 this is not the way a database is allowed to work

No, I do not want the same key reused; I just expect the next key: after I
insert 17922 records it should be 17923, shouldn't it?

I didn't delete or modify any records.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: auto_increment field behavior

2013-03-12 Thread spameden
Furthermore, I've tested with 133K records and the AUTO_INCREMENT counter
ended up with the value 234076.

mysql select count(*) from billing.phone_codes;
+--+
| count(*) |
+--+
|   12 |
+--+
1 row in set (0.02 sec)

AUTO_INCREMENT=234076

So it basically means that if I have a large enough table I'd need to enlarge
the PRIMARY KEY storage type, because the counter grows to almost double the
number of actual records.

I didn't delete records in this test either; I inserted them all via LOAD
DATA. A quick way to check the counter:
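
Here billing.phone_codes is the table from my earlier test; note that the
table_rows figure for InnoDB is only an estimate:

  mysql -e "SELECT table_name, auto_increment, table_rows FROM information_schema.tables WHERE table_schema='billing' AND table_name='phone_codes'\G"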

2013/3/13 spameden spame...@gmail.com:
 2013/3/13 Reindl Harald h.rei...@thelounge.net:


 Am 12.03.2013 22:34, schrieb spameden:
 NOTE: AUTO_INCREMENT is 32768 instead of 17923 ! So next inserted row
 would have pc_id=32768.

 Please suggest if it's normal behavior or not

 what do you expect if a PRIMARY KEY record get's removed?
 re-use the same primary key?
 this is not the way a database is allowed to work

 No, I do not want the same key, I just want the next key after I
 insert 17922 records it should be 17923 ?

 I didn't delete or modificate any records.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: auto_increment field behavior

2013-03-12 Thread spameden
After setting innodb_autoinc_lock_mode=0 it seems to start working as
expected for me:

mysql show variables like 'innodb_autoinc_lock_mode';
+--+---+
| Variable_name| Value |
+--+---+
| innodb_autoinc_lock_mode | 0 |
+--+---+
1 row in set (0.00 sec)

mysql truncate test;
Query OK, 0 rows affected (0.01 sec)

mysql load data infile '/tmp/ABC3x' into table test fields terminated
by ';' enclosed by '#' lines terminated by '\r\n'
(@var1,@var2,@var3,@var4,@var5,@var6,@var7) SET pc_type='ABC';
Query OK, 17922 rows affected (0.21 sec)
Records: 17922  Deleted: 0  Skipped: 0  Warnings: 0

mysql show create table test;
+-------+-----------------------------------------------------------+
| Table | Create Table                                              |
+-------+-----------------------------------------------------------+
| test  | CREATE TABLE `test` (
  `pc_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `pc_type` enum('ABC','DEF') DEFAULT NULL,
  PRIMARY KEY (`pc_id`)
) ENGINE=InnoDB AUTO_INCREMENT=17923 DEFAULT CHARSET=utf8 |
+-------+-----------------------------------------------------------+
1 row in set (0.00 sec)

Shame it's a read-only variable and I need to restart the whole MySQL server
to change it; a my.cnf sketch follows below.
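
A minimal sketch of making the change permanent, assuming a Debian/Ubuntu
style layout (file path and service name may differ on other setups):

  # add under the [mysqld] section of /etc/mysql/my.cnf:
  #   innodb_autoinc_lock_mode = 0
  # then restart the server:
  sudo service mysql restart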


2013/3/13 spameden spame...@gmail.com:
 Nevermind, I've found the bug:

 http://bugs.mysql.com/bug.php?id=57643

 I'm gonna subscribe for it and see if it's gonna be resolved.

 Many thanks guys for all your assistance!

 2013/3/13 spameden spame...@gmail.com:
 2013/3/13 Rick James rja...@yahoo-inc.com:
 AUTO_INCREMENT guarantees that it will not assign the same number twice.  
 That's about all it is willing to guarantee.

 With InnoDB, if a transaction starts, uses an auto_inc value, then rolls 
 back, that id is lost.

 True, but if you do not specify START TRANSACTION I believe it's done
 automatically? Nothing rolled back for me for that table and noone has
 been using it except me.


 When you have multiple threads loading data into the same table, diff 
 values of innodb_autoinc_lock_mode give you diff tradeoff between speed and 
 predictability.  If replication is involved, you want predictability.

 No, I do not have multiple threads, only 1.


 InnoDB and MyISAM act differently, especially after recovering from a crash.

 I understand the difference between InnoDB and MyISAM. InnoDB is a
 transactional DB engine with single row-level locking.


 If you DELETE the _highest_ id, then restart the server, that id will be 
 reused.  (This is irritating to some people.)  Otherwise, a deleted id will 
 not be reused.

 I didn't DELETE anything! The only actions I did:

 1. Created the TABLE
 2. used LOAD FILE only via command line (1 thread)

 So is it normal or should I fill a bug?

 There may be more.  Most of those are covered here:
 http://mysql.rjweb.org/doc.php/ricksrots




 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:46 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: auto_increment field behavior

 2013/3/13 Rick James rja...@yahoo-inc.com:
  What settings?  (innodb_autoinc_lock_mode comes to mind, but there
 may
  be others.)
 Hi, Rick.

 Many thanks for the quick answer here is my settings:

 mysql show variables like '%inc%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | auto_increment_increment| 1 |
 | auto_increment_offset   | 1 |
 | div_precision_increment | 4 |
 | innodb_autoextend_increment | 8 |
 | innodb_autoinc_lock_mode| 1 |
 +-+---+
 5 rows in set (0.00 sec)


 
  It is acceptable, by the definition of AUTO_INCREMENT, for it to burn
 the missing 15K ids.

 I don't get this explanation, could you please explain bit more? So
 it's completely normal for AUTO_INCREMENT field to act like this?


 
  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Tuesday, March 12, 2013 2:34 PM
  To: mysql@lists.mysql.com
  Subject: auto_increment field behavior
 
  Hi, I'm running MySQL-5.5 on Ubuntu
 
  ~ $ mysqld -V
  mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
  ((Ubuntu))
 
  Would like to know if it's normal behavior with auto_increment field
  (tried both signed and unsigned now):
 
  mysql show create table phone_codes

Re: auto_increment field behavior

2013-03-12 Thread spameden
Nevermind, I've found the bug:

http://bugs.mysql.com/bug.php?id=57643

I'm gonna subscribe for it and see if it's gonna be resolved.

Many thanks guys for all your assistance!

2013/3/13 spameden spame...@gmail.com:
 2013/3/13 Rick James rja...@yahoo-inc.com:
 AUTO_INCREMENT guarantees that it will not assign the same number twice.  
 That's about all it is willing to guarantee.

 With InnoDB, if a transaction starts, uses an auto_inc value, then rolls 
 back, that id is lost.

 True, but if you do not specify START TRANSACTION I believe it's done
 automatically? Nothing rolled back for me for that table and noone has
 been using it except me.


 When you have multiple threads loading data into the same table, diff values 
 of innodb_autoinc_lock_mode give you diff tradeoff between speed and 
 predictability.  If replication is involved, you want predictability.

 No, I do not have multiple threads, only 1.


 InnoDB and MyISAM act differently, especially after recovering from a crash.

 I understand the difference between InnoDB and MyISAM. InnoDB is a
 transactional DB engine with single row-level locking.


 If you DELETE the _highest_ id, then restart the server, that id will be 
 reused.  (This is irritating to some people.)  Otherwise, a deleted id will 
 not be reused.

 I didn't DELETE anything! The only actions I did:

 1. Created the TABLE
 2. used LOAD FILE only via command line (1 thread)

 So is it normal or should I fill a bug?

 There may be more.  Most of those are covered here:
 http://mysql.rjweb.org/doc.php/ricksrots




 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:46 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: auto_increment field behavior

 2013/3/13 Rick James rja...@yahoo-inc.com:
  What settings?  (innodb_autoinc_lock_mode comes to mind, but there
 may
  be others.)
 Hi, Rick.

 Many thanks for the quick answer here is my settings:

 mysql show variables like '%inc%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | auto_increment_increment| 1 |
 | auto_increment_offset   | 1 |
 | div_precision_increment | 4 |
 | innodb_autoextend_increment | 8 |
 | innodb_autoinc_lock_mode| 1 |
 +-+---+
 5 rows in set (0.00 sec)


 
  It is acceptable, by the definition of AUTO_INCREMENT, for it to burn
 the missing 15K ids.

 I don't get this explanation, could you please explain bit more? So
 it's completely normal for AUTO_INCREMENT field to act like this?


 
  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Tuesday, March 12, 2013 2:34 PM
  To: mysql@lists.mysql.com
  Subject: auto_increment field behavior
 
  Hi, I'm running MySQL-5.5 on Ubuntu
 
  ~ $ mysqld -V
  mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
  ((Ubuntu))
 
  Would like to know if it's normal behavior with auto_increment field
  (tried both signed and unsigned now):
 
  mysql show create table phone_codes;
  +-------------+------------------------------------------------------+
  | Table       | Create Table                                         |
  +-------------+------------------------------------------------------+
  | phone_codes | CREATE TABLE `phone_codes` (
`pc_id` int(11) NOT NULL AUTO_INCREMENT,
`pc_type` enum('ABC','DEF') DEFAULT NULL,
`pc_code` decimal(3,0) NOT NULL,
`pc_from` decimal(7,0) NOT NULL,
`pc_to` decimal(7,0) NOT NULL,
`pc_capacity` decimal(8,0) NOT NULL,
`pc_operator` varchar(255) DEFAULT NULL,
`pc_city` varchar(255

Re: auto_increment field behavior

2013-03-12 Thread spameden
2013/3/13 Rick James rja...@yahoo-inc.com:
 AUTO_INCREMENT guarantees that it will not assign the same number twice.  
 That's about all it is willing to guarantee.

 With InnoDB, if a transaction starts, uses an auto_inc value, then rolls 
 back, that id is lost.

True, but if you do not specify START TRANSACTION I believe it's done
automatically? Nothing was rolled back for me on that table, and no one has
been using it except me.


 When you have multiple threads loading data into the same table, diff values 
 of innodb_autoinc_lock_mode give you diff tradeoff between speed and 
 predictability.  If replication is involved, you want predictability.

No, I do not have multiple threads, only 1.


 InnoDB and MyISAM act differently, especially after recovering from a crash.

I understand the difference between InnoDB and MyISAM. InnoDB is a
transactional DB engine with single row-level locking.


 If you DELETE the _highest_ id, then restart the server, that id will be 
 reused.  (This is irritating to some people.)  Otherwise, a deleted id will 
 not be reused.

I didn't DELETE anything! The only actions I did:

1. Created the TABLE
2. used LOAD FILE only via command line (1 thread)

So is it normal, or should I file a bug?

 There may be more.  Most of those are covered here:
 http://mysql.rjweb.org/doc.php/ricksrots




 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:46 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: auto_increment field behavior

 2013/3/13 Rick James rja...@yahoo-inc.com:
  What settings?  (innodb_autoinc_lock_mode comes to mind, but there
 may
  be others.)
 Hi, Rick.

 Many thanks for the quick answer here is my settings:

 mysql show variables like '%inc%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | auto_increment_increment| 1 |
 | auto_increment_offset   | 1 |
 | div_precision_increment | 4 |
 | innodb_autoextend_increment | 8 |
 | innodb_autoinc_lock_mode| 1 |
 +-+---+
 5 rows in set (0.00 sec)


 
  It is acceptable, by the definition of AUTO_INCREMENT, for it to burn
 the missing 15K ids.

 I don't get this explanation, could you please explain bit more? So
 it's completely normal for AUTO_INCREMENT field to act like this?


 
  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Tuesday, March 12, 2013 2:34 PM
  To: mysql@lists.mysql.com
  Subject: auto_increment field behavior
 
  Hi, I'm running MySQL-5.5 on Ubuntu
 
  ~ $ mysqld -V
  mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
  ((Ubuntu))
 
  Would like to know if it's normal behavior with auto_increment field
  (tried both signed and unsigned now):
 
  mysql show create table phone_codes;
  +-------------+------------------------------------------------------+
  | Table       | Create Table                                         |
  +-------------+------------------------------------------------------+
  | phone_codes | CREATE TABLE `phone_codes` (
`pc_id` int(11) NOT NULL AUTO_INCREMENT,
`pc_type` enum('ABC','DEF') DEFAULT NULL,
`pc_code` decimal(3,0) NOT NULL,
`pc_from` decimal(7,0) NOT NULL,
`pc_to` decimal(7,0) NOT NULL,
`pc_capacity` decimal(8,0) NOT NULL,
`pc_operator` varchar(255) DEFAULT NULL,
`pc_city` varchar(255) DEFAULT NULL,
`pc_region` varchar(255) DEFAULT NULL,
PRIMARY KEY (`pc_id`),
KEY `pc_code` (`pc_code`),
KEY `pc_code_from_to` (`pc_code`,`pc_from`,`pc_to`),
KEY `pc_operator` (`pc_operator`),
KEY `pc_city

Re: auto_increment field behavior

2013-03-12 Thread spameden
2013/3/13 Rick James rja...@yahoo-inc.com:
 What settings?  (innodb_autoinc_lock_mode comes to mind, but there may be 
 others.)
Hi, Rick.

Many thanks for the quick answer here is my settings:

mysql show variables like '%inc%';
+-+---+
| Variable_name   | Value |
+-+---+
| auto_increment_increment| 1 |
| auto_increment_offset   | 1 |
| div_precision_increment | 4 |
| innodb_autoextend_increment | 8 |
| innodb_autoinc_lock_mode| 1 |
+-+---+
5 rows in set (0.00 sec)



 It is acceptable, by the definition of AUTO_INCREMENT, for it to burn the 
 missing 15K ids.

I don't get this explanation, could you please explain a bit more? So
it's completely normal for an AUTO_INCREMENT field to act like this?



 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:34 PM
 To: mysql@lists.mysql.com
 Subject: auto_increment field behavior

 Hi, I'm running MySQL-5.5 on Ubuntu

 ~ $ mysqld -V
 mysqld  Ver 5.5.29-0ubuntu0.12.04.2 for debian-linux-gnu on x86_64
 ((Ubuntu))

 Would like to know if it's normal behavior with auto_increment field
 (tried both signed and unsigned now):

 mysql show create table phone_codes;
 +-------------+------------------------------------------------------+
 | Table       | Create Table                                         |
 +-------------+------------------------------------------------------+
 | phone_codes | CREATE TABLE `phone_codes` (
   `pc_id` int(11) NOT NULL AUTO_INCREMENT,
   `pc_type` enum('ABC','DEF') DEFAULT NULL,
   `pc_code` decimal(3,0) NOT NULL,
   `pc_from` decimal(7,0) NOT NULL,
   `pc_to` decimal(7,0) NOT NULL,
   `pc_capacity` decimal(8,0) NOT NULL,
   `pc_operator` varchar(255) DEFAULT NULL,
   `pc_city` varchar(255) DEFAULT NULL,
   `pc_region` varchar(255) DEFAULT NULL,
   PRIMARY KEY (`pc_id`),
   KEY `pc_code` (`pc_code`),
   KEY `pc_code_from_to` (`pc_code`,`pc_from`,`pc_to`),
   KEY `pc_operator` (`pc_operator`),
   KEY `pc_city` (`pc_city`),
   KEY `pc_region` (`pc_region`)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8 |
 +-------------+------------------------------------------------------+
 1 row in set (0.00 sec)

 mysql load data infile '/tmp/ABC3x' into table phone_codes fields
 terminated by ';' enclosed by '#' lines terminated by '\r\n'
 (pc_code,pc_from,pc_to,pc_capacity,pc_operator,pc_city,pc_region) SET
 pc_type='ABC'; Query OK, 17922 rows affected (4.44 sec)
 Records: 17922  Deleted: 0  Skipped: 0  Warnings: 0

 mysql show create table phone_codes

Re: auto_increment field behavior

2013-03-12 Thread spameden
Also, I forgot to quote from the docs
(http://dev.mysql.com/doc/refman/5.5/en/innodb-auto-increment-handling.html)

With innodb_autoinc_lock_mode set to 0 (“traditional”) or 1
(“consecutive”), the auto-increment values generated by any given
statement will be consecutive, without gaps, because the table-level
AUTO-INC lock is held until the end of the statement, and only one
such statement can execute at a time. 

So I believe this is a bug in MySQL because there were no parallel
INSERTs at all.

Sorry for the spam :)

2013/3/13 spameden spame...@gmail.com:
 After setting innodb_autoinc_lock_mode=0 it seems to start working as
 expected for me:

 mysql show variables like 'innodb_autoinc_lock_mode';
 +--+---+
 | Variable_name| Value |
 +--+---+
 | innodb_autoinc_lock_mode | 0 |
 +--+---+
 1 row in set (0.00 sec)

 mysql truncate test;
 Query OK, 0 rows affected (0.01 sec)

 mysql load data infile '/tmp/ABC3x' into table test fields terminated
 by ';' enclosed by '#' lines terminated by '\r\n'
 (@var1,@var2,@var3,@var4,@var5,@var6,@var7) SET pc_type='ABC';
 Query OK, 17922 rows affected (0.21 sec)
 Records: 17922  Deleted: 0  Skipped: 0  Warnings: 0

 mysql show create table test;
 +---+-+
 | Table | Create Table

 |
 +---+-+
 | test  | CREATE TABLE `test` (
   `pc_id` int(11) unsigned NOT NULL AUTO_INCREMENT,
   `pc_type` enum('ABC','DEF') DEFAULT NULL,
   PRIMARY KEY (`pc_id`)
 ) ENGINE=InnoDB AUTO_INCREMENT=17923 DEFAULT CHARSET=utf8 |
 +---+-+
 1 row in set (0.00 sec)

 Shame it's a read-only variable and need to restart whole MySQL server.


 2013/3/13 spameden spame...@gmail.com:
 Nevermind, I've found the bug:

 http://bugs.mysql.com/bug.php?id=57643

 I'm gonna subscribe for it and see if it's gonna be resolved.

 Many thanks guys for all your assistance!

 2013/3/13 spameden spame...@gmail.com:
 2013/3/13 Rick James rja...@yahoo-inc.com:
 AUTO_INCREMENT guarantees that it will not assign the same number twice.  
 That's about all it is willing to guarantee.

 With InnoDB, if a transaction starts, uses an auto_inc value, then rolls 
 back, that id is lost.

 True, but if you do not specify START TRANSACTION I believe it's done
 automatically? Nothing rolled back for me for that table and noone has
 been using it except me.


 When you have multiple threads loading data into the same table, diff 
 values of innodb_autoinc_lock_mode give you diff tradeoff between speed 
 and predictability.  If replication is involved, you want predictability.

 No, I do not have multiple threads, only 1.


 InnoDB and MyISAM act differently, especially after recovering from a 
 crash.

 I understand the difference between InnoDB and MyISAM. InnoDB is a
 transactional DB engine with single row-level locking.


 If you DELETE the _highest_ id, then restart the server, that id will be 
 reused.  (This is irritating to some people.)  Otherwise, a deleted id 
 will not be reused.

 I didn't DELETE anything! The only actions I did:

 1. Created the TABLE
 2. used LOAD FILE only via command line (1 thread)

 So is it normal or should I fill a bug?

 There may be more.  Most of those are covered here:
 http://mysql.rjweb.org/doc.php/ricksrots




 -Original Message-
 From: spameden [mailto:spame...@gmail.com]
 Sent: Tuesday, March 12, 2013 2:46 PM
 To: Rick James
 Cc: mysql@lists.mysql.com
 Subject: Re: auto_increment field behavior

 2013/3/13 Rick James rja...@yahoo-inc.com:
  What settings?  (innodb_autoinc_lock_mode comes to mind, but there
 may
  be others.)
 Hi, Rick.

 Many thanks for the quick answer here is my settings:

 mysql show variables like '%inc%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | auto_increment_increment| 1 |
 | auto_increment_offset   | 1 |
 | div_precision_increment | 4 |
 | innodb_autoextend_increment | 8 |
 | innodb_autoinc_lock_mode| 1 |
 +-+---+
 5 rows in set (0.00 sec)


 
  It is acceptable, by the definition of AUTO_INCREMENT, for it to burn
 the missing 15K ids.

 I don't get this explanation, could you please explain bit more? So
 it's completely normal

Re: Recover dropped database

2012-10-29 Thread spameden
Hi,
could your colleague please share the steps he took to recover the data?

I'd most definitely be interested!

Thanks

2012/10/29 Lorenzo Milesi max...@ufficyo.com

  That's rough. The only thing I could suggest is try out Percona's
  data recovery tool

 My collegue did some recovery using Percona tools and (suspance...)
 recovered 95% of the data!

 Lovely!

 Ciao.
 maxxer
 --
 Lorenzo Milesi - lorenzo.mil...@yetopen.it

 GPG/PGP Key-Id: 0xE704E230 - http://keyserver.linux.it



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
Will do.

mysql> SHOW GLOBAL VARIABLES LIKE '%log%';
+------------------------------------------+---------------------------------+
| Variable_name                            | Value                           |
+------------------------------------------+---------------------------------+
| back_log                                 | 50                              |
| binlog_cache_size                        | 32768                           |
| binlog_direct_non_transactional_updates  | OFF                             |
| binlog_format                            | MIXED                           |
| expire_logs_days                         | 5                               |
| general_log                              | OFF                             |
| general_log_file                         | /var/run/mysqld/mysqld.log      |
| innodb_flush_log_at_trx_commit           | 2                               |
| innodb_flush_log_at_trx_commit_session   | 3                               |
| innodb_locks_unsafe_for_binlog           | OFF                             |
| innodb_log_block_size                    | 512                             |
| innodb_log_buffer_size                   | 8388608                         |
| innodb_log_file_size                     | 2145386496                      |
| innodb_log_files_in_group                | 2                               |
| innodb_log_group_home_dir                | ./                              |
| innodb_mirrored_log_groups               | 1                               |
| innodb_overwrite_relay_log_info          | OFF                             |
| log                                      | OFF                             |
| log_bin                                  | ON                              |
| log_bin_trust_function_creators          | ON                              |
| log_bin_trust_routine_creators           | ON                              |
| log_error                                | /var/log/mysql-error.log        |
| log_output                               | FILE                            |
| log_queries_not_using_indexes            | ON                              |
| log_slave_updates                        | OFF                             |
| log_slow_admin_statements                | OFF                             |
| log_slow_filter                          |                                 |
| log_slow_queries                         | ON                              |
| log_slow_rate_limit                      | 1                               |
| log_slow_slave_statements                | OFF                             |
| log_slow_sp_statements                   | ON                              |
| log_slow_timestamp_every                 | OFF                             |
| log_slow_verbosity                       | microtime                       |
| log_warnings                             | 1                               |
| max_binlog_cache_size                    | 18446744073709547520            |
| max_binlog_size                          | 104857600                       |
| max_relay_log_size                       | 0                               |
| relay_log                                | /var/log/mysql/mysqld-relay-bin |
| relay_log_index                          |                                 |
| relay_log_info_file                      | relay-log.info                  |
| relay_log_purge                          | ON                              |
| relay_log_space_limit                    | 0                               |
| slow_query_log                           | ON                              |
| slow_query_log_file                      | /var/log/mysql/mysql-slow.log   |
| slow_query_log_microseconds_timestamp    | OFF                             |
| sql_log_bin                              | ON                              |
| sql_log_off                              | OFF                             |
| sql_log_update                           | ON                              |
| suppress_log_warning_1592                | OFF                             |
| sync_binlog                              | 0                               |
| use_global_log_slow_control              | none                            |
+------------------------------------------+---------------------------------+
51 rows in set (0.01 sec)

Here is the full output. The query gets written to the slow log ONLY if
log_queries_not_using_indexes is turned ON.

The query takes:
# Query_time: 0.291280  Lock_time: 0.50  Rows_sent: 0  Rows_examined:
133876  Rows_affected: 0  Rows_read: 1
# Bytes_sent: 1775  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
# InnoDB_trx_id: F229398
SET timestamp=1350389078;
SELECT sql_id, momt, sender, receiver, udhdata, msgdata, time, smsc_id,
service, account, id, sms_type, mclass, mwi, coding, compress, validity,
deferred, dlr_mask, dlr_url, pid, alt_dcs, rpi, charset, boxc_id, binfo,
meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
WHERE time <= UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;


2012/10/16 Shawn Green shawn.l.gr...@oracle.com

 On 10/15/2012 7:15 PM, spameden wrote:

 Thanks a lot for all your comments!

 I did disable Query cache before testing with

 set query_cache_type=OFF

 for the current session.

 I will report this to the MySQL bugs site later.



 First. What are all of your logging settings?

 SHOW GLOBAL VARIABLES LIKE '%log%';

 Next. When you physically look in the slow query log, how long does it say
 that it took this command to execute?

 And last, before you can ask MySQL to fix a bug, you must first ensure
 it's a MySQL bug. Please try to reproduce your results using official
 binaries, not those constructed by a third party.  If the problem exists in
 our packages, do tell us about it. If the problem is not reproducible using
 official MySQL products, then please report it to the appropriate channel
 for the product you are using.

 MySQL Bugs -
 http://bugs.mysql.com/

 Thanks!
 --
 Shawn Green
 MySQL Principal Technical Support Engineer
 Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
 Office: Blountville, TN



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
Interesting thought, but I get the same result.

# Query_time: 0.001769  Lock_time: 0.001236 Rows_sent: 0  Rows_examined: 0
use kannel;
SET timestamp=1350413592;
select * from send_sms FORCE INDEX (priority_time) where time <= @ut order by
priority limit 0,11;

The MySQL I'm using is 5.5.28 from dotdeb.org; I'm pretty sure it's close to
the official build except for the packaging scripts.

I will check this with the official release from the MySQL site and report back.

Thanks to all.

2012/10/16 Michael Dykman mdyk...@gmail.com

 your now() statement is getting executed for every row on the select.  try
 putting the phrase up front
 as in:
 set @ut= unix_timestamp(now())
 and then use that in your statement.

 On 2012-10-16 8:42 AM, spameden spame...@gmail.com wrote:

 Will do.

 mysql  SHOW GLOBAL VARIABLES LIKE '%log%';

 +-+-+
 | Variable_name   | Value
 |

 +-+-+
 | back_log| 50
 |
 | binlog_cache_size   | 32768
 |
 | binlog_direct_non_transactional_updates | OFF
 |
 | binlog_format   | MIXED
 |
 | expire_logs_days| 5
 |
 | general_log | OFF
 |
 | general_log_file| /var/run/mysqld/mysqld.log
 |
 | innodb_flush_log_at_trx_commit  | 2
 |
 | innodb_flush_log_at_trx_commit_session  | 3
 |
 | innodb_locks_unsafe_for_binlog  | OFF
 |
 | innodb_log_block_size   | 512
 |
 | innodb_log_buffer_size  | 8388608
 |
 | innodb_log_file_size| 2145386496
 |
 | innodb_log_files_in_group   | 2
 |
 | innodb_log_group_home_dir   | ./
 |
 | innodb_mirrored_log_groups  | 1
 |
 | innodb_overwrite_relay_log_info | OFF
 |
 | log | OFF
 |
 | log_bin | ON
 |
 | log_bin_trust_function_creators | ON
 |
 | log_bin_trust_routine_creators  | ON
 |
 | log_error   | /var/log/mysql-error.log
 |
 | log_output  | FILE
 |
 | log_queries_not_using_indexes   | ON
 |
 | log_slave_updates   | OFF
 |
 | log_slow_admin_statements   | OFF
 |
 | log_slow_filter |
 |
 | log_slow_queries| ON
 |
 | log_slow_rate_limit | 1
 |
 | log_slow_slave_statements   | OFF
 |
 | log_slow_sp_statements  | ON
 |
 | log_slow_timestamp_every| OFF
 |
 | log_slow_verbosity  | microtime
 |
 | log_warnings| 1
 |
 | max_binlog_cache_size   | 18446744073709547520
 |
 | max_binlog_size | 104857600
 |
 | max_relay_log_size  | 0
 |
 | relay_log   | /var/log/mysql/mysqld-relay-bin
 |
 | relay_log_index |
 |
 | relay_log_info_file | relay-log.info
 |
 | relay_log_purge | ON
 |
 | relay_log_space_limit   | 0
 |
 | slow_query_log  | ON
 |
 | slow_query_log_file | /var/log/mysql/mysql-slow.log
 |
 | slow_query_log_microseconds_timestamp   | OFF
 |
 | sql_log_bin | ON
 |
 | sql_log_off | OFF
 |
 | sql_log_update  | ON
 |
 | suppress_log_warning_1592   | OFF
 |
 | sync_binlog | 0
 |
 | use_global_log_slow_control | none
 |

 +-+-+
 51 rows in set (0.01 sec)

 Here is full output, but writing happens ONLY if
 log_queries_not_using_indexes turned ON.

 Query takes:
 # Query_time: 0.291280  Lock_time: 0.50  Rows_sent: 0  Rows_examined:
 133876  Rows_affected: 0  Rows_read: 1
 # Bytes_sent: 1775  Tmp_tables: 0  Tmp_disk_tables: 0  Tmp_table_sizes: 0
 # InnoDB_trx_id: F229398
 SET timestamp=1350389078;
 SELECT sql_id, momt, sender, receiver, udhdata, msgdata, time, smsc_id,
 service, account, id, sms_type, mclass, mwi, coding, compress, validity,
 deferred, dlr_mask, dlr_url, pid, alt_dcs, rpi, charset, boxc_id, binfo,
 meta_data, task_id, msgid FROM send_sms_test FORCE INDEX (priority_time)
 WHERE time = UNIX_TIMESTAMP(NOW()) ORDER by priority LIMIT 0,50;



 2012/10/16 Shawn Green shawn.l.gr...@oracle.com

  On 10/15/2012 7:15 PM, spameden wrote:
 
  T...




Re: Odd Behavior During Replication Start-Up

2012-10-16 Thread spameden
2012/10/16 Tim Gustafson t...@soe.ucsc.edu

 Thanks for all the responses; I'll respond to each of them in turn below:

  you can not simply copy a single database in this state
  innodb is much more complex like myisam

 I know; that's why I rsync'd the entire /var/db/mysql folder (which
 includes the ib_logfile and ibdata files, as well as all other
 database and table data), not just individual databases.  I also made
 sure that flush tables with read lock had been executed before
 creating the snapshot.  The steps I followed were verbatim what the
 MySQL documentation said to do.  The MySQL documentation even mentions
 ZFS snapshots as an effective way to make a backup:

 http://dev.mysql.com/doc/refman/5.5/en/flush.html

  I have to agree with Harald on this: filesystem snapshots are not an
  effective way to clone innodb databases.  The rsync-based method
  described has worked for me in large scale data situations very
  reliably.

 I'm confused: in the first sentence, you say snapshots are bad (which
 directly contradicts the official MySQL documentation), and in the
 second sentence you say rsync is good.  Why would an rsync of a file
 system snapshot not be good enough?  By the way: I forgot to mention
 that I also did create a snapshot when the MySQL server on db-01 was
 actually shut down, and got the same sort of results.


I think you can do this too; I did it once. But make sure you have changed
server-id in my.cnf and use CHANGE MASTER TO to point the slave at the proper
master; you can of course run RESET MASTER too on the slave if you want.
Basically, RESET MASTER flushes all binary logs and resets to the first log
file and position. A sketch of the slave side:
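
A minimal sketch (host, user, password and coordinates are placeholders to be
replaced with the values from SHOW MASTER STATUS on the master):

  mysql -e "CHANGE MASTER TO MASTER_HOST='db-01.example.com', MASTER_USER='repl', MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"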


  You can do replication flawlessly for InnoDB tables without stopping
  master at all.
 
  what you need is issue on master:
 
  mysqldump --single-transaction -A > ALL.databases.dump.sql
  mysql -e 'SHOW MASTER STATUS\G' > ALL.binlog

 All total, we have approximately 125GB of MySQL databases.  That
 command would take hours to run.

True, it's not fast.


 During that time, no new
 transactions could be committed to any of our databases, and
 performance for read-only queries would be seriously affected.
 Further, we have a combination of MyISAM and InnoDB databases and
 tables, and the --single-transaction parameter to mysqldump does not
 lock MyISAM tables.


Yes, it's true, the thing I mentioned only relates to InnoDB, but for
MyISAM you can issue LOCK TABLES for writing on the specific tables.

 There used to be a MySQL command that basically
 did all that in one statement (LOAD DATA FROM MASTER), but they
 dropped it because of the difficulties in getting all the master data
 that way.


 --

 Tim Gustafson
 t...@soe.ucsc.edu
 831-459-5354
 Baskin Engineering, Room 313A

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Odd Behavior During Replication Start-Up

2012-10-16 Thread spameden
Also, I forgot to say: you need to shut MySQL down completely before rsync'ing
its data, otherwise your snapshot might be inconsistent and InnoDB will fail.

Also make sure the error log shows that the database shut down cleanly. For example:
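
A rough sketch of such a clone, with paths and hostnames as placeholders:

  mysqladmin shutdown
  tail -n 20 /var/log/mysql-error.log    # confirm it reports a clean "Shutdown complete"
  rsync -a /var/lib/mysql/ slave-host:/var/lib/mysql/
  # start mysqld on both machines again, and fix server-id on the copy
  # before configuring replication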

2012/10/16 Tim Gustafson t...@soe.ucsc.edu

  load data from master never worked for innodb.

 And the suggested mysqldump command does not work for MyISAM.

 Either way, the suggestion is a non-starter.  I could flush tables
 with read lock and then do a mysqldump but again that would take
 hours and all the databases would be read-only during the whole
 operation, and even the read-only performance would be poor.  I'd
 really prefer not to do that.

 I'll say again that the MySQL documentation specifically says that ZFS
 snapshots are a good way to make backups of MySQL databases.  Is the
 documentation incorrect?  Or perhaps does that also only work for
 MyISAM tables?  If so, that ought to be called out more clearly in the
 documentation.

 --

 Tim Gustafson
 t...@soe.ucsc.edu
 831-459-5354
 Baskin Engineering, Room 313A

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-16 Thread spameden
That's exactly what I thought when reading Michael's email, but I tried it
anyway. Thanks for the clarification :)
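
A quick way to see it for yourself (just an illustrative sketch; the actual
timestamps will differ):

  -- both NOW() calls return the same value even though SLEEP(2) runs in
  -- between, because NOW() is evaluated once at statement start:
  SELECT NOW(), SLEEP(2), NOW();

  -- Michael's suggestion, capturing the value in a user variable first:
  SET @ut = UNIX_TIMESTAMP(NOW());
  SELECT @ut;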

2012/10/16 h...@tbbs.net

  2012/10/16 12:57 -0400, Michael Dykman 
 your now() statement is getting executed for every row on the select.  try
 putting the phrase up front
 as in:
 set @ut= unix_timestamp(now())
 and then use that in your statement.
 
 Quote:

 Functions that return the current date or time each are evaluated only
 once per query at the start of query execution. This means that multiple
 references to a function such as NOW() within a single query always produce
 the same result. (For our purposes, a single query also includes a call to
 a stored program (stored routine, trigger, or event) and all subprograms
 called by that program.) This principle also applies to CURDATE(),
 CURTIME(), UTC_DATE(), UTC_TIME(), UTC_TIMESTAMP(), and to any of their
 synonyms.



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Hi, list.

Sorry for the long subject, but I'm really interested in solving this and
need some help:

I've got a table:

mysql show create table send_sms_test;
+---------------+--------------+
| Table         | Create Table |
+---------------+--------------+
| send_sms_test | CREATE TABLE `send_sms_test` (
  `sql_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `momt` enum('MO','MT') DEFAULT NULL,
  `sender` varchar(20) DEFAULT NULL,
  `receiver` varchar(20) DEFAULT NULL,
  `udhdata` blob,
  `msgdata` text,
  `time` bigint(20) NOT NULL,
  `smsc_id` varchar(255) DEFAULT 'main',
  `service` varchar(255) DEFAULT NULL,
  `account` varchar(255) DEFAULT NULL,
  `id` bigint(20) DEFAULT NULL,
  `sms_type` tinyint(1) DEFAULT '2',
  `mclass` bigint(20) DEFAULT NULL,
  `mwi` bigint(20) DEFAULT NULL,
  `coding` bigint(20) DEFAULT NULL,
  `compress` bigint(20) DEFAULT NULL,
  `validity` bigint(20) DEFAULT NULL,
  `deferred` bigint(20) DEFAULT NULL,
  `dlr_mask` bigint(20) DEFAULT NULL,
  `dlr_url` varchar(255) DEFAULT NULL,
  `pid` bigint(20) DEFAULT NULL,
  `alt_dcs` bigint(20) DEFAULT NULL,
  `rpi` bigint(20) DEFAULT NULL,
  `charset` varchar(255) DEFAULT NULL,
  `boxc_id` varchar(255) DEFAULT NULL,
  `binfo` varchar(255) DEFAULT NULL,
  `meta_data` text,
  `task_id` bigint(20) DEFAULT NULL,
  `msgid` bigint(20) DEFAULT NULL,
  `priority` int(3) unsigned NOT NULL DEFAULT '500',
  PRIMARY KEY (`sql_id`),
  KEY `task_id` (`task_id`),
  KEY `receiver` (`receiver`),
  KEY `msgid` (`msgid`),
  KEY `priority_time` (`priority`,`time`)
) ENGINE=InnoDB AUTO_INCREMENT=7806318 DEFAULT CHARSET=utf8

Slow-query logging is enabled, with this option turned on:
| log_queries_not_using_indexes | ON|

mysqld --version
mysqld  Ver 5.1.65-rel14.0 for debian-linux-gnu on x86_64 ((Percona Server
(GPL), 14.0, Revision 475))

If I check with EXPLAIN MySQL says it would use the index:
mysql *desc select * from send_sms_test where time=UNIX_TIMESTAMP(NOW())
order by priority limit 0,11;*
++-+---+---+---+---+-+--+--+-+
| id | select_type | table | type  | possible_keys | key
| key_len | ref  | rows | Extra   |
++-+---+---+---+---+-+--+--+-+
|  1 

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Hi, I've just checked on MySQL-5.5.28:

it behaves exactly the same.

I need to use the (priority, time) KEY instead of (time, priority) because the
query performs better with it.

With the first key there is no need to sort at all, whereas with the latter:
mysql *desc select * from send_sms_test FORCE INDEX (time_priority) where
time=UNIX_TIMESTAMP(NOW()) order by priority limit 0,13;*
++-+---+---+---+---+-+--+---+-+
| id | select_type | table | type  | possible_keys | key
| key_len | ref  | rows  | Extra   |
++-+---+---+---+---+-+--+---+-+
|  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority
| 8   | NULL | 73920 | Using where; *Using filesort* |
++-+---+---+---+---+-+--+---+-+
1 row in set (0.00 sec)

It uses a filesort and results in worse performance...

Any suggestions? Should I submit a bug?
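
For anyone following along, the second index was added roughly like this (a
sketch; priority_time is already in the table definition above, time_priority
was created just for this test):

  ALTER TABLE send_sms_test ADD INDEX time_priority (time, priority);

  -- With (priority, time) the index already returns rows in priority order,
  -- so ORDER BY priority LIMIT N can stop after N matching rows: no filesort.
  -- With (time, priority) the range scan on time returns rows in time order,
  -- so MySQL has to sort them by priority afterwards, hence the filesort.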

2012/10/16 spameden spame...@gmail.com

 Hi, list.

 Sorry for the long subject, but I'm really interested in solving this and
 need a help:

 I've got a table:

 mysql show create table send_sms_test;

 +---+---+
 | Table | Create
 Table
 |

 +---+---+
 | send_sms_test | CREATE TABLE `send_sms_test` (
   `sql_id` bigint(20) NOT NULL AUTO_INCREMENT,
   `momt` enum('MO','MT') DEFAULT NULL,
   `sender` varchar(20) DEFAULT NULL,
   `receiver` varchar(20) DEFAULT NULL,
   `udhdata` blob,
   `msgdata` text,
   `time` bigint(20) NOT NULL,
   `smsc_id` varchar(255) DEFAULT 'main',
   `service` varchar(255) DEFAULT NULL,
   `account` varchar(255) DEFAULT NULL,
   `id` bigint(20) DEFAULT NULL,
   `sms_type` tinyint(1) DEFAULT '2',
   `mclass` bigint(20) DEFAULT NULL,
   `mwi` bigint(20) DEFAULT NULL,
   `coding` bigint(20) DEFAULT NULL,
   `compress` bigint(20) DEFAULT NULL,
   `validity` bigint(20) DEFAULT NULL,
   `deferred` bigint(20) DEFAULT NULL,
   `dlr_mask` bigint(20) DEFAULT NULL,
   `dlr_url` varchar(255) DEFAULT NULL,
   `pid` bigint(20) DEFAULT NULL,
   `alt_dcs

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
|
| Handler_read_next | 576090 |
| Handler_read_prev | 0  |
| Handler_read_rnd  | 126|
| Handler_read_rnd_next | 223|
+---++
6 rows in set (0.00 sec)

mysql select * from send_sms_test FORCE INDEX (time_priority) where
time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
100 rows in set (0.09 sec)

mysql SHOW SESSION STATUS LIKE 'Handler_read%';
+---++
| Variable_name | Value  |
+---++
| Handler_read_first| 18 |
| Handler_read_key  | 244|
| Handler_read_next | 719969 |
| Handler_read_prev | 0  |
| Handler_read_rnd  | 226|
| Handler_read_rnd_next | 223|
+---++
6 rows in set (0.00 sec)

I don't understand the Handler counters very well; could you please explain a
bit more, based on the results I've posted? In which case does it work better,
and how does it use the index?

About BIGINT(20) and INT(3): I will look into this later. I understand they may
be oversized, but my main question is about the index and why it is being used
so strangely.

Many thanks for your quick answer!
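
A minimal version of the counter-diff procedure Rick describes below would be
something like this (sketch only; FLUSH STATUS resets the session counters so
each run starts from zero):

  FLUSH STATUS;
  SELECT * FROM send_sms_test FORCE INDEX (priority_time)
   WHERE time=UNIX_TIMESTAMP(NOW()) ORDER BY priority LIMIT 0,100;
  SHOW SESSION STATUS LIKE 'Handler_read%';

  FLUSH STATUS;
  SELECT * FROM send_sms_test FORCE INDEX (time_priority)
   WHERE time=UNIX_TIMESTAMP(NOW()) ORDER BY priority LIMIT 0,100;
  SHOW SESSION STATUS LIKE 'Handler_read%';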
2012/10/16 Rick James rja...@yahoo-inc.com

 * Rows = 11 / 22 -- don't take the numbers too seriously; they are crude
 approximations based on estimated cardinality.

 * The 11 comes from the LIMIT -- therefore useless in judging the
 efficiency.  (The 22 may be 2*11; I don't know.)

 * Run the EXPLAINs without LIMIT -- that will avoid the bogus 11/22.

 * If the CREATE INDEX took only 0.67 sec, I surmise that you have very few
 rows in the table??  So this discussion is not necessarily valid in general
 cases.

 * What percentage of time values meet the WHERE?  This has a big impact on
 the choice of explain plan and performance.

 * Set long_query_time = 0; to get it in the slowlog even if it is fast.
  Then look at the various extra values (such as filesort, on disk, temp
 table used, etc).

 * Do this (with each index):
 SHOW SESSION STATUS LIKE 'Handler_read%';
 SELECT ... FORCE INDEX(...) ...;
 SHOW SESSION STATUS LIKE 'Handler_read%';
 Then take the diffs of the handler counts.  This will give you a pretty
 detailed idea of what is going on; better than the SlowLog.

 * INT(3) is not a 3-digit integer, it is a full 32-bit integer (4 bytes).
  Perhaps you should have SMALLINT UNSIGNED (2 bytes).

 * BIGINT takes 8 bytes -- usually over-sized.


  -Original Message-
  From: spameden [mailto:spame...@gmail.com]
  Sent: Monday, October 15, 2012 1:42 PM
  To: mysql@lists.mysql.com
  Subject: mysql logs query with indexes used to the slow-log and not
  logging if there is index in reverse order
 
  Hi, list.
 
  Sorry for the long subject, but I'm really interested in solving this
  and need a help:
 
  I've got a table:
 
  mysql show create table send_sms_test;
  +---------------+--------------+
  | Table         | Create Table |

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Sorry, forgot to say:

mysql show variables like 'long_query_time%';
+-+---+
| Variable_name   | Value |
+-+---+
| long_query_time | 10.00 |
+-+---+
1 row in set (0.00 sec)

It's getting into the log only because of:

mysql show variables like '%indexes%';
+---+---+
| Variable_name | Value |
+---+---+
| log_queries_not_using_indexes | ON|
+---+---+
1 row in set (0.00 sec)

If I turn it off - it's all fine

My initial question was why MySQL logs it in the slow log if the query uses
an INDEX?

And why does it stop logging when I create an INDEX (time, priority), even
though the query specifies FORCE INDEX (priority_time), so MySQL shouldn't be
using the newly created (time, priority) index at all?
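
For reference, the switch is dynamic, so it can be toggled at runtime without a
restart (global scope only):

  SET GLOBAL log_queries_not_using_indexes = OFF;
  SHOW GLOBAL VARIABLES LIKE 'log_queries_not_using_indexes';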

2012/10/16 spameden spame...@gmail.com

 Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

 Here is the MySQL-5.1 Percona testing table:

 mysql select count(*) from send_sms_test;
 +--+
 | count(*) |
 +--+
 |   143879 |
 +--+
 1 row in set (0.03 sec)

 Without LIMIT:
 mysql desc select * from send_sms_test FORCE INDEX (time_priority) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--+---+-+

 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows  | Extra   |

 ++-+---+---+---+---+-+--+---+-+
 |  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority
 | 8   | NULL | 73920 | Using where; Using filesort |

 ++-+---+---+---+---+-+--+---+-+
 1 row in set (0.00 sec)

 mysql desc select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--++-+

 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows   | Extra   |

 ++-+---+---+---+---+-+--++-+
 |  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time
 | 12  | NULL | 147840 | Using where |

 ++-+---+---+---+---+-+--++-+

 1 row in set (0.00 sec)

 But I actually need to use LIMIT, because the client uses it to limit the
 number of records returned for processing.

 mysql select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.00 sec)

 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.53 |
 | Opening tables | 0.09 |
 | System lock| 0.05 |
 | Table lock | 0.04 |
 | init   | 0.37 |
 | optimizing | 0.05 |
 | statistics | 0.07 |
 | preparing  | 0.05 |
 | executing  | 0.01 |
 | Sorting result | 0.03 |
 | Sending data   | 0.000856 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.15 |
 | logging slow query | 0.01 |
 | logging slow query | 0.47 |
 | cleaning up| 0.02 |
 ++--+
 17 rows in set (0.00 sec)

 mysql select * from send_sms_test FORCE INDEX (time_priority) where
 time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.08 sec)
 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.48 |
 | Opening tables | 0.09 |
 | System lock| 0.02 |
 | Table lock | 0.04 |
 | init   | 0.47 |
 | optimizing | 0.06 |
 | statistics | 0.43 |
 | preparing  | 0.18 |
 | executing  | 0.01 |
 | Sorting result | 0.076725 |
 | Sending data   | 0.001406 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.12 |
 | logging slow query | 0.01 |
 | cleaning up| 0.02 |
 ++--+
 16 rows in set (0.00 sec)

 As you can see, the latter query takes more time because it's using a
 filesort as well.

 Now, handler:
 mysql SHOW SESSION STATUS LIKE 'Handler_read%';select * from
 send_sms_test FORCE INDEX (priority_time) where time=UNIX_TIMESTAMP(NOW())
 order by priority LIMIT 0,100;SHOW SESSION STATUS LIKE

Re: mysql logs query with indexes used to the slow-log and not logging if there is index in reverse order

2012-10-15 Thread spameden
Thanks a lot for all your comments!

I did disable Query cache before testing with

set query_cache_type=OFF

for the current session.
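
(In case it helps anyone reproducing this, an alternative to switching the
cache off for the whole session is to bypass it per query:)

  SET SESSION query_cache_type = OFF;
  -- or, per query:
  SELECT SQL_NO_CACHE * FROM send_sms_test
   WHERE time=UNIX_TIMESTAMP(NOW()) ORDER BY priority LIMIT 0,100;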

I will report this to the MySQL bugs site later.



2012/10/16 Rick James rja...@yahoo-inc.com

 My initial question was why MySQL logs it in the slow log if the
 query uses an INDEX?

 

 That _may_ be worth a bug report.


 A _possible_ answer...  EXPLAIN presents what the optimizer is in the mood
 for at that moment.  It does not necessarily reflect what it was in the
 mood for when it ran the query.


 When timing things, run them twice (and be sure not to hit the Query
 cache).  The first time freshens the cache (buffer_pool, etc); the second
 time gives you a 'reproducible' time.  I believe (without proof) that the
 cache contents can affect the optimizer's choice.


 *From:* spameden [mailto:spame...@gmail.com]
 *Sent:* Monday, October 15, 2012 3:29 PM

 *To:* Rick James
 *Cc:* mysql@lists.mysql.com
 *Subject:* Re: mysql logs query with indexes used to the slow-log and not
 logging if there is index in reverse order


 Sorry, forgot to say:

 mysql show variables like 'long_query_time%';
 +-+---+
 | Variable_name   | Value |
 +-+---+
 | long_query_time | 10.00 |
 +-+---+
 1 row in set (0.00 sec)

 It's getting in the log only due:

 mysql show variables like '%indexes%';
 +---+---+
 | Variable_name | Value |
 +---+---+
 | log_queries_not_using_indexes | ON|
 +---+---+
 1 row in set (0.00 sec)

 If I turn it off - it's all fine

 My initial question was why MySQL logs it in the slow log if the query
 uses an INDEX?

 And why it's not logging if I create an INDEX (time, priority) (but in the
 query there is FORCE INDEX (priority,time) specified, so MySQL shouldn't
 use newly created INDEX (time, priority) at all).

 2012/10/16 spameden spame...@gmail.com

 Sorry, my previous e-mail was a test on MySQL-5.5.28 on an empty table.

 Here is the MySQL-5.1 Percona testing table:

 mysql select count(*) from send_sms_test;
 +--+
 | count(*) |
 +--+
 |   143879 |
 +--+
 1 row in set (0.03 sec)

 Without LIMIT:
 mysql desc select * from send_sms_test FORCE INDEX (time_priority) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--+---+-+
 


 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows  | Extra   |


 ++-+---+---+---+---+-+--+---+-+
 |  1 | SIMPLE  | send_sms_test | range | time_priority | time_priority
 | 8   | NULL | 73920 | Using where; Using filesort |

 ++-+---+---+---+---+-+--+---+-+
 

 1 row in set (0.00 sec)

 mysql desc select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority;

 ++-+---+---+---+---+-+--++-+
 


 | id | select_type | table | type  | possible_keys | key
 | key_len | ref  | rows   | Extra   |


 ++-+---+---+---+---+-+--++-+
 |  1 | SIMPLE  | send_sms_test | index | NULL  | priority_time
 | 12  | NULL | 147840 | Using where |

 ++-+---+---+---+---+-+--++-+
 


 1 row in set (0.00 sec)

 But I actually need to use LIMIT, because client uses this to limit the
 number of records returned to process.

 mysql select * from send_sms_test FORCE INDEX (priority_time) where
 time=UNIX_TIMESTAMP(NOW()) order by priority LIMIT 0,100;
 100 rows in set (0.00 sec)

 mysql show profile;
 ++--+
 | Status | Duration |
 ++--+
 | starting   | 0.53 |
 | Opening tables | 0.09 |
 | System lock| 0.05 |
 | Table lock | 0.04 |
 | init   | 0.37 |
 | optimizing | 0.05 |
 | statistics | 0.07 |
 | preparing  | 0.05 |
 | executing  | 0.01 |
 | Sorting result | 0.03 |
 | Sending data   | 0.000856 |
 | end| 0.03 |
 | query end  | 0.01 |
 | freeing items  | 0.15 |
 | logging slow query | 0.01 |
 | logging slow query | 0.47 |
 | cleaning up| 0.02 |
 ++--+
 17 rows in set (0.00