Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Thanks, Rick - definitely something to think about. I've been
troubled by the pagination stuff in our code. This looks like
something I can definitely use!

andy

On 9/5/12 2:40 PM, Rick James wrote:

Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (e.g., a search engine) brought
a site to its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  By the 
time the entire list has been paged through, about N*N/2 units of work have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !


-----Original Message-----
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Wednesday, September 05, 2012 2:05 PM
To: mysql@lists.mysql.com
Subject: Re: Understanding Slow Query Log

Ok, this raises a question for me - what's a better way to do
pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.








--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page
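
A minimal sketch of the two queries, assuming a table named items with an
auto-increment PRIMARY KEY id (illustrative names, not from this thread):

    -- First page: no anchor yet; just take the first 51 rows.
    SELECT id, title
        FROM items
        ORDER BY id
        LIMIT 51;

    -- "Next" page: resume right after where the user left off.
    SELECT id, title
        FROM items
        WHERE id > 12345      -- 12345 = last id shown on the current page
        ORDER BY id
        LIMIT 51;

Note that this reads only the rows it returns, and it works this directly
only when the ORDER BY column is unique (such as the PK).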

'Exercises for the reader':
* 'Prev' (one possible shape is sketched after this list)
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.
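
For the 'Prev' exercise, one possible shape (same illustrative table as
above) is to walk the index backwards, then restore display order:

    -- "Prev" page: the 50 rows just before the current page.
    SELECT id, title
    FROM (
        SELECT id, title
            FROM items
            WHERE id < 12345  -- 12345 = first id shown on the current page
            ORDER BY id DESC
            LIMIT 50
    ) AS prev_page
    ORDER BY id;              -- put the page back in ascending order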

A sample UI layout (you've probably seen web pages like this):
   GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (e.g., a search engine) brought
a site to its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  
By the time the entire list has been paged through, about N*N/2 units of work 
have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !

> -----Original Message-----
> From: Andy Wallace [mailto:awall...@ihouseweb.com]
> Sent: Wednesday, September 05, 2012 2:05 PM
> To: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> Ok, this raises a question for me - what's a better way to do
> pagination?
> 
> On 9/5/12 2:02 PM, Rick James wrote:
> > * LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.
> 
> 
> 
> --
> Andy Wallace
> iHOUSEweb, Inc.
> awall...@ihouseweb.com
> (866) 645-7700 ext 219
> --
> "Sometimes it pays to stay in bed on Monday, rather than spending the
> rest of the week debugging Monday's code."
> - Christopher Thompson
> 





Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Ok, this raises a question for me - what's a better way to do pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson




RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
* SHOW VARIABLES LIKE 'innodb%';  -- some of them may be hurting performance.

* More than 20% of the table has bean_type = 'Workflow'?  -- if so, it is more
efficient to do a table scan than to use the index.

* KEY `I_WF_1_DTYPE` (`bean_type`) -->
KEY bean_time (`bean_type`, created_time)
"Compound" index may be your cure (see the sketch after this list).

* Fields with low cardinality (bean_type, status) make very poor INDEXes.

* Consider using an ENUM instead of VARCHAR for status and bean_type (and
others?)

* VARCHAR(255) is an awful PRIMARY KEY.  The PK is included implicitly (in 
InnoDB) in every secondary key.

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.
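
A sketch of the compound-index change suggested above; with it, the slow
query (WHERE bean_type = 'Workflow' ORDER BY created_time DESC LIMIT 0, 50)
can be answered from the first 50 matching index entries instead of the
~471K rows the log shows being examined.  The single-column index then
becomes redundant:

    ALTER TABLE WF_1
        ADD KEY bean_time (bean_type, created_time),
        DROP KEY I_WF_1_DTYPE;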

Lots more about these topics is discussed in similar questions in
http://forums.mysql.com/list.php?24
Lots more tips here:
http://mysql.rjweb.org/doc.php/ricksrots


> -----Original Message-----
> From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
> Sent: Wednesday, September 05, 2012 11:27 AM
> To: Michael Dykman
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> True, Michael. Pasting the output:
> 
> CREATE TABLE `WF_1` (
>   `id` varchar(255) NOT NULL,
>   `app_name` varchar(255) DEFAULT NULL,
>   `app_path` varchar(255) DEFAULT NULL,
>   `conf` text,
>   `group_name` varchar(255) DEFAULT NULL,
>   `parent_id` varchar(255) DEFAULT NULL,
>   `run` int(11) DEFAULT NULL,
>   `user_name` varchar(255) DEFAULT NULL,
>   `bean_type` varchar(31) DEFAULT NULL,
>   `auth_token` text,
>   `created_time` datetime DEFAULT NULL,
>   `end_time` datetime DEFAULT NULL,
>   `external_id` varchar(255) DEFAULT NULL,
>   `last_modified_time` datetime DEFAULT NULL,
>   `log_token` varchar(255) DEFAULT NULL,
>   `proto_action_conf` text,
>   `sla_xml` text,
>   `start_time` datetime DEFAULT NULL,
>   `status` varchar(255) DEFAULT NULL,
>   `wf_instance` mediumblob,
>   PRIMARY KEY (`id`),
>   KEY `I_WF_1_DTYPE` (`bean_type`),
>   KEY `I_WF_1_END_TIME` (`end_time`),
>   KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
>   KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
>   KEY `I_WF_1_STATUS` (`status`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1
> 
> 
> show indexes from WF_1;
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 | NULL     | NULL   |      | BTREE      |         |
> | WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 | NULL     | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 | NULL     | NULL   | YES  | BTREE      |         |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> 
> 
> Thanks
> 
> On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman 
> wrote:
> 
> > The attachments do not appear to be coming through.
> >
> > I am more curious what the cardinality of bean_type is.  What is the
> > result of select count(*) as cnt, bean_type from WF_1 group by
> > bean_type ?
> >
> > Low cardinality can render an index useless.
> >
> > On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
> >
> > I already attached the list.
> >
> > Attaching one more time & thanks for the interest.
> >
> > Cheers
> >
> >
> >
> > On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> > wrote:
> > >
> > >
> > >
> > > 2012/9/5 Adar...
> >
> >
> >
> >
> >




Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
True, Michael. Pasting the output:

CREATE TABLE `WF_1` (
  `id` varchar(255) NOT NULL,
  `app_name` varchar(255) DEFAULT NULL,
  `app_path` varchar(255) DEFAULT NULL,
  `conf` text,
  `group_name` varchar(255) DEFAULT NULL,
  `parent_id` varchar(255) DEFAULT NULL,
  `run` int(11) DEFAULT NULL,
  `user_name` varchar(255) DEFAULT NULL,
  `bean_type` varchar(31) DEFAULT NULL,
  `auth_token` text,
  `created_time` datetime DEFAULT NULL,
  `end_time` datetime DEFAULT NULL,
  `external_id` varchar(255) DEFAULT NULL,
  `last_modified_time` datetime DEFAULT NULL,
  `log_token` varchar(255) DEFAULT NULL,
  `proto_action_conf` text,
  `sla_xml` text,
  `start_time` datetime DEFAULT NULL,
  `status` varchar(255) DEFAULT NULL,
  `wf_instance` mediumblob,
  PRIMARY KEY (`id`),
  KEY `I_WF_1_DTYPE` (`bean_type`),
  KEY `I_WF_1_END_TIME` (`end_time`),
  KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
  KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
  KEY `I_WF_1_STATUS` (`status`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1


show indexes from WF_1;
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 | NULL     | NULL   |      | BTREE      |         |
| WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 | NULL     | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 | NULL     | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 | NULL     | NULL   | YES  | BTREE      |         |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+


Thanks

On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman  wrote:

> The attachments do not appear to be coming through.
>
> I am more curious what the cardinality of bean_type is.  What is the
> result of
> select count(*) as cnt, bean_type from WF_1 group by bean_type ?
>
> Low cardinality can render an index useless.
>
> On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
>
> I already attached the list.
>
> Attaching one more time & thanks for the interest.
>
> Cheers
>
>
>
> On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> wrote:
> >
> >
> >
> > 2012/9/5 Adar...
>
>
>
>
>


Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
I already attached the list.

Attaching one more time & thanks for the interest.

Cheers

On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui  wrote:

>
>
> 2012/9/5 Adarsh Sharma 
>
>> Actually, that query is not my concern.
>>
>> I have a query that is taking a very long time.
>> Slow log output:
>> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
>> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
>> # Attribute          total     min     max     avg     95%  stddev  median
>> # ============     ======= ======= ======= ======= ======= ======= =======
>> # Exec time         80887s   192us   2520s    415s   1732s    612s     80s
>> # Lock time           13ms       0   133us    68us   103us    23us    69us
>> # Rows sent        430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
>> # Rows examine      32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
>> # Query size        65.45k       6     577  343.70  563.87  171.06  246.02
>>
>> In the logs output :
>> # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
>>  Rows_examined: 471150
>> SET timestamp=1346655789;
>> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
>> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
>> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
>> ORDER BY t0.created_time DESC LIMIT 0, 50;
>>
>> The table is nearly 30 GB and growing day by day.
>>
>
> Just out of curiosity, is that table badly fragmented? 471k rows is quite a
> lot, but 488 seconds of query time is insane. It seems you're reading from
> disk too much!
>
>
>>
>> Attaching the table definition & indexes output. I have an index on the
>> bean_type column but can't understand why it examined all the rows of the
>> table.
>>
>
> Where's the table's schema so we can give it a try?
>
> Manu
>


Re: Understanding Slow Query Log

2012-09-04 Thread Manuel Arostegui
2012/9/5 Adarsh Sharma 

> Actually, that query is not my concern.
>
> I have a query that is taking a very long time.
> Slow log output:
> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
> # Attribute          total     min     max     avg     95%  stddev  median
> # ============     ======= ======= ======= ======= ======= ======= =======
> # Exec time         80887s   192us   2520s    415s   1732s    612s     80s
> # Lock time           13ms       0   133us    68us   103us    23us    69us
> # Rows sent        430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
> # Rows examine      32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
> # Query size        65.45k       6     577  343.70  563.87  171.06  246.02
>
> In the logs output :
> # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
>  Rows_examined: 471150
> SET timestamp=1346655789;
> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
> ORDER BY t0.created_time DESC LIMIT 0, 50;
>
> The table is nearly 30 GB and growing day by day.
>

Just out of curiosity, is that table badly fragmented? 471k rows is quite a
lot, but 488 seconds of query time is insane. It seems you're reading from
disk too much!


>
> Attaching the table definition & indexes output. I have an index on the
> bean_type column but can't understand why it examined all the rows of the
> table.
>

Where's the table's schema so we can give it a try?

Manu


Re: Understanding Slow Query Log

2012-09-04 Thread Adarsh Sharma
Actually, that query is not my concern.

I have a query that is taking a very long time.
Slow log output:
# Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
# Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time         80887s   192us   2520s    415s   1732s    612s     80s
# Lock time           13ms       0   133us    68us   103us    23us    69us
# Rows sent        430.89k       0  17.58k   2.21k  12.50k   3.96k   49.17
# Rows examine      32.30M       0 466.46k 169.63k 440.37k 186.02k 117.95k
# Query size        65.45k       6     577  343.70  563.87  171.06  246.02

In the logs output :
# Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50  Rows_examined:
471150
SET timestamp=1346655789;
SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name, t0.group_name,
t0.created_time, t0.start_time, t0.last_modified_time, t0.end_time,
t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow' ORDER BY
t0.created_time DESC LIMIT 0, 50;

The table is nearly 30 GB and growing day by day.

Attaching the table definition & indexes output. I have an index on the
bean_type column but can't understand why it examined all the rows of the
table.

Thanks



On Wed, Sep 5, 2012 at 12:24 AM, Rick James  wrote:

> 100 is tantamount to turning off the log.  I prefer 2.
>
> select  count(ENTITY_NAME)
> from  ALERT_EVENTS
> where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
>   and  status=upper('failed')
>   and  ENTITY_NAME='FETL-ImpressionRC-conversion';
> begs for the _compound_ index
> INDEX(ENTITY_NAME, EVENT_TIME)
> This would be even better:
> INDEX(ENTITY_NAME, status, EVENT_TIME)
>
> COUNT(*) should be used if you don't need to check the column for being
> NULL.
>
> > Rows_examined: 141145
> That is probably the entire table.  Will the table grow?  If so, the query
> will get slower.  Meanwhile, the index I suggested will (probably) be much
> faster.
>
>
> > -----Original Message-----
> > From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
> > Sent: Saturday, September 01, 2012 1:03 AM
> > To: Adarsh Sharma
> > Cc: mysql@lists.mysql.com
> > Subject: Re: Understanding Slow Query Log
> >
> > Disable log-queries-not-using-indexes to log only queries > 100 sec.
> >
> > Just do "> /var/lib/mysql/slow-queries.log" and it will clear the log.
> >
> > On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
> > wrote:
> >
> > > Hi all,
> > >
> > > I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the slow
> > > query log by setting the parameters below in my.cnf:
> > >
> > > log-slow-queries=/usr/local/mysql/slow-query.log
> > > long_query_time=100
> > > log-queries-not-using-indexes
> > >
> > > I am assuming, from information on the internet, that long_query_time
> > > is in seconds, but when I look at the slow query log there are lots of
> > > statements (queries):
> > >
> > > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> > > # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined: 141145
> > > SET timestamp=1346409734;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > > # Time: 120831 10:43:14
> > > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> > > # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined: 141145
> > > SET timestamp=1346409794;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-click-enhancer-deferred';
> > > # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> > > # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined: 141145
> > > SET timestamp=1346409794;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > > # Time: 120831 10:43:22
> > > # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> > > # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13

RE: Understanding Slow Query Log

2012-09-04 Thread Rick James
100 is tantamount to turning off the log.  I prefer 2.

select  count(ENTITY_NAME)
from  ALERT_EVENTS
where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
  and  status=upper('failed')
  and  ENTITY_NAME='FETL-ImpressionRC-conversion';
begs for the _compound_ index
INDEX(ENTITY_NAME, EVENT_TIME)
This would be even better:
INDEX(ENTITY_NAME, status, EVENT_TIME)

COUNT(*) should be used if you don't need to check the column for being NULL.
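
Putting the two suggestions together, a sketch (the index name is
illustrative):

    ALTER TABLE ALERT_EVENTS
        ADD INDEX ent_status_time (ENTITY_NAME, status, EVENT_TIME);

    SELECT  COUNT(*)
        FROM  ALERT_EVENTS
        WHERE  ENTITY_NAME = 'FETL-ImpressionRC-conversion'
          AND  status = upper('failed')
          AND  EVENT_TIME > date_sub(now(), INTERVAL 60 MINUTE);

The two equality tests plus the range on EVENT_TIME are then resolved
entirely inside the index, rather than examining all 141145 rows.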

> Rows_examined: 141145
That is probably the entire table.  Will the table grow?  If so, the query will 
get slower.  Meanwhile, the index I suggested will (probably) be much faster.


> -----Original Message-----
> From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
> Sent: Saturday, September 01, 2012 1:03 AM
> To: Adarsh Sharma
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> Disable log-queries-not-using-indexes to log only queries > 100 sec.
> 
> Just do "> /var/lib/mysql/slow-queries.log" and it will clear the log.
> 
> On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
> wrote:
> 
> > Hi all,
> >
> > I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the slow
> > query log by setting the parameters below in my.cnf:
> >
> > log-slow-queries=/usr/local/mysql/slow-query.log
> > long_query_time=100
> > log-queries-not-using-indexes
> >
> > I am assuming, from information on the internet, that long_query_time is
> > in seconds, but when I look at the slow query log there are lots of
> > statements (queries):
> >
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> > # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined: 141145
> > SET timestamp=1346409734;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:14
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> > # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined: 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-click-enhancer-deferred';
> > # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> > # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined: 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:22
> > # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> > # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13
> >
> >
> >
> > I don't understand the query time unit in the slow query log, because I
> > expect only queries taking > 100 s to be logged. I tested with the sleep
> > command for 60 s and it wasn't logged in the slow query log; when I slept
> > for 120 s it was logged. But I don't know why the other queries are being
> > logged in the slow log.
> >
> > # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
> > SET timestamp=1346443103;
> > SELECT SLEEP(120);
> >
> > Also, my slow log keeps growing, so I decided to purge it with the
> > command below:
> >
> > cat /dev/null > /var/lib/mysql/slow-queries.log
> >
> >
> > Does anyone have any ideas about this?
> >
> >
> > Thanks
> >
> 
> 
> 
> --
> Thanks
> Suresh Kuna
> MySQL DBA




Re: Understanding Slow Query Log

2012-09-01 Thread Suresh Kuna
Disable log-queries-not-using-indexes to log only queries > 100 sec.

Just do "> /var/lib/mysql/slow-queries.log" and it will clear the log.

On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma wrote:

> Hi all,
>
> I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the slow query
> log by setting the parameters below in my.cnf:
>
> log-slow-queries=/usr/local/mysql/slow-query.log
> long_query_time=100
> log-queries-not-using-indexes
>
> I am assuming, from information on the internet, that long_query_time is in
> seconds, but when I look at the slow query log there are lots of statements
> (queries):
>
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409734;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:14
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-click-enhancer-deferred';
> # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:22
> # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13
>
>
>
> I don't understand the query time unit in the slow query log, because I
> expect only queries taking > 100 s to be logged. I tested with the sleep
> command for 60 s and it wasn't logged in the slow query log; when I slept for
> 120 s it was logged. But I don't know why the other queries are being logged
> in the slow log.
>
> # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined:
> 0
> SET timestamp=1346443103;
> SELECT SLEEP(120);
>
> Also, my slow log keeps growing, so I decided to purge it with the
> command below:
>
> cat /dev/null > /var/lib/mysql/slow-queries.log
>
>
> Does anyone have any ideas about this?
>
>
> Thanks
>



-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Understanding Slow Query Log

2012-09-01 Thread yoku ts
Hi,

Those queries are logged because they don't use an index:
log-queries-not-using-indexes logs a statement even when its query time is
less than long_query_time.

http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_log-queries-not-using-indexes
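
So, to log only statements that actually exceed long_query_time, drop that
option; a sketch of the trimmed my.cnf from the message quoted below:

    log-slow-queries=/usr/local/mysql/slow-query.log
    long_query_time=100
    # log-queries-not-using-indexes  <- removed: it logs matching statements
    #                                   no matter how fast they run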

regards,

yoku

2012/9/1 Adarsh Sharma :
> Hi all,
>
> I am using MySQL Ver 14.14 Distrib 5.1.58, in which I enabled the slow query
> log by setting the parameters below in my.cnf:
>
> log-slow-queries=/usr/local/mysql/slow-query.log
> long_query_time=100
> log-queries-not-using-indexes
>
> I am assuming, from information on the internet, that long_query_time is in
> seconds, but when I look at the slow query log there are lots of statements
> (queries):
>
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409734;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:14
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-click-enhancer-deferred';
> # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:22
> # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13
>
>
>
> I don't understand the query time unit in the slow query log, because I
> expect only queries taking > 100 s to be logged. I tested with the sleep
> command for 60 s and it wasn't logged in the slow query log; when I slept for
> 120 s it was logged. But I don't know why the other queries are being logged
> in the slow log.
>
> # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
> SET timestamp=1346443103;
> SELECT SLEEP(120);
>
> Also, my slow log keeps growing, so I decided to purge it with the
> command below:
>
> cat /dev/null > /var/lib/mysql/slow-queries.log
>
>
> Does anyone have any ideas about this?
>
>
> Thanks
