RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-31 Thread Robinson, Eric
> -Original Message-
> From: Vikas Shukla [mailto:myfriendvi...@gmail.com] 
> Sent: Thursday, May 30, 2013 7:19 PM
> To: Robinson, Eric; mysql@lists.mysql.com
> Subject: RE: Are There Slow Queries that Don't Show in the 
> Slow Query Logs?
> 
> Hi,
> 
> No, it does not represent the time from request to response, nor does
> it include the time spent waiting for locks to be released.
> The slow query log consists of SQL statements that took more than 
> long_query_time seconds to EXECUTE. The default value of 
> long_query_time is 10.
> The time to acquire the initial locks is not counted as 
> execution time.
> mysqld writes a statement to the slow query log after it has 
> been executed and after all locks have been released, so log 
> order might differ from execution order.
> 
> Let's take an example: if a query is received at 10:00 and waits
> until 10:05, it starts executing at 10:05:00 and completes at
> 10:05:24 (HH:MM:SS). It took 24 seconds to execute, so only those
> 24 seconds are counted.
> So if long_query_time is equal to its default of 10, this query
> would be logged in the slow query log because it took more than
> 10 seconds to execute.
> 
> Sent from my Windows Phone
>
> From: Robinson, Eric
> Sent: 31-05-2013 03:48
> To: mysql@lists.mysql.com
> Subject: Are There Slow Queries that Don't Show in the Slow 
> Query Logs?
> As everyone knows, with MyISAM, queries and inserts can lock 
> tables and force other queries to wait in a queue. When that 
> happens, does the time shown in the slow query logs represent 
> the whole time from when the server received the request to 
> when the response was sent to the client? Or is the time a 
> query spends waiting for a table lock to be released omitted 
> from what is recorded in the slow query logs?
> 
> --
> Eric Robinson
> 
> 

Very good answer, Vikas. Thank you for the clarification!

--Eric





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-30 Thread Vikas Shukla
Hi,

No, it does not represent the time from request to response, nor does
it include the time spent waiting for locks to be released.
The slow query log consists of SQL statements that took more than
long_query_time seconds to EXECUTE. The default value of
long_query_time is 10.
The time to acquire the initial locks is not counted as execution time.
mysqld writes a statement to the slow query log after it has been
executed and after all locks have been released, so log order might
differ from execution order.

Let's take an example: if a query is received at 10:00 and waits until
10:05, it starts executing at 10:05:00 and completes at 10:05:24
(HH:MM:SS). It took 24 seconds to execute, so only those 24 seconds
are counted.
So if long_query_time is equal to its default of 10, this query would
be logged in the slow query log because it took more than 10 seconds
to execute.
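
For reference, a minimal sketch of checking and lowering that threshold at
runtime (values are illustrative; SET GLOBAL needs the SUPER privilege and
only affects new connections):

SHOW VARIABLES LIKE 'long_query_time';   -- 10.000000 by default
SET GLOBAL long_query_time = 2;          -- future connections
SET SESSION long_query_time = 2;         -- the current connection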

Sent from my Windows Phone

From: Robinson, Eric
Sent: 31-05-2013 03:48
To: mysql@lists.mysql.com
Subject: Are There Slow Queries that Don't Show in the Slow Query Logs?
As everyone knows, with MyISAM, queries and inserts can lock tables
and force other queries to wait in a queue. When that happens, does
the time shown in the slow query logs represent the whole time from
when the server received the request to when the response was sent to
the client? Or is the time a query spends waiting for a table lock to
be released omitted from what is recorded in the slow query logs?

--
Eric Robinson







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Thanks, Rick - definitely something to think about. I've been
troubled by the pagination stuff in our code. This looks like
something I can definitely use!

andy

On 9/5/12 2:40 PM, Rick James wrote:

Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 
250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (eg, search engine) brought a site to 
its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  By the 
time the entire list has been paged through, about N*N/2 units of work have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !


-Original Message-
From: Andy Wallace [mailto:awall...@ihouseweb.com]
Sent: Wednesday, September 05, 2012 2:05 PM
To: mysql@lists.mysql.com
Subject: Re: Understanding Slow Query Log

Ok, this raises a question for me - what's a better way to do
pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the
rest of the week debugging Monday's code."
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
Remember where you "left off".

Your "Next" button now says something like ?page=5&size=50
When you get there, you are doing something like SELECT ... ORDER BY ... LIMIT 
250, 50

Instead...
Make it say ?after_id=12345&size=50
and then do SELECT ... WHERE id > 12345 ORDER BY ... LIMIT 51

With 51, you get 3 things:
* the 50 items (or fewer) for the page
* a clue that there will be a "Next" page
* the id of the first item for that Next page

'Exercises for the reader':
* 'Prev'
* each of the next 5
* each of the previous 5
* go to last page
* go to first page
* Knowing whether to have those links or 'gray them out'.

A sample UI layout (you've probably seen web pages like this):
   GoTo Page [1] ... [13] [14] 15 [16] [17] ... [last]
Where
* [] represents a link.
* You are currently (for this example) on page 15
* It is showing you only the Next/Prev 2 pages.

I have encountered multiple cases where a "crawler" (eg, search engine) brought 
a site to its knees because of "pagination via OFFSET".

"Pagination via OFFSET" is Order(N) to fetch a page; Order(N*N) to scan the 
entire list.  The first page takes 1 unit of effort. The second takes 2; etc.  
By the time the entire list has been paged through, about N*N/2 units of work 
have been done.

My technique is Order(1) for a page, Order(N) for a complete scan.

N is the number of pages.  Some implementations have more than 10,000 pages.  
10,000 * 10,000 = 100 million !
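
A minimal sketch of the technique, using a hypothetical items(id, title)
table (12345 and the column names are illustrative):

-- Page 1: no anchor yet; ask for size+1 rows
SELECT id, title FROM items ORDER BY id LIMIT 51;

-- "Next" page: the client sends after_id=12345 (the id carried over from row 51)
SELECT id, title FROM items WHERE id > 12345 ORDER BY id LIMIT 51;
-- rows 1..50 are shown; a 51st row means there is another "Next" page,
-- and its id becomes the next after_id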

> -Original Message-
> From: Andy Wallace [mailto:awall...@ihouseweb.com]
> Sent: Wednesday, September 05, 2012 2:05 PM
> To: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> Ok, this raises a question for me - what's a better way to do
> pagination?
> 
> On 9/5/12 2:02 PM, Rick James wrote:
> > * LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.
> 
> 
> 
> --
> Andy Wallace
> iHOUSEweb, Inc.
> awall...@ihouseweb.com
> (866) 645-7700 ext 219
> --
> "Sometimes it pays to stay in bed on Monday, rather than spending the
> rest of the week debugging Monday's code."
> - Christopher Thompson
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Understanding Slow Query Log

2012-09-05 Thread Andy Wallace

Ok, this raises a question for me - what's a better way to do pagination?

On 9/5/12 2:02 PM, Rick James wrote:

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.




--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
"Sometimes it pays to stay in bed on Monday, rather than spending the rest of the 
week debugging Monday's code."
- Christopher Thompson

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Understanding Slow Query Log

2012-09-05 Thread Rick James
* SHOW VARIABLES LIKE 'innodb%';  -- some of them may be hurting performance.

* More than 20% of the table has bean_type = 'Workflow'?  -- if so, it is more 
efficient to do a table scan than to use the index.

* KEY `I_WF_1_DTYPE` (`bean_type`), -->
KEY bean_time (`bean_type`, created_time)
"Compound" index may be your cure.

* Fields with low cardinality (bean_type, status) make very poor INDEXes.

* Consider using an ENUM instead of VARCHAR for status and bean_type, (and 
others?)

* VARCHAR(255) is an awful PRIMARY KEY.  The PK is included implicitly (in 
InnoDB) in every secondary key.

* LIMIT 0, 50 -- are you doing "pagination" via OFFSET?  Bad idea.

Lots more about these topics is discussed in similar questions in
http://forums.mysql.com/list.php?24
Lots more tips here:
http://mysql.rjweb.org/doc.php/ricksrots
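
A sketch of the compound-index suggestion against the WF_1 table shown below
(the index name is illustrative):

ALTER TABLE WF_1 ADD KEY bean_time (bean_type, created_time);

-- With that index, the problem query can find the 50 newest 'Workflow' rows
-- from the index instead of examining ~471k rows and filesorting:
SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name, t0.group_name,
       t0.created_time, t0.start_time, t0.last_modified_time, t0.end_time,
       t0.external_id
FROM WF_1 t0
WHERE t0.bean_type = 'Workflow'
ORDER BY t0.created_time DESC
LIMIT 0, 50;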


> -Original Message-
> From: Adarsh Sharma [mailto:eddy.ada...@gmail.com]
> Sent: Wednesday, September 05, 2012 11:27 AM
> To: Michael Dykman
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> true Michael, pasting the output :
> 
> CREATE TABLE `WF_1` (
>   `id` varchar(255) NOT NULL,
>   `app_name` varchar(255) DEFAULT NULL,
>   `app_path` varchar(255) DEFAULT NULL,
>   `conf` text,
>   `group_name` varchar(255) DEFAULT NULL,
>   `parent_id` varchar(255) DEFAULT NULL,
>   `run` int(11) DEFAULT NULL,
>   `user_name` varchar(255) DEFAULT NULL,
>   `bean_type` varchar(31) DEFAULT NULL,
>   `auth_token` text,
>   `created_time` datetime DEFAULT NULL,
>   `end_time` datetime DEFAULT NULL,
>   `external_id` varchar(255) DEFAULT NULL,
>   `last_modified_time` datetime DEFAULT NULL,
>   `log_token` varchar(255) DEFAULT NULL,
>   `proto_action_conf` text,
>   `sla_xml` text,
>   `start_time` datetime DEFAULT NULL,
>   `status` varchar(255) DEFAULT NULL,
>   `wf_instance` mediumblob,
>   PRIMARY KEY (`id`),
>   KEY `I_WF_1_DTYPE` (`bean_type`),
>   KEY `I_WF_1_END_TIME` (`end_time`),
>   KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
>   KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
>   KEY `I_WF_1_STATUS` (`status`)
> ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
> 
> 
> show indexes from WF_1;
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> | WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
> | WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
> | WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
> +-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
> 
> 
> Thanks
> 
> On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman 
> wrote:
> 
> > The attachments do not appear to be coming through.
> >
> > I am more curious what the cardinality of bean_type is.  What is the
> > result of select count(*) as cnt, bean_type from WF_1 group by
> > bean_type?
> >
> > Low cardinality can render an index useless.
> >
> > On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
> >
> > I already attached the list.
> >
> > Attaching one more time & thanks for the interest.
> >
> > Cheers
> >
> >
> >
> > On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> > wrote:
> > >
> > >
> > >
> > > 2012/9/5 Adar...
> >
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/mysql
> >
> >

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
true Michael, pasting the output :

CREATE TABLE `WF_1` (
  `id` varchar(255) NOT NULL,
  `app_name` varchar(255) DEFAULT NULL,
  `app_path` varchar(255) DEFAULT NULL,
  `conf` text,
  `group_name` varchar(255) DEFAULT NULL,
  `parent_id` varchar(255) DEFAULT NULL,
  `run` int(11) DEFAULT NULL,
  `user_name` varchar(255) DEFAULT NULL,
  `bean_type` varchar(31) DEFAULT NULL,
  `auth_token` text,
  `created_time` datetime DEFAULT NULL,
  `end_time` datetime DEFAULT NULL,
  `external_id` varchar(255) DEFAULT NULL,
  `last_modified_time` datetime DEFAULT NULL,
  `log_token` varchar(255) DEFAULT NULL,
  `proto_action_conf` text,
  `sla_xml` text,
  `start_time` datetime DEFAULT NULL,
  `status` varchar(255) DEFAULT NULL,
  `wf_instance` mediumblob,
  PRIMARY KEY (`id`),
  KEY `I_WF_1_DTYPE` (`bean_type`),
  KEY `I_WF_1_END_TIME` (`end_time`),
  KEY `I_WF_1_EXTERNAL_ID` (`external_id`),
  KEY `I_WF_1_LAST_MODIFIED_TIME` (`last_modified_time`),
  KEY `I_WF_1_STATUS` (`status`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |


show indexes from WF_1;
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name                  | Seq_in_index | Column_name        | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+
| WF_1  |          0 | PRIMARY                   |            1 | id                 | A         |      551664 |     NULL | NULL   |      | BTREE      |         |
| WF_1  |          1 | I_WF_1_DTYPE              |            1 | bean_type          | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_END_TIME           |            1 | end_time           | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_EXTERNAL_ID        |            1 | external_id        | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_LAST_MODIFIED_TIME |            1 | last_modified_time | A         |      551664 |     NULL | NULL   | YES  | BTREE      |         |
| WF_1  |          1 | I_WF_1_STATUS             |            1 | status             | A         |          18 |     NULL | NULL   | YES  | BTREE      |         |
+-------+------------+---------------------------+--------------+--------------------+-----------+-------------+----------+--------+------+------------+---------+


Thanks

On Wed, Sep 5, 2012 at 8:43 PM, Michael Dykman  wrote:

> The attachments do not appear to be coming through.
>
> I am more curious what the cardinality of bean_type is.  What is the
> result of
> select count(*) as cnt, bean_type from WF_1 group by bean_type?
>
> Low cardinality can render an index useless.
>
> On 2012-09-05 5:19 AM, "Adarsh Sharma"  wrote:
>
> I already attached the list.
>
> Attaching one more time & thanks for the interest.
>
> Cheers
>
>
>
> On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui 
> wrote:
> >
> >
> >
> > 2012/9/5 Adar...
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


Re: Understanding Slow Query Log

2012-09-05 Thread Adarsh Sharma
I already attached the list.

Attaching one more time & thanks for the interest.

Cheers

On Wed, Sep 5, 2012 at 11:44 AM, Manuel Arostegui  wrote:

>
>
> 2012/9/5 Adarsh Sharma 
>
>> Actually, that query is not my concern.
>>
>> I have a query that is taking too much time.
>> Slow log output:
>> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
>> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
>> # Attribute  total min max avg 95%  stddev  median
>> #  === === === === === === ===
>> # Exec time 80887s   192us   2520s415s   1732s612s 80s
>> # Lock time   13ms   0   133us68us   103us23us69us
>> # Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
>> # Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
>> # Query size65.45k   6 577  343.70  563.87  171.06  246.02
>>
>> In the logs output :
>> # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
>>  Rows_examined: 471150
>> SET timestamp=1346655789;
>> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
>> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
>> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
>> ORDER BY t0.created_time DESC LIMIT 0, 50;
>>
>> The table is near about 30 GB and growing day by day.
>>
>
> Just out of curiosity, is that table badly fragmented? 471k rows are quite a
> lot, but 488 seconds of query time is insane. It seems you're reading from
> disk too much!
>
>
>>
>> Attaching the table definition & indexes output. I have an index on the
>> bean_type column but can't understand why it examined all the rows of the
>> table.
>>
>
> Where's the table's schema so we can give it a try?
>
> Manu
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql

Re: Understanding Slow Query Log

2012-09-04 Thread Manuel Arostegui
2012/9/5 Adarsh Sharma 

> Actually, that query is not my concern.
>
> I have a query that is taking too much time.
> Slow log output:
> # Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
> # Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
> # Attribute  total min max avg 95%  stddev  median
> #  === === === === === === ===
> # Exec time 80887s   192us   2520s415s   1732s612s 80s
> # Lock time   13ms   0   133us68us   103us23us69us
> # Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
> # Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
> # Query size65.45k   6 577  343.70  563.87  171.06  246.02
>
> In the logs output :
> # Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50
>  Rows_examined: 471150
> SET timestamp=1346655789;
> SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name,
> t0.group_name, t0.created_time, t0.start_time, t0.last_modified_time,
> t0.end_time, t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow'
> ORDER BY t0.created_time DESC LIMIT 0, 50;
>
> The table is near about 30 GB and growing day by day.
>

Just out of curiosity, is that table badly fragmented? 471k rows are quite a
lot, but 488 seconds of query time is insane. It seems you're reading from
disk too much!
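
(For reference, a minimal sketch of checking for and removing fragmentation;
rebuilding a ~30 GB InnoDB table locks it for a long time, so this is only an
illustration:)

SHOW TABLE STATUS LIKE 'WF_1';   -- a large Data_free value hints at fragmentation
OPTIMIZE TABLE WF_1;             -- for InnoDB this recreates and re-analyzes the table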


>
> Attaching the table definition & indexes output. I have an index on the
> bean_type column but can't understand why it examined all the rows of the
> table.
>

Where's the table's schema so we can give it a try?

Manu


Re: Understanding Slow Query Log

2012-09-04 Thread Adarsh Sharma
Actually, that query is not my concern.

I have a query that is taking too much time.
Slow log output:
# Overall: 195 total, 16 unique, 0.00 QPS, 0.31x concurrency _
# Time range: 2012-09-01 14:30:01 to 2012-09-04 14:13:46
# Attribute  total min max avg 95%  stddev  median
#  === === === === === === ===
# Exec time 80887s   192us   2520s415s   1732s612s 80s
# Lock time   13ms   0   133us68us   103us23us69us
# Rows sent430.89k   0  17.58k   2.21k  12.50k   3.96k   49.17
# Rows examine  32.30M   0 466.46k 169.63k 440.37k 186.02k 117.95k
# Query size65.45k   6 577  343.70  563.87  171.06  246.02

In the logs output :
# Query_time: 488.031783  Lock_time: 0.41 Rows_sent: 50  Rows_examined:
471150
SET timestamp=1346655789;
SELECT t0.id, t0.app_name, t0.status, t0.run, t0.user_name, t0.group_name,
t0.created_time, t0.start_time, t0.last_modified_time, t0.end_time,
t0.external_id FROM WF_1 t0 WHERE t0.bean_type = 'Workflow' ORDER BY
t0.created_time DESC LIMIT 0, 50;

The table is near about 30 GB and growing day by day.

Attaching the table definition & indexes output. I have an index on the
bean_type column but can't understand why it examined all the rows of the
table.

Thanks



On Wed, Sep 5, 2012 at 12:24 AM, Rick James  wrote:

> 100 is tantamount to turning off the log.  I prefer 2.
>
> select  count(ENTITY_NAME)
> from  ALERT_EVENTS
> where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
>   and  status=upper('failed')
>   and  ENTITY_NAME='FETL-ImpressionRC-conversion';
> begs for the _compound_ index
> INDEX(ENTITY_NAME, EVENT_TIME)
> This would be even better:
> INDEX(ENTITY_NAME, status, EVENT_TIME)
>
> COUNT(*) should be used if you don't need to check the column for being
> NULL.
>
> > Rows_examined: 141145
> That is probably the entire table.  Will the table grow?  If so, the query
> will get slower.  Meanwhile, the index I suggested will (probably) be much
> faster.
>
>
> > -Original Message-
> > From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
> > Sent: Saturday, September 01, 2012 1:03 AM
> > To: Adarsh Sharma
> > Cc: mysql@lists.mysql.com
> > Subject: Re: Understanding Slow Query Log
> >
> > Disable log-queries-not-using-indexes to log only queries > 100 sec.
> >
> > Just do "> /var/lib/mysql/slow-queries.log" it will clear the log.
> >
> > On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
> > wrote:
> >
> > > Hi all,
> > >
> > > I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow
> > > query log by setting below parameters in my.cnf :
> > >
> > > log-slow-queries=/usr/local/mysql/slow-query.log
> > > long_query_time=100
> > > log-queries-not-using-indexes
> > >
> > > I am assuming, from information on the internet, that long_query_time is
> > > in seconds, but when I look at the slow query log there are lots of
> > > statements (queries) like:
> > >
> > > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > > Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1
> > Rows_examined:
> > > 141145
> > > SET timestamp=1346409734;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > > # Time: 120831 10:43:14
> > > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > > Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1
> > Rows_examined:
> > > 141145
> > > SET timestamp=1346409794;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-click-enhancer-deferred';
> > > # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111] #
> > > Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1
> > Rows_examined:
> > > 141145
> > > SET timestamp=1346409794;
> > > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > > # Time: 120831 10:43:22
> > > # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111] #
> > > Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1
> > Rows_ex

RE: Understanding Slow Query Log

2012-09-04 Thread Rick James
100 is tantamount to turning off the log.  I prefer 2.

select  count(ENTITY_NAME)
from  ALERT_EVENTS
where  EVENT_TIME > date_sub(now(),INTERVAL 60 MINUTE)
  and  status=upper('failed')
  and  ENTITY_NAME='FETL-ImpressionRC-conversion';
begs for the _compound_ index
INDEX(ENTITY_NAME, EVENT_TIME)
This would be even better:
INDEX(ENTITY_NAME, status, EVENT_TIME)

COUNT(*) should be used if you don't need to check the column for being NULL.

> Rows_examined: 141145
That is probably the entire table.  Will the table grow?  If so, the query will 
get slower.  Meanwhile, the index I suggested will (probably) be much faster.
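
A sketch of that compound index and the COUNT(*) form of the query (the index
name is illustrative; the table and column names are from the post below):

ALTER TABLE ALERT_EVENTS
  ADD INDEX entity_status_time (ENTITY_NAME, status, EVENT_TIME);

SELECT COUNT(*)
FROM ALERT_EVENTS
WHERE ENTITY_NAME = 'FETL-ImpressionRC-conversion'
  AND status = UPPER('failed')
  AND EVENT_TIME > DATE_SUB(NOW(), INTERVAL 60 MINUTE);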


> -Original Message-
> From: Suresh Kuna [mailto:sureshkumar...@gmail.com]
> Sent: Saturday, September 01, 2012 1:03 AM
> To: Adarsh Sharma
> Cc: mysql@lists.mysql.com
> Subject: Re: Understanding Slow Query Log
> 
> Disable log-queries-not-using-indexes to log only queries > 100 sec.
> 
> Just do "> /var/lib/mysql/slow-queries.log" it will clear the log.
> 
> On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma
> wrote:
> 
> > Hi all,
> >
> > I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow
> > query log by setting below parameters in my.cnf :
> >
> > log-slow-queries=/usr/local/mysql/slow-query.log
> > long_query_time=100
> > log-queries-not-using-indexes
> >
> > I am assuming, from information on the internet, that long_query_time is
> > in seconds, but when I look at the slow query log there are lots of
> > statements (queries) like:
> >
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409734;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:14
> > # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-click-enhancer-deferred';
> > # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1
> Rows_examined:
> > 141145
> > SET timestamp=1346409794;
> > select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> > date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> > ENTITY_NAME='FETL-ImpressionRC-conversion';
> > # Time: 120831 10:43:22
> > # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111] #
> > Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1
> Rows_examined:
> > 13
> >
> >
> >
> > I don't understand the query time unit in the slow query log, because I
> > expect only queries that take > 100 s to be logged. I tested with the
> > sleep command for 60 s and it wasn't logged in the slow query log; when I
> > slept for 120 s it was logged, but I don't know why the other queries are
> > being logged in the slow log.
> >
> > # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1
> Rows_examined:
> > 0
> > SET timestamp=1346443103;
> > SELECT SLEEP(120);
> >
> > Also, my slow log keeps growing, so I decided to purge it through the
> > below command:
> >
> > cat /dev/null > /var/lib/mysql/slow-queries.log
> >
> >
> > Anyone any ideas about this.
> >
> >
> > Thanks
> >
> 
> 
> 
> --
> Thanks
> Suresh Kuna
> MySQL DBA

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Understanding Slow Query Log

2012-09-01 Thread Suresh Kuna
Disable log-queries-not-using-indexes to log only queries > 100 sec.

Just do "> /var/lib/mysql/slow-queries.log" it will clear the log.

On Sat, Sep 1, 2012 at 12:34 PM, Adarsh Sharma wrote:

> Hi all,
>
> I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow query log
> by setting below parameters in my.cnf :
>
> log-slow-queries=/usr/local/mysql/slow-query.log
> long_query_time=100
> log-queries-not-using-indexes
>
> I am assuming, from information on the internet, that long_query_time is in
> seconds, but when I look at the slow query log there are lots of statements
> (queries) like:
>
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409734;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:14
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-click-enhancer-deferred';
> # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:22
> # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13
>
>
>
> I don't understand the query time unit in the slow query log, because I
> expect only queries that take > 100 s to be logged. I tested with the sleep
> command for 60 s and it wasn't logged in the slow query log; when I slept
> for 120 s it was logged, but I don't know why the other queries are being
> logged in the slow log.
>
> # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined:
> 0
> SET timestamp=1346443103;
> SELECT SLEEP(120);
>
> Also, my slow log keeps growing, so I decided to purge it through the below
> command:
>
> cat /dev/null > /var/lib/mysql/slow-queries.log
>
>
> Anyone any ideas about this.
>
>
> Thanks
>



-- 
Thanks
Suresh Kuna
MySQL DBA


Re: Understanding Slow Query Log

2012-09-01 Thread yoku ts
Hi

That is because those queries don't use an index.
log-queries-not-using-indexes logs such queries even when their query time is
less than long_query_time.

http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_log-queries-not-using-indexes
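
(A minimal my.cnf sketch, under the assumption that only genuinely slow
statements should be logged: drop the option and lower the threshold as Rick
suggested elsewhere in the thread.)

log-slow-queries=/usr/local/mysql/slow-query.log
long_query_time=2
#log-queries-not-using-indexes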

regards,

yoku

2012/9/1 Adarsh Sharma :
> Hi all,
>
> I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow query log
> by setting below parameters in my.cnf :
>
> log-slow-queries=/usr/local/mysql/slow-query.log
> long_query_time=100
> log-queries-not-using-indexes
>
> I am assuming, from information on the internet, that long_query_time is in
> seconds, but when I look at the slow query log there are lots of statements
> (queries) like:
>
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409734;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:14
> # User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-click-enhancer-deferred';
> # User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
> 141145
> SET timestamp=1346409794;
> select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
> date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
> ENTITY_NAME='FETL-ImpressionRC-conversion';
> # Time: 120831 10:43:22
> # User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
> # Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13
>
>
>
> I don't understand the query time unit in the slow query log, because I
> expect only queries that take > 100 s to be logged. I tested with the sleep
> command for 60 s and it wasn't logged in the slow query log; when I slept
> for 120 s it was logged, but I don't know why the other queries are being
> logged in the slow log.
>
> # Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
> SET timestamp=1346443103;
> SELECT SLEEP(120);
>
> Also, my slow log keeps growing, so I decided to purge it through the below
> command:
>
> cat /dev/null > /var/lib/mysql/slow-queries.log
>
>
> Anyone any ideas about this.
>
>
> Thanks

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Understanding Slow Query Log

2012-09-01 Thread Adarsh Sharma
Hi all,

I am using Mysql Ver 14.14 Distrib 5.1.58 in which i enabled slow query log
by setting below parameters in my.cnf :

log-slow-queries=/usr/local/mysql/slow-query.log
long_query_time=100
log-queries-not-using-indexes

I am assuming, from information on the internet, that long_query_time is in
seconds, but when I look at the slow query log there are lots of statements
(queries) like:

# User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.052784  Lock_time: 0.43 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409734;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-ImpressionRC-conversion';
# Time: 120831 10:43:14
# User@Host: user1[user1] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.053599  Lock_time: 0.79 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409794;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-click-enhancer-deferred';
# User@Host: user1[user2] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.054185  Lock_time: 0.86 Rows_sent: 1  Rows_examined:
141145
SET timestamp=1346409794;
select count(ENTITY_NAME) from ALERT_EVENTS where EVENT_TIME >
date_sub(now(),INTERVAL 60 MINUTE) and status=upper('failed') and
ENTITY_NAME='FETL-ImpressionRC-conversion';
# Time: 120831 10:43:22
# User@Host: user2[user2] @ abc.dd.aa.com [192.112.111.111]
# Query_time: 0.000163  Lock_time: 0.45 Rows_sent: 1  Rows_examined: 13



I don't understand the query time unit in the slow query log, because I
expect only queries that take > 100 s to be logged. I tested with the sleep
command for 60 s and it wasn't logged in the slow query log; when I slept for
120 s it was logged, but I don't know why the other queries are being logged
in the slow log.

# Query_time: 120.000259  Lock_time: 0.00 Rows_sent: 1  Rows_examined: 0
SET timestamp=1346443103;
SELECT SLEEP(120);

Also, my slow log keeps growing, so I decided to purge it through the below
command:

cat /dev/null > /var/lib/mysql/slow-queries.log


Anyone any ideas about this.


Thanks


Re: alternative to slow query

2012-07-17 Thread brian

On 12-07-16 06:57 PM, Rick James wrote:


Plan A:
Would the anti-UNION problem be solved by hiding the UNION in a subquery?  The 
outer query would simply return what the UNION found.


Of course! Yes, problem solved.
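
For illustration, one plausible shape of Plan A with the tables from this
thread (223363 is the example id used earlier; the extra WHERE conditions from
the real query are omitted):

SELECT t.id, t.lang_id, t.term
FROM (
    SELECT e.id, e.lang_id, e.term
    FROM expression AS e
    JOIN expression_expression AS ee ON ee.expression2_id = e.id
    WHERE ee.expression1_id = 223363
    UNION
    SELECT e.id, e.lang_id, e.term
    FROM expression AS e
    JOIN expression_expression AS ee ON ee.expression1_id = e.id
    WHERE ee.expression2_id = 223363
) AS t;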


Plan B:
Insert every row twice into expression_expression -- (e1,e2) and also (e2,e1).
Then, you need only one index into that table, and you don't need to UNION (or 
the LEFT JOINs).


This is the approach that I was considering for when the app is 
rewritten. (I've inherited a disaster.)


Thanks again for the tip.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: alternative to slow query

2012-07-16 Thread Rick James
Please provide SHOW CREATE TABLE for the two tables.

Plan A:
Would the anti-UNION problem be solved by hiding the UNION in a subquery?  The 
outer query would simply return what the UNION found.

Plan B:
Insert every row twice into expression_expression -- (e1,e2) and also (e2,e1).
Then, you need only one index into that table, and you don't need to UNION (or 
the LEFT JOINs).

Plan C:
Do something with a VIEW.  Caution:  Performance _may_ be even worse.
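
A sketch of Plan B with the same tables (values are illustrative; assumes
expression_expression.id is AUTO_INCREMENT):

-- store each pair in both directions
INSERT INTO expression_expression (expression1_id, expression2_id)
VALUES (123, 456), (456, 123);

-- then a single index and a single direction suffice (no UNION, no OR):
SELECT e.id, e.lang_id, e.term
FROM expression_expression AS ee
JOIN expression AS e ON e.id = ee.expression2_id
WHERE ee.expression1_id = 223363;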

> -Original Message-
> From: brian [mailto:mysql-l...@logi.ca]
> Sent: Tuesday, July 03, 2012 12:50 PM
> To: mysql@lists.mysql.com
> Subject: Re: alternative to slow query
> 
> On 12-07-03 02:18 PM, Stillman, Benjamin wrote:
> > Not sure why it  wouldn't show primary as a possible key then...
> 
> Yes, that seems rather strange.
> 
> 
> >  From your first email:
> >
> > *** 1. row ***
> >  id: 1
> > select_type: SIMPLE
> >   table: e
> >type: ALL
> > possible_keys: NULL
> > key: NULL
> > key_len: NULL
> > ref: NULL
> >rows: 95127
> >   Extra:
> >
> >
> > I'd be curious to see the explain from this:
> >
> > select id, lang, term from expression where id = (insert a random,
> > valid id value here);
> >
> > Does it use a key then? Or at least show primary as a possible key?
> 
> mysql db_lexi > EXPLAIN SELECT id, term, lang_id FROM expression WHERE
> id = 223363\G
> *** 1. row ***
> id: 1
>select_type: SIMPLE
>  table: expression
>   type: const
> possible_keys: PRIMARY
>key: PRIMARY
>key_len: 8
>ref: const
>   rows: 1
>  Extra:
> 1 row in set (0.00 sec)
> 
> 
> Here's the query again, with some of the stuff I'd removed for clarity.
> There are still some other fields missing here but they involve 2 left
> joins on other tables.
> 
> mysql db_lexi > EXPLAIN SELECT e.id, e.lang_id, e.term
>  -> FROM (expression AS e)
>  -> LEFT JOIN expression_expression AS ee1
>  -> ON ee1.expression1_id = e.id AND ee1.deleted_at = 0
>  -> LEFT JOIN expression_expression AS ee2
>  -> ON ee2.expression2_id = e.id AND ee2.deleted_at = 0
>  -> WHERE (ee2.expression1_id = 223363 OR ee1.expression2_id =
> 223363)
>  -> AND e.original_id IS NULL
>  -> AND e.deleted_at = 0\G
> *** 1. row ***
> id: 1
>select_type: SIMPLE
>  table: e
>   type: ref
> possible_keys: original_id_idx,deleted_at_idx
>key: original_id_idx
>key_len: 9
>ref: const
>   rows: 60560
>  Extra: Using where
> *** 2. row ***
> id: 1
>select_type: SIMPLE
>  table: ee1
>   type: ref
> possible_keys: expression1_id_idx
>key: expression1_id_idx
>key_len: 8
>ref: db_lexi.e.id
>   rows: 1
>  Extra:
> *** 3. row ***
> id: 1
>select_type: SIMPLE
>  table: ee2
>   type: ref
> possible_keys: expression2_id_idx
>key: expression2_id_idx
>key_len: 8
>ref: db_lexi.e.id
>   rows: 1
>  Extra: Using where
> 3 rows in set (0.00 sec)
> 
> I presume that e.id is not being used because I'm not specifically
> querying against it. Instead, I'm using expression_expression's FKs.
> 
> Which gives me an idea. I can add expression a 2nd time to the FROM
> clause:
> 
> mysql db_lexi > EXPLAIN SELECT e.id, e.lang_id, e.term
>  -> FROM (expression AS e, expression AS e_pk)
>  -> LEFT JOIN expression_expression AS ee1
>  -> ON ee1.expression1_id = e.id AND ee1.deleted_at = 0
>  -> LEFT JOIN expression_expression AS ee2
>  -> ON ee2.expression2_id = e.id AND ee2.deleted_at = 0
>  -> WHERE
>  -> (ee1.expression2_id = e_pk.id AND ee2.expression1_id = 223363)
>  -> OR (ee1.expression1_id = e_pk.id AND ee1.expression2_id =
> 223363)
>  -> AND e.original_id IS NULL
>  -> AND e.deleted_at = 0\G
> *** 1. row ***
> id: 1
>select_type: SIMPLE
>  table: ee1
>   type: ALL
> possible_keys: expression2_id_idx,expression1_id_idx
>   

Re: alternative to slow query

2012-07-03 Thread brian

On 12-07-03 02:18 PM, Stillman, Benjamin wrote:

Not sure why it  wouldn't show primary as a possible key then...


Yes, that seems rather strange.



 From your first email:

*** 1. row ***
 id: 1
select_type: SIMPLE
  table: e
   type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
   rows: 95127
  Extra:


I'd be curious to see the explain from this:

select id, lang, term from expression where id = (insert a random, valid id 
value here);

Does it use a key then? Or at least show primary as a possible key?


mysql db_lexi > EXPLAIN SELECT id, term, lang_id FROM expression WHERE 
id = 223363\G

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: expression
 type: const
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 8
  ref: const
 rows: 1
Extra:
1 row in set (0.00 sec)


Here's the query again, with some of the stuff I'd removed for clarity. 
There are still some other fields missing here but they involve 2 left 
joins on other tables.


mysql db_lexi > EXPLAIN SELECT e.id, e.lang_id, e.term
-> FROM (expression AS e)
-> LEFT JOIN expression_expression AS ee1
-> ON ee1.expression1_id = e.id AND ee1.deleted_at = 0
-> LEFT JOIN expression_expression AS ee2
-> ON ee2.expression2_id = e.id AND ee2.deleted_at = 0
-> WHERE (ee2.expression1_id = 223363 OR ee1.expression2_id = 223363)
-> AND e.original_id IS NULL
-> AND e.deleted_at = 0\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: e
 type: ref
possible_keys: original_id_idx,deleted_at_idx
  key: original_id_idx
  key_len: 9
  ref: const
 rows: 60560
Extra: Using where
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: ee1
 type: ref
possible_keys: expression1_id_idx
  key: expression1_id_idx
  key_len: 8
  ref: db_lexi.e.id
 rows: 1
Extra:
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: ee2
 type: ref
possible_keys: expression2_id_idx
  key: expression2_id_idx
  key_len: 8
  ref: db_lexi.e.id
 rows: 1
Extra: Using where
3 rows in set (0.00 sec)

I presume that e.id is not being used because I'm not specifically 
querying against it. Instead, I'm using expression_expression's FKs.


Which gives me an idea. I can add expression a 2nd time to the FROM clause:

mysql db_lexi > EXPLAIN SELECT e.id, e.lang_id, e.term
-> FROM (expression AS e, expression AS e_pk)
-> LEFT JOIN expression_expression AS ee1
-> ON ee1.expression1_id = e.id AND ee1.deleted_at = 0
-> LEFT JOIN expression_expression AS ee2
-> ON ee2.expression2_id = e.id AND ee2.deleted_at = 0
-> WHERE
-> (ee1.expression2_id = e_pk.id AND ee2.expression1_id = 223363)
-> OR (ee1.expression1_id = e_pk.id AND ee1.expression2_id = 223363)
-> AND e.original_id IS NULL
-> AND e.deleted_at = 0\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: ee1
 type: ALL
possible_keys: expression2_id_idx,expression1_id_idx
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 106191
Extra: Using where
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: e
 type: eq_ref
possible_keys: PRIMARY,original_id_idx,deleted_at_idx
  key: PRIMARY
  key_len: 8
  ref: db_lexi.ee1.expression1_id
 rows: 1
Extra:
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: ee2
 type: ref
possible_keys: expression2_id_idx
  key: expression2_id_idx
  key_len: 8
  ref: db_lexi.ee1.expression1_id
 rows: 1
Extra: Using where
*** 4. row ***
   id: 1
  select_type: SIMPLE
table: e_pk
 type: ALL
possible_keys: PRIMARY
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 121120
Extra: Range checked for each record (index map: 0x1)
4 rows in set (0.00 sec)


But this doesn't feel like an elegant solution. Regardless, I'm still 
seeing the query take ~2.5sec. I'm just looking into the "Range checked 
for each record" msg now. Perhaps this is the right direction but 
requires a little tweaking.



I don't understand why deleted_at_idx is also not used, though. Perhaps 
because I'm only looking for values of 0? Regardless, that doesn't seem 
to be the heart of the problem.


BTW, I inherited the DB

RE: alternative to slow query

2012-07-03 Thread Stillman, Benjamin
Not sure why it  wouldn't show primary as a possible key then...

>From your first email:

*** 1. row ***
id: 1
   select_type: SIMPLE
 table: e
  type: ALL
possible_keys: NULL
   key: NULL
   key_len: NULL
   ref: NULL
  rows: 95127
 Extra:


I'd be curious to see the explain from this:

select id, lang, term from expression where id = (insert a random, valid id 
value here);

Does it use a key then? Or at least show primary as a possible key?




-Original Message-
From: brian [mailto:mysql-l...@logi.ca]
Sent: Tuesday, July 03, 2012 1:47 PM
To: mysql@lists.mysql.com
Subject: Re: alternative to slow query

On 12-07-03 01:13 PM, Stillman, Benjamin wrote:
> I don't see an index for expression.id.
>

mysql db_lexi > show index from expression\G
*** 1. row ***
Table: expression
   Non_unique: 0
 Key_name: PRIMARY
Seq_in_index: 1
  Column_name: id
Collation: A
  Cardinality: 96111
 Sub_part: NULL
   Packed: NULL
 Null:
   Index_type: BTREE
  Comment:

... and 10 other keys, 2 of which are also being used in the WHERE part.
I left them out for clarity because while they help to narrow things down a bit 
including them doesn't speed up the query all that much.



mysql db_lexi > SHOW TABLE STATUS WHERE NAME = 'expression'\G
*** 1. row ***
Name: expression
  Engine: InnoDB
 Version: 10
  Row_format: Compact
Rows: 100747
  Avg_row_length: 193
 Data_length: 19447808
Max_data_length: 0
Index_length: 31621120
   Data_free: 48234496
  Auto_increment: 240840
 Create_time: 2012-06-27 14:18:57
 Update_time: NULL
  Check_time: NULL
   Collation: utf8_general_ci
Checksum: NULL
  Create_options:
 Comment:
1 row in set (0.77 sec)

mysql db_lexi > SHOW TABLE STATUS WHERE NAME = 'expression_expression'\G
*** 1. row ***
Name: expression_expression
  Engine: InnoDB
 Version: 10
  Row_format: Compact
Rows: 106191
  Avg_row_length: 103
 Data_length: 11026432
Max_data_length: 0
Index_length: 14204928
   Data_free: 48234496
  Auto_increment: 218884
 Create_time: 2012-06-27 14:19:31
 Update_time: NULL
  Check_time: NULL
   Collation: utf8_general_ci
Checksum: NULL
  Create_options:
 Comment:
1 row in set (0.13 sec)

I realise that I should have posted all this in the original msg.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: alternative to slow query

2012-07-03 Thread brian

On 12-07-03 01:13 PM, Stillman, Benjamin wrote:

I don't see an index for expression.id.



mysql db_lexi > show index from expression\G
*** 1. row ***
   Table: expression
  Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
 Column_name: id
   Collation: A
 Cardinality: 96111
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:

... and 10 other keys, 2 of which are also being used in the WHERE part. 
I left them out for clarity because while they help to narrow things 
down a bit including them doesn't speed up the query all that much.




mysql db_lexi > SHOW TABLE STATUS WHERE NAME = 'expression'\G
*** 1. row ***
   Name: expression
 Engine: InnoDB
Version: 10
 Row_format: Compact
   Rows: 100747
 Avg_row_length: 193
Data_length: 19447808
Max_data_length: 0
   Index_length: 31621120
  Data_free: 48234496
 Auto_increment: 240840
Create_time: 2012-06-27 14:18:57
Update_time: NULL
 Check_time: NULL
  Collation: utf8_general_ci
   Checksum: NULL
 Create_options:
Comment:
1 row in set (0.77 sec)

mysql db_lexi > SHOW TABLE STATUS WHERE NAME = 'expression_expression'\G
*** 1. row ***
   Name: expression_expression
 Engine: InnoDB
Version: 10
 Row_format: Compact
   Rows: 106191
 Avg_row_length: 103
Data_length: 11026432
Max_data_length: 0
   Index_length: 14204928
  Data_free: 48234496
 Auto_increment: 218884
Create_time: 2012-06-27 14:19:31
Update_time: NULL
 Check_time: NULL
  Collation: utf8_general_ci
   Checksum: NULL
 Create_options:
Comment:
1 row in set (0.13 sec)

I realise that I should have posted all this in the original msg.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: alternative to slow query

2012-07-03 Thread Stillman, Benjamin
I don't see an index for expression.id.



-Original Message-
From: brian [mailto:mysql-l...@logi.ca]
Sent: Tuesday, July 03, 2012 12:28 PM
To: mysql@lists.mysql.com
Subject: Re: alternative to slow query

On 12-07-02 09:33 PM, yoku ts wrote:
> Hello,
>
> Add indexes on expression1_id and expression2_id in expression_expression.
>
> The following doesn't use an index:
>
>> WHERE
>>  ee2.expression1_id = $ID
>> OR
>>  ee1.expression2_id = $ID
>

Thank you for your reply. The table already has indexes on these columns, 
however:

mysql db_lexi > show index from expression_expression\G
*** 1. row ***
Table: expression_expression
   Non_unique: 0
 Key_name: PRIMARY
Seq_in_index: 1
  Column_name: id
Collation: A
  Cardinality: 106191
 Sub_part: NULL
   Packed: NULL
 Null:
   Index_type: BTREE
  Comment:
*** 2. row ***
Table: expression_expression
   Non_unique: 1
 Key_name: expression2_id_idx
Seq_in_index: 1
  Column_name: expression2_id
Collation: A
  Cardinality: 106191
 Sub_part: NULL
   Packed: NULL
 Null:
   Index_type: BTREE
  Comment:
*** 3. row ***
Table: expression_expression
   Non_unique: 1
 Key_name: type_lien_id_idx
Seq_in_index: 1
  Column_name: type_lien_id
Collation: A
  Cardinality: 43
 Sub_part: NULL
   Packed: NULL
 Null: YES
   Index_type: BTREE
  Comment:
*** 4. row ***
Table: expression_expression
   Non_unique: 1
 Key_name: expression1_id_idx
Seq_in_index: 1
  Column_name: expression1_id
Collation: A
  Cardinality: 106191
 Sub_part: NULL
   Packed: NULL
 Null:
   Index_type: BTREE
  Comment:
4 rows in set (0.23 sec)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: alternative to slow query

2012-07-03 Thread brian

On 12-07-02 09:33 PM, yoku ts wrote:

Hello,

Add indexes on expression1_id and expression2_id in expression_expression.

The following doesn't use an index:


WHERE
 ee2.expression1_id = $ID
OR
 ee1.expression2_id = $ID




Thank you for your reply. The table already has indexes on these 
columns, however:


mysql db_lexi > show index from expression_expression\G
*** 1. row ***
   Table: expression_expression
  Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
 Column_name: id
   Collation: A
 Cardinality: 106191
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:
*** 2. row ***
   Table: expression_expression
  Non_unique: 1
Key_name: expression2_id_idx
Seq_in_index: 1
 Column_name: expression2_id
   Collation: A
 Cardinality: 106191
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:
*** 3. row ***
   Table: expression_expression
  Non_unique: 1
Key_name: type_lien_id_idx
Seq_in_index: 1
 Column_name: type_lien_id
   Collation: A
 Cardinality: 43
Sub_part: NULL
  Packed: NULL
Null: YES
  Index_type: BTREE
 Comment:
*** 4. row ***
   Table: expression_expression
  Non_unique: 1
Key_name: expression1_id_idx
Seq_in_index: 1
 Column_name: expression1_id
   Collation: A
 Cardinality: 106191
Sub_part: NULL
  Packed: NULL
Null:
  Index_type: BTREE
 Comment:
4 rows in set (0.23 sec)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: alternative to slow query

2012-07-02 Thread yoku ts
Hello,

Add indexes on expression1_id and expression2_id in expression_expression.

The following doesn't use an index:

> WHERE
> ee2.expression1_id = $ID
> OR
> ee1.expression2_id = $ID

regards,


2012/7/3 brian 

> I have a table that joins on itself through a second table:
>
> table expression:
>
> id INT PRIMARY KEY,
> lang_id INT
> term VARCHAR(128)
>
> table expression_expression:
>
> id INT PRIMARY KEY
> expression1_id INT
> expression2_id INT
>
> In order to find associated records, I had originally used a UNION, which
> worked very well. However, the application is written in PHP and uses PDO.
> PDOStatement::getColumnMeta() doesn't return anything for the table name
> with a UNION and this is crucial to the application. So I've come up with
> the following substitute:
>
> SELECT e.id, e.lang_id, e.term
> FROM expression AS e
> LEFT JOIN expression_expression AS ee1
> ON ee1.expression1_id = e.id
> LEFT JOIN expression_expression AS ee2
> ON ee2.expression2_id = e.id
> WHERE
> ee2.expression1_id = $ID
> OR
> ee1.expression2_id = $ID
>
> This gives me the correct values but is rather (~2-4 sec) slow. Here's the
> EXPLAIN output:
>
> *** 1. row ***
>id: 1
>   select_type: SIMPLE
> table: e
>  type: ALL
> possible_keys: NULL
>   key: NULL
>   key_len: NULL
>   ref: NULL
>  rows: 95127
> Extra:
> *** 2. row ***
>id: 1
>   select_type: SIMPLE
> table: ee1
>  type: ref
> possible_keys: expression1_id_idx
>   key: expression1_id_idx
>   key_len: 8
>   ref: db_lexi.e.id
>  rows: 1
> Extra:
> *** 3. row ***
>id: 1
>   select_type: SIMPLE
> table: ee2
>  type: ref
> possible_keys: expression2_id_idx
>   key: expression2_id_idx
>   key_len: 8
>   ref: db_lexi.e.id
>  rows: 1
> Extra: Using where
> 3 rows in set (0.00 sec)
>
>
> Can someone suggest a better approach?
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


alternative to slow query

2012-07-02 Thread brian

I have a table that joins on itself through a second table:

table expression:

id INT PRIMARY KEY,
lang_id INT
term VARCHAR(128)

table expression_expression:

id INT PRIMARY KEY
expression1_id INT
expression2_id INT

In order to find associated records, I had originally used a UNION, 
which worked very well. However, the application is written in PHP and 
uses PDO. PDOStatement::getColumnMeta() doesn't return anything for the 
table name with a UNION and this is crucial to the application. So I've 
come up with the following substitute:


SELECT e.id, e.lang_id, e.term
FROM expression AS e
LEFT JOIN expression_expression AS ee1
ON ee1.expression1_id = e.id
LEFT JOIN expression_expression AS ee2
ON ee2.expression2_id = e.id
WHERE
ee2.expression1_id = $ID
OR
ee1.expression2_id = $ID

This gives me the correct values but is rather (~2-4 sec) slow. Here's 
the EXPLAIN output:


*** 1. row ***
   id: 1
  select_type: SIMPLE
table: e
 type: ALL
possible_keys: NULL
  key: NULL
  key_len: NULL
  ref: NULL
 rows: 95127
Extra:
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: ee1
 type: ref
possible_keys: expression1_id_idx
  key: expression1_id_idx
  key_len: 8
  ref: db_lexi.e.id
 rows: 1
Extra:
*** 3. row ***
   id: 1
  select_type: SIMPLE
table: ee2
 type: ref
possible_keys: expression2_id_idx
  key: expression2_id_idx
  key_len: 8
  ref: db_lexi.e.id
 rows: 1
Extra: Using where
3 rows in set (0.00 sec)


Can someone suggest a better approach?

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Slow query - please help

2011-10-05 Thread Tompkins Neil
I already have a FULLTEXT index on cities.name? Do I still need to add a
normal index?

On Wed, Oct 5, 2011 at 1:34 PM, Johnny Withers  wrote:

> Try adding an index on cities.name, it may prevent the file sort. What was
> the original query time and what is it now?
>
> Sent from my iPad
>
> On Oct 5, 2011, at 4:01 AM, Tompkins Neil 
> wrote:
>
> > Following my mail below, if anyone can help optimise the query further
> that
> > would be a great help.
> >
> > -- Forwarded message --
> > From: Tompkins Neil 
> > Date: Wed, Oct 5, 2011 at 9:48 AM
> > Subject: Re: Slow query - please help
> > To: Johnny Withers 
> > Cc: "mysql@lists.mysql.com" 
> >
> >
> > I just revised my query and now get the following output :
> >
> > '1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
> > 'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index
> condition;
> > Using where; Using filesort'
> > '4', 'DEPENDENT SUBQUERY', 'districts', 'ref',
> > 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
> > '121', '100.00', 'Using index condition; Using where'
> > '3', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
> > 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
> > '9982', '100.00', 'Using index condition; Using where'
> > '2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
> > 'UNQ_folder_url,IDX_country_code,IDX_enabled,IDX_language_code',
> > 'IDX_enabled', '1', 'const', '267', '100.00', 'Using index condition;
> Using
> > where'
> >
> > After doing this the query speed is acceptable.
> >
> > Thanks
> > Neil
> >
> > On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers 
> wrote:
> >
> >> Can you post the explain extended output of your query?
> >>
> >> Sent from my iPad
> >>
> >> On Oct 4, 2011, at 2:45 PM, Neil Tompkins  >
> >> wrote:
> >>
> >>> Can anyone help me ?
> >>>
> >>>
> >>> Begin forwarded message:
> >>>
> >>>> From: Tompkins Neil 
> >>>> Date: 30 September 2011 20:23:47 GMT+01:00
> >>>> To: mark carson 
> >>>> Cc: "[MySQL]" 
> >>>> Subject: Re: Slow query - please help
> >>>>
> >>>
> >>>> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
> >> are below, let me know if you need any more information.
> >>>>
> >>>> CREATE TABLE `districts` (
> >>>> `district_id` int(11) NOT NULL,
> >>>> `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
> >>>> `city_id` int(11) DEFAULT NULL,
> >>>> `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>>> `latitude` double DEFAULT NULL,
> >>>> `longitude` double DEFAULT NULL,
> >>>> `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>>> `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>>> `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>>> PRIMARY KEY (`district_id`,`language_code`),
> >>>> UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>>> KEY `IDX_country_code` (`country_code`),
> >>>> KEY `IDX_enabled` (`enabled`),
> >>>> KEY `IDX_folder_url` (`folder_url`),
> >>>> KEY `IDX_language_code` (`language_code`),
> >>>> KEY `IDX_latitude` (`latitude`),
> >>>> KEY `IDX_longitude` (`longitude`)
> >>>> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> >>>>
> >>>> CREATE TABLE `cities`

Re: Slow query - please help

2011-10-05 Thread Johnny Withers
Try adding an index on cities.name, it may prevent the file sort. What was the 
original query time and what is it now?
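
For what it's worth, a FULLTEXT index on name cannot be used to satisfy an
ORDER BY, so a regular (BTREE) index would still be needed. A sketch of what
that might look like (the index names are illustrative, and whether the
filesort actually goes away depends on the rest of the WHERE clause):

ALTER TABLE cities ADD INDEX IDX_name (name);

-- If the query filters on country_code and language_code and sorts by name,
-- a composite index with the filter columns first may serve the ORDER BY:
ALTER TABLE cities
  ADD INDEX IDX_country_lang_name (country_code, language_code, name);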

Sent from my iPad

On Oct 5, 2011, at 4:01 AM, Tompkins Neil  wrote:

> Following my mail below, if anyone can help optimise the query further that
> would be a great help.
> 
> -- Forwarded message --
> From: Tompkins Neil 
> Date: Wed, Oct 5, 2011 at 9:48 AM
> Subject: Re: Slow query - please help
> To: Johnny Withers 
> Cc: "mysql@lists.mysql.com" 
> 
> 
> I just revised my query and now get the following output :
> 
> '1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
> 'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index condition;
> Using where; Using filesort'
> '4', 'DEPENDENT SUBQUERY', 'districts', 'ref',
> 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
> '121', '100.00', 'Using index condition; Using where'
> '3', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
> 'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
> '9982', '100.00', 'Using index condition; Using where'
> '2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
> 'UNQ_folder_url,IDX_country_code,IDX_enabled,IDX_language_code',
> 'IDX_enabled', '1', 'const', '267', '100.00', 'Using index condition; Using
> where'
> 
> After doing this the query speed is acceptable.
> 
> Thanks
> Neil
> 
> On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers  wrote:
> 
>> Can you post the explain extended output of your query?
>> 
>> Sent from my iPad
>> 
>> On Oct 4, 2011, at 2:45 PM, Neil Tompkins 
>> wrote:
>> 
>>> Can anyone help me ?
>>> 
>>> 
>>> Begin forwarded message:
>>> 
>>>> From: Tompkins Neil 
>>>> Date: 30 September 2011 20:23:47 GMT+01:00
>>>> To: mark carson 
>>>> Cc: "[MySQL]" 
>>>> Subject: Re: Slow query - please help
>>>> 
>>> 
>>>> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
>> are below, let me know if you need any more information.
>>>> 
>>>> CREATE TABLE `districts` (
>>>> `district_id` int(11) NOT NULL,
>>>> `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
>>>> `city_id` int(11) DEFAULT NULL,
>>>> `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>>>> `latitude` double DEFAULT NULL,
>>>> `longitude` double DEFAULT NULL,
>>>> `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
>>>> `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
>>>> PRIMARY KEY (`district_id`,`language_code`),
>>>> UNIQUE KEY `UNQ_folder_url` (`folder_url`),
>>>> KEY `IDX_country_code` (`country_code`),
>>>> KEY `IDX_enabled` (`enabled`),
>>>> KEY `IDX_folder_url` (`folder_url`),
>>>> KEY `IDX_language_code` (`language_code`),
>>>> KEY `IDX_latitude` (`latitude`),
>>>> KEY `IDX_longitude` (`longitude`)
>>>> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
>>>> 
>>>> CREATE TABLE `cities` (
>>>> `city_id` int(11) NOT NULL,
>>>> `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
>>>> `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>>>> `nr_hotels` int(11) DEFAULT NULL,
>>>> `latitude` double DEFAULT NULL,
>>>> `longitude` double DEFAULT NULL,
>>>> `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>> `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>>&

Fwd: Slow query - please help

2011-10-05 Thread Tompkins Neil
Following my mail below, if anyone can help optimise the query further that
would be a great help.

-- Forwarded message --
From: Tompkins Neil 
Date: Wed, Oct 5, 2011 at 9:48 AM
Subject: Re: Slow query - please help
To: Johnny Withers 
Cc: "mysql@lists.mysql.com" 


I just revised my query and now get the following output :

'1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index condition;
Using where; Using filesort'
'4', 'DEPENDENT SUBQUERY', 'districts', 'ref',
'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
'121', '100.00', 'Using index condition; Using where'
'3', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
'9982', '100.00', 'Using index condition; Using where'
'2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
'UNQ_folder_url,IDX_country_code,IDX_enabled,IDX_language_code',
'IDX_enabled', '1', 'const', '267', '100.00', 'Using index condition; Using
where'

After doing this the query speed is acceptable.

Thanks
Neil

On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers  wrote:

> Can you post the explain extended output of your query?
>
> Sent from my iPad
>
> On Oct 4, 2011, at 2:45 PM, Neil Tompkins 
> wrote:
>
> > Can anyone help me ?
> >
> >
> > Begin forwarded message:
> >
> >> From: Tompkins Neil 
> >> Date: 30 September 2011 20:23:47 GMT+01:00
> >> To: mark carson 
> >> Cc: "[MySQL]" 
> >> Subject: Re: Slow query - please help
> >>
> >
> >> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
> are below, let me know if you need any more information.
> >>
> >> CREATE TABLE `districts` (
> >>  `district_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
> >>  `city_id` int(11) DEFAULT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`district_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>  KEY `IDX_country_code` (`country_code`),
> >>  KEY `IDX_enabled` (`enabled`),
> >>  KEY `IDX_folder_url` (`folder_url`),
> >>  KEY `IDX_language_code` (`language_code`),
> >>  KEY `IDX_latitude` (`latitude`),
> >>  KEY `IDX_longitude` (`longitude`)
> >> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> >>
> >> CREATE TABLE `cities` (
> >>  `city_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `nr_hotels` int(11) DEFAULT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`city_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder

Re: Slow query - please help

2011-10-05 Thread Tompkins Neil
I just revised my query and now get the following output :

'1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index condition;
Using where; Using filesort'
'4', 'DEPENDENT SUBQUERY', 'districts', 'ref',
'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
'121', '100.00', 'Using index condition; Using where'
'3', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
'IDX_country_code,IDX_language_code', 'IDX_country_code', '7', 'const',
'9982', '100.00', 'Using index condition; Using where'
'2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
'UNQ_folder_url,IDX_country_code,IDX_enabled,IDX_language_code',
'IDX_enabled', '1', 'const', '267', '100.00', 'Using index condition; Using
where'

After doing this the query speed is acceptable.

Thanks
Neil

On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers  wrote:

> Can you post the explain extended output of your query?
>
> Sent from my iPad
>
> On Oct 4, 2011, at 2:45 PM, Neil Tompkins 
> wrote:
>
> > Can anyone help me ?
> >
> >
> > Begin forwarded message:
> >
> >> From: Tompkins Neil 
> >> Date: 30 September 2011 20:23:47 GMT+01:00
> >> To: mark carson 
> >> Cc: "[MySQL]" 
> >> Subject: Re: Slow query - please help
> >>
> >
> >> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
> are below, let me know if you need any more information.
> >>
> >> CREATE TABLE `districts` (
> >>  `district_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
> >>  `city_id` int(11) DEFAULT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`district_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>  KEY `IDX_country_code` (`country_code`),
> >>  KEY `IDX_enabled` (`enabled`),
> >>  KEY `IDX_folder_url` (`folder_url`),
> >>  KEY `IDX_language_code` (`language_code`),
> >>  KEY `IDX_latitude` (`latitude`),
> >>  KEY `IDX_longitude` (`longitude`)
> >> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> >>
> >> CREATE TABLE `cities` (
> >>  `city_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `nr_hotels` int(11) DEFAULT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`city_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>  KEY `IDX_country_code` (`country_code`),
> >>  KEY `IDX_enabled` (`enabled`),
> >>  KEY `IDX_folder_url` (`folder_url`),
> >>  KEY `IDX_language_code` (`language_code`),
> >>  KEY `IDX_latitude` (`latitude`),
> >>  K

Re: Slow query - please help

2011-10-05 Thread Tompkins Neil
Hi

Here is the response :

'1', 'PRIMARY', 'cities', 'ref', 'IDX_country_code,IDX_language_code',
'IDX_country_code', '7', 'const', '2088', '100.00', 'Using index condition;
Using where; Using filesort'
'4', 'DEPENDENT SUBQUERY', 'districts', 'ALL', 'IDX_language_code', NULL,
NULL, NULL, '1207', '99.75', 'Using where'
'3', 'DEPENDENT SUBQUERY', 'hotels', 'ALL', 'IDX_language_code', NULL, NULL,
NULL, '163102', '100.00', 'Using where'
'2', 'DEPENDENT SUBQUERY', 'hotels', 'ref',
'UNQ_folder_url,IDX_enabled,IDX_language_code', 'IDX_enabled', '1', 'const',
'267', '100.00', 'Using index condition; Using where'


Thanks
Neil

On Wed, Oct 5, 2011 at 3:12 AM, Johnny Withers  wrote:

> Can you post the explain extended output of your query?
>
> Sent from my iPad
>
> On Oct 4, 2011, at 2:45 PM, Neil Tompkins 
> wrote:
>
> > Can anyone help me ?
> >
> >
> > Begin forwarded message:
> >
> >> From: Tompkins Neil 
> >> Date: 30 September 2011 20:23:47 GMT+01:00
> >> To: mark carson 
> >> Cc: "[MySQL]" 
> >> Subject: Re: Slow query - please help
> >>
> >
> >> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions
> are below, let me know if you need any more information.
> >>
> >> CREATE TABLE `districts` (
> >>  `district_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
> >>  `city_id` int(11) DEFAULT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`district_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>  KEY `IDX_country_code` (`country_code`),
> >>  KEY `IDX_enabled` (`enabled`),
> >>  KEY `IDX_folder_url` (`folder_url`),
> >>  KEY `IDX_language_code` (`language_code`),
> >>  KEY `IDX_latitude` (`latitude`),
> >>  KEY `IDX_longitude` (`longitude`)
> >> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> >>
> >> CREATE TABLE `cities` (
> >>  `city_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
> >>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
> >>  `nr_hotels` int(11) DEFAULT NULL,
> >>  `latitude` double DEFAULT NULL,
> >>  `longitude` double DEFAULT NULL,
> >>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
> >>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
> >>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
> >>  PRIMARY KEY (`city_id`,`language_code`),
> >>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
> >>  KEY `IDX_country_code` (`country_code`),
> >>  KEY `IDX_enabled` (`enabled`),
> >>  KEY `IDX_folder_url` (`folder_url`),
> >>  KEY `IDX_language_code` (`language_code`),
> >>  KEY `IDX_latitude` (`latitude`),
> >>  KEY `IDX_longitude` (`longitude`)
> >> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> >>
> >>
> >> CREATE TABLE `hotels` (
> >>  `hotel_id` int(11) NOT NULL,
> >>  `language_code` char(2) COLLATE utf8_

Re: Slow query - please help

2011-10-04 Thread Johnny Withers
Can you post the explain extended output of your query?

Sent from my iPad

On Oct 4, 2011, at 2:45 PM, Neil Tompkins  wrote:

> Can anyone help me ?
> 
> 
> Begin forwarded message:
> 
>> From: Tompkins Neil 
>> Date: 30 September 2011 20:23:47 GMT+01:00
>> To: mark carson 
>> Cc: "[MySQL]" 
>> Subject: Re: Slow query - please help
>> 
> 
>> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions are 
>> below, let me know if you need any more information.
>> 
>> CREATE TABLE `districts` (
>>  `district_id` int(11) NOT NULL,
>>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
>>  `city_id` int(11) DEFAULT NULL,
>>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>>  `latitude` double DEFAULT NULL,
>>  `longitude` double DEFAULT NULL,
>>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
>>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
>>  PRIMARY KEY (`district_id`,`language_code`),
>>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
>>  KEY `IDX_country_code` (`country_code`),
>>  KEY `IDX_enabled` (`enabled`),
>>  KEY `IDX_folder_url` (`folder_url`),
>>  KEY `IDX_language_code` (`language_code`),
>>  KEY `IDX_latitude` (`latitude`),
>>  KEY `IDX_longitude` (`longitude`)
>> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
>> 
>> CREATE TABLE `cities` (
>>  `city_id` int(11) NOT NULL,
>>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
>>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>>  `nr_hotels` int(11) DEFAULT NULL,
>>  `latitude` double DEFAULT NULL,
>>  `longitude` double DEFAULT NULL,
>>  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
>>  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
>>  PRIMARY KEY (`city_id`,`language_code`),
>>  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
>>  KEY `IDX_country_code` (`country_code`),
>>  KEY `IDX_enabled` (`enabled`),
>>  KEY `IDX_folder_url` (`folder_url`),
>>  KEY `IDX_language_code` (`language_code`),
>>  KEY `IDX_latitude` (`latitude`),
>>  KEY `IDX_longitude` (`longitude`)
>> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
>> 
>> 
>> CREATE TABLE `hotels` (
>>  `hotel_id` int(11) NOT NULL,
>>  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
>>  `address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `city_id` int(11) DEFAULT NULL,
>>  `class_is_estimated` tinyint(4) DEFAULT NULL,
>>  `class` tinyint(4) DEFAULT NULL,
>>  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `currencycode` char(3) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `district` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `hoteltype_id` int(11) DEFAULT NULL,
>>  `is_closed` tinyint(4) DEFAULT NULL,
>>  `latitude` double DEFAULT NULL,
>>  `longitude` double DEFAULT NULL,
>>  `maxrate` double DEFAULT NULL,
>>  `minrate` double DEFAULT NULL,
>>  `original_name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `nr_rooms` int(11) DEFAULT NULL,
>>  `preferred` int(11) DEFAULT NULL,
>>  `ranking` int(11) DEFAULT NULL,
>>  `review_nr` int(11) DEFAULT NULL,
>>  `review_score` double DEFAULT NULL,
>>  `zip` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `checkin_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `checkin_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>>  `checkout_from` varchar(15) COLLATE

Fwd: Slow query - please help

2011-10-04 Thread Neil Tompkins
Can anyone help me ?


Begin forwarded message:

> From: Tompkins Neil 
> Date: 30 September 2011 20:23:47 GMT+01:00
> To: mark carson 
> Cc: "[MySQL]" 
> Subject: Re: Slow query - please help
> 

> I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions are 
> below, let me know if you need any more information.
> 
> CREATE TABLE `districts` (
>   `district_id` int(11) NOT NULL,
>   `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
>   `city_id` int(11) DEFAULT NULL,
>   `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>   `latitude` double DEFAULT NULL,
>   `longitude` double DEFAULT NULL,
>   `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
>   `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
>   PRIMARY KEY (`district_id`,`language_code`),
>   UNIQUE KEY `UNQ_folder_url` (`folder_url`),
>   KEY `IDX_country_code` (`country_code`),
>   KEY `IDX_enabled` (`enabled`),
>   KEY `IDX_folder_url` (`folder_url`),
>   KEY `IDX_language_code` (`language_code`),
>   KEY `IDX_latitude` (`latitude`),
>   KEY `IDX_longitude` (`longitude`)
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> 
> CREATE TABLE `cities` (
>   `city_id` int(11) NOT NULL,
>   `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
>   `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
>   `nr_hotels` int(11) DEFAULT NULL,
>   `latitude` double DEFAULT NULL,
>   `longitude` double DEFAULT NULL,
>   `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
>   `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
>   PRIMARY KEY (`city_id`,`language_code`),
>   UNIQUE KEY `UNQ_folder_url` (`folder_url`),
>   KEY `IDX_country_code` (`country_code`),
>   KEY `IDX_enabled` (`enabled`),
>   KEY `IDX_folder_url` (`folder_url`),
>   KEY `IDX_language_code` (`language_code`),
>   KEY `IDX_latitude` (`latitude`),
>   KEY `IDX_longitude` (`longitude`)
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
> 
> 
> CREATE TABLE `hotels` (
>   `hotel_id` int(11) NOT NULL,
>   `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
>   `address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `city_id` int(11) DEFAULT NULL,
>   `class_is_estimated` tinyint(4) DEFAULT NULL,
>   `class` tinyint(4) DEFAULT NULL,
>   `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `currencycode` char(3) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `district` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `hoteltype_id` int(11) DEFAULT NULL,
>   `is_closed` tinyint(4) DEFAULT NULL,
>   `latitude` double DEFAULT NULL,
>   `longitude` double DEFAULT NULL,
>   `maxrate` double DEFAULT NULL,
>   `minrate` double DEFAULT NULL,
>   `original_name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `nr_rooms` int(11) DEFAULT NULL,
>   `preferred` int(11) DEFAULT NULL,
>   `ranking` int(11) DEFAULT NULL,
>   `review_nr` int(11) DEFAULT NULL,
>   `review_score` double DEFAULT NULL,
>   `zip` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `checkin_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `checkin_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `checkout_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `checkout_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
>   `folder_url` varchar(255) COLLATE utf

Re: Slow query - please help

2011-09-30 Thread Tompkins Neil
I downloaded version mysql-5.6.2-m5-win32.msi and the table definitions are
below, let me know if you need any more information.

CREATE TABLE `districts` (
  `district_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
  `city_id` int(11) DEFAULT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`district_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
  KEY `IDX_country_code` (`country_code`),
  KEY `IDX_enabled` (`enabled`),
  KEY `IDX_folder_url` (`folder_url`),
  KEY `IDX_language_code` (`language_code`),
  KEY `IDX_latitude` (`latitude`),
  KEY `IDX_longitude` (`longitude`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

CREATE TABLE `cities` (
  `city_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `nr_hotels` int(11) DEFAULT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(55) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`city_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
  KEY `IDX_country_code` (`country_code`),
  KEY `IDX_enabled` (`enabled`),
  KEY `IDX_folder_url` (`folder_url`),
  KEY `IDX_language_code` (`language_code`),
  KEY `IDX_latitude` (`latitude`),
  KEY `IDX_longitude` (`longitude`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;


CREATE TABLE `hotels` (
  `hotel_id` int(11) NOT NULL,
  `language_code` char(2) COLLATE utf8_unicode_ci NOT NULL DEFAULT 'en',
  `address` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `city_id` int(11) DEFAULT NULL,
  `class_is_estimated` tinyint(4) DEFAULT NULL,
  `class` tinyint(4) DEFAULT NULL,
  `country_code` char(2) COLLATE utf8_unicode_ci DEFAULT NULL,
  `currencycode` char(3) COLLATE utf8_unicode_ci DEFAULT NULL,
  `district` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `hoteltype_id` int(11) DEFAULT NULL,
  `is_closed` tinyint(4) DEFAULT NULL,
  `latitude` double DEFAULT NULL,
  `longitude` double DEFAULT NULL,
  `maxrate` double DEFAULT NULL,
  `minrate` double DEFAULT NULL,
  `original_name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `name` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `nr_rooms` int(11) DEFAULT NULL,
  `preferred` int(11) DEFAULT NULL,
  `ranking` int(11) DEFAULT NULL,
  `review_nr` int(11) DEFAULT NULL,
  `review_score` double DEFAULT NULL,
  `zip` varchar(45) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkin_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkin_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkout_from` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `checkout_to` varchar(15) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_title` varchar(70) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_keywords` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `meta_description` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `link_text` varchar(155) COLLATE utf8_unicode_ci DEFAULT NULL,
  `folder_url` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `short_profile` varchar(255) COLLATE utf8_unicode_ci DEFAULT NULL,
  `enabled` enum('Y','N') COLLATE utf8_unicode_ci NOT NULL DEFAULT 'N',
  `last_changed` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`hotel_id`,`language_code`),
  UNIQUE KEY `UNQ_folder_url` (`folder_url`),
  KEY `IDX_country_code` (`country_code`),
  KEY `IDX_enabled` (`enabled`),
  KEY `IDX_language_code` (`language_code`),
  KEY `IDX_latitude` (`latitude`),
  KEY `IDX_longitude` (`longitude`),
  KEY `IDX_name` (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

On Fri, Sep 30, 2011 at 6:08 PM, mark carson  wrote:

> Hi
>
> You need to provide the MySQL version and the table/key/index layout in
> order to get a response.
>
>

Slow query - please help

2011-09-30 Thread Tompkins Neil
Hi

I have the following query:

SELECT city_id, name, meta_title, meta_description, meta_keywords,
country_code, link_text, folder_url, enabled, last_changed, nr_hotels,
(SELECT COUNT(hotels.hotel_id) FROM hotels WHERE hotels.city_id =
cities.city_id AND hotels.country_code = 'gb' AND hotels.enabled = 'Y' AND
hotels.folder_url IS NOT NULL AND hotels.language_code = 'en') AS
hotel_count,
(SELECT COUNT(hotels.hotel_id) FROM hotels WHERE hotels.city_id =
cities.city_id AND hotels.country_code = 'gb' AND hotels.language_code =
'en') AS available_hotel_count,
(SELECT COUNT(districts.city_id) FROM districts WHERE districts.city_id =
cities.city_id AND districts.language_code = 'en' AND districts.country_code
= 'gb') AS district_count
FROM cities WHERE language_code = 'en' AND country_code = 'gb'
ORDER BY cities.name ASC , cities.city_id ASC

Previously the tables were InnoDB with foreign keys and the query was
pretty much instant. Now I've changed the table format to MyISAM (and
obviously removed the foreign keys), and the query takes forever to execute
using the same data. Can anyone help and tell me where I've gone wrong?

Thanks
Neil
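
One likely difference: InnoDB requires an index on each foreign key column in
the referencing table (and creates one automatically if it is missing), and
those indexes go away when the constraints are dropped. The correlated
subqueries above filter hotels and districts by city_id, yet the MyISAM
CREATE TABLE statements posted in this thread have no index on that column.
A sketch of restoring them (the index names are illustrative):

ALTER TABLE hotels    ADD INDEX IDX_city_id (city_id);
ALTER TABLE districts ADD INDEX IDX_city_id (city_id);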


Re: Help with slow query

2011-03-10 Thread Andy Wallace



On 3/10/11 10:46 AM, Shawn Green (MySQL) wrote:

On 3/10/2011 12:32, Jim McNeely wrote:

Rhino,

Thanks for the help and time! Actually, I thought the same thing, but what's 
weird is that is the only thing that doesn't slow it down.
If I take out all of the join clauses EXCEPT that one the query runs virtually 
instantaneously. for some reason it will use the index in
that case and it works. If I take out everything like this:

SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
ON a.IdPatient = p.IdPatient
WHERE a.ApptDate>= '2009-03-01';



1) Verify that the indexes on `patient_` haven't been disabled

SHOW INDEXES FROM `patient_`;

http://dev.mysql.com/doc/refman/5.5/en/show-index.html

2) Verify that the data types of `Appt_`.`IdPatient` and `patient_`.`IdPatient` 
are not incompatible. (for example: one is varchar, the
other int)


This last one can be HUGE. I tracked a big performance issue to this exact
problem - the columns used in the join had the same name, but different
data types. Correcting to be the same type (both ints) made a terrific
performance increase.
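
On 5.0 and later, a quick way to compare the two join columns side by side
(table and column names taken from this thread) is:

SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_NAME IN ('Appt_', 'patient_')
  AND COLUMN_NAME = 'IdPatient';

For character columns, a character set or collation mismatch between the two
sides can defeat the index in the same way a type mismatch does.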



--
Andy Wallace
iHOUSEweb, Inc.
awall...@ihouseweb.com
(866) 645-7700 ext 219
--
There are two ways to build software:
Make it so simple that there are obviously no bugs,
or make it so complex that there are no obvious bugs.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help with slow query

2011-03-10 Thread Shawn Green (MySQL)

On 3/10/2011 13:12, Jim McNeely wrote:

Shawn,

This is the first thing that I though as well, but here is a portion from the 
show create table for patient_:


PRIMARY KEY (`zzk`),
  KEY `IdPatient` (`IdPatient`),
  KEY `SSN` (`SSN`),
  KEY `IdLastword` (`IdLastword`),
  KEY `DOB` (`DateOfBirth`),
  KEY `NameFirst` (`NameFirst`),
  KEY `NameLast` (`NameLast`)

This extremely simple join is still massively slow.

Jim

On Mar 10, 2011, at 10:00 AM, Shawn Green (MySQL) wrote:


On 3/10/2011 12:32, Jim McNeely wrote:

Rhino,

Thanks for the help and time! Actually, I thought the same thing, but what's 
weird is that is the only thing that doesn't slow it down. If I take out all of 
the join clauses EXCEPT that one the query runs virtually instantaneously. for 
some reason it will use the index in that case and it works. If I take out 
everything like this:

SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
  ON a.IdPatient = p.IdPatient
WHERE a.ApptDate>= '2009-03-01';



1) Verify that the indexes on `patient_` haven't been disabled

SHOW INDEXES FROM `patient_`;

http://dev.mysql.com/doc/refman/5.5/en/show-index.html

2) Verify that the data types of `Appt_`.`IdPatient` and 
`patient_`.`IdPatient` are not incompatible. (for example: one is 
varchar, the other int)


Thanks,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help with slow query

2011-03-10 Thread Jim McNeely
Shawn,

This is the first thing that I though as well, but here is a portion from the 
show create table for patient_:


PRIMARY KEY (`zzk`),
 KEY `IdPatient` (`IdPatient`),
 KEY `SSN` (`SSN`),
 KEY `IdLastword` (`IdLastword`),
 KEY `DOB` (`DateOfBirth`),
 KEY `NameFirst` (`NameFirst`),
 KEY `NameLast` (`NameLast`)

This extremely simple join is still massively slow.

Jim

On Mar 10, 2011, at 10:00 AM, Shawn Green (MySQL) wrote:

> On 3/10/2011 12:32, Jim McNeely wrote:
>> Rhino,
>> 
>> Thanks for the help and time! Actually, I thought the same thing, but what's 
>> weird is that is the only thing that doesn't slow it down. If I take out all 
>> of the join clauses EXCEPT that one the query runs virtually 
>> instantaneously. for some reason it will use the index in that case and it 
>> works. If I take out everything like this:
>> 
>> SELECT a.IdAppt, a.IdPatient,
>> p.NameLast, p.NameFirst, p.NameMI
>> 
>> from Appt_ a
>> LEFT JOIN patient_ p
>>  ON a.IdPatient = p.IdPatient
>> WHERE a.ApptDate>= '2009-03-01';
>> 
>> It is still utterly slow. EXPLAIN looks like this:
>> 
>> ++-+---+---+---+--+-+--++-+
>> | id | select_type | table | type  | possible_keys | key  | key_len | 
>> ref  | rows   | Extra   |
>> ++-+---+---+---+--+-+--++-+
>> |  1 | SIMPLE  | a | range | apptdate  | apptdate | 4   | 
>> NULL | 296166 | Using where |
>> |  1 | SIMPLE  | p | ALL   | NULL  | NULL | NULL| 
>> NULL | 262465 | |
>> ++-+---+---+---+--+-+--++-+
>> 
>> But, very good try. I thought this might be it as well.
>> 
> ... snip ...
> 
> According to this report, there are no indexes on the `patient_` table that 
> include the column `IdPatient` as the first column. Fix that and this query 
> should be much faster.
> 
> -- 
> Shawn Green
> MySQL Principal Technical Support Engineer
> Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
> Office: Blountville, TN
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
> 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help with slow query

2011-03-10 Thread Shawn Green (MySQL)

On 3/10/2011 12:32, Jim McNeely wrote:

Rhino,

Thanks for the help and time! Actually, I thought the same thing, but what's 
weird is that is the only thing that doesn't slow it down. If I take out all of 
the join clauses EXCEPT that one the query runs virtually instantaneously. for 
some reason it will use the index in that case and it works. If I take out 
everything like this:

SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
  ON a.IdPatient = p.IdPatient
WHERE a.ApptDate>= '2009-03-01';

It is still utterly slow. EXPLAIN looks like this:

++-+---+---+---+--+-+--++-+
| id | select_type | table | type  | possible_keys | key  | key_len | ref  
| rows   | Extra   |
++-+---+---+---+--+-+--++-+
|  1 | SIMPLE  | a | range | apptdate  | apptdate | 4   | NULL 
| 296166 | Using where |
|  1 | SIMPLE  | p | ALL   | NULL  | NULL | NULL| NULL 
| 262465 | |
++-+---+---+---+--+-+--++-+

But, very good try. I thought this might be it as well.


... snip ...

According to this report, there are no indexes on the `patient_` table 
that include the column `IdPatient` as the first column. Fix that and 
this query should be much faster.


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help with slow query

2011-03-10 Thread mos

If the optimizer chooses the wrong index, you can tell it what index to use.

SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a force index(id_patient)
LEFT JOIN patient_ p
 ON a.IdPatient = p.IdPatient

WHERE a.ApptDate >= '2009-03-01';

See http://dev.mysql.com/doc/refman/5.1/en/index-hints.html

Mike

At 11:32 AM 3/10/2011, Jim McNeely wrote:

Rhino,

Thanks for the help and time! Actually, I thought the same thing, but 
what's weird is that is the only thing that doesn't slow it down. If I 
take out all of the join clauses EXCEPT that one the query runs virtually 
instantaneously. for some reason it will use the index in that case and it 
works. If I take out everything like this:


SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
 ON a.IdPatient = p.IdPatient
WHERE a.ApptDate >= '2009-03-01';

It is still utterly slow. EXPLAIN looks like this:

++-+---+---+---+--+-+--++-+
| id | select_type | table | type  | possible_keys | key  | key_len | 
ref  | rows   | Extra   |

++-+---+---+---+--+-+--++-+
|  1 | SIMPLE  | a | range | apptdate  | apptdate | 4   | 
NULL | 296166 | Using where |
|  1 | SIMPLE  | p | ALL   | NULL  | NULL | NULL| 
NULL | 262465 | |

++-+---+---+---+--+-+--++-+

But, very good try. I thought this might be it as well.

Thanks,

Jim McNeely

On Mar 10, 2011, at 9:05 AM, Rhino wrote:

>
> What I'm about to say may be completely out to lunch so don't be afraid 
to dismiss it. I'm more a DB2 guy than a MySQL guy and I'm getting rusty 
on both but I've always been struck by how similar the two databases are. 
Therefore, I want to offer an insight on why this query would not perform 
terribly well in DB2. I simply don't know if it is applicable to MySQL.

>
> In DB2, using functions on predicates (conditions in a WHERE clause), 
prevents DB2 from using an index to satisfy that predicate. (Or at least 
it used to: I'm not certain if that has been remedied in recent versions 
of the DB2 optimizer.) Therefore, the CONCAT() function in the line
> "AND CONCAT(a.IdAppt, '0') = c.IdApptType" would ensure that no index 
on the IdAppt column would be used to find the rows of the table that 
satisfied that condition.

>
> My suggestion is that you try rewriting that condition to avoid using 
CONCAT() - or any other function - and see if that helps the performance 
of your query. That would require modifying your data to append a zero to 
the end of the existing date in IdApptType column, which may or may not 
be a reasonable thing to do. You'll have to decide about that.

>
> Again, I could be all wet here so don't have me killed if I'm wrong 
about this :-) I'm just trying to help ;-)

>
> --
> Rhino
>
> On 2011-03-10 11:38, Jim McNeely wrote:
>> Shawn,
>>
>> Thanks for the great help! It still is not working. I did an EXPLAIN 
on this query with your amended split out join statements and got this:

>>
>> 
++-+---+---+---++-+--++-+
>> | id | select_type | table | type  | possible_keys | key| 
key_len | ref  | rows   | Extra   |
>> 
++-+---+---+---++-+--++-+
>> |  1 | SIMPLE  | a | range | apptdate  | apptdate   | 
4   | NULL | 296148 | Using where |
>> |  1 | SIMPLE  | p | ALL   | NULL  | NULL   | 
NULL| NULL | 262462 | |
>> |  1 | SIMPLE  | t | ALL   | NULL  | NULL   | 
NULL| NULL | 311152 | |
>> |  1 | SIMPLE  | c | ref   | IdApptType| IdApptType | 
51  | func |  1 | |
>> |  1 | SIMPLE  | af| ALL   | NULL  | NULL   | 
NULL| NULL |   5680 | |
>> 
++-+---+---+---++-+--++-+

>>
>> What I'm not catching is why it says there is no key it can use for 
the patient table; here is a portion of the show create:

>>
>> PRIMARY KEY (`zzk`),
>>   KEY `IdPatient` (`IdPatient`),
>>   KEY `SSN` (`SSN`),
>>   KEY `IdLastword` (`IdLastword`),
>>   KEY `DOB` (`DateOfBirth`),
>>   KEY `NameFirst` (`NameFirst`),
>>   KEY `NameLast` (`NameLast`)
>>
>> So, the IdPatient is at least a POSSIBLE key, right?
>>
>> On Mar 9, 2011, at 8:34 PM, Shawn Green (MySQL) wrote:
>>
>>> Hi Jim,
>>>
>>> On 3/9/2011 17:57, Jim McNeely wrote:
 I am trying to set up an export query which is executing very 
slowly, and I was hoping I could get some help. Here is the query:


 SELECT a.IdAppt, a.IdPatient,
 p.NameLast, p.NameFirst, p.NameMI,
 a.IdProcedure, a.Pro

Re: Help with slow query

2011-03-10 Thread Jim McNeely
Rhino,

Thanks for the help and time! Actually, I thought the same thing, but what's 
weird is that it is the only thing that doesn't slow it down. If I take out all of 
the join clauses EXCEPT that one, the query runs virtually instantaneously. For 
some reason it will use the index in that case and it works. If I take out 
everything like this:

SELECT a.IdAppt, a.IdPatient, 
p.NameLast, p.NameFirst, p.NameMI

from Appt_ a
LEFT JOIN patient_ p
 ON a.IdPatient = p.IdPatient
WHERE a.ApptDate >= '2009-03-01';

It is still utterly slow. EXPLAIN looks like this:

+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key      | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+
|  1 | SIMPLE      | a     | range | apptdate      | apptdate | 4       | NULL | 296166 | Using where |
|  1 | SIMPLE      | p     | ALL   | NULL          | NULL     | NULL    | NULL | 262465 |             |
+----+-------------+-------+-------+---------------+----------+---------+------+--------+-------------+

But, very good try. I thought this might be it as well.

Thanks,

Jim McNeely

On Mar 10, 2011, at 9:05 AM, Rhino wrote:

> 
> What I'm about to say may be completely out to lunch so don't be afraid to 
> dismiss it. I'm more a DB2 guy than a MySQL guy and I'm getting rusty on both 
> but I've always been struck by how similar the two databases are. Therefore, I 
> want to offer an insight on why this query would not perform terribly well in 
> DB2. I simply don't know if it is applicable to MySQL.
> 
> In DB2, using functions on predicates (conditions in a WHERE clause), 
> prevents DB2 from using an index to satisfy that predicate. (Or at least it 
> used to: I'm not certain if that has been remedied in recent versions of the 
> DB2 optimizer.) Therefore, the CONCAT() function in the line
> "AND CONCAT(a.IdAppt, '0') = c.IdApptType" would ensure that no index on the 
> IdAppt column would be used to find the rows of the table that satisfied that 
> condition.
> 
> My suggestion is that you try rewriting that condition to avoid using 
> CONCAT() - or any other function - and see if that helps the performance of 
> your query. That would require modifying your data to append a zero to the 
> end of the existing date in IdApptType column, which may or may not be a 
> reasonable thing to do. You'll have to decide about that.
> 
> Again, I could be all wet here so don't have me killed if I'm wrong about 
> this :-) I'm just trying to help ;-)
> 
> --
> Rhino
> 
> On 2011-03-10 11:38, Jim McNeely wrote:
>> Shawn,
>> 
>> Thanks for the great help! It still is not working. I did an EXPLAIN on this 
>> query with your amended split out join statements and got this:
>> 
>> ++-+---+---+---++-+--++-+
>> | id | select_type | table | type  | possible_keys | key| key_len | 
>> ref  | rows   | Extra   |
>> ++-+---+---+---++-+--++-+
>> |  1 | SIMPLE  | a | range | apptdate  | apptdate   | 4   | 
>> NULL | 296148 | Using where |
>> |  1 | SIMPLE  | p | ALL   | NULL  | NULL   | NULL| 
>> NULL | 262462 | |
>> |  1 | SIMPLE  | t | ALL   | NULL  | NULL   | NULL| 
>> NULL | 311152 | |
>> |  1 | SIMPLE  | c | ref   | IdApptType| IdApptType | 51  | 
>> func |  1 | |
>> |  1 | SIMPLE  | af| ALL   | NULL  | NULL   | NULL| 
>> NULL |   5680 | |
>> ++-+---+---+---++-+--++-+
>> 
>> What I'm not catching is why it says there is no key it can use for the 
>> patient table; here is a portion of the show create:
>> 
>> PRIMARY KEY (`zzk`),
>>   KEY `IdPatient` (`IdPatient`),
>>   KEY `SSN` (`SSN`),
>>   KEY `IdLastword` (`IdLastword`),
>>   KEY `DOB` (`DateOfBirth`),
>>   KEY `NameFirst` (`NameFirst`),
>>   KEY `NameLast` (`NameLast`)
>> 
>> So, the IdPatient is at least a POSSIBLE key, right?
>> 
>> On Mar 9, 2011, at 8:34 PM, Shawn Green (MySQL) wrote:
>> 
>>> Hi Jim,
>>> 
>>> On 3/9/2011 17:57, Jim McNeely wrote:
 I am trying to set up an export query which is executing very slowly, and 
 I was hoping I could get some help. Here is the query:
 
 SELECT a.IdAppt, a.IdPatient,
 p.NameLast, p.NameFirst, p.NameMI,
 a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, " ", a.ApptTimeOut) AS 
 CHAR)
 ApptDateTime, a.ApptLenMin Duration,
 a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type,
 t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9,
 '??' Diagnosis_free_test
 
 from Appt_ a LEFT JOIN 

Re: Help with slow query

2011-03-10 Thread Jim McNeely
Shawn,

Thanks for the great help! It still is not working. I did an EXPLAIN on this 
query with your amended split out join statements and got this:

+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
| id | select_type | table | type  | possible_keys | key        | key_len | ref  | rows   | Extra       |
+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+
|  1 | SIMPLE      | a     | range | apptdate      | apptdate   | 4       | NULL | 296148 | Using where |
|  1 | SIMPLE      | p     | ALL   | NULL          | NULL       | NULL    | NULL | 262462 |             |
|  1 | SIMPLE      | t     | ALL   | NULL          | NULL       | NULL    | NULL | 311152 |             |
|  1 | SIMPLE      | c     | ref   | IdApptType    | IdApptType | 51      | func |      1 |             |
|  1 | SIMPLE      | af    | ALL   | NULL          | NULL       | NULL    | NULL |   5680 |             |
+----+-------------+-------+-------+---------------+------------+---------+------+--------+-------------+

What I'm not catching is why it says there is no key it can use for the patient 
table; here is a portion of the show create:

PRIMARY KEY (`zzk`),
  KEY `IdPatient` (`IdPatient`),
  KEY `SSN` (`SSN`),
  KEY `IdLastword` (`IdLastword`),
  KEY `DOB` (`DateOfBirth`),
  KEY `NameFirst` (`NameFirst`),
  KEY `NameLast` (`NameLast`)

So, the IdPatient is at least a POSSIBLE key, right?

On Mar 9, 2011, at 8:34 PM, Shawn Green (MySQL) wrote:

> Hi Jim,
> 
> On 3/9/2011 17:57, Jim McNeely wrote:
>> I am trying to set up an export query which is executing very slowly, and I 
>> was hoping I could get some help. Here is the query:
>> 
>> SELECT a.IdAppt, a.IdPatient,
>> p.NameLast, p.NameFirst, p.NameMI,
>> a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, " ", a.ApptTimeOut) AS 
>> CHAR)
>> ApptDateTime, a.ApptLenMin Duration,
>> a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type,
>> t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9,
>> '??' Diagnosis_free_test
>> 
>> from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af)
>> ON (a.IdPatient = p.IdPatient
>> AND a.IdPatientDate = t.IdPatientDate
>> AND CONCAT(a.IdAppt, '0') = c.IdApptType
>> AND a.IdPriCarePhy = af.IdAffil)
>> WHERE a.ApptDate>= '2009-03-01';
>> 
>> p.IdPatient, t.IdPatientDate, c.IdApptType, and af.IdAffil are all indexed. 
>> Also I selectively took out join parameters until there was nothing but a 
>> join on the patient table, and it was still slow, but when I took that out, 
>> the query was extremely fast. What might I be doing wrong?
>> 
>> Thanks,
>> 
>> Jim McNeely
> 
> The performance problem is with your Cartesian product. I think you meant to 
> write:
> 
> from Appt_ a
> LEFT JOIN patient_ p
>  ON a.IdPatient = p.IdPatient
> LEFT JOIN today_ t
>  ON a.IdPatientDate = t.IdPatientDate
> LEFT JOIN Copy_ c
>  ON CONCAT(a.IdAppt, '0') = c.IdApptType
> LEFT JOIN Affil_ af
>  ON a.IdPriCarePhy = af.IdAffil
> 
> As of 5.0.12, the comma operator for table joins was demoted in the 'order of 
> precedence' for query execution. That means that MySQL became more compliant 
> with the SQL standard but it also means that using a comma-join instead of an 
> explicit ANSI join can result in a Cartesian product more frequently.
> 
> Try my style and compare how it works. If both styles are similarly slow, 
> collect the EXPLAIN plan for this query and share with the list.
> 
> Yours,
> -- 
> Shawn Green
> MySQL Principal Technical Support Engineer
> Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
> Office: Blountville, TN
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql?unsub=j...@newcenturydata.com
> 



Re: Help with slow query

2011-03-09 Thread Shawn Green (MySQL)

Hi Jim,

On 3/9/2011 17:57, Jim McNeely wrote:

I am trying to set up an export query which is executing very slowly, and I was 
hoping I could get some help. Here is the query:

SELECT a.IdAppt, a.IdPatient,
p.NameLast, p.NameFirst, p.NameMI,
a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, " ", a.ApptTimeOut) AS CHAR)
ApptDateTime, a.ApptLenMin Duration,
a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type,
t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9,
'??' Diagnosis_free_test

from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af)
ON (a.IdPatient = p.IdPatient
AND a.IdPatientDate = t.IdPatientDate
AND CONCAT(a.IdAppt, '0') = c.IdApptType
AND a.IdPriCarePhy = af.IdAffil)
WHERE a.ApptDate>= '2009-03-01';

p.IdPatient, t.IdPatientDate, c.IdApptType, and af.IdAffil are all indexed. 
Also I selectively took out join parameters until there was nothing but a join 
on the patient table, and it was still slow, but when I took that out, the 
query was extremely fast. What might I be doing wrong?

Thanks,

Jim McNeely


The performance problem is with your Cartesian product. I think you 
meant to write:


from Appt_ a
LEFT JOIN patient_ p
  ON a.IdPatient = p.IdPatient
LEFT JOIN today_ t
  ON a.IdPatientDate = t.IdPatientDate
LEFT JOIN Copy_ c
  ON CONCAT(a.IdAppt, '0') = c.IdApptType
LEFT JOIN Affil_ af
  ON a.IdPriCarePhy = af.IdAffil

As of 5.0.12, the comma operator for table joins was demoted in the 
'order of precedence' for query execution. That means that MySQL became 
more compliant with the SQL standard but it also means that using a 
comma-join instead of an explicit ANSI join can result in a Cartesian 
product more frequently.


Try my style and compare how it works. If both styles are similarly 
slow, collect the EXPLAIN plan for this query and share with the list.


Yours,
--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Help with slow query

2011-03-09 Thread Jim McNeely
I am trying to set up an export query which is executing very slowly, and I was 
hoping I could get some help. Here is the query:

SELECT a.IdAppt, a.IdPatient, 
p.NameLast, p.NameFirst, p.NameMI, 
a.IdProcedure, a.ProcName, CAST(CONCAT(a.ApptDate, " ", a.ApptTimeOut) AS CHAR) 
ApptDateTime, a.ApptLenMin Duration,
a.Room Primary_Resource_ID, 'R' as Primary_Resource_Type, 
t.Note, c.IdPractAffil DoctorID, af.PractName DoctorName, a.ICD9code ICD9, 
'??' Diagnosis_free_test

from Appt_ a LEFT JOIN (patient_ p, today_ t, Copy_ c, Affil_ af) 
ON (a.IdPatient = p.IdPatient 
AND a.IdPatientDate = t.IdPatientDate 
AND CONCAT(a.IdAppt, '0') = c.IdApptType 
AND a.IdPriCarePhy = af.IdAffil)
WHERE a.ApptDate >= '2009-03-01'; 

p.IdPatient, t.IdPatientDate, c.IdApptType, and af.IdAffil are all indexed. 
Also I selectively took out join parameters until there was nothing but a join 
on the patient table, and it was still slow, but when I took that out, the 
query was extremely fast. What might I be doing wrong?

Thanks,

Jim McNeely
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-25 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 6:43 PM, Gavin Towey  wrote:

> If you show the EXPLAIN SELECT .. output, and the table structure, someone
> will be able to give a more definite answer.
>
>
Thanks for the reply Gavin. I actually did place this info in my very first
message on this thread, along with my basic table structure and server
version. Myself and others have just stopped keeping the full,
deeply-nested, quoted thread inside all subsequent messages which is why you
probably haven't seen it.

However, here is the EXPLAIN SELECT from the first message (reformatted for
email):

select_type: SIMPLE
table: recipients
type: ref
possible_keys: messages_fk, employee_idx
key: employee_idx
key_len: 5
ref: const
rows: 222640
Extra: Using where; Using temporary; Using filesort

select_type: SIMPLE
table: messages
type: eq_ref
possible_keys: PRIMARY
key: PRIMARY
key_len: 4
ref: email_archive.recipients.message_id
rows: 1
Extra:

Anyhow, having now copied these tables to another server (MySQL 5.1) and
done some tests (bumping up innodb_buffer_pool_size and playing with
innodb_flush_log_at_trx_commit for my writes and a few other knobs) it is
simply that these somewhat large tables need lots of RAM to perform well,
just as Reindl Harald originally pointed out.

Thanks again for the help everyone!

-- 
Kendall Gifford
zettab...@gmail.com


RE: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Gavin Towey
If you show the EXPLAIN SELECT .. output, and the table structure, someone will 
be able to give a more definite answer.

-Original Message-
From: Kendall Gifford [mailto:zettab...@gmail.com]
Sent: Monday, January 24, 2011 2:29 PM
To: mysql@lists.mysql.com
Subject: Re: Slow query on MySQL4 server doing simple inner join of two InnoDB 
tables

On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford wrote:

>
>
> On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe wrote:
>
>> Hi everybody!
>>
>>
>> Shawn Green (MySQL) wrote:
>> > On 1/21/2011 14:21, Kendall Gifford wrote:
>> >> Hello everyone, I've got a database on an old Fedora Core 4 server
>> >> running
>> >> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
>> >> has
>> >> just two (InnoDB) tables:
>> >>
>> >> messages (approx 2.5 million records)
>> >> recipients (approx 6.5 million records)
>> >>
>> >> [[ ... see the original post for the schema details ... ]]
>> >>
>> >>
>> >> I have the following query that is just too slow:
>> >>
>> >>> SELECT messages.* FROM messages
>> >>> INNER JOIN recipients ON recipients.message_id = messages.id
>> >>> WHERE recipients.employee_id = X
>> >>> GROUP BY messages.id
>> >>> ORDER BY sent_at DESC
>> >>> LIMIT 0, 25;
>> >>
>> >> This takes about 44 seconds on average. [[...]]
>> >>
>> >
>> > You need to get rid of the GROUP BY to make this go faster. You can do
>> > that by running two queries, one to pick the list of unique
>> > recipients.message_id values that match your where condition then
>> > another to actually retrieve the message data. [[...]]
>>
>> I don't want to contradict Shawn, but currently I fail to see the need
>> for the GROUP BY: Joining like this
>>
>>  messages INNER JOIN recipients ON recipients.message_id = messages.id
>>  WHERE recipients.employee_id = X
>>
>> can return only one row, unless there are multiple "recipients" records
>> for the same values of "message_id" and "employee_id".
>>
>> I don't know whether that can happen in the poster's application, and
>> whether it would cause trouble if the result line would occur multiple
>> times.
>>
>>
> In my application, there CAN in fact be several "recipients" records with
> both the same "message_id" foreign key value AND the same "employee_id"
> value (some employees may be a recipient of a message several times over via
> alternative addresses and/or aliases). However, as I rework things, I could
> probably rework the application logic to nuke the GROUP BY and just cope, in code,
> with these extra "messages" records in my result set. (Just FYI, the SQL
> query is simply the default query as created by rails or, more specifically,
> ActiveRecord 2.3.9 which I can/will-be optimizing).
>
> I will additionally be moving this database to a new server. However, for
> academic interest, I'll see if I can make time to post the query time(s)
> once I change the app, before moving the database to a new (and better
> configured) server.
>
>
Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

--
Kendall Gifford
zettab...@gmail.com

IMPORTANT: This email message is intended only for the use of the individual to 
whom, or entity to which, it is addressed and may contain information that is 
privileged, confidential and exempt from disclosure under applicable law. If 
you are NOT the intended recipient, you are hereby notified that any use, 
dissemination, distribution or copying of this communication is strictly 
prohibited.  If you have received this communication in error, please reply to 
the sender immediately and permanently delete this email. Thank you.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 2:20 PM, Kendall Gifford wrote:

>
>
> On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe wrote:
>
>> Hi everybody!
>>
>>
>> Shawn Green (MySQL) wrote:
>> > On 1/21/2011 14:21, Kendall Gifford wrote:
>> >> Hello everyone, I've got a database on an old Fedora Core 4 server
>> >> running
>> >> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
>> >> has
>> >> just two (InnoDB) tables:
>> >>
>> >> messages (approx 2.5 million records)
>> >> recipients (approx 6.5 million records)
>> >>
>> >> [[ ... see the original post for the schema details ... ]]
>> >>
>> >>
>> >> I have the following query that is just too slow:
>> >>
>> >>> SELECT messages.* FROM messages
>> >>> INNER JOIN recipients ON recipients.message_id = messages.id
>> >>> WHERE recipients.employee_id = X
>> >>> GROUP BY messages.id
>> >>> ORDER BY sent_at DESC
>> >>> LIMIT 0, 25;
>> >>
>> >> This takes about 44 seconds on average. [[...]]
>> >>
>> >
>> > You need to get rid of the GROUP BY to make this go faster. You can do
>> > that by running two queries, one to pick the list of unique
>> > recipients.message_id values that match your where condition then
>> > another to actually retrieve the message data. [[...]]
>>
>> I don't want to contradict Shawn, but currently I fail to see the need
>> for the GROUP BY: Joining like this
>>
>>  messages INNER JOIN recipients ON recipients.message_id = messages.id
>>  WHERE recipients.employee_id = X
>>
>> can return only one row, unless there are multiple "recipients" records
>> for the same values of "message_id" and "employee_id".
>>
>> I don't know whether that can happen in the poster's application, and
>> whether it would cause trouble if the result line would occur multiple
>> times.
>>
>>
> In my application, there CAN in fact be several "recipients" records with
> both the same "message_id" foreign key value AND the same "employee_id"
> value (some employees may be a recipient of a message several times over via
> alternative addresses and/or aliases). However, as I rework things, I could
> probably rework the application logic to nuke the GROUP BY and just cope, in code,
> with these extra "messages" records in my result set. (Just FYI, the SQL
> query is simply the default query as created by rails or, more specifically,
> ActiveRecord 2.3.9 which I can/will-be optimizing).
>
> I will additionally be moving this database to a new server. However, for
> academic interest, I'll see if I can make time to post the query time(s)
> once I change the app, before moving the database to a new (and better
> configured) server.
>
>
Just an update for posterity, simply removing the GROUP BY clause of my
query above has, overall, no noticeable effect on performance. I suspect
server configuration, as pointed out by Reindl, is too much of a bottleneck
and is what I first need to change (working on that now). Perhaps the
removal of GROUP BY would/will be noticeable if the server configuration for
InnoDB tables wasn't so horrendous. I'll find out...

-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Kendall Gifford
On Mon, Jan 24, 2011 at 3:40 AM, Joerg Bruehe wrote:

> Hi everybody!
>
>
> Shawn Green (MySQL) wrote:
> > On 1/21/2011 14:21, Kendall Gifford wrote:
> >> Hello everyone, I've got a database on an old Fedora Core 4 server
> >> running
> >> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
> >> has
> >> just two (InnoDB) tables:
> >>
> >> messages (approx 2.5 million records)
> >> recipients (approx 6.5 million records)
> >>
> >> [[ ... see the original post for the schema details ... ]]
> >>
> >>
> >> I have the following query that is just too slow:
> >>
> >>> SELECT messages.* FROM messages
> >>> INNER JOIN recipients ON recipients.message_id = messages.id
> >>> WHERE recipients.employee_id = X
> >>> GROUP BY messages.id
> >>> ORDER BY sent_at DESC
> >>> LIMIT 0, 25;
> >>
> >> This takes about 44 seconds on average. [[...]]
> >>
> >
> > You need to get rid of the GROUP BY to make this go faster. You can do
> > that by running two queries, one to pick the list of unique
> > recipients.message_id values that match your where condition then
> > another to actually retrieve the message data. [[...]]
>
> I don't want to contradict Shawn, but currently I fail to see the need
> for the GROUP BY: Joining like this
>
>  messages INNER JOIN recipients ON recipients.message_id = messages.id
>  WHERE recipients.employee_id = X
>
> can return only one row, unless there are multiple "recipients" records
> for the same values of "message_id" and "employee_id".
>
> I don't know whether that can happen in the poster's application, and
> whether it would cause trouble if the result line would occur multiple
> times.
>
>
In my application, there CAN in fact be several "recipients" records with
both the same "message_id" foreign key value AND the same "employee_id"
value (some employees may be a recipient of a message several times over via
alternative addresses and/or aliases). However, as I rework things, I could
probably rework the application logic to nuke the GROUP BY and just cope, in code,
with these extra "messages" records in my result set. (Just FYI, the SQL
query is simply the default query as created by rails or, more specifically,
ActiveRecord 2.3.9 which I can/will-be optimizing).

I will additionally be moving this database to a new server. However, for
academic interest, I'll see if I can make time to post the query time(s)
once I change the app, before moving the database to a new (and better
configured) server.

Thanks for the help everybody.


>
> Regards,
> Jörg
>
>
-- 
Kendall Gifford
zettab...@gmail.com


Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-24 Thread Joerg Bruehe
Hi everybody!


Shawn Green (MySQL) wrote:
> On 1/21/2011 14:21, Kendall Gifford wrote:
>> Hello everyone, I've got a database on an old Fedora Core 4 server
>> running
>> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question
>> has
>> just two (InnoDB) tables:
>>
>> messages (approx 2.5 million records)
>> recipients (approx 6.5 million records)
>>
>> [[ ... see the original post for the schema details ... ]]
>>
>>
>> I have the following query that is just too slow:
>>
>>> SELECT messages.* FROM messages
>>> INNER JOIN recipients ON recipients.message_id = messages.id
>>> WHERE recipients.employee_id = X
>>> GROUP BY messages.id
>>> ORDER BY sent_at DESC
>>> LIMIT 0, 25;
>>
>> This takes about 44 seconds on average. [[...]]
>>
> 
> You need to get rid of the GROUP BY to make this go faster. You can do
> that by running two queries, one to pick the list of unique
> recipients.message_id values that match your where condition then
> another to actually retrieve the message data. [[...]]

I don't want to contradict Shawn, but currently I fail to see the need
for the GROUP BY: Joining like this

  messages INNER JOIN recipients ON recipients.message_id = messages.id
  WHERE recipients.employee_id = X

can return only one row, unless there are multiple "recipients" records
for the same values of "message_id" and "employee_id".

I don't know whether that can happen in the poster's application, and
whether it would cause trouble if the result line would occur multiple
times.


Regards,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Kendall Gifford
On Fri, Jan 21, 2011 at 2:01 PM, Shawn Green (MySQL) <
shawn.l.gr...@oracle.com> wrote:

> On 1/21/2011 14:21, Kendall Gifford wrote:
>
>> Hello everyone, I've got a database on an old Fedora Core 4 server running
>> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
>> just two (InnoDB) tables:
>>
>> messages (approx 2.5 million records)
>> recipients (approx 6.5 million records)
>>
>> These track information about email messages. Each message "has many"
>> recipient records. The structure of the two tables (omitting irrelevant
>> data
>> fields) are as follows:
>>
>>
>> +-+--+--+-+-++
>> | Field   | Type | Null | Key | Default |
>> Extra  |
>>
>> +-+--+--+-+-++
>> | id  | int(10) unsigned |  | PRI | NULL|
>> auto_increment |
>> | sent_at | datetime |  | MUL | -00-00 00:00:00
>> ||
>> | . OTHER FIELDS OMITTED FOR BREVITY
>> ... |
>>
>> +-+--+--+-+-++
>>
>>
>> +-+--+--+-+-++
>> | Field   | Type | Null | Key | Default |
>> Extra  |
>>
>> +-+--+--+-+-++
>> | id  | int(10) unsigned |  | PRI | NULL|
>> auto_increment |
>> | message_id  | int(10) unsigned |  | MUL | 0
>> ||
>> | employee_id | int(10) unsigned | YES  | MUL | NULL
>> ||
>> | . OTHER FIELDS OMITTED FOR BREVITY
>> ... |
>>
>> +-+--+--+-+-++
>>
>> I have the following query that is just too slow:
>>
>>  SELECT messages.* FROM messages
>>> INNER JOIN recipients ON recipients.message_id = messages.id
>>> WHERE recipients.employee_id = X
>>> GROUP BY messages.id
>>> ORDER BY sent_at DESC
>>> LIMIT 0, 25;
>>>
>>
>> This takes about 44 seconds on average. The query explanation is as
>> follows:
>>
>>
>> ++-+++--+--+-+-++--+
>> | id | select_type | table  | type   | possible_keys|
>> key  | key_len | ref | rows   |
>> Extra|
>>
>> ++-+++--+--+-+-++--+
>> |  1 | SIMPLE  | recipients | ref| messages_fk,employee_idx |
>> employee_idx |   5 | const   | 222640 |
>> Using where; Using temporary; Using filesort |
>> |  1 | SIMPLE  | messages   | eq_ref | PRIMARY  |
>> PRIMARY  |   4 | email_archive.recipients.message_id |  1
>> |  |
>>
>> ++-+++--+--+-+-++--+
>>
>> I've been doing some searching on the web and have no idea if/how this can
>> be sped up. Most searches these days reference MySQL 5.x which I'm just
>> not
>> sure how much applies. I'm hoping that there is something obvious that I'm
>> missing, or that one of you experts knows what I might be able to change
>> to
>> speed this query up.
>>
>> Anyhow, thanks in advance for even so much as reading my message, let
>> alone
>> replying :).
>>
>>
> You need to get rid of the GROUP BY to make this go faster. You can do that
> by running two queries, one to pick the list of unique recipients.message_id
> values that match your where condition then another to actually retrieve the
> message data. Something like this
>
> CREATE TEMPORARY TABLE tmpMessages (message_id INT UNSIGNED, PRIMARY KEY
> (message_id)) ENGINE=MEMORY;
>
> INSERT IGNORE tmpMessages
> SELECT message_id
> FROM recipients
> WHERE employee_id = X;
>
> SELECT messages.* FROM messages
> INNER JOIN tmpMessages ON tmpMessages.message_id = messages.id
>
> ORDER BY sent_at DESC
> LIMIT 0, 25;
>
> By pre-selecting a limited set of message_id values from the recipients
> table, you seriously reduce the number of rows that need to be scanned.
> Also, the INSERT IGNORE technique is faster than the GROUP BY because it
> uses an index to identify any duplicates instead of a scan of all previous
> unique values.
>
> Please let us all know if this is faster enough. (and don't forget to drop
> the temp table once you are through using it)

Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Shawn Green (MySQL)

On 1/21/2011 14:21, Kendall Gifford wrote:

Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:

messages (approx 2.5 million records)
recipients (approx 6.5 million records)

These track information about email messages. Each message "has many"
recipient records. The structure of the two tables (omitting irrelevant data
fields) are as follows:

+---------+------------------+------+-----+---------------------+----------------+
| Field   | Type             | Null | Key | Default             | Extra          |
+---------+------------------+------+-----+---------------------+----------------+
| id      | int(10) unsigned |      | PRI | NULL                | auto_increment |
| sent_at | datetime         |      | MUL | 0000-00-00 00:00:00 |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                                        |
+---------+------------------+------+-----+---------------------+----------------+

+-------------+------------------+------+-----+---------+----------------+
| Field       | Type             | Null | Key | Default | Extra          |
+-------------+------------------+------+-----+---------+----------------+
| id          | int(10) unsigned |      | PRI | NULL    | auto_increment |
| message_id  | int(10) unsigned |      | MUL | 0       |                |
| employee_id | int(10) unsigned | YES  | MUL | NULL    |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                               |
+-------------+------------------+------+-----+---------+----------------+

I have the following query that is just too slow:


SELECT messages.* FROM messages
INNER JOIN recipients ON recipients.message_id = messages.id
WHERE recipients.employee_id = X
GROUP BY messages.id
ORDER BY sent_at DESC
LIMIT 0, 25;


This takes about 44 seconds on average. The query explanation is as follows:

+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+
| id | select_type | table      | type   | possible_keys            | key          | key_len | ref                                  | rows   | Extra                                        |
+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+
|  1 | SIMPLE      | recipients | ref    | messages_fk,employee_idx | employee_idx | 5       | const                                | 222640 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | messages   | eq_ref | PRIMARY                  | PRIMARY      | 4       | email_archive.recipients.message_id  |      1 |                                              |
+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+

I've been doing some searching on the web and have no idea if/how this can
be sped up. Most searches these days reference MySQL 5.x which I'm just not
sure how much applies. I'm hoping that there is something obvious that I'm
missing, or that one of you experts knows what I might be able to change to
speed this query up.

Anyhow, thanks in advance for even so much as reading my message, let alone
replying :).



You need to get rid of the GROUP BY to make this go faster. You can do 
that by running two queries, one to pick the list of unique 
recipients.message_id values that match your where condition then 
another to actually retrieve the message data. Something like this


CREATE TEMPORARY TABLE tmpMessages (message_id INT UNSIGNED, PRIMARY KEY 
(message_id)) ENGINE=MEMORY;


INSERT IGNORE tmpMessages
SELECT message_id
FROM recipients
WHERE employee_id = X;

SELECT messages.* FROM messages
INNER JOIN tmpMessages ON tmpMessages.message_id = messages.id
ORDER BY sent_at DESC
LIMIT 0, 25;

By pre-selecting a limited set of message_id values from the recipients 
table, you seriously reduce the number of rows that need to be scanned. 
Also, the INSERT IGNORE technique is faster than the GROUP BY because it 
uses an index to identify any duplicates instead of a scan of all 
previous unique values.


Please let us all know if this is faster enough. (and don't forget to 
drop the temp table once you are through using it)


--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Reindl Harald
you need huge RAM / a large innodb_buffer_pool for large datasets;
in a perfect world the buffer_pool is as large as the data

what does your current config look like?
how much RAM does the machine have?
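
For a quick sanity check, a minimal sketch of comparing the current buffer
pool size against the on-disk size of the two tables (table names taken from
the original post):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Data_length + Index_length gives a rough per-table size to compare against:
SHOW TABLE STATUS LIKE 'messages';
SHOW TABLE STATUS LIKE 'recipients';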

Am 21.01.2011 20:21, schrieb Kendall Gifford:
> Hello everyone, I've got a database on an old Fedora Core 4 server running
> MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
> just two (InnoDB) tables:
> 
> messages (approx 2.5 million records)
> recipients (approx 6.5 million records)
> 
> These track information about email messages. Each message "has many"
> recipient records. The structure of the two tables (omitting irrelevant data
> fields) are as follows:
> 
> +-+--+--+-+-++
> | Field   | Type | Null | Key | Default |
> Extra  |
> +-+--+--+-+-++
> | id  | int(10) unsigned |  | PRI | NULL|
> auto_increment |
> | sent_at | datetime |  | MUL | -00-00 00:00:00
> ||
> | . OTHER FIELDS OMITTED FOR BREVITY
> ... |
> +-+--+--+-+-++
> 
> +-+--+--+-+-++
> | Field   | Type | Null | Key | Default |
> Extra  |
> +-+--+--+-+-++
> | id  | int(10) unsigned |  | PRI | NULL|
> auto_increment |
> | message_id  | int(10) unsigned |  | MUL | 0
> ||
> | employee_id | int(10) unsigned | YES  | MUL | NULL
> ||
> | . OTHER FIELDS OMITTED FOR BREVITY
> ... |
> +-+--+--+-+-++
> 
> I have the following query that is just too slow:
> 
>> SELECT messages.* FROM messages
>> INNER JOIN recipients ON recipients.message_id = messages.id
>> WHERE recipients.employee_id = X
>> GROUP BY messages.id
>> ORDER BY sent_at DESC
>> LIMIT 0, 25;
> 
> This takes about 44 seconds on average. The query explanation is as follows:
> 
> ++-+++--+--+-+-++--+
> | id | select_type | table  | type   | possible_keys|
> key  | key_len | ref | rows   |
> Extra|
> ++-+++--+--+-+-++--+
> |  1 | SIMPLE  | recipients | ref| messages_fk,employee_idx |
> employee_idx |   5 | const   | 222640 |
> Using where; Using temporary; Using filesort |
> |  1 | SIMPLE  | messages   | eq_ref | PRIMARY  |
> PRIMARY  |   4 | email_archive.recipients.message_id |  1
> |  |
> ++-+++--+--+-+-++--+
> 
> I've been doing some searching on the web and have no idea if/how this can
> be sped up. Most searches these days reference MySQL 5.x which I'm just not
> sure how much applies. I'm hoping that there is something obvious that I'm
> missing, or that one of you experts knows what I might be able to change to
> speed this query up.
> 
> Anyhow, thanks in advance for even so much as reading my message, let alone
> replying :).
> 

-- 

Mit besten Grüßen, Reindl Harald
the lounge interactive design GmbH
A-1060 Vienna, Hofmühlgasse 17
CTO / software-development / cms-solutions
p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
icq: 154546673, http://www.thelounge.net/



signature.asc
Description: OpenPGP digital signature


Slow query on MySQL4 server doing simple inner join of two InnoDB tables

2011-01-21 Thread Kendall Gifford
Hello everyone, I've got a database on an old Fedora Core 4 server running
MySQL 4 (mysql-server.x86_64 4.1.12-2.FC4.1). The database in question has
just two (InnoDB) tables:

messages (approx 2.5 million records)
recipients (approx 6.5 million records)

These track information about email messages. Each message "has many"
recipient records. The structure of the two tables (omitting irrelevant data
fields) are as follows:

+---------+------------------+------+-----+---------------------+----------------+
| Field   | Type             | Null | Key | Default             | Extra          |
+---------+------------------+------+-----+---------------------+----------------+
| id      | int(10) unsigned |      | PRI | NULL                | auto_increment |
| sent_at | datetime         |      | MUL | 0000-00-00 00:00:00 |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                                        |
+---------+------------------+------+-----+---------------------+----------------+

+-------------+------------------+------+-----+---------+----------------+
| Field       | Type             | Null | Key | Default | Extra          |
+-------------+------------------+------+-----+---------+----------------+
| id          | int(10) unsigned |      | PRI | NULL    | auto_increment |
| message_id  | int(10) unsigned |      | MUL | 0       |                |
| employee_id | int(10) unsigned | YES  | MUL | NULL    |                |
| ... OTHER FIELDS OMITTED FOR BREVITY ...                               |
+-------------+------------------+------+-----+---------+----------------+

I have the following query that is just too slow:

> SELECT messages.* FROM messages
> INNER JOIN recipients ON recipients.message_id = messages.id
> WHERE recipients.employee_id = X
> GROUP BY messages.id
> ORDER BY sent_at DESC
> LIMIT 0, 25;

This takes about 44 seconds on average. The query explanation is as follows:

+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+
| id | select_type | table      | type   | possible_keys            | key          | key_len | ref                                  | rows   | Extra                                        |
+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+
|  1 | SIMPLE      | recipients | ref    | messages_fk,employee_idx | employee_idx | 5       | const                                | 222640 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | messages   | eq_ref | PRIMARY                  | PRIMARY      | 4       | email_archive.recipients.message_id  |      1 |                                              |
+----+-------------+------------+--------+--------------------------+--------------+---------+--------------------------------------+--------+----------------------------------------------+

I've been doing some searching on the web and have no idea if/how this can
be sped up. Most searches these days reference MySQL 5.x which I'm just not
sure how much applies. I'm hoping that there is something obvious that I'm
missing, or that one of you experts knows what I might be able to change to
speed this query up.

Anyhow, thanks in advance for even so much as reading my message, let alone
replying :).

-- 
Kendall Gifford
zettab...@gmail.com


Re: Log Mysql slow query into table

2010-11-30 Thread Wagner Bianchi
Have a look at this:

mysql> show variables like '%slow%';
+---------------------+------------------------------+
| Variable_name       | Value                        |
+---------------------+------------------------------+
| log_slow_queries    | OFF                          |
| slow_launch_time    | 2                            |
| slow_query_log      | OFF                          |
| slow_query_log_file | /var/lib/mysql/grey-slow.log |
+---------------------+------------------------------+
4 rows in set (0.05 sec)

mysql> set global slow_query_log = 1;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%slow%';
+---------------------+------------------------------+
| Variable_name       | Value                        |
+---------------------+------------------------------+
| log_slow_queries    | ON                           |
| slow_launch_time    | 2                            |
| slow_query_log      | ON                           |
| slow_query_log_file | /var/lib/mysql/grey-slow.log |
+---------------------+------------------------------+
4 rows in set (0.06 sec)

One piece of advice: be aware that logging slow queries to a table will impact
MySQL's performance.
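
For reference, a minimal sketch of routing the slow log to the built-in
mysql.slow_log table (it exists by default in 5.1 and later, so nothing needs
to be created) and reading it back:

SET GLOBAL log_output = 'TABLE,FILE';
SET GLOBAL slow_query_log = 1;

SELECT start_time, query_time, rows_examined, sql_text
FROM mysql.slow_log
ORDER BY start_time DESC
LIMIT 10;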

Best regards.
--
WB

Skype: wbianchijr (preferred way to contact me)


2010/11/30 Cool Cool 

> Hi,
>
>  I am trying to log slow queries into both file and table.
> I had set it as: SET GLOBAL log_output = `TABLE,FILE`; but it isn't getting
> logged into tables.
>
> Can you tell me if I need to create a table, or am I missing anything?
>
> Thanks.
>
> Regards,
> Ram
>
>


Log Mysql slow query into table

2010-11-30 Thread Cool Cool
Hi,

  I am trying to log slow queries into both file and table. 
I had set it as: SET GLOBAL log_output = `TABLE,FILE`; but it isn't getting logged
into tables.

Can you tell me if I need to create a table, or am I missing anything?

Thanks.

Regards,
Ram



RE: Slow query using string operator

2010-08-11 Thread Jerry Schwartz
Yowsers! I expected that eliminating half of the string manipulation would 
help, but I never imagined that the difference would be so great.

The SELECT now runs in well under a second.
=

SELECT pub_id FROM pub WHERE pub_code = 'DC' INTO @PUBID;

DROP TEMPORARY TABLE IF EXISTS feed_new;

CREATE TEMPORARY TABLE feed_new (
new_title VARCHAR(255), PRIMARY KEY (new_title),
new_title_truncated VARCHAR(255), INDEX (new_title_truncated)
);

#   The next line loads up dummy data, but it "looks like"
#   what I'm really using.

INSERT IGNORE INTO feed_new
SELECT prod.prod_title, LEFT(prod.prod_title, LENGTH(prod.prod_title) - 
5)
FROM prod
WHERE prod.prod_discont = 0
AND prod.pub_id = @PUBID
AND RAND() < .01;

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON feed_new.new_title_truncated = LEFT(prod.prod_title, 
LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;
117 rows in set (0.25 sec)

EXPLAIN
SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON feed_new.new_title_truncated = LEFT(prod.prod_title, 
LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title\G

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 11041
Extra: Using where; Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: ref
possible_keys: new_title_truncated
  key: new_title_truncated
  key_len: 768
  ref: func
 rows: 1
Extra: Using where; Using index
==

It looks like the optimizer flipped the JOIN around so that it could use the 
key in feed_new.

Thanks for your help, all.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com


>-Original Message-
>From: Travis Ard [mailto:travis_...@hotmail.com]
>Sent: Tuesday, August 10, 2010 6:53 PM
>To: 'Jerry Schwartz'; mysql@lists.mysql.com
>Subject: RE: Slow query using string operator
>
>Can you create a second, indexed column in your feed_new temp table that
>includes the title without the year appended?  That might allow you to get
>by with a single pass through the larger prod table and avoid reading rows
>from the feed_new table.
>
>-Travis
>
>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Tuesday, August 10, 2010 3:39 PM
>To: mysql@lists.mysql.com
>Subject: Slow query using string operator
>
>I'm running a set of queries that look like this:
>
>===
>SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');
>
>DROP TEMPORARY TABLE IF EXISTS feed_new;
>
>CREATE TEMPORARY TABLE feed_new (
>   new_title VARCHAR(255), INDEX (new_title)
>   );
>
>INSERT INTO feed_new
>VALUES
>
>('UK Investment Bonds 2010'),
>('UK Protection 2010'),
>('UK Personal Insurance Distribution 2010'),
>('UK Private Medical Insurance 2010'),
>...
>('UK Private Motor Insurance 2010'),
>('Wealth Management for Non-Resident Indians 2010'),
>('Middle Eastern Cards Database 2010')
>;
>
>SELECT
>   feed_new.new_title AS `New Title FROM Feed`,
>   prod.prod_pub_prod_id AS `Lib Code FROM DB`,
>   prod.prod_title AS `Title FROM DB`,
>   prod.prod_num AS `Prod Num`,
>   prod.prod_published AS `Published FROM DB`
>FROM feed_new JOIN prod
>ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) =
>LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
>WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
>ORDER BY feed_new.new_title;
>
>
>
>With a relatively small number of rows in `feed_new`, this can take many
>seconds. With 163 rows in `feed_new`, compared against 11234 eligible rows
>in
>prod, it took about 28 seconds.

RE: Slow query using string operator

2010-08-11 Thread Jerry Schwartz
>-Original Message-
>From: Travis Ard [mailto:travis_...@hotmail.com]
>Sent: Tuesday, August 10, 2010 6:53 PM
>To: 'Jerry Schwartz'; mysql@lists.mysql.com
>Subject: RE: Slow query using string operator
>
>Can you create a second, indexed column in your feed_new temp table that
>includes the title without the year appended?  That might allow you to get
>by with a single pass through the larger prod table and avoid reading rows
>from the feed_new table.
>
[JS] I have thought about that, but haven't tried it yet.

I'll let you know.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com



>-Travis
>
>-Original Message-
>From: Jerry Schwartz [mailto:je...@gii.co.jp]
>Sent: Tuesday, August 10, 2010 3:39 PM
>To: mysql@lists.mysql.com
>Subject: Slow query using string operator
>
>I'm running a set of queries that look like this:
>
>===
>SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');
>
>DROP TEMPORARY TABLE IF EXISTS feed_new;
>
>CREATE TEMPORARY TABLE feed_new (
>   new_title VARCHAR(255), INDEX (new_title)
>   );
>
>INSERT INTO feed_new
>VALUES
>
>('UK Investment Bonds 2010'),
>('UK Protection 2010'),
>('UK Personal Insurance Distribution 2010'),
>('UK Private Medical Insurance 2010'),
>...
>('UK Private Motor Insurance 2010'),
>('Wealth Management for Non-Resident Indians 2010'),
>('Middle Eastern Cards Database 2010')
>;
>
>SELECT
>   feed_new.new_title AS `New Title FROM Feed`,
>   prod.prod_pub_prod_id AS `Lib Code FROM DB`,
>   prod.prod_title AS `Title FROM DB`,
>   prod.prod_num AS `Prod Num`,
>   prod.prod_published AS `Published FROM DB`
>FROM feed_new JOIN prod
>ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) =
>LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
>WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
>ORDER BY feed_new.new_title;
>
>
>
>With a relatively small number of rows in `feed_new`, this can take many
>seconds. With 163 rows in `feed_new`, compared against 11234 eligible rows
>in
>prod, it took about 28 seconds. Here's what an EXPLAIN looks like:
>
>*** 1. row ***
>   id: 1
>  select_type: SIMPLE
>table: feed_new
> type: index
>possible_keys: NULL
>  key: PRIMARY
>  key_len: 767
>  ref: NULL
> rows: 1
>Extra: Using index
>*** 2. row ***
>   id: 1
>  select_type: SIMPLE
>table: prod
> type: ref
>possible_keys: pub_id,pub_id_2
>  key: pub_id
>  key_len: 48
>  ref: const
> rows: 11040
>Extra: Using where
>
>=
>
>prod.pub_id is an indexed VARCHAR(15).
>
>If I remove the string functions, I don't get what I want -- but the
>remaining
>query runs in .05 seconds. Here's an EXPLAIN of that one:
>
>===
>us-gii >EXPLAIN
>-> SELECT
>->  feed_new.new_title AS `New Title FROM Feed`,
>->  prod.prod_pub_prod_id AS `Lib Code FROM DB`,
>->  prod.prod_title AS `Title FROM DB`,
>->  prod.prod_num AS `Prod Num`,
>->  prod.prod_published AS `Published FROM DB`
>-> FROM feed_new JOIN prod
>-> ON feed_new.new_title = prod.prod_title
>-> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
>-> ORDER BY feed_new.new_title\G
>*** 1. row ***
>   id: 1
>  select_type: SIMPLE
>table: feed_new
> type: index
>possible_keys: PRIMARY
>  key: PRIMARY
>  key_len: 767
>  ref: NULL
> rows: 163
>Extra: Using index
>*** 2. row ***
>   id: 1
>  select_type: SIMPLE
>table: prod
> type: ref
>possible_keys: pub_id,prod_title,pub_id_2,prod_title_fulltext
>  key: prod_title
>  key_len: 768
>  ref: giiexpr_db.feed_new.new_title
> rows: 1
>Extra: Using where
>
>
>Obviously the string manipulation is keeping MySQL from using `prod_title`
>as
>a key, but I wouldn't have thought that using `pub_id` instead would be that
>
>horrific.
>
>Does anyone have any suggestions as to how to speed this business up? I
>can't
>get away without some string manipulation, because I'm looking for "near
>matches" by ignoring the year at the end of the title.

Re: Slow query using string operator

2010-08-11 Thread Joerg Bruehe
Hi Jerry, all!


I second Travis' advice:

Travis Ard schrieb:
> Can you create a second, indexed column in your feed_new temp table that
> includes the title without the year appended?  That might allow you to get
> by with a single pass through the larger prod table and avoid reading rows
> from the feed_new table. 

The original query does a string operation on the values from both sides
before checking the result for equality:

> -Original Message-
> From: Jerry Schwartz [mailto:je...@gii.co.jp] 
> Sent: Tuesday, August 10, 2010 3:39 PM
> To: mysql@lists.mysql.com
> Subject: Slow query using string operator
> 
> I'm running a set of queries that look like this:
> 
> [[...]]
> 
> SELECT
>   feed_new.new_title AS `New Title FROM Feed`,
>   prod.prod_pub_prod_id AS `Lib Code FROM DB`,
>   prod.prod_title AS `Title FROM DB`,
>   prod.prod_num AS `Prod Num`,
>   prod.prod_published AS `Published FROM DB`
> FROM feed_new JOIN prod
> ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
> LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
> ORDER BY feed_new.new_title;

So neither value is taken directly, which means the values in the
indexes (if defined) cannot be used anyway.

If you need these calculations, you should compute and maintain these
values when inserting/updating data (define triggers doing this, or run
periodic maintenance/check jobs), and store them in suitable indexes.

AFAIK, this applies to all comparisons which use function results rather
than column values directly.
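
As an illustration of that approach, a minimal sketch against the tables in
this thread (the prod_title_trunc column and the trigger names are invented
for the example):

ALTER TABLE prod
  ADD COLUMN prod_title_trunc VARCHAR(255),
  ADD INDEX idx_prod_title_trunc (prod_title_trunc);

-- backfill existing rows once:
UPDATE prod
   SET prod_title_trunc = LEFT(prod_title, LENGTH(prod_title) - 5);

-- keep the column maintained on INSERT and UPDATE:
DELIMITER //
CREATE TRIGGER prod_trunc_bi BEFORE INSERT ON prod FOR EACH ROW
  SET NEW.prod_title_trunc = LEFT(NEW.prod_title, LENGTH(NEW.prod_title) - 5);
//
CREATE TRIGGER prod_trunc_bu BEFORE UPDATE ON prod FOR EACH ROW
  SET NEW.prod_title_trunc = LEFT(NEW.prod_title, LENGTH(NEW.prod_title) - 5);
//
DELIMITER ;

-- the join condition can then use the indexed column directly:
--   ON feed_new.new_title_truncated = prod.prod_title_trunc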


HTH,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@oracle.com
ORACLE Deutschland B.V. & Co. KG,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz, Marcel v.d. Molen, Alexander v.d. Ven
Amtsgericht Muenchen: HRA 95603


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slow query using string operator

2010-08-10 Thread Travis Ard
Can you create a second, indexed column in your feed_new temp table that
includes the title without the year appended?  That might allow you to get
by with a single pass through the larger prod table and avoid reading rows
from the feed_new table. 

-Travis

-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp] 
Sent: Tuesday, August 10, 2010 3:39 PM
To: mysql@lists.mysql.com
Subject: Slow query using string operator

I'm running a set of queries that look like this:

===
SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');

DROP TEMPORARY TABLE IF EXISTS feed_new;

CREATE TEMPORARY TABLE feed_new (
new_title VARCHAR(255), INDEX (new_title)
);

INSERT INTO feed_new
VALUES

('UK Investment Bonds 2010'),
('UK Protection 2010'),
('UK Personal Insurance Distribution 2010'),
('UK Private Medical Insurance 2010'),
...
('UK Private Motor Insurance 2010'),
('Wealth Management for Non-Resident Indians 2010'),
('Middle Eastern Cards Database 2010')
;

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;



With a relatively small number of rows in `feed_new`, this can take many 
seconds. With 163 rows in `feed_new`, compared against 11234 eligible rows
in 
prod, it took about 28 seconds. Here's what an EXPLAIN looks like:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: NULL
  key: PRIMARY
  key_len: 767
  ref: NULL
 rows: 1
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 11040
Extra: Using where

=

prod.pub_id is an indexed VARCHAR(15).

If I remove the string functions, I don't get what I want -- but the
remaining 
query runs in .05 seconds. Here's an EXPLAIN of that one:

===
us-gii >EXPLAIN
-> SELECT
->  feed_new.new_title AS `New Title FROM Feed`,
->  prod.prod_pub_prod_id AS `Lib Code FROM DB`,
->  prod.prod_title AS `Title FROM DB`,
->  prod.prod_num AS `Prod Num`,
->  prod.prod_published AS `Published FROM DB`
-> FROM feed_new JOIN prod
-> ON feed_new.new_title = prod.prod_title
-> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
-> ORDER BY feed_new.new_title\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 767
  ref: NULL
 rows: 163
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,prod_title,pub_id_2,prod_title_fulltext
  key: prod_title
  key_len: 768
  ref: giiexpr_db.feed_new.new_title
 rows: 1
Extra: Using where


Obviously the string manipulation is keeping MySQL from using `prod_title`
as 
a key, but I wouldn't have thought that using `pub_id` instead would be that

horrific.

Does anyone have any suggestions as to how to speed this business up? I
can't 
get away without some string manipulation, because I'm looking for "near 
matches" by ignoring the year at the end of the title.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=travis_...@hotmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Slow query using string operator

2010-08-10 Thread Jerry Schwartz
I'm running a set of queries that look like this:

===
SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');

DROP TEMPORARY TABLE IF EXISTS feed_new;

CREATE TEMPORARY TABLE feed_new (
new_title VARCHAR(255), INDEX (new_title)
);

INSERT INTO feed_new
VALUES

('UK Investment Bonds 2010'),
('UK Protection 2010'),
('UK Personal Insurance Distribution 2010'),
('UK Private Medical Insurance 2010'),
...
('UK Private Motor Insurance 2010'),
('Wealth Management for Non-Resident Indians 2010'),
('Middle Eastern Cards Database 2010')
;

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;



With a relatively small number of rows in `feed_new`, this can take many 
seconds. With 163 rows in `feed_new`, compared against 11234 eligible rows in 
prod, it took about 28 seconds. Here's what an EXPLAIN looks like:

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: NULL
  key: PRIMARY
  key_len: 767
  ref: NULL
 rows: 1
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 11040
Extra: Using where

=

prod.pub_id is an indexed VARCHAR(15).

If I remove the string functions, I don't get what I want -- but the remaining 
query runs in .05 seconds. Here's an EXPLAIN of that one:

===
us-gii >EXPLAIN
-> SELECT
->  feed_new.new_title AS `New Title FROM Feed`,
->  prod.prod_pub_prod_id AS `Lib Code FROM DB`,
->  prod.prod_title AS `Title FROM DB`,
->  prod.prod_num AS `Prod Num`,
->  prod.prod_published AS `Published FROM DB`
-> FROM feed_new JOIN prod
-> ON feed_new.new_title = prod.prod_title
-> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
-> ORDER BY feed_new.new_title\G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: PRIMARY
  key: PRIMARY
  key_len: 767
  ref: NULL
 rows: 163
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,prod_title,pub_id_2,prod_title_fulltext
  key: prod_title
  key_len: 768
  ref: giiexpr_db.feed_new.new_title
 rows: 1
Extra: Using where


Obviously the string manipulation is keeping MySQL from using `prod_title` as 
a key, but I wouldn't have thought that using `pub_id` instead would be that 
horrific.

Does anyone have any suggestions as to how to speed this business up? I can't 
get away without some string manipulation, because I'm looking for "near 
matches" by ignoring the year at the end of the title.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slow query using string functions

2010-05-27 Thread Jerry Schwartz
I haven't had a chance to try that, and this might be totally useless 
information, but to create a clone of the bigger table (CREATE LIKE and INSERT 
INTO) takes about 12 seconds.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com


>-Original Message-
>From: baron.schwa...@gmail.com [mailto:baron.schwa...@gmail.com] On Behalf Of
>Baron Schwartz
>Sent: Thursday, May 27, 2010 9:09 AM
>To: MySql
>Subject: Re: Slow query using string functions
>
>Jerry,
>
>On Wed, May 26, 2010 at 5:13 PM, Jerry Schwartz  wrote:
>> I have a pretty simple query that seems to take a lot longer than it ought 
>> to
>> (over 2 minutes).
>>
>
>I suspect that if you watch Handler_ stats, you'll find that the
>EXPLAIN estimate is wrong for some reason and it's accessing many more
>rows than you think in the second table, or something similar.  In any
>case, I'd start by measuring what the query is actually doing, not
>what EXPLAIN thinks.  What does that show?
>
>--
>MySQL General Mailing List
>For list archives: http://lists.mysql.com/mysql
>To unsubscribe:http://lists.mysql.com/mysql?unsub=je...@gii.co.jp





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slow query using string functions

2010-05-27 Thread Jerry Schwartz

>-Original Message-
>From: Gavin Towey [mailto:gto...@ffn.com]
>Sent: Wednesday, May 26, 2010 7:39 PM
>To: je...@gii.co.jp; mysql@lists.mysql.com
>Subject: RE: Slow query using string functions
>
>Jerry,
>
>Are you sure this is really your explain plan for this query?  That's not at
>all what I would expect to see.
>
>Regards,
>Gavin Towey
>
[JS] I'm as sure as I can be. Here's a lot more information:

==

SET @PUBID = (SELECT pub_id FROM pub WHERE pub_code = 'DC');

DROP TEMPORARY TABLE IF EXISTS feed_new;

CREATE TEMPORARY TABLE feed_new (
new_title VARCHAR(255), INDEX (new_title)
);

INSERT INTO feed_new
VALUES

('Automotive Aftermarket in France - Channel Analysis to 2014'),
('Automotive Aftermarket in Germany - Channel Analysis to 2014'),
('Automotive Aftermarket in Italy - Channel Analysis to 2014'),
... about 900 more rows

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;

==

>explain
-> SELECT
->  feed_new.new_title AS `New Title FROM Feed`,
->  prod.prod_pub_prod_id AS `Lib Code FROM DB`,
->  prod.prod_title AS `Title FROM DB`,
->  prod.prod_num AS `Prod Num`,
->  prod.prod_published AS `Published FROM DB`
-> FROM feed_new JOIN prod
-> ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
-> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
-> ORDER BY feed_new.new_title
-> \G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: NULL
  key: new_title
  key_len: 768
  ref: NULL
 rows: 1
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 9817
Extra: Using where
2 rows in set (0.00 sec)

==

I assume that pub_id is the best key to use to limit the rows pulled from 
prod, since it cuts it down to 9817 rows from 110432. I'm guessing that even 
though prod_title is a key (MUL), using it in the expression

  LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)

keeps it from being used as a key.

If I replace the expressions in the ON clause with simple matches, I get a 
very different result:

>explain
-> SELECT
->  feed_new.new_title AS `New Title FROM Feed`,
->  prod.prod_pub_prod_id AS `Lib Code FROM DB`,
->  prod.prod_title AS `Title FROM DB`,
->  prod.prod_num AS `Prod Num`,
->  prod.prod_published AS `Published FROM DB`
-> FROM feed_new JOIN prod
-> ON feed_new.new_title = prod.prod_title
-> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
-> ORDER BY feed_new.new_title
-> \G
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: new_title
  key: new_title
  key_len: 768
  ref: NULL
 rows: 882
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,prod_title,pub_id_2,prod_title_fulltext
  key: prod_title
  key_len: 768
  ref: giiexpr_db.feed_new.new_title
 rows: 1
Extra: Using where
2 rows in set (0.00 sec)
=

Of course, in this case the number of qualifying rows is vastly different; but 
I would think the optimization has to happen before the results of the SELECT 
are known. No optimizer is that prescient. :-)

I'm not really surprised by the different optimizations, I'm surprised by the 
difference in execution time. Using the function-based expressions in the ON 
clause takes 2:03.38 minutes to return 267 rows (the first time, the second 
time is a little faster); using a simple comparison with no functions takes 
.03 seconds to return 1 row.

I benchmarked the expression I'm using

   select benchmark(1000,left("asbcdkdfsaklfjdakl;fjasdl;fjasdl;fjasd", 
length("asbcdkdfsaklfjdakl;fjasdl;fjasdl;fjasd") - 5));

and the result was 2.87 seconds. That's ten million evaluations.

So where is the time going?

Re: Slow query using string functions

2010-05-27 Thread Baron Schwartz
Jerry,

On Wed, May 26, 2010 at 5:13 PM, Jerry Schwartz  wrote:
> I have a pretty simple query that seems to take a lot longer than it ought to
> (over 2 minutes).
>

I suspect that if you watch Handler_ stats, you'll find that the
EXPLAIN estimate is wrong for some reason and it's accessing many more
rows than you think in the second table, or something similar.  In any
case, I'd start by measuring what the query is actually doing, not
what EXPLAIN thinks.  What does that show?
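
For anyone following along, a minimal sketch of the measurement Baron is
describing (run in the same session as the query):

FLUSH STATUS;
-- ... run the suspect SELECT here ...
SHOW SESSION STATUS LIKE 'Handler_read%';
-- large Handler_read_next / Handler_read_rnd_next values mean far more rows
-- are being touched than the EXPLAIN estimate suggests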

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query using string functions

2010-05-27 Thread Joerg Bruehe
Hi!


Jerry Schwartz wrote:
> I have a pretty simple query that seems to take a lot longer than it ought to 
> (over 2 minutes).
> 
> [[...]]
> 
> SELECT
>   feed_new.new_title AS `New Title FROM Feed`,
>   prod.prod_pub_prod_id AS `Lib Code FROM DB`,
>   prod.prod_title AS `Title FROM DB`,
>   prod.prod_num AS `Prod Num`,
>   prod.prod_published AS `Published FROM DB`
> FROM feed_new JOIN prod
> ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
> LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
> WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
> ORDER BY feed_new.new_title;
> 
> [[...]]
> 
> The query is doing a scan of the 9816 records that have pub_id = @PUBID, but 
> even so this seems like a long time. Are the built-in string functions really 
> that slow?

The general rule is:
If you are not taking the value of a column directly but are applying
any function to it (like "LEFT" in your statement), an index cannot be
used. So the table needs to be accessed (scanned) and the function
computed on each row, to evaluate the predicate (the comparison).

I am no expert in checking "explain" output, so I may well be wrong in
my guess:
I think the execution will scan the whole "feed_new" table (895 records)
for each of those 9816 matches of pub_id, so it is doing 8785320 calls
of "LEFT()" followed by a string comparison.

> 
> I suspect it would be faster if I built separate tables that had just the 
> shortened versions of the titles, but I wouldn't think that would be 
> necessary.

So IMO you have two choices:
- Either you accept the performance implications of a table scan
  (which will of course get worse when your data grows),
- or you introduce another column in your table in which you store the
  function result (maintained on INSERT and UPDATE) and create an index
  on this column.

An additional table with the shortened columns is not a good idea IMO,
because you would need to maintain it in sync with your "real" data.
In your example, it should be sufficient to add the new column to table
"feed", because your execution strategy should start by evaluating
prod.pub_id = @PUBID


HTH,
Jörg

-- 
Joerg Bruehe,  MySQL Build Team,  joerg.bru...@sun.com
Sun Microsystems GmbH,   Komturstrasse 18a,   D-12099 Berlin
Geschaeftsfuehrer: Juergen Kunz
Amtsgericht Muenchen: HRB161028


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Slow query using string functions

2010-05-26 Thread Gavin Towey
Jerry,

Are you sure this is really your explain plan for this query?  That's not at 
all what I would expect to see.

Regards,
Gavin Towey


-Original Message-
From: Jerry Schwartz [mailto:je...@gii.co.jp]
Sent: Wednesday, May 26, 2010 2:14 PM
To: mysql@lists.mysql.com
Subject: Slow query using string functions

I have a pretty simple query that seems to take a lot longer than it ought to
(over 2 minutes).

Table `feed_new` has a single VARCHAR(255) column, `new_title`, that is an
index.

Table `prod` has many fields: `prod_title` and `pub_id` are both indexes
(VARCHAR).

`feed_new` has 895 records, `prod` has 110432.

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) =
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: NULL
  key: new_title
  key_len: 768
  ref: NULL
 rows: 1
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 9816
Extra: Using where

The query is doing a scan of the 9816 records that have pub_id = @PUBID, but
even so this seems like a long time. Are the built-in string functions really
that slow?

I suspect it would be faster if I built separate tables that had just the
shortened versions of the titles, but I wouldn't think that would be
necessary.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com


This message contains confidential information and is intended only for the 
individual named.  If you are not the named addressee, you are notified that 
reviewing, disseminating, disclosing, copying or distributing this e-mail is 
strictly prohibited.  Please notify the sender immediately by e-mail if you 
have received this e-mail by mistake and delete this e-mail from your system. 
E-mail transmission cannot be guaranteed to be secure or error-free as 
information could be intercepted, corrupted, lost, destroyed, arrive late or 
incomplete, or contain viruses. The sender therefore does not accept liability 
for any loss or damage caused by viruses or errors or omissions in the contents 
of this message, which arise as a result of e-mail transmission. [FriendFinder 
Networks, Inc., 220 Humbolt court, Sunnyvale, CA 94089, USA, FriendFinder.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Slow query using string functions

2010-05-26 Thread Jerry Schwartz
I have a pretty simple query that seems to take a lot longer than it ought to 
(over 2 minutes).

Table `feed_new` has a single VARCHAR(255) column, `new_title`, that is an 
index.

Table `prod` has many fields: `prod_title` and `pub_id` are both indexes 
(VARCHAR).

`feed_new` has 895 records, `prod` has 110432.

SELECT
feed_new.new_title AS `New Title FROM Feed`,
prod.prod_pub_prod_id AS `Lib Code FROM DB`,
prod.prod_title AS `Title FROM DB`,
prod.prod_num AS `Prod Num`,
prod.prod_published AS `Published FROM DB`
FROM feed_new JOIN prod
ON LEFT(feed_new.new_title, LENGTH(feed_new.new_title) - 5) = 
LEFT(prod.prod_title, LENGTH(prod.prod_title) - 5)
WHERE prod.pub_id = @PUBID AND prod.prod_discont = 0
ORDER BY feed_new.new_title;

*** 1. row ***
   id: 1
  select_type: SIMPLE
table: feed_new
 type: index
possible_keys: NULL
  key: new_title
  key_len: 768
  ref: NULL
 rows: 1
Extra: Using index
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: prod
 type: ref
possible_keys: pub_id,pub_id_2
  key: pub_id
  key_len: 48
  ref: const
 rows: 9816
Extra: Using where

The query is doing a scan of the 9816 records that have pub_id = @PUBID, but 
even so this seems like a long time. Are the built-in string functions really 
that slow?

I suspect it would be faster if I built separate tables that had just the 
shortened versions of the titles, but I wouldn't think that would be 
necessary.


Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341

www.the-infoshop.com





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: slow query on replication master and slave

2010-04-27 Thread Johan De Meersman
On Wed, Apr 28, 2010 at 12:17 AM, Kandy Wong  wrote:

> Is it true that the performance of running a query on a live replication
> master and slave has to be much slower than running a query on a static
> server?
>
> I've tried to run the following query on a replication master and it takes
> 1 min 13.76 sec to finish.
> SELECT *, ABS(timeA-1266143632) as distance FROM tableA WHERE timeA -
> 1266143632 <= 0  ORDER BY distance LIMIT 1;
>
> And if I run it on the replication slave, it takes 24.15 sec.
>

The slave most likely didn't have the table data in memory, so had to do a
disk read. Also, if you've performed that query before on your master, it
may still have been in the query cache.

Add the SQL_NO_CACHE keyword right after the word "select" to disable the
query cache when checking query performance.


> But if I dump the whole database to another machine as static, it only
> takes 3.70 sec or even less to finish.
>

When you load just that table, the data you just inserted will still have
been in memory and/or in the OS cache, avoiding a disk read.


> Is there a way to improve the query or any other factors that would affect
> the performance?
>

Rewrite your where clause to "where timeA <= 1266143632" - that will allow
you to put an index on timeA and benefit from it. Also, ordering by timeA
(descending, in this case) should give the same ordering as by distance, and
may also use the index to sort instead of a filesort.
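
Something like this, for example (the index name is made up, and this
assumes timeA is not already usefully indexed):

  ALTER TABLE tableA ADD INDEX idx_timeA (timeA);

  SELECT SQL_NO_CACHE *, ABS(timeA - 1266143632) AS distance
    FROM tableA
   WHERE timeA <= 1266143632
   ORDER BY timeA DESC
   LIMIT 1;

  -- ORDER BY timeA DESC: among rows with timeA <= the target,
  -- the smallest distance is the largest timeA.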


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


slow query on replication master and slave

2010-04-27 Thread Kandy Wong

Hi,

Is it true that the performance of running a query on a live replication 
master and slave has to be much slower than running a query on a static 
server?


I've tried to run the following query on a replication master and it 
takes 1 min 13.76 sec to finish.
SELECT *, ABS(timeA-1266143632) as distance FROM tableA WHERE timeA - 
1266143632 <= 0  ORDER BY distance LIMIT 1;


And if I run it on the replication slave, it takes 24.15 sec.
But if I dump the whole database to another machine as static, it only 
takes 3.70 sec or even less to finish.


The table has 386 columns and timeA is an index. 

Is there a way to improve the query or any other factors that would 
affect the performance?


Thanks.

Kandy


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query, unknown why

2010-04-26 Thread mos

Yves,
 What happens if you replace the  "tk.UserId IN (22943, 10899)" with 
just one argument " tk.UserId = 22943".


Does it run much faster? If so, the In() statement may not be using an 
index.  You could try using a Union instead of In() to see if that is any 
faster.
I have also found that if the tables used in the join are not going to be 
updated, then I copy the rows to a Memory table and apply the appropriate 
indexes and the joins will run 2x-3x faster.
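
For example, to test the single-argument variant and the Memory-table trick
(keylist_mem is a made-up name, and the index is only a guess at what the
join needs):

  -- 1) is it fast with a single UserId instead of IN()?
  ...  LEFT JOIN keylist tk
         ON (tk.KeylistId = t.ReadAccessKeylistId AND tk.UserId = 22943)  ...

  -- 2) copy the read-only table into a MEMORY table and index it
  CREATE TABLE keylist_mem ENGINE=MEMORY SELECT * FROM keylist;
  ALTER TABLE keylist_mem ADD INDEX idx_key_user (KeylistId, UserId);
  -- then join against keylist_mem instead of keylist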


Mike

At 11:12 AM 4/25/2010, Yves Goergen wrote:

Hi,

I'm still stuck with my SQL query that is slow but really shouldn't be.

The problem is that I cannot create a simple test case. I could only
provide you a whole lot of pages of PHP code and SQL queries to explain
the problem.

I have now three versions of my query. One with a sub select, which
takes 40 ms and works. One with a left join instead, which takes 40 ms
and works. And one with an inner join instead, which takes 3 ms and
doesn't work. The number of left-joined rows should be around 5, so what
can make it take 35 ms to join those handful of rows?

MySQL server version is 5.0.67 and 5.1.41 (just updated).

Here's a small impression of my query:

SELECT t.TagId, t.TagName, tk.UserId
FROM message_revision_tag mrt
  JOIN tag t USING (TagId)
  LEFT JOIN keylist tk ON   -- Here's the left join
(tk.KeylistId = t.ReadAccessKeylistId AND
tk.UserId IN (22943, 10899))
WHERE mrt.MessageId = 72 AND
  mrt.RevisionNumber = 1 AND
  t.ReadAccessKeylistId IS NOT NULL;

This is only a sub-query of a larger search query in my PHP application.
MySQL workbench can't show query timings so I can'T say how long this
part of the query takes. It's probably fast, but it is applied to ~600
other rows to determine whether they should be included in the results
or not.

--
Yves Goergen "LonelyPixel" 
Visit my web laboratory at http://beta.unclassified.de

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slow query, unknown why

2010-04-25 Thread Rob Wultsch
On Sun, Apr 25, 2010 at 9:12 AM, Yves Goergen
 wrote:
> Hi,
>
> I'm still stuck with my SQL query that is slow but really shouldn't be.
>
> The problem is that I cannot create a simple test case. I could only
> provide you a whole lot of pages of PHP code and SQL queries to explain
> the problem.
>
> I have now three versions of my query. One with a sub select, which
> takes 40 ms and works. One with a left join instead, which takes 40 ms
> and works. And one with an inner join instead, which takes 3 ms and
> doesn't work. The number of left-joined rows should be around 5, so what
> can make it take 35 ms to join those handful of rows?
>
> MySQL server version is 5.0.67 and 5.1.41 (just updated).
>
> Here's a small impression of my query:
>
> SELECT t.TagId, t.TagName, tk.UserId
> FROM message_revision_tag mrt
>  JOIN tag t USING (TagId)
>  LEFT JOIN keylist tk ON   -- Here's the left join
>    (tk.KeylistId = t.ReadAccessKeylistId AND
>    tk.UserId IN (22943, 10899))
> WHERE mrt.MessageId = 72 AND
>  mrt.RevisionNumber = 1 AND
>  t.ReadAccessKeylistId IS NOT NULL;
>
> This is only a sub-query of a larger search query in my PHP application.
> MySQL workbench can't show query timings so I can'T say how long this
> part of the query takes. It's probably fast, but it is applied to ~600
> other rows to determine whether they should be included in the results
> or not.
>

Please post create table statements, show indexes and explain for any
queries with which you want help. Please then post each of your
attempted queries coupled with a written description of what you think
you are asking the database for.
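
For example (substitute your real table names and the full query):

  SHOW CREATE TABLE message_revision_tag\G
  SHOW CREATE TABLE tag\G
  SHOW CREATE TABLE keylist\G
  SHOW INDEX FROM keylist;
  EXPLAIN SELECT t.TagId, t.TagName, tk.UserId
    FROM message_revision_tag mrt
    JOIN tag t USING (TagId)
    LEFT JOIN keylist tk
      ON (tk.KeylistId = t.ReadAccessKeylistId AND tk.UserId IN (22943, 10899))
   WHERE mrt.MessageId = 72 AND mrt.RevisionNumber = 1
     AND t.ReadAccessKeylistId IS NOT NULL\G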


-- 
Rob Wultsch
wult...@gmail.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Slow query, unknown why

2010-04-25 Thread Yves Goergen
Hi,

I'm still stuck with my SQL query that is slow but really shouldn't be.

The problem is that I cannot create a simple test case. I could only
provide you a whole lot of pages of PHP code and SQL queries to explain
the problem.

I have now three versions of my query. One with a sub select, which
takes 40 ms and works. One with a left join instead, which takes 40 ms
and works. And one with an inner join instead, which takes 3 ms and
doesn't work. The number of left-joined rows should be around 5, so what
can make it take 35 ms to join those handful of rows?

MySQL server version is 5.0.67 and 5.1.41 (just updated).

Here's a small impression of my query:

SELECT t.TagId, t.TagName, tk.UserId
FROM message_revision_tag mrt
  JOIN tag t USING (TagId)
  LEFT JOIN keylist tk ON   -- Here's the left join
(tk.KeylistId = t.ReadAccessKeylistId AND
tk.UserId IN (22943, 10899))
WHERE mrt.MessageId = 72 AND
  mrt.RevisionNumber = 1 AND
  t.ReadAccessKeylistId IS NOT NULL;

This is only a sub-query of a larger search query in my PHP application.
MySQL Workbench can't show query timings, so I can't say how long this
part of the query takes. It's probably fast, but it is applied to ~600
other rows to determine whether they should be included in the results
or not.

-- 
Yves Goergen "LonelyPixel" 
Visit my web laboratory at http://beta.unclassified.de

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Enabling Slow query log in Mysql 5.0

2010-02-16 Thread Shawn Green

Machiel Richards wrote:

Hi All

 


I hope that someone can assist me with this.

 


We have a client with a production MySQL database running
MySQL 5.0.




Their slow query counts have skyrocketed over the last week
and I found that their slow query logs are not enabled.

 


However when trying to configure this I get the following
message:

 


mysql> set global log_slow_queries=ON;

ERROR 1238 (HY000): Variable 'log_slow_queries' is a read only variable

mysql>

 


I did the same thing many times before on other databases
but this specific one gives me this message.

 


Can anyone perhaps give me some insight as to why i'm
getting this and how to enable it (preferably without having to restart the
database seeing it is a high availability production system)?

 


You can do what you described in version 5.1 but not in 5.0 . For 5.0 
that variable is not dynamic. That means you cannot change it while the 
system is running:


http://dev.mysql.com/doc/refman/5.0/en/server-options.html#option_mysqld_log-slow-queries

One way around this is to setup the machine with the Slow Query Log 
enabled but to use a very large value of --long-query-time to 
essentially ignore every query. Then, when you want to capture slow 
queries, you reset --long-query-time to a reasonable value. 
Unfortunately, this requires a restart to initialize. After that you can 
adjust the --long-query-time to throttle the contents of the log.


http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_long_query_time
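
For example (the log path is only an example; on 5.0 the restart is needed
to turn the log on, but long_query_time itself is dynamic):

  # my.cnf
  [mysqld]
  log-slow-queries = /var/log/mysql/slow.log
  long_query_time  = 86400      # effectively "log nothing"

  -- later, without another restart:
  SET GLOBAL long_query_time = 2;       -- start capturing slow queries
  SET GLOBAL long_query_time = 86400;   -- stop again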

--
Shawn Green, MySQL Senior Support Engineer
Sun Microsystems, Inc.
Office: Blountville, TN



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Enabling Slow query log in Mysql 5.0

2010-02-16 Thread Krishna Chandra Prajapati
Hi Machiel,

The below link will help you.
mk-query-digest: http://www.xaprb.com/blog/category/maatkit/

Regards,
Krishna



On Tue, Feb 16, 2010 at 12:51 PM, Machiel Richards wrote:

> Hi All
>
>
>
>I hope that someone can assist me with this.
>
>
>
>We have a client with a production MySQL database running
> MySQL 5.0.
>
>
>
>Their slow query counts have skyrocketed over the last week
> and I found that their slow query logs are not enabled.
>
>
>
>However when trying to configure this I get the following
> message:
>
>
>
> mysql> set global log_slow_queries=ON;
>
> ERROR 1238 (HY000): Variable 'log_slow_queries' is a read only variable
>
> mysql>
>
>
>
>I did the same thing many times before on other databases
> but this specific one gives me this message.
>
>
>
>Can anyone perhaps give me some insight as to why i'm
> getting this and how to enable it (preferably without having to restart the
> database seeing it is a high availability production system)?
>
>
>
>Help is much appreciated.
>
>
>
> Regards
>
>


Enabling Slow query log in Mysql 5.0

2010-02-15 Thread Machiel Richards
Hi All

 

I hope that someone can assist me with this.

 

We have a client with a production MySQL database running
MySQL 5.0.



Their slow query counts have skyrocketed over the last week
and I found that their slow query logs are not enabled.

 

However when trying to configure this I get the following
message:

 

mysql> set global log_slow_queries=ON;

ERROR 1238 (HY000): Variable 'log_slow_queries' is a read only variable

mysql>

 

I did the same thing many times before on other databases
but this specific one gives me this message.

 

Can anyone perhaps give me some insight as to why I'm
getting this and how to enable it (preferably without having to restart the
database, seeing as it is a high-availability production system)?

 

Help is much appreciated.

 

Regards



RE: MYSQL slow query log in table.

2009-10-09 Thread John Daisley
Use an ETL tool like Talend to load the slow query log into a table. We do
something similar with the general query log very successfully with MySQL 5.

Even with 5.1 I would use this approach over the built-in log tables, as the
built-in log tables impact performance quite significantly.

Regards

John Daisley
Mobile +44(0)7812 451238
Email j...@butterflysystems.co.uk

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer
Cognos BI Developer

---
Sent from HP IPAQ mobile device.



-Original Message-
From: Brown, Charles 
Sent: Thursday, October 08, 2009 4:37 PM
To: Jaime Crespo Rincón ; mysql@lists.mysql.com 

Subject: RE: MYSQL slow query log in table.

Hello. Thanks for quick response. I'm running Mysql 5.0. We have no plans to 
upgrade to 5.1 that supports slow-log table. Do you know of any work around to 
get my slow query into a table -- just wondering. Desperate for a resolution or 
circumvention. 



-Original Message-
From: Jaime Crespo Rincón [mailto:jcre...@warp.es] 
Sent: Thursday, October 08, 2009 10:17 AM
To: Brown, Charles
Cc: Daevid Vincent; mysql@lists.mysql.com
Subject: Re: MYSQL slow query log in table.

2009/10/7 Brown, Charles :
> Hello All. I would like to implement  MYSQL slow query log in table. Can 
> someone kindly assist me with the table definition and implementation.

SHOW CREATE TABLE mysql.slow_log;

<http://dev.mysql.com/doc/refman/5.1/en/log-tables.html>

-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks
<http://warp.es>


This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.



[The entire original message is not included]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: MYSQL slow query log in table.

2009-10-08 Thread Brown, Charles
Hello. Thanks for the quick response. I'm running MySQL 5.0. We have no plans to 
upgrade to 5.1, which supports the slow-log table. Do you know of any workaround to 
get my slow queries into a table -- just wondering. Desperate for a resolution or 
circumvention. 



-Original Message-
From: Jaime Crespo Rincón [mailto:jcre...@warp.es] 
Sent: Thursday, October 08, 2009 10:17 AM
To: Brown, Charles
Cc: Daevid Vincent; mysql@lists.mysql.com
Subject: Re: MYSQL slow query log in table.

2009/10/7 Brown, Charles :
> Hello All. I would like to implement  MYSQL slow query log in table. Can 
> someone kindly assist me with the table definition and implementation.

SHOW CREATE TABLE mysql.slow_log;

<http://dev.mysql.com/doc/refman/5.1/en/log-tables.html>

-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks
<http://warp.es>


This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.



Re: MYSQL slow query log in table.

2009-10-08 Thread Jaime Crespo Rincón
2009/10/7 Brown, Charles :
> Hello All. I would like to implement  MYSQL slow query log in table. Can 
> someone kindly assist me with the table definition and implementation.

SHOW CREATE TABLE mysql.slow_log;

<http://dev.mysql.com/doc/refman/5.1/en/log-tables.html>
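
For reference, on 5.1 that returns roughly the following (from memory, so
check your own server; the exact definition may differ slightly between
versions):

  CREATE TABLE `slow_log` (
    `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
                 ON UPDATE CURRENT_TIMESTAMP,
    `user_host` mediumtext NOT NULL,
    `query_time` time NOT NULL,
    `lock_time` time NOT NULL,
    `rows_sent` int(11) NOT NULL,
    `rows_examined` int(11) NOT NULL,
    `db` varchar(512) NOT NULL,
    `last_insert_id` int(11) NOT NULL,
    `insert_id` int(11) NOT NULL,
    `server_id` int(11) NOT NULL,
    `sql_text` mediumtext NOT NULL
  ) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'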

-- 
Jaime Crespo
MySQL & Java Instructor
Warp Networks
<http://warp.es>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



MYSQL slow query log in table.

2009-10-07 Thread Brown, Charles
Hello All. I would like to implement  MYSQL slow query log in table. Can 
someone kindly assist me with the table definition and implementation. 

Thanks so much in advance




This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Questions on un-index searches and slow-query-log

2009-10-05 Thread Walter Heck - OlinData.com
And an answer to 2): http://www.maatkit.org/doc/mk-query-digest.html
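
Typical invocation is simply (the path is just an example):

  mk-query-digest /var/log/mysql/mysql-slow.log > slow-report.txt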

On Tue, Oct 6, 2009 at 02:59, Brown, Charles  wrote:
> Questions Folks:
> (1) What do you about un-index searches. How can one report and monitor them?
> (2) What do you do with the slow-query log. Are there any utilities or 
> scripts out there to filter and manage this log?
>
> Thanks
>
>
>
> 
> This message is intended only for the use of the Addressee and
> may contain information that is PRIVILEGED and CONFIDENTIAL.
>
> If you are not the intended recipient, you are hereby notified
> that any dissemination of this communication is strictly prohibited.
>
> If you have received this communication in error, please erase
> all copies of the message and its attachments and notify us
> immediately.
>
> Thank you.
> 
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:    http://lists.mysql.com/mysql?unsub=li...@olindata.com
>
>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Questions on un-index searches and slow-query-log

2009-10-05 Thread Gavin Towey
See the log-queries-not-using-indexes option in my.cnf, used with the slow log.

http://dev.mysql.com/doc/refman/5.1/en/slow-query-log.html
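
For example, in my.cnf (paths and threshold are just examples):

  [mysqld]
  log-slow-queries = /var/log/mysql/mysql-slow.log
  long_query_time  = 2
  log-queries-not-using-indexes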


Regards,
Gavin Towey


-Original Message-
From: Brown, Charles [mailto:cbr...@bmi.com]
Sent: Monday, October 05, 2009 12:59 PM
To: Mysql List
Cc: John Meyer; Mark Phillips
Subject: Questions on un-index searches and slow-query-log

Questions Folks:
(1) What do you about un-index searches. How can one report and monitor them?
(2) What do you do with the slow-query log. Are there any utilities or scripts 
out there to filter and manage this log?

Thanks




This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=gto...@ffn.com


The information contained in this transmission may contain privileged and 
confidential information. It is intended only for the use of the person(s) 
named above. If you are not the intended recipient, you are hereby notified 
that any review, dissemination, distribution or duplication of this 
communication is strictly prohibited. If you are not the intended recipient, 
please contact the sender by reply email and destroy all copies of the original 
message.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Questions on un-index searches and slow-query-log

2009-10-05 Thread Brown, Charles
Questions Folks:  
(1) What do you do about un-indexed searches? How can one report and monitor them? 
(2) What do you do with the slow-query log? Are there any utilities or scripts 
out there to filter and manage this log?

Thanks




This message is intended only for the use of the Addressee and
may contain information that is PRIVILEGED and CONFIDENTIAL.

If you are not the intended recipient, you are hereby notified
that any dissemination of this communication is strictly prohibited.

If you have received this communication in error, please erase
all copies of the message and its attachments and notify us
immediately.

Thank you.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: How to optimize a slow query?

2009-09-07 Thread Jia Chen
Yes, 13419851 rows were added to rmpdata1. However, 10 minutes seems too long.
I ran the same join using a SQL procedure in a statistical package called SAS
on a similar machine, and it only takes 1 minute and 3 seconds.


Yes, it is a 1:1 relationship between table RItime and MVtime. 
However, I don't get your suggestion, "I'd recommend joining the 
two tables into 1 table so you don't have to join them in the first 
place."  Could you elaborate that?


I was using B-tree index. Switching to HASH does help. Now, it 
takes 4 min 50.17 sec to run the query.

I also turn on profiling by using
mysql> set profiling = 1;
Query OK, 0 rows affected (0.01 sec)

After the query finishes, I get
mysql> show profile;
+--++
| Status   | Duration   |
+--++
| starting |   0.000123 |
| checking permissions |   0.10 |
| Opening tables   |   0.44 |
| System lock  |   0.07 |
| Table lock   |   0.11 |
| init |   0.83 |
| creating table   |   0.003428 |
| After create |   0.000124 |
| System lock  |   0.04 |
| Table lock   |   0.51 |
| optimizing   |   0.07 |
| statistics   |   0.33 |
| preparing|   0.20 |
| executing|   0.04 |
| Sending data | 290.153530 |
| end  |   0.08 |
| end  |   0.04 |
| query end|   0.03 |
| freeing items|   0.10 |
| closing tables   |   0.25 |
| logging slow query   |   0.01 |
| logging slow query   |   0.013429 |
| cleaning up  |   0.04 |
+--++
23 rows in set (0.02 sec)

MySQL spends most of its time sending data. According to 
http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html, 
sending data means that "the thread is processing rows for a 
|SELECT| <http://dev.mysql.com/doc/refman/5.0/en/select.html> 
statement and also is sending data to the client."  Is there more 
room to optimize this query?  Thanks again.


Best,
Jia

mos wrote:
How many rows were added to rmpdata1 table? If it is 13.4 million 
rows then it is going to take several minutes to join this many 
rows from the 2 tables.
Is there a 1:1 relationship between the two tables or a 1:Many? If 
there is a 1:1 then I'd recommend joining the two tables into 1 
table so you don't have to join them in the first place.
The only other thing I can suggest is to change the type of index 
on the tables being joined to see if that makes a speed 
difference. For example, if you are using BTREE then switch to 
HASH or vice versa. See 
http://dev.mysql.com/doc/refman/5.0/en/create-index.html for more 
info.


Mike

At 10:05 AM 9/5/2009, Jia Chen wrote:

Hi there,

One simple query took more than 10 minutes.  Here is how relevant 
rows in the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 
26758561

use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+

2 rows in set (0.00 sec)

I use "show table status from world;" to get information about 
two tables, RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 13:31:33
   Update_time: 2009-09-03 13:43:51
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Com

Re: How to optimize a slow query?

2009-09-06 Thread mos
a 1:1 relationship between table RItime and MVtime. However, 
I don't get your suggestion, "I'd recommend joining the two tables into 
1 table so you don't have to join them in the first place."  Could you 
elaborate that?


I was using B-tree index. Switching to HASH does help. Now, it takes 4 
min 50.17 sec to run the query.

I also turn on profiling by using
mysql> set profiling = 1;
Query OK, 0 rows affected (0.01 sec)

After the query finishes, I get
mysql> show profile;
+--++
| Status   | Duration   |
+--++
| starting |   0.000123 |
| checking permissions |   0.10 |
| Opening tables   |   0.44 |
| System lock  |   0.07 |
| Table lock   |   0.11 |
| init |   0.83 |
| creating table   |   0.003428 |
| After create |   0.000124 |
| System lock  |   0.04 |
| Table lock   |   0.51 |
| optimizing   |   0.07 |
| statistics   |   0.33 |
| preparing|   0.20 |
| executing|   0.04 |
| Sending data | 290.153530 |
| end  |   0.08 |
| end  |   0.04 |
| query end|   0.03 |
| freeing items|   0.10 |
| closing tables   |   0.25 |
| logging slow query   |   0.01 |
| logging slow query   |   0.013429 |
| cleaning up  |   0.04 |
+--++
23 rows in set (0.02 sec)

MySQL spends most of its time sending data. According to 
http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html, 
sending data means that "the thread is processing rows for a |SELECT| 
<http://dev.mysql.com/doc/refman/5.0/en/select.html> statement and also 
is sending data to the client."  Is there more room to optimize this 
query?  Thanks again.


Best,
Jia

mos wrote:
How many rows were added to rmpdata1 table? If it is 13.4 million rows 
then it is going to take several minutes to join this many rows from 
the 2 tables.
Is there a 1:1 relationship between the two tables or a 1:Many? If 
there is a 1:1 then I'd recommend joining the two tables into 1 table 
so you don't have to join them in the first place.
The only other thing I can suggest is to change the type of index on 
the tables being joined to see if that makes a speed difference. For 
example, if you are using BTREE then switch to HASH or vice versa. See 
http://dev.mysql.com/doc/refman/5.0/en/create-index.html for more info.


Mike

At 10:05 AM 9/5/2009, Jia Chen wrote:

Hi there,

One simple query took more than 10 minutes.  Here is how relevant rows 
in the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 26758561
use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+

2 rows in set (0.00 sec)

I use "show table status from world;" to get information about two 
tables, RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 13:31:33
   Update_time: 2009-09-03 13:43:51
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:

I also describe these two tables:
mysql> desc RItime;
+---+

Re: How to optimize a slow query?

2009-09-06 Thread Jia Chen
| checking permissions |   0.10 |
| Opening tables   |   0.44 |
| System lock  |   0.07 |
| Table lock   |   0.11 |
| init |   0.83 |
| creating table   |   0.003428 |
| After create |   0.000124 |
| System lock  |   0.04 |
| Table lock   |   0.51 |
| optimizing   |   0.07 |
| statistics   |   0.33 |
| preparing|   0.20 |
| executing|   0.04 |
| Sending data | 290.153530 |
| end  |   0.08 |
| end  |   0.04 |
| query end|   0.03 |
| freeing items|   0.10 |
| closing tables   |   0.25 |
| logging slow query   |   0.01 |
| logging slow query   |   0.013429 |
| cleaning up  |   0.04 |
+--++
23 rows in set (0.02 sec)

MySQL spends most of its time sending data. According to 
http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html, 
sending data means that "the thread is processing rows for a |SELECT| 
<http://dev.mysql.com/doc/refman/5.0/en/select.html> statement and 
also is sending data to the client."  Is there more room to optimize 
this query?  Thanks again.


Best,
Jia

mos wrote:
How many rows were added to rmpdata1 table? If it is 13.4 million 
rows then it is going to take several minutes to join this many rows 
from the 2 tables.
Is there a 1:1 relationship between the two tables or a 1:Many? If 
there is a 1:1 then I'd recommend joining the two tables into 1 
table so you don't have to join them in the first place.
The only other thing I can suggest is to change the type of index on 
the tables being joined to see if that makes a speed difference. For 
example, if you are using BTREE then switch to HASH or vice versa. 
See http://dev.mysql.com/doc/refman/5.0/en/create-index.html for 
more info.


Mike

At 10:05 AM 9/5/2009, Jia Chen wrote:

Hi there,

One simple query took more than 10 minutes.  Here is how relevant 
rows in the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 26758561
use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+

2 rows in set (0.00 sec)

I use "show table status from world;" to get information about two 
tables, RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 13:31:33
   Update_time: 2009-09-03 13:43:51
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:

I also describe these two tables:
mysql> desc RItime;
+---++--+-++---+
| Field | Type   | Null | Key | Default| Extra |
+---++--+-++---+
| code  | varchar(6) | NO   | PRI ||   |
| ndate | date   | NO   | PRI | -00-00 |   |
| ri| double | YES  | | NULL   |   |
| time  | date   | YES  | | NULL   |   |
| bdate | date   | YES  | | NULL   |   |
+---++--+-++---+
5 rows in set (0.00 sec)

mysql> desc MVtime;
+---++--+-++---+
| Field | Type   | Null | Key | Default| Extra |
+---+

Re: How to optimize a slow query?

2009-09-06 Thread mos

Jia,

Yes, it is a 1:1 relationship between table RItime and MVtime. However, I 
don't get your suggestion, "I'd recommend joining the two tables into 1 
table so you don't have to join them in the first place."  Could you 
elaborate that?


Sure but first I have to relate it to my own experience. I had 8 tables of 
around 25 million rows each. They all had a 1:1 relationship and 
occasionally some of the tables did not have a corresponding row. I felt it 
was better from a design point of view to have 8 different tables and do 
the joins on the tables that I needed for each of my particular queries. 
I'd be joining anywhere from 2 to 5 or 6 or even all 8 tables at a time, 
using a where clause to select 15k rows at a time. This is the way to do it 
from a normalized point of view. All of the information is in its 
respective table and only assemble the tables for each particular query.


Well, this was slooow! A heck of a lot of work was done to join the 
tables together on a 2-column key (like yours). I also had to run 
maintenance on the tables to see which tables were corrupted or were 
missing rows that should have been there. The tables also repeated columns 
from the other tables, like date and product_id, that are used to help 
identify each row. Well, to make a long story short, it was far too much 
effort to juggle the relationships between all of these tables.


Then a colleague made the monumental announcement by saying "I've never 
found the need to use more than 1 table when there was a 1:1 relationship. 
There is a tremendous speed cost involved in piecing the data back 
together. I put all of the data into 1 table".  So the light went on for 
me. From then on I've merged all 8 tables into one and if any of the 
subordinate table data isn't available for a row, its columns are set to 
NULL, which is the values they would have had anyway after a left join.


I am perfectly happy now with one wide table with over 100 columns. 
Everything is in its place and maintenance is a dream. Queries are also 
quite fast because all of the information is in one table and not 8. I 
don't have to worry about optimizing the indexes for the table joins 
because there aren't any joins between these tables; it is all in 1 row.


So you really have to ask yourself: why spend 10 minutes each time your 
query is run? Instead, you eliminate the join altogether by keeping the 
data of the 2 tables in 1 table in the first place.
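
A rough sketch of the one-time merge in your case (rmp_merged is a made-up
name):

  CREATE TABLE rmp_merged
  SELECT ri.*, mv.MV
    FROM RItime AS ri
    LEFT JOIN MVtime AS mv USING (code, ndate);

  ALTER TABLE rmp_merged ADD PRIMARY KEY (code, ndate);
  -- afterwards every query reads rmp_merged directly, no join needed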


Mike



At 09:45 AM 9/6/2009, Jia Chen wrote:

Thanks for your reply, Mike.

Yes, 13419851 rows were added to rmpdata1. However, 10 minutes seem to be 
too long. I run the same join by using SQL procedure in a statistical 
software called SAS on a similar machine. It only takes 1 minute and 3 seconds.


Yes, it is a 1:1 relationship between table RItime and MVtime. However, I 
don't get your suggestion, "I'd recommend joining the two tables into 1 
table so you don't have to join them in the first place."  Could you 
elaborate that?


I was using B-tree index. Switching to HASH does help. Now, it takes 4 min 
50.17 sec to run the query.

I also turn on profiling by using
mysql> set profiling = 1;
Query OK, 0 rows affected (0.01 sec)

After the query finishes, I get
mysql> show profile;
+--++
| Status   | Duration   |
+--++
| starting |   0.000123 |
| checking permissions |   0.10 |
| Opening tables   |   0.44 |
| System lock  |   0.07 |
| Table lock   |   0.11 |
| init |   0.83 |
| creating table   |   0.003428 |
| After create |   0.000124 |
| System lock  |   0.04 |
| Table lock   |   0.51 |
| optimizing   |   0.07 |
| statistics   |   0.33 |
| preparing|   0.20 |
| executing|   0.04 |
| Sending data | 290.153530 |
| end  |   0.08 |
| end  |   0.04 |
| query end|   0.000003 |
| freeing items|   0.000010 |
| closing tables   |   0.25 |
| logging slow query   |   0.01 |
| logging slow query   |   0.013429 |
| cleaning up  |   0.04 |
+--++
23 rows in set (0.02 sec)

MySQL spends most of its time sending data. According to 
http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html, sending 
data means that "the thread is processing rows for a |SELECT| 
<http://dev.mysql.com/doc/refman/5.0/en/select.html> statement and also is 
sending data to the client."  Is there more room to optimize this 
query?  Thanks again.


Best,
Jia

mos wrote:
How many rows were added to rmpdata1 table? If it is 13.4 million rows 
then it is going to take several minutes to join this many rows from the 
2 tables.
Is there a 1

Re: How to optimize a slow query?

2009-09-06 Thread Jia Chen

Thanks for your reply, Mike.

Yes, 13419851 rows were added to rmpdata1. However, 10 minutes seems too 
long. I ran the same join using a SQL procedure in a statistical package 
called SAS on a similar machine, and it only takes 1 minute and 3 
seconds.


Yes, it is a 1:1 relationship between table RItime and MVtime. However, 
I don't get your suggestion, "I'd recommend joining the two tables into 
1 table so you don't have to join them in the first place."  Could you 
elaborate on that?


I was using B-tree index. Switching to HASH does help. Now, it takes 4 
min 50.17 sec to run the query. 


I also turn on profiling by using
mysql> set profiling = 1;
Query OK, 0 rows affected (0.01 sec)

After the query finishes, I get
mysql> show profile;
+--++
| Status   | Duration   |
+--++
| starting |   0.000123 |
| checking permissions |   0.10 |
| Opening tables   |   0.44 |
| System lock  |   0.07 |
| Table lock   |   0.11 |
| init |   0.83 |
| creating table   |   0.003428 |
| After create |   0.000124 |
| System lock  |   0.04 |
| Table lock   |   0.51 |
| optimizing   |   0.07 |
| statistics   |   0.33 |
| preparing|   0.20 |
| executing|   0.04 |
| Sending data | 290.153530 |
| end  |   0.08 |
| end  |   0.04 |
| query end|   0.03 |
| freeing items|   0.10 |
| closing tables   |   0.25 |
| logging slow query   |   0.01 |
| logging slow query   |   0.013429 |
| cleaning up  |   0.04 |
+--++
23 rows in set (0.02 sec)

MySQL spends most of its time sending data. According to 
http://dev.mysql.com/doc/refman/5.0/en/general-thread-states.html, 
sending data means that "the thread is processing rows for a |SELECT| 
<http://dev.mysql.com/doc/refman/5.0/en/select.html> statement and also 
is sending data to the client."  Is there more room to optimize this 
query?  Thanks again.


Best,
Jia

mos wrote:
How many rows were added to rmpdata1 table? If it is 13.4 million rows 
then it is going to take several minutes to join this many rows from 
the 2 tables.
Is there a 1:1 relationship between the two tables or a 1:Many? If 
there is a 1:1 then I'd recommend joining the two tables into 1 table 
so you don't have to join them in the first place.
The only other thing I can suggest is to change the type of index on 
the tables being joined to see if that makes a speed difference. For 
example, if you are using BTREE then switch to HASH or vice versa. See 
http://dev.mysql.com/doc/refman/5.0/en/create-index.html for more info.


Mike

At 10:05 AM 9/5/2009, Jia Chen wrote:

Hi there,

One simple query took more than 10 minutes.  Here is how relevant 
rows in the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 26758561
use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+

2 rows in set (0.00 sec)

I use "show table status from world;" to get information about two 
tables, RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0

Re: How to optimize a slow query?

2009-09-05 Thread mos
How many rows were added to rmpdata1 table? If it is 13.4 million rows then 
it is going to take several minutes to join this many rows from the 2 tables.
Is there a 1:1 relationship between the two tables or a 1:Many? If there is 
a 1:1 then I'd recommend joining the two tables into 1 table so you don't 
have to join them in the first place.
The only other thing I can suggest is to change the type of index on the 
tables being joined to see if that makes a speed difference. For example, 
if you are using BTREE then switch to HASH or vice versa. See 
http://dev.mysql.com/doc/refman/5.0/en/create-index.html for more info.


Mike

At 10:05 AM 9/5/2009, Jia Chen wrote:

Hi there,

One simple query took more than 10 minutes.  Here is how relevant rows in 
the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 26758561
use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
2 rows in set (0.00 sec)

I use "show table status from world;" to get information about two tables, 
RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 13:31:33
   Update_time: 2009-09-03 13:43:51
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:

I also describe these two tables:
mysql> desc RItime;
+---++--+-++---+
| Field | Type   | Null | Key | Default| Extra |
+---++--+-++---+
| code  | varchar(6) | NO   | PRI ||   |
| ndate | date   | NO   | PRI | -00-00 |   |
| ri| double | YES  | | NULL   |   |
| time  | date   | YES  | | NULL   |   |
| bdate | date   | YES  | | NULL   |   |
+---++--+-++---+
5 rows in set (0.00 sec)

mysql> desc MVtime;
+---++--+-++---+
| Field | Type   | Null | Key | Default| Extra |
+---++--+-++---+
| code  | varchar(6) | NO   | PRI ||   |
| ndate | date   | NO   | PRI | -00-00 |   |
| MV| double | YES  | | NULL   |   |
| time  | date   | YES  | | NULL   |   |
| bdate | date   | YES  | | NULL   |   |
+---++--+-++---+
5 rows in set (0.00 sec)

Could you give me some hint on how to improve the speed of this query?
Thanks.

Best,
Jia




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=mo...@fastmail.fm



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



How to optimize a slow query?

2009-09-05 Thread Jia Chen

Hi there,

One simple query took more than 10 minutes.  Here is how relevant rows 
in the slow query log looks like:


# Time: 090905 10:49:57
# u...@host: root[root] @ localhost []
# Query_time: 649  Lock_time: 0  Rows_sent: 0  Rows_examined: 26758561
use world;
create table rmpdata1 select ri.*,
mv.MV, coalesce(ri.code,mv.code) as ccode,
coalesce(ri.ndate,mv.ndate) as cndate
from RItime as ri left outer join
MVtime as mv
on (ri.code=mv.code and
ri.ndate=mv.ndate);

When I "explain" only the select clause, I get
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref                          | rows     | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
|  1 | SIMPLE      | ri    | ALL    | NULL          | NULL    | NULL    | NULL                         | 13419851 |       |
|  1 | SIMPLE      | mv    | eq_ref | PRIMARY       | PRIMARY | 11      | world.ri.code,world.ri.ndate |        1 |       |
+----+-------------+-------+--------+---------------+---------+---------+------------------------------+----------+-------+
2 rows in set (0.00 sec)

I use "show table status from world;" to get information about two 
tables, RItime and MVtime, in the join clause:

  Name: RItime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13419851
Avg_row_length: 31
   Data_length: 427721848
Max_data_length: 281474976710655
  Index_length: 347497472
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 10:17:57
   Update_time: 2009-09-03 12:04:02
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:
*** 2. row ***
  Name: MVtime
Engine: MyISAM
   Version: 10
Row_format: Dynamic
  Rows: 13562373
Avg_row_length: 31
   Data_length: 430220056
Max_data_length: 281474976710655
  Index_length: 350996480
 Data_free: 0
Auto_increment: NULL
   Create_time: 2009-09-03 13:31:33
   Update_time: 2009-09-03 13:43:51
Check_time: NULL
 Collation: latin1_swedish_ci
  Checksum: NULL
Create_options:
   Comment:

I also describe these two tables:
mysql> desc RItime;
+-------+------------+------+-----+------------+-------+
| Field | Type       | Null | Key | Default    | Extra |
+-------+------------+------+-----+------------+-------+
| code  | varchar(6) | NO   | PRI |            |       |
| ndate | date       | NO   | PRI | 0000-00-00 |       |
| ri    | double     | YES  |     | NULL       |       |
| time  | date       | YES  |     | NULL       |       |
| bdate | date       | YES  |     | NULL       |       |
+-------+------------+------+-----+------------+-------+
5 rows in set (0.00 sec)

mysql> desc MVtime;
+-------+------------+------+-----+------------+-------+
| Field | Type       | Null | Key | Default    | Extra |
+-------+------------+------+-----+------------+-------+
| code  | varchar(6) | NO   | PRI |            |       |
| ndate | date       | NO   | PRI | 0000-00-00 |       |
| MV    | double     | YES  |     | NULL       |       |
| time  | date       | YES  |     | NULL       |       |
| bdate | date       | YES  |     | NULL       |       |
+-------+------------+------+-----+------------+-------+
5 rows in set (0.00 sec)

Could you give me some hint on how to improve the speed of this query?  
Thanks.


Best,
Jia




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Very Slow Query

2009-08-28 Thread Jia Chen

Hi Dan and John,

Thanks for your time!

You guys are right. I did not index any columns when I created these 
tables. After I indexed assignee columns in both tables, the select 
clause runs in seconds.


Best,
Jia

Dan Nelson wrote:

In the last episode (Aug 28), Jia Chen said:
  

Thanks for reply!

Yes, it is very slow too  if I just execute the "select ..." part.

When I run
mysql> explain extended select a.*, b.assname, b.cname, b.cusip, b.own, 
b.pname, b.sname from nber1999.pat1 as a inner join nber1999.compusta1 
asb  on a.assignee=b.assignee;

I got
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | b     | ALL  | NULL          | NULL | NULL    | NULL |    4906 |             |
|  1 | SIMPLE      | a     | ALL  | NULL          | NULL | NULL    | NULL | 2089903 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
2 rows in set, 1 warning (0.00 sec)



Ouch.  Add an index on pat1.assignee.  Mysql currently has to scan your
entire pat1 table for every row in compusta1 to find matching rows.  In
general, you want an index on any fields used in a WHERE clause.

  



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Very Slow Query

2009-08-28 Thread John
Can you show us the output of SHOW CREATE TABLE for the tables in your
query? Looks like you just need some indexing!

Regards
John


John Daisley
MySQL & Cognos Contractor

Certified MySQL 5 Database Administrator (CMDBA)
Certified MySQL 5 Developer (CMDEV)
IBM Cognos BI Developer

Telephone +44 (0)7812 451238
Email j...@butterflysystems.co.uk



-Original Message-
From: Jia Chen [mailto:chen.1...@gmail.com] 
Sent: 28 August 2009 17:17
To: Dan Nelson; mysql@lists.mysql.com
Subject: Re: Very Slow Query

Thanks for reply!

Yes, it is very slow too  if I just execute the "select ..." part.

When I run
mysql> explain extended select a.*, b.assname, b.cname, b.cusip, b.own, 
b.pname, b.sname from nber1999.pat1 as a inner join nber1999.compusta1 
asb  on a.assignee=b.assignee;
I got
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | b     | ALL  | NULL          | NULL | NULL    | NULL |    4906 |             |
|  1 | SIMPLE      | a     | ALL  | NULL          | NULL | NULL    | NULL | 2089903 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
2 rows in set, 1 warning (0.00 sec)

Best,
Jia


Dan Nelson wrote:
> In the last episode (Aug 28), Jia Chen said:
>   
>> One seemingly simple query that joins two tables takes a long time for
me.
>>
>> This is my library.
>>
>> mysql> show table status from nber1999;
>>
>> *** 1. row ***
>>            Name: compusta1
>>          Engine: MyISAM
>>         Version: 10
>>      Row_format: Dynamic
>>            Rows: 4906
>>  Avg_row_length: 77
>>     Data_length: 379464
>> Max_data_length: 281474976710655
>>    Index_length: 1024
>>       Data_free: 0
>>  Auto_increment: NULL
>>     Create_time: 2009-08-27 23:56:47
>>     Update_time: 2009-08-27 23:56:47
>>      Check_time: NULL
>>       Collation: latin1_swedish_ci
>>        Checksum: NULL
>>  Create_options:
>>         Comment:
>> *** 2. row ***
>>            Name: pat1
>>          Engine: MyISAM
>>         Version: 10
>>      Row_format: Dynamic
>>            Rows: 2089903
>>  Avg_row_length: 96
>>     Data_length: 201936072
>> Max_data_length: 281474976710655
>>    Index_length: 1024
>>       Data_free: 0
>>  Auto_increment: NULL
>>     Create_time: 2009-08-27 23:55:48
>>     Update_time: 2009-08-27 23:55:56
>>      Check_time: NULL
>>       Collation: latin1_swedish_ci
>>        Checksum: NULL
>>  Create_options:
>>         Comment:
>> 5 rows in set (0.00 sec)
>>
>> And the relevant rows in my slow query log file is:
>>
>> # Time: 090828 10:36:17
>> # u...@host: root[root] @ localhost []
>> # Query_time: 478  Lock_time: 0  Rows_sent: 0  Rows_examined: 1251
>> use nber1999;
>> create table nber1999.pat select a.*, b.assname, b.cname, b.cusip, 
>> b.own, b.pname, b.sname
>>  from nber1999.pat1 as a inner join nber1999.compusta1 as
b
>> on a.assignee=b.assignee;
>> 
>
> If you run just the "select ..." part, is it slow also?  Do you have an
> index on pat1.assignee?  What does an EXPLAIN on the select print?
>  
>   


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/mysql?unsub=john.dais...@butterflysystems.co.uk

No virus found in this incoming message.
Checked by AVG - www.avg.com 
Version: 8.5.409 / Virus Database: 270.13.71/2331 - Release Date: 08/28/09
06:26:00


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Very Slow Query

2009-08-28 Thread Dan Nelson
In the last episode (Aug 28), Jia Chen said:
> Thanks for reply!
> 
> Yes, it is very slow too  if I just execute the "select ..." part.
> 
> When I run
> mysql> explain extended select a.*, b.assname, b.cname, b.cusip, b.own, 
> b.pname, b.sname from nber1999.pat1 as a inner join nber1999.compusta1 
> asb  on a.assignee=b.assignee;
> I got
> +----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
> | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
> +----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
> |  1 | SIMPLE      | b     | ALL  | NULL          | NULL | NULL    | NULL |    4906 |             |
> |  1 | SIMPLE      | a     | ALL  | NULL          | NULL | NULL    | NULL | 2089903 | Using where |
> +----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
> 2 rows in set, 1 warning (0.00 sec)

Ouch.  Add an index on pat1.assignee.  Mysql currently has to scan your
entire pat1 table for every row in compusta1 to find matching rows.  In
general, you want an index on any fields used in a WHERE clause.
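
Concretely, something like this (the index names are arbitrary):

  ALTER TABLE nber1999.pat1      ADD INDEX idx_assignee (assignee);
  ALTER TABLE nber1999.compusta1 ADD INDEX idx_assignee (assignee);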

-- 
Dan Nelson
dnel...@allantgroup.com

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Very Slow Query

2009-08-28 Thread Jia Chen

Thanks for the reply!

Yes, it is very slow too if I just execute the "select ..." part.

When I run
mysql> explain extended select a.*, b.assname, b.cname, b.cusip, b.own, 
b.pname, b.sname from nber1999.pat1 as a inner join nber1999.compusta1 
as b on a.assignee=b.assignee;

I got
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows    | Extra       |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | b     | ALL  | NULL          | NULL | NULL    | NULL |    4906 |             |
|  1 | SIMPLE      | a     | ALL  | NULL          | NULL | NULL    | NULL | 2089903 | Using where |
+----+-------------+-------+------+---------------+------+---------+------+---------+-------------+
2 rows in set, 1 warning (0.00 sec)

Best,
Jia


Dan Nelson wrote:

In the last episode (Aug 28), Jia Chen said:
  

One seemingly simple query that joins two tables takes a long time for me.

This is my library.

mysql> show table status from nber1999;
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| Name      | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time | Collation         | Checksum | Create_options | Comment |
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| compusta1 | MyISAM |      10 | Dynamic    |    4906 |             77 |      379464 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:56:47 | 2009-08-27 23:56:47 | NULL       | latin1_swedish_ci |     NULL |                |         |
| pat1      | MyISAM |      10 | Dynamic    | 2089903 |             96 |   201936072 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:55:48 | 2009-08-27 23:55:56 | NULL       | latin1_swedish_ci |     NULL |                |         |
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
5 rows in set (0.00 sec)

And the relevant rows in my slow query log file are:

# Time: 090828 10:36:17
# u...@host: root[root] @ localhost []
# Query_time: 478  Lock_time: 0  Rows_sent: 0  Rows_examined: 1251
use nber1999;
create table nber1999.pat select a.*, b.assname, b.cname, b.cusip, 
b.own, b.pname, b.sname

 from nber1999.pat1 as a inner join nber1999.compusta1 as b
on a.assignee=b.assignee;



If you run just the "select ..." part, is it slow also?  Do you have an
index on pat1.assignee?  What does an EXPLAIN on the select print?
 
  






Re: Very Slow Query

2009-08-28 Thread Dan Nelson
In the last episode (Aug 28), Jia Chen said:
> One seemingly simple query that joins two tables takes a long time for me.
> 
> This is my library.
> 
> mysql> show table status from nber1999;
> +-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
> | Name      | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time | Collation         | Checksum | Create_options | Comment |
> +-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
> | compusta1 | MyISAM |      10 | Dynamic    |    4906 |             77 |      379464 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:56:47 | 2009-08-27 23:56:47 | NULL       | latin1_swedish_ci |     NULL |                |         |
> | pat1      | MyISAM |      10 | Dynamic    | 2089903 |             96 |   201936072 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:55:48 | 2009-08-27 23:55:56 | NULL       | latin1_swedish_ci |     NULL |                |         |
> +-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
> 5 rows in set (0.00 sec)
> 
> And the relevant rows in my slow query log file are:
> 
> # Time: 090828 10:36:17
> # u...@host: root[root] @ localhost []
> # Query_time: 478  Lock_time: 0  Rows_sent: 0  Rows_examined: 1251
> use nber1999;
> create table nber1999.pat select a.*, b.assname, b.cname, b.cusip, 
> b.own, b.pname, b.sname
>  from nber1999.pat1 as a inner join nber1999.compusta1 as b
> on a.assignee=b.assignee;

If you run just the "select ..." part, is it slow also?  Do you have an
index on pat1.assignee?  What does an EXPLAIN on the select print?
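
A quick way to check both (a sketch; database and table names as used in the
original post):

SHOW INDEX FROM nber1999.pat1;
SHOW CREATE TABLE nber1999.pat1\G

If assignee does not appear in any key, that is the most likely culprit.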
 
-- 
Dan Nelson
dnel...@allantgroup.com




Very Slow Query

2009-08-28 Thread Jia Chen

Hi all,

One seemingly simple query that joins two tables takes a long time for me.

This is my library.

mysql> show table status from nber1999;
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| Name      | Engine | Version | Row_format | Rows    | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time         | Update_time         | Check_time | Collation         | Checksum | Create_options | Comment |
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
| compusta1 | MyISAM |      10 | Dynamic    |    4906 |             77 |      379464 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:56:47 | 2009-08-27 23:56:47 | NULL       | latin1_swedish_ci |     NULL |                |         |
| pat1      | MyISAM |      10 | Dynamic    | 2089903 |             96 |   201936072 | 281474976710655 |         1024 |         0 |           NULL | 2009-08-27 23:55:48 | 2009-08-27 23:55:56 | NULL       | latin1_swedish_ci |     NULL |                |         |
+-----------+--------+---------+------------+---------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+-------------------+----------+----------------+---------+
5 rows in set (0.00 sec)

And the relevant rows in my slow query log file are:

/usr/sbin/mysqld, Version: 5.0.75-0ubuntu10.2-log ((Ubuntu)). started with:
Tcp port: 3306  Unix socket: /var/run/mysqld/mysqld.sock
Time                 Id Command    Argument
# Time: 090828 10:36:17
# u...@host: root[root] @ localhost []
# Query_time: 478  Lock_time: 0  Rows_sent: 0  Rows_examined: 1251
use nber1999;
create table nber1999.pat select a.*, b.assname, b.cname, b.cusip, 
b.own, b.pname, b.sname

from nber1999.pat1 as a inner join nber1999.compusta1 as b
   on a.assignee=b.assignee;

My operating system is Ubuntu 9.04.  I set the configuration variables as
follows:

[mysqld]
key_buffer = 1024M
table_cache = 256
query_cache_type= 1
query_cache_limit   = 2M
query_cache_size= 20M

[isamchk]
key_buffer= 16M

Can anyone give me some hint on how to speed this query up?  Thanks.

I tried to tune mysql by using a script from 
http://mediakey.dk/~cc/optimize-mysql-performance-with-mysqltuner/

and got

>>  MySQLTuner 1.0.0 - Major Hayden 
>>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
>>  Run with '--help' for additional options and output filtering
Please enter your MySQL administrative login: root
Please enter your MySQL administrative password:


 General Statistics 
--

[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.0.75-0ubuntu10.2-log
[OK] Operating on 64-bit architecture

 Storage Engine Statistics 
---

[--] Status: +Archive -BDB -Federated -InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 584M (Tables: 6)
[OK] Total fragmented tables: 0

 Performance Metrics 
-

[--] Up for: 30m 47s (131 q [0.071 qps], 42 conn, TX: 35K, RX: 7K)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 1.0G global + 2.6M per thread (100 max threads)
[OK] Maximum possible memory usage: 1.3G (34% of installed RAM)
[OK] Slow queries: 0% (1/131)
[OK] Highest usage of available connections: 2% (2/100)
[OK] Key buffer size / total MyISAM indexes: 1.0G/74.0K
[!!] Query cache efficiency: 0.0% (0 cached / 67 selects)
[OK] Query cache prunes per day: 0
[OK] Temporary tables created on disk: 19% (21 on disk / 107 total)
[OK] Thread cache hit rate: 95% (2 created / 42 connections)
[OK] Table cache hit rate: 75% (24 open / 32 opened)
[OK] Open file limit used: 4% (49/1K)
[OK] Table locks acquired immediately: 100% (41 immediate / 41 locks)

 Recommendations 
-

General recommendations:
   MySQL started within last 24 hours - recommendations may be inaccurate
Variables to adjust:
   query_cache_limit (> 2M, or use smaller result sets)
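
If that recommendation is followed, query_cache_limit is a dynamic variable, so
it can be raised at runtime and then persisted in my.cnf (the 4M value below is
only an illustration, not a tuned figure):

SET GLOBAL query_cache_limit = 4 * 1024 * 1024;
SHOW VARIABLES LIKE 'query_cache_limit';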

Best,
Jia




Re: Slow query Performance

2009-07-16 Thread Dan Nelson
In the last episode (Jul 15), Tachu(R) said:
> I'm having random query slowness that I can only reproduce once. My main
> question is that the query runs faster the second time around, but I don't
> have the query cache enabled. Here is some info from the MySQL profiler:
> 
> The time is spent mostly on the sending data step
> first time around
> 
> 63 rows in set (0.51 sec)
> 
> Second time around
> 
> 63 rows in set (0.00 sec)
> 
> Any ideas how I can improve the performance of the query? The EXPLAIN:
> 
> explain select user_id,result_id from score where quiz_id='495536' and
> user_id in (594939703, 641833475, 648583496, 663932271, 791002140,
> 844089643, 1014189359, 1014996058, 1021011357, 1035297313, 1043753292,
> 1103136802, 1107070131, 1114628916, 1129457032, 1133091309, 1188705251,
> 1211995704, 1219452575, 1219631303, 1239604246, 1241474238, 1266412488,
> 1266549868, 1288719892, 1289732597, 1317205736, 1346089661, 1350738033,
> 1354967647, 1356046070, 1369391720, 1374076904, 1406156780, 1407302487,
> 1414151928, 1425275210, 1457839666, 1466635900, 1484315366, 1493410149,
> 1536834812, 1544094394, 1572354290, 1575139632, 1578136049, 1587129534,
> 1592996678, 1594617334, 1615538051, 1615906710, 1621733854, 1622940529,
> 1646693120, 1674002418, 1684166314, 1684535294, 1701864533, 1703227082,
> 1711823847, 1713353427, 1732903860, 1752595138, 1758240924, 1813245914,
> 1815724221, 1839942291, 10015406640);
>
> +----+-------------+-------+-------+-----------------+---------+---------+------+------+-------------+
> | id | select_type | table | type  | possible_keys   | key     | key_len | ref  | rows | Extra       |
> +----+-------------+-------+-------+-----------------+---------+---------+------+------+-------------+
> |  1 | SIMPLE      | score | range | user_id,quiz_id | user_id | 12      | NULL |   68 | Using where |
> +----+-------------+-------+-------+-----------------+---------+---------+------+------+-------------+
> 1 row in set (0.02 sec)
> 
>   UNIQUE KEY `user_id` (`user_id`, `quiz_id`), 

Try swapping those fields in the compound index.  The way you have it, mysql
has to jump to each of the 68 user_id values in the index and see if one of
the quiz_ids is 495536.  If you have an index on (quiz_id,user_id), mysql
only has to jump to the 495536 quiz_id section, and all the user_ids are
right there.

That should cut your query time by 50% (since you still have to do 68 seeks
to the table rows to fetch result_id).  If you also add result_id to your
compound index, then mysql will be able to get all its information from the
index without having to go to the table at all.
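
A sketch of the two suggestions, using the table and column names from the
thread (the index names are made up, and the exact DDL may need adjusting):

-- Swap the column order in the unique key so quiz_id leads:
ALTER TABLE score
  DROP KEY user_id,
  ADD UNIQUE KEY uniq_quiz_user (quiz_id, user_id);

-- Or, to make the lookup covering, include result_id as well (as a separate,
-- non-unique index so the uniqueness rule stays on the column pair):
ALTER TABLE score ADD KEY idx_quiz_user_result (quiz_id, user_id, result_id);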

-- 
Dan Nelson
dnel...@allantgroup.com




RE: Slow query Performance

2009-07-16 Thread Martin Gainty

When my.ini has the query-cache-type = 1 setting, query results are placed in
the cache on the first read; the second and subsequent reads reference the
result set from the cache.

http://www.databasejournal.com/features/mysql/article.php/3110171/MySQLs-Query-Cache.htm
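
One way to confirm that behaviour on a running server (standard variables and
status counters; the actual numbers depend on the workload):

SHOW VARIABLES LIKE 'query_cache%';
SHOW STATUS LIKE 'Qcache%';

A Qcache_hits counter that grows between the first and second run is the sign
that the repeat read really came from the cache.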

Martin Gainty 
Confucius say "Big Dog in door prevents newspaper from being stolen"




> Date: Thu, 16 Jul 2009 10:08:03 -0400
> From: don_r...@att.net
> To: dstepli...@gmail.com
> CC: mysql@lists.mysql.com
> Subject: Re: Slow query Performance
> 
> On Wed, 15 Jul 2009 23:53:05 -0400 Darryle Steplight said:
> 
> > Can you show us the output of DESCRIBE score and SHOW INDEX FROM score?
> > 
> > On Wed, Jul 15, 2009 at 6:44 PM, Tachu® wrote:
> > > I'm having random query slowness that i can only reproduce once. My main
> > > question is that the query runs faster the second time around but i dont
> > > have ...
> 
> 
> 
> > > 
> > 
> > -- 
> > A: It reverses the normal flow of conversation.
> > Q: What's wrong with top-posting?
> > A: Top-posting.
> > Q: What's the biggest scourge on plain text email discussions?
> > 
> 
> Anybody else see the irony here?
> 
> -- 
> Don Readdon_r...@att.net
>  It's always darkest before the dawn. So if you are going to
>  steal the neighbor's newspaper, that's the time to do it.
> 
> 


Re: Slow query Performance

2009-07-16 Thread Don Read
On Wed, 15 Jul 2009 23:53:05 -0400 Darryle Steplight said:

> Can you show us the output of DESCRIBE score and SHOW INDEX FROM score?
> 
> On Wed, Jul 15, 2009 at 6:44 PM, Tachu® wrote:
> > I'm having random query slowness that i can only reproduce once. My main
> > question is that the query runs faster the second time around but i dont
> > have ...



> > 
> 
> -- 
> A: It reverses the normal flow of conversation.
> Q: What's wrong with top-posting?
> A: Top-posting.
> Q: What's the biggest scourge on plain text email discussions?
> 

Anybody else see the irony here?

-- 
Don Readdon_r...@att.net
 It's always darkest before the dawn. So if you are going to
 steal the neighbor's newspaper, that's the time to do it.




Re: Slow query Performance

2009-07-15 Thread Darryle Steplight
Can you show us the output of DESCRIBE score and SHOW INDEX FROM score?

On Wed, Jul 15, 2009 at 6:44 PM, Tachu® wrote:
> I'm having random query slowness that I can only reproduce once. My main
> question is that the query runs faster the second time around, but I don't
> have the query cache enabled. Here is some info from the MySQL profiler:
>
> The time is spent mostly on the sending data step
> first time around
>
> 63 rows in set (0.51 sec)
>
> show profile all;
> +--------------------+----------+----------+------------+-------------------+---------------------+--------------+---------------+---------------+-------------------+-------------------+-------------------+-------+-----------------------+---------------+-------------+
> | Status             | Duration | CPU_user | CPU_system | Context_voluntary | Context_involuntary | Block_ops_in | Block_ops_out | Messages_sent | Messages_received | Page_faults_major | Page_faults_minor | Swaps | Source_function       | Source_file   | Source_line |
> +--------------------+----------+----------+------------+-------------------+---------------------+--------------+---------------+---------------+-------------------+-------------------+-------------------+-------+-----------------------+---------------+-------------+
> | starting           | 0.000165 | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | NULL                  | NULL          |        NULL |
> | Opening tables     | 0.33     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | open_tables           | sql_base.cc   |        4450 |
> | System lock        | 0.20     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_lock_tables     | lock.cc       |         258 |
> | Table lock         | 0.28     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_lock_tables     | lock.cc       |         269 |
> | init               | 0.52     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_select          | sql_select.cc |        2337 |
> | optimizing         | 0.36     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         762 |
> | statistics         | 0.000233 | 0.001000 | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         944 |
> | preparing          | 0.31     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         954 |
> | executing          | 0.17     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | exec                  | sql_select.cc |        1638 |
> | Sending data       | 0.504797 | 0.129980 | 0.012998   |               429 |                  38 |         2456 |            64 |             0 |                 0 |                 0 |                 0 |     0 | exec                  | sql_select.cc |        2177 |
> | end                | 0.54     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_select          | sql_select.cc |        2382 |
> | query end          | 0.23     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_execute_command | sql_parse.cc  |        4799 |
> | freeing items      | 0.63     | 0.00     | 0.000999   |                 0 |                   0 |            0 |             0 |             0 |                 0 |

Slow query Performance

2009-07-15 Thread Tachu®
I'm having random query slowness that I can only reproduce once. My main
question is that the query runs faster the second time around, but I don't
have the query cache enabled. Here is some info from the MySQL profiler:

The time is spent mostly on the sending data step
first time around

63 rows in set (0.51 sec)

show profile all;
+--------------------+----------+----------+------------+-------------------+---------------------+--------------+---------------+---------------+-------------------+-------------------+-------------------+-------+-----------------------+---------------+-------------+
| Status             | Duration | CPU_user | CPU_system | Context_voluntary | Context_involuntary | Block_ops_in | Block_ops_out | Messages_sent | Messages_received | Page_faults_major | Page_faults_minor | Swaps | Source_function       | Source_file   | Source_line |
+--------------------+----------+----------+------------+-------------------+---------------------+--------------+---------------+---------------+-------------------+-------------------+-------------------+-------+-----------------------+---------------+-------------+
| starting           | 0.000165 | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | NULL                  | NULL          |        NULL |
| Opening tables     | 0.33     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | open_tables           | sql_base.cc   |        4450 |
| System lock        | 0.20     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_lock_tables     | lock.cc       |         258 |
| Table lock         | 0.28     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_lock_tables     | lock.cc       |         269 |
| init               | 0.52     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_select          | sql_select.cc |        2337 |
| optimizing         | 0.36     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         762 |
| statistics         | 0.000233 | 0.001000 | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         944 |
| preparing          | 0.31     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | optimize              | sql_select.cc |         954 |
| executing          | 0.17     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | exec                  | sql_select.cc |        1638 |
| Sending data       | 0.504797 | 0.129980 | 0.012998   |               429 |                  38 |         2456 |            64 |             0 |                 0 |                 0 |                 0 |     0 | exec                  | sql_select.cc |        2177 |
| end                | 0.54     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_select          | sql_select.cc |        2382 |
| query end          | 0.23     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_execute_command | sql_parse.cc  |        4799 |
| freeing items      | 0.63     | 0.00     | 0.000999   |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | mysql_parse           | sql_parse.cc  |        5805 |
| logging slow query | 0.18     | 0.00     | 0.00       |                 0 |                   0 |            0 |             0 |             0 |                 0 |                 0 |                 0 |     0 | log_slow_statement    | sql_parse.cc  |        1608 |
| cleaning up        | 0.20     | 0.00     | 0.00       |                 0 |

updates in slow query log

2009-02-15 Thread AM Corona
Is there any way to keep updates and deletes from showing up in the slow
query logs?


Re: slow query log

2009-01-02 Thread Ananda Kumar
# Query_time: 0  Lock_time: 0  Rows_sent: 1  Rows_examined: 150
SELECT SUM(COUNTER_VALUE)  FROM STO_LIS sl, SCAT_LIS sfl WHERE sl.STO_LIS_ID
= sfl.LIS_ID AND sfl.CAT_ID = '-1';

This is what is there in the slow-query log


On 1/2/09, Baron Schwartz  wrote:
>
> It executes in 0 sec when you run it.  It might be in the query cache.
> Try it with SQL_NO_CACHE.  But even then it might run faster than it
> did when it got logged in the slow log, because the table's data might
> be in memory and therefore faster to access.
>
> The point is that the slow query log shows you how long the statement
> took to execute at the time of logging.  It doesn't say anything about
> how fast the query will execute at other times, or WHY it was a long
> query at the time of logging.  It could be a lot of things including
> locking, other queries running at the same time, a background Patrol
> Read on your RAID controller, etc.  That's why I asked you to paste
> the slow query log entry for this query.
>
> On Fri, Jan 2, 2009 at 12:14 AM, Ananda Kumar  wrote:
> > mysql> explain SELECT SUM(COUNTER_VALUE)  FROM STO_LIS sl,
> > -> SCAT_LIS sfl WHERE sl.STO_LIS_ID =
> > -> sfl.LIS_ID AND sfl.CAT_ID = '-1';
> >
> >
> ++-+---+--+---+---+-+-+--+-+
> > | id | select_type | table | type | possible_keys |
> > key   | key_len | ref | rows | Extra   |
> >
> ++-+---+--+---+---+-+-+--+-+
> > |  1 | SIMPLE  | sfl   | ref  | PRIMARY,sfcatlist_lst |
> > PRIMARY   | 4   | const   |   23 | Using index |
> > |  1 | SIMPLE  | sl| ref  | STO_LIST_UK | STO_LIS_UK |
> 4   |
> > sm15.sfl.lis_id |1 | Using where |
> >
> ++-+---+--+---+---+-+-+--+-+
> > 2 rows in set (0.31 sec)
> >
> > mysql>  SELECT SUM(COUNTER_VALUE)  FROM STORES_LISTING sl,
> > SFCATEGORY_LISTING sfl WHERE sl.STORES_LISTING_ID =  sfl.LISTING_ID AND
> > sfl.CATEGORY_ID = '-1';
> > +--------------------+
> > | SUM(COUNTER_VALUE) |
> > +--------------------+
> > |               NULL |
> > +--------------------+
> > 1 row in set (0.00 sec)
> >
> >
> >
> > On 12/31/08, Baron Schwartz  wrote:
> >>
> >> Hi,
> >>
> >> On Wed, Dec 31, 2008 at 8:44 AM, Ananda Kumar 
> wrote:
> >> > Hi All,
> >> > I have enabled slow query log.
> >> > Generally this file will have SQLs which take more than long_query_time
> >> > to execute and also SQLs not using indexes.
> >> > But I see SQLs which do not come under the above condition.
> >> > I have set long_query_time to 1 sec.
> >> >
> >> > The query takes less than 1 sec to execute and also uses indexes, but
> >> > it's still recorded in the slow query log.
> >>
> >> What is the execution time shown in the slow query log?
> >>
> >> What is the query?
> >>
> >> If you can paste the whole log entry here, that would be good.
> >>
> >>
> >> --
> >> Baron Schwartz, Director of Consulting, Percona Inc.
> >> Our Blog: http://www.mysqlperformanceblog.com/
> >> Our Services: http://www.percona.com/services.html
> >
> >
>
>
>
> --
> Baron Schwartz, Director of Consulting, Percona Inc.
> Our Blog: http://www.mysqlperformanceblog.com/
> Our Services: http://www.percona.com/services.html
>


Re: slow query log

2009-01-02 Thread Baron Schwartz
It executes in 0 sec when you run it.  It might be in the query cache.
 Try it with SQL_NO_CACHE.  But even then it might run faster than it
did when it got logged in the slow log, because the table's data might
be in memory and therefore faster to access.

The point is that the slow query log shows you how long the statement
took to execute at the time of logging.  It doesn't say anything about
how fast the query will execute at other times, or WHY it was a long
query at the time of logging.  It could be a lot of things including
locking, other queries running at the same time, a background Patrol
Read on your RAID controller, etc.  That's why I asked you to paste
the slow query log entry for this query.
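
For reference, the hint Baron mentions is simply added after SELECT (sketched
here with the table names from the thread):

SELECT SQL_NO_CACHE SUM(COUNTER_VALUE)
  FROM STO_LIS sl, SCAT_LIS sfl
 WHERE sl.STO_LIS_ID = sfl.LIS_ID
   AND sfl.CAT_ID = '-1';

This bypasses the query cache for that one statement, so the timing reflects an
actual execution rather than a cached result.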

On Fri, Jan 2, 2009 at 12:14 AM, Ananda Kumar  wrote:
> mysql> explain SELECT SUM(COUNTER_VALUE)  FROM STO_LIS sl,
> -> SCAT_LIS sfl WHERE sl.STO_LIS_ID =
> -> sfl.LIS_ID AND sfl.CAT_ID = '-1';
>
> +----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
> | id | select_type | table | type | possible_keys         | key        | key_len | ref             | rows | Extra       |
> +----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
> |  1 | SIMPLE      | sfl   | ref  | PRIMARY,sfcatlist_lst | PRIMARY    | 4       | const           |   23 | Using index |
> |  1 | SIMPLE      | sl    | ref  | STO_LIST_UK           | STO_LIS_UK | 4       | sm15.sfl.lis_id |    1 | Using where |
> +----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
> 2 rows in set (0.31 sec)
>
> mysql>  SELECT SUM(COUNTER_VALUE)  FROM STORES_LISTING sl,
> SFCATEGORY_LISTING sfl WHERE sl.STORES_LISTING_ID =  sfl.LISTING_ID AND
> sfl.CATEGORY_ID = '-1';
> +--------------------+
> | SUM(COUNTER_VALUE) |
> +--------------------+
> |               NULL |
> +--------------------+
> 1 row in set (0.00 sec)
>
>
>
> On 12/31/08, Baron Schwartz  wrote:
>>
>> Hi,
>>
>> On Wed, Dec 31, 2008 at 8:44 AM, Ananda Kumar  wrote:
>> > Hi All,
>> > I have enabled slow query log.
>> > Generally this file will have SQLs which take more than long_query_time
>> > to execute and also SQLs not using indexes.
>> > But I see SQLs which do not come under the above condition.
>> > I have set long_query_time to 1 sec.
>> >
>> > The query takes less than 1 sec to execute and also uses indexes, but
>> > it's still recorded in the slow query log.
>>
>> What is the execution time shown in the slow query log?
>>
>> What is the query?
>>
>> If you can paste the whole log entry here, that would be good.
>>
>>
>> --
>> Baron Schwartz, Director of Consulting, Percona Inc.
>> Our Blog: http://www.mysqlperformanceblog.com/
>> Our Services: http://www.percona.com/services.html
>
>



-- 
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html




Re: slow query log

2009-01-01 Thread Ananda Kumar
mysql> explain SELECT SUM(COUNTER_VALUE)  FROM STO_LIS sl,
-> SCAT_LIS sfl WHERE sl.STO_LIS_ID =
-> sfl.LIS_ID AND sfl.CAT_ID = '-1';

+----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
| id | select_type | table | type | possible_keys         | key        | key_len | ref             | rows | Extra       |
+----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
|  1 | SIMPLE      | sfl   | ref  | PRIMARY,sfcatlist_lst | PRIMARY    | 4       | const           |   23 | Using index |
|  1 | SIMPLE      | sl    | ref  | STO_LIST_UK           | STO_LIS_UK | 4       | sm15.sfl.lis_id |    1 | Using where |
+----+-------------+-------+------+-----------------------+------------+---------+-----------------+------+-------------+
2 rows in set (0.31 sec)

mysql>  SELECT SUM(COUNTER_VALUE)  FROM STORES_LISTING sl,
SFCATEGORY_LISTING sfl WHERE sl.STORES_LISTING_ID =  sfl.LISTING_ID AND
sfl.CATEGORY_ID = '-1';
+--------------------+
| SUM(COUNTER_VALUE) |
+--------------------+
|               NULL |
+--------------------+
1 row in set (0.00 sec)




On 12/31/08, Baron Schwartz  wrote:
>
> Hi,
>
> On Wed, Dec 31, 2008 at 8:44 AM, Ananda Kumar  wrote:
> > Hi All,
> > I have enabled slow query log.
> > Generally this file will have SQLs which take more than long_query_time
> > to execute and also SQLs not using indexes.
> > But I see SQLs which do not come under the above condition.
> > I have set long_query_time to 1 sec.
> >
> > The query takes less than 1 sec to execute and also uses indexes, but
> > it's still recorded in the slow query log.
>
> What is the execution time shown in the slow query log?
>
> What is the query?
>
> If you can paste the whole log entry here, that would be good.
>
>
> --
> Baron Schwartz, Director of Consulting, Percona Inc.
> Our Blog: http://www.mysqlperformanceblog.com/
> Our Services: http://www.percona.com/services.html
>

