RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-31 Thread Robinson, Eric
 -----Original Message-----
 From: Vikas Shukla [mailto:myfriendvi...@gmail.com] 
 Sent: Thursday, May 30, 2013 7:19 PM
 To: Robinson, Eric; mysql@lists.mysql.com
 Subject: RE: Are There Slow Queries that Don't Show in the 
 Slow Query Logs?
 
 Hi,
 
 No, it does not represent the time from request to response, nor
 does it include the time spent waiting for locks to be released.
 The slow query log consists of SQL statements that took more 
 than long_query_time seconds to EXECUTE. The default value of 
 long_query_time is 10.
 The time to acquire the initial locks is not counted as 
 execution time.
 mysqld writes a statement to the slow query log after it has 
 been executed and after all locks have been released, so log 
 order might differ from execution order.
 
 Let's take an example: a query is received at 10:00:00 and waits
 for locks until 10:05:00; it then starts executing at 10:05:00 and
 completes at 10:05:24 (HH:MM:SS). So here it took 24 seconds to
 execute, and only those 24 seconds are counted.
 Since long_query_time is 10 by default, this query would be logged
 in the slow query log, as it took more than 10 seconds to execute.
 
 Sent from my Windows Phone

 From: Robinson, Eric
 Sent: 31-05-2013 03:48
 To: mysql@lists.mysql.com
 Subject: Are There Slow Queries that Don't Show in the Slow 
 Query Logs?
 As everyone knows, with MyISAM, queries and inserts can lock 
 tables and force other queries to wait in a queue. When that 
 happens, does the time shown in the slow query logs represent 
 the whole time from when the server received the request to 
 when the response was sent to the client? Or is the time a 
 query spends waiting for a table lock to be released omitted 
 from what is recorded in the slow query logs?
 
 --
 Eric Robinson
 
 

Very good answer, Vikas. Thank you for the clarification!

--Eric




Disclaimer - May 31, 2013 
This email and any files transmitted with it are confidential and intended 
solely for Vikas Shukla,mysql@lists.mysql.com. If you are not the named 
addressee you should not disseminate, distribute, copy or alter this email. Any 
views or opinions presented in this email are solely those of the author and 
might not represent those of Physicians' Managed Care or Physician Select 
Management. Warning: Although Physicians' Managed Care or Physician Select 
Management has taken reasonable precautions to ensure no viruses are present in 
this email, the company cannot accept responsibility for any loss or damage 
arising from the use of this email or attachments. 
This disclaimer was added by Policy Patrol: http://www.policypatrol.com/

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Are There Slow Queries that Don't Show in the Slow Query Logs?

2013-05-30 Thread Vikas Shukla
Hi,

No, it does not represent the time from request to response, nor does
it include the time spent waiting for locks to be released.
The slow query log consists of SQL statements that took more than
long_query_time seconds to EXECUTE. The default value of
long_query_time is 10.
The time to acquire the initial locks is not counted as execution time.
mysqld writes a statement to the slow query log after it has been
executed and after all locks have been released, so log order might
differ from execution order.

Let's take an example: a query is received at 10:00:00 and waits for
locks until 10:05:00; it then starts executing at 10:05:00 and
completes at 10:05:24 (HH:MM:SS). So here it took 24 seconds to
execute, and only those 24 seconds are counted.
Since long_query_time is 10 by default, this query would be logged in
the slow query log, as it took more than 10 seconds to execute.
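The arithmetic in that example can be checked in a few lines (the timestamps are the hypothetical ones from the example; the date is arbitrary):

```python
from datetime import datetime

# Hypothetical timestamps from the example: the query arrives at
# 10:00:00, waits for a table lock until 10:05:00, and finishes
# at 10:05:24.
received = datetime(2013, 5, 30, 10, 0, 0)
lock_granted = datetime(2013, 5, 30, 10, 5, 0)
completed = datetime(2013, 5, 30, 10, 5, 24)

wall_clock = (completed - received).total_seconds()      # what the client sees
execution = (completed - lock_granted).total_seconds()   # what the slow log measures

long_query_time = 10  # MySQL default, in seconds
is_logged = execution > long_query_time

print(wall_clock, execution, is_logged)  # 324.0 24.0 True
```

So the 5-minute lock wait never appears in the logged time; only the 24 execution seconds are compared against long_query_time.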

Sent from my Windows Phone

From: Robinson, Eric
Sent: 31-05-2013 03:48
To: mysql@lists.mysql.com
Subject: Are There Slow Queries that Don't Show in the Slow Query Logs?
As everyone knows, with MyISAM, queries and inserts can lock tables
and force other queries to wait in a queue. When that happens, does
the time shown in the slow query logs represent the whole time from
when the server received the request to when the response was sent to
the client? Or is the time a query spends waiting for a table lock to
be released omitted from what is recorded in the slow query logs?

--
Eric Robinson









Re: Slow queries / inserts InnoDB

2012-10-09 Thread Andrés Tello
You are forcing MySQL to do full table scans with the SUBSTR...

Use EXPLAIN to see that you aren't using any index.

Avoid using SUBSTR in the WHERE clause: split your data, index that
field, and run your query against that field.

That is why your query is so slow.

The slow inserts are down to your operating system and disk...


On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno 
aespinosamor...@gmail.com wrote:

 Hi.



 I have developed my first application with VB.NET at work (junior
 developer) and basically it reads several thousands of lines from X number
 of files, and inserts them into a mysql database.



 The application consists of several steps:

 1)  Dump all lines with all fields into a temp table.

 a.   This works fine. Optimal speed for the hardware we have.

 2)  Query the temp table to obtain certain data. I query by a unique
 ID (the isn field) and collect the IDs; if an ID matches my
 interests, I insert it into an ArrayList.

 a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
 SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
 COUNT(SUBSTR(ISN,2)) = 4

 b.  The isn is not unique per line, but per data (sms)

 c.   Once I have all isn on an arraylist, I do the following query:

i.
  SELECT
  select, my, fields,of,interest  FROM SMS.TEMPDATADUMP WHERE SUBSTR(ISN, 2)
 = '  isn  ' ORDER BY type LIMIT 1

 d.  To obtain some data. During the process I query around 10 times
 other table per ISN.

 e.  Here is the problem. If I have a few files to process (around
 3000-4000 lines in total, small array) these steps work fine, good
 speed. But if I have big files or a lot of files (more than 1 lines
 in total, big array), these steps are incredibly slow: one or two
 inserts per second, while in the other case inserts run at around
 800 per second.



 Our hardware is not optimized for database server, but I don’t have other
 choice. It is mostly a desktop computer

 Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



 I have tried some optimizations commented in mysqlperformance blog without
 success.

 Any way to optimize this?



 Thank you very much in advance.





 Adrián Espinosa Moreno.



RE: Slow queries / inserts InnoDB

2012-10-09 Thread Rick James
When creating the temp table, add another column, isn2, that is SUBSTR(ISN, 2).
Add
  INDEX(isn2, type)
Change the query to...
  isn2 = '  isn  ' ORDER BY type LIMIT 1

If that does not work, please provide clearer details, including
SHOW CREATE TABLE
SHOW TABLE STATUS
EXPLAIN SELECT (with substitutions filled in)
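A sketch of that derived-column idea, with SQLite standing in for MySQL and made-up column names (in MySQL, isn2 would be filled the same way at insert time):

```python
import sqlite3

# isn2 holds SUBSTR(ISN, 2) precomputed at insert time, so the WHERE
# clause can use the (isn2, type) index instead of forcing a full
# table scan the way WHERE SUBSTR(isn, 2) = ... does.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tempdatadump (isn TEXT, isn2 TEXT, type INTEGER, payload TEXT)")
con.execute("CREATE INDEX idx_isn2_type ON tempdatadump (isn2, type)")

rows = [("A1234", 2, "first"), ("A1234", 1, "second"), ("B9999", 1, "other")]
con.executemany(
    "INSERT INTO tempdatadump (isn, isn2, type, payload) VALUES (?, SUBSTR(?, 2), ?, ?)",
    [(isn, isn, t, p) for isn, t, p in rows],
)

# Equivalent of: ... WHERE isn2 = '1234' ORDER BY type LIMIT 1
row = con.execute(
    "SELECT payload FROM tempdatadump WHERE isn2 = ? ORDER BY type LIMIT 1",
    ("1234",),
).fetchone()
print(row[0])  # second  (type 1 sorts before type 2)
```

The composite index covers both the equality test and the ORDER BY, so the LIMIT 1 can stop after reading a single index entry.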

 -Original Message-
 From: Andrés Tello [mailto:mr.crip...@gmail.com]
 Sent: Tuesday, October 09, 2012 7:04 AM
 To: Adrián Espinosa Moreno
 Cc: mysql@lists.mysql.com
 Subject: Re: Slow queries / inserts InnoDB
 
 You are forcing MySQL to do full table scans with the SUBSTR...
 
 Use EXPLAIN to see that you aren't using any index.
 
 Avoid using SUBSTR in the WHERE clause: split your data, index that
 field, and run your query against that field.
 
 That is why your query is so slow.
 
 The slow inserts are down to your operating system and disk...
 
 




Re: Slow queries / inserts InnoDB

2012-10-09 Thread Cheng Xianming
Tello is right, Moreno! I suggest that if you want to query by
SUBSTR(ISN,2), you should create a field named something like isnInfo
and save SUBSTR(ISN,2) into that field when you insert. Don't forget
to create an index on that field.



On 2012-10-09 at 22:04, Andrés Tello wrote:

You are forcing MySQL to do full table scans with the SUBSTR...

Use EXPLAIN to see that you aren't using any index.

Avoid using SUBSTR in the WHERE clause: split your data, index that
field, and run your query against that field.

That is why your query is so slow.

The slow inserts are down to your operating system and disk...







Slow queries / inserts InnoDB

2012-10-08 Thread Adrián Espinosa Moreno
Hi.



I have developed my first application with VB.NET at work (junior
developer) and basically it reads several thousands of lines from X number
of files, and inserts them into a mysql database.



The application consists of several steps:

1)  Dump all lines with all fields into a temp table.

a.   This works fine. Optimal speed for the hardware we have.

2)  Query the temp table to obtain certain data. I query by a unique
ID (the isn field) and collect the IDs; if an ID matches my
interests, I insert it into an ArrayList.

a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
COUNT(SUBSTR(ISN,2)) = 4

b.  The isn is not unique per line, but per data (sms)

c.   Once I have all isn on an arraylist, I do the following query:

   i.  SELECT
 select, my, fields,of,interest  FROM SMS.TEMPDATADUMP WHERE SUBSTR(ISN, 2)
= '  isn  ' ORDER BY type LIMIT 1

d.  To obtain some data. During the process I query around 10 times
other table per ISN.

e.  Here is the problem. If I have a few files to process (around
3000-4000 lines in total, small array) these steps work fine, good
speed. But if I have big files or a lot of files (more than 1 lines
in total, big array), these steps are incredibly slow: one or two
inserts per second, while in the other case inserts run at around
800 per second.



Our hardware is not optimized for a database server, but I don't have
another choice. It is mostly a desktop computer:

Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



I have tried some of the optimizations suggested on the
mysqlperformance blog, without success.

Any way to optimize this?



Thank you very much in advance.





Adrián Espinosa Moreno.


log-slow-queries

2010-05-07 Thread Stephen Sunderlin

Can't get slow queries to log.  Does this not work with MyISAM?

*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created the file in /var/log/mysql/:

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!






Re: log-slow-queries

2010-05-07 Thread mos

At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:

Can't get slow queries to log.  Does this not work with MyISAM?


Sure it does. Have you tried:

slow_query_time = 1

Mike



*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created in file in /var/log/mysql/

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!






Re: log-slow-queries

2010-05-07 Thread mos

At 03:58 PM 5/7/2010, you wrote:

At 12:04 PM 5/7/2010, Stephen Sunderlin wrote:

Can't get slow queries to log.  Does this not work with MyISAM?


Sure it does. Have you tried:

slow_query_time = 1

Mike


Sorry, ignore that previous message. (Serves me right for trying to
remember it off the top of my head.)


I'm using (Windows):

general_log=0
log-output=FILE
log_queries_not_using_indexes=1
long_query_time=3
slow_query_log=1
slow_query_log_file=U:/mysql5.5/data/SLOWLOG.TXT

I assume you are outputting the slow query log to a text file and not to a 
table.


Mike



*snip*
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 1
*snip*

restarted mysqld  - no log.

Created in file in /var/log/mysql/

*snip*
-rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
*snip*

still not writing to the file

I've read
http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html
http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


looks pretty simple -  not sure what I'm missing.

Thanks!






Re: log-slow-queries

2010-05-07 Thread Anirudh Sundar
Hello Stephen,

Did you try this?

mysql> SHOW GLOBAL VARIABLES LIKE '%log_output%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+

If log_output is FILE, then slow queries will get logged to the log
file.
mysql> SET GLOBAL log_output = 'FILE';  -- if you find log_output set to TABLE or NONE

Let me know if you have any issues...

Cheers,
Anirudh Sundar


On Fri, May 7, 2010 at 10:34 PM, Stephen Sunderlin 
stephen.sunder...@verizon.net wrote:

 Can't get slow queries to log.  Does this not work with MyISAM?

 *snip*
 [mysqld]
 log-slow-queries = /var/log/mysql/mysql-slow.log
 long_query_time = 1
 *snip*

 restarted mysqld  - no log.

 Created in file in /var/log/mysql/

 *snip*
 -rwxr--r-- 1 mysql mysql 0 May  7 10:33 mysql-slow.log
 *snip*

 still not writing to the file

 I've read
 http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html

 http://www.mydigitallife.info/2007/01/22/enable-logging-of-slow-queries-slow-query-log-in-mysql-database/


 looks pretty simple -  not sure what I'm missing.

 Thanks!







Slow queries when using left join

2010-03-19 Thread Olav Mørkrid
Dear MySQL forum.

I have performance problems when using left join x combined with
where x.y is null, particularly when combining three tables this
way.

Please contact me by e-mail if you are familiar with these issues and
know how to eliminate slow queries.

I would really appreciate your help.




Re: Slow queries when using left join

2010-03-19 Thread Johan De Meersman
2010/3/19 Olav Mørkrid olav.mork...@gmail.com

 Dear MySQL forum.

 I have performance problems when using left join x combined with
 where x.y is null, particularly when combining three tables this
 way.


With a left join, particularly when you're using *is (not) null*, you can't
use index selection on your right table. That is, you're bound to do a
table scan on what is essentially the Cartesian product of your tables.

Every additional table only compounds the problem: 100x100 is 10,000, but
100x100x100 is 1,000,000.

Avoid left joins whenever possible - in some cases it's quicker to split up
the complex query and implement it in code with loops. Not always, though;
you'll have to apply some elbow grease to find the optimal solution.

The most recent example of this was a hierarchical lookup query in Drupal's
taxonomy module: the hierarchy table was left-joined to itself five times.
Execution time on an unloaded machine was 0.54 seconds. By doing individual
lookups in a code loop until I got to the top level, I replaced that query
with a maximum of five (and usually fewer) 0.00-second ones over an existing
covering index.
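A minimal sketch of that loop-instead-of-join approach (SQLite stands in for MySQL here, and the table layout is made up for illustration):

```python
import sqlite3

# A small parent-pointer hierarchy: 4 -> 3 -> 2 -> 1 -> root (0).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE term_hierarchy (tid INTEGER PRIMARY KEY, parent INTEGER)")
con.executemany("INSERT INTO term_hierarchy VALUES (?, ?)",
                [(1, 0), (2, 1), (3, 2), (4, 3)])

def ancestors(con, tid):
    """Walk up one parent at a time; each step is a cheap primary-key lookup,
    replacing one N-way self-left-join with at most N tiny queries."""
    path = []
    while tid:
        row = con.execute(
            "SELECT parent FROM term_hierarchy WHERE tid = ?", (tid,)
        ).fetchone()
        if row is None:
            break
        path.append(tid)
        tid = row[0]
    return path

print(ancestors(con, 4))  # [4, 3, 2, 1]
```

Each iteration hits the primary key, so the depth of the hierarchy bounds the number of lookups rather than the size of a joined product.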


Another thing - and maybe one you should look at first - is whether you can
add more selective where-clauses for your base table. That doesn't always
stop at the actual data you want, either. Another example from here: for a
radio station, there was a multiple-left-join query to display the last 20
played songs on the homepage. However, the playlist table keeps growing, so
I got the website people to agree that it's pretty unlikely that songs from
yesterday end up in those 20: we added an index on the play date and
selected on that. Boom, execution time down from 0.35 to 0.01 seconds. In
addition, killing off old playlist items would've been very beneficial, but
this was not an option due to business requirements. Shame, I love to
delete people's data :-D


And, of course, check if you have indexes on the major parts of your where
clause. Selectivity brings speed.
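The play-date trick above can be sketched like this (SQLite stands in for MySQL, and the table and column names are made up):

```python
import sqlite3

# Index the timestamp column and select only recent rows, so the query
# never touches yesterday's data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE playlist (song TEXT, played_at TEXT)")
con.execute("CREATE INDEX idx_played_at ON playlist (played_at)")
con.executemany("INSERT INTO playlist VALUES (?, ?)",
                [("old song", "2010-03-18 09:00:00"),
                 ("new song", "2010-03-19 11:00:00")])

# EXPLAIN QUERY PLAN is SQLite's rough analogue of MySQL's EXPLAIN:
# the plan should mention idx_played_at rather than a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT song FROM playlist "
    "WHERE played_at >= '2010-03-19 00:00:00' ORDER BY played_at DESC LIMIT 20"
).fetchall()
print(plan)
```

The range predicate makes the index usable for both filtering and the ORDER BY, which is what dropped the homepage query from 0.35 to 0.01 seconds in the anecdote above.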


I seem to have the order of obviousness in this mail wrong, though. Please
read it from bottom to top :-)


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


RE: slow queries not being logged

2010-02-23 Thread John Daisley



 From: machi...@rdc.co.za
 To: mysql@lists.mysql.com
 Subject: slow queries not being logged
 Date: Tue, 23 Feb 2010 09:59:13 +0200
 
 Good day all
 
  
 
 I hope you can assist me with this one...
 
  
 
 We have a client where the slow query log was disabled.
 

The slow query log lives on the server only. If you are saying you have
enabled the slow query log and the server's slow query log is empty, can
you post your my.cnf/my.ini file? Also make sure --long-query-time is set
appropriately.


  
 
 We noticed that the slow query amount (when viewing global
 status) has skyrocketed during the last 2 weeks going up to over 2 million
 (from 160 million queries).
 
  
 
 We wanted to look at these queries to see if it can be
 optimised to reduce the amount and went through the whole database restart
 routine to enable the slow query log again (they are running version 5.0 so
 had to restart).
 
  
 
  
 
 However, even though the slow query log is enabled, it is
 not logging the queries to the file specified.
 
  
 
 Can someone please assist in why this is not being done? I
 thought that it might be logging to a default filename but there is only one
 slow queries log file in the directory and it is empty.
 
  
 
 Checking the global status again, it showed 29 000 slow
 queries since this morning (3 hours ago) but nothing in the logs.
 
  
 
  
 
 Your help will be appreciated.
 
  
 
  
 
 Regards
 
  
 
  

RE: slow queries not being logged

2010-02-23 Thread Machiel Richards
Hi All

 

I found my problem and this was kind of a blonde moment for
me...

 

When configuring the log_slow_queries parameter, it was
configured as follows:  log_slow_queries=1

 

Thus the file being created is called 1, and the 1 does not
mean it is enabled.

 

I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (have to live with the filename 1 for the
time being.)
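For reference, the setting takes an optional file name rather than a 0/1 flag; a sketch of the corrected my.cnf lines (the path is an assumption, borrowed from the earlier thread):

```ini
[mysqld]
# log_slow_queries takes an optional file name, not a boolean;
# "log_slow_queries=1" creates a slow log literally named "1".
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time  = 2
```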

 

I did however find something interesting though, while
looking at the queries being logged.

 

The long_query_time is set to 2 (2 seconds, I am assuming);
however, all the queries being logged state that they ran for 0 seconds.

 

I am busy doing explain plans on some of them now but not
really sure what to look for yet. (Rather new to MySQL and hoping Google
will have some answers :-) )

 

 

Thank you

 

 

 

 

From: John Daisley [mailto:mg_s...@hotmail.com] 
Sent: 23 February 2010 10:24 AM
To: machi...@rdc.co.za; mysql@lists.mysql.com
Subject: RE: slow queries not being logged

 



 From: machi...@rdc.co.za
 To: mysql@lists.mysql.com
 Subject: slow queries not being logged
 Date: Tue, 23 Feb 2010 09:59:13 +0200
 
 Good day all
 
 
 
 I hope you can assist me with this one...
 
 
 
 We have a client where the slow query log was disabled.
 

Slow query log is on the server only.  If you are saying you have enabled
the slow query log and the servers query log is empty can you post your
my.cnf/my.ini file. Also make sure --long-query-time is set appropriately.


 
 
 We noticed that the slow query amount (when viewing global
 status) has skyrocketed during the last 2 weeks going up to over 2 million
 (from 160 million queries).
 
 
 
 We wanted to look at these queries to see if it can be
 optimised to reduce the amount and went through the whole database restart
 routine to enable the slow query log again (they are running version 5.0
so
 had to restart).
 
 
 
 
 
 However, even though the slow query log is enabled, it is
 not logging the queries to the file specified.
 
 
 
 Can someone please assist in why this is not being done? I
 thought that it might be logging to a default filename but there is only
one
 slow queries log file in the directory and it is empty.
 
 
 
 Checking the global status again, it showed 29 000 slow
 queries since this morning (3 hours ago) but nothing in the logs.
 
 
 
 
 
 Your help will be appreciated.
 
 
 
 
 
 Regards
 
 
 




Re: slow queries not being logged

2010-02-23 Thread Ananda Kumar
The slow query log will also contain SQL statements that are not using
indexes (doing full table scans) when log_queries_not_using_indexes is
enabled. Maybe those ZERO-second queries ran on small tables without
using indexes.

regards
anandkl

On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards machi...@rdc.co.za wrote:

 Hi All



I found my problem and this was kind of a blonde moment for
 me...



When configuring the log_slow_queries parameter, it was
 configured as follows:  log_slow_queries=1



This the file being created is called 1 and the 1 does not
 mean it is enabled.



I have fixed this now but need to wait for a gap to reboot
 again to have it set properly. (have to live with the filename 1 for the
 time being.)



I did however find something interesting though, while
 looking at the queries being logged.



The slow_query_time is set to 2 (2 seconds i am assuming)
 however all the queries being logged states that it ran for 0 seconds.



I am busy doing explain plans on some of them now but not
 really sure what to look for yet (Rather new to MySQL and hope google will
 have some answers J  )





 Thank you













Re: slow queries not being logged

2010-02-23 Thread Carsten Pedersen

You might want to read the comments on this posting:

http://www.bitbybit.dk/carsten/blog/?p=116

Several tools/methods for controlling and analyzing the slow query log
are suggested there.

Best,

/ Carsten






slow queries not being logged

2010-02-22 Thread Machiel Richards
Good day all

I hope you can assist me with this one...

We have a client where the slow query log was disabled.

We noticed that the slow query count (when viewing global
status) has skyrocketed during the last 2 weeks, going up to over 2 million
(out of 160 million queries).

We wanted to look at these queries to see if they can be
optimised to reduce the amount, and went through the whole database restart
routine to enable the slow query log again (they are running version 5.0 so
had to restart).

However, even though the slow query log is enabled, it is
not logging the queries to the file specified.

Can someone please assist with why this is not being done? I
thought that it might be logging to a default filename, but there is only one
slow query log file in the directory and it is empty.

Checking the global status again, it showed 29 000 slow
queries since this morning (3 hours ago) but nothing in the logs.

Your help will be appreciated.

Regards

 



Re: logging slow queries with time

2009-08-03 Thread Milan Andric
Getting the slow query data into the database was a breeze with
mk-query-digest, but now does anyone happen to know of scripts out
there that will generate an HTML page to view the output? This is
probably a better question for the maatkit mailing list, but I figure
someone here might have a link.

Thanks,

Milan


Re: logging slow queries with time

2009-08-02 Thread nuno . tavares
Hi Milan,

I can see many ways of accomplishing what you want:
* I'm almost sure mk-query-digest will allow you to do so;
* Either crop the slow query log for the desired timespan (that's a couple of
shell scripting commands) and run mk-query-digest against it;
* Set the query log file to a filename which is a link to /dev/null and set a
cron script to relink it to a real filename at noon and another to relink it to
/dev/null at 1pm - and then run the scripts you want.
* In newer versions you can log the slow queries to tables for analysis;
* Ultimately, you can also try a patched version of mysqldumpslow I was using
for some time, explained here:
http://gpshumano.blogs.dri.pt/2009/07/04/analysing-mysql-slow-queries/

In the last two options you'll be logging to a table, so it will be easy to
select a timed range of queries for consideration.

Hope that helps,
-NT
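
For the log-to-table option above, the mechanics in MySQL 5.1+ look roughly like this (the mysql.slow_log column names are from the 5.1 manual):

```sql
-- Send slow queries to the mysql.slow_log table instead of a file (5.1+).
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 1;

-- Later: pull only the noon-to-1pm window for analysis.
SELECT start_time, query_time, sql_text
  FROM mysql.slow_log
 WHERE start_time BETWEEN '2009-07-31 12:00:00' AND '2009-07-31 13:00:00';
```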


Re: logging slow queries with time

2009-08-02 Thread Milan Andric
Nuno, thanks for the tips. I think I will work on getting
mk-query-digest to log to a db table and run it periodically.  Sounds
like a very useful thing to have.

--
Milan


Re: logging slow queries with time

2009-08-01 Thread Milan Andric
Thanks for the quick replies, guys. I won't be pulling queries out of
Drupal anytime soon.  The optimizations I will do are minimal.  Sounds
like we might just have to live with mediocre performance for now.

I will definitely look further at maatkit though.  I actually ran
mk-query-digest on my slow log and its output is pretty amazing.  Lots
there, but it might take me a while to digest it all.  And it does have
a time span for the queries it spots, like:

# Time range 2009-07-31 06:46:48 to 2009-07-31 12:51:21

Which is pretty helpful, but ideally I would like to pass args to
mk-query-digest to list specific queries that were slow between noon
and 1pm, for example.  If anyone happens to know how to do that, it
would be helpful.  Otherwise I will keep grokking the maatkit tools.

Thanks,

Milan







logging slow queries with time

2009-07-31 Thread Milan Andric
Hello,

I'm serving a burly Drupal install and at some points throughout the
day the mysql threads go way up and iowait peaks.  I'm not sure which
is causing which but during this time the server is unresponsive.  I
would like to determine if there is a poorly optimized query causing
this.  I'm logging slow queries but is there a way to see when the
slow queries take place also?  I'd like to know what queries are being
processed during this window of poor response time, usually around
noon local time.

Thanks in advance,

--
Milan




RE: logging slow queries with time

2009-07-31 Thread Gavin Towey
Entries in the slow log have a timestamp.  You can read the file directly, but 
it's much easier to use a tool like maatkit for parsing the results of the log.

Try this:
http://www.maatkit.org/doc/mk-query-digest.html
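
For a quick look without maatkit, the timestamps can also be filtered directly with shell tools. This sketch assumes the classic `# Time: YYMMDD HH:MM:SS` header format; note that mysqld may omit the header when consecutive entries share a timestamp, which this simple filter does not handle:

```shell
# Sample entries in the usual slow-log header format (layout assumed;
# mysqld may omit '# Time:' when entries share the previous timestamp).
log='# Time: 090731 11:59:58
SELECT 1;
# Time: 090731 12:15:01
SELECT 2;
# Time: 090731 13:05:00
SELECT 3;'

# Keep only entries whose header time falls between noon and 1pm.
# Zero-padded HH:MM:SS strings compare correctly as plain strings.
noon=$(printf '%s\n' "$log" | awk '
  /^# Time: / { keep = ($4 >= "12:00:00" && $4 < "13:00:00") }
  keep
')
printf '%s\n' "$noon"   # prints the 12:15:01 entry only
```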

Regards,
Gavin Towey


-Original Message-
From: Milan Andric [mailto:mand...@gmail.com]
Sent: Friday, July 31, 2009 11:15 AM
To: mysql@lists.mysql.com
Subject: logging slow queries with time

Hello,

I'm serving a burly Drupal install and at some points throughout the
day the mysql threads go way up and iowait peaks.  I'm not sure which
is causing which but during this time the server is unresponsive.  I
would like to determine if there is a poorly optimized query causing
this.  I'm logging slow queries but is there a way to see when the
slow queries take place also?  I'd like to know what queries are being
processed during this window of poor response time, usually around
noon local time.

Thanks in advance,

--
Milan







Re: logging slow queries with time

2009-07-31 Thread Johan De Meersman
On Fri, Jul 31, 2009 at 8:14 PM, Milan Andric mand...@gmail.com wrote:
Hello,

I'm serving a burly Drupal install and at some points throughout the
day the mysql threads go way up and iowait peaks.  I'm not sure which
is causing which but during this time the server is unresponsive.  I
would like to determine if there is a poorly optimized query causing
this.

Drupal is a disaster :-)

I'm running a dozen drupals, and we've spent the last eight months heavily
optimizing every aspect. It's reasonably good now, but I'd still like to
move away from it.

Drupal wants to remain compatible with multiple databases, notably both
MySQL and Postgres, and for v7 also Oracle. This means that they can't
optimize their DB layer and use the fancy tricks of a single DB, but must do
things in ways that are compatible with all of them. Postgres doesn't have
autoincrement ? No autoincrement. Mysql doesn't have sequences ? No
sequences.

Some points:
 - Drupal uses its own internal unique ID system, not autoincrement, for a
number of things. This means that every insert that uses this locks all
other inserts that use this.
 - Drupal has the bothersome tendency to do full table locks around some
inserts. This means that even if you move to InnoDB, you'll still get full
table locks. We patched this out according to info found on drupal.org and
other sites.
 - Drupal's innate caching mechanisms are DB bound. Fun and dandy, but
rather irritating if as soon as a busy page expires, a hundred threads all
try to update the same table at the same time. We implemented Memcached.
Patches, at least partial ones, are on drupal.org, but I know we did more
and better in-house.
 - Drupal can cache fine for anonymous users, but once you're logged in,
it's nearly impossible to cache. We worked around this by basically making
'static' cacheable pages, and pulling any dynamic content in through AJAX.
 - Get rid of Drupal's views (in drupal, that is, not what is called a view
in a database). They generate horribly inefficient queries.
 - Full text search modules use MySQL's full text indices, but that's not
the best way of doing things. We implemented Sphinx search, there's an
existing drupal module for integration.

There's lots more that was done, but I can't provide all that info because
a) it's company internal, and b) because I'm not a developer and thus don't
know half of it :-)

Suffice it to say, I don't like drupal for high-traffic interactive sites.
Get away from it if you can.




  I'm logging slow queries but is there a way to see when the
 slow queries take place also?  I'd like to know what queries are being
 processed during this window of poor response time, usually around
 noon local time.


If you want more than just the slow queries, you'll have to enable the full
log. Be aware that this has noticeable impact on performance, and will spam
your disks. Log on different spindles if able, and monitor your disk usage
carefully - mysql stops functioning if it can't write logs.




-- 
Celsius is based on water temperature.
Fahrenheit is based on alcohol temperature.
Ergo, Fahrenheit is better than Celsius. QED.


Re: Slow Queries

2008-04-24 Thread Sebastian Mendel

Perrin Harkins schrieb:

On Wed, Apr 23, 2008 at 9:22 PM, D Hill [EMAIL PROTECTED] wrote:

 Can anyone shed some light if I should index wite_desc to speed things up?


No, since you don't use that column at all.  If you're not on MySQL 5,
upgrading to MySQL 5 will help.  Otherwise, your best bet is to
rewrite the query as UNION clauses with one of your WHERE conditions
in each.  I know it sounds crazy, but before MySQL 5 the use of
indexes with OR queries was not very good.


IMHO not in this case, cause it is just a simple WHERE field IN ()

--
Sebastian Mendel




Re: Slow Queries

2008-04-24 Thread Sebastian Mendel

D Hill schrieb:


I have something I am trying to resolve with an overabundant number of 
slow queries. Perhaps it is because some additional indexes are needed. 
As soon as I enabled the option 'log_queries_not_using_indexes = 1' in 
the configuration file, I started getting messages relating to the 
select query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


Did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
'$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
'[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');

You could also vary the index length of wite_what.

and what indexes do you have currently exactly?

--
Sebastian Mendel




Re: Slow Queries

2008-04-24 Thread Perrin Harkins
On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:
  IMHO not in this case, cause it is just a simple WHERE field IN ()

I'm pretty sure that just looks like a bunch of ORs to MySQL.  If it
didn't use the index with OR, it won't use it with IN.

What usually works is to change it to UNION:

SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1'
UNION
SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1.$oct2'
UNION
...etc.

- Perrin




Re: Slow Queries

2008-04-24 Thread D Hill

On Thu, 24 Apr 2008 at 08:58 +0200, [EMAIL PROTECTED] confabulated:


D Hill schrieb:


I have something I am trying to resolve with an over abundant number of 
slow queries. Perhaps it is because of some additional indexes needed. As 
soon as I enabled the option 'log_queries_not_using_indexes = 1' in the 
configuration file, I started getting messages relating to the select 
query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


 Did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
   '$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
   '[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');


MySQL version is 5.0.51. Sorry I forgot to mention that. I did change the 
query to what you have shown. I'll have to wait till the server comes 
under a load to tell. I have noted when the last slow query was logged for 
this and will see.



 You could also vary the index length of wite_what.


Right now the index is for the full length of the field (128). I just ran 
a query for the length of wite_what and the maximum length so far is 34. 
So, I will cut the index length down to 64.



and what indexes do you have currently exactly?


id -  is the primary and has an index type btree
wite_what - is a unique and has an index type of btree
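
A sketch of the index-length change being discussed; the DDL below assumes the unique index is simply named wite_what, and notes one caveat of prefix indexes:

```sql
-- Shrink the index to a 64-character prefix. Caveat: a UNIQUE prefix
-- index enforces uniqueness of the *prefix*, which is stricter than
-- uniqueness of the full 128-character value; use a plain (non-unique)
-- index instead if that is not acceptable.
ALTER TABLE witelist
  DROP INDEX wite_what,
  ADD UNIQUE INDEX wite_what (wite_what(64));
```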





Re: Slow Queries

2008-04-24 Thread D Hill

On Thu, 24 Apr 2008 at 10:16 -0400, [EMAIL PROTECTED] confabulated:


On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
[EMAIL PROTECTED] wrote:

 IMHO not in this case, cause it is just a simple WHERE field IN ()


I'm pretty sure IN just looks like a bunch of ORs to MySQL.  If it
didn't use the index with OR, it won't use it with IN.

What usually works is to change it to UNION:

SELECT wite_what, wite_desc FROM witelist
  WHERE wite_what = '$oct1'
UNION
SELECT wite_what, wite_desc FROM witelist
  WHERE wite_what = '$oct1.$oct2'
UNION
...etc.


I'm still new to MySQL. The input is greatly appreciated.

It took some minor thought and the documentation, but I understand what is 
going on with the UNION.





Re: Slow Queries

2008-04-24 Thread Sebastian Mendel

D Hill schrieb:

On Thu, 24 Apr 2008 at 08:58 +0200, [EMAIL PROTECTED] confabulated:


D Hill schrieb:


I have something I am trying to resolve with an overabundant number 
of slow queries. Perhaps it is because some additional indexes are 
needed. As soon as I enabled the option 
'log_queries_not_using_indexes = 1' in the configuration file, I 
started getting messages relating to this SELECT query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
   '$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
   '[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');


MySQL version is 5.0.51. Sorry I forgot to mention that. I did change 
the query to what you have shown. I'll have to wait till the server 
comes under a load to tell. I have noted when the last slow query was 
logged for this and will see.



you could also vary the index length of wite_what.


Right now the index covers the full length of the field (128). I just 
ran a query for the length of wite_what, and the maximum length so far is 
34. So, I will cut the index length down to 64.



and what indexes do you have currently exactly?


id - the primary key, with index type BTREE
wite_what - a unique key, with index type BTREE


so this all looks OK; I am not sure whether the query time includes the 
time the query spends waiting for locked tables ...


--
Sebastian




Slow Queries

2008-04-23 Thread D Hill


I have something I am trying to resolve with an over abundant number of 
slow queries. Perhaps it is because of some additional indexes needed. As 
soon as I enabled the option 'log_queries_not_using_indexes = 1' in the 
configuration file, I started getting messages relating to the select 
query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';

The table used is defined as such:

  mysql> DESC witelist;
  +-----------+------------------+------+-----+---------+----------------+
  | Field     | Type             | Null | Key | Default | Extra          |
  +-----------+------------------+------+-----+---------+----------------+
  | id        | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
  | wite_what | varchar(128)     | NO   | UNI | NULL    |                |
  | wite_desc | varchar(128)     | NO   |     | NULL    |                |
  +-----------+------------------+------+-----+---------+----------------+

Can anyone shed some light on whether I should index wite_desc to speed things up?

-d




Re: Slow Queries

2008-04-23 Thread Perrin Harkins
On Wed, Apr 23, 2008 at 9:22 PM, D Hill [EMAIL PROTECTED] wrote:
  Can anyone shed some light on whether I should index wite_desc to speed things up?

No, since you don't use that column at all.  If you're not on MySQL 5,
upgrading to MySQL 5 will help.  Otherwise, your best bet is to
rewrite the query as UNION clauses with one of your WHERE conditions
in each.  I know it sounds crazy, but before MySQL 5 the use of
indexes with OR queries was not very good.
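Written out in full, the rewrite Perrin suggests would look roughly like this (a sketch using the placeholders from the original post; since wite_what is a UNIQUE key, the branches cannot return duplicate rows, so UNION ALL can be used to skip the duplicate-elimination pass):

```sql
SELECT wite_what, wite_desc FROM witelist WHERE wite_what = '$oct1'
UNION ALL
SELECT wite_what, wite_desc FROM witelist WHERE wite_what = '$oct1.$oct2'
UNION ALL
SELECT wite_what, wite_desc FROM witelist WHERE wite_what = '$from_dom'
UNION ALL
SELECT wite_what, wite_desc FROM witelist WHERE wite_what = '$rcpt_dom';
```

Each branch is then a single unique-key lookup, which even the pre-5.0 optimizer handles well.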

- Perrin




Extremely slow queries on large database

2008-03-22 Thread m3m63r . 0f . 61a9 . p3ar1
Hello
Recently I changed my servers.
The old server is a Core2Duo E6600 with 4 GB RAM and a 320 GB SATA drive.
The new server is a dual Opteron 2.1 GHz with 4 GB RAM and a 73 GB 15k RPM SAS drive.

Now here comes the problem.
I generated a 3 GB forum backup (SQL format) and loaded it back into the new
server.
It took a lot of time, so my SSH connection was cut off, but I'm assuming it
went fine.
Loading a forum page now takes about 15 times as long as before;
generating one page takes about 14 seconds.

So I decided to dig in.
I tried installing phpBB 3 on a new database and it is pretty fast.

The old server and the new server use the same my.cnf file.
Any ideas?
Thanks
Yudai Yamagishi

P.S. The old server runs an older mysqld.




Re: Extremely slow queries on large database

2008-03-22 Thread m3m63r . 0f . 61a9 . p3ar1
Sorry, my question may have been unclear.
MySQL version is 5.0.37.
I tried slow query logging with long_query_time set to 1 second.
I came up with 3-second and 6-second queries, and both of them are SELECT queries.

When I searched Google, it said that I need to use paging, but since it was
working on the old server I don't really see a reason to start changing the
database driver of IPB.
Thanks,
Yudai Yamagishi




Slow queries

2006-08-17 Thread Jon Molin

Hi list

I have 5 tables:

words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
rows) with the keys:
PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)

phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
~11M rows) with the keys:
PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)

phrase_words (phrase_id, word_id) (has ~31M rows) with:
UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)

word_searches (word_id, search_date date, search hour char(2), amount
smallint, type char(8), location char(2)) with:
KEY `word_search` (`word_id`),
KEY `id_search` (`search_date`),
KEY `word_date` (`word_id`,`search_date`)

(and a similar for phrase_searches, these two tables are merge tables
with one table for each month, each table having 15-30M rows)

phrases are built of words, identified by phrase_words (these are
not human-language words and phrases but rather random bytes, some
of which are human readable).

Now, I'm trying to find out how many times words 1..n have been
searched for, and how many times phrases containing words 1..n have
been searched for.

These queries take a really long time to execute. First I select for the words:
explain SELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
amount FROM words w, word_searches ws WHERE
ws.word_id=w.word_id AND w.word IN ('p', 'xyz', 'zzz', 'abc') AND
ws.search_date >= '2006-07-17' AND ws.search_date <=
'2006-08-16' group by ws.word_id;
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                   | key         | key_len | ref                  | rows | Extra                                        |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
|  1 | SIMPLE      | w     | range | PRIMARY,word_ind                | word_ind    | 42      | NULL                 |    4 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | ws    | ref   | word_search,id_search,word_date | word_search | 4       | statistics.w.word_id |   15 | Using where                                  |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+

and then for phrases:
explain SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
   pw.word_id IN (966,1,1250,1741) AND
   pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
   GROUP by pw.word_id;
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
| id | select_type | table | type  | possible_keys                       | key           | key_len | ref                     | rows   | Extra       |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
|  1 | SIMPLE      | pw    | range | phrase_ind,word,phrase              | word          | 4       | NULL                    | 226847 | Using where |
|  1 | SIMPLE      | ps    | ref   | phrase_search,id_search,phrase_date | phrase_search | 4       | statistics.pw.phrase_id |     15 | Using where |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+

The queries take 40 seconds to several minutes on a dual Xeon 3 GHz with
4 GB RAM, running only MySQL.

Can someone see something I've done wrong? I have the same data in
flat files, with one word or phrase on each row and one file for each
day, and doing grep/sort/uniq -c across all those files is quicker on a
slower server with a lot of other processes and with the files NFS
mounted.

mysqladmin status doesn't show any slow queries:
Uptime: 1215323  Threads: 2  Questions: 2191970  Slow queries: 0
Opens: 0  Flush tables: 1  Open tables: 64  Queries per second avg:
1.804

Thanks in advance
/Jon




Re: Slow queries

2006-08-17 Thread Chris

Jon Molin wrote:

Hi list

I have 5 tables:

words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
rows) with the keys:
PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)

phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
~11M rows) with the keys:
PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)

phrase_words (phrase_id, word_id) (has ~31M rows) with:
UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)

word_searches (word_id, search_date date, search hour char(2), amount
smallint, type char(8), location char(2)) with:
KEY `word_search` (`word_id`),
KEY `id_search` (`search_date`),
KEY `word_date` (`word_id`,`search_date`)

(and a similar for phrase_searches, these two tables are merge tables
with one table for each month, each table having 15-30M rows)

phrases are built of  words identified by phrase_words (these are
not human language words and phrases but rather random bytes where
some are human readable).

Now, I'm trying to find out how many times has word 1..n been
searched for and how many times has phrases containing 1..n been
searched for?

These queries take a really long time to execute, first I select for the 
words:

explain SELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
amount FROM words w, word_searches ws WHERE
ws.word_id=w.word_id AND w.word IN ('p', 'xyz', 'zzz', 'abc') AND
 ws.search_date >= '2006-07-17' AND ws.search_date <=
'2006-08-16' group by ws.word_id;
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                   | key         | key_len | ref                  | rows | Extra                                        |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
|  1 | SIMPLE      | w     | range | PRIMARY,word_ind                | word_ind    | 42      | NULL                 |    4 | Using where; Using temporary; Using filesort |
|  1 | SIMPLE      | ws    | ref   | word_search,id_search,word_date | word_search | 4       | statistics.w.word_id |   15 | Using where                                  |
+----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+



and then for phrases:
explain SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
   pw.word_id IN (966,1,1250,1741) AND
   pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
   GROUP by pw.word_id;
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
| id | select_type | table | type  | possible_keys                       | key           | key_len | ref                     | rows   | Extra       |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
|  1 | SIMPLE      | pw    | range | phrase_ind,word,phrase              | word          | 4       | NULL                    | 226847 | Using where |
|  1 | SIMPLE      | ps    | ref   | phrase_search,id_search,phrase_date | phrase_search | 4       | statistics.pw.phrase_id |     15 | Using where |
+----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+


The problem is it's picking the word index, which apparently is 
returning 226,000+ rows.


Test this:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw IGNORE INDEX (word), phrase_searches ps WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(that should ignore the 'word' index and instead use the 'phrase' index).




Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris [EMAIL PROTECTED] wrote:

Jon Molin wrote:
 Hi list

 I have 5 tables:

 words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
 rows) with the keys:
 PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)

 phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
 ~11M rows) with the keys:
 PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)

 phrase_words (phrase_id, word_id) (has ~31M rows) with:
 UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
 KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)

 word_searches (word_id, search_date date, search hour char(2), amount
 smallint, type char(8), location char(2)) with:
 KEY `word_search` (`word_id`),
 KEY `id_search` (`search_date`),
 KEY `word_date` (`word_id`,`search_date`)

 (and a similar for phrase_searches, these two tables are merge tables
 with one table for each month, each table having 15-30M rows)

 phrases are built of  words identified by phrase_words (these are
 not human language words and phrases but rather random bytes where
 some are human readable).

 Now, I'm trying to find out how many times has word 1..n been
 searched for and how many times has phrases containing 1..n been
 searched for?

 These queries take a really long time to execute, first I select for the
 words:
 explain SELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
 amount FROM words w, word_searches ws WHERE
 ws.word_id=w.word_id AND w.word IN ('p', 'xyz', 'zzz', 'abc') AND
  ws.search_date >= '2006-07-17' AND ws.search_date <=
 '2006-08-16' group by ws.word_id;
 
 +----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
 | id | select_type | table | type  | possible_keys                   | key         | key_len | ref                  | rows | Extra                                        |
 +----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+
 |  1 | SIMPLE      | w     | range | PRIMARY,word_ind                | word_ind    | 42      | NULL                 |    4 | Using where; Using temporary; Using filesort |
 |  1 | SIMPLE      | ws    | ref   | word_search,id_search,word_date | word_search | 4       | statistics.w.word_id |   15 | Using where                                  |
 +----+-------------+-------+-------+---------------------------------+-------------+---------+----------------------+------+----------------------------------------------+


 and then for phrases:
 explain SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
 phrase_words pw, phrase_searches ps WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
 '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;
 
 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
 | id | select_type | table | type  | possible_keys                       | key           | key_len | ref                     | rows   | Extra       |
 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+
 |  1 | SIMPLE      | pw    | range | phrase_ind,word,phrase              | word          | 4       | NULL                    | 226847 | Using where |
 |  1 | SIMPLE      | ps    | ref   | phrase_search,id_search,phrase_date | phrase_search | 4       | statistics.pw.phrase_id |     15 | Using where |
 +----+-------------+-------+-------+-------------------------------------+---------------+---------+-------------------------+--------+-------------+

The problem is it's picking the word index, which apparently is
returning 226,000+ rows.

Test this:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw IGNORE INDEX (word), phrase_searches ps WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(that should ignore the 'word' index and instead use the 'phrase' index).



Unfortunately that didn't help; it leads to:
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                       | key       | key_len | ref  | rows    | Extra                                        |
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
|  1 | SIMPLE      | ps    | range | phrase_search,id_search,phrase_date | id_search | 3       | NULL | 3836930 | Using where; Using temporary; Using filesort |
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+

Re: Slow queries

2006-08-17 Thread Chris



Unfortunately that didn't help; it leads to:
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
| id | select_type | table | type  | possible_keys                       | key       | key_len | ref  | rows    | Extra                                        |
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
|  1 | SIMPLE      | ps    | range | phrase_search,id_search,phrase_date | id_search | 3       | NULL | 3836930 | Using where; Using temporary; Using filesort |
+----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+


Yeah, it's finding a lot more rows there, which isn't what you want, so the 
extra time isn't surprising.



Does rewriting the query to be an inner join help?

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw INNER JOIN phrase_searches ps ON
(ps.phrase_id=pw.phrase_id) WHERE
pw.word_id IN (966,1,1250,1741) AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

or even:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
pw.phrase_id = ps.phrase_id AND pw.word_id IN (966,1,1250,1741) AND
ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(which puts the join between the two tables first).


That would help with this discussion too: 
http://lists.mysql.com/mysql/201015


;)




Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris [EMAIL PROTECTED] wrote:


 Unfortunately that didn't help; it leads to:
 +----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
 | id | select_type | table | type  | possible_keys                       | key       | key_len | ref  | rows    | Extra                                        |
 +----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+
 |  1 | SIMPLE      | ps    | range | phrase_search,id_search,phrase_date | id_search | 3       | NULL | 3836930 | Using where; Using temporary; Using filesort |
 +----+-------------+-------+-------+-------------------------------------+-----------+---------+------+---------+----------------------------------------------+

Yeah, it's finding a lot more rows there, which isn't what you want, so the
extra time isn't surprising.


Does rewriting the query to be an inner join help?

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw INNER JOIN phrase_searches ps ON
(ps.phrase_id=pw.phrase_id) WHERE
pw.word_id IN (966,1,1250,1741) AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

or even:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
pw.phrase_id = ps.phrase_id AND pw.word_id IN (966,1,1250,1741) AND
ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(which puts the join between the two tables first).



That didn't help either. Same number of rows as my first join, and
about the same speed as well (only a few seconds' difference when
executed).



That would help with this discussion too:
http://lists.mysql.com/mysql/201015

;)


Yes, it'd be sweet if that MySQL internals guru revealed her/him-self
from the cloud of guruness and spoke the true way of doing it.

What pisses me off most is that grep -E '^word$| word$|^word | word '
2006/07/*/phrases | wc -l is so much quicker than the db :(




Re: Monitoring Slow Queries

2006-08-03 Thread Asif Lodhi

Thanks, Philip.
On 8/2/06, Philip Hallstrom [EMAIL PROTECTED] wrote:

..
http://hackmysql.com/mysqlsla
.

That's definitely of immense help.

--
Thanks a zillion,

Asif




Monitoring Slow Queries

2006-08-02 Thread Asif Lodhi

Hi,

Though I understand very well that this has been discussed many times
before, I don't have time to browse through the previous archives
and dig out the stuff I need.  So, guys, I would be thankful if you
could give me your valuable advice right now.

I just need to know

 1)  What parameters I need to set in my.cnf to
log slow queries so that they stick out conspicuously and get noticed,
and

 2) How I can find out from the log that MySQL
creates as a result of 1) as to
 which queries are running slow.

--
Thanks in advance,

Asif




Re: Monitoring Slow Queries

2006-08-02 Thread Duncan Hill
On Wednesday 02 August 2006 09:28, Asif Lodhi wrote:
 Hi,

 Though I understand very well that it has been discussed lots of time
 before but I don't have time to browse through the previous archives
 and dig out the stuff I need.  So, guys, I would be thankful if you
 could give me your valuable advice that I need right now.

 I just need to know

   1)  What parameters I need to set in my.cnf to
 log slow queries so that they stick out conspicuously and get noticed,
 and

http://dev.mysql.com/doc/mysql/search.php?version=4.1&q=slow+query+log&lang=en

   2) How I can find out from the log that MySQL
 creates as a result of 1) as to
   which queries are running slow.

http://dev.mysql.com/doc/mysql/search.php?version=4.1&q=slow+query+log&lang=en
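For 1), a minimal my.cnf sketch using the 4.1-era option names discussed in the linked docs (the log path is an example; in MySQL 5.1 and later the variables are slow_query_log and slow_query_log_file):

```ini
[mysqld]
log-slow-queries              = /var/log/mysql/mysql-slow.log
long_query_time               = 2     # seconds; the default is 10
log-queries-not-using-indexes         # optional, and can be very noisy
```

For 2), the bundled mysqldumpslow script summarizes the log; for example `mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log` prints the ten query patterns with the highest total time.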



Re: Monitoring Slow Queries

2006-08-02 Thread Philip Hallstrom

Though I understand very well that it has been discussed lots of time
before but I don't have time to browse through the previous archives
and dig out the stuff I need.  So, guys, I would be thankful if you
could give me your valuable advice that I need right now.

I just need to know

1)  What parameters I need to set in my.cnf to
log slow queries so that they stick out conspicuously and get noticed,
and

2) How I can find out from the log that MySQL
creates as a result of 1) as to
which queries are running slow.


mysqlsla is kind of handy...

http://hackmysql.com/mysqlsla

mysqlsla analyzes general, slow, and raw MySQL statement logs. Formerly 
called mysqlprofile, the new name reflects what the script really does: 
combined MySQL Statement Log Analysis. mysqlsla can read multiple MySQL 
general and slow logs (and logs containing raw SQL statements), combine 
them, then run various analyses on all the queries.





Slow Queries Log and Index-less queries

2005-08-30 Thread HMax
Hey list,

I have a question regarding the slow queries log and queries not using an index.

I have a small table with, say, 10 entries, like this:
ID |  Element
---+---------
1  |  One
2  |  Two
3  |  Three
4  |  Four
5  |  Five
6  |  Six
7  |  Seven
8  |  Eight
9  |  Nine
10 |  Ten

I want to get all those entries:
SELECT ID, Element FROM tblentries;

The problem is that this query, even if very fast, is logged in the slow query 
log because it does not use an index (I
activated this option for the slow query log). Wouldn't it be better if such 
queries were logged only when there
is a WHERE, ORDER BY, or GROUP BY/HAVING clause?

Also, is it better to do :
SELECT ID, Element FROM tblentries;
or
SELECT ID, Element FROM tblentries WHERE ID > 0;

(In this last case, it won't be logged in the slow query log because it uses an 
index...)

Thank you,
HMax






Re: Slow Queries Log and Index-less queries

2005-08-30 Thread Alexey Polyakov
 
 
 I have a question regarding the slow queries log, and queries not using 
 index.
 
 I have a small table, with say 10 entries, like that :
 ID | Element
 -
 1 | One
 2 | Two
 3 | Three
 4 | Four
 5 | Five
 6 | Six
 7 | Seven
 8 | Eight
 9 | Nine
 10 | Ten
 
 I want to get all those entries:
 SELECT ID, Element FROM tblentries;
 
 The problem is that this query, even if very fast, is logged in the slow 
 query log because it does not use an index (I
 activated this option for the slow query log). Wouldn't it be better if 
 such queries were logged only when there
 is a WHERE, ORDER BY, or GROUP BY/HAVING clause?

  The slow query log, especially with log-long-format turned on, isn't made for 
manual browsing anyway.
You'll have to use mysqldumpslow or some custom-made script to analyze it.
Queries that don't have 'where' are then easy to filter.
 
Also, is it better to do :
 SELECT ID, Element FROM tblentries;
 or
 SELECT ID, Element FROM tblentries WHERE ID > 0;
 
 (In this last case, it won't be logged in the slow query log because it 
 uses an index...)

 It won't be logged if it actually uses the index. In your example it won't 
use the index; a full table scan will be used instead, because the query 
optimizer is able to determine that all records match the WHERE condition.
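Alexey's point is easy to verify with EXPLAIN. A sketch against the table from this thread (the plan shown in the comment is the typical outcome, not a guarantee; the optimizer decides from table statistics):

```sql
-- On a 10-row table the optimizer will typically report type=ALL
-- (a full table scan) even though ID is indexed, because scanning
-- ten rows is cheaper than walking the index.
EXPLAIN SELECT ID, Element FROM tblentries WHERE ID > 0;
```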
   
 -- 
 Alexey Polyakov


Re[2]: Slow Queries Log and Index-less queries

2005-08-30 Thread HMax
Thanks Alexey,

This is enough explanation for me ;)

Cheers,
HMax


AP I have a question regarding the slow queries log, and queries not using 
index.

AP I have a small table, with say 10 entries, like that :
AP ID |  Element
AP -
AP 1  |  One
AP 2  |  Two
AP 3  |  Three
AP 4  |  Four
AP 5  |  Five
AP 6  |  Six
AP 7  |  Seven
AP 8  |  Eight
AP 9  |  Nine
AP 10 |  Ten

AP I want to get all those entries:
AP SELECT ID, Element FROM tblentries;

AP The problem is that this query, even if very fast, is logged in the slow 
query log because it does not use an index (I
AP activated this option for the slow query log). Wouldn't it be better if such 
queries were logged only when there
AP is a WHERE, ORDER BY, or GROUP BY/HAVING clause?

AP  
AP Slow query log, especially with long-long-format turned on, isn't made for 
manual browsing anyway.
AP You'll have to use mysqldumpslow or some custom-made script to analyze it.
AP Queries that don't have 'where' are easy to filter then.

AP  


AP Also, is it better to do :
AP SELECT ID, Element FROM tblentries;
AP or
AP SELECT ID, Element FROM tblentries WHERE ID > 0;

AP (In this last case, it won't be logged in the slow query log because it uses 
an index...)

AP  
AP It won't be logged if it actually uses the index. In your example it won't 
use the index; a full table scan will be
AP used instead, because the query optimizer is able to determine that all 
records match the WHERE condition. 
AP  
AP  
AP  





slow queries

2005-07-13 Thread tony
Hi,

I have a query that keeps coming up in my slow queries log. The whole
database is InnoDB and I'm using MySQL 4.1.11 on 64-bit Intel running Red
Hat Linux. There are fewer than 100 rows in the offending table at any one
time, and the server load rarely creeps above 0.5.

If I try to manually insert this row, I can't make it take longer than a
fraction of a second. I do get a few other random queries in the log
that take an age, but this one comes up a lot.

Does anyone know if there are any issues with ON DUPLICATE KEY UPDATE,
or with using VARCHAR fields as a primary key?

If not, how can I investigate this further? Table and query below.

Thanks in advance

Tony


CREATE TABLE `tblSessionData` (
  `sessionKey` varchar(32) NOT NULL default '',
  `data` text NOT NULL,
  `expiry` int(11) NOT NULL default '0',
  PRIMARY KEY  (`sessionKey`),
  KEY `expiry` (`expiry`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;




# Query_time: 26  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
INSERT INTO dbseInfrastructure.tblSessionData
  (sessionKey,expiry,data)
  VALUES
  ('1dbeb00777bf1cd20f8e12d044f4fa4c',
1121252767, 'intRightBanner|i:6;formRequiredFields|a:2:{s:8:\username
\;i:0;s:8:\password\;i:0;}formValidation|a:2:{s:8:\username
\;i:0;s:8:\password\;i:0;}')
  ON DUPLICATE KEY UPDATE
   expiry=1121252767
   ,data='intRightBanner|
i:6;formRequiredFields|a:2:{s:8:\username\;i:0;s:8:\password
\;i:0;}formValidation|a:2:{s:8:\username\;i:0;s:8:\password\;
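One hedged way to investigate further: capture server state from a second connection while one of these INSERTs is stalled (commands as they existed in MySQL 4.1; later versions spell the second one SHOW ENGINE INNODB STATUS):

```sql
SHOW FULL PROCESSLIST;   -- what is every other session doing right now?
SHOW INNODB STATUS;      -- the TRANSACTIONS section shows lock waits;
                         -- the log/IO sections can reveal fsync stalls
```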





Solution to slow queries (Resolved, kinda)

2005-05-12 Thread Paul Halliday
Hi,

First of all, thanks to everyone that provided pointers on this matter.
The route I chose to take was to make 2 tables. One is for cumulative
network stats; this table can be used for the weekly,monthly,yearly
reports. I also created a table for daily stats which will be dropped
at midnight each day.

So I whipped up a simple shell script that looks like this:

mysql -B --user=flow --password=hi flowdb -e "DROP TABLE traffic"
mysql -B --user=flow --password=hi flowdb < /home/flow/Code/create_flowdb.sql

and a crontab entry that looks like:

0   0   *   *   *   /home/flow/Code/db_rollover.sh > /dev/null 2>&1

the report entries look like this:

*/5 *   *   *   *   /home/flow/Reports/incident_report.tcl > /dev/null 2>&1

*/20    *   *   *   *   /home/flow/Reports/traffic_report.tcl > /dev/null 2>&1

Now looking at crons log from last night I see:

May 12 00:00:00 watcher cron[84039]: (flow) CMD
(/home/flow/Code/db_rollover.sh > /dev/null 2>&1)
May 12 00:00:00 watcher cron[84040]: (flow) CMD
(/home/flow/Reports/traffic_report.tcl > /dev/null 2>&1)
May 12 00:00:00 watcher cron[84041]: (flow) CMD
(/home/flow/Reports/incident_report.tcl > /dev/null 2>&1)

So the script did indeed run, yet it did not drop the table. The
script works fine from the command line, so I guess it could not drop the
table because the other programs were running at the same time? Those scripts
are just doing SELECTs, no updates, but there is the possibility that
the program which populates the db was running at the same time too.

Is there a way to force the table drop? (Without adding checks to the
shell script)
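A small hedged hardening of the SQL side: DROP TABLE normally waits for table locks held by running SELECTs rather than failing outright, so the silent failure may actually be an error message swallowed by the `/dev/null` redirection. Using IF EXISTS at least makes the drop idempotent (sketch; table name from the script above):

```sql
-- Does not error if a previous run already removed the table,
-- and will block (not fail) while readers hold locks on it.
DROP TABLE IF EXISTS traffic;
```

Capturing the cron job's stderr to a file instead of discarding it should show the real reason the drop was skipped.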


Thanks.

_
Paul Halliday
http://dp.penix.org

Diplomacy is the art of saying Nice doggie! till you can find a rock.




Re: Solution to slow queries

2005-05-11 Thread Roger Baklund
Paul Halliday wrote:
 srcaddr   VARCHAR(15),
 dstaddr   VARCHAR(15),
Are these IP addresses? If they are, consider using UNSIGNED INT columns
and the INET_NTOA() and INET_ATON() functions. It will save you a lot of
space, thus increasing the amount of data your hardware can handle.

They are indeed IP addresses. This information is gathered and input
into the db via a program called flow-export (it exports netflows). I
initially had the column as UNSIGNED INT but it would only pick up the
first octet, so I switched to VARCHAR.
This would happen if you did not use INET_ATON() to transform the IP into 
a single integer. When you select the data later, you use INET_NTOA() to 
transform the other way:

mysql> SELECT INET_ATON('127.0.0.1');
+------------------------+
| INET_ATON('127.0.0.1') |
+------------------------+
|             2130706433 |
+------------------------+
1 row in set (0.00 sec)

mysql> SELECT INET_NTOA(2130706433);
+-----------------------+
| INET_NTOA(2130706433) |
+-----------------------+
| 127.0.0.1             |
+-----------------------+
1 row in set (0.00 sec)
URL: http://dev.mysql.com/doc/mysql/en/miscellaneous-functions.html 
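Put together, the change Roger describes would look roughly like this (a sketch only; converting the existing VARCHAR rows in place would need an intermediate column, which is omitted here, and the example addresses are made up):

```sql
-- Store addresses as 32-bit integers instead of VARCHAR(15):
ALTER TABLE traffic
  MODIFY srcaddr INT UNSIGNED,
  MODIFY dstaddr INT UNSIGNED;

-- Write through INET_ATON() ...
INSERT INTO traffic (unix_secs, srcaddr, dstaddr)
VALUES (1115758800, INET_ATON('192.168.1.10'), INET_ATON('10.0.0.1'));

-- ... and read back through INET_NTOA():
SELECT INET_NTOA(srcaddr) AS src, INET_NTOA(dstaddr) AS dst FROM traffic;
```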
--
Roger


Solution to slow queries

2005-05-10 Thread Paul Halliday
Hello,

I am working on a database that deals with network statistics. I have
a program that generates web reports based on this data every ten
minutes.

The table layout looks something like this:

CREATE TABLE traffic
(
  unix_secs INT UNSIGNED NOT NULL,
  dpkts INT UNSIGNED NOT NULL DEFAULT 0,
  doctets   INT UNSIGNED NOT NULL DEFAULT 0,
  first INT UNSIGNED,
  last  INT UNSIGNED,
  srcaddr   VARCHAR(15),
  dstaddr   VARCHAR(15),
  srcport   SMALLINT UNSIGNED,
  dstport   SMALLINT UNSIGNED,
  prot  TINYINT UNSIGNED NOT NULL DEFAULT 0,
  tos   TINYINT UNSIGNED NOT NULL DEFAULT 0,
  tcp_flags TINYINT UNSIGNED NOT NULL DEFAULT 0,
  INDEX unix_secs (unix_secs),
  INDEX srcaddr (srcaddr),
  INDEX dstaddr (dstaddr),
  INDEX srcport (srcport),
  INDEX dstport (dstport)
);

Now, as time progresses the queries are getting slower and slower.
I know this is expected, so I am curious as to how I can have a main
table that has all traffic, so that I can do monthly/yearly reports,
and also have a daily table so that I can quickly run reports every
minute or so on that data.

I have read up a bit on merge tables (this is probably the answer) but
I am unsure as to how you trigger the changes, i.e. how do you do the
rollover every 24 hours?

Any thoughts, or a pointer in the right direction would be greatly appreciated. 


Thanks.
 
-- 
_
Paul Halliday
http://dp.penix.org

Diplomacy is the art of saying Nice doggie! till you can find a rock.




Re: Solution to slow queries

2005-05-10 Thread Frank Bax
At 02:22 PM 5/10/05, Paul Halliday wrote:
Now, as time progresses the queries are getting slower and slower.
I know this is expected,

I don't think so.  I thought that if the number of rows returned does not 
change and an index is properly used, then query time should not change 
significantly as the size of the database grows. 



Re: Solution to slow queries

2005-05-10 Thread James Nobis
Don't forget to run ANALYZE TABLE to refresh the statistics used by the
optimizer when choosing indexes.  Also, after any updates (on dynamic-row
tables, which yours is) or any deletes, run OPTIMIZE TABLE.
Quoting Paul Halliday [EMAIL PROTECTED]:
Hello,
I am working on a database that deals with network statistics. I have
a program that generates web reports based on this data every ten
minutes.
The table layout looks something like this:
CREATE TABLE traffic
(
 unix_secs INT UNSIGNED NOT NULL,
 dpkts INT UNSIGNED NOT NULL DEFAULT 0,
 doctets   INT UNSIGNED NOT NULL DEFAULT 0,
 first INT UNSIGNED,
 last  INT UNSIGNED,
 srcaddr   VARCHAR(15),
 dstaddr   VARCHAR(15),
 srcport   SMALLINT UNSIGNED,
 dstport   SMALLINT UNSIGNED,
 prot  TINYINT UNSIGNED NOT NULL DEFAULT 0,
 tos   TINYINT UNSIGNED NOT NULL DEFAULT 0,
 tcp_flags TINYINT UNSIGNED NOT NULL DEFAULT 0,
 INDEX unix_secs (unix_secs),
 INDEX srcaddr (srcaddr),
 INDEX dstaddr (dstaddr),
 INDEX srcport (srcport),
 INDEX dstport (dstport)
);
Now, as time progresses the queries are getting slower and slower.
I know this is expected, so I am curious as to how I can have a main
table that has all traffic, so that I can do monthly/yearly reports,
and  also have a daily table so that I can quickly do reports every
minute or so on that data.
I have read up a bit on merge tables (this is probably the answer) but
I am unsure as to how you trigger the changes. ie, how do you do the
rollover after every 24hours?
Any thoughts, or a pointer in the right direction would be greatly 
appreciated.

Thanks.
--
_
Paul Halliday
http://dp.penix.org
Diplomacy is the art of saying Nice doggie! till you can find a rock.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



James Nobis
Web Developer
Academic Superstore
223 W. Anderson Ln. Suite A110, Austin, TX 78752
Voice: (512) 450-1199 x453 Fax: (512) 450-0263
http://www.academicsuperstore.com


Re: Solution to slow queries

2005-05-10 Thread John McCaskey
On Tue, 2005-05-10 at 14:56 -0400, Frank Bax wrote:
 At 02:22 PM 5/10/05, Paul Halliday wrote:
 Now, as time progresses the queries are getting slower and slower.
 I know this is expected,
 
 
 I don't think so.  I thought that if the number of rows returned does not 
 change and an index is properly used, then query time should not change 
 significantly as size of database grows. 
 

True, for the appropriate definition of 'significantly'.  Also false,
for the appropriate definition of 'significantly'.  The indexes are
trees which must be searched; this is fairly fast and the time doesn't
grow linearly or anything like that, but the time does of course grow
with more rows.  So if the number of rows increases greatly, then a
noticeable increase in the time to search the index may occur.
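The growth John describes is roughly logarithmic for tree indexes, which is why it only becomes noticeable at very large row counts. A rough sketch (the fanout value here is purely illustrative, not an actual MyISAM key-block constant):

```python
import math

def btree_depth(rows, fanout=500):
    """Approximate pages touched per index lookup: ceil(log_fanout(rows)).
    fanout (keys per index page) is an assumed, illustrative value."""
    return max(1, math.ceil(math.log(rows, fanout)))

# Depth grows very slowly as the table grows:
for rows in (10**4, 10**6, 10**8):
    print(rows, btree_depth(rows))
```

Going from ten thousand to a hundred million rows adds only a level or two to the lookup, consistent with queries slowing gradually rather than linearly.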

John




RE: Solution to slow queries

2005-05-10 Thread TheRefUmp
I'm somewhat of a newbie on this database, but some observations:

As your table (and its indexes) grows, INSERTs will definitely slow down 
because of the indexes.

Consider MySQL's version of Oracle's partitioning, the MERGE TABLES 
feature. Just remember that if you change one table, all of them have to be 
rebuilt the same way. 

Consider compressing the tables (making them READ ONLY) with the MyISAM engine. 
The table currently being updated could not be compressed, since compressed 
MyISAM tables are read-only. 

 


Paul Halliday [EMAIL PROTECTED] wrote:

Hello,

I am working on a database that deals with network statistics. I have
a program that generates web reports based on this data every ten
minutes.

The table layout looks something like this:

CREATE TABLE traffic
(
  unix_secs INT UNSIGNED NOT NULL,
  dpkts INT UNSIGNED NOT NULL DEFAULT 0,
  doctets   INT UNSIGNED NOT NULL DEFAULT 0,
  first INT UNSIGNED,
  last  INT UNSIGNED,
  srcaddr   VARCHAR(15),
  dstaddr   VARCHAR(15),
  srcport   SMALLINT UNSIGNED,
  dstport   SMALLINT UNSIGNED,
  prot  TINYINT UNSIGNED NOT NULL DEFAULT 0,
  tos   TINYINT UNSIGNED NOT NULL DEFAULT 0,
  tcp_flags TINYINT UNSIGNED NOT NULL DEFAULT 0,
  INDEX unix_secs (unix_secs),
  INDEX srcaddr (srcaddr),
  INDEX dstaddr (dstaddr),
  INDEX srcport (srcport),
  INDEX dstport (dstport)
);

Now, as time progresses the queries are getting slower and slower.
I know this is expected, so I am curious as to how I can have a main
table that has all traffic, so that I can do monthly/yearly reports,
and  also have a daily table so that I can quickly do reports every
minute or so on that data.

I have read up a bit on merge tables (this is probably the answer) but
I am unsure as to how you trigger the changes. ie, how do you do the
rollover after every 24hours?

Any thoughts, or a pointer in the right direction would be greatly appreciated.


Thanks.

--
_
Paul Halliday
http://dp.penix.org

Diplomacy is the art of saying Nice doggie! till you can find a rock.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Solution to slow queries

2005-05-10 Thread mfatene
Hi,
you have to play with EXPLAIN to see which index is used in your queries.
Since you defined only single-column indexes, I think they are not used in
queries with multiple search criteria.

Consider adding indexes covering all the columns you search on, and
eventually dropping the unused ones so they don't slow down updates and
inserts.

MERGE (MyISAM) tables can help you partition the data on the relevant keys
used in the queries. But I'm not sure that is the right trade-off, because
you then lose InnoDB row-level locking, which would be better in your
situation.

You can also consider archiving old (unused) data. Finally, you can
prepare aggregation tables every day (or hour), for example, if your
reports can tolerate a gap in the data.



Mathias

Selon Paul Halliday [EMAIL PROTECTED]:

 Hello,

 I am working on a database that deals with network statistics. I have
 a program that generates web reports based on this data every ten
 minutes.

 The table layout looks something like this:

 CREATE TABLE traffic
 (
   unix_secs INT UNSIGNED NOT NULL,
   dpkts INT UNSIGNED NOT NULL DEFAULT 0,
   doctets   INT UNSIGNED NOT NULL DEFAULT 0,
   first INT UNSIGNED,
   last  INT UNSIGNED,
   srcaddr   VARCHAR(15),
   dstaddr   VARCHAR(15),
   srcport   SMALLINT UNSIGNED,
   dstport   SMALLINT UNSIGNED,
   prot  TINYINT UNSIGNED NOT NULL DEFAULT 0,
   tos   TINYINT UNSIGNED NOT NULL DEFAULT 0,
   tcp_flags TINYINT UNSIGNED NOT NULL DEFAULT 0,
   INDEX unix_secs (unix_secs),
   INDEX srcaddr (srcaddr),
   INDEX dstaddr (dstaddr),
   INDEX srcport (srcport),
   INDEX dstport (dstport)
 );

 Now, as time progresses the queries are getting slower and slower.
 I know this is expected, so I am curious as to how I can have a main
 table that has all traffic, so that I can do monthly/yearly reports,
 and  also have a daily table so that I can quickly do reports every
 minute or so on that data.

 I have read up a bit on merge tables (this is probably the answer) but
 I am unsure as to how you trigger the changes. ie, how do you do the
 rollover after every 24hours?

 Any thoughts, or a pointer in the right direction would be greatly
 appreciated.


 Thanks.

 --
 _
 Paul Halliday
 http://dp.penix.org

 Diplomacy is the art of saying Nice doggie! till you can find a rock.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]








Re: Solution to slow queries

2005-05-10 Thread Eric Jensen
We did something similar for our large statistic tables.  The older data
that no longer changes would get shipped off into a very fast read only
table with a cron job and then that is the table we would generate the
reports on.  Even with millions of entries it is incredibly fast.

Eric Jensen

[EMAIL PROTECTED] wrote:

Consider compressing the tables (Making them READ ONLY) with the MyISAM 
engine. The updating table (current one) would not be available since it could 
not be compressed. 
  






Re: Solution to slow queries

2005-05-10 Thread Roger Baklund
Paul Halliday wrote:
  srcaddr   VARCHAR(15),
  dstaddr   VARCHAR(15),
Are these IP addresses? If they are, consider using UNSIGNED INT columns 
and the INET_NTOA() and INET_ATON() functions. It will save you a lot of 
space, thus increasing the amount of data your hardware can handle.
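INET_ATON() packs the dotted quad into a single 32-bit big-endian integer, which is why a bare UNSIGNED INT assignment without it only keeps the first octet. The same mapping can be reproduced outside MySQL to sanity-check imported data (a sketch, not part of the original thread):

```python
import socket
import struct

def inet_aton_int(ip):
    """Dotted quad -> 32-bit int, the same packing MySQL's INET_ATON() uses."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def inet_ntoa_int(n):
    """Inverse mapping, matching INET_NTOA()."""
    return socket.inet_ntoa(struct.pack("!I", n))

print(inet_aton_int("127.0.0.1"))  # 2130706433
print(inet_ntoa_int(2130706433))   # 127.0.0.1
```

This matches the mysql client output shown earlier in the thread for 127.0.0.1.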

I have read up a bit on merge tables (this is probably the answer) but
I am unsure as to how you trigger the changes. ie, how do you do the
rollover after every 24hours?
You would have to program this yourself; there are no mechanisms for 
this in MySQL. It's pretty straightforward, though. Just use a shell 
script and cron, or the equivalent if you are on a non-unix platform.
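The cron-driven rollover Roger describes might look like the sketch below, which only generates the SQL to pipe into the mysql client. All names here (traffic_YYYYMMDD day tables, the traffic_all MERGE table, traffic_template, the netstats database) are hypothetical, and CREATE TABLE ... LIKE assumes MySQL 4.1 or later:

```python
#!/usr/bin/env python
"""Nightly MERGE-table rollover sketch. From cron, just after midnight:
    5 0 * * * rollover.py | mysql netstats
"""
import datetime

def rollover_sql(day, existing_tables):
    """Return SQL that creates the new day table and rebuilds the
    MERGE table's UNION list to include it."""
    new_table = "traffic_" + day.strftime("%Y%m%d")
    union = ", ".join(existing_tables + [new_table])
    return ("CREATE TABLE IF NOT EXISTS %s LIKE traffic_template;\n"
            "ALTER TABLE traffic_all UNION=(%s);" % (new_table, union))

if __name__ == "__main__":
    print(rollover_sql(datetime.date.today(),
                       ["traffic_20050509", "traffic_20050510"]))
```

A real script would also need to discover the existing day tables (e.g. via SHOW TABLES) instead of hard-coding them, and all the underlying tables must be identical MyISAM tables for MERGE to work.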

--
Roger


Re: Solution to slow queries

2005-05-10 Thread Paul Halliday
On 5/10/05, Roger Baklund [EMAIL PROTECTED] wrote:
 Paul Halliday wrote:
srcaddr   VARCHAR(15),
dstaddr   VARCHAR(15),
 
 Are these ip-adresses? If they are, consider using UNSIGNED INT columns
 and the INET_NTOA() and INET_ATON() funtions. It will save you a lot of
 space, thus increase the amount of data your hw can handle.

They are indeed IP addresses. This information is gathered and inserted
into the db via a program called flow-export (export netflows). I
initially had the column as UNSIGNED INT but it would only pick up the
first octet, so I switched to VARCHAR.


 
  I have read up a bit on merge tables (this is probably the answer) but
  I am unsure as to how you trigger the changes. ie, how do you do the
  rollover after every 24hours?
 
 You would have to program this yourself, there are no mechanisms for
 this in MySQL. It's pretty straight forward, though. Just use a shell
 script and cron or the equivalent if you are on a non-unix platform.
 
 --
 Roger
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
_
Paul Halliday
http://dp.penix.org

Diplomacy is the art of saying Nice doggie! till you can find a rock.




Re: Slow queries, why?

2005-05-05 Thread Gleb Paharenko
Hello.

There could be a lot of reasons for such a delay. First, you should
switch to bulk inserts and perform all operations as a single
transaction. Avoid using the autoextending or per-table tablespaces.
Are you able to upgrade? There could be some performance improvements
in the newer versions.





[EMAIL PROTECTED] wrote:

  Hi,

  I have an interesting problem: up to 20k rows are inserted in 20
  min., but 39k took 3.5 hours. Could you please help me with this?
  What are all the possible scenarios which lead to this kind of
  problem? Is there any fine-tuning mechanism in MySQL 4.0.23 with
  InnoDB?

  Please help me with this, it is very urgent.

  Thanks,
  Narasimha

 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







FW: Slow queries, why?

2005-05-05 Thread lakshmi.narasimharao

Hi,

  Thank you. I have a doubt about one equation you mentioned:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size +
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) +
max_connections*2MB

Does it apply to the mysqld server as a whole, or per mysql client session?

Could you please confirm?


Thanks,
Narasimha

From: David Griffiths [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 04, 2005 10:19 PM
To: Gleb Paharenko
Cc: mysql@lists.mysql.com
Subject: Re: Slow queries, why?

Yes, indexes slow down inserts (or updates that change the value of a
column that is indexed).

Also, remember that MySQL only uses one index per table in a query.
So if there are some columns in your table that are indexed, but,

1) Have poor cardinality (number of distinct values - low cardinality
means there aren't many distinct values)
2) Are only used in a where clause with another column that has good
cardinality

then they are an excellent candidate for removal.

While EXPLAIN is great for queries, it won't help much with an insert;

it might be useful for figuring out what indexes are used, and which
ones aren't.

Use show innodb status to get an idea of what's going on (Gleb
suggested it in the link to the innodb monitor).

You should also post the relevant parts of your my.cnf file; have you
seen this equation before:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size +
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) +
max_connections*2MB

Use it to calculate how much memory you are using.

Finally, read up on phantom reads:
http://sunsite.mff.cuni.cz/MIRRORS/ftp.mysql.com/doc/en/InnoDB_Next-key_
locking.html

This might be what's happening.

David


Gleb Paharenko wrote:

 Hello.

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

 Use EXPLAIN to determine how efficient your indexes are. Using a lot of
 keys could slow down the INSERT operations but speed up the SELECTs.
 InnoDB monitors might be helpful in your case as well. See:
   http://dev.mysql.com/doc/mysql/en/explain.html
   http://dev.mysql.com/doc/mysql/en/innodb-monitor.html

 Joseph Cochran [EMAIL PROTECTED] wrote:

  So here's my situation: we have a database that has a table of about 5
  million rows. To put a new row into the table, I do an INSERT ...
  SELECT, pulling data from one row in the table to seed the data for
  the new row. When there are no active connections to the DB other than
  the one making the INSERT, it runs like a charm. But during normal
  daytime operation, when we run around 50 connections (most sleeping at
  any one time), it takes up to two minutes to do, and ends up locking
  any other inserts or updates against that table for the entire time.

  I'll get into more specifics if they're required, but I wanted to ask
  in general if MySQL has tools to diagnose this, or if anyone has had
  general situations like this. In SQL Server (which is where I have
  most of my experience) I could use the trace tool and the Query
  Analyzer to tell what the execution plan for the query was and thus
  what's stalling it (an index gone bad, a weird locking situation,
  etc).

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

  -- Joe





Confidentiality Notice

The information contained in this electronic message and any attachments to 
this message are intended for the exclusive use of the addressee(s) and may 
contain confidential or privileged information. If you are not the intended 
recipient, please notify the sender at Wipro or [EMAIL PROTECTED] immediately 
and destroy all copies of this message and any attachments.




Re: Slow queries, why?

2005-05-04 Thread Gleb Paharenko
Hello.

 We're running MySQL 4.11 on a machine with 2GB memory, the table is
 InnoDB with a compound primary key, and additional indexes on all rows
 with searchable options in the API. Any generic advice or admin tools
 would be great.

Use EXPLAIN to determine how efficient your indexes are. Using a lot of
keys could slow down the INSERT operations but speed up the SELECTs.
InnoDB monitors might be helpful in your case as well. See:
  http://dev.mysql.com/doc/mysql/en/explain.html
  http://dev.mysql.com/doc/mysql/en/innodb-monitor.html







Joseph Cochran [EMAIL PROTECTED] wrote:

 So here's my situation: we have a database that has a table of about 5
 million rows. To put a new row into the table, I do an INSERT ...
 SELECT, pulling data from one row in the table to seed the data for
 the new row. When there are no active connections to the DB other than
 the one making the INSERT, it runs like a charm. But during normal
 daytime operation, when we run around 50 connections (most sleeping at
 any one time), it takes up to two minutes to do, and ends up locking
 any other inserts or updates against that table for the entire time.

 I'll get into more specifics if they're required, but I wanted to ask
 in general if MySQL has tools to diagnose this, or if anyone has had
 general situations like this. In SQL Server (which is where I have
 most of my experience) I could use the trace tool and the Query
 Analyzer to tell what the execution plan for the query was and thus
 what's stalling it (an index gone bad, a weird locking situation,
 etc).

 We're running MySQL 4.11 on a machine with 2GB memory, the table is
 InnoDB with a compound primary key, and additional indexes on all rows
 with searchable options in the API. Any generic advice or admin tools
 would be great.

 -- Joe

 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com







RE: Slow queries, why?

2005-05-04 Thread lakshmi.narasimharao

Hi,

I have an interesting problem: up to 20k rows are inserted in 20
min., but 39k took 3.5 hours. Could you please help me with this?
What are all the possible scenarios which lead to this kind of
problem? Is there any fine-tuning mechanism in MySQL 4.0.23 with
InnoDB?

Please help me with this, it is very urgent.

Thanks,
Narasimha

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 04, 2005 1:11 PM
To: mysql@lists.mysql.com
Subject: Re: Slow queries, why?

Hello.

 We're running MySQL 4.11 on a machine with 2GB memory, the table is
 InnoDB with a compound primary key, and additional indexes on all rows
 with searchable options in the API. Any generic advice or admin tools
 would be great.

Use EXPLAIN to determine how efficient your indexes are. Using a lot of
keys could slow down the INSERT operations but speed up the SELECTs.
InnoDB monitors might be helpful in your case as well. See:
  http://dev.mysql.com/doc/mysql/en/explain.html
  http://dev.mysql.com/doc/mysql/en/innodb-monitor.html



Joseph Cochran [EMAIL PROTECTED] wrote:
 So here's my situation: we have a database that has a table of about 5
 million rows. To put a new row into the table, I do an INSERT ...
 SELECT, pulling data from one row in the table to seed the data for
 the new row. When there are no active connections to the DB other than
 the one making the INSERT, it runs like a charm. But during normal
 daytime operation, when we run around 50 connections (most sleeping at
 any one time), it takes up to two minutes to do, and ends up locking
 any other inserts or updates against that table for the entire time.

 I'll get into more specifics if they're required, but I wanted to ask
 in general if MySQL has tools to diagnose this, or if anyone has had
 general situations like this. In SQL Server (which is where I have
 most of my experience) I could use the trace tool and the Query
 Analyzer to tell what the execution plan for the query was and thus
 what's stalling it (an index gone bad, a weird locking situation,
 etc).

 We're running MySQL 4.11 on a machine with 2GB memory, the table is
 InnoDB with a compound primary key, and additional indexes on all rows
 with searchable options in the API. Any generic advice or admin tools
 would be great.

 -- Joe



--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   ___/   www.mysql.com












Re: Slow queries, why?

2005-05-04 Thread David Griffiths
Yes, indexes slow down inserts (or updates that change the value of a 
column that is indexed).

Also, remember that MySQL only uses one index per table in a query. 
So if there are some columns in your table that are indexed, but,

1) Have poor cardinality (number of distinct values - low cardinality 
means there aren't many distinct values)
2) Are only used in a where clause with another column that has good 
cardinality

then they are an excellent candidate for removal.
While EXPLAIN is great for queries, it won't help much with an insert; 
it might be useful for figuring out what indexes are used, and which 
ones aren't.

Use show innodb status to get an idea of what's going on (Gleb 
suggested it in the link to the innodb monitor).

You should also post the relevant parts of your my.cnf file; have you 
seen this equation before:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

Use it to calculate how much memory you are using.
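Plugged into that equation, illustrative settings (the poster's actual my.cnf values were never shown, so these numbers are assumptions) give a worst-case figure like this:

```python
MB = 1024 * 1024

# Example my.cnf values only -- substitute your own:
innodb_buffer_pool_size = 512 * MB
key_buffer_size         = 64 * MB
max_connections         = 100
sort_buffer_size        = 2 * MB
read_buffer_size        = 128 * 1024
binlog_cache_size       = 32 * 1024

# David's equation: global buffers + per-connection buffers
# + ~2MB per-thread overhead per connection.
per_connection = sort_buffer_size + read_buffer_size + binlog_cache_size
total = (innodb_buffer_pool_size + key_buffer_size
         + max_connections * per_connection
         + max_connections * 2 * MB)
print("worst-case memory: %d MB" % (total // MB))  # prints: worst-case memory: 991 MB
```

The point of the exercise is that per-connection buffers multiply by max_connections, so a 2GB box can be overcommitted long before the global buffers alone look dangerous.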
Finally, read up on phantom reads: 
http://sunsite.mff.cuni.cz/MIRRORS/ftp.mysql.com/doc/en/InnoDB_Next-key_locking.html

This might be what's happening.
David
Gleb Paharenko wrote:

 Hello.

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

 Use EXPLAIN to determine how efficient your indexes are. Using a lot of
 keys could slow down the INSERT operations but speed up the SELECTs.
 InnoDB monitors might be helpful in your case as well. See:
   http://dev.mysql.com/doc/mysql/en/explain.html
   http://dev.mysql.com/doc/mysql/en/innodb-monitor.html

 Joseph Cochran [EMAIL PROTECTED] wrote:

  So here's my situation: we have a database that has a table of about 5
  million rows. To put a new row into the table, I do an INSERT ...
  SELECT, pulling data from one row in the table to seed the data for
  the new row. When there are no active connections to the DB other than
  the one making the INSERT, it runs like a charm. But during normal
  daytime operation, when we run around 50 connections (most sleeping at
  any one time), it takes up to two minutes to do, and ends up locking
  any other inserts or updates against that table for the entire time.

  I'll get into more specifics if they're required, but I wanted to ask
  in general if MySQL has tools to diagnose this, or if anyone has had
  general situations like this. In SQL Server (which is where I have
  most of my experience) I could use the trace tool and the Query
  Analyzer to tell what the execution plan for the query was and thus
  what's stalling it (an index gone bad, a weird locking situation,
  etc).

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

  -- Joe



Re: Slow queries, why?

2005-05-04 Thread Joseph Cochran
Thanks! Explain and InnoDB monitor were exactly what I needed to
diagnose and fix the problem! In case you were curious, the issue was
that the statement I was expecting to run was not the statement that
was running, but the first hundred and some-odd characters in both
were the same. Using the monitor I was able to see that the wrong
thing was running.

Some SELECTs are still taking longer than they should, but I have some
new tools at my disposal, which makes me very happy.

-- Joe

On 5/4/05, Gleb Paharenko [EMAIL PROTECTED] wrote:
 Hello.

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

 Use EXPLAIN to determine how efficient your indexes are. Using a lot of
 keys could slow down the INSERT operations but speed up the SELECTs.
 InnoDB monitors might be helpful in your case as well. See:
   http://dev.mysql.com/doc/mysql/en/explain.html
   http://dev.mysql.com/doc/mysql/en/innodb-monitor.html

 Joseph Cochran [EMAIL PROTECTED] wrote:

  So here's my situation: we have a database that has a table of about 5
  million rows. To put a new row into the table, I do an INSERT ...
  SELECT, pulling data from one row in the table to seed the data for
  the new row. When there are no active connections to the DB other than
  the one making the INSERT, it runs like a charm. But during normal
  daytime operation, when we run around 50 connections (most sleeping at
  any one time), it takes up to two minutes to do, and ends up locking
  any other inserts or updates against that table for the entire time.

  I'll get into more specifics if they're required, but I wanted to ask
  in general if MySQL has tools to diagnose this, or if anyone has had
  general situations like this. In SQL Server (which is where I have
  most of my experience) I could use the trace tool and the Query
  Analyzer to tell what the execution plan for the query was and thus
  what's stalling it (an index gone bad, a weird locking situation,
  etc).

  We're running MySQL 4.11 on a machine with 2GB memory, the table is
  InnoDB with a compound primary key, and additional indexes on all rows
  with searchable options in the API. Any generic advice or admin tools
  would be great.

  -- Joe

 --
 For technical support contracts, goto https://order.mysql.com/?ref=ensita
 This email is sponsored by Ensita.NET http://www.ensita.net/
 Gleb Paharenko, [EMAIL PROTECTED], MySQL AB / Ensita.NET, www.mysql.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 





Slow queries, why?

2005-05-03 Thread Joseph Cochran
So here's my situation: we have a database that has a table of about 5
million rows. To put a new row into the table, I do an INSERT ...
SELECT, pulling data from one row in the table to seed the data for
the new row. When there are no active connections to the DB other than
the one making the INSERT, it runs like a charm. But during normal
daytime operation, when we run around 50 connections (most sleeping at
any one time), it takes up to two minutes to do, and ends up locking
any other inserts or updates against that table for the entire time.

I'll get into more specifics if they're required, but I wanted to ask
in general if MySQL has tools to diagnose this, or if anyone has had
general situations like this. In SQL Server (which is where I have
most of my experience) I could use the trace tool and the Query
Analyzer to tell what the execution plan for the query was and thus
what's stalling it (an index gone bad, a weird locking situation,
etc).

We're running MySQL 4.11 on a machine with 2GB memory; the table is
InnoDB with a compound primary key and additional indexes on all columns
that are searchable through the API. Any generic advice or admin tools
would be great.

-- Joe
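
As a rough sketch, MySQL's counterparts to SQL Server's trace tool and Query
Analyzer here would be SHOW PROCESSLIST (to see what a stalled connection is
waiting on) and EXPLAIN (for the plan of the SELECT half of the statement).
The table and column names below are made up for illustration:

```sql
-- See what each connection is doing; a long-running 'Locked' or lock-wait
-- state points at contention rather than a bad plan
SHOW FULL PROCESSLIST;

-- Inspect the plan for the SELECT feeding the INSERT ... SELECT
-- (hypothetical table/column names)
EXPLAIN SELECT * FROM orders WHERE order_id = 12345;

-- InnoDB lock and transaction details on MySQL 4.x
SHOW INNODB STATUS;
```

If EXPLAIN reports type: ALL (a full table scan) on a large table, a missing
or unusable index is the usual suspect.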

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-03 Thread Scott Haneda
on 5/3/05 7:25 PM, Joseph Cochran at [EMAIL PROTECTED] wrote:

 So here's my situation: we have a database that has a table of about 5
 million rows. To put a new row into the table, I do an INSERT ...
 SELECT, pulling data from one row in the table to seed the data for
 the new row. When there are no active connections to the DB other than
 the one making the INSERT, it runs like a charm. But during normal
 daytime operation, when we run around 50 connections (most sleeping at
 any one time), it takes up to two minutes to do, and ends up locking
 any other inserts or updates against that table for the entire time.

Since you are pulling data from only one row, you may be a prime candidate
for http://dev.mysql.com/doc/mysql/en/select-into-statement.html which, as
far as I can tell, is more efficient since it stores the results in variables.

I am really pretty new to all this, so I'm just trying to help where I can,
but you may find that selecting and then inserting at the application-logic
level performs better for you as well.

I am not sure what goes on behind the scenes in an INSERT ... SELECT. From
what I have read, inserting a large number of rows that way is faster than
selecting them by hand, but for one or a few rows it is not as optimal.

 
 I'll get into more specifics if they're required, but I wanted to ask
 in general if MySQL has tools to diagnose this, or if anyone has had
 general situations like this. In SQL Server (which is where I have
 most of my experience) I could use the trace tool and the Query
 Analyzer to tell what the execution plan for the query was and thus
 what's stalling it (an index gone bad, a weird locking situation,
 etc).

Take a look at:
http://dev.mysql.com/doc/mysql/en/slow-query-log.html
Though I have not had to use it yet, it may get you where you need to be.
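
A sketch of the SELECT ... INTO approach mentioned above (the table and
column names here are invented for illustration, not taken from the thread):

```sql
-- Fetch the seed row's values into user variables...
SELECT col_a, col_b INTO @a, @b FROM items WHERE item_id = 42;

-- ...then insert the new row from those variables, which keeps the
-- SELECT and the INSERT as two short statements instead of one
-- long-running INSERT ... SELECT holding locks on the table
INSERT INTO items (col_a, col_b) VALUES (@a, @b);
```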
-- 
-
Scott Haneda                Tel: 415.898.2602
http://www.newgeo.com       Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slow queries only the first time

2005-03-10 Thread Bob O'Neill
Hello.  I am wondering why some of my queries are slow on the first run, but
speedy on subsequent runs.  They are not being query cached, as I have
query_cache_type set to DEMAND.  Is it something as simple as pulling the
data into RAM from disk, or is there something else going on?  Here's a
simple example:

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (3.60 sec)

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (0.92 sec)

mysql> show variables like 'query_cache_type';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_type | DEMAND |
+------------------+--------+
1 row in set (0.00 sec)

I am running MySQL 4.1.10 with InnoDB on RHEL 2.1 (kernel
2.4.9-e.49enterprise). Binaries are Linux x86 glibc static gcc RPMs from
mysql.com.

Thanks,
-Bob


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries only the first time

2005-03-10 Thread Homam S.A.
Most likely it's the OS cache keeping those disk segments in memory. With
InnoDB, MySQL also caches data pages in its own buffer pool (sized by the
buffer pool size setting) in addition to the OS cache.

If you're running on a Windows machine, you can easily tell what's going on
by opening up Performance Monitor and watching Pages/Sec and Disk Reads/Sec.
If these two counters are low on the subsequent queries (and they should be,
unless you're tight on physical memory), then the data is cached.




--- Bob O'Neill [EMAIL PROTECTED] wrote:
 [snip]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries only the first time

2005-03-10 Thread Michael Stassen
On Mar 10, 2005, at 11:20 AM, Bob O'Neill wrote:
Hello.  I am wondering why some of my queries are slow on the first run, but
speedy on subsequent runs.  They are not being query cached, as I have
query_cache_type set to DEMAND.  Is it something as simple as pulling the
data into RAM from disk, or is there something else going on?  Here's a
simple example:

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (3.60 sec)

A table scan.

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (0.92 sec)

Another table scan, but now some or all of the table is cached in RAM
(disk cache) by the OS (not by MySQL).

mysql> show variables like 'query_cache_type';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_type | DEMAND |
+------------------+--------+
1 row in set (0.00 sec)
You could probably verify that the query cache wasn't used by monitoring
the query cache stats:
http://dev.mysql.com/doc/mysql/en/query-cache-status-and-maintenance.html
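
A sketch of that check: if the Qcache counters don't move between the two
runs, the query cache wasn't involved (status variable names as documented
for MySQL 4.1):

```sql
-- Query-cache hit/insert counters; compare the values
-- before and after running the query in question
SHOW STATUS LIKE 'Qcache%';
```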

I am running MySQL 4.1.10 with InnoDB on RHEL 2.1 (kernel
2.4.9-e.49enterprise). Binaries are Linux x86 glibc static gcc RPMs from
mysql.com.

Thanks,
-Bob
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Slow queries only the first time

2005-03-10 Thread Gordon
The key to your question is InnoDB. InnoDB keeps data and indexes in its
buffer pool, using LRU to decide what to flush, so the second time your data
was already in memory. Depending on your buffer size and database size, you
get all the advantages of a memory-resident database for frequently used
data without the disadvantage of losing the data on system shutdown.
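
As a hedged sketch, the buffer described above is sized by
innodb_buffer_pool_size; a my.cnf fragment for a 4.1-era server might look
like this (the value is an example only, not a recommendation for this
particular machine):

```ini
# [mysqld] section of my.cnf -- example value only
[mysqld]
innodb_buffer_pool_size = 256M
```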

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 10, 2005 1:43 PM
To: Bob O'Neill
Cc: mysql@lists.mysql.com
Subject: Re: Slow queries only the first time


[snip]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-11 Thread Pete Moran
I recently had a similar problem; however, you may find that it's more a
case of correctly indexing your tables.

You should look for the tables which need indexing. I enabled the
slow-query log as well as logging of queries which didn't use indexes, and
found some which did some very nasty table scans.  This can seriously
degrade your performance; if you can find the queries in question, then
modify/add indexes or change the SQL to reduce the number of rows scanned.

Lines in my.cnf:

log-slow-queries = /var/lib/mysql/data/slow.log
set-variable = long_query_time=5
log-long-format  # this will log selects not using indexes

Then simply tail the log when you have issues to find which ones need
optimizing.

BTW - the above is using MySQL 4, so the syntax may differ in your version.
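
A quick sanity check once the server is restarted, to confirm it picked up
the settings (variable names as listed in the 4.x SHOW VARIABLES output):

```sql
-- Should report the threshold and log state you configured
SHOW VARIABLES LIKE 'long_query_time';
SHOW VARIABLES LIKE 'log_slow_queries';
```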

PM

-Original Message-
From: BD [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 8 January 2005 2:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

[snip]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread BD
Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M
to my my.cnf file under [mysqld].  When I restarted MySQL, I got a 0
DB_ERROR_NOT_CONNECTED error when visiting
my PHP-MySQL web application via HTTP.  When I removed the above and then
restarted MySQL, the error went away.

I read in my Widenius MySQL book that I should use the command
set_variable=key_buffer=64M (or some other memory limit - I have 1.2 gigs
of RAM on the server).  I also tried this instead of adding the above to
my.cnf. However, upon logging on as root in MySQL and running the command,
I'm getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve the speed of accessing the MySQL
database for my web application?  I'm running MySQL 3.23.56 on a Linux box.

Thanks anyone,

BD



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a 0
DB_ERROR_NOT_CONNECTED error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
[/snip]

You can set it without stopping MySQL using the following:

'SET GLOBAL key_buffer_size = [size] * 1024 * 1024'

where [size] is the size in MB you want the key_buffer_size to be. You'll
need to add it to your my.cnf if you want the setting to survive a restart.
Try adding 'set-variable = key_buffer = 64M' to your my.cnf.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

[snip]


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread BD
Thanks Tom,

OK, I tried this. I must be doing something wrong. I entered:  SET GLOBAL
key_buffer_size = 64 * 1024 * 1024;

and got the error:  You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 * 1024' at line 1

I tried both 64M and just 64.  Should I also enter tick marks?  I tried
that too.

BD

-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 11:20 AM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a 0
DB_ERROR_NOT_CONNECTED error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
I'm running MySQL 3.23.56 on a linux box.
[/snip]

Sorry, I didn't explain that you were probably not able to connect: prior
to version 4.1, setting a variable in my.cnf requires that you add
set-variable = before the setting, i.e. 'set-variable = key_buffer = 64M'.
When you restarted MySQL, it probably exited on an error.

[snip]
OK I tried this, I must be doing something wrong, I entered in:   SET GLOBAL
key_buffer_size = 64 * 1024 *1024;
and got the error:  You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 *1024' at line 1
[/snip]

Sorry about this too: in 3.23, leave out the word GLOBAL. If you can restart
without a problem, though, I would just add the 'set-variable = key_buffer =
64M' line to your my.cnf file and restart MySQL.
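
Pulling the corrections above together, a my.cnf fragment matching that
advice (pre-4.1 servers need the set-variable prefix; 64M is the example
value from the thread, not a tuned recommendation):

```ini
# my.cnf, [mysqld] section -- pre-4.1 syntax requires set-variable
[mysqld]
set-variable = key_buffer = 64M
```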

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD 
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

[snip]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread BD
Hi Tom,

OK, thanks. I just added the set-variable = key_buffer = 64M line to my
my.cnf file, and at least I got no errors: the MySQL server restarted OK
and I got my test site running.

The problem now is that this did not do anything to improve the query and
parse times. I'm testing out an online store which has about 12,000 product
entries in it and about 300 product categories.  Both the parse and query
times are running over 12 seconds - and that's just to bring up the front
page of the store with the category/sub-category list.

I'm sure there are lots of other switches in MySQL; do you have any other
suggestions as to what I could do to reduce query times to a reasonable
1-3 seconds?

I just bought the book
http://www.amazon.com/exec/obidos/tg/detail/-/0596003064/102-0076963-3409775
?%5Fencoding=UTF8&v=glance
but it has not arrived yet.  Any suggestions in the meantime?

BD


-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 12:08 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
The problem now is, this did not do anything to improve the query and parse
times.
I'm testing out an on line store which has about 12,000 product entries in
it and about 300 product categories.  Both the parse and query times are
running over 12 seconds
[/snip]

This is probably related to not having proper indexes on your tables. If you
post the query, and a 'SHOW CREATE TABLE [tablename]' for each table
involved, someone may be able to help you speed it up.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD 
Sent: Friday, January 07, 2005 11:21 AM
To: Tom Crimmins
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?

[snip]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer - zen-cart?

2005-01-07 Thread BD
OK, thanks, Tom.

The application I am using for the site is www.zen-cart.com, so I'm not sure
I can do anything about changing the table indexes, because it is a
pre-written PHP-MySQL open-source freeware script.  I'm getting the query
times directly from the application and not from MySQL.

However, I could post a query I generate directly from MySQL - how could I
do that?  What would be the command I should use to generate query times
from MySQL?

For the SHOW CREATE TABLE, I have posted below the five tables that (I
believe) are directly involved in generating a product and category list on
the front page of my test store.  I did not post all tables in the database,
because there are 97 tables total, but I think these are the pertinent
tables involved in the slow query times; any suggestions that I get here I
will definitely pass along to the zen cart developers.  If there is
something that I can fix without changing the PHP code of the application,
just with MySQL settings, that would be great...

| zen_products | CREATE TABLE `zen_products` (
  `products_id` int(11) NOT NULL auto_increment,
  `products_type` int(11) NOT NULL default '1',
  `products_quantity` float NOT NULL default '0',
  `products_model` varchar(32) default NULL,
  `products_image` varchar(64) default NULL,
  `products_price` decimal(15,4) NOT NULL default '0.',
  `products_virtual` tinyint(1) NOT NULL default '0',
  `products_date_added` datetime NOT NULL default '0001-01-01 00:00:00',
  `products_last_modified` datetime default NULL,
  `products_date_available` datetime default NULL,
  `products_weight` decimal(5,2) NOT NULL default '0.00',
  `products_status` tinyint(1) NOT NULL default '0',
  `products_tax_class_id` int(11) NOT NULL default '0',
  `manufacturers_id` int(11) default NULL,
  `products_ordered` float NOT NULL default '0',
  `products_quantity_order_min` float NOT NULL default '1',
  `products_quantity_order_units` float NOT NULL default '1',
  `products_priced_by_attribute` tinyint(1) NOT NULL default '0',
  `product_is_free` tinyint(1) NOT NULL default '0',
  `product_is_call` tinyint(1) NOT NULL default '0',
  `products_quantity_mixed` tinyint(1) NOT NULL default '0',
  `product_is_always_free_shipping` tinyint(1) NOT NULL default '0',
  `products_qty_box_status` tinyint(1) NOT NULL default '1',
  `products_quantity_order_max` float NOT NULL default '0',
  `products_sort_order` int(11) NOT NULL default '0',
  `products_discount_type` tinyint(1) NOT NULL default '0',
  `products_discount_type_from` tinyint(1) NOT NULL default '0',
  `products_price_sorter` decimal(15,4) NOT NULL default '0.',
  `master_categories_id` int(11) NOT NULL default '0',
  `products_mixed_discount_quantity` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`products_id`),
  KEY `idx_products_date_added` (`products_date_added`)
) TYPE=MyISAM |

| zen_categories_description | CREATE TABLE `zen_categories_description` (
  `categories_id` int(11) NOT NULL default '0',
  `language_id` int(11) NOT NULL default '1',
  `categories_name` varchar(32) NOT NULL default '',
  `categories_description` text NOT NULL,
  PRIMARY KEY  (`categories_id`,`language_id`),
  KEY `idx_categories_name` (`categories_name`)
) TYPE=MyISAM |


| zen_categories | CREATE TABLE `zen_categories` (
  `categories_id` int(11) NOT NULL auto_increment,
  `categories_image` varchar(64) default NULL,
  `parent_id` int(11) NOT NULL default '0',
  `sort_order` int(3) default NULL,
  `date_added` datetime default NULL,
  `last_modified` datetime default NULL,
  `categories_status` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`categories_id`),
  KEY `idx_categories_parent_id` (`parent_id`),
  KEY `idx_sort_order` (`sort_order`)
) TYPE=MyISAM |

| zen_products_description | CREATE TABLE `zen_products_description` (
  `products_id` int(11) NOT NULL auto_increment,
  `language_id` int(11) NOT NULL default '1',
  `products_name` varchar(64) NOT NULL default '',
  `products_description` text,
  `products_url` varchar(255) default NULL,
  `products_viewed` int(5) default '0',
  PRIMARY KEY  (`products_id`,`language_id`),
  KEY `products_name` (`products_name`)
) TYPE=MyISAM |

| zen_products_to_categories | CREATE TABLE `zen_products_to_categories` (
  `products_id` int(11) NOT NULL default '0',
  `categories_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`products_id`,`categories_id`)
) TYPE=MyISAM |



-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 12:23 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]
The problem now is, this did not do anything to improve the query and parse
times.
I'm testing out an on line store which has about 12,000 product entries in
it and about 300 product categories.  Both the parse and query times are
running over 12 seconds
[/snip]

This is probably related to not having proper indexes on your tables

RE: Slow queries, need advice on how to improve; key_buffer - zen -cart?

2005-01-07 Thread Tom Crimmins
[snip]
The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing the table indexes, because it is a
pre-written PHP/MySQL open-source freeware script.  I'm getting the query
times directly from the application and not MySQL.
[/snip]

You could turn on slow query logging in MySQL to see which query is taking
so long, then make sure the tables are properly indexed for it. I
would hope that this software has properly indexed the tables, but you can
verify this on your own.

Add the following to your my.cnf to enable the logging of slow queries.

[mysqld]
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log (or whatever file you want, just
make sure the user mysqld is running as has write permissions to it.)
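
(Those option names apply to MySQL 4.x; on MySQL 5.1 and later the slow-log
options were renamed, so on a recent server the rough equivalent of the
settings above would be:)

```ini
[mysqld]
# Log any statement that takes longer than 2 seconds to execute
long_query_time     = 2
slow_query_log      = 1
slow_query_log_file = /var/log/mysqld.slow.log
```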

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 11:58 AM
To: Tom Crimmins
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer -
zen-cart?

OK thanks Tom,

The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing the table indexes, because it is a
pre-written PHP/MySQL open-source freeware script.  I'm getting the query
times directly from the application and not MySQL.

However, I could post a query generated directly from MySQL; how could I do
that?  What command should I use to generate query times from MySQL?

For the SHOW CREATE TABLE, I have posted below the five tables that (I
believe) are directly involved in generating a product and category list on
the front page of my test store.  I did not post all tables in the database
because there are 97 tables total, but I think these are the pertinent
tables involved in the slow query times.  Any suggestions that I get here I
will definitely pass along to the Zen Cart developers.  If there is
something that I can do without changing the PHP code of the application and
fix just with MySQL settings, that would be great...

zen_products | CREATE TABLE `zen_products` (
  `products_id` int(11) NOT NULL auto_increment,
  `products_type` int(11) NOT NULL default '1',
  `products_quantity` float NOT NULL default '0',
  `products_model` varchar(32) default NULL,
  `products_image` varchar(64) default NULL,
  `products_price` decimal(15,4) NOT NULL default '0.',
  `products_virtual` tinyint(1) NOT NULL default '0',
  `products_date_added` datetime NOT NULL default '0001-01-01 00:00:00',
  `products_last_modified` datetime default NULL,
  `products_date_available` datetime default NULL,
  `products_weight` decimal(5,2) NOT NULL default '0.00',
  `products_status` tinyint(1) NOT NULL default '0',
  `products_tax_class_id` int(11) NOT NULL default '0',
  `manufacturers_id` int(11) default NULL,
  `products_ordered` float NOT NULL default '0',
  `products_quantity_order_min` float NOT NULL default '1',
  `products_quantity_order_units` float NOT NULL default '1',
  `products_priced_by_attribute` tinyint(1) NOT NULL default '0',
  `product_is_free` tinyint(1) NOT NULL default '0',
  `product_is_call` tinyint(1) NOT NULL default '0',
  `products_quantity_mixed` tinyint(1) NOT NULL default '0',
  `product_is_always_free_shipping` tinyint(1) NOT NULL default '0',
  `products_qty_box_status` tinyint(1) NOT NULL default '1',
  `products_quantity_order_max` float NOT NULL default '0',
  `products_sort_order` int(11) NOT NULL default '0',
  `products_discount_type` tinyint(1) NOT NULL default '0',
  `products_discount_type_from` tinyint(1) NOT NULL default '0',
  `products_price_sorter` decimal(15,4) NOT NULL default '0.',
  `master_categories_id` int(11) NOT NULL default '0',
  `products_mixed_discount_quantity` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`products_id`),
  KEY `idx_products_date_added` (`products_date_added`)
) TYPE=MyISAM |

| zen_categories_description | CREATE TABLE `zen_categories_description` (
  `categories_id` int(11) NOT NULL default '0',
  `language_id` int(11) NOT NULL default '1',
  `categories_name` varchar(32) NOT NULL default '',
  `categories_description` text NOT NULL,
  PRIMARY KEY  (`categories_id`,`language_id`),
  KEY `idx_categories_name` (`categories_name`)
) TYPE=MyISAM |


zen_categories | CREATE TABLE `zen_categories` (
  `categories_id` int(11) NOT NULL auto_increment,
  `categories_image` varchar(64) default NULL,
  `parent_id` int(11) NOT NULL default '0',
  `sort_order` int(3) default NULL,
  `date_added` datetime default NULL,
  `last_modified` datetime default NULL,
  `categories_status` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`categories_id`),
  KEY `idx_categories_parent_id` (`parent_id`),
  KEY `idx_sort_order` (`sort_order`)
) TYPE=MyISAM |

| zen_products_description | CREATE TABLE `zen_products_description` (
  `products_id` int(11) NOT NULL auto_increment,
  `language_id` int(11) NOT NULL default '1

FW: Slow queries, need advice on how to improve; key_buffer - zen-cart?

2005-01-07 Thread BD
  Lock_time: 0  Rows_sent: 5  Rows_examined: 10
SELECT *
FROM phpbb_forums
ORDER BY cat_id, forum_order;
# Time: 050107 17:41:59
# [EMAIL PROTECTED]: graphic[graphic] @ localhost []
# Query_time: 0  Lock_time: 0  Rows_sent: 60  Rows_examined: 60
use graphicforum;
SELECT *
FROM phpbb_config;
# [EMAIL PROTECTED]: graphic[graphic] @ localhost []
# Query_time: 0  Lock_time: 0  Rows_sent: 2  Rows_examined: 4
SELECT *
FROM phpbb_forums
ORDER BY cat_id, forum_order;
# Time: 050107 17:42:11
# [EMAIL PROTECTED]: graphic[graphic] @ localhost []
# Query_time: 0  Lock_time: 0  Rows_sent: 62  Rows_examined: 62
use graphicguestbookdata;
SELECT *
FROM phpbb_config;
# [EMAIL PROTECTED]: graphic[graphic] @ localhost []
# Query_time: 0  Lock_time: 0  Rows_sent: 5  Rows_examined: 10
SELECT *
FROM phpbb_forums
ORDER BY cat_id, forum_order;

-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 4:03 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer -
zen-cart?


[snip]
The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing the table indexes, because it is a
pre-written PHP/MySQL open-source freeware script.  I'm getting the query
times directly from the application and not MySQL.
[/snip]

You could turn on slow query logging in MySQL to see which query is taking
so long, then make sure the tables are properly indexed for it. I
would hope that this software has properly indexed the tables, but you can
verify this on your own.

Add the following to your my.cnf to enable the logging of slow queries.

[mysqld]
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log (or whatever file you want, just
make sure the user mysqld is running as has write permissions to it.)

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer - zen -cart?

2005-01-07 Thread Tom Crimmins
[snip]

# Time: 050107 17:40:41
# [EMAIL PROTECTED]: zencarttest[zencarttest] @  [xxx.xxx.xxx.xxx]
# Query_time: 13  Lock_time: 0  Rows_sent: 148  Rows_examined: 1567270
use zencarttest;
select distinct m.manufacturers_id, m.manufacturers_name from
zen_manufacturers m
 left join zen_products p on m.manufacturers_id =
p.manufacturers_id
 where m.manufacturers_id = p.manufacturers_id and
p.products_status= '1'
 order by manufacturers_name;
[/snip] 

This appears to be the problem query. Looks like zen_products could use an
index on (manufacturers_id, products_status), and zen_manufacturers could
use an index on (manufacturers_id,manufacturers_name).

You can try adding these indexes and running the query to see if it helps. You
may want to do an EXPLAIN after adding the indexes to see whether it is
using them.
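
Concretely, the suggested changes might look like this (the index names here
are arbitrary):

```sql
-- Add the suggested composite indexes
ALTER TABLE zen_products
  ADD INDEX idx_manufacturers_status (manufacturers_id, products_status);
ALTER TABLE zen_manufacturers
  ADD INDEX idx_manufacturers_name (manufacturers_id, manufacturers_name);

-- Then check that the optimizer actually uses them
EXPLAIN
SELECT DISTINCT m.manufacturers_id, m.manufacturers_name
FROM zen_manufacturers m
  LEFT JOIN zen_products p ON m.manufacturers_id = p.manufacturers_id
WHERE m.manufacturers_id = p.manufacturers_id
  AND p.products_status = '1'
ORDER BY manufacturers_name;
```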

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa




really slow queries with innodb

2004-11-19 Thread Dale Fay
  I was using myisam tables and converted them to innodb
with Alter table table TYPE=INNODB; A query that used
to take 23 minutes does not complete in hours. There
are about 33M rows in the table and I was doing a count of
the rows. Some queries with more conditions seem fine.

 Here is the table:
 dspam_token_data | CREATE TABLE `dspam_token_data` (
  `uid` smallint(5) unsigned default NULL,
  `token` char(20) default NULL,
  `spam_hits` int(11) default NULL,
  `innocent_hits` int(11) default NULL,
  `last_hit` date default NULL,
  UNIQUE KEY `id_token_data_01` (`uid`,`token`),
  KEY `id_token_data_02` (`innocent_hits`)
) TYPE=InnoDB |

  Below is the my.cnf file, with comments removed.
 
Thanks,
Dale


[client]
#password   = your_password
port= 3306
socket  = /tmp/mysql.sock

# Here follows entries for some specific programs

# The MySQL server
[mysqld]
port= 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer = 16M
max_allowed_packet = 16M
table_cache = 1024 
sort_buffer_size = 512K 
read_buffer_size = 2M
myisam_sort_buffer_size = 5M
thread_cache = 8
query_cache_size= 32M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 2
wait_timeout=200

max_connections = 1000

# Replication Master Server (default)
# binary logging is required for replication
log-bin

# required unique id between 1 and 2^32 - 1
# defaults to 1 if master-host is not set
# but will not function as a master if omitted
server-id   = 1



innodb_data_home_dir = /private2/local/var/dspamdb/
innodb_data_file_path = ibdata1:3G:autoextend

# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high


innodb_buffer_pool_size = 800M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 100M
innodb_log_buffer_size = 20M
innodb_flush_method = nosync
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 10



# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high


[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates



[mysqlhotcopy]
interactive-timeout









Re: really slow queries with innodb

2004-11-19 Thread SGreen
What you sent is important information for diagnosing your problem, but it 
would have really helped if you had posted your query along with an 
EXPLAIN of it. 

Also, it is a WELL KNOWN fact that InnoDB does NOT know exactly how many 
rows are in a table at any particular moment (because of the versioning 
system it uses to permit row-level locks), so any query like

SELECT count(*) 
FROM Any_InnoDB_Table;

could need to perform a full table scan to calculate that count. I am sure 
that is at least part of your slowdown.
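
When an exact count is not required, one common workaround is to read the
optimizer's row estimate from the table statistics instead of scanning the
table (using the table from your post):

```sql
-- Exact count: may require a full table scan on InnoDB
SELECT COUNT(*) FROM dspam_token_data;

-- Approximate row count from table statistics (fast, but only an estimate
-- for InnoDB -- see the Rows column of the output)
SHOW TABLE STATUS LIKE 'dspam_token_data';
```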

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine

Dale Fay [EMAIL PROTECTED] wrote on 11/19/2004 02:19:26 PM:

   I was using myisam tables and converted them to innodb
 with  Alter table table TYPE=INNODB; A query that used
 to take 23 minutes, does not complete in hours. There
 about 33M rows in the table and I was doing a count of
 the rows. Some queries with more conditions seem fine.
 
  Here is the table:
  dspam_token_data | CREATE TABLE `dspam_token_data` (
   `uid` smallint(5) unsigned default NULL,
   `token` char(20) default NULL,
   `spam_hits` int(11) default NULL,
   `innocent_hits` int(11) default NULL,
   `last_hit` date default NULL,
   UNIQUE KEY `id_token_data_01` (`uid`,`token`),
   KEY `id_token_data_02` (`innocent_hits`)
 ) TYPE=InnoDB |
 
   Below is the my.cnf file, with comments removed.
 
 Thanks,
 Dale
 
 
 [client]
 #password   = your_password
 port  = 3306
 socket  = /tmp/mysql.sock
 
 # Here follows entries for some specific programs
 
 # The MySQL server
 [mysqld]
 port  = 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer = 16M
 max_allowed_packet = 16M
 table_cache = 1024 
 sort_buffer_size = 512K 
 read_buffer_size = 2M
 myisam_sort_buffer_size = 5M
 thread_cache = 8
 query_cache_size= 32M
 # Try number of CPU's*2 for thread_concurrency
 thread_concurrency = 2
 wait_timeout=200
 
 max_connections   = 1000
 
 # Replication Master Server (default)
 # binary logging is required for replication
 log-bin
 
 # required unique id between 1 and 2^32 - 1
 # defaults to 1 if master-host is not set
 # but will not function as a master if omitted
 server-id   = 1
 
 
 
 innodb_data_home_dir = /private2/local/var/dspamdb/
 innodb_data_file_path = ibdata1:3G:autoextend
 
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 
 
 innodb_buffer_pool_size = 800M
 innodb_additional_mem_pool_size = 20M
 innodb_log_file_size = 100M
 innodb_log_buffer_size = 20M
 innodb_flush_method = nosync
 innodb_flush_log_at_trx_commit = 2
 innodb_lock_wait_timeout = 10
 
 
 
 # You can set .._buffer_pool_size up to 50 - 80 %
 # of RAM but beware of setting memory usage too high
 
 
 [mysqldump]
 quick
 max_allowed_packet = 16M
 
 [mysql]
 no-auto-rehash
 # Remove the next comment character if you are not familiar with SQL
 #safe-updates
 
 
 
 [mysqlhotcopy]
 interactive-timeout
 
 
 
 
 
 
 


Re: Slow Queries on Fast Server?

2004-08-27 Thread JVanV8
Thanks Brent and Donny, hopefully this info will help get to the root of the problem 
with the fulltext search.
The table structure is very, very simple:

mysql> describe product_fulltext;
+-------------+--------+------+-----+---------+-------+
| Field       | Type   | Null | Key | Default | Extra |
+-------------+--------+------+-----+---------+-------+
| product_id  | int(9) |      | PRI | 0       |       |
| search_text | text   |      | MUL |         |       |
+-------------+--------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Space usage:
Type    Usage
Data    502,455 KB
Index   440,412 KB
Total   942,867 KB

Row statistics:
Statements   Value
Format       dynamic
Rows         3,237,981
Row length   ø 158
Row size     ø 298 Bytes

MySQL 4.0.20-standard-log  Official MySQL RPM

I also calculated the average text field length: 
mysql> SELECT AVG(LENGTH(search_text)) AS avg_length FROM product_fulltext;
+------------+
| avg_length |
+------------+
|   147.2239 |
+------------+
1 row in set (33.34 sec)

Is my average text length too long?  Is MySQL 4.0.20 really that slow for fulltext 
searching?  If so, how much will performance increase by upgrading to 4.1.x?  Is 
upgrading difficult?

You may try doubling or tripling your sort_buffer and myisam_sort_buffer settings 
and maybe your read_buffer.

My sort_buffer is 10Mb, the read_buffer is 2Mb and the myisam_sort_buffer is 64Mb ... 
are these still too low?

I think I'm going to try to install mytop to get more performance info.  
Currently, my temporary solution is to limit the query to 2000 rows but it still takes 
4-5 seconds and doesn't give a complete picture for search results.

Thanks for any help on this,
- John




Re: Slow Queries on Fast Server?

2004-08-27 Thread Brent Baisley
If you are sorting the result, setting a limit only speeds things up 
for data transfer of the result set since MySQL still has to find all 
the records, sort them, then deliver only the first X records. You can 
usually tell how much time is spent on the transfer of the result set 
vs. finding the result by doing a select count(*) rather than actually 
selecting data.
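
For example, timing these two queries (using the table and search term from
the earlier posts) separates match-and-sort time from result-transfer time:

```sql
-- Time spent finding matches, sorting, and sending the first 20 rows
SELECT SQL_CALC_FOUND_ROWS some_id
FROM fulltext_table
WHERE MATCH (the_text) AGAINST ('blue')
LIMIT 0, 20;

-- Time spent just finding the matches (almost no result transfer)
SELECT COUNT(*)
FROM fulltext_table
WHERE MATCH (the_text) AGAINST ('blue');
```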

As for average length, I don't think that's the issue. The system I'm 
designing searches resumes, so while there are only about 15k records, 
the average length is about 3 pages of text. My data size is actually 
about the same size as yours. My response time on a test machine is 
about 6-8 seconds, but the test machine is woefully underpowered for 
real use (Mac 350Mhz G4 256MB RAM).

I would definitely try increasing your buffer variables. Double them 
and see what happens. Many times on this list people have gotten 
significant performance boosts from setting high sort buffers. Please 
post results if it works.

On Aug 27, 2004, at 9:28 AM, [EMAIL PROTECTED] wrote:
Thanks Brent and Donny, hopefully this info will help get to the root 
of the problem with the fulltext search.
The table structure is very, very simple:

mysql> describe product_fulltext;
+-------------+--------+------+-----+---------+-------+
| Field       | Type   | Null | Key | Default | Extra |
+-------------+--------+------+-----+---------+-------+
| product_id  | int(9) |      | PRI | 0       |       |
| search_text | text   |      | MUL |         |       |
+-------------+--------+------+-----+---------+-------+
2 rows in set (0.00 sec)
Space usage:
Type    Usage
Data    502,455 KB
Index   440,412 KB
Total   942,867 KB
Row statistics:
Statements   Value
Format       dynamic
Rows         3,237,981
Row length   ø 158
Row size     ø 298 Bytes
MySQL 4.0.20-standard-log  Official MySQL RPM
I also calculated the average text field length:
mysql> SELECT AVG(LENGTH(search_text)) AS avg_length FROM product_fulltext;
+------------+
| avg_length |
+------------+
|   147.2239 |
+------------+
1 row in set (33.34 sec)

Is my average text length too long?  Is MySQL 4.0.20 really that slow 
for fulltext searching?  If so, how much will performance increase by 
upgrading to 4.1.x?  Is upgrading difficult?

You may try doubling or tripling your sort_buffer and 
myisam_sort_buffer settings and maybe your read_buffer.
My sort_buffer is 10Mb, the read_buffer is 2Mb and the 
myisam_sort_buffer is 64Mb ... are these still too low?

I think I'm going to try to install mytop to get more performance 
info.
Currently, my temporary solution is to limit the query to 2000 rows 
but it still takes 4-5 seconds and doesn't give a complete picture for 
search results.

Thanks for any help on this,
- John


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Re: Slow Queries on Fast Server?

2004-08-26 Thread JVanV8
I'm gathering from the lack of response that perhaps MySQL is incapable of executing a 
count of the number of fulltext matches on 3 million rows.  
I really thought that MySQL 4 was supposed to be able to handle such a load.
I still think my configuration may be to blame?
- John






--
Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)




[EMAIL PROTECTED] wrote:
Have you checked the Optimization section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
 
 
 Oh yes, as I've attempted to configure the my.cnf file for best performance.  The
 query is correct.  The fulltext index is correct as I built the fulltext index on
 the
 single column (took 9 minutes) and even did repair and optimize on the table...
 so I
 don't think its the index.  I'm thinking its the server config...
 
 - John
 
 
 
 [EMAIL PROTECTED] wrote:
 
I'm running into a problem with some queries running on a dedicated mysql server
 (2.0
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST
 queries
are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2
 -
 0.75
seconds) but it was only fast for a day or so.
Here's the rundown:

TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST
 ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH
 (the_text)
AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query
 but
it didn't improve anything.  The table has a fulltext index on the column and is
optimized.  No other users are connected to the server.

Is there a RED FLAG in here somewhere?

MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1


Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)

Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
 0  1  19516  17580  42416 176975200 

Re: Slow Queries on Fast Server?

2004-08-26 Thread Brent Baisley
Capable? I can't think of why it wouldn't be capable. From your posts I  
assume your definition of capable in this case is a quick response.  
Are you running 4.0 or 4.1? I think the indexing was changed in 4.1, so  
it would give you better response. 5-20 seconds does seem long,  
assuming your disks are fast.

You may try doubling or tripling your sort_buffer and  
myisam_sort_buffer settings and maybe your read_buffer. Remember, the  
template configuration files are still all-purpose configurations. You  
are really looking to optimize a specific area.

Unfortunately, I don't have a dataset large enough yet to test myself,  
but I am curious. Improved performance is on the to do list.

On Aug 26, 2004, at 9:07 AM, [EMAIL PROTECTED] wrote:
I'm gathering by the lack of response that perhaps MySQL is incapable  
of executing a count of the number of fulltext matches on 3 million  
rows.
I really thought that MySQL 4 was really suppose to be able to handle  
such a load
I still think my configuration may be to blame
?
- John



--
Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:
mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)


[EMAIL PROTECTED] wrote:
Have you checked the Optimization section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best  
performance.  The
query is correct.  The fulltext index is correct as I built the  
fulltext index on
the
single column (took 9 minutes) and even did repair and optimize  
on the table...
so I
don't think its the index.  I'm thinking its the server config...

- John

[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated  
mysql server
(2.0
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most  
MATCH, AGAINST
queries
are taking 5-20 seconds.  Performance was excellent for some reason  
one day (0.2
-
0.75
seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text)
Rows: 3,237,981
Type: MyISAM
Size: 920.8 MB
QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text)  
AGAINST
('blue');
or
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE  
MATCH
(the_text)
AGAINST ('blue') LIMIT 0, 20;
Both are problematic.  I even tried placing a limit of 2 on the  
first query
but
it didn't improve anything.  The table has a fulltext index on the  
column and is
optimized.  No other users are connected to the server.

Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST  
('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)

Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system
  cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us  
sy id wa
0  0  19500  17800  42432 177172800 060  11330   
0  0 99  1
0  1  19500  21524  42428 1765728  7240   960 0  536   444   
5  1 82 12
0  1  19500  19512  42424 176782000  2348 0  912   592   
0  1 50 49
0  1  19500  17788  42424 176954000  1980 0  868   588   
0  1 51 48
0  1  19500  17568  42424 176976000  2300 0  723   401   
0  0 50 49
0  1  19500  17704  42428 176962000  193620  662   364   
0  0 51 49
0  1  19500  17560  42428 176976400  2224 0  696   400   
0  0 51 49
0  1  19500  17504  42424 176982400  2136 0  670   380   
0  0 51 49
0  1  19500  17616  42424 176971200  2228 0  693   415   
0  0 51 49
0  1  19508  17608  42420 176972408  2348 8  692   389   
0  0 50 50
0  1  19508  17532  42428 176979200  1896   108  654   366   
0  0 50 49
0  1  

RE: Slow Queries on Fast Server?

2004-08-26 Thread Donny Simonton
John,
From my experience it depends a lot more on how big your data is, not necessarily
the number of rows.  Which version of MySQL are you running?  Are you using
a MySQL prepared binary (downloaded from mysql.com)?  I'm using
4.1.3 and I have a table with a char(68) column and 29 million rows that is
fulltext indexed, and all of my queries similar to yours take
0.1 to 0.2 seconds max.  Also, if you provided your full table structure
including the indexes, that would help.  
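
For example, the output of the following shows the full structure and every
index (using the table name from earlier in this thread):

```sql
SHOW CREATE TABLE product_fulltext\G
SHOW INDEX FROM product_fulltext;
```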

Donny

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
 Sent: Thursday, August 26, 2004 8:08 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Slow Queries on Fast Server?
 
 I'm gathering by the lack of response that perhaps MySQL is incapable of
 executing a count of the number of fulltext matches on 3 million rows.
 I really thought that MySQL 4 was really suppose to be able to handle such
 a load
 I still think my configuration may be to blame
 ?
 - John
 
 
 
 
 
 
 --
 Could you send the output of an EXPLAIN for your query?
 
 Sure, pretty sure the index is fine though:
 
 mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
 +------------------+----------+---------------+-------------+---------+------+------+-------------+
 | table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
 +------------------+----------+---------------+-------------+---------+------+------+-------------+
 | product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
 +------------------+----------+---------------+-------------+---------+------+------+-------------+
 1 row in set (0.00 sec)
 
 
 
 
Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
I'm running into a problem with some queries running on a dedicated MySQL server (2.0 
GHz, 2GB RAM).  Fulltext searching shows it most clearly: most MATCH ... AGAINST 
queries take 5-20 seconds.  Performance was excellent for some reason one day 
(0.2 - 0.75 seconds), but it was only fast for a day or so.
Here's the rundown:

TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) 
AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but it 
didn't improve anything.  The table has a fulltext index on the column and is 
optimized.  No other users are connected to the server.

Is there a RED FLAG in here somewhere?

MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
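[Editor's aside, a hedged sketch not from the original post: in MySQL 4.0 a lowered ft_min_word_len only affects the fulltext index after that index is rebuilt, and it is worth confirming the running server actually picked up the my.cnf change. Using the table name from the post:]

```sql
-- Confirm the running server picked up the my.cnf change
SHOW VARIABLES LIKE 'ft_%';

-- A changed ft_min_word_len takes effect only after the fulltext
-- index is rebuilt; QUICK repairs only the index file, not the data
REPAIR TABLE fulltext_table QUICK;
```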


Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)

Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
 0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
 0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
 0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
 0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
 0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
 0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
 0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
 0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
 0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
 0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
 0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
 0  0  19516  19056  42404 17697520040 0  13240  0  0 97  2

Statistics for top command:
-
 PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 4784 root  15   0   488  488   420 S 0.2  0.0   0:00   0 vmstat
 3979 mysql 16   0 68128  52M  2188 S 0.1  2.6   0:06   1 mysqld
 3982 mysql 15   0 68128  52M  2188 S 0.1  2.6   0:05   2 mysqld
1 root  15   0   512  512   452 S 0.0  0.0   0:05   2 init
2 root  RT   0 00 0 SW0.0  0.0   0:00   0 migration/0
3 root  RT   0 00 0 SW0.0  0.0   0:00   1 migration/1
4 root  RT   0 00 0 SW0.0  0.0   0:00   2 

Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Have you checked the Optimization section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
It's probably the best place to start.
Cheers,
--V
[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated mysql server (2.0 
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries are taking 5-20 seconds.  Performance was excellent for some reason one day 
(0.2 - 0.75 seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but it 
didn't improve anything.  The table has a fulltext index on the column and is 
optimized.  No other users are connected to the server.
Is there a RED FLAG in here somewhere?

Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
Have you checked the Optimization section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The 
query is correct.  The fulltext index is correct as I built the fulltext index on the 
single column (took 9 minutes) and even did repair and optimize on the table... so 
I don't think it's the index.  I'm thinking it's the server config...

- John



[EMAIL PROTECTED] wrote:
 I'm running into a problem with some queries running on a dedicated mysql server (2.0
 GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
 queries
 are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 - 
 0.75
 seconds) but it was only fast for a day or so.

Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Could you send the output of an EXPLAIN for your query?
--V
[EMAIL PROTECTED] wrote:
Have you checked the Optimization section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The query is correct.  The 
fulltext index is correct as I built the fulltext index on the single column (took 9 minutes) and even did 
repair and optimize on the table... so I don't think it's the index.  I'm thinking 
it's the server config...
- John


Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST 
('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)
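[Editor's aside, an interpretation rather than something stated in the thread: the earlier vmstat output shows wa pinned near 50% with a steady block-in rate, so the box looks I/O-bound, which would suggest the 500M key_buffer is not actually holding the fulltext index. A hedged way to check the key-cache hit rate from SQL (status variable names as in MySQL 4.0):]

```sql
-- Key_reads / Key_read_requests is the key-cache miss rate; a ratio
-- much above ~1% on a warm server means index blocks are being
-- fetched from disk instead of the key buffer.
SHOW STATUS LIKE 'Key_read%';
```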





disabling optimizations to identify slow queries

2004-02-24 Thread Bill Marrs
I've found a performance issue with a series of MySQL queries that I make 
to generate a web page.  But when I go to investigate it by reloading the 
page a few times, the page becomes very fast within a couple of tries.

So, it's hard to track down and work on the queries that are slow, since 
they do not remain slow.

Unfortunately, the page is still slow fairly often, so this doesn't help my 
real problem.

On my test server, I've tried disabling some optimizations in the server 
parameters, to see if I could get a more reproducible case (figuring that 
query caching and/or the key buffer could be resulting in repeated queries 
being optimized):

set-variable = query_cache_type=0
set-variable = key_buffer_size=0
...but, I haven't managed to reproduce the slow queries reliably.

Does anyone know what might be going on, and whether there's some setting I can 
use to make my performance issue more reproducible?

Am I missing something?

-=bill

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: disabling optimizations to identify slow queries

2004-02-24 Thread Mike Johnson
From: Bill Marrs [mailto:[EMAIL PROTECTED]

 I've found a performance issue with a series of mysql queries 
 that I make to generate a web page.  But, when I go to 
 investigate it, reloading the page a few times, I find the 
 performance of the pages within a couple tries becomes very 
 fast.


It sounds like query caching is working against you.

There are a variety of ways to get around it. While it'll be a PITA, you may want to 
have your script call `RESET QUERY CACHE` at the beginning, and then include 
`SQL_NO_CACHE` in your SELECT statement(s).

More info here:
http://www.mysql.com/doc/en/Query_Cache_Status_and_Maintenance.html

And here:
http://www.mysql.com/doc/en/SELECT.html

HTH!
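[Editor's aside: concretely, the two statements above might look like this — the table and column names are placeholders, not from the thread:]

```sql
-- Empty and defragment the query cache before the timing run
RESET QUERY CACHE;

-- Ask the server not to serve this result from the query cache
-- (and not to store the result there either)
SELECT SQL_NO_CACHE id, name FROM my_table WHERE name LIKE 'a%';
```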


-- 
Mike Johnson
Web Developer
Smarter Living, Inc.
phone (617) 886-5539




Re: disabling optimizations to identify slow queries

2004-02-24 Thread vpendleton
Are you logging slow queries? Have you run an explain plan for the 
queries in question?
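[Editor's aside: for reference, enabling the slow query log in a 4.0-era my.cnf looks roughly like this; the path and threshold are illustrative, and long_query_time takes whole seconds on 4.0:]

```
[mysqld]
log-slow-queries = /var/log/mysql/slow.log
set-variable     = long_query_time=2
# also log queries that use no index at all
log-long-format
```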

 Original Message 

On 2/24/04, 10:29:33 AM, Bill Marrs [EMAIL PROTECTED] wrote regarding 
disabling optimizations to identify slow queries:





RE: disabling optimizations to identify slow queries

2004-02-24 Thread Bill Marrs
At 11:35 AM 2/24/2004, Mike Johnson wrote:
It sounds like query caching is working against you.

There are a variety of ways to get around it. While it'll be a PITA, you 
may want to have you script call `RESET QUERY CACHE` at the begining, and 
then include `SQL_NO_CACHE` in your SELECT statement(s).
I had thought that setting this in /etc/my.cnf (and restarting mysql) would 
disable the query cache:

set-variable = query_cache_type=0

I'll try the RESET as well, to make sure.

Actually, I just noticed that even after I restart mysql, the speed 
stays.  That doesn't make sense; maybe there is some other unknown 
factor influencing this.



At 11:39 AM 2/24/2004, [EMAIL PROTECTED] wrote:
Are you logging slow queries? Have you run an explain plan for the
queries in question?
Yes, but the problem is more that I'm doing a number of not-super-fast 
queries, so the accumulated effect is slow, but each query generally isn't 
that slow.

Yes, I'm using EXPLAIN... that's usually how I work on slow queries: run 
EXPLAIN, try adding an index, etc.  These queries already have indexes, 
but in some cases they aren't fast.



-bill



RE: disabling optimizations to identify slow queries

2004-02-24 Thread Keith C. Ivey
On 24 Feb 2004 at 12:00, Bill Marrs wrote:

 Actually, I just noticed that even after I restart mysql, the speed
 stays.  That doesn't make any sense, maybe there is some other unknown
 factor influencing this.

Sounds like it's your operating system's caching of the disk reads.

-- 
Keith C. Ivey [EMAIL PROTECTED]
Tobacco Documents Online
http://tobaccodocuments.org





RE: disabling optimizations to identify slow queries

2004-02-24 Thread Bill Marrs
At 12:07 PM 2/24/2004, Keith C. Ivey wrote:
Sounds like it's your operating system's caching of the disk reads.
Yikes... that would explain it.

um... anyone know how to disable disk caching on Linux 2.6 kernel?
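[Editor's aside: later 2.6 kernels (2.6.16 and up) grew a knob for this. It drops clean page-cache entries rather than disabling caching outright, which is usually what you want for reproducible cold-cache timings; run as root:]

```
# Flush dirty pages first so they become droppable
sync
# 1 = page cache, 2 = dentries and inodes, 3 = both
echo 3 > /proc/sys/vm/drop_caches
```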

-bill



Strange slow queries

2004-02-11 Thread Oscar Yen
Hi, all

I have a mysql server (dual P4 2.0G, 1G MEM, RH8.0, Mysql 4.0.12),
There are 2 tables defined as follow:

create table a (
  imgid int not null,
  parent int,
  imgtype char(3),
  img longtext,
  primary key (imgid),
  key (parent, imgid)
) type = innodb;

contains about 11000 rows, about 800M in size;

create table b (
  docid char(2) not null,
  dockey varchar(60) not null,
  owner varchar(8),
  data longtext,
  primary key (docid, dockey),
  key ind_docid (docid),
  key ind_dockey (dockey)
) type = innodb;

contains 30,000+ rows, about 2.8G in size.

SQL tested:

A1) select imgid, parent from a where parent = 10;
   returns 3357 rows in 0.08 sec.
A2) select imgid, parent, imgtype from a where parent = 10;
   returns 3357 rows in 8.32 sec!!!

A3) select imgid, parent from a where parent = 10 order by imgid;
   returns 3357 rows in 0.1 sec
A4) select imgid, parent, imgtype from a where parent = 10 order by imgid;
   returns 3357 rows in 25.88 sec!!!

A5) create table za select imgid, parent, imgtype from a;
   10102 rows in 1.71 sec.
A6) select imgid, parent, imgtype from za where parent = 10 order by imgid;
   3357 rows in 0.02 sec.

B1) select docid, dockey from b where dockey = 'MR';
   27786 rows in 0.15sec;
B2) select docid, dockey, owner from b where dockey='MR';
   27786 rows in 0.16sec;

B3) select distinct docid from b where dockey = 'MR';
   3426 rows in 85.47sec;
B4) create table zb select docid, dockey from b;
   30924 rows in 2.2 sec;
B5) select distinct docid from zb where dockey = 'MR';
   3426 rows in 0.24 sec.

Can anybody answer the following questions regarding the listed test results:

Q-1. What causes the performance downgrade when a single imgtype field is added to the select? 
(A1 vs A2, A3 vs A4)
Q-2. Why is there no noticeable performance downgrade on similar selects on table b? (B1 vs B2)
Q-3. How can the performance differences be explained, given that only a single BLOB field is 
involved? (A4 vs A6, and B3 vs B5)

Comment: I also ran the tests after changing the table type to MyISAM (MySQL native) for both 
table a and table b; the results were similar.

Thank you for your attention.

Oscar Yen.
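[Editor's aside, one plausible reading of these numbers rather than anything from the post: A1/A3/B1/B2 can be answered from the secondary index alone, while adding imgtype forces a lookup into rows interleaved with hundreds of megabytes of longtext, so each row costs a near-random disk read. If that's right, widening the index to cover the extra column should restore A1-class speed for the A2/A4 queries, at the cost of a larger index:]

```sql
-- Hypothetical covering index: lets
--   SELECT imgid, parent, imgtype FROM a WHERE parent = 10 ORDER BY imgid
-- be served entirely from the index, never touching the blob-laden rows.
ALTER TABLE a ADD KEY ind_parent_cover (parent, imgid, imgtype);
```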

