Re: Slow queries / inserts InnoDB

2012-10-09 Thread Cheng Xianming
Tello is right! Moreno, I suggest the following: if you want to query by 
"SUBSTR(ISN,2)", create a field named something like "isnInfo" and save 
SUBSTR(ISN,2) into that field when you insert. Don't forget to create an 
index on this field.
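A minimal sketch of that idea (the column type and length are assumptions; the
isnInfo name follows the suggestion above, and the extra column is filled by
the application at insert time):

  -- Hypothetical sketch: store SUBSTR(ISN, 2) in its own indexed column.
  ALTER TABLE SMS.TEMPDATADUMP
    ADD COLUMN isnInfo VARCHAR(32) NOT NULL DEFAULT '',
    ADD INDEX idx_isninfo (isnInfo);

  -- Backfill rows that are already loaded...
  UPDATE SMS.TEMPDATADUMP SET isnInfo = SUBSTR(ISN, 2);

  -- ...and populate it directly on every new insert:
  INSERT INTO SMS.TEMPDATADUMP (ISN, isnInfo /* , other columns... */)
  VALUES ('X12345678', SUBSTR('X12345678', 2) /* , ... */);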



On 2012-10-09 at 10:04 PM, Andrés Tello wrote:

You are forcing mysql to do full table scans with the substr...

Use explain to see that you aren't using any index.

Avoid the use of substr in the where clause by splitting your data, indexing
that field, and doing your query over that field.



That is why your query is so slow.

The slow insert is due to your OS...


On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno <
aespinosamor...@gmail.com> wrote:


Hi.



I have developed my first application with VB.NET at work (junior
developer) and basically it reads several thousands of lines from X number
of files, and inserts them into a mysql database.



The application consists of several steps:

1)  Dump all lines with all fields into a temp table.

a.   This works fine. Optimal speed for the hardware we have.

2)  Query the temp table to obtain certain data. I query by a unique ID and
collect all unique IDs (the isn field). If an ID matches my interests, I insert
it into an ArrayList.

a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
COUNT(SUBSTR(ISN,2)) = 4

b.  The isn is not unique per line, but per data (sms)

c.   Once I have all isn on an arraylist, I do the following query:

i.  SELECT select, my, fields, of, interest FROM SMS.TEMPDATADUMP
    WHERE SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1

d.  This obtains some data. During the process I query another table around
10 times per ISN.

e.  Here is the problem. If I have a few files to process (around
3000-4000 lines in total, a small array) these steps work fine, at good speed.
But if I have big files or a lot of files (more than 1 lines in total,
a big array), these steps are incredibly slow. Queries and inserts are too
slow: one to two inserts per second, whereas in the other case inserts run at
around 800 per second.



Our hardware is not optimized for a database server, but I don't have another
choice. It is basically a desktop computer:

Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.



I have tried some optimizations described on the MySQL Performance Blog,
without success.

Any way to optimize this?



Thank you very much in advance.





Adrián Espinosa Moreno.




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Slow queries / inserts InnoDB

2012-10-09 Thread Rick James
When creating the temp table, add another column, isn2, that is SUBSTR(ISN, 2).
Add
  INDEX(isn2, type)
Change the query to...
  isn2 = '" & isn & "' ORDER BY type LIMIT 1

If that does not work, please provide clearer details, including
SHOW CREATE TABLE
SHOW TABLE STATUS
EXPLAIN SELECT (with substitutions filled in)
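A hedged sketch of what the end result could look like once isn2 exists
(the column names my, fields, of, interest are the original post's
placeholders, and 'some-isn' stands in for the value the VB.NET code
concatenates into the statement):

  -- The aggregate step no longer wraps ISN in SUBSTR(), so it groups on the column:
  SELECT COUNT(*), isn2
  FROM SMS.TEMPDATADUMP
  WHERE error = 0
  GROUP BY isn2
  HAVING COUNT(*) = 4;

  -- Per-ISN lookup: with INDEX(isn2, type) both the WHERE and the ORDER BY
  -- can be resolved from the index.
  EXPLAIN SELECT my, fields, of, interest
  FROM SMS.TEMPDATADUMP
  WHERE isn2 = 'some-isn'
  ORDER BY type LIMIT 1;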

> -Original Message-
> From: Andrés Tello [mailto:mr.crip...@gmail.com]
> Sent: Tuesday, October 09, 2012 7:04 AM
> To: Adrián Espinosa Moreno
> Cc: mysql@lists.mysql.com
> Subject: Re: Slow queries / inserts InnoDB
> 
> You are forcing mysql to do full table scans with the substr...
> 
> Use explain to see that you aren't using any index.
> 
> Avoid the use of substr in the where clause by splitting your data,
> indexing that field, and doing your query over that field.
> 
> 
> 
> That is why your query is so slow.
> 
> The slow insert is due to your OS...
> 
> 
> On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno <
> aespinosamor...@gmail.com> wrote:
> 
> > Hi.
> >
> >
> >
> > I have developed my first application with VB.NET at work (junior
> > developer) and basically it reads several thousands of lines from X
> > number of files, and inserts them into a mysql database.
> >
> >
> >
> > The application consists of several steps:
> >
> > 1)  Dump all lines with all fields into a temp table.
> >
> > a.   This works fine. Optimal speed for the hardware we have.
> >
> > 2)  Query the temp table to obtain certain data. I query by a unique ID
> > and collect all unique IDs (the isn field). If an ID matches my interests,
> > I insert it into an ArrayList.
> >
> > a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
> > SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
> > COUNT(SUBSTR(ISN,2)) = 4
> >
> > b.  The isn is not unique per line, but per data (sms)
> >
> > c.   Once I have all isn on an arraylist, I do the following
> query:
> >
> >i.
> >  SELECT
> >  select, my, fields,of,interest  FROM SMS.TEMPDATADUMP WHERE
> > SUBSTR(ISN, 2) = '" & isn & "' ORDER BY type LIMIT 1
> >
> > d.  This obtains some data. During the process I query another table
> > around 10 times per ISN.
> >
> > e.  Here is the problem. If I have a few files to process (around
> > 3000-4000 lines in total, a small array) these steps work fine, at good
> > speed. But if I have big files or a lot of files (more than 1 lines in
> > total, a big array), these steps are incredibly slow. Queries and inserts
> > are too slow: one to two inserts per second, whereas in the other case
> > inserts run at around 800 per second.
> >
> >
> >
> > Our hardware is not optimized for database server, but I don't have
> > other choice. It is mostly a desktop computer
> >
> > Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.
> >
> >
> >
> > I have tried some optimizations commented in mysqlperformance blog
> > without success.
> >
> > Any way to optimize this?
> >
> >
> >
> > Thank you very much in advance.
> >
> >
> >
> >
> >
> > Adrián Espinosa Moreno.
> >

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Slow queries / inserts InnoDB

2012-10-09 Thread Andrés Tello
You are forcing mysql to do full table scans with the substr...

Use explain to see that you aren't using any index.
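For example, running the first GROUP BY query from the original post through
EXPLAIN makes the problem visible (a sketch; a plan showing type = ALL and
key = NULL indicates a full table scan):

  EXPLAIN SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(ISN,2)
  FROM SMS.TEMPDATADUMP
  WHERE error = 0
  GROUP BY SUBSTR(ISN,2)
  HAVING COUNT(SUBSTR(ISN,2)) = 4;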

Avoid the use of substr in the where clause by splitting your data, indexing
that field, and doing your query over that field.



That is why your query is so slow.

The slow insert is due to your OS...


On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno <
aespinosamor...@gmail.com> wrote:

> Hi.
>
>
>
> I have developed my first application with VB.NET at work (junior
> developer) and basically it reads several thousands of lines from X number
> of files, and inserts them into a mysql database.
>
>
>
> The application consists of several steps:
>
> 1)  Dump all lines with all fields into a temp table.
>
> a.   This works fine. Optimal speed for the hardware we have.
>
> 2)  Query the temp table to obtain certain data. I query by a unique ID and
> collect all unique IDs (the isn field). If an ID matches my interests, I
> insert it into an ArrayList.
>
> a.   Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM
> SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING
> COUNT(SUBSTR(ISN,2)) = 4
>
> b.  The isn is not unique per line, but per data (sms)
>
> c.   Once I have all isn on an arraylist, I do the following query:
>
>i.
>  SELECT
>  select, my, fields,of,interest  FROM SMS.TEMPDATADUMP WHERE SUBSTR(ISN, 2)
> = '" & isn & "' ORDER BY type LIMIT 1
>
> d.  This obtains some data. During the process I query another table around
> 10 times per ISN.
>
> e.  Here is the problem. If I have a few files to process (around
> 3000-4000 lines in total, a small array) these steps work fine, at good speed.
> But if I have big files or a lot of files (more than 1 lines in total,
> a big array), these steps are incredibly slow. Queries and inserts are too
> slow: one to two inserts per second, whereas in the other case inserts run at
> around 800 per second.
>
>
>
> Our hardware is not optimized for database server, but I don’t have other
> choice. It is mostly a desktop computer
>
> Intel core i5, windows 32 bits, 3GB RAM, one disk 500GB 7200rpm.
>
>
>
> I have tried some optimizations commented in mysqlperformance blog without
> success.
>
> Any way to optimize this?
>
>
>
> Thank you very much in advance.
>
>
>
>
>
> Adrián Espinosa Moreno.
>


Re: Slow queries when using left join

2010-03-19 Thread Johan De Meersman
2010/3/19 Olav Mørkrid 

> Dear MySQL forum.
>
> I have performance problems when using "left join x" combined with
> "where x.y is null", particularly when combining three tables this
> way.
>

With a left join, particularly when you're using *is (not) null*, you can't
use index selecting on your right table. That is, you're bound to do a
tablescan on what is essentially the cartesian product of your tables.

Every additional table only compounds the problem: 100x100 is 10.000, but
100x100x100 is 1.000.000.

Avoid left joins whenever possible - in some cases it's quicker to split out
the complex query and implement it in code with loops. Not always, though;
you'll have to apply some elbow grease to find the optimal solution.

The most recent example of this, was a hierarchical lookup query in Drupal's
taxonomy module: the hierarchy table was left-joined to itself five times.
Execution time on an unloaded machine was 0.54 seconds. By doing individual
lookups in a code loop until I got to the top level, I replaced that query
with a maximum of five (and usually less) 0.00 second ones over an existing
covering index.
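A minimal sketch of one step of that loop, assuming a Drupal-style
term_hierarchy(tid, parent) table (the names are an assumption here, not taken
from the thread); the application feeds the returned parent back in as the next
tid until it reaches the top:

  -- One lookup per hierarchy level; 42 stands in for the current term id.
  SELECT parent
  FROM term_hierarchy
  WHERE tid = 42;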


Another thing - and maybe one you should look at first - is whether you can
add more selective where-clauses for your base table. That doesn't always
stop at the actual data you want, either. Another example from here: for a
radiostation, there was a multiple left-join query to display the last 20
played songs on the homepage. However, the playlist table keeps growing, so
I got the website people to agree that it's pretty unlikely that songs from
yesterday end up in those 20: we added an index on the playdate and selected
on that. Boom, execution time down from 0.35 to 0.01. In addition, killing
off old playlist items would've been very beneficial, but this was not an
option due to business requirements. Shame, I love to delete people's data
:-D
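Roughly what that change looked like, with hypothetical table and column names
(playlist, playdate and song_title are illustrative only, not the real schema):

  -- Make the "last 20 songs" query selective on the play date:
  ALTER TABLE playlist ADD INDEX idx_playdate (playdate);

  SELECT song_title, playdate
  FROM playlist
  WHERE playdate >= CURDATE() - INTERVAL 1 DAY   -- songs from yesterday onward
  ORDER BY playdate DESC
  LIMIT 20;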


And, of course, check if you have indexes on the major parts of your where
clause. Selectivity brings speed.


I seem to have the order of obviousness in this mail wrong, though. Please
read it from bottom to top :-)


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel


Re: slow queries not being logged

2010-02-23 Thread Carsten Pedersen

You might want to read the comments to this posting: 

http://www.bitbybit.dk/carsten/blog/?p=116



Several tools/methods for controlling and analyzing the slow query log are

suggested there.



Best,



/ Carsten



On Tue, 23 Feb 2010 14:09:30 +0530, Ananda Kumar wrote:

> slow query log will also have sql's which are not using indexes (doing full
> table scan).
> May be those queries with "ZERO SECOND" run on small tables without using
> indexes.
>
> regards
> anandkl
>
> On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards wrote:
>
>> Hi All
>>
>> I found my problem and this was kind of a blonde moment for me...
>>
>> When configuring the log_slow_queries parameter, it was configured as
>> follows: log_slow_queries=1
>>
>> Thus the file being created is called 1, and the 1 does not mean it is
>> enabled.
>>
>> I have fixed this now but need to wait for a gap to reboot again to have
>> it set properly. (Have to live with the filename 1 for the time being.)
>>
>> I did however find something interesting while looking at the queries
>> being logged.
>>
>> The slow_query_time is set to 2 (2 seconds, I am assuming); however, all
>> the queries being logged state that they ran for 0 seconds.
>>
>> I am busy doing explain plans on some of them now but am not really sure
>> what to look for yet (rather new to MySQL and hope google will have some
>> answers).
>>
>> Thank you
>>
>> From: John Daisley [mailto:mg_s...@hotmail.com]
>> Sent: 23 February 2010 10:24 AM
>> To: machi...@rdc.co.za; mysql@lists.mysql.com
>> Subject: RE: slow queries not being logged
>>
>> > From: machi...@rdc.co.za
>> > To: mysql@lists.mysql.com
>> > Subject: slow queries not being logged
>> > Date: Tue, 23 Feb 2010 09:59:13 +0200
>> >
>> > Good day all
>> >
>> > I hope you can assist me with this one...
>> >
>> > We have a client where the slow query log was disabled.
>>
>> Slow query log is on the server only. If you are saying you have enabled
>> the slow query log and the server's query log is empty, can you post your
>> my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.
>>
>> > We noticed that the slow query amount (when viewing global status) has
>> > skyrocketed during the last 2 weeks, going up to over 2 million (from
>> > 160 million queries).
>> >
>> > We wanted to look at these queries to see if they can be optimised to
>> > reduce the amount, and went through the whole database restart routine
>> > to enable the slow query log again (they are running version 5.0 so had
>> > to restart).
>> >
>> > However, even though the slow query log is enabled, it is not logging
>> > the queries to the file specified.
>> >
>> > Can someone please assist in why this is not being done? I thought that
>> > it might be logging to a default filename, but there is only one slow
>> > queries log file in the directory and it is empty.
>> >
>> > Checking the global status again, it showed 29 000 slow queries since
>> > this morning (3 hours ago) but nothing in the logs.
>> >
>> > Your help will be appreciated.
>> >
>> > Regards

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: slow queries not being logged

2010-02-23 Thread Ananda Kumar
The slow query log will also contain SQL statements that are not using indexes
(doing full table scans).
Maybe those queries with "ZERO SECOND" run on small tables without using
indexes.

regards
anandkl

On Tue, Feb 23, 2010 at 2:02 PM, Machiel Richards wrote:

> Hi All
>
>
>
>I found my problem and this was kind of a blonde moment for
> me...
>
>
>
>When configuring the log_slow_queries parameter, it was
> configured as follows:  log_slow_queries=1
>
>
>
>This the file being created is called 1 and the 1 does not
> mean it is enabled.
>
>
>
>I have fixed this now but need to wait for a gap to reboot
> again to have it set properly. (have to live with the filename 1 for the
> time being.)
>
>
>
>I did however find something interesting though, while
> looking at the queries being logged.
>
>
>
>The slow_query_time is set to 2 (2 seconds i am assuming)
> however all the queries being logged states that it ran for 0 seconds.
>
>
>
>I am busy doing explain plans on some of them now but not
> really sure what to look for yet (Rather new to MySQL and hope google will
> have some answers J  )
>
>
>
>
>
> Thank you
>
>
>
>
>
>
>
>
>
> From: John Daisley [mailto:mg_s...@hotmail.com]
> Sent: 23 February 2010 10:24 AM
> To: machi...@rdc.co.za; mysql@lists.mysql.com
> Subject: RE: slow queries not being logged
>
>
>
>
>
> > From: machi...@rdc.co.za
> > To: mysql@lists.mysql.com
> > Subject: slow queries not being logged
> > Date: Tue, 23 Feb 2010 09:59:13 +0200
> >
> > Good day all
> >
> >
> >
> > I hope you can assist me with this one...
> >
> >
> >
> > We have a client where the slow query log was disabled.
> >
>
> Slow query log is on the server only.  If you are saying you have enabled
> the slow query log and the servers query log is empty can you post your
> my.cnf/my.ini file. Also make sure --long-query-time is set appropriately.
>
>
> >
> >
> > We noticed that the slow query amount (when viewing global
> > status) has skyrocketed during the last 2 weeks going up to over 2
> million
> > (from 160 million queries).
> >
> >
> >
> > We wanted to look at these queries to see if it can be
> > optimised to reduce the amount and went through the whole database
> restart
> > routine to enable the slow query log again (they are running version 5.0
> so
> > had to restart).
> >
> >
> >
> >
> >
> > However, even though the slow query log is enabled, it is
> > not logging the queries to the file specified.
> >
> >
> >
> > Can someone please assist in why this is not being done? I
> > thought that it might be logging to a default filename but there is only
> one
> > slow queries log file in the directory and it is empty.
> >
> >
> >
> > Checking the global status again, it showed 29 000 slow
> > queries since this morning (3 hours ago) but nothing in the logs.
> >
> >
> >
> >
> >
> > Your help will be appreciated.
> >
> >
> >
> >
> >
> > Regards
> >
> >
> >
>
>
>


RE: slow queries not being logged

2010-02-23 Thread Machiel Richards
Hi All

 

I found my problem and this was kind of a blonde moment for
me...

 

When configuring the log_slow_queries parameter, it was
configured as follows:  log_slow_queries=1

 

Thus the file being created is called 1, and the 1 does not
mean it is enabled.

 

I have fixed this now but need to wait for a gap to reboot
again to have it set properly. (have to live with the filename 1 for the
time being.)
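Once the server has been restarted with the corrected setting, the state can be
sanity-checked from any client (a sketch using the 5.0-era variable names):

  -- ON/OFF flag and the threshold in seconds:
  SHOW GLOBAL VARIABLES LIKE 'log_slow_queries';
  SHOW GLOBAL VARIABLES LIKE 'long_query_time';

  -- This counter should keep growing if statements are being classified as slow:
  SHOW GLOBAL STATUS LIKE 'Slow_queries';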

 

I did however find something interesting though, while
looking at the queries being logged.

 

The slow_query_time is set to 2 (2 seconds, I am assuming);
however, all the queries being logged state that they ran for 0 seconds.

 

I am busy doing explain plans on some of them now but not
really sure what to look for yet (Rather new to MySQL and hope google will
have some answers J  )

 

 

Thank you

 

 

 

 

From: John Daisley [mailto:mg_s...@hotmail.com] 
Sent: 23 February 2010 10:24 AM
To: machi...@rdc.co.za; mysql@lists.mysql.com
Subject: RE: slow queries not being logged

 



> From: machi...@rdc.co.za
> To: mysql@lists.mysql.com
> Subject: slow queries not being logged
> Date: Tue, 23 Feb 2010 09:59:13 +0200
> 
> Good day all
> 
> 
> 
> I hope you can assist me with this one...
> 
> 
> 
> We have a client where the slow query log was disabled.
> 

Slow query log is on the server only.  If you are saying you have enabled
the slow query log and the servers query log is empty can you post your
my.cnf/my.ini file. Also make sure --long-query-time is set appropriately.


> 
> 
> We noticed that the slow query amount (when viewing global
> status) has skyrocketed during the last 2 weeks going up to over 2 million
> (from 160 million queries).
> 
> 
> 
> We wanted to look at these queries to see if it can be
> optimised to reduce the amount and went through the whole database restart
> routine to enable the slow query log again (they are running version 5.0
so
> had to restart).
> 
> 
> 
> 
> 
> However, even though the slow query log is enabled, it is
> not logging the queries to the file specified.
> 
> 
> 
> Can someone please assist in why this is not being done? I
> thought that it might be logging to a default filename but there is only
one
> slow queries log file in the directory and it is empty.
> 
> 
> 
> Checking the global status again, it showed 29 000 slow
> queries since this morning (3 hours ago) but nothing in the logs.
> 
> 
> 
> 
> 
> Your help will be appreciated.
> 
> 
> 
> 
> 
> Regards
> 
> 
> 




RE: slow queries not being logged

2010-02-23 Thread John Daisley



> From: machi...@rdc.co.za
> To: mysql@lists.mysql.com
> Subject: slow queries not being logged
> Date: Tue, 23 Feb 2010 09:59:13 +0200
> 
> Good day all
> 
>  
> 
> I hope you can assist me with this one...
> 
>  
> 
> We have a client where the slow query log was disabled.
> 

The slow query log is on the server only.  If you are saying you have enabled the 
slow query log and the server's query log is empty, can you post your 
my.cnf/my.ini file? Also make sure --long-query-time is set appropriately.
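For what it's worth, long_query_time can be checked, and on recent servers
adjusted, without a restart (a sketch; 2 seconds is just an example value, and
a SET GLOBAL only affects connections opened afterwards):

  SHOW GLOBAL VARIABLES LIKE 'long_query_time';

  -- Adjust the threshold at runtime:
  SET GLOBAL long_query_time = 2;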


>  
> 
> We noticed that the slow query amount (when viewing global
> status) has skyrocketed during the last 2 weeks going up to over 2 million
> (from 160 million queries).
> 
>  
> 
> We wanted to look at these queries to see if it can be
> optimised to reduce the amount and went through the whole database restart
> routine to enable the slow query log again (they are running version 5.0 so
> had to restart).
> 
>  
> 
>  
> 
> However, even though the slow query log is enabled, it is
> not logging the queries to the file specified.
> 
>  
> 
> Can someone please assist in why this is not being done? I
> thought that it might be logging to a default filename but there is only one
> slow queries log file in the directory and it is empty.
> 
>  
> 
> Checking the global status again, it showed 29 000 slow
> queries since this morning (3 hours ago) but nothing in the logs.
> 
>  
> 
>  
> 
> Your help will be appreciated.
> 
>  
> 
>  
> 
> Regards
> 
>  
> 
  

Re: Slow Queries

2008-04-24 Thread Sebastian Mendel

D Hill schrieb:

On Thu, 24 Apr 2008 at 08:58 +0200, [EMAIL PROTECTED] confabulated:


D Hill schrieb:


I have something I am trying to resolve with an over abundant number 
of slow queries. Perhaps it is because of some additional indexes 
needed. As soon as I enabled the option 
'log_queries_not_using_indexes = 1' in the configuration file, I 
started getting messages relating to the select query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
   '$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
   '[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');


MySQL version is 5.0.51. Sorry I forgot to mention that. I did change 
the query to what you have shown. I'll have to wait till the server 
comes under a load to tell. I have noted when the last slow query was 
logged for this and will see.



You could also vary the index length of wite_what.


Right now the index is for the full length of the field (128). I just 
ran a query for the length of wite_what and the maximum length so far is 
34. So, I will cut the index length down to 64.



and what indexes do you have currently exactly?


id -  is the primary and has an index type btree
wite_what - is a unique and has an index type of btree


So this all looks OK. I am not sure whether the query time includes the time 
the query needs to wait for locked tables...


--
Sebastian

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-24 Thread D Hill

On Thu, 24 Apr 2008 at 10:16 -0400, [EMAIL PROTECTED] confabulated:


On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
<[EMAIL PROTECTED]> wrote:

 IMHO not in this case, cause it is just a simple "WHERE field IN ()"


I'm pretty sure that just looks like a bunch of ORs to MySQL.  If it
didn't use the index with OR, it won't use it with IN.

What usually works is to change it to UNION:

SELECT wite_what, wite_desc FROM witelist
  WHERE wite_what = '$oct1'
UNION
SELECT wite_what, wite_desc FROM witelist
  WHERE wite_what = '$oct1.$oct2'
UNION
...etc.


I'm still new to MySQL. The input is greatly appreciated.

It took some minor thought and the documentation, but I understand what is 
going on with the UNION.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-24 Thread Perrin Harkins
On Thu, Apr 24, 2008 at 2:54 AM, Sebastian Mendel
<[EMAIL PROTECTED]> wrote:
>  IMHO not in this case, cause it is just a simple "WHERE field IN ()"

I'm pretty sure that just looks like a bunch of ORs to MySQL.  If it
didn't use the index with OR, it won't use it with IN.

What usually works is to change it to UNION:

SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1'
UNION
SELECT wite_what, wite_desc FROM witelist
   WHERE wite_what = '$oct1.$oct2'
UNION
...etc.

- Perrin

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-24 Thread D Hill

On Thu, 24 Apr 2008 at 08:58 +0200, [EMAIL PROTECTED] confabulated:


D Hill schrieb:


I have something I am trying to resolve with an over abundant number of 
slow queries. Perhaps it is because of some additional indexes needed. As 
soon as I enabled the option 'log_queries_not_using_indexes = 1' in the 
configuration file, I started getting messages relating to the select 
query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
   '$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
   '[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');


MySQL version is 5.0.51. Sorry I forgot to mention that. I did change the 
query to what you have shown. I'll have to wait till the server comes 
under a load to tell. I have noted when the last slow query was logged for 
this and will see.



You could also vary the index length of wite_what.


Right now the index is for the full length of the field (128). I just ran 
a query for the length of wite_what and the maximum length so far is 34. 
So, I will cut the index length down to 64.
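A sketch of that change (the existing index name is assumed to be wite_what;
note that shortening a UNIQUE index to a prefix also changes what counts as a
duplicate, so it is worth double-checking the data first):

  -- Confirm how long the values actually get before choosing a prefix length:
  SELECT MAX(LENGTH(wite_what)) FROM witelist;

  -- Hypothetical rebuild of the index on a 64-character prefix:
  ALTER TABLE witelist
    DROP INDEX wite_what,
    ADD UNIQUE INDEX wite_what (wite_what(64));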



and what indexes do you have currently exactly?


id -  is the primary and has an index type btree
wite_what - is a unique and has an index type of btree


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-23 Thread Sebastian Mendel

D Hill schrieb:


I have something I am trying to resolve with an over abundant number of 
slow queries. Perhaps it is because of some additional indexes needed. 
As soon as I enabled the option 'log_queries_not_using_indexes = 1' in 
the configuration file, I started getting messages relating to the 
select query:


  SELECT wite_what, wite_desc FROM witelist
WHERE
  wite_what = '$oct1' OR
  wite_what = '$oct1.$oct2' OR
  wite_what = '$oct1.$oct2.$oct3' OR
  wite_what = '$oct1.$oct2.$oct3.$oct4' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '[EMAIL PROTECTED]' OR
  wite_what = '$from_dom' OR
  wite_what = '$rcpt_dom';


did you try this (the result depends on your MySQL version):

WHERE wite_what IN ('$oct1', '$oct1.$oct2', '$oct1.$oct2.$oct3',
'$oct1.$oct2.$oct3.$oct4', '[EMAIL PROTECTED]',
'[EMAIL PROTECTED]', '$from_dom', '$rcpt_dom');

You could also vary the index length of wite_what.

and what indexes do you have currently exactly?

--
Sebastian Mendel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-23 Thread Sebastian Mendel

Perrin Harkins schrieb:

On Wed, Apr 23, 2008 at 9:22 PM, D Hill <[EMAIL PROTECTED]> wrote:

 Can anyone shed some light if I should index wite_desc to speed things up?


No, since you don't use that column at all.  If you're not on MySQL 5,
upgrading to MySQL 5 will help.  Otherwise, your best bet is to
rewrite the query as UNION clauses with one of your WHERE conditions
in each.  I know it sounds crazy, but before MySQL 5 the use of
indexes with OR queries was not very good.


IMHO not in this case, cause it is just a simple "WHERE field IN ()"

--
Sebastian Mendel

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries

2008-04-23 Thread Perrin Harkins
On Wed, Apr 23, 2008 at 9:22 PM, D Hill <[EMAIL PROTECTED]> wrote:
>  Can anyone shed some light if I should index wite_desc to speed things up?

No, since you don't use that column at all.  If you're not on MySQL 5,
upgrading to MySQL 5 will help.  Otherwise, your best bet is to
rewrite the query as UNION clauses with one of your WHERE conditions
in each.  I know it sounds crazy, but before MySQL 5 the use of
indexes with OR queries was not very good.

- Perrin

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris <[EMAIL PROTECTED]> wrote:


> Unfortunately didn't that help, it leads to:
> id: 1   select_type: SIMPLE   table: ps   type: range
> possible_keys: phrase_search,id_search,phrase_date
> key: id_search   key_len: 3   ref: NULL   rows: 3836930
> Extra: Using where; Using temporary; Using filesort

Yeh it's finding a lot more rows there which isn't what you want so the
extra time isn't surprising.


Does rewriting the query to be an inner join help?

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw INNER JOIN phrase_searches ps ON
(ps.phrase_id=pw.phrase_id) WHERE
pw.word_id IN (966,1,1250,1741) AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

or even:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
pw.phrase_id = ps.phrase_id AND pw.word_id IN (966,1,1250,1741) AND
ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(which puts the join between the two tables first).



That didn't help either. Same number of rows as my first join and
about the same speed as well (only a few seconds' difference when
executed).



That would help with this discussion too:
http://lists.mysql.com/mysql/201015

;)


Yes, it'd be sweet if that MySQL internals guru revealed her/him-self
from the cloud of guruness and spoke the true way of doing it.

What pisses me off most is that 'grep -E "^word$| word$|^word | word "
2006/07/*/phrases |wc -l' is so much quicker than the db :(

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries

2006-08-17 Thread Chris



Unfortunately didn't that help, it leads to:
id: 1   select_type: SIMPLE   table: ps   type: range
possible_keys: phrase_search,id_search,phrase_date
key: id_search   key_len: 3   ref: NULL   rows: 3836930
Extra: Using where; Using temporary; Using filesort


Yeh it's finding a lot more rows there which isn't what you want so the 
extra time isn't surprising.



Does rewriting the query to be an inner join help?

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw INNER JOIN phrase_searches ps ON 
(ps.phrase_id=pw.phrase_id) WHERE

pw.word_id IN (966,1,1250,1741) AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

or even:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
pw.phrase_id = ps.phrase_id AND pw.word_id IN (966,1,1250,1741) AND
ps.search_date >= '2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(which puts the join between the two tables first).


That would help with this discussion too: 
http://lists.mysql.com/mysql/201015


;)

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries

2006-08-17 Thread Jon Molin

On 8/17/06, Chris <[EMAIL PROTECTED]> wrote:

Jon Molin wrote:
> Hi list
>
> I have 5 tables:
>
> words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
> rows) with the keys:
> PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)
>
> phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
> ~11M rows) with the keys:
> PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)
>
> phrase_words (phrase_id, word_id) (has ~31M rows) with:
> UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
> KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)
>
> word_searches (word_id, search_date date, search hour char(2), amount
> smallint, type char(8), location char(2)) with:
> KEY `word_search` (`word_id`),
> KEY `id_search` (`search_date`),
> KEY `word_date` (`word_id`,`search_date`)
>
> (and a similar for phrase_searches, these two tables are merge tables
> with one table for each month, each table having 15-30M rows)
>
> phrases are built of  "words" identified by phrase_words (these are
> not human language words and phrases but rather random bytes where
> some are human readable).
>
> Now, I'm trying to find out "how many times has word 1..n been
> searched for and how many times has phrases containing 1..n been
> searched for?"
>
> These queries take a really long time to execute, first I select for the
> words:
> explain sELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
> amount FROM words w, word_searches ws WHERE
> ws.word_id=w.word_id AND w.word IN ("p", "xyz", "zzz", "abc") AND
>  ws.search_date >= '2006-07-17' AND ws.search_date <=
> '2006-08-16' group by ws.word_id;
> 
> id: 1   select_type: SIMPLE   table: w    type: range
> possible_keys: PRIMARY,word_ind   key: word_ind   key_len: 42
> ref: NULL   rows: 4   Extra: Using where; Using temporary; Using filesort
>
> id: 1   select_type: SIMPLE   table: ws   type: ref
> possible_keys: word_search,id_search,word_date   key: word_search   key_len: 4
> ref: statistics.w.word_id   rows: 15   Extra: Using where
>
>
> and then for phrases:
> explain SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
> phrase_words pw, phrase_searches ps WHERE
>pw.word_id IN (966,1,1250,1741) AND
>pw.phrase_id = ps.phrase_id AND ps.search_date >=
> '2006-07-17' AND ps.search_date <= '2006-08-16'
>GROUP by pw.word_id;
> 
> id: 1   select_type: SIMPLE   table: pw   type: range
> possible_keys: phrase_ind,word,phrase   key: word   key_len: 4
> ref: NULL   rows: 226847   Extra: Using where
>
> id: 1   select_type: SIMPLE   table: ps   type: ref
> possible_keys: phrase_search,id_search,phrase_date   key: phrase_search
> key_len: 4   ref: statistics.pw.phrase_id   rows: 15   Extra: Using where

The problem is it's picking the "word" index which apparently is
returning 226,000+ areas.

Test this:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps IGNORE INDEX (word) WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(that should ignore the 'word' index and instead use the 'phrase' index).



Unfortunately didn't that help, it leads to:
id: 1   select_type: SIMPLE   table: ps   type: range
possible_keys: phrase_search,id_search,phrase_date
key: id_search   key_len: 3   ref: NULL   rows: 3836930
Extra: Using where; Using temporary; Using filesort

Re: Slow queries

2006-08-17 Thread Chris

Jon Molin wrote:

Hi list

I have 5 tables:

words (word_id int auto_increment, word varbinary(40)) (has ~3.5M
rows) with the keys:
PRIMARY KEY  (`word_id`),UNIQUE KEY `word_ind` (`word`)

phrases (phrase_id int auto_increment, phrase varbinary(100)) (has
~11M rows) with the keys:
PRIMARY KEY  (`phrase_id`),UNIQUE KEY `phrase_ind` (`phrase`)

phrase_words (phrase_id, word_id) (has ~31M rows) with:
UNIQUE KEY `phrase_ind` (`phrase_id`,`word_id`)
KEY `word` (`word_id`), KEY `phrase` (`phrase_id`)

word_searches (word_id, search_date date, search hour char(2), amount
smallint, type char(8), location char(2)) with:
KEY `word_search` (`word_id`),
KEY `id_search` (`search_date`),
KEY `word_date` (`word_id`,`search_date`)

(and a similar for phrase_searches, these two tables are merge tables
with one table for each month, each table having 15-30M rows)

phrases are built of  "words" identified by phrase_words (these are
not human language words and phrases but rather random bytes where
some are human readable).

Now, I'm trying to find out "how many times has word 1..n been
searched for and how many times has phrases containing 1..n been
searched for?"

These queries take a really long time to execute, first I select for the 
words:

explain sELECT w.word as word, w.word_id as word_id, sum(ws.amount) as
amount FROM words w, word_searches ws WHERE
ws.word_id=w.word_id AND w.word IN ("p", "xyz", "zzz", "abc") AND
 ws.search_date >= '2006-07-17' AND ws.search_date <=
'2006-08-16' group by ws.word_id;
id: 1   select_type: SIMPLE   table: w    type: range
possible_keys: PRIMARY,word_ind   key: word_ind   key_len: 42
ref: NULL   rows: 4   Extra: Using where; Using temporary; Using filesort

id: 1   select_type: SIMPLE   table: ws   type: ref
possible_keys: word_search,id_search,word_date   key: word_search   key_len: 4
ref: statistics.w.word_id   rows: 15   Extra: Using where



and then for phrases:
explain SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps WHERE
   pw.word_id IN (966,1,1250,1741) AND
   pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
   GROUP by pw.word_id;
id: 1   select_type: SIMPLE   table: pw   type: range
possible_keys: phrase_ind,word,phrase   key: word   key_len: 4
ref: NULL   rows: 226847   Extra: Using where

id: 1   select_type: SIMPLE   table: ps   type: ref
possible_keys: phrase_search,id_search,phrase_date   key: phrase_search
key_len: 4   ref: statistics.pw.phrase_id   rows: 15   Extra: Using where


The problem is it's picking the "word" index which apparently is 
returning 226,000+ areas.


Test this:

EXPLAIN SELECT pw.word_id as word_id, sum(ps.amount) as amount FROM
phrase_words pw, phrase_searches ps IGNORE INDEX (word) WHERE
pw.word_id IN (966,1,1250,1741) AND
pw.phrase_id = ps.phrase_id AND ps.search_date >=
'2006-07-17' AND ps.search_date <= '2006-08-16'
GROUP by pw.word_id;

(that should ignore the 'word' index and instead use the 'phrase' index).
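One hedged note on the syntax: index hints bind to the table reference they
immediately follow, so as written the IGNORE INDEX clause attaches to
phrase_searches ps rather than to phrase_words pw, which is where the word
index lives. A sketch with the hint moved onto pw:

  EXPLAIN SELECT pw.word_id AS word_id, SUM(ps.amount) AS amount
  FROM phrase_words pw IGNORE INDEX (word)
  INNER JOIN phrase_searches ps ON ps.phrase_id = pw.phrase_id
  WHERE pw.word_id IN (966, 1, 1250, 1741)
    AND ps.search_date >= '2006-07-17'
    AND ps.search_date <= '2006-08-16'
  GROUP BY pw.word_id;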

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries Log and Index-less queries

2005-08-30 Thread Alexey Polyakov
> 
> 
> I have a question regarding the slow queries log, and queries not using 
> index.
> 
> I have a small table, with say 10 entries, like that :
> ID | Element
> -
> 1 | One
> 2 | Two
> 3 | Three
> 4 | Four
> 5 | Five
> 6 | Six
> 7 | Seven
> 8 | Eight
> 9 | Nine
> 10 | Ten
> 
> I want to get all those entries:
> SELECT ID, Element FROM tblentries;
> 
> The problem is that this query, even if very fast, is logged in the slow
> query log because it does not use an index (I activated this option in the
> slow query log). Wouldn't it be better if such queries were logged only
> when there is a WHERE, ORDER or GROUP/HAVING clause?

  The slow query log, especially with log-long-format turned on, isn't made for
manual browsing anyway. You'll have to use mysqldumpslow or some custom-made
script to analyze it. Queries that don't have 'where' are easy to filter out then.
 
Also, is it better to do :
> SELECT ID, Element FROM tblentries;
> or
> SELECT ID, Element FROM tblentries WHERE ID > 0;
> 
> (In this last case, it won't be logged in the slow query log because it
> uses an index...)

 It won't be logged if it actually uses an index. In your example it won't
use the index; a full table scan will be used instead, because the query
optimizer is able to determine that all records match the where condition.
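A minimal way to confirm which form gets an index and which gets a scan, using
the example table from the post:

  -- In the plan output, type = ALL with key = NULL indicates a full table scan.
  EXPLAIN SELECT ID, Element FROM tblentries;
  EXPLAIN SELECT ID, Element FROM tblentries WHERE ID > 0;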
   
> -- 
> Alexey Polyakov


Re: Slow queries, why?

2005-05-05 Thread Gleb Paharenko
Hello.



There could be a lot of reasons for such a delay. First, you

should switch to bulk inserts and perform all operations as a single

transaction. Avoid usage of the autoextended or per-table tablespaces.

Are you able to upgrade? There could be some performance improvements

in the newer versions.
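A minimal sketch of what bulk inserts inside a single transaction look like
(table and column names are placeholders, not from the poster's schema):

  START TRANSACTION;

  -- Multi-row INSERTs cut per-statement overhead; one COMMIT at the end avoids
  -- paying the InnoDB log flush on every row.
  INSERT INTO my_table (col_a, col_b) VALUES
    (1, 'first'),
    (2, 'second'),
    (3, 'third');
  -- ...repeat in batches of a few hundred to a few thousand rows...

  COMMIT;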





[EMAIL PROTECTED] wrote:

> 

> Hi,

> 

>I have an interesting problem, i.e upto 20k data is inserted in 20

> min. But for 39k it took 3.5 hours. Could you please help me in this,

> what are all the possible scenarios which leads to this kind of

> problems.  Is there any fine tuning mechanism in Mysql 4.0.23 with

> innodb?

> 

> Please help me in this, it is very urgent.

> 

> Thanks,

> Narasimha

> 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-04 Thread Joseph Cochran
Thanks! Explain and InnoDB monitor were exactly what I needed to
diagnose and fix the problem! In case you were curious, the issue was
that the statement I was expecting to run was not the statement that
was running, but the first hundred and some-odd characters in both
were the same. Using the monitor I was able to see that the wrong
thing was running.

Some SELECTs are still taking longer than they should, but I have some
new tools at my disposal, which makes me very happy.

-- Joe

On 5/4/05, Gleb Paharenko <[EMAIL PROTECTED]> wrote:
> Hello.
> 
> 
> > We're running MySQL 4.11 on a machine with 2GB memory, the table is
> 
> > InnoDB with a compound primary key, and additional indexes on all rows
> 
> > with searchable options in the API. Any generic advice or admin tools
> 
> > would be great.
> 
> Use EXPLAIN to determine how efficient your indexes are. Using a lot of
> 
> keys could slow down the INSERT operations but fasten the SELECTs.
> 
> InnoDB monitors might be helpful in your case as well. See:
> 
>   http://dev.mysql.com/doc/mysql/en/explain.html
> 
>   http://dev.mysql.com/doc/mysql/en/innodb-monitor.html
> 
> 
> Joseph Cochran <[EMAIL PROTECTED]> wrote:
> 
> > So here's my situation: we have a database that has a table of about 5
> 
> > million rows. To put a new row into the table, I do an INSERT ...
> 
> > SELECT, pulling data from one row in the table to seed the data for
> 
> > the new row. When there are no active connections to the DB other than
> 
> > the one making the INSERT, it runs like a charm. But during normal
> 
> > daytime operation, when we run around 50 connections (most sleeping at
> 
> > any one time), it takes up to two minutes to do, and ends up locking
> 
> > any other inserts or updates against that table for the entire time.
> 
> >
> 
> > I'll get into more specifics if they're required, but I wanted to ask
> 
> > in general if MySQL has tools to diagnose this, or if anyone has had
> 
> > general situations like this. In SQL Server (which is where I have
> 
> > most of my experience) I could use the trace tool and the Query
> 
> > Analyzer to tell what the execution plan for the query was and thus
> 
> > what's stalling it (an index gone bad, a weird locking situation,
> 
> > etc).
> 
> >
> 
> > We're running MySQL 4.11 on a machine with 2GB memory, the table is
> 
> > InnoDB with a compound primary key, and additional indexes on all rows
> 
> > with searchable options in the API. Any generic advice or admin tools
> 
> > would be great.
> 
> >
> 
> > -- Joe
> 
> >
> 
> --
> For technical support contracts, goto https://order.mysql.com/?ref=ensita
> This email is sponsored by Ensita.NET http://www.ensita.net/
>__  ___ ___   __
>   /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
>  / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
> /_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
><___/   www.mysql.com
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
> 
>

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-04 Thread David Griffiths
Yes, indexes slow down inserts (or updates that change the value of a 
column that is indexed).

Also, remember that MySQL only uses one index per per table in a query. 
So if there are some columns in your table that are indexed, but,

1) Have poor cardinality (number of distinct values - low cardinality 
means there aren't many distinct values)
2) Are only used in a where clause with another column that has good 
cardinality

then they are an excellent candidate for removal.
While "EXPLAIN" is great for queries, it won't help much with an insert; 
it might be useful for figuring out what indexes are used, and which 
ones aren't.

Use "show innodb status" to get an idea of what's going on (Gleb 
suggested it in the link to the innodb monitor).

You should also post the relevant parts of your my.cnf file; have you 
seen this equation before:

Memory Used By MySQL = Innodb_buffer_pool_size + key_buffer_size + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

Use it to calculate how much memory you are using.
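The inputs to that formula can be read off the running server; a quick sketch:

  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
  SHOW VARIABLES LIKE 'key_buffer_size';
  SHOW VARIABLES LIKE 'max_connections';
  SHOW VARIABLES LIKE 'sort_buffer_size';
  SHOW VARIABLES LIKE 'read_buffer_size';
  SHOW VARIABLES LIKE 'binlog_cache_size';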
Finally, read up on phantom reads: 
http://sunsite.mff.cuni.cz/MIRRORS/ftp.mysql.com/doc/en/InnoDB_Next-key_locking.html

This might be what's happening.
David
Gleb Paharenko wrote:
Hello.

 

We're running MySQL 4.11 on a machine with 2GB memory, the table is
   

 

InnoDB with a compound primary key, and additional indexes on all rows
   

 

with searchable options in the API. Any generic advice or admin tools
   

 

would be great.
   


Use EXPLAIN to determine how efficient your indexes are. Using a lot of
keys could slow down the INSERT operations but fasten the SELECTs. 

InnoDB monitors might be helpful in your case as well. See:
 http://dev.mysql.com/doc/mysql/en/explain.html
 http://dev.mysql.com/doc/mysql/en/innodb-monitor.html



Joseph Cochran <[EMAIL PROTECTED]> wrote:
 

So here's my situation: we have a database that has a table of about 5
   

 

million rows. To put a new row into the table, I do an INSERT ...
   

 

SELECT, pulling data from one row in the table to seed the data for
   

 

the new row. When there are no active connections to the DB other than
   

 

the one making the INSERT, it runs like a charm. But during normal
   

 

daytime operation, when we run around 50 connections (most sleeping at
   

 

any one time), it takes up to two minutes to do, and ends up locking
   

 

any other inserts or updates against that table for the entire time.
   

 

 

I'll get into more specifics if they're required, but I wanted to ask
   

 

in general if MySQL has tools to diagnose this, or if anyone has had
   

 

general situations like this. In SQL Server (which is where I have
   

 

most of my experience) I could use the trace tool and the Query
   

 

Analyzer to tell what the execution plan for the query was and thus
   

 

what's stalling it (an index gone bad, a weird locking situation,
   

 

etc).
   

 

 

We're running MySQL 4.11 on a machine with 2GB memory, the table is
   

 

InnoDB with a compound primary key, and additional indexes on all rows
   

 

with searchable options in the API. Any generic advice or admin tools
   

 

would be great.
   

 

 

-- Joe
   

 


 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Slow queries, why?

2005-05-04 Thread lakshmi.narasimharao

Hi,

I have an interesting problem: up to 20k records are inserted in 20
min, but 39k took 3.5 hours. Could you please help me with this - what are
all the possible scenarios which lead to this kind of problem? Is there any
fine-tuning mechanism in MySQL 4.0.23 with InnoDB?

Please help me in this, it is very urgent.

Thanks,
Narasimha

-Original Message-
From: Gleb Paharenko [mailto:[EMAIL PROTECTED]
Sent: Wednesday, May 04, 2005 1:11 PM
To: mysql@lists.mysql.com
Subject: Re: Slow queries, why?

Hello.

> We're running MySQL 4.11 on a machine with 2GB memory, the table is
> InnoDB with a compound primary key, and additional indexes on all rows
> with searchable options in the API. Any generic advice or admin tools
> would be great.

Use EXPLAIN to determine how efficient your indexes are. Using a lot of
keys could slow down the INSERT operations but fasten the SELECTs.
InnoDB monitors might be helpful in your case as well. See:
  http://dev.mysql.com/doc/mysql/en/explain.html
  http://dev.mysql.com/doc/mysql/en/innodb-monitor.html



Joseph Cochran <[EMAIL PROTECTED]> wrote:
> So here's my situation: we have a database that has a table of about 5
> million rows. To put a new row into the table, I do an INSERT ...
> SELECT, pulling data from one row in the table to seed the data for
> the new row. When there are no active connections to the DB other than
> the one making the INSERT, it runs like a charm. But during normal
> daytime operation, when we run around 50 connections (most sleeping at
> any one time), it takes up to two minutes to do, and ends up locking
> any other inserts or updates against that table for the entire time.
>
> I'll get into more specifics if they're required, but I wanted to ask
> in general if MySQL has tools to diagnose this, or if anyone has had
> general situations like this. In SQL Server (which is where I have
> most of my experience) I could use the trace tool and the Query
> Analyzer to tell what the execution plan for the query was and thus
> what's stalling it (an index gone bad, a weird locking situation,
> etc).
>
> We're running MySQL 4.11 on a machine with 2GB memory, the table is
> InnoDB with a compound primary key, and additional indexes on all rows
> with searchable options in the API. Any generic advice or admin tools
> would be great.
>
> -- Joe
>


--
For technical support contracts, goto
https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-04 Thread Gleb Paharenko
Hello.



> We're running MySQL 4.11 on a machine with 2GB memory, the table is

> InnoDB with a compound primary key, and additional indexes on all rows

> with searchable options in the API. Any generic advice or admin tools

> would be great.



Use EXPLAIN to determine how efficient your indexes are. Using a lot of

keys could slow down the INSERT operations but fasten the SELECTs. 

InnoDB monitors might be helpful in your case as well. See:

  http://dev.mysql.com/doc/mysql/en/explain.html

  http://dev.mysql.com/doc/mysql/en/innodb-monitor.html







Joseph Cochran <[EMAIL PROTECTED]> wrote:

> So here's my situation: we have a database that has a table of about 5

> million rows. To put a new row into the table, I do an INSERT ...

> SELECT, pulling data from one row in the table to seed the data for

> the new row. When there are no active connections to the DB other than

> the one making the INSERT, it runs like a charm. But during normal

> daytime operation, when we run around 50 connections (most sleeping at

> any one time), it takes up to two minutes to do, and ends up locking

> any other inserts or updates against that table for the entire time.

> 

> I'll get into more specifics if they're required, but I wanted to ask

> in general if MySQL has tools to diagnose this, or if anyone has had

> general situations like this. In SQL Server (which is where I have

> most of my experience) I could use the trace tool and the Query

> Analyzer to tell what the execution plan for the query was and thus

> what's stalling it (an index gone bad, a weird locking situation,

> etc).

> 

> We're running MySQL 4.11 on a machine with 2GB memory, the table is

> InnoDB with a compound primary key, and additional indexes on all rows

> with searchable options in the API. Any generic advice or admin tools

> would be great.

> 

> -- Joe

> 



-- 
For technical support contracts, goto https://order.mysql.com/?ref=ensita
This email is sponsored by Ensita.NET http://www.ensita.net/
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /Gleb Paharenko
 / /|_/ / // /\ \/ /_/ / /__   [EMAIL PROTECTED]
/_/  /_/\_, /___/\___\_\___/   MySQL AB / Ensita.NET
   <___/   www.mysql.com




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries, why?

2005-05-03 Thread Scott Haneda
on 5/3/05 7:25 PM, Joseph Cochran at [EMAIL PROTECTED] wrote:

> So here's my situation: we have a database that has a table of about 5
> million rows. To put a new row into the table, I do an INSERT ...
> SELECT, pulling data from one row in the table to seed the data for
> the new row. When there are no active connections to the DB other than
> the one making the INSERT, it runs like a charm. But during normal
> daytime operation, when we run around 50 connections (most sleeping at
> any one time), it takes up to two minutes to do, and ends up locking
> any other inserts or updates against that table for the entire time.

Since you are pulling data from only one row, you may be a prime candidate
for  which is
as far as I can tell, more efficient as it stores the results as variables.

I am really pretty new to all this, so just trying to help where I can, but
you may find that selecting and inserting in the application logic level
will perform better for you as well.

I am not sure what goes on behind the scenes in an insert select; from what I
have read, inserting a large number of rows is going to get you faster
results than selecting them by hand, but for one or a few rows it is not as
optimal.

 
> I'll get into more specifics if they're required, but I wanted to ask
> in general if MySQL has tools to diagnose this, or if anyone has had
> general situations like this. In SQL Server (which is where I have
> most of my experience) I could use the trace tool and the Query
> Analyzer to tell what the execution plan for the query was and thus
> what's stalling it (an index gone bad, a weird locking situation,
> etc).

Take a look at:

Though I have not had to use it yet, it may get you where you need to be.
-- 
-
Scott HanedaTel: 415.898.2602
 Novato, CA U.S.A.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries only the first time

2005-03-10 Thread Gordon
The key to your question is InnoDB. InnoDB keeps data and indexes in its
buffer pool, using an LRU policy to decide what to evict. So the second time
you ran the query, your data was already in memory. Depending on your buffer
pool size and database size, you get all of the advantages of a memory-resident
database for frequently used data
without the disadvantage of losing the data on system shutdown.
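If you want to see how much room the buffer pool actually has, a rough check
looks like this (just a sketch for a 4.0/4.1 server; the 256M figure below is
only an illustration, size it to your own RAM). The BUFFER POOL AND MEMORY
section of the InnoDB status output shows how the pool is being used:

mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
mysql> SHOW INNODB STATUS\G

# my.cnf, under [mysqld] -- 256M is only an example
innodb_buffer_pool_size = 256M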

-Original Message-
From: Michael Stassen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, March 10, 2005 1:43 PM
To: Bob O'Neill
Cc: mysql@lists.mysql.com
Subject: Re: Slow queries only the first time


On Mar 10, 2005, at 11:20 AM, Bob O'Neill wrote:

> Hello.  I am wondering why some of my queries are slow on the first  
> run, but
> speedy on subsequent runs.  They are not being query cached, as I have
> query_cache_type set to DEMAND.  Is it something as simple as pulling  
> the
> data into RAM from disk, or is there something else going on?  Here's a
> simple example:
>
> mysql> select count(*) from foo;
> +----------+
> | count(*) |
> +----------+
> |  1374817 |
> +----------+
> 1 row in set (3.60 sec)

A table scan.

> mysql> select count(*) from foo;
> +----------+
> | count(*) |
> +----------+
> |  1374817 |
> +----------+
> 1 row in set (0.92 sec)

Another table scan, but now some or all of the table is cached in RAM  
(disk cache) by the OS (not by MySQL).

> mysql> show variables like 'query_cache_type';
> +------------------+--------+
> | Variable_name    | Value  |
> +------------------+--------+
> | query_cache_type | DEMAND |
> +------------------+--------+
> 1 row in set (0.00 sec)

You could probably verify that the query cache wasn't used by  
monitoring the query cache stats  
<http://dev.mysql.com/doc/mysql/en/query-cache-status-and- 
maintenance.html>.
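For example (a sketch): if the cache really is idle, the hit counter should
not move between the two runs:

mysql> SHOW STATUS LIKE 'Qcache_hits';
mysql> select count(*) from foo;
mysql> select count(*) from foo;
mysql> SHOW STATUS LIKE 'Qcache_hits';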

> I am running MySQL 4.1.10 with InnoDB on RHEL 2.1 (kernel
> 2.4.9-e.49enterprise). Binaries are Linux x86 glibc static gcc RPMs  
> from
> mysql.com.
>
> Thanks,
> -Bob

Michael


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow queries only the first time

2005-03-10 Thread Michael Stassen
On Mar 10, 2005, at 11:20 AM, Bob O'Neill wrote:
Hello.  I am wondering why some of my queries are slow on the first  
run, but
speedy on subsequent runs.  They are not being query cached, as I have
query_cache_type set to DEMAND.  Is it something as simple as pulling  
the
data into RAM from disk, or is there something else going on?  Here's a
simple example:

mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (3.60 sec)
A table scan.
mysql> select count(*) from foo;
+----------+
| count(*) |
+----------+
|  1374817 |
+----------+
1 row in set (0.92 sec)
Another table scan, but now some or all of the table is cached in RAM  
(disk cache) by the OS (not by MySQL).

mysql> show variables like 'query_cache_type';
+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| query_cache_type | DEMAND |
+------------------+--------+
1 row in set (0.00 sec)
You could probably verify that the query cache wasn't used by  
monitoring the query cache stats  
.

I am running MySQL 4.1.10 with InnoDB on RHEL 2.1 (kernel
2.4.9-e.49enterprise). Binaries are Linux x86 glibc static gcc RPMs  
from
mysql.com.

Thanks,
-Bob
Michael
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Slow queries only the first time

2005-03-10 Thread Homam S.A.
Most likely it's the OS cache caching all those disk
segments in memory. Also, with InnoDB, MySQL uses the
buffer pool to cache data pages in addition to
the OS cache.

If you're running on a Windows machine, you can easily
tell what's going on by opening up Performance Monitor
and watching Pages/Sec and Disk Reads/Sec. If these
two counters are low on the subsequent queries (and
they should be, unless you're tight on physical memory),
then the data is cached.
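On Linux (the original poster is on RHEL) a rough equivalent, just as a
sketch, is to watch the bi (blocks read in) column of vmstat while re-running
the query; if it stays near zero on the second run, the data is coming out of
cache rather than off disk:

vmstat 1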




--- Bob O'Neill <[EMAIL PROTECTED]> wrote:
> Hello.  I am wondering why some of my queries are
> slow on the first run, but
> speedy on subsequent runs.  They are not being query
> cached, as I have
> query_cache_type set to DEMAND.  Is it something as
> simple as pulling the
> data into RAM from disk, or is there something else
> going on?  Here's a
> simple example:
> 
> mysql> select count(*) from foo;
> +----------+
> | count(*) |
> +----------+
> |  1374817 |
> +----------+
> 1 row in set (3.60 sec)
> 
> mysql> select count(*) from foo;
> +----------+
> | count(*) |
> +----------+
> |  1374817 |
> +----------+
> 1 row in set (0.92 sec)
> 
> mysql> show variables like 'query_cache_type';
> +------------------+--------+
> | Variable_name    | Value  |
> +------------------+--------+
> | query_cache_type | DEMAND |
> +------------------+--------+
> 1 row in set (0.00 sec)
> 
> I am running MySQL 4.1.10 with InnoDB on RHEL 2.1
> (kernel
> 2.4.9-e.49enterprise). Binaries are Linux x86 glibc
> static gcc RPMs from
> mysql.com.
> 
> Thanks,
> -Bob
> 
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
>
http://lists.mysql.com/[EMAIL PROTECTED]
> 
> 

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-11 Thread Pete Moran
I recently had a similar problem; however, you may find that it's more a
case of correctly indexing your tables.

You should look for the tables which need indexing. I enabled the
slow query log, as well as logging of queries which didn't use
indexes, and found some which did some very nasty table scans.  This can
seriously degrade your performance. If you can find the queries in question,
then modify/add indexes or change the SQL so that it reduces the number of rows
scanned.

Lines in my.cnf

log-slow-queries = /var/lib/mysql/data/slow.log
set-variable = long-query-time=5
log-long-format  # this will log selects not using indexes

Then simply tail the log when you have issues to find which ones need
optimizing.

BTW - the above is using MySQL 4, so the syntax may differ in your version
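For example (a sketch; mysqldumpslow ships with the server, though its options
can differ between versions):

# follow new entries as they are written
tail -f /var/lib/mysql/data/slow.log

# summarize the slow log, worst queries by total time first
mysqldumpslow -s t -t 10 /var/lib/mysql/data/slow.log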

PM

-Original Message-
From: BD [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 8 January 2005 2:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M
to my my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting
my php-MySQL web application via http.  When I removed above and then
restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
No virus found in this incoming message.
Checked by AVG Anti-Virus.
Version: 7.0.300 / Virus Database: 265.6.9 - Release Date: 6/01/2005
 

-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.300 / Virus Database: 265.6.10 - Release Date: 10/01/2005
 


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer - zen -cart?

2005-01-07 Thread Tom Crimmins
[snip]

# Time: 050107 17:40:41
# [EMAIL PROTECTED]: zencarttest[zencarttest] @  [xxx.xxx.xxx.xxx]
# Query_time: 13  Lock_time: 0  Rows_sent: 148  Rows_examined: 1567270
use zencarttest;
select distinct m.manufacturers_id, m.manufacturers_name from
zen_manufacturers m
 left join zen_products p on m.manufacturers_id =
p.manufacturers_id
 where m.manufacturers_id = p.manufacturers_id and
p.products_status= '1'
 order by manufacturers_name;
[/snip] 

This appears to be the problem query. Looks like zen_products could use an
index on (manufacturers_id, products_status), and zen_manufacturers could
use an index on (manufacturers_id,manufacturers_name).

You can try adding these indexes and running the query to see if it helps. You
may want to do an EXPLAIN after adding the indexes to see if it is
using them.
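Something along these lines (a sketch only; the index names are made up, and
the query is the one from the slow log above):

ALTER TABLE zen_products
  ADD INDEX idx_manufacturer_status (manufacturers_id, products_status);
ALTER TABLE zen_manufacturers
  ADD INDEX idx_manufacturer_name (manufacturers_id, manufacturers_name);

EXPLAIN SELECT DISTINCT m.manufacturers_id, m.manufacturers_name
FROM zen_manufacturers m
  LEFT JOIN zen_products p ON m.manufacturers_id = p.manufacturers_id
WHERE m.manufacturers_id = p.manufacturers_id
  AND p.products_status = '1'
ORDER BY manufacturers_name;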

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer - zen -cart?

2005-01-07 Thread Tom Crimmins
[snip]
The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing the table indexes because it is a pre
written php-MySQL open source freeware script.  I'm getting the query times
directly from the application and not MySQL.
[/snip]

You could turn on logging in mysql to see what the query is that is taking
so long, then make sure the tables are properly indexed based on this. I
would hope that this software has properly indexed the tables, but you can
verify this on your own.

Add the following to your my.cnf to enable the logging of slow queries.

[mysqld]
set-variable = long_query_time=2
log-long-format
log-slow-queries = /var/log/mysqld.slow.log (or whatever file you want, just
make sure the user mysqld is running as has write permissions to it.)

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 11:58 AM
To: Tom Crimmins
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer -
zen-cart?

OK thanks Tom,

The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing the table indexes because it is a pre
written php-MySQL open source freeware script.  I'm getting the query times
directly from the application and not MySQL.

However I could post a query I generate directly from MySQL, how could I do
that?  What would be the command I should use to generate query times from
MySQL?

For the SHOW CREATE TABLE, I have posted below the five tables that (I
believe) are directly involved in generating a product and category list on
the front page of my test store.  I did not post all tables in the database
because there are 97 tables total in the database, but I think these are the
pertinent tables involved in slow query times; any suggestions that I get
here I will definitely pass along to the zen cart developers.  If there is
something that I can do without changing the PHP code of the application and
fix just with MySQL settings that would be great...

zen_products |CREATE TABLE `zen_products` (
  `products_id` int(11) NOT NULL auto_increment,
  `products_type` int(11) NOT NULL default '1',
  `products_quantity` float NOT NULL default '0',
  `products_model` varchar(32) default NULL,
  `products_image` varchar(64) default NULL,
  `products_price` decimal(15,4) NOT NULL default '0.',
  `products_virtual` tinyint(1) NOT NULL default '0',
  `products_date_added` datetime NOT NULL default '0001-01-01 00:00:00',
  `products_last_modified` datetime default NULL,
  `products_date_available` datetime default NULL,
  `products_weight` decimal(5,2) NOT NULL default '0.00',
  `products_status` tinyint(1) NOT NULL default '0',
  `products_tax_class_id` int(11) NOT NULL default '0',
  `manufacturers_id` int(11) default NULL,
  `products_ordered` float NOT NULL default '0',
  `products_quantity_order_min` float NOT NULL default '1',
  `products_quantity_order_units` float NOT NULL default '1',
  `products_priced_by_attribute` tinyint(1) NOT NULL default '0',
  `product_is_free` tinyint(1) NOT NULL default '0',
  `product_is_call` tinyint(1) NOT NULL default '0',
  `products_quantity_mixed` tinyint(1) NOT NULL default '0',
  `product_is_always_free_shipping` tinyint(1) NOT NULL default '0',
  `products_qty_box_status` tinyint(1) NOT NULL default '1',
  `products_quantity_order_max` float NOT NULL default '0',
  `products_sort_order` int(11) NOT NULL default '0',
  `products_discount_type` tinyint(1) NOT NULL default '0',
  `products_discount_type_from` tinyint(1) NOT NULL default '0',
  `products_price_sorter` decimal(15,4) NOT NULL default '0.',
  `master_categories_id` int(11) NOT NULL default '0',
  `products_mixed_discount_quantity` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`products_id`),
  KEY `idx_products_date_added` (`products_date_added`)
) TYPE=MyISAM |

| zen_categories_description | CREATE TABLE `zen_categories_description` (
  `categories_id` int(11) NOT NULL default '0',
  `language_id` int(11) NOT NULL default '1',
  `categories_name` varchar(32) NOT NULL default '',
  `categories_description` text NOT NULL,
  PRIMARY KEY  (`categories_id`,`language_id`),
  KEY `idx_categories_name` (`categories_name`)
) TYPE=MyISAM |


zen_categories | CREATE TABLE `zen_categories` (
  `categories_id` int(11) NOT NULL auto_increment,
  `categories_image` varchar(64) default NULL,
  `parent_id` int(11) NOT NULL default '0',
  `sort_order` int(3) default NULL,
  `date_added` datetime default NULL,
  `last_modified` datetime default NULL,
  `categories_status` tinyint(1) NOT NULL default '

RE: Slow queries, need advice on how to improve; key_buffer - zen-cart?

2005-01-07 Thread BD
OK thanks Tom,

The application I am using for the site is www.zen-cart.com so I'm not sure
I can do anything about changing
the table indexes because it is a pre written php-MySQL open source freeware
script.  I'm getting the query times
directly from the application and not MySQL.

However I could post a query I generate directly from MySQL, how could I do
that?  What would
be the command I should use to generate query times from MySQL?

For the SHOW CREATE TABLE, I have posted below the five tables that (I
believe) are directly involved in generating
a product and category list on the front page of my test store.  I did not
post all tables in the database because there are 97 tables total in the
database, but I think these are the pertinent tables involved in slow query
times; any suggestions that I get here I will definitely pass along to the
zen cart developers.  If there is something that I can do without changing
the PHP code of the application and fix just with MySQL settings that would
be great...

zen_products |CREATE TABLE `zen_products` (
  `products_id` int(11) NOT NULL auto_increment,
  `products_type` int(11) NOT NULL default '1',
  `products_quantity` float NOT NULL default '0',
  `products_model` varchar(32) default NULL,
  `products_image` varchar(64) default NULL,
  `products_price` decimal(15,4) NOT NULL default '0.',
  `products_virtual` tinyint(1) NOT NULL default '0',
  `products_date_added` datetime NOT NULL default '0001-01-01 00:00:00',
  `products_last_modified` datetime default NULL,
  `products_date_available` datetime default NULL,
  `products_weight` decimal(5,2) NOT NULL default '0.00',
  `products_status` tinyint(1) NOT NULL default '0',
  `products_tax_class_id` int(11) NOT NULL default '0',
  `manufacturers_id` int(11) default NULL,
  `products_ordered` float NOT NULL default '0',
  `products_quantity_order_min` float NOT NULL default '1',
  `products_quantity_order_units` float NOT NULL default '1',
  `products_priced_by_attribute` tinyint(1) NOT NULL default '0',
  `product_is_free` tinyint(1) NOT NULL default '0',
  `product_is_call` tinyint(1) NOT NULL default '0',
  `products_quantity_mixed` tinyint(1) NOT NULL default '0',
  `product_is_always_free_shipping` tinyint(1) NOT NULL default '0',
  `products_qty_box_status` tinyint(1) NOT NULL default '1',
  `products_quantity_order_max` float NOT NULL default '0',
  `products_sort_order` int(11) NOT NULL default '0',
  `products_discount_type` tinyint(1) NOT NULL default '0',
  `products_discount_type_from` tinyint(1) NOT NULL default '0',
  `products_price_sorter` decimal(15,4) NOT NULL default '0.',
  `master_categories_id` int(11) NOT NULL default '0',
  `products_mixed_discount_quantity` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`products_id`),
  KEY `idx_products_date_added` (`products_date_added`)
) TYPE=MyISAM |

| zen_categories_description | CREATE TABLE `zen_categories_description` (
  `categories_id` int(11) NOT NULL default '0',
  `language_id` int(11) NOT NULL default '1',
  `categories_name` varchar(32) NOT NULL default '',
  `categories_description` text NOT NULL,
  PRIMARY KEY  (`categories_id`,`language_id`),
  KEY `idx_categories_name` (`categories_name`)
) TYPE=MyISAM |


zen_categories | CREATE TABLE `zen_categories` (
  `categories_id` int(11) NOT NULL auto_increment,
  `categories_image` varchar(64) default NULL,
  `parent_id` int(11) NOT NULL default '0',
  `sort_order` int(3) default NULL,
  `date_added` datetime default NULL,
  `last_modified` datetime default NULL,
  `categories_status` tinyint(1) NOT NULL default '1',
  PRIMARY KEY  (`categories_id`),
  KEY `idx_categories_parent_id` (`parent_id`),
  KEY `idx_sort_order` (`sort_order`)
) TYPE=MyISAM |

| zen_products_description | CREATE TABLE `zen_products_description` (
  `products_id` int(11) NOT NULL auto_increment,
  `language_id` int(11) NOT NULL default '1',
  `products_name` varchar(64) NOT NULL default '',
  `products_description` text,
  `products_url` varchar(255) default NULL,
  `products_viewed` int(5) default '0',
  PRIMARY KEY  (`products_id`,`language_id`),
  KEY `products_name` (`products_name`)
) TYPE=MyISAM |

| zen_products_to_categories | CREATE TABLE `zen_products_to_categories` (
  `products_id` int(11) NOT NULL default '0',
  `categories_id` int(11) NOT NULL default '0',
  PRIMARY KEY  (`products_id`,`categories_id`)
) TYPE=MyISAM |
+-



-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 12:23 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buf

RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
The problem now is, this did not do anything to improve the query and parse
times.
I'm testing out an on line store which has about 12,000 product entries in
it and about 300 product categories.  Both the parse and query times are
running over 12 seconds
[/snip]

This is probably related to not having proper indexes on your tables. If you
post the query, and a 'SHOW CREATE TABLE [tablename]' for each table
involved, someone maybe able to help you speed it up.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD 
Sent: Friday, January 07, 2005 11:21 AM
To: Tom Crimmins
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?

Hi Tom,

OK thanks I just added the set-variable = key_buffer = 64M line to my my.cnf
file and at least I got no errors and the MySQL server restarted OK and I
got my test site running.

The problem now is, this did not do anything to improve the query and parse
times.
I'm testing out an on line store which has about 12,000 product entries in
it and about 300 product categories.  Both the parse and query times are
running over 12 seconds
- that's just to bring up the front page of the store with the category -
sub cat list.

I'm sure there are lots of other switches in MySQl, do you have any other
suggestions as to what I could do to reduce query times down to a reasonable
1-3 seconds?

I just bought the book
http://www.amazon.com/exec/obidos/tg/detail/-/0596003064/102-0076963-3409775
?%5Fencoding=UTF8&v=glance
but it has not arrived yet.  Any suggestions in the mean time?

BD


-Original Message-
From: Tom Crimmins
Sent: Friday, January 07, 2005 12:08 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
I'm running MySQL 3.23.56 on a linux box.
[/snip]

Sorry, I didn't explain that you were probably not able to connect because,
prior to version 4.1, setting variables in my.cnf requires that you add
'set-variable = ' before the setting, i.e. 'set-variable = key_buffer = 64M'.
When you restarted mysql it probably exited on error.

[snip]
OK I tried this, I must be doing something wrong, I entered in:   SET GLOBAL
key_buffer_size = 64 * 1024 *1024;
and got the error:  "You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 *1024' at line 1"
[/snip]

Sorry about this too: in 3.23, leave out the word GLOBAL. If you can restart
without a problem though, I would just add the 'set-variable = key_buffer =
64M' line to your my.cnf file and restart mysql.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread BD
Hi Tom,

OK thanks I just added the set-variable = key_buffer = 64M
line to my my.cnf file and at least I got no errors and the
MySQL server restarted OK and I got my test site running.

The problem now is, this did not do anything to improve the query and parse
times.
I'm testing out an on line store which has about 12,000 product entries in
it
and about 300 product categories.  Both the parse and query times are
running over 12 seconds
- that's just to bring up the front page of the store with the category -
sub cat list.

I'm sure there are lots of other switches in MySQl, do you have any other
suggestions as to what
I could do to reduce query times down to a reasonable 1-3 seconds?

I just bought the book
http://www.amazon.com/exec/obidos/tg/detail/-/0596003064/102-0076963-3409775
?%5Fencoding=UTF8&v=glance
but it has not arrived yet.  Any suggestions in the mean time?

BD


-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 12:08 PM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
I'm running MySQL 3.23.56 on a linux box.
[/snip]

Sorry, I didn't explain that you were probably not able to connect because,
prior to version 4.1, setting variables in my.cnf requires that you add
'set-variable = ' before the setting, i.e. 'set-variable = key_buffer = 64M'.
When you restarted mysql it probably exited on error.

[snip]
OK I tried this, I must be doing something wrong, I entered in:   SET GLOBAL
key_buffer_size = 64 * 1024 *1024;
and got the error:  "You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 *1024' at line 1"
[/snip]

Sorry about this too: in 3.23, leave out the word GLOBAL. If you can restart
without a problem though, I would just add the 'set-variable = key_buffer =
64M' line to your my.cnf file and restart mysql.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
I'm running MySQL 3.23.56 on a linux box.
[/snip]

Sorry, I didn't explain that you were probably not able to connect because,
prior to version 4.1, setting variables in my.cnf requires that you add
'set-variable = ' before the setting, i.e. 'set-variable = key_buffer = 64M'.
When you restarted mysql it probably exited on error.

[snip]
OK I tried this, I must be doing something wrong, I entered in:   SET GLOBAL
key_buffer_size = 64 * 1024 *1024;
and got the error:  "You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 *1024' at line 1"
[/snip]

Sorry about this too: in 3.23, leave out the word GLOBAL. If you can restart
without a problem though, I would just add the 'set-variable = key_buffer =
64M' line to your my.cnf file and restart mysql.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD 
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread BD
Thanks Tom,

OK I tried this, I must be doing something wrong, I entered in:   SET GLOBAL
key_buffer_size = 64 * 1024 *1024;

and got the error:  "You have an error in your SQL syntax near
'key_buffer_size = 64 * 1024 *1024' at line 1"

I must be doing something wrong.  I tried both 64M and just 64.  Should I
also enter tick marks?  I tried that too.

BD

-Original Message-
From: Tom Crimmins [mailto:[EMAIL PROTECTED]
Sent: Friday, January 07, 2005 11:20 AM
To: BD
Cc: mysql@lists.mysql.com
Subject: RE: Slow queries, need advice on how to improve; key_buffer?


[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
[/snip]

You can set it without stopping mysql using the following:

'SET GLOBAL key_buffer_size = [size] * 1024 * 1024'

where [size] is the size in Mb you want the key_buffer_size to be. You'll
need to add it to your my.cnf if you want this setting to hold on a restart.
Try adding 'set-variable = key_buffer = 64M' to your my.cnf.

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow queries, need advice on how to improve; key_buffer?

2005-01-07 Thread Tom Crimmins
[snip]
I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.
[/snip]

You can set it without stopping mysql using the following:

'SET GLOBAL key_buffer_size = [size] * 1024 * 1024'

where [size] is the size in Mb you want the key_buffer_size to be. You'll
need to add it to your my.cnf if you want this setting to hold on a restart.
Try adding 'set-variable = key_buffer = 64M' to your my.cnf.
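Either way, once the server is back up you can confirm the setting took
effect (the value is reported in bytes):

mysql> SHOW VARIABLES LIKE 'key_buffer_size';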

---
Tom Crimmins
Interface Specialist
Pottawattamie County, Iowa

-Original Message-
From: BD
Sent: Friday, January 07, 2005 9:07 AM
To: mysql@lists.mysql.com
Subject: Slow queries, need advice on how to improve; key_buffer?

Hi,

I'm having a problem with slow query and parse times with a MySQL - PHP
ecommerce application I am testing out.  Also getting an error message with
the key_buffer variable.

I tried to improve MySQL speed/performance by adding key_buffer=50M to my
my.cnf file for [mysqld].  When I restarted MySQL, I'm getting a "0
DB_ERROR_NOT_CONNECTED" error when visiting my php-MySQL web application via
http.  When I removed above and then restarted MySQL the error goes away.

I read in my Widenius - MySQL book that I should use the command
"set_variable=key_buffer=64M" (or some other memory limit - I have 1.2 gig
of ram on server).  I also tried this instead of adding the above to my.cnf.
However, upon logging on as root in MySQL and running the command, I'm
getting a command not found error.

Can anyone tell me what I'm doing wrong with the above two attempts?  Or
give any advice as to how to improve speed of accessing the mySQL database
for my web application?  I'm running MySQL 3.23.56 on a linux box.

Thanks anyone,

BD


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Slow Queries on Fast Server?

2004-08-27 Thread Brent Baisley
If you are sorting the result, setting a limit only speeds things up 
for data transfer of the result set since MySQL still has to find all 
the records, sort them, then deliver only the first X records. You can 
usually tell how much time is spent on the transfer of the result set 
vs. finding the result by doing a select count(*) rather than actually 
selecting data.
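To make that concrete with the table from this thread (just a sketch; the
ORDER BY is only there to show the sort cost):

-- how long it takes to find the matches, with nothing big to transfer:
SELECT COUNT(*) FROM product_fulltext
WHERE MATCH (search_text) AGAINST ('black');

-- how long it takes to find them, sort them, and ship only the first 20:
SELECT product_id FROM product_fulltext
WHERE MATCH (search_text) AGAINST ('black')
ORDER BY product_id LIMIT 20;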

As for average length, I don't think that's the issue. The system I'm 
designing searches resumes, so while there are only about 15k records, 
the "average" length is about 3 pages of text. My data size is actually 
about the same size as yours. My response time on a test machine is 
about 6-8 seconds, but the test machine is woefully underpowered for 
real use (Mac 350Mhz G4 256MB RAM).

I would definitely try increasing your buffer variables. Double them 
and see what happens. Many times on this list people have gotten 
significant performance boosts from setting high sort buffers. Please 
post results if it works.
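For instance, starting from the values you quote below, doubling would look
like this in my.cnf (a sketch; tune it against your 2GB of RAM):

[mysqld]
sort_buffer_size = 20M
read_buffer_size = 4M
myisam_sort_buffer_size = 128M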

On Aug 27, 2004, at 9:28 AM, [EMAIL PROTECTED] wrote:
Thanks Brent and Donny, hopefully this info will help get to the root 
of the problem with the fulltext search.
The table structure is very, very simple:

mysql> describe product_fulltext;
+-------------+--------+------+-----+---------+-------+
| Field       | Type   | Null | Key | Default | Extra |
+-------------+--------+------+-----+---------+-------+
| product_id  | int(9) |      | PRI | 0       |       |
| search_text | text   |      | MUL |         |       |
+-------------+--------+------+-----+---------+-------+
2 rows in set (0.00 sec)
Space usage:
  Type    Usage
  Data    502,455 KB
  Index   440,412 KB
  Total   942,867 KB

Row statistics:
  Statements   Value
  Format       dynamic
  Rows         3,237,981
  Row length   ø 158
  Row size     ø 298 Bytes

MySQL 4.0.20-standard-log (official MySQL RPM)
I also calculated the average text field length:
mysql> SELECT AVG(LENGTH(search_text)) AS avg_length FROM product_fulltext;
+------------+
| avg_length |
+------------+
|   147.2239 |
+------------+
1 row in set (33.34 sec)

Is my average text length too long?  Is MySQL 4.0.20 really that slow 
for fulltext searching?  If so, how much will performance increase by 
upgrading to 4.1.x?  Is upgrading difficult?

You may try doubling or tripling your sort_buffer and 
myisam_sort_buffer settings and maybe you read_buffer.
My sort_buffer is 10Mb, the read_buffer is 2Mb and the 
myisam_sort_buffer is 64Mb ... are these still too low?

I think I'm going to try to install "mytop" to get more performance
info.
Currently, my temporary solution is to limit the query to 2000 rows,
but it still takes 4-5 seconds and doesn't give a complete picture of the
search results.

Thanks for any help on this,
- John
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Slow Queries on Fast Server?

2004-08-27 Thread JVanV8
Thanks Brent and Donny, hopefully this info will help get to the root of the problem 
with the fulltext search.
The table structure is very, very simple:

mysql> describe product_fulltext;
+-------------+--------+------+-----+---------+-------+
| Field       | Type   | Null | Key | Default | Extra |
+-------------+--------+------+-----+---------+-------+
| product_id  | int(9) |      | PRI | 0       |       |
| search_text | text   |      | MUL |         |       |
+-------------+--------+------+-----+---------+-------+
2 rows in set (0.00 sec)

Space usage:
  Type    Usage
  Data    502,455 KB
  Index   440,412 KB
  Total   942,867 KB

Row statistics:
  Statements   Value
  Format       dynamic
  Rows         3,237,981
  Row length   ø 158
  Row size     ø 298 Bytes

MySQL 4.0.20-standard-log (official MySQL RPM)

I also calculated the average text field length:
mysql> SELECT AVG(LENGTH(search_text)) AS avg_length FROM product_fulltext;
+------------+
| avg_length |
+------------+
|   147.2239 |
+------------+
1 row in set (33.34 sec)

Is my average text length too long?  Is MySQL 4.0.20 really that slow for fulltext 
searching?  If so, how much will performance increase by upgrading to 4.1.x?  Is 
upgrading difficult?

>>You may try doubling or tripling your sort_buffer and myisam_sort_buffer settings 
>>and maybe you read_buffer.

My sort_buffer is 10Mb, the read_buffer is 2Mb and the myisam_sort_buffer is 64Mb ... 
are these still too low?

I think I'm going to try to install "mytop" to get more performance info.
Currently, my temporary solution is to limit the query to 2000 rows, but it still takes
4-5 seconds and doesn't give a complete picture of the search results.

Thanks for any help on this,
- John

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Slow Queries on Fast Server?

2004-08-26 Thread Donny Simonton
John,
From my experience it is a lot more about how big your data is, not necessarily
the amount of data.  Which version of mysql are you running?  Are you using
a prebuilt mysql version (one you downloaded from mysql.com)?  I'm using
4.1.3 and I have a table with a char(68) column and 29 million rows that is
fulltext indexed, and all of my queries similar to yours take
0.1 to 0.2 seconds max.  Also, if you provided your full table structure,
including the indexes, that would help.

Donny

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Thursday, August 26, 2004 8:08 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Slow Queries on Fast Server?
> 
> I'm gathering by the lack of response that perhaps MySQL is incapable of
> executing a count of the number of fulltext matches on 3 million rows.
> I really thought that MySQL 4 was supposed to be able to handle such
> a load.
> I still think my configuration may be to blame.
> ?
> - John
> 
> 
> 
> 
> 
> 
> --
> >>Could you send the output of an EXPLAIN for your query?
> 
> >Sure, pretty sure the index is fine though:
> 
> mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
> +------------------+----------+---------------+-------------+---------+------+------+-------------+
> | table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
> +------------------+----------+---------------+-------------+---------+------+------+-------------+
> | product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
> +------------------+----------+---------------+-------------+---------+------+------+-------------+
> 1 row in set (0.00 sec)
> 
> 
> 
> 
> [EMAIL PROTECTED] wrote:
> >>Have you checked the "Optimization" section of the manual yet?
> >>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
> >
> >
> > Oh yes, as I've attempted to configure the my.cnf file for best
> performance.  The
> > query is correct.  The fulltext index is correct as I built the fulltext
> index on
> > the
> > single column (took 9 minutes) and even did "repair" and "optimize" on
> the table...
> > so I
> > don't think its the index.  I'm thinking its the server config...
> >
> > - John
> >
> >
> >
> > [EMAIL PROTECTED] wrote:
> >
> >>I'm running into a problem with some queries running on a dedicated
> mysql server
> > (2.0
> >>GHz, 2GB RAM).  Fulltext searching really exemplifies this as most
> MATCH, AGAINST
> > queries
> >>are taking 5-20 seconds.  Performance was excellent for some reason one
> day (0.2
> > -
> > 0.75
> >>seconds) but it was only fast for a day or so.
> >>Here's the rundown:
> >>
> >>TABLE:  fulltext_table (some_id, the_text)
> >>Rows: 3,237,981
> >>Type: MyISAM
> >>Size: 920.8 MB
> >>
> >>QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text)
> AGAINST
> > ('blue');
> >>or
> >>QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE
> MATCH
> > (the_text)
> >>AGAINST ('blue') LIMIT 0, 20;
> >>
> >>Both are problematic.  I even tried placing a limit of 2 on the
> first query
> > but
> >>it didn't improve anything.  The table has a fulltext index on the
> column and is
> >>optimized.  No other users are connected to the server.
> >>
> >>Is there a RED FLAG in here somewhere?
> >>
> >>MySQL configuration settings (using my-huge.cnf template):
> >>key_buffer = 500M
> >>max_allowed_packet = 1M
> >>table_cache = 512
> >>sort_buffer_size = 10M
> >>read_buffer_size = 2M
> >>myisam_sort_buffer_size = 64M
> >>#thread_cache = 8
> >>thread_concurrency = 8
> >>#- Modifications --- #
> >>ft_min_word_len = 3
> >>set-variable = table_cache=1024
> >>set-variable = max_heap_table_size=64M
> >>set-variable = tmp_table_size=128M
> >>set-variable = query_cache_limit=2M
> >>query_cache_type=1
> >>
> >>
> >>Performance Test:
> >>SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST
> ('white');
> >>+----------+
> >>| COUNT(*) |
> >>+----------+
> >>|    95074 |
> >>+----------+

Re: Slow Queries on Fast Server?

2004-08-26 Thread Brent Baisley
Capable? I can't think of why it wouldn't be capable. From your posts I
assume your definition of "capable" in this case is a quick response.
Are you running 4.0 or 4.1? I think the indexing was changed in 4.1 so
it would give you better response. 5-20 seconds does seem long,
assuming your disks are fast.

You may try doubling or tripling your sort_buffer and
myisam_sort_buffer settings, and maybe your read_buffer. Remember, the
template configuration files are still all-purpose configurations. You
are really looking to optimize a specific area.
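Here the specific area is the MyISAM key buffer that holds the fulltext
index, so one quick check (a sketch) is whether index blocks are being served
from memory or read from disk:

mysql> SHOW STATUS LIKE 'Key_read%';
-- If Key_reads keeps climbing relative to Key_read_requests while the MATCH
-- query runs, the ~440 MB fulltext index is not staying resident in the
-- 500M key_buffer.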

Unfortunately, I don't have a dataset large enough yet to test myself,  
but I am curious. Improved performance is on the to do list.

On Aug 26, 2004, at 9:07 AM, [EMAIL PROTECTED] wrote:
I'm gathering by the lack of response that perhaps MySQL is incapable
of executing a count of the number of fulltext matches on 3 million
rows.
I really thought that MySQL 4 was supposed to be able to handle
such a load.
I still think my configuration may be to blame.
?
- John



--
Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:
mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)


[EMAIL PROTECTED] wrote:
Have you checked the "Optimization" section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best  
performance.  The
query is correct.  The fulltext index is correct as I built the  
fulltext index on
the
single column (took 9 minutes) and even did "repair" and "optimize"  
on the table...
so I
don't think its the index.  I'm thinking its the server config...

- John

[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated  
mysql server
(2.0
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most  
MATCH, AGAINST
queries
are taking 5-20 seconds.  Performance was excellent for some reason  
one day (0.2
-
0.75
seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text)
Rows: 3,237,981
Type: MyISAM
Size: 920.8 MB
QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text)  
AGAINST
('blue');
or
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE  
MATCH
(the_text)
AGAINST ('blue') LIMIT 0, 20;
Both are problematic.  I even tried placing a limit of 2 on the  
first query
but
it didn't improve anything.  The table has a fulltext index on the  
column and is
optimized.  No other users are connected to the server.

Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST  
('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)

Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system
  cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us  
sy id wa
0  0  19500  17800  42432 177172800 060  11330   
0  0 99  1
0  1  19500  21524  42428 1765728  7240   960 0  536   444   
5  1 82 12
0  1  19500  19512  42424 176782000  2348 0  912   592   
0  1 50 49
0  1  19500  17788  42424 176954000  1980 0  868   588   
0  1 51 48
0  1  19500  17568  42424 176976000  2300 0  723   401   
0  0 50 49
0  1  19500  17704  42428 176962000  193620  662   364   
0  0 51 49
0  1  19500  17560  42428 176976400  2224 0  696   400   
0  0 51 49
0  1  19500  17504  42424 176982400  2136 0  670   380   
0  0 51 49
0  1  19500  17616  42424 176971200  2228 0  693   415   
0  0 51 49
0  1  19508  17608  42420 176972408  2348 8  692   389   
0  0 50 50
0  1  19508  17532  42428 176979200  1896   108  654   366   
0  0 50 4

Re: Slow Queries on Fast Server?

2004-08-26 Thread JVanV8
I'm gathering by the lack of response that perhaps MySQL is incapable of executing a
count of the number of fulltext matches on 3 million rows.
I really thought that MySQL 4 was supposed to be able to handle such a load.
I still think my configuration may be to blame.
?
- John






--
>>Could you send the output of an EXPLAIN for your query?

>Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)




[EMAIL PROTECTED] wrote:
>>Have you checked the "Optimization" section of the manual yet?
>>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
> 
> 
> Oh yes, as I've attempted to configure the my.cnf file for best performance.  The
> query is correct.  The fulltext index is correct as I built the fulltext index on
> the
> single column (took 9 minutes) and even did "repair" and "optimize" on the table...
> so I
> don't think its the index.  I'm thinking its the server config...
> 
> - John
> 
> 
> 
> [EMAIL PROTECTED] wrote:
> 
>>I'm running into a problem with some queries running on a dedicated mysql server
> (2.0
>>GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST
> queries
>>are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2
> -
> 0.75
>>seconds) but it was only fast for a day or so.
>>Here's the rundown:
>>
>>TABLE:  fulltext_table (some_id, the_text) 
>>Rows: 3,237,981 
>>Type: MyISAM
>>Size: 920.8 MB 
>>
>>QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST
> ('blue');
>>or 
>>QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH
> (the_text)
>>AGAINST ('blue') LIMIT 0, 20;
>>
>>Both are problematic.  I even tried placing a limit of 2 on the first query
> but
>>it didn't improve anything.  The table has a fulltext index on the column and is
>>optimized.  No other users are connected to the server.
>>
>>Is there a RED FLAG in here somewhere?
>>
>>MySQL configuration settings (using my-huge.cnf template):
>>key_buffer = 500M
>>max_allowed_packet = 1M
>>table_cache = 512
>>sort_buffer_size = 10M
>>read_buffer_size = 2M
>>myisam_sort_buffer_size = 64M
>>#thread_cache = 8
>>thread_concurrency = 8
>>#- Modifications --- #
>>ft_min_word_len = 3
>>set-variable = table_cache=1024
>>set-variable = max_heap_table_size=64M
>>set-variable = tmp_table_size=128M
>>set-variable = query_cache_limit=2M
>>query_cache_type=1
>>
>>
>>Performance Test:
>>SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
>>+----------+
>>| COUNT(*) |
>>+----------+
>>|    95074 |
>>+----------+
>>1 row in set (27.83 sec)
>>
>>Statistics for vmstat 1 (my apologies if this doesn't look pretty):
>>---
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>>0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
>> 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
>> 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
>> 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
>> 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
>> 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
>> 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
>> 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
>> 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
>> 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
>> 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
>> 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
>> 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
>> 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
>> 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>> 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
>> 0  1  19516  17632  424

Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
>Could you send the output of an EXPLAIN for your query?

Sure, pretty sure the index is fine though:

mysql> EXPLAIN SELECT COUNT(*) FROM product_fulltext WHERE MATCH (search_text) AGAINST ('black');
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| table            | type     | possible_keys | key         | key_len | ref  | rows | Extra       |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
| product_fulltext | fulltext | search_text   | search_text |       0 |      |    1 | Using where |
+------------------+----------+---------------+-------------+---------+------+------+-------------+
1 row in set (0.00 sec)




[EMAIL PROTECTED] wrote:
>>Have you checked the "Optimization" section of the manual yet?
>>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
> 
> 
> Oh yes, as I've attempted to configure the my.cnf file for best performance.  The
> query is correct.  The fulltext index is correct as I built the fulltext index on the
> single column (took 9 minutes) and even did "repair" and "optimize" on the table... 
> so I
> don't think its the index.  I'm thinking its the server config...
> 
> - John
> 
> 
> 
> [EMAIL PROTECTED] wrote:
> 
>>I'm running into a problem with some queries running on a dedicated mysql server
> (2.0
>>GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST
> queries
>>are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 -
> 0.75
>>seconds) but it was only fast for a day or so.
>>Here's the rundown:
>>
>>TABLE:  fulltext_table (some_id, the_text) 
>>Rows: 3,237,981 
>>Type: MyISAM
>>Size: 920.8 MB 
>>
>>QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST
> ('blue');
>>or 
>>QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH
> (the_text)
>>AGAINST ('blue') LIMIT 0, 20;
>>
>>Both are problematic.  I even tried placing a limit of 2 on the first query
> but
>>it didn't improve anything.  The table has a fulltext index on the column and is
>>optimized.  No other users are connected to the server.
>>
>>Is there a RED FLAG in here somewhere?
>>
>>MySQL configuration settings (using my-huge.cnf template):
>>key_buffer = 500M
>>max_allowed_packet = 1M
>>table_cache = 512
>>sort_buffer_size = 10M
>>read_buffer_size = 2M
>>myisam_sort_buffer_size = 64M
>>#thread_cache = 8
>>thread_concurrency = 8
>>#- Modifications --- #
>>ft_min_word_len = 3
>>set-variable = table_cache=1024
>>set-variable = max_heap_table_size=64M
>>set-variable = tmp_table_size=128M
>>set-variable = query_cache_limit=2M
>>query_cache_type=1
>>
>>
>>Performance Test:
>>SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
>>+----------+
>>| COUNT(*) |
>>+----------+
>>|    95074 |
>>+----------+
>>1 row in set (27.83 sec)
>>
>>Statistics for vmstat 1 (my apologies if this doesn't look pretty):
>>---
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>>0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
>> 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
>> 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
>> 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
>> 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
>> 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
>> 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
>> 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
>> 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
>> 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
>> 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
>> 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
>> 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
>> 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
>> 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
>>procs  memory  swap  io system cpu
>> r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>> 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
>> 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
>> 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
>> 0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
>> 0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
>> 0  1  19516  17556  4

Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Could you send the output of an EXPLAIN for your query?
--V
[EMAIL PROTECTED] wrote:
Have you checked the "Optimization" section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The query is correct.  The 
fulltext index is correct as I built the fulltext index on the single column (took 9 minutes) and even did 
"repair" and "optimize" on the table... so I don't think its the index.  I'm thinking 
its the server config...
- John

[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated mysql server (2.0
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries
are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 - 0.75
seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 
   
QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text)
AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but
it didn't improve anything.  The table has a fulltext index on the column and is
optimized.  No other users are connected to the server.
Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)
Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
0  0  19516  19056  42404 17697520040 0  13240  0  0 97  2
Statistics for top command:
-
PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
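
A minimal sketch of the EXPLAIN being asked for above, using the table and column
names quoted in the post (fulltext_table, the_text); it is the plan, not the count,
that matters here:

  EXPLAIN SELECT COUNT(*)
  FROM fulltext_table
  WHERE MATCH (the_text) AGAINST ('white');
  -- A healthy plan shows type=fulltext and the FULLTEXT key on the_text;
  -- type=ALL would mean the fulltext index is not being used at all.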

Re: Slow Queries on Fast Server?

2004-08-25 Thread JVanV8
>Have you checked the "Optimization" section of the manual yet?
>http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html

Oh yes, as I've attempted to configure the my.cnf file for best performance.  The 
query is correct.  The fulltext index is correct: I built the fulltext index on the 
single column (it took 9 minutes) and even ran "repair" and "optimize" on the table... so 
I don't think it's the index.  I'm thinking it's the server config...

- John



[EMAIL PROTECTED] wrote:
> I'm running into a problem with some queries running on a dedicated mysql server (2.0
> GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
> queries
> are taking 5-20 seconds.  Performance was excellent for some reason one day (0.2 - 
> 0.75
> seconds) but it was only fast for a day or so.
> Here's the rundown:
> 
> TABLE:  fulltext_table (some_id, the_text) 
> Rows: 3,237,981 
> Type: MyISAM
> Size: 920.8 MB 
> 
> QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
> or 
> QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text)
> AGAINST ('blue') LIMIT 0, 20;
> 
> Both are problematic.  I even tried placing a limit of 2 on the first query but
> it didn't improve anything.  The table has a fulltext index on the column and is
> optimized.  No other users are connected to the server.
> 
> Is there a RED FLAG in here somewhere?
> 
> MySQL configuration settings (using my-huge.cnf template):
> key_buffer = 500M
> max_allowed_packet = 1M
> table_cache = 512
> sort_buffer_size = 10M
> read_buffer_size = 2M
> myisam_sort_buffer_size = 64M
> #thread_cache = 8
> thread_concurrency = 8
> #- Modifications --- #
> ft_min_word_len = 3
> set-variable = table_cache=1024
> set-variable = max_heap_table_size=64M
> set-variable = tmp_table_size=128M
> set-variable = query_cache_limit=2M
> query_cache_type=1
> 
> 
> Performance Test:
> SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
> +----------+
> | COUNT(*) |
> +----------+
> |    95074 |
> +----------+
> 1 row in set (27.83 sec)
> 
> Statistics for vmstat 1 (my apologies if this doesn't look pretty):
> ---
> procs  memory  swap  io system cpu
>  r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
> 0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
>  0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
>  0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
>  0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
>  0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
>  0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
>  0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
>  0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
>  0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
>  0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
>  0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
>  0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
>  0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
>  0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
>  0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
> procs  memory  swap  io system cpu
>  r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
>  0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
>  0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
>  0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
>  0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
>  0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
>  0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
>  0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
>  0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
>  0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
>  0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
>  0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
>  0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
>  0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
>  0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
>  0  0  19516  19056  42404 17697520040 0  13240  0  0 97 
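
The vmstat trace quoted above shows the box spending roughly half its time in I/O
wait while the query runs, which suggests the fulltext index is not staying in the
500M key buffer and index blocks are being re-read from disk on every search. A
hedged sketch of how that could be checked from the client, using only standard
SHOW commands and the table name from the post:

  -- How big are the data and index files really?
  SHOW TABLE STATUS LIKE 'fulltext_table';   -- compare Index_length with key_buffer

  -- Is the key cache missing and falling back to disk?
  SHOW STATUS LIKE 'Key_read%';
  -- Key_reads divided by Key_read_requests should stay very small (well under 1%);
  -- a high ratio while the MATCH query runs means the index does not fit in the cache.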

Re: Slow Queries on Fast Server?

2004-08-25 Thread V. M. Brasseur
Have you checked the "Optimization" section of the manual yet?
http://dev.mysql.com/doc/mysql/en/MySQL_Optimization.html
It's probably the best place to start.
Cheers,
--V
[EMAIL PROTECTED] wrote:
I'm running into a problem with some queries running on a dedicated mysql server (2.0 
GHz, 2GB RAM).  Fulltext searching really exemplifies this as most MATCH, AGAINST 
queries are taking 5-20 seconds.  Performance was excellent for some reason one day 
(0.2 - 0.75 seconds) but it was only fast for a day or so.
Here's the rundown:
TABLE:  fulltext_table (some_id, the_text) 
Rows: 3,237,981 
Type: MyISAM
Size: 920.8 MB 

QUERY: SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue');
or 
QUERY: SELECT SQL_CALC_FOUND_ROWS some_id FROM fulltext_table WHERE MATCH (the_text) AGAINST ('blue') LIMIT 0, 20;

Both are problematic.  I even tried placing a limit of 2 on the first query but it 
didn't improve anything.  The table has a fulltext index on the column and is 
optimized.  No other users are connected to the server.
Is there a RED FLAG in here somewhere?
MySQL configuration settings (using my-huge.cnf template):
key_buffer = 500M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 10M
read_buffer_size = 2M
myisam_sort_buffer_size = 64M
#thread_cache = 8
thread_concurrency = 8
#- Modifications --- #
ft_min_word_len = 3
set-variable = table_cache=1024
set-variable = max_heap_table_size=64M
set-variable = tmp_table_size=128M
set-variable = query_cache_limit=2M
query_cache_type=1
Performance Test:
SELECT COUNT(*) FROM fulltext_table WHERE MATCH (the_text) AGAINST ('white');
+----------+
| COUNT(*) |
+----------+
|    95074 |
+----------+
1 row in set (27.83 sec)
Statistics for vmstat 1 (my apologies if this doesn't look pretty):
---
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
0  0  19500  17800  42432 177172800 060  11330  0  0 99  1
 0  1  19500  21524  42428 1765728  7240   960 0  536   444  5  1 82 12
 0  1  19500  19512  42424 176782000  2348 0  912   592  0  1 50 49
 0  1  19500  17788  42424 176954000  1980 0  868   588  0  1 51 48
 0  1  19500  17568  42424 176976000  2300 0  723   401  0  0 50 49
 0  1  19500  17704  42428 176962000  193620  662   364  0  0 51 49
 0  1  19500  17560  42428 176976400  2224 0  696   400  0  0 51 49
 0  1  19500  17504  42424 176982400  2136 0  670   380  0  0 51 49
 0  1  19500  17616  42424 176971200  2228 0  693   415  0  0 51 49
 0  1  19508  17608  42420 176972408  2348 8  692   389  0  0 50 50
 0  1  19508  17532  42428 176979200  1896   108  654   366  0  0 50 49
 0  1  19512  17644  42424 176968404  2220 4  720   450  0  1 50 49
 0  1  19516  17620  42420 176971204  2104 4  707   424  0  0 51 48
 0  1  19516  17744  42420 176958800  2476 0  762   462  0  1 50 49
 0  1  19516  17532  42416 176980400  2292 0  719   401  0  0 51 49
procs  memory  swap  io system cpu
 r  b   swpd   free   buff  cache   si   sobibo   incs us sy id wa
 0  1  19516  17388  42424 176994000  221616  699   388  0  0 51 49
 0  1  19516  17632  42420 176970000  1836 0  629   380  0  0 51 49
 0  1  19516  17596  42420 176973200  2112 0  661   374  0  1 51 48
 0  1  19516  17580  42416 176975200  1836 0  631   396  0  0 51 49
 0  1  19516  17624  42416 176970800  2036 0  654   368  0  0 51 49
 0  1  19516  17556  42420 176977200  188016  643   381  0  0 50 50
 0  1  19516  17652  42420 176967600  1984 0  657   380  0  0 51 49
 0  1  19516  17532  42416 176980000  1940 0  646   386  0  1 50 49
 0  1  19516  17520  42416 176981200  1832 0  631   389  0  0 50 49
 0  1  19516  17548  42412 176978800  2052 0  648   387  0  1 50 49
 0  1  19516  17700  42412 176963600  244028  741   448  0  0 50 50
 0  1  19516  17656  42408 176968400  2384 0  683   412  0  1 50 49
 0  1  19516  17676  42408 176966000  2316 0  679   387  0  1 50 49
 0  1  19516  17624  42404 176971200  2128 0  652   407  0  1 50 49
 0  0  19516  19056  42404 17697520040 0  13240  0  0 97  2
Statistics for top command:
-
 PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 4784 root  15   0   488  488   420 S 0.2  0.0   0:00   0 vmstat
 3979 mysql 16   0 68128  52M  2188 S 0.1  2.6   0:06   1 mysqld
 3982 mysql 15   0 68128  52M  2188 S 0.1  2.6   0:05   2 mysqld
1 root  15   0   512  512   452 S 0.0  0.0   0:05   2 init
2 root  RT   0 00 0 SW

RE: slow queries

2002-11-12 Thread Black, Kelly W [PCS]
Make sure that when you create the table, the column

 U_Number   | int(9) unsigned  |  | PRI | NULL

is declared as

 U_Number   | numeric()  |  | PRI | NULL
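
A sketch of the change being suggested, hedged on two points: the precision inside
numeric() did not survive the archive, so 9 is assumed from the original int(9), and
the table name w3t_Users comes from the EXPLAIN output quoted below. Note that
U_Number is currently AUTO_INCREMENT, and MySQL only allows AUTO_INCREMENT on
integer (or floating-point) columns, so that attribute would have to be dropped if
the type really were changed:

  -- Hypothetical ALTER; NUMERIC precision of 9 is assumed.
  ALTER TABLE w3t_Users
    MODIFY U_Number NUMERIC(9) NOT NULL;
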
-Original Message-
From: Petre Agenbag [mailto:internet@;boesmanland.com]
Sent: Tuesday, November 12, 2002 4:27 AM
To: [EMAIL PROTECTED]
Subject: slow queries


mysql,query

Hi List

I have a couple of slow queries listed in my slow log, but I don't know
what to do from here to make them "fast".

Here is an "EXPLAIN table":

mysql> explain w3t_Users;
+----------------+------------------+------+-----+---------+----------------+
| Field          | Type             | Null | Key | Default | Extra          |
+----------------+------------------+------+-----+---------+----------------+
| U_Username     | varchar(30)      |      | MUL |         |                |
| U_Password     | varchar(32)      |      |     |         |                |
| U_Email        | varchar(50)      | YES  |     | NULL    |                |
| U_Fakeemail    | varchar(50)      | YES  |     | NULL    |                |
| U_Name         | varchar(100)     | YES  |     | NULL    |                |
| U_Totalposts   | int(9) unsigned  | YES  |     | NULL    |                |
| U_Laston       | int(11) unsigned | YES  |     | NULL    |                |
| U_Signature    | text             | YES  |     | NULL    |                |
| U_Homepage     | varchar(150)     | YES  |     | NULL    |                |
| U_Occupation   | varchar(150)     | YES  |     | NULL    |                |
| U_Hobbies      | varchar(200)     | YES  |     | NULL    |                |
| U_Location     | varchar(200)     | YES  |     | NULL    |                |
| U_Bio          | text             | YES  |     | NULL    |                |
| U_Status       | varchar(15)      |      | MUL | User    |                |
| U_Sort         | int(4) unsigned  | YES  |     | NULL    |                |
| U_Display      | varchar(10)      |      |     |         |                |
| U_View         | varchar(10)      |      |     |         |                |
| U_PostsPer     | int(11) unsigned | YES  |     | NULL    |                |
| U_Number       | int(9) unsigned  |      | PRI | NULL    | auto_increment |
| U_EReplies     | char(3)          |      |     |         |                |
| U_Notify       | char(3)          |      |     |         |                |
| U_TextCols     | char(3)          | YES  |     | NULL    |                |
| U_TextRows     | char(3)          | YES  |     | NULL    |                |
| U_Extra1       | varchar(200)     | YES  |     | NULL    |                |
| U_Extra2       | varchar(200)     | YES  |     | NULL    |                |
| U_Extra3       | varchar(200)     | YES  |     | NULL    |                |
| U_Extra4       | varchar(200)     | YES  |     | NULL    |                |
| U_Extra5       | varchar(200)     | YES  |     | NULL    |                |
| U_Post_Format  | varchar(5)       |      |     |         |                |
| U_Registered   | int(11) unsigned | YES  |     | NULL    |                |
| U_Preview      | varchar(5)       | YES  |     | NULL    |                |
| U_Picture      | varchar(150)     | YES  |     | NULL    |                |
| U_PictureView  | char(3)          | YES  |     | NULL    |                |
| U_Visible      | char(3)          | YES  |     | yes     |                |
| U_PicturePosts | char(3)          | YES  |     | NULL    |                |
| U_AcceptPriv   | char(3)          | YES  |     | yes     |                |
| U_RegEmail     | varchar(50)      | YES  |     | NULL    |                |
| U_RegIP        | varchar(15)      | YES  |     | NULL    |                |
| U_Groups       | varchar(250)     | YES  |     | -1-     |                |
| U_Language     | varchar(20)      | YES  |     | NULL    |                |
| U_Title        | varchar(100)     | YES  |     | NULL    |                |
| U_FlatPosts    | char(2)          | YES  |     | NULL    |                |
| U_TempPass     | varchar(32)      | YES  |     | NULL    |                |
| U_Color        | varchar(15)      | YES  |     | NULL    |                |
| U_TempRead     | text             | YES  |     | NULL    |                |
| U_StyleSheet   | varchar(50)      | YES  |     | NULL    |                |
| U_TimeOffset   | varchar(10)      | YES  |     | NULL    |                |
| U_Privates     | int(4) unsigned  | YES  |     | 0       |                |
| U_FrontPage    | varchar(20)      | YES  |     | NULL    |                |
| U_ActiveThread | int(4) unsigned  | YES  |     | NULL    |                |
| U_StartPage    | char(2)          | YES  |     | cp      |                |
| U_Favorites    | varchar(250)     | YES  |     | -       |                |
| U_ShowSigs     | char(3)          | YES  |     | NULL    |                |
| U_OnlineFormat | char(3)          | YES  |     | NULL    |                |
| U_Rating       | v
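
The slow queries themselves are cut off above, so only a general sketch is possible:
take each statement from the slow log, EXPLAIN it, and index the columns it filters
and sorts on. The column names below come from the table description above, but the
WHERE clause itself is hypothetical:

  -- Hypothetical slow query: recently active users of a given status.
  EXPLAIN SELECT U_Username, U_Laston
  FROM w3t_Users
  WHERE U_Status = 'User' AND U_Laston > 1037000000
  ORDER BY U_Laston DESC;

  -- If EXPLAIN reports type=ALL (a full table scan), a compound index covering
  -- the WHERE and ORDER BY columns usually helps:
  ALTER TABLE w3t_Users ADD INDEX idx_status_laston (U_Status, U_Laston);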

Re: Slow queries

2002-07-27 Thread Bhavin Vyas

Thanks Mike! Yes, the query is certainly using tmp tables for the longest of
times. I will try and implement the solution you provided and see how it
goes. If I do, I will update you with the results.

Regards,
Bhavin.
- Original Message -
From: "mos" <[EMAIL PROTECTED]>
To: "Bhavin Vyas" <[EMAIL PROTECTED]>
Sent: Friday, July 26, 2002 2:21 PM
Subject: Re: Slow queries


> At 05:53 PM 7/24/2002, you wrote:
> >Hello,
> >I am running mysql in our production environment, which have a few reporting
> >tables with millions of rows. Certain queries cascade 2 such tables that
> >have millions of rows and there by, there are extremely slow, sometimes
> >taking over 10 minutes.
> >However, we are ok with that since the size of the data is extremely large.
> >However, the problem that we are facing is that when these queries are
> >running, other queries, *unrelated to the 2 large tables* are also put on
> >hold, so they can't be executed either and basically mysql becomes
> >unresponsive until I kill the SLOW query and then everything is back to
> >normal.
> >Anybody knows why this happens and if there is a solution for this (like
> >maybe, reprioritizing the slow query to a very low priority or something). I
> >would think that mysql would independently try to execute the other queries
> >but that does not seem to be happening.
> >
> >Thanks,
> >Bhavin.
>
> Bhavin,
>  If MySQL has to create temporary tables to do the join, then of
> course things are going to slow down quite a bit. I've been thinking about
> a solution and this may help (I haven't tried it). Why not create a RAM
> disk and have your MySQL TmpDir point to that? RAM is pretty cheap and a
> 30-100MB RAM drive would speed up table joins quite a bit because it can be
> written to 10x faster than a hard drive. There are a lot of 3rd party
> software RAM drives out there for Windows and Linux. You would have to
> specify a RAM drive large enough to hold all your temporary files. There
> are also RAM drive PCI cards and external RAM drive devices but these cost
> a lot more than a software RAM drive. These devices also let you have a RAM
> drive of 4g or more without taking memory away from the OS. Their data is
> persistent between reboots via an external battery.
>
>  MONTY: It would be nice if TmpDir could accept more than 1
> directory, like a delimited string of directories as in
> "o:\RAM;m:\MySQLTmp1;o:\MySQLTmp2" so when one runs out of disk space, it
> looks for another directory in the path.
>
> Mike
>
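
A minimal sketch of checking whether the big joins really are spilling to disk-based
temporary tables, and where tmpdir currently points; both are standard SHOW commands
and no table names are assumed:

  SHOW VARIABLES LIKE 'tmpdir';
  SHOW STATUS LIKE 'Created_tmp%';
  -- Created_tmp_disk_tables climbing while the slow query runs means the temporary
  -- table no longer fits in memory (tmp_table_size / max_heap_table_size) and is
  -- being written under tmpdir, the directory mos suggests moving onto a RAM disk.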


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Slow queries

2002-07-24 Thread Lopez David E-r9374c

Bhavin

I'm running queries that join one large table with 4 other smaller tables.
The large table is 12-14 million records. With a single-column index,
the select takes 13-15 seconds; with a compound index, it is sub one
second. Use EXPLAIN to verify which index is being used, and read the
manual on tuning your indexes.
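
A small sketch of the single-column versus compound index difference described
above; the table and column names are hypothetical, since the poster does not
show his schema:

  -- Single-column index: MySQL can narrow on customer_id, but still has to
  -- read and filter many rows to apply the date condition.
  ALTER TABLE big_table ADD INDEX idx_customer (customer_id);

  -- Compound index: both conditions are resolved inside one index, which is
  -- the "sub one second" case described above.
  ALTER TABLE big_table ADD INDEX idx_customer_date (customer_id, report_date);

  EXPLAIN SELECT SUM(amount)
  FROM big_table
  WHERE customer_id = 42 AND report_date >= '2002-07-01';
  -- The key column in the EXPLAIN output shows which of the two indexes is chosen.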

The only time it is slower (4-6 seconds) is when the index
is in swap. Can anyone tell me how to prevent Solaris from
moving part of mysqld to swap? I'm thinking a config file -
but where.

David


> -Original Message-
> From: Bhavin Vyas [mailto:[EMAIL PROTECTED]]
> Sent: Wednesday, July 24, 2002 3:54 PM
> To: [EMAIL PROTECTED]
> Subject: Slow queries
> 
> 
> Hello,
> I am running mysql in our production environment, which have 
> a few reporting
> tables with millions of rows. Certain queries cascade 2 such 
> tables that
> have millions of rows and there by, there are extremely slow, 
> sometimes
> taking over 10 minutes.
> However, we are ok with that since the size of the data is 
> extremely large.
> However, the problem that we are facing is that when these queries are
> running, other queries, *unrelated to the 2 large tables* are 
> also put on
> hold, so they can't be executed either and basically mysql becomes
> unresponsive until I kill the SLOW query and then everything 
> is back to
> normal.
> Anybody knows why this happens and if there is a solution for 
> this (like
> maybe, reprioritizing the slow query to a very low priority 
> or something). I
> would think that mysql would independently try to execute the 
> other queries
> but that does not seem to be happening.
> 
> Thanks,
> Bhavin.
> 
> 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php