how to improve mysql's query speed in table 'token' of keystone,when using in openstack with lots of VMs?

2014-08-02 Thread 曾国仕
when i used mysql as the keystone's backend in openstack ,i found that the 
'token' table saved 29 millions record (using myisam as engine,the size of 
token.MYD is 100G) and have 4 new token save per second. That result to the 
slow query of a token .since of inserting new token frequently,how could i set 
the configure to speed up the query operation. 


the token's struct is id,expires,extra,valid,user_id with index {expires,valid}
and  the select sql is "select id,expires,extra,valid,user_id from token where 
valid=1 and expires >='-XX-XX XX:XX:XX' and user_id 
='XXX';"with often return 2 results.


Here is some db status data in a real  openstack environment with 381 active 
VMs: 
+---+-+ 
| Variable_name | Value | 
+---+-+ 
| Handler_read_first | 259573419 | 
| Handler_read_key | 1344821219 | 
| Handler_read_next | 3908969530 | 
| Handler_read_prev | 1235 | 
| Handler_read_rnd | 1951101 | 
| Handler_read_rnd_next | 48777237518 | 
+---+-+ 

and 
+-++ 
| Variable_name | Value | 
+-++ 
| Qcache_free_blocks | 498 | 
| Qcache_free_memory | 1192512 | 
| Qcache_hits | 1122242834 | 
| Qcache_inserts | 352700155 | 
| Qcache_lowmem_prunes | 34145019 | 
| Qcache_not_cached | 1529123943 | 
| Qcache_queries_in_cache | 1681 | 
| Qcache_total_blocks | 4949 | 
+-++ ‍


it seems that the 'insert' operation of saving new token affects the query 
buffer,and result of a low-level of query-hit's rate.


please give me some help,thanks.

RE: Mesaure query speed and InnoDB pool

2013-04-17 Thread Ilya Kazakevich
Hello Rick,

>Run your query twice; take the second time.  For most queries the first run
>brings everything into cache, then the second gives you a repeatable,
though
>cached, timing.
Yes, but  I need cache to be > my database size to prevent other pages from
pushing out pages for my query, right?
Or I need to do at the dedicated server..

>Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique
>your indexes and query plan.
I speak about query optimization in general)


>Handler* is another way to get consistent values.  These numbers are
>unaffected by caching.
What variable exactly should I take?
Why can't I use" Innodb_pages_read"? That is number of page reads regardless
its source (pool or disk), is not it?


>1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you are
>running only InnoDB, the buffer_pool should be set to about 70% of
available
>RAM.
I will increase it now.
But I will need to disable swapping also to prevent my OS from swapping out
InnoDB pages.

Ilya.

>
>> -Original Message-
>> From: Ananda Kumar [mailto:anan...@gmail.com]
>> Sent: Tuesday, April 16, 2013 2:06 AM
>> To: Ilya Kazakevich
>> Cc: MySQL
>> Subject: Re: Mesaure query speed and InnoDB pool
>>
>> Does your query use proper indexes.
>> Does your query scan less number blocks/rows can you share the explain
>> plan of the sql
>>
>>
>> On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
>> ilya.kazakev...@jetbrains.com> wrote:
>>
>> > Hello,
>> >
>> > I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
>> > it reads data from disk and about 2 seconds when data already exists
>> > in pool. And it may take 10 seconds when _some_ pages are on disk
>> > and
>> some are in pool.
>> > So, what is the best way to test query performance? I have several
>> ideas:
>> > * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
>> > time
>> > * Set pool as small as possible to reduce its effect on query speed
>> > * Set pool larger than my db and run query to load all data into
>> > pool and measure speed then
>> >
>> > How do you measure your queries' speed?
>> >
>> > Ilya Kazakevich
>> >
>> >
>> > --
>> > MySQL General Mailing List
>> > For list archives: http://lists.mysql.com/mysql
>> > To unsubscribe:http://lists.mysql.com/mysql
>> >
>> >


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Mesaure query speed and InnoDB pool

2013-04-17 Thread Rick James
>Run your query twice; take the second time.  

One more thing -- avoid the Query cache.  That could lead to really bogus 
timings.

> Yes, but  I need cache to be > my database size to prevent other pages from 
> pushing out pages for my query, right?

Well, yes and no.  If the cache is big enough, there won't be any thrashing.
If the "working set" is smaller than the cache, then there won't be any 
thrashing.  That is, if you don't access all the data/index blocks, there could 
be room for everything that is needed (the "working set").
I often see, say, 100GB on disk and only 5GB of cache, yet the system is 
humming along fine -- the working set is < 5GB and/or the accesses to other 
blocks is infrequent enough so that it is not a big issue.

> I speak about query optimization in general

General tips, many relating to optimization:
http://mysql.rjweb.org/doc.php/ricksrots
Quick lesson in "compound indexes" (something that novices don't understand -- 
and a significant optimization principle):
http://mysql.rjweb.org/doc.php/index1

> I will increase it now.

I hope you are referring to increasing RAM.

> But I will need to disable swapping also to prevent my OS from swapping out 
> InnoDB pages.

NO!  Don't "disable" swapping; "avoid" swapping.  Disabling could cause a crash 
or other nasties.  "Avoiding" means decreasing the tunables so that mysqld does 
not need to be swapped.
This gives the main tunables:
http://mysql.rjweb.org/doc.php/memory
Decrease each as much as practical for your situation.  (For example, change 
max_connections from 200 down to 5 -- assuming you don't need more than 5 
simultaneous connections.)  innodb_buffer_pool_size is probably the biggest 
memory consumer, so it is the easiest way to shrink mysqld's footprint.


> -Original Message-
> From: Ilya Kazakevich [mailto:ilya.kazakev...@jetbrains.com]
> Sent: Wednesday, April 17, 2013 8:05 AM
> To: Rick James
> Cc: 'MySQL'
> Subject: RE: Mesaure query speed and InnoDB pool
> 
> Hello Rick,
> 
> >Run your query twice; take the second time.  For most queries the
> first
> >run brings everything into cache, then the second gives you a
> >repeatable,
> though
> >cached, timing.
> Yes, but  I need cache to be > my database size to prevent other pages
> from pushing out pages for my query, right?
> Or I need to do at the dedicated server..
> 
> >Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique
> >your indexes and query plan.
> I speak about query optimization in general)
> 
> 
> >Handler* is another way to get consistent values.  These numbers are
> >unaffected by caching.
> What variable exactly should I take?
> Why can't I use" Innodb_pages_read"? That is number of page reads
> regardless its source (pool or disk), is not it?
> 
> 
> >1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if
> you
> >are running only InnoDB, the buffer_pool should be set to about 70% of
> available
> >RAM.
> I will increase it now.
> But I will need to disable swapping also to prevent my OS from swapping
> out InnoDB pages.
> 
> Ilya.
> 
> >
> >> -Original Message-
> >> From: Ananda Kumar [mailto:anan...@gmail.com]
> >> Sent: Tuesday, April 16, 2013 2:06 AM
> >> To: Ilya Kazakevich
> >> Cc: MySQL
> >> Subject: Re: Mesaure query speed and InnoDB pool
> >>
> >> Does your query use proper indexes.
> >> Does your query scan less number blocks/rows can you share the
> >> explain plan of the sql
> >>
> >>
> >> On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
> >> ilya.kazakev...@jetbrains.com> wrote:
> >>
> >> > Hello,
> >> >
> >> > I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
> >> > it reads data from disk and about 2 seconds when data already
> >> > exists in pool. And it may take 10 seconds when _some_ pages are
> on
> >> > disk and
> >> some are in pool.
> >> > So, what is the best way to test query performance? I have several
> >> ideas:
> >> > * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of
> actual
> >> > time
> >> > * Set pool as small as possible to reduce its effect on query
> speed
> >> > * Set pool larger than my db and run query to load all data into
> >> > pool and measure speed then
> >> >
> >> > How do you measure your queries' speed?
> >> >
> >> > Ilya Kazakevich
> >> >
> >> >
> >> > --
> >> > MySQL General Mailing List
> >> > For list archives: http://lists.mysql.com/mysql
> >> > To unsubscribe:http://lists.mysql.com/mysql
> >> >
> >> >


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Mesaure query speed and InnoDB pool

2013-04-16 Thread Igor Shevtsov

Hi Rick,
I thought you have to dedicate 70-80% of available RAM not a total RAM.
Saying if I have 2 gig of RAM on my exclusively innodb box, and I 
dedicate 1.4Gig to innodb pool, my 64-bit linux machine will start 
swapping heavily.
If I set it to 800-900M, it just fine and I have like 100M of RAM left 
for some occasional process. I did try it.

Thanks,
Igor

On 16/04/13 16:21, Rick James wrote:

Run your query twice; take the second time.  For most queries the first run 
brings everything into cache, then the second gives you a repeatable, though 
cached, timing.

Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique your 
indexes and query plan.

Handler* is another way to get consistent values.  These numbers are unaffected 
by caching.

1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you are 
running only InnoDB, the buffer_pool should be set to about 70% of available 
RAM.


-Original Message-
From: Ananda Kumar [mailto:anan...@gmail.com]
Sent: Tuesday, April 16, 2013 2:06 AM
To: Ilya Kazakevich
Cc: MySQL
Subject: Re: Mesaure query speed and InnoDB pool

Does your query use proper indexes.
Does your query scan less number blocks/rows can you share the explain
plan of the sql


On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:


Hello,

I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it
reads data from disk and about 2 seconds when data already exists in
pool. And it may take 10 seconds when _some_ pages are on disk and

some are in pool.

So, what is the best way to test query performance? I have several

ideas:

* Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
time
* Set pool as small as possible to reduce its effect on query speed
* Set pool larger than my db and run query to load all data into pool
and measure speed then

How do you measure your queries' speed?

Ilya Kazakevich


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Mesaure query speed and InnoDB pool

2013-04-16 Thread Rick James
Swapping is really bad.  Shrink buffer_pool as needed to avoid swapping.  The 
70-80% 'rule' works for 'most' machines today, because most machines have a lot 
more than 2GB 'available' to MySQL.  As you observed, 2GB box would probably 
swap if buffer_pool were 1.4GB, so 800-900M is better.  Meanwhile, a 20GB box 
would be fine with 14GB, even 16GB.  The best formula would be something more 
complex than a simple percent.  I recommend "70% of available ram" because it 
safely covers most machines today.  Then I backpeddle like this when I see that 
the machine is smaller.  (Sorry, I was assuming you had a bigger machine, and 
had not allocated as much as you could.)  The old default of 8M is terrible.  
Even the new default is puny (most of the time).

> -Original Message-
> From: Igor Shevtsov [mailto:nixofort...@gmail.com]
> Sent: Tuesday, April 16, 2013 8:38 AM
> To: mysql@lists.mysql.com
> Subject: Re: Mesaure query speed and InnoDB pool
> 
> Hi Rick,
> I thought you have to dedicate 70-80% of available RAM not a total RAM.
> Saying if I have 2 gig of RAM on my exclusively innodb box, and I
> dedicate 1.4Gig to innodb pool, my 64-bit linux machine will start
> swapping heavily.
> If I set it to 800-900M, it just fine and I have like 100M of RAM left
> for some occasional process. I did try it.
> Thanks,
> Igor
> 
> On 16/04/13 16:21, Rick James wrote:
> > Run your query twice; take the second time.  For most queries the
> first run brings everything into cache, then the second gives you a
> repeatable, though cached, timing.
> >
> > Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will
> critique your indexes and query plan.
> >
> > Handler* is another way to get consistent values.  These numbers are
> unaffected by caching.
> >
> > 1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if
> you are running only InnoDB, the buffer_pool should be set to about 70%
> of available RAM.
> >
> >> -Original Message-
> >> From: Ananda Kumar [mailto:anan...@gmail.com]
> >> Sent: Tuesday, April 16, 2013 2:06 AM
> >> To: Ilya Kazakevich
> >> Cc: MySQL
> >> Subject: Re: Mesaure query speed and InnoDB pool
> >>
> >> Does your query use proper indexes.
> >> Does your query scan less number blocks/rows can you share the
> >> explain plan of the sql
> >>
> >>
> >> On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
> >> ilya.kazakev...@jetbrains.com> wrote:
> >>
> >>> Hello,
> >>>
> >>> I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when
> >>> it reads data from disk and about 2 seconds when data already
> exists
> >>> in pool. And it may take 10 seconds when _some_ pages are on disk
> >>> and
> >> some are in pool.
> >>> So, what is the best way to test query performance? I have several
> >> ideas:
> >>> * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
> >>> time
> >>> * Set pool as small as possible to reduce its effect on query speed
> >>> * Set pool larger than my db and run query to load all data into
> >>> pool and measure speed then
> >>>
> >>> How do you measure your queries' speed?
> >>>
> >>> Ilya Kazakevich
> >>>
> >>>
> >>> --
> >>> MySQL General Mailing List
> >>> For list archives: http://lists.mysql.com/mysql
> >>> To unsubscribe:http://lists.mysql.com/mysql
> >>>
> >>>
> 
> 
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



RE: Mesaure query speed and InnoDB pool

2013-04-16 Thread Rick James
Run your query twice; take the second time.  For most queries the first run 
brings everything into cache, then the second gives you a repeatable, though 
cached, timing.

Please provide EXPLAIN SELECT, SHOW CREATE TABLE, and we will critique your 
indexes and query plan.

Handler* is another way to get consistent values.  These numbers are unaffected 
by caching.

1GB buffer_pool?  You have only 2GB of available RAM?  Normally, if you are 
running only InnoDB, the buffer_pool should be set to about 70% of available 
RAM.

> -Original Message-
> From: Ananda Kumar [mailto:anan...@gmail.com]
> Sent: Tuesday, April 16, 2013 2:06 AM
> To: Ilya Kazakevich
> Cc: MySQL
> Subject: Re: Mesaure query speed and InnoDB pool
> 
> Does your query use proper indexes.
> Does your query scan less number blocks/rows can you share the explain
> plan of the sql
> 
> 
> On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
> ilya.kazakev...@jetbrains.com> wrote:
> 
> > Hello,
> >
> > I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it
> > reads data from disk and about 2 seconds when data already exists in
> > pool. And it may take 10 seconds when _some_ pages are on disk and
> some are in pool.
> > So, what is the best way to test query performance? I have several
> ideas:
> > * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual
> > time
> > * Set pool as small as possible to reduce its effect on query speed
> > * Set pool larger than my db and run query to load all data into pool
> > and measure speed then
> >
> > How do you measure your queries' speed?
> >
> > Ilya Kazakevich
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:http://lists.mysql.com/mysql
> >
> >

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Mesaure query speed and InnoDB pool

2013-04-16 Thread Ilya Kazakevich
Hello,

I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it reads
data from disk and about 2 seconds when data already exists in pool. And it
may take 10 seconds when _some_ pages are on disk and some are in pool. 
So, what is the best way to test query performance? I have several ideas:
* Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual time
* Set pool as small as possible to reduce its effect on query speed
* Set pool larger than my db and run query to load all data into pool and
measure speed then

How do you measure your queries' speed?

Ilya Kazakevich


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Mesaure query speed and InnoDB pool

2013-04-16 Thread Ananda Kumar
Does your query use proper indexes.
Does your query scan less number blocks/rows
can you share the explain plan of the sql


On Tue, Apr 16, 2013 at 2:23 PM, Ilya Kazakevich <
ilya.kazakev...@jetbrains.com> wrote:

> Hello,
>
> I have 12Gb DB and 1Gb InnoDB pool. My query takes 50 seconds when it reads
> data from disk and about 2 seconds when data already exists in pool. And it
> may take 10 seconds when _some_ pages are on disk and some are in pool.
> So, what is the best way to test query performance? I have several ideas:
> * Count 'Innodb_rows_read' or 'Innodb_pages_read' instead of actual time
> * Set pool as small as possible to reduce its effect on query speed
> * Set pool larger than my db and run query to load all data into pool and
> measure speed then
>
> How do you measure your queries' speed?
>
> Ilya Kazakevich
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/mysql
>
>


Re: Query Speed

2006-06-28 Thread Jay Pipes

Jesse wrote:
The error was, 
"Unknown column 'primary_grouping.State' in 'on clause'".  I assume this 
is in the ON clause that's JOINing the member_counts to the 
primary_grouping.


No, that's because of a stupid mistake on my part.  Here you go:

SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN State ST
 ON primary_grouping.State = ST.State
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS member_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN Members M
ON C.ID = M.ChapterID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS member_counts
 ON primary_grouping.State = member_counts.State
 AND primary_grouping.Sub = member_counts.Sub
 AND primary_grouping.ChapterType = member_counts.ChapterType
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS advisor_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN AdvChapterLink ACL
ON C.ID = ACL.ChapterID
   INNER JOIN LocalAdvisors LA
ON ACL.AdvisorID = LA.ID
AND LA.InvoiceNo IS NOT NULL
  GROUP BY S.State, S.Sub, C.ChapterType
) AS advisor_counts
 ON primary_grouping.State = advisor_counts.State
 AND primary_grouping.Sub = advisor_counts.Sub
 AND primary_grouping.ChapterType = advisor_counts.ChapterType;



- Original Message - From: "Jay Pipes" <[EMAIL PROTECTED]>
To: "Jesse" <[EMAIL PROTECTED]>
Cc: "mysql" 
Sent: Tuesday, June 27, 2006 7:22 PM
Subject: Re: Query Speed



Jesse wrote:
I worked with the query for a while, trying equi-joins instead of 
JOINs, and variuos other things.  I found that the queries that I was 
using to represent the TotMem & TotAdv columns was what was closing 
things down. I finally ended up using a sub-query to solve the 
problem.  I gathered the basic information, then added the "count 
columns" to that, and now, I'm down to less than a second execution.  
Much better!  Here's the new query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT 
Count(*) FROM Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN 
Schools S1 ON S1.ID=C1.SchoolID WHERE S1.State=sq.State AND 
S1.Sub=sq.Sub AND C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT 
Count(*) FROM ((AdvChapterLink ACL JOIN LocalAdvisors LA ON 
LA.ID=ACL.AdvisorID) JOIN Chapters C2 ON C2.ID=ACL.ChapterID) JOIN 
Schools S2 ON S2.ID=C2.SchoolID WHERE S2.State=Sq.State AND 
S2.Sub=Sq.Sub AND C2.ChapterType=sq.ChapterType AND LA.InvoiceNo IS 
NOT NULL) AS TotAdv FROM (SELECT S.State, S.Sub, ST.StateName, 
C.ChapterType FROM Schools S, State ST, Chapters C WHERE 
ST.State=S.State AND C.SchoolID=S.ID GROUP BY S.State, S.Sub, 
ST.StateName, C.ChapterType) AS sq ORDER BY State, Sub, ChapterType


Correlated subqueries are evil.  You will get much better performance 
by converting the correlations to derived tables (subqueries in the 
FROM clause) like so (I formatted it differently so I could read it).


Here is your original query (from above):

SELECT
sq.State
, sq.Sub
, sq.StateName
, sq.ChapterType
, (
SELECT Count(*)
FROM Members M
JOIN Chapters C1 ON C1.ID=M.ChapterID
JOIN Schools S1 ON S1.ID=C1.SchoolID
WHERE S1.State=sq.State
AND S1.Sub=sq.Sub
AND C1.ChapterType=sq.ChapterType
) AS TotMem
, (
SELECT Count(*)
FROM (
(
AdvChapterLink ACL
JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID
)
JOIN Chapters C2 ON C2.ID=ACL.ChapterID
)
JOIN Schools S2 ON S2.ID=C2.SchoolID
WHERE S2.State=Sq.State
AND S2.Sub=Sq.Sub
AND C2.ChapterType=sq.ChapterType
AND LA.InvoiceNo IS NOT NULL
) AS TotAdv
FROM (
SELECT S.State, S.Sub, ST.StateName, C.ChapterType
FROM Schools S, State ST, Chapters C
WHERE ST.State=S.State
AND C.SchoolID=S.ID
GROUP BY S.State, S.Sub, ST.StateName, C.ChapterType
) AS sq
ORDER BY
State
, Sub
, ChapterType;

The problem is that the two correlated subqueries in the SELECT clause 
will be executed *once for every result returned from the outer 
primary query.  That means lots of queries, needlessly.  Instead, get 
rid of all the parenthetical joins (not needed), the join to State in 
the GROUP BY (not needed) and convert the correlations into derived 
tables, using standard joins.  Remember to think in terms of the 
*sets* of data upon which you are operating, not in terms of iterating 
through a set of data. They are different concepts.  Here's the new 
query:


SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters 

Re: Query Speed

2006-06-28 Thread Jesse
I've never seen a query like this, and didn't know it was possible to do a 
lot of the things that you're doing. That's great.  I believe I understand 
most of it.  However, I got an error that doesn't seem to make any sense to 
me when I tried to execute the query.  The error was, "Unknown column 
'primary_grouping.State' in 'on clause'".  I assume this is in the ON clause 
that's JOINing the member_counts to the primary_grouping.


BTW, the "original" query executes in .811 seconds.  I'm very satisfied with 
that timing.  However, if I can get your more efficient query working, I 
would like to.  Any ideas why it's not working?


Thanks,
Jesse

- Original Message - 
From: "Jay Pipes" <[EMAIL PROTECTED]>

To: "Jesse" <[EMAIL PROTECTED]>
Cc: "mysql" 
Sent: Tuesday, June 27, 2006 7:22 PM
Subject: Re: Query Speed



Jesse wrote:
I worked with the query for a while, trying equi-joins instead of JOINs, 
and variuos other things.  I found that the queries that I was using to 
represent the TotMem & TotAdv columns was what was closing things down. 
I finally ended up using a sub-query to solve the problem.  I gathered 
the basic information, then added the "count columns" to that, and now, 
I'm down to less than a second execution.  Much better!  Here's the new 
query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT Count(*) 
FROM Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN Schools S1 ON 
S1.ID=C1.SchoolID WHERE S1.State=sq.State AND S1.Sub=sq.Sub AND 
C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=Sq.State AND S2.Sub=Sq.Sub AND 
C2.ChapterType=sq.ChapterType AND LA.InvoiceNo IS NOT NULL) AS TotAdv 
FROM (SELECT S.State, S.Sub, ST.StateName, C.ChapterType FROM Schools S, 
State ST, Chapters C WHERE ST.State=S.State AND C.SchoolID=S.ID GROUP BY 
S.State, S.Sub, ST.StateName, C.ChapterType) AS sq ORDER BY State, Sub, 
ChapterType


Correlated subqueries are evil.  You will get much better performance by 
converting the correlations to derived tables (subqueries in the FROM 
clause) like so (I formatted it differently so I could read it).


Here is your original query (from above):

SELECT
sq.State
, sq.Sub
, sq.StateName
, sq.ChapterType
, (
SELECT Count(*)
FROM Members M
JOIN Chapters C1 ON C1.ID=M.ChapterID
JOIN Schools S1 ON S1.ID=C1.SchoolID
WHERE S1.State=sq.State
AND S1.Sub=sq.Sub
AND C1.ChapterType=sq.ChapterType
) AS TotMem
, (
SELECT Count(*)
FROM (
(
AdvChapterLink ACL
JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID
)
JOIN Chapters C2 ON C2.ID=ACL.ChapterID
)
JOIN Schools S2 ON S2.ID=C2.SchoolID
WHERE S2.State=Sq.State
AND S2.Sub=Sq.Sub
AND C2.ChapterType=sq.ChapterType
AND LA.InvoiceNo IS NOT NULL
) AS TotAdv
FROM (
SELECT S.State, S.Sub, ST.StateName, C.ChapterType
FROM Schools S, State ST, Chapters C
WHERE ST.State=S.State
AND C.SchoolID=S.ID
GROUP BY S.State, S.Sub, ST.StateName, C.ChapterType
) AS sq
ORDER BY
State
, Sub
, ChapterType;

The problem is that the two correlated subqueries in the SELECT clause 
will be executed *once for every result returned from the outer primary 
query.  That means lots of queries, needlessly.  Instead, get rid of all 
the parenthetical joins (not needed), the join to State in the GROUP BY 
(not needed) and convert the correlations into derived tables, using 
standard joins.  Remember to think in terms of the *sets* of data upon 
which you are operating, not in terms of iterating through a set of data. 
They are different concepts.  Here's the new query:


SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS member_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN Members M
ON C.ID = M.ChapterID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS member_counts
 ON primary_grouping.State = member_counts.State
 AND primary_grouping.Sub = member_counts.Sub
 AND primary_grouping.ChapterType = member_counts.ChapterType
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS advisor_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN AdvChapterLink ACL
ON C.ID = ACL.ChapterID
   INNER JOIN LocalAdvisors LA
ON ACL.AdvisorID = LA.ID
AND LA.InvoiceNo IS NOT NULL
  GROUP BY S.State, S.Sub, C.ChapterType
) AS advisor_counts
INNER JOIN State ST
 ON primary_grouping.State =

Re: Query Speed

2006-06-27 Thread Jay Pipes

Jesse wrote:
I worked with the query for a while, trying equi-joins instead of JOINs, 
and variuos other things.  I found that the queries that I was using to 
represent the TotMem & TotAdv columns was what was closing things down.  
I finally ended up using a sub-query to solve the problem.  I gathered 
the basic information, then added the "count columns" to that, and now, 
I'm down to less than a second execution.  Much better!  Here's the new 
query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT Count(*) 
FROM Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN Schools S1 ON 
S1.ID=C1.SchoolID WHERE S1.State=sq.State AND S1.Sub=sq.Sub AND 
C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=Sq.State AND S2.Sub=Sq.Sub AND 
C2.ChapterType=sq.ChapterType AND LA.InvoiceNo IS NOT NULL) AS TotAdv 
FROM (SELECT S.State, S.Sub, ST.StateName, C.ChapterType FROM Schools S, 
State ST, Chapters C WHERE ST.State=S.State AND C.SchoolID=S.ID GROUP BY 
S.State, S.Sub, ST.StateName, C.ChapterType) AS sq ORDER BY State, Sub, 
ChapterType


Correlated subqueries are evil.  You will get much better performance by 
converting the correlations to derived tables (subqueries in the FROM 
clause) like so (I formatted it differently so I could read it).


Here is your original query (from above):

SELECT
sq.State
, sq.Sub
, sq.StateName
, sq.ChapterType
, (
SELECT Count(*)
FROM Members M
JOIN Chapters C1 ON C1.ID=M.ChapterID
JOIN Schools S1 ON S1.ID=C1.SchoolID
WHERE S1.State=sq.State
AND S1.Sub=sq.Sub
AND C1.ChapterType=sq.ChapterType
) AS TotMem
, (
SELECT Count(*)
FROM (
(
AdvChapterLink ACL
JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID
)
JOIN Chapters C2 ON C2.ID=ACL.ChapterID
)
JOIN Schools S2 ON S2.ID=C2.SchoolID
WHERE S2.State=Sq.State
AND S2.Sub=Sq.Sub
AND C2.ChapterType=sq.ChapterType
AND LA.InvoiceNo IS NOT NULL
) AS TotAdv
FROM (
SELECT S.State, S.Sub, ST.StateName, C.ChapterType
FROM Schools S, State ST, Chapters C
WHERE ST.State=S.State
AND C.SchoolID=S.ID
GROUP BY S.State, S.Sub, ST.StateName, C.ChapterType
) AS sq
ORDER BY
State
, Sub
, ChapterType;

The problem is that the two correlated subqueries in the SELECT clause 
will be executed *once for every result returned from the outer primary 
query.  That means lots of queries, needlessly.  Instead, get rid of all 
the parenthetical joins (not needed), the join to State in the GROUP BY 
(not needed) and convert the correlations into derived tables, using 
standard joins.  Remember to think in terms of the *sets* of data upon 
which you are operating, not in terms of iterating through a set of 
data.  They are different concepts.  Here's the new query:


SELECT
primary_grouping.State
, primary_grouping.Sub
, ST.StateName
, primary_grouping.ChapterType
, member_counts.member_count AS TotMem
, advisor_counts.advisor_count AS TotAdv
FROM (
  SELECT S.State, S.Sub, C.ChapterType
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS primary_grouping
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS member_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN Members M
ON C.ID = M.ChapterID
  GROUP BY S.State, S.Sub, C.ChapterType
) AS member_counts
 ON primary_grouping.State = member_counts.State
 AND primary_grouping.Sub = member_counts.Sub
 AND primary_grouping.ChapterType = member_counts.ChapterType
INNER JOIN (
  SELECT S.State, S.Sub, C.ChapterType, COUNT(*) AS advisor_count
  FROM Schools S
   INNER JOIN Chapters C
ON S.ID = C.SchoolID
   INNER JOIN AdvChapterLink ACL
ON C.ID = ACL.ChapterID
   INNER JOIN LocalAdvisors LA
ON ACL.AdvisorID = LA.ID
AND LA.InvoiceNo IS NOT NULL
  GROUP BY S.State, S.Sub, C.ChapterType
) AS advisor_counts
INNER JOIN State ST
 ON primary_grouping.State = ST.State
ORDER BY
State
, Sub
, ChapterType;

This should reduce the number of queries actually executed to 3 instead 
of > 700 million


Let us know the output of EXPLAIN and the speed difference.

Cheers,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Speed

2006-06-27 Thread Jesse
I worked with the query for a while, trying equi-joins instead of JOINs, and 
variuos other things.  I found that the queries that I was using to 
represent the TotMem & TotAdv columns was what was closing things down.  I 
finally ended up using a sub-query to solve the problem.  I gathered the 
basic information, then added the "count columns" to that, and now, I'm down 
to less than a second execution.  Much better!  Here's the new query:


SELECT sq.State, sq.Sub, sq.StateName, sq.ChapterType, (SELECT Count(*) FROM 
Members M JOIN Chapters C1 ON C1.ID=M.ChapterID JOIN Schools S1 ON 
S1.ID=C1.SchoolID WHERE S1.State=sq.State AND S1.Sub=sq.Sub AND 
C1.ChapterType=sq.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=Sq.State AND S2.Sub=Sq.Sub AND C2.ChapterType=sq.ChapterType 
AND LA.InvoiceNo IS NOT NULL) AS TotAdv FROM (SELECT S.State, S.Sub, 
ST.StateName, C.ChapterType FROM Schools S, State ST, Chapters C WHERE 
ST.State=S.State AND C.SchoolID=S.ID GROUP BY S.State, S.Sub, ST.StateName, 
C.ChapterType) AS sq ORDER BY State, Sub, ChapterType


Anyway, thanks for your help.

Jesse


- Original Message - 
From: "Dan Buettner" <[EMAIL PROTECTED]>

Cc: "Jesse" <[EMAIL PROTECTED]>; "mysql" 
Sent: Monday, June 26, 2006 8:18 PM
Subject: Re: Query Speed



Sorry, I had an extra '9' in there.  Math is actually:

54 * 69 * 1 * 65 * 1 * 2 * 1 * 65 * 1 * 24 = 755632800

Still a lot of operations, and if you index the state field, you can 
potentially reduce it by a factor of 54.


Dan



Dan Buettner wrote:
EXPLAIN output is a good way to see how MySQL is planning to execute your 
query - which indexes it chooses to use, how much work it thinks it needs 
to do for each table reference.


My understanding is that you can get an approximate / rough idea of 
operations needed by multiplying all the 'rows' columns in the EXPLAIN 
output.  For you, that's

54 * 9 * 69 * 1 * 65 * 1 * 2 * 1 * 65 * 1 * 24 = 6800695200

which is obviously a lot of operations.

Your state table may not have an index on the column you are joining on. 
Try adding one there.  (Don't see your 'state' table def here to check).


What does your CPU usage look like while this query is running?  And what 
does 'SHOW PROCESSLIST' tell you while this query is running?


Dan



Jesse wrote:
Thanks for the help.  Just so you know, I stated in the original message 
that the tables are InnoDB, but I've since converted them to MyISAM to 
see if that helped.  It didn't.  Here's the information you wanted:


Here are the table structures:

CREATE TABLE `members` (
 `ID` int(10) NOT NULL,
 `ChapterID` int(10) default NULL,
 `FirstName` varchar(25) character set utf8 default NULL,
 `MI` varchar(1) character set utf8 default NULL,
 `LastName` varchar(25) character set utf8 default NULL,
 `UID` varchar(15) character set utf8 default NULL,
 `MemberType` varchar(20) character set utf8 default NULL,
 `InvoiceNo` varchar(7) character set utf8 default NULL,
 `PayDate` datetime default NULL,
 `MembershipExpires` datetime default NULL,
 `NLCEligible` tinyint(1) NOT NULL default '1',
 `PayNatDues` tinyint(1) NOT NULL default '1',
 `GPA` decimal(18,2) default NULL,
 `GradYear` int(10) default NULL,
 `Gender` varchar(1) character set utf8 default NULL,
 `BusEdCourse` varchar(40) character set utf8 default NULL,
 `AddDate` datetime default NULL,
 `PhotoID` smallint(5) default NULL,
 PRIMARY KEY  (`ID`),
 KEY `IX_Members` (`LastName`,`FirstName`,`MI`),
 KEY `IX_Members_1` (`UID`),
 KEY `IX_Members_2` (`InvoiceNo`),
 KEY `IX_Members_3` (`ChapterID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `chapters` (
 `ID` int(10) NOT NULL,
 `SchoolID` int(10) default NULL,
 `Name` varchar(50) character set utf8 default NULL,
 `ChapterType` varchar(25) character set utf8 default NULL,
 `UID` varchar(8) character set utf8 default NULL,
 `NextMemNo` int(10) default NULL,
 `Males` int(10) default NULL,
 `Females` int(10) default NULL,
 `AfricanAmerican` int(10) default NULL,
 `Asian` int(10) default NULL,
 `Caucasian` int(10) default NULL,
 `Hispanic` int(10) default NULL,
 `NativeAmerican` int(10) default NULL,
 `Other` int(10) default NULL,
 `Grade6` int(10) default NULL,
 `Grade7` int(10) default NULL,
 `Grade8` int(10) default NULL,
 `Grade9` int(10) default NULL,
 `Grade10` int(10) default NULL,
 `Grade11` int(10) default NULL,
 `Grade12` int(10) default NULL,
 `Freshmen` int(10) default NULL,
 `Sophomore` int(10) default NULL,
 `Junior` int(10) default NULL,
 `Senior` int(10) default NULL,
 `PostGraduate` int(10) default NULL,
 `AgeBelow22` int(10) default NULL,
 `Age22_25` int(10) default NULL,
 `Age26_30` int(10) default NULL,
 `Age31_40` int(10) default NU

Re: Query Speed

2006-06-26 Thread Dan Buettner
L,
 `Fax` varchar(15) character set utf8 default NULL,
 `Region` varchar(10) character set utf8 default NULL,
 `District` varchar(10) character set utf8 default NULL,
 `InvoiceRequired` tinyint(1) default '0',
 `PrincipalsName` varchar(50) default NULL,
 `PrincipalsEMail` varchar(65) default NULL,
 PRIMARY KEY  (`ID`),
 KEY `IX_Schools` (`Name`),
 KEY `IX_Schools1` (`State`,`Sub`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `advchapterlink` (
 `ID` int(10) NOT NULL,
 `AdvisorID` int(10) default NULL,
 `ChapterID` int(10) default NULL,
 `Rank` int(10) default NULL,
 `Primary` tinyint(1) NOT NULL default '0',
 `AllowToEdit` tinyint(1) NOT NULL default '1',
 PRIMARY KEY  (`ID`),
 KEY `IX_AdvChapterLink` (`AdvisorID`,`ChapterID`),
 KEY `IX_AdvChapterLink_1` (`ChapterID`,`AdvisorID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `localadvisors` (
 `ID` int(10) NOT NULL,
 `FirstName` varchar(30) character set utf8 default NULL,
 `LastName` varchar(35) character set utf8 default NULL,
 `ChapterID` int(10) default NULL,
 `Address` varchar(50) character set utf8 default NULL,
 `City` varchar(30) character set utf8 default NULL,
 `State` varchar(2) character set utf8 default NULL,
 `Sub` varchar(2) character set utf8 default NULL,
 `Zip` varchar(10) character set utf8 default NULL,
 `EMailAddress` varchar(50) character set utf8 default NULL,
 `HomePhone` varchar(15) character set utf8 default NULL,
 `WorkPhone` varchar(15) character set utf8 default NULL,
 `WorkExt` varchar(10) character set utf8 default NULL,
 `CellPhone` varchar(15) character set utf8 default NULL,
 `UserName` varchar(50) character set utf8 default NULL,
 `FirstTimeIn` tinyint(1) NOT NULL default '1',
 `Fax` varchar(15) character set utf8 default NULL,
 `FaxPermission` tinyint(1) NOT NULL default '1',
 `EMailPermission` tinyint(1) NOT NULL default '1',
 `ContactType` varchar(20) character set utf8 default NULL,
 `InvoiceNo` varchar(7) character set utf8 default NULL,
 `PayDate` datetime default NULL,
 `StartService` int(10) default NULL,
 PRIMARY KEY  (`ID`),
 KEY `IX_LocalAdvisors` (`LastName`,`FirstName`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

EXPLAIN returned several rows of information:

1, 'PRIMARY', 'ST', 'ALL', 'IX_State', '', '', '', 54, 'Using 
temporary; Using filesort'
1, 'PRIMARY', 'S', 'ref', 'PRIMARY,IX_Schools1', 'IX_Schools1', '9', 
'bpa.ST.State', 69, 'Using where'
1, 'PRIMARY', 'C', 'ref', 'IX_Chapters_1', 'IX_Chapters_1', '5', 
'bpa.S.ID', 1, 'Using where'
3, 'DEPENDENT SUBQUERY', 'S2', 'ref', 'PRIMARY,IX_Schools1', 
'IX_Schools1', '18', 'bpa.S.State,bpa.S.Sub', 65, 'Using where'
3, 'DEPENDENT SUBQUERY', 'C2', 'ref', 
'PRIMARY,IX_Chapters_1,IX_Chapters_2', 'IX_Chapters_1', '5', 
'bpa.S2.ID', 1, 'Using where'
3, 'DEPENDENT SUBQUERY', 'ACL', 'ref', 
'IX_AdvChapterLink,IX_AdvChapterLink_1', 'IX_AdvChapterLink_1', '5', 
'bpa.C2.ID', 2, 'Using where; Using index'
3, 'DEPENDENT SUBQUERY', 'LA', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 
'bpa.ACL.AdvisorID', 1, 'Using where'
2, 'DEPENDENT SUBQUERY', 'S1', 'ref', 'PRIMARY,IX_Schools1', 
'IX_Schools1', '18', 'bpa.S.State,bpa.S.Sub', 65, 'Using where'
2, 'DEPENDENT SUBQUERY', 'C1', 'ref', 
'PRIMARY,IX_Chapters_1,IX_Chapters_2', 'IX_Chapters_1', '5', 
'bpa.S1.ID', 1, 'Using where'
2, 'DEPENDENT SUBQUERY', 'M', 'ref', 'IX_Members_3', 'IX_Members_3', 
'5', 'bpa.C1.ID', 24, 'Using where; Using index'


Not sure what all that means, but there it is.

Now, as for the other stuff.  I'm running MySQL version 5.0.15-nt.  
I'm running on my development machine, which is a Dell Demension 
3000.  It's running a Pentium 4 2.8 GHz CPU with 1 GB of RAM, with 
Windows XP Pro.


Thanks,
Jesse

- Original Message - From: "Dan Buettner" 
<[EMAIL PROTECTED]>

To: "Jesse" <[EMAIL PROTECTED]>
Cc: "MySQL List" 
Sent: Monday, June 26, 2006 5:21 PM
Subject: Re: Query Speed



Jesse, can you post table structures ( SHOW CREATE TABLE tablename )
and the output you get from EXPLAIN followed by the query below?

Also what version of MySQL you're on, and high level details of the 
hardware (RA

Re: Query Speed

2006-06-26 Thread Dan Buettner
LL,
 `PrincipalsEMail` varchar(65) default NULL,
 PRIMARY KEY  (`ID`),
 KEY `IX_Schools` (`Name`),
 KEY `IX_Schools1` (`State`,`Sub`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `advchapterlink` (
 `ID` int(10) NOT NULL,
 `AdvisorID` int(10) default NULL,
 `ChapterID` int(10) default NULL,
 `Rank` int(10) default NULL,
 `Primary` tinyint(1) NOT NULL default '0',
 `AllowToEdit` tinyint(1) NOT NULL default '1',
 PRIMARY KEY  (`ID`),
 KEY `IX_AdvChapterLink` (`AdvisorID`,`ChapterID`),
 KEY `IX_AdvChapterLink_1` (`ChapterID`,`AdvisorID`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `localadvisors` (
 `ID` int(10) NOT NULL,
 `FirstName` varchar(30) character set utf8 default NULL,
 `LastName` varchar(35) character set utf8 default NULL,
 `ChapterID` int(10) default NULL,
 `Address` varchar(50) character set utf8 default NULL,
 `City` varchar(30) character set utf8 default NULL,
 `State` varchar(2) character set utf8 default NULL,
 `Sub` varchar(2) character set utf8 default NULL,
 `Zip` varchar(10) character set utf8 default NULL,
 `EMailAddress` varchar(50) character set utf8 default NULL,
 `HomePhone` varchar(15) character set utf8 default NULL,
 `WorkPhone` varchar(15) character set utf8 default NULL,
 `WorkExt` varchar(10) character set utf8 default NULL,
 `CellPhone` varchar(15) character set utf8 default NULL,
 `UserName` varchar(50) character set utf8 default NULL,
 `FirstTimeIn` tinyint(1) NOT NULL default '1',
 `Fax` varchar(15) character set utf8 default NULL,
 `FaxPermission` tinyint(1) NOT NULL default '1',
 `EMailPermission` tinyint(1) NOT NULL default '1',
 `ContactType` varchar(20) character set utf8 default NULL,
 `InvoiceNo` varchar(7) character set utf8 default NULL,
 `PayDate` datetime default NULL,
 `StartService` int(10) default NULL,
 PRIMARY KEY  (`ID`),
 KEY `IX_LocalAdvisors` (`LastName`,`FirstName`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1

EXPLAIN returned several rows of information:

1, 'PRIMARY', 'ST', 'ALL', 'IX_State', '', '', '', 54, 'Using temporary; 
Using filesort'
1, 'PRIMARY', 'S', 'ref', 'PRIMARY,IX_Schools1', 'IX_Schools1', '9', 
'bpa.ST.State', 69, 'Using where'
1, 'PRIMARY', 'C', 'ref', 'IX_Chapters_1', 'IX_Chapters_1', '5', 
'bpa.S.ID', 1, 'Using where'
3, 'DEPENDENT SUBQUERY', 'S2', 'ref', 'PRIMARY,IX_Schools1', 
'IX_Schools1', '18', 'bpa.S.State,bpa.S.Sub', 65, 'Using where'
3, 'DEPENDENT SUBQUERY', 'C2', 'ref', 
'PRIMARY,IX_Chapters_1,IX_Chapters_2', 'IX_Chapters_1', '5', 
'bpa.S2.ID', 1, 'Using where'
3, 'DEPENDENT SUBQUERY', 'ACL', 'ref', 
'IX_AdvChapterLink,IX_AdvChapterLink_1', 'IX_AdvChapterLink_1', '5', 
'bpa.C2.ID', 2, 'Using where; Using index'
3, 'DEPENDENT SUBQUERY', 'LA', 'eq_ref', 'PRIMARY', 'PRIMARY', '4', 
'bpa.ACL.AdvisorID', 1, 'Using where'
2, 'DEPENDENT SUBQUERY', 'S1', 'ref', 'PRIMARY,IX_Schools1', 
'IX_Schools1', '18', 'bpa.S.State,bpa.S.Sub', 65, 'Using where'
2, 'DEPENDENT SUBQUERY', 'C1', 'ref', 
'PRIMARY,IX_Chapters_1,IX_Chapters_2', 'IX_Chapters_1', '5', 
'bpa.S1.ID', 1, 'Using where'
2, 'DEPENDENT SUBQUERY', 'M', 'ref', 'IX_Members_3', 'IX_Members_3', 
'5', 'bpa.C1.ID', 24, 'Using where; Using index'


Not sure what all that means, but there it is.

Now, as for the other stuff.  I'm running MySQL version 5.0.15-nt.  I'm 
running on my development machine, which is a Dell Demension 3000.  It's 
running a Pentium 4 2.8 GHz CPU with 1 GB of RAM, with Windows XP Pro.


Thanks,
Jesse

- Original Message - From: "Dan Buettner" 
<[EMAIL PROTECTED]>

To: "Jesse" <[EMAIL PROTECTED]>
Cc: "MySQL List" 
Sent: Monday, June 26, 2006 5:21 PM
Subject: Re: Query Speed



Jesse, can you post table structures ( SHOW CREATE TABLE tablename )
and the output you get from EXPLAIN followed by the query below?

Also what version of MySQL you're on, and high level details of the 
hardware (RAM, disks, processors, OS).


That will all be helpful in trying to help you out here.

Dan


Jesse wrote:
I have a query which I can execute in Microsoft SQL, and it's 
instantaneous. However, In MySQL, I've only been able to get it down 
to 48 seconds:


S

Re: Query Speed

2006-06-26 Thread Dan Buettner

Jesse, can you post table structures ( SHOW CREATE TABLE tablename )
and the output you get from EXPLAIN followed by the query below?

Also what version of MySQL you're on, and high level details of the 
hardware (RAM, disks, processors, OS).


That will all be helpful in trying to help you out here.

Dan


Jesse wrote:
I have a query which I can execute in Microsoft SQL, and it's 
instantaneous. However, In MySQL, I've only been able to get it down to 
48 seconds:


SELECT S.State, ST.StateName, S.Sub, C.ChapterType, (SELECT Count(*) 
FROM (Members M JOIN Chapters C1 ON C1.ID=M.ChapterID) JOIN Schools S1 
on S1.ID=C1.SchoolID WHERE S1.State=S.State AND S1.Sub=S.Sub AND 
C1.ChapterType=C.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=S.State AND S2.Sub=S.Sub AND C2.ChapterType=C.ChapterType 
AND LA.InvoiceNo IS NOT NULL) AS TotAdv

FROM (Chapters C JOIN Schools S ON S.ID=C.SchoolID)
   JOIN State ST ON S.State=ST.State
GROUP BY S.State, ST.StateName, S.Sub, C.ChapterType
ORDER BY S.State, S.Sub, C.ChapterType

I have added indexes to make sure that all of the JOINs and WHEREs can 
operate efficiently.  This helped tremendously, as my first attempt at 
this query timed out, so I have no idea how long it would have actually 
taken. I'm doing this query using ASP on a Windows XP Pro machine, 
however, doing it in the MySQL Query Browser takes just as long (as one 
would expect). The tables are all InnoDB.  Is there anything else I can 
do to help speed this query up?


Thanks,
Jesse



--
Dan Buettner

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Speed

2006-06-26 Thread Jesse
On the chance that such a thing were actually happening, I converted all of 
my tables over to MyISAM, because it says that it is "very fast", and I'm 
not using Transactions anyway.  If that's the only feature that InnoDB gives 
me, I'd much rather have the speed.  Anyway, it actually increased the time 
of this query from about 48 seconds to about 1:40 or so.  More than twice as 
much.  However, I've noticed that other things do seem to be running faster 
since converting to MyISAM, but this query is not.


Thanks,
Jesse

- Original Message - 
From: "Price, Randall" <[EMAIL PROTECTED]>

To: "Jesse" <[EMAIL PROTECTED]>; "MySQL List" 
Sent: Monday, June 26, 2006 4:47 PM
Subject: RE: Query Speed


Hi Jesse,

I am not 100% sure cause I have only been using MySQL for ~6 months but
I do read this mailing list everyday and have learned a lot.  I believe
that InnoDB tables to not maintain a count(*) for the tables so it has
to physically count the rows.  I believe MyISAM tables do maintain that
count(*) so the tables were MyISAM they count(*) would be faster.  That
may be where the slowness is coming from.

Again, as I am new to MySQL, this may be totally off the wall.  Maybe
someone else more experienced with MySQL could verify this.

Thanks,
--Randall Price


-Original Message-
From: Jesse [mailto:[EMAIL PROTECTED]
Sent: Monday, June 26, 2006 4:28 PM
To: MySQL List
Subject: Query Speed

I have a query which I can execute in Microsoft SQL, and it's
instantaneous.
However, In MySQL, I've only been able to get it down to 48 seconds:

SELECT S.State, ST.StateName, S.Sub, C.ChapterType, (SELECT Count(*)
FROM
(Members M JOIN Chapters C1 ON C1.ID=M.ChapterID) JOIN Schools S1 on
S1.ID=C1.SchoolID WHERE S1.State=S.State AND S1.Sub=S.Sub AND
C1.ChapterType=C.ChapterType) AS TotMem, (SELECT Count(*) FROM
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID

WHERE S2.State=S.State AND S2.Sub=S.Sub AND C2.ChapterType=C.ChapterType
AND
LA.InvoiceNo IS NOT NULL) AS TotAdv
FROM (Chapters C JOIN Schools S ON S.ID=C.SchoolID)
   JOIN State ST ON S.State=ST.State
GROUP BY S.State, ST.StateName, S.Sub, C.ChapterType
ORDER BY S.State, S.Sub, C.ChapterType

I have added indexes to make sure that all of the JOINs and WHEREs can
operate efficiently.  This helped tremendously, as my first attempt at
this
query timed out, so I have no idea how long it would have actually
taken.
I'm doing this query using ASP on a Windows XP Pro machine, however,
doing
it in the MySQL Query Browser takes just as long (as one would expect).
The tables are all InnoDB.  Is there anything else I can do to help
speed
this query up?

Thanks,
Jesse


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Query Speed

2006-06-26 Thread Price, Randall
Hi Jesse,

I am not 100% sure cause I have only been using MySQL for ~6 months but
I do read this mailing list everyday and have learned a lot.  I believe
that InnoDB tables to not maintain a count(*) for the tables so it has
to physically count the rows.  I believe MyISAM tables do maintain that
count(*) so the tables were MyISAM they count(*) would be faster.  That
may be where the slowness is coming from.

Again, as I am new to MySQL, this may be totally off the wall.  Maybe
someone else more experienced with MySQL could verify this.

Thanks,
--Randall Price


-Original Message-
From: Jesse [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 26, 2006 4:28 PM
To: MySQL List
Subject: Query Speed

I have a query which I can execute in Microsoft SQL, and it's
instantaneous. 
However, In MySQL, I've only been able to get it down to 48 seconds:

SELECT S.State, ST.StateName, S.Sub, C.ChapterType, (SELECT Count(*)
FROM 
(Members M JOIN Chapters C1 ON C1.ID=M.ChapterID) JOIN Schools S1 on 
S1.ID=C1.SchoolID WHERE S1.State=S.State AND S1.Sub=S.Sub AND 
C1.ChapterType=C.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID

WHERE S2.State=S.State AND S2.Sub=S.Sub AND C2.ChapterType=C.ChapterType
AND 
LA.InvoiceNo IS NOT NULL) AS TotAdv
FROM (Chapters C JOIN Schools S ON S.ID=C.SchoolID)
JOIN State ST ON S.State=ST.State
GROUP BY S.State, ST.StateName, S.Sub, C.ChapterType
ORDER BY S.State, S.Sub, C.ChapterType

I have added indexes to make sure that all of the JOINs and WHEREs can 
operate efficiently.  This helped tremendously, as my first attempt at
this 
query timed out, so I have no idea how long it would have actually
taken. 
I'm doing this query using ASP on a Windows XP Pro machine, however,
doing 
it in the MySQL Query Browser takes just as long (as one would expect). 
The tables are all InnoDB.  Is there anything else I can do to help
speed 
this query up?

Thanks,
Jesse 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Query Speed

2006-06-26 Thread Jesse
I have a query which I can execute in Microsoft SQL, and it's instantaneous. 
However, In MySQL, I've only been able to get it down to 48 seconds:


SELECT S.State, ST.StateName, S.Sub, C.ChapterType, (SELECT Count(*) FROM 
(Members M JOIN Chapters C1 ON C1.ID=M.ChapterID) JOIN Schools S1 on 
S1.ID=C1.SchoolID WHERE S1.State=S.State AND S1.Sub=S.Sub AND 
C1.ChapterType=C.ChapterType) AS TotMem, (SELECT Count(*) FROM 
((AdvChapterLink ACL JOIN LocalAdvisors LA ON LA.ID=ACL.AdvisorID) JOIN 
Chapters C2 ON C2.ID=ACL.ChapterID) JOIN Schools S2 ON S2.ID=C2.SchoolID 
WHERE S2.State=S.State AND S2.Sub=S.Sub AND C2.ChapterType=C.ChapterType AND 
LA.InvoiceNo IS NOT NULL) AS TotAdv

FROM (Chapters C JOIN Schools S ON S.ID=C.SchoolID)
   JOIN State ST ON S.State=ST.State
GROUP BY S.State, ST.StateName, S.Sub, C.ChapterType
ORDER BY S.State, S.Sub, C.ChapterType

I have added indexes to make sure that all of the JOINs and WHEREs can 
operate efficiently.  This helped tremendously, as my first attempt at this 
query timed out, so I have no idea how long it would have actually taken. 
I'm doing this query using ASP on a Windows XP Pro machine, however, doing 
it in the MySQL Query Browser takes just as long (as one would expect). 
The tables are all InnoDB.  Is there anything else I can do to help speed 
this query up?


Thanks,
Jesse 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Do foreign keys affect query speed?

2006-04-12 Thread barney
That's the whole question.

Do foreign keys (FKs) affect query speed?

'Course the answer could lead to sub-questions , e.g.,
"If so, how best to optimize a query for them?"

And I guess a corollary question would be whether implementing FKs slows down 
MySQL processing in general, but that can probably wait for another post.

Don't think it really matters for this question, but WinXP SP2, 3G cpu, 1G RAM, 
PHP v4.3.11, Apache v2.0.55,

Make a good day ...
 ... barn


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Do foreign keys affect query speed?

2006-04-12 Thread barney
That's the whole question.

Do foreign keys (FKs) affect query speed?

'Course the answer could lead to sub-questions , e.g.,
"If so, how best to optimize a query for them?"

And I guess a corollary question would be whether implementing FKs slows down 
MySQL processing in general, but that can probably wait for another post.

Don't think it really matters for this question, but WinXP SP2, 3G cpu, 1G RAM, 
PHP v4.3.11, Apache v2.0.55, MySQL v4.1.14, and user v61.

Make a good day ...
 ... barn





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Speed

2006-02-08 Thread سيد هادی راستگوی حقی
Any suggestions?

On 2/3/06, سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote:
>
> Dear all,
> Thanks for your replies.
>
> The main table for me is traffic_log. I use combination of recipient_id
> and mobile_retry fields to uniquely identify each row in the traffic_log and
> use the same combination on status_log as my foreign key to traffic_log.
> Each message is saved as a row in traffic_log and its statuses are stored
> in status_log.
> The make me join these tow tables on both fields to find each message's
> statuses.
>
> For performing fast query, I always first select messages from traffic_log
> becuase of it's indices and then select last status for each message.
> I mean that i perform 2 separate queries.
>
> But the problem arises when i want to search on statuses, so i have to
> join these large tables and then select base on my status constraint and
> then sort them base on their times. these parts are time consuming and i
> want to make them fast.
>
> As i said before, you may suggest me to restructure my tables.
> I will appreciate your suggestions.
>
> Thanks for your interest to solving my problem.
>
> On 2/2/06, [EMAIL PROTECTED] < [EMAIL PROTECTED]> wrote:
> >
> >
> > Sorry, but you gave us a "best guess" situation. Your tables do not have
> > any PRIMARY KEYs defined on them so I had to guess at what made each row in
> > each table unique from all other rows in that table based only on your
> > sample query.
> >
> > What value or combination of values will allow me to uniquely identify a
> > single record from each table?  Armed with that information I can rework my
> > solution to accurately identify what you want to know. My suggestion is that
> > you add two integer-based auto_increment columns, one to each table, and
> > make them the PRIMARY KEYs and foreign keys as appropriate.
> >
> > For example: What makes a single row of traffic_log different from each
> > of the others? Is it the `recipient_id` column or a combination of values?
> > Same for the `status_log` table. What makes each row different from all
> > others? How do I uniquely identify a single row in `traffic_log` that
> > corresponds to any random row from `status_log`?
> >
> > Yours,
> >
> > Shawn Green
> > Database Administrator
> > Unimin Corporation - Spruce Pine
> >
> >
> >
> > سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/02/2006
> > 01:14:35 AM:
> >
> > > Thanks for your suggestion,
> > > I forget to tell that each message in traffic_log may has at least 2
> > status
> > > in status_log and I use to columns "recipients_id" and "mobile_retry"
> > > to uniquely find each message's statuses.
> > > May be I have to change my tables structure. I don't know.
> > >
> > > It's really important for me to show each message with it's last
> > status. So
> > > I have to use group by because in other way such as
> > >
> > > SELECT * FROM status_log
> > > ORDER BY time;
> > >
> > > returns all statuses in order of time regards to multiple statuses for
> > any
> > > message. so I think that the query may be like this
> > >
> > > (SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP
> > BY
> > > recipient_id HAVING time=MAX(time)) AS sts*
> > > JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
> > > traffic_log.mobile_retry=sts.mobile_retry
> > >
> > > *sts --> to find last status of each message
> > >
> > >
> > > On 2/1/06, [EMAIL PROTECTED] < [EMAIL PROTECTED]> wrote:
> > > >
> > > >
> > > >
> > > > سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006
> > 11:07:49
> > > > AM:
> > > >
> > > > > Dear All,
> > > > > I need your suggestions please.
> > > > >
> > > > > have to large tables with these schemas:
> > > > >
> > > > >  Table: traffic_log
> > > > > Create Table: CREATE TABLE `traffic_log` (
> > > > >   `recipient_id` int(11) NOT NULL default '0',
> > > > >   `retry` smallint(4) NOT NULL default '0',
> > > > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > > > >   `orig` varchar(13) default NULL,
> > > > >   `dest` varchar(13) default NULL,
> > > > >   `message` text,
> > > > >   `account_id` int(11) NOT NULL default '0',
> > > > >   `service_id` int(11) NOT NULL default '0',
> > > > >   `dir` enum('IN','OUT') NOT NULL default 'IN',
> > > > >   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
> > > > >   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
> > > > >   `replied` tinyint(4) default '0',
> > > > >   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
> > > > >   KEY `account_id_2` (`account_id`,`date_entered`)
> > > > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > > > >
> > > > > Table: status_log
> > > > > Create Table: CREATE TABLE `status_log` (
> > > > >   `recipient_id` int(11) NOT NULL default '0',
> > > > >   `retry` smallint(4) NOT NULL default '0',
> > > > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > > > >   `status` smallint(5) NOT NULL default '0',
> > > > >   `time` datetime NOT NULL default '-00-00 00:00:00',
>

Re: Query Speed

2006-02-03 Thread سيد هادی راستگوی حقی
Dear all,
Thanks for your replies.

The main table for me is traffic_log. I use combination of recipient_id and
mobile_retry fields to uniquely identify each row in the traffic_log and use
the same combination on status_log as my foreign key to traffic_log.
Each message is saved as a row in traffic_log and its statuses are stored in
status_log.
The make me join these tow tables on both fields to find each message's
statuses.

For performing fast query, I always first select messages from traffic_log
becuase of it's indices and then select last status for each message.
I mean that i perform 2 separate queries.

But the problem arises when i want to search on statuses, so i have to join
these large tables and then select base on my status constraint and then
sort them base on their times. these parts are time consuming and i want to
make them fast.

As i said before, you may suggest me to restructure my tables.
I will appreciate your suggestions.

Thanks for your interest to solving my problem.

On 2/2/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> Sorry, but you gave us a "best guess" situation. Your tables do not have
> any PRIMARY KEYs defined on them so I had to guess at what made each row in
> each table unique from all other rows in that table based only on your
> sample query.
>
> What value or combination of values will allow me to uniquely identify a
> single record from each table?  Armed with that information I can rework my
> solution to accurately identify what you want to know. My suggestion is that
> you add two integer-based auto_increment columns, one to each table, and
> make them the PRIMARY KEYs and foreign keys as appropriate.
>
> For example: What makes a single row of traffic_log different from each of
> the others? Is it the `recipient_id` column or a combination of values? Same
> for the `status_log` table. What makes each row different from all others?
> How do I uniquely identify a single row in `traffic_log` that corresponds to
> any random row from `status_log`?
>
> Yours,
>
> Shawn Green
> Database Administrator
> Unimin Corporation - Spruce Pine
>
>
>
> سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/02/2006 01:14:35
> AM:
>
> > Thanks for your suggestion,
> > I forget to tell that each message in traffic_log may has at least 2
> status
> > in status_log and I use to columns "recipients_id" and "mobile_retry"
> > to uniquely find each message's statuses.
> > May be I have to change my tables structure. I don't know.
> >
> > It's really important for me to show each message with it's last status.
> So
> > I have to use group by because in other way such as
> >
> > SELECT * FROM status_log
> > ORDER BY time;
> >
> > returns all statuses in order of time regards to multiple statuses for
> any
> > message. so I think that the query may be like this
> >
> > (SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP
> BY
> > recipient_id HAVING time=MAX(time)) AS sts*
> > JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
> > traffic_log.mobile_retry=sts.mobile_retry
> >
> > *sts --> to find last status of each message
> >
> >
> > On 2/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > >
> > > سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006
> 11:07:49
> > > AM:
> > >
> > > > Dear All,
> > > > I need your suggestions please.
> > > >
> > > > have to large tables with these schemas:
> > > >
> > > >  Table: traffic_log
> > > > Create Table: CREATE TABLE `traffic_log` (
> > > >   `recipient_id` int(11) NOT NULL default '0',
> > > >   `retry` smallint(4) NOT NULL default '0',
> > > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > > >   `orig` varchar(13) default NULL,
> > > >   `dest` varchar(13) default NULL,
> > > >   `message` text,
> > > >   `account_id` int(11) NOT NULL default '0',
> > > >   `service_id` int(11) NOT NULL default '0',
> > > >   `dir` enum('IN','OUT') NOT NULL default 'IN',
> > > >   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
> > > >   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
> > > >   `replied` tinyint(4) default '0',
> > > >   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
> > > >   KEY `account_id_2` (`account_id`,`date_entered`)
> > > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > > >
> > > > Table: status_log
> > > > Create Table: CREATE TABLE `status_log` (
> > > >   `recipient_id` int(11) NOT NULL default '0',
> > > >   `retry` smallint(4) NOT NULL default '0',
> > > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > > >   `status` smallint(5) NOT NULL default '0',
> > > >   `time` datetime NOT NULL default '-00-00 00:00:00',
> > > >   `smsc` varchar(20) NOT NULL default '',
> > > >   `priority` tinyint(2) unsigned NOT NULL default '0',
> > > >   `ack` varchar(30) NOT NULL default '',
> > > >   KEY `recipient_id_2`
> (`recipient_id`,`mobile_retry`,`time`,`status`),
> > > >   KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
> > >

Re: Query Speed

2006-02-02 Thread SGreen
Sorry, but you gave us a "best guess" situation. Your tables do not have 
any PRIMARY KEYs defined on them so I had to guess at what made each row 
in each table unique from all other rows in that table based only on your 
sample query. 

What value or combination of values will allow me to uniquely identify a 
single record from each table?  Armed with that information I can rework 
my solution to accurately identify what you want to know. My suggestion is 
that you add two integer-based auto_increment columns, one to each table, 
and make them the PRIMARY KEYs and foreign keys as appropriate.

For example: What makes a single row of traffic_log different from each of 
the others? Is it the `recipient_id` column or a combination of values? 
Same for the `status_log` table. What makes each row different from all 
others? How do I uniquely identify a single row in `traffic_log` that 
corresponds to any random row from `status_log`?

Yours,

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/02/2006 
01:14:35 AM:

> Thanks for your suggestion,
> I forget to tell that each message in traffic_log may has at least 2 
status
> in status_log and I use to columns "recipients_id" and "mobile_retry"
> to uniquely find each message's statuses.
> May be I have to change my tables structure. I don't know.
> 
> It's really important for me to show each message with it's last status. 
So
> I have to use group by because in other way such as
> 
> SELECT * FROM status_log
> ORDER BY time;
> 
> returns all statuses in order of time regards to multiple statuses for 
any
> message. so I think that the query may be like this
> 
> (SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP 
BY
> recipient_id HAVING time=MAX(time)) AS sts*
> JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
> traffic_log.mobile_retry=sts.mobile_retry
> 
> *sts --> to find last status of each message
> 
> 
> On 2/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006 
11:07:49
> > AM:
> >
> > > Dear All,
> > > I need your suggestions please.
> > >
> > > have to large tables with these schemas:
> > >
> > >  Table: traffic_log
> > > Create Table: CREATE TABLE `traffic_log` (
> > >   `recipient_id` int(11) NOT NULL default '0',
> > >   `retry` smallint(4) NOT NULL default '0',
> > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > >   `orig` varchar(13) default NULL,
> > >   `dest` varchar(13) default NULL,
> > >   `message` text,
> > >   `account_id` int(11) NOT NULL default '0',
> > >   `service_id` int(11) NOT NULL default '0',
> > >   `dir` enum('IN','OUT') NOT NULL default 'IN',
> > >   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
> > >   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
> > >   `replied` tinyint(4) default '0',
> > >   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
> > >   KEY `account_id_2` (`account_id`,`date_entered`)
> > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > >
> > > Table: status_log
> > > Create Table: CREATE TABLE `status_log` (
> > >   `recipient_id` int(11) NOT NULL default '0',
> > >   `retry` smallint(4) NOT NULL default '0',
> > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > >   `status` smallint(5) NOT NULL default '0',
> > >   `time` datetime NOT NULL default '-00-00 00:00:00',
> > >   `smsc` varchar(20) NOT NULL default '',
> > >   `priority` tinyint(2) unsigned NOT NULL default '0',
> > >   `ack` varchar(30) NOT NULL default '',
> > >   KEY `recipient_id_2` 
(`recipient_id`,`mobile_retry`,`time`,`status`),
> > >   KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
> > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > >
> > > I want to execute a query to find out each last message's status. So 
my
> > > query is :
> > >
> > > select * from traffic_log LEFT JOIN status_log ON
> > > traffic_log.recipient_id=status_log.recipient_id and
> > > traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 
32
> > > group by status_log.recipient_id HAVING time=max(time) order by 
time;
> > >
> > > And MySQL explanation about this query is:
> > > *** 1. row ***
> > >id: 1
> > >   select_type: SIMPLE
> > > table: traffic_log
> > >  type: ref
> > > possible_keys: account_id,account_id_2
> > >   key: account_id
> > >   key_len: 4
> > >   ref: const
> > >  rows: 1049598
> > > Extra: Using temporary; Using filesort
> > > *** 2. row ***
> > >id: 1
> > >   select_type: SIMPLE
> > > table: status_log
> > >  type: ref
> > > possible_keys: recipient_id_2
> > >   key: recipient_id_2
> > >   key_len: 5
> > >   ref: smse.traffic_log.recipient_id,
> > smse.traffic_log.mobile_retry
> > >  r

Re: Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Another question is that if I run such CREATE TEMPORARY statements in my
query, is MySQL really can do it fast?
Cause this query may be run periodically !


On 2/2/06, سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote:
>
> Thanks for your suggestion,
> I forget to tell that each message in traffic_log may has at least 2
> status in status_log and I use to columns "recipients_id" and "mobile_retry"
>
> to uniquely find each message's statuses.
> May be I have to change my tables structure. I don't know.
>
> It's really important for me to show each message with it's last status.
> So I have to use group by because in other way such as
>
> SELECT * FROM status_log
> ORDER BY time;
>
> returns all statuses in order of time regards to multiple statuses for any
> message. so I think that the query may be like this
>
> (SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP BY
> recipient_id HAVING time=MAX(time)) AS sts*
> JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
> traffic_log.mobile_retry=sts.mobile_retry
>
> *sts --> to find last status of each message
>
>
> On 2/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> > سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006
> > 11:07:49 AM:
> >
> > > Dear All,
> > > I need your suggestions please.
> > >
> > > have to large tables with these schemas:
> > >
> > >  Table: traffic_log
> > > Create Table: CREATE TABLE `traffic_log` (
> > >   `recipient_id` int(11) NOT NULL default '0',
> > >   `retry` smallint(4) NOT NULL default '0',
> > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > >   `orig` varchar(13) default NULL,
> > >   `dest` varchar(13) default NULL,
> > >   `message` text,
> > >   `account_id` int(11) NOT NULL default '0',
> > >   `service_id` int(11) NOT NULL default '0',
> > >   `dir` enum('IN','OUT') NOT NULL default 'IN',
> > >   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
> > >   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
> > >   `replied` tinyint(4) default '0',
> > >   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
> > >   KEY `account_id_2` (`account_id`,`date_entered`)
> > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > >
> > > Table: status_log
> > > Create Table: CREATE TABLE `status_log` (
> > >   `recipient_id` int(11) NOT NULL default '0',
> > >   `retry` smallint(4) NOT NULL default '0',
> > >   `mobile_retry` tinyint(1) NOT NULL default '0',
> > >   `status` smallint(5) NOT NULL default '0',
> > >   `time` datetime NOT NULL default '-00-00 00:00:00',
> > >   `smsc` varchar(20) NOT NULL default '',
> > >   `priority` tinyint(2) unsigned NOT NULL default '0',
> > >   `ack` varchar(30) NOT NULL default '',
> > >   KEY `recipient_id_2`
> > (`recipient_id`,`mobile_retry`,`time`,`status`),
> > >   KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
> > > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> > >
> > > I want to execute a query to find out each last message's status. So
> > my
> > > query is :
> > >
> > > select * from traffic_log LEFT JOIN status_log ON
> > > traffic_log.recipient_id=status_log.recipient_id and
> > > traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id =
> > 32
> > > group by status_log.recipient_id HAVING time=max(time) order by time;
> > >
> > > And MySQL explanation about this query is:
> > > *** 1. row ***
> > >id: 1
> > >   select_type: SIMPLE
> > > table: traffic_log
> > >  type: ref
> > > possible_keys: account_id,account_id_2
> > >   key: account_id
> > >   key_len: 4
> > >   ref: const
> > >  rows: 1049598
> > > Extra: Using temporary; Using filesort
> > > *** 2. row ***
> > >id: 1
> > >   select_type: SIMPLE
> > > table: status_log
> > >  type: ref
> > > possible_keys: recipient_id_2
> > >   key: recipient_id_2
> > >   key_len: 5
> > >   ref: smse.traffic_log.recipient_id,
> > smse.traffic_log.mobile_retry
> > >  rows: 2
> > > Extra:
> > >
> > > as you see return records are 1049598.
> > >
> > > But it's very slow.
> > >
> > > Do you have any suggestions to fast it?
> > >
> > > --
> > > Sincerely,
> > > Hadi Rastgou
> > > http://www.spreadfirefox.com/?q=affiliates&id=0&t=1
> > ">Get
> > > Firefox!
> >
> > This is another variant of the groupwise maximum pattern of query:
> >
> > http://dev. mysql
> > .com/doc/refman/4.1/en/example-maximum-column-group-row.html
> >
> > My favorite way to write these kinds of queries is to make a temporary
> > table (or more if needed) identifying the group and max-per-group then using
> > that temp table to create the final query. In your case, the group is the
> > recipient_id and the max-per-group will be MAX(`t

Re: Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Thanks for your suggestion,
I forget to tell that each message in traffic_log may has at least 2 status
in status_log and I use to columns "recipients_id" and "mobile_retry"
to uniquely find each message's statuses.
May be I have to change my tables structure. I don't know.

It's really important for me to show each message with it's last status. So
I have to use group by because in other way such as

SELECT * FROM status_log
ORDER BY time;

returns all statuses in order of time regards to multiple statuses for any
message. so I think that the query may be like this

(SELECT status, recipient_id, time, mobile_retry FROM status_log GROUP BY
recipient_id HAVING time=MAX(time)) AS sts*
JOIN traffic_log ON traffic_log.recipient_id=sts.recipient_id AND
traffic_log.mobile_retry=sts.mobile_retry

*sts --> to find last status of each message


On 2/1/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
>
> سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006 11:07:49
> AM:
>
> > Dear All,
> > I need your suggestions please.
> >
> > have to large tables with these schemas:
> >
> >  Table: traffic_log
> > Create Table: CREATE TABLE `traffic_log` (
> >   `recipient_id` int(11) NOT NULL default '0',
> >   `retry` smallint(4) NOT NULL default '0',
> >   `mobile_retry` tinyint(1) NOT NULL default '0',
> >   `orig` varchar(13) default NULL,
> >   `dest` varchar(13) default NULL,
> >   `message` text,
> >   `account_id` int(11) NOT NULL default '0',
> >   `service_id` int(11) NOT NULL default '0',
> >   `dir` enum('IN','OUT') NOT NULL default 'IN',
> >   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
> >   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
> >   `replied` tinyint(4) default '0',
> >   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
> >   KEY `account_id_2` (`account_id`,`date_entered`)
> > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> >
> > Table: status_log
> > Create Table: CREATE TABLE `status_log` (
> >   `recipient_id` int(11) NOT NULL default '0',
> >   `retry` smallint(4) NOT NULL default '0',
> >   `mobile_retry` tinyint(1) NOT NULL default '0',
> >   `status` smallint(5) NOT NULL default '0',
> >   `time` datetime NOT NULL default '-00-00 00:00:00',
> >   `smsc` varchar(20) NOT NULL default '',
> >   `priority` tinyint(2) unsigned NOT NULL default '0',
> >   `ack` varchar(30) NOT NULL default '',
> >   KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
> >   KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
> > ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> >
> > I want to execute a query to find out each last message's status. So my
> > query is :
> >
> > select * from traffic_log LEFT JOIN status_log ON
> > traffic_log.recipient_id=status_log.recipient_id and
> > traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
> > group by status_log.recipient_id HAVING time=max(time) order by time;
> >
> > And MySQL explanation about this query is:
> > *** 1. row ***
> >id: 1
> >   select_type: SIMPLE
> > table: traffic_log
> >  type: ref
> > possible_keys: account_id,account_id_2
> >   key: account_id
> >   key_len: 4
> >   ref: const
> >  rows: 1049598
> > Extra: Using temporary; Using filesort
> > *** 2. row ***
> >id: 1
> >   select_type: SIMPLE
> > table: status_log
> >  type: ref
> > possible_keys: recipient_id_2
> >   key: recipient_id_2
> >   key_len: 5
> >   ref: smse.traffic_log.recipient_id,
> smse.traffic_log.mobile_retry
> >  rows: 2
> > Extra:
> >
> > as you see return records are 1049598.
> >
> > But it's very slow.
> >
> > Do you have any suggestions to fast it?
> >
> > --
> > Sincerely,
> > Hadi Rastgou
> >  > href="http://www.spreadfirefox.com/?q=affiliates&id=0&t=1
> ">Get
> > Firefox!
>
> This is another variant of the groupwise maximum pattern of query:
>
> http://dev.mysql
> .com/doc/refman/4.1/en/example-maximum-column-group-row.html
>
> My favorite way to write these kinds of queries is to make a temporary
> table (or more if needed) identifying the group and max-per-group then using
> that temp table to create the final query. In your case, the group is the
> recipient_id and the max-per-group will be MAX(`time`);
>
> CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT
> `recipient_id`
>,MAX(`time`) as lastmsg
> FROM status_log;
>
> CREATE TEMPORARY TABLE tmpLastStatuses SELECT
> sl.*
> FROM status_log sl
> INNER JOIN tmpLastStatusTimes lst
> ON lst.`recipient_id` = sl.`recipient_id`
> AND lst.lastmsg = sl.`time`;
>
> SELECT * from traffic_log
> LEFT JOIN tmpLastStatuses
> ON traffic_log.recipient_id=tmpLastStatuses.recipient_id
> AND traffic_log.mobile_retry=tmpLastStatuses.mobile_retry
> WH

Re: Query Speed

2006-02-01 Thread SGreen
سيد هادی راستگوی حقی <[EMAIL PROTECTED]> wrote on 02/01/2006 
11:07:49 AM:

> Dear All,
> I need your suggestions please.
> 
> have to large tables with these schemas:
> 
>  Table: traffic_log
> Create Table: CREATE TABLE `traffic_log` (
>   `recipient_id` int(11) NOT NULL default '0',
>   `retry` smallint(4) NOT NULL default '0',
>   `mobile_retry` tinyint(1) NOT NULL default '0',
>   `orig` varchar(13) default NULL,
>   `dest` varchar(13) default NULL,
>   `message` text,
>   `account_id` int(11) NOT NULL default '0',
>   `service_id` int(11) NOT NULL default '0',
>   `dir` enum('IN','OUT') NOT NULL default 'IN',
>   `plugin` varchar(30) NOT NULL default 'UNKNOWN',
>   `date_entered` datetime NOT NULL default '-00-00 00:00:00',
>   `replied` tinyint(4) default '0',
>   KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
>   KEY `account_id_2` (`account_id`,`date_entered`)
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> 
> Table: status_log
> Create Table: CREATE TABLE `status_log` (
>   `recipient_id` int(11) NOT NULL default '0',
>   `retry` smallint(4) NOT NULL default '0',
>   `mobile_retry` tinyint(1) NOT NULL default '0',
>   `status` smallint(5) NOT NULL default '0',
>   `time` datetime NOT NULL default '-00-00 00:00:00',
>   `smsc` varchar(20) NOT NULL default '',
>   `priority` tinyint(2) unsigned NOT NULL default '0',
>   `ack` varchar(30) NOT NULL default '',
>   KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
>   KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
> ) ENGINE=MyISAM DEFAULT CHARSET=utf8
> 
> I want to execute a query to find out each last message's status. So my
> query is :
> 
> select * from traffic_log LEFT JOIN status_log ON
> traffic_log.recipient_id=status_log.recipient_id and
> traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
> group by status_log.recipient_id HAVING time=max(time) order by time;
> 
> And MySQL explanation about this query is:
> *** 1. row ***
>id: 1
>   select_type: SIMPLE
> table: traffic_log
>  type: ref
> possible_keys: account_id,account_id_2
>   key: account_id
>   key_len: 4
>   ref: const
>  rows: 1049598
> Extra: Using temporary; Using filesort
> *** 2. row ***
>id: 1
>   select_type: SIMPLE
> table: status_log
>  type: ref
> possible_keys: recipient_id_2
>   key: recipient_id_2
>   key_len: 5
>   ref: 
smse.traffic_log.recipient_id,smse.traffic_log.mobile_retry
>  rows: 2
> Extra:
> 
> as you see return records are 1049598.
> 
> But it's very slow.
> 
> Do you have any suggestions to fast it?
> 
> --
> Sincerely,
> Hadi Rastgou
> http://www.spreadfirefox.com/?q=affiliates&id=0&t=1";>Get
> Firefox!

This is another variant of the groupwise maximum pattern of query:

http://dev.mysql.com/doc/refman/4.1/en/example-maximum-column-group-row.html

My favorite way to write these kinds of queries is to make a temporary 
table (or more if needed) identifying the group and max-per-group then 
using that temp table to create the final query. In your case, the group 
is the recipient_id and the max-per-group will be MAX(`time`);

CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT 
`recipient_id`
   ,MAX(`time`) as lastmsg
FROM status_log;

CREATE TEMPORARY TABLE tmpLastStatuses SELECT
sl.*
FROM status_log sl
INNER JOIN tmpLastStatusTimes lst
ON lst.`recipient_id` = sl.`recipient_id`
AND lst.lastmsg = sl.`time`;

SELECT * from traffic_log 
LEFT JOIN tmpLastStatuses 
ON traffic_log.recipient_id=tmpLastStatuses.recipient_id 
AND traffic_log.mobile_retry=tmpLastStatuses.mobile_retry 
WHERE  account_id = 32
order by time;

DROP TEMPORARY TABLE tmpLastStatuses, tmpLastStatusTimes;

In your specific sample, you were only looking for messages from a 
particular person (account 32) so we may be able to speed up my example 
even more if we change the first statement of this query to read:

CREATE TEMPORARY TABLE tmpLastStatusTimes SELECT 
sl.`recipient_id`
   ,MAX(sl.`time`) as lastmsg
FROM status_log sl
INNER JOIN traffic_log tl
on tl.`recipient_id` = sl.`recipient_id`
and tl.account_id = 32;

That way we keep that table's contents within the scope of the actual 
desired results instead of computing the most recent statues for ALL 
messages for EVERYONE.

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: Query Speed

2006-02-01 Thread Peter Brawley




Hadi,
>But it's very slow.
>Do you have any suggestions to fast it?

Your query calls no aggregate functions, so what do you mean to achieve
by GROUP BY ... HAVING? For example this bit of logic extracted from
your query ...

SELECT * FROM table
GROUP BY pkcol
HAVING pkcol=MAX(pkcol)

is logically equivalent to ...

SELECT * FROM table
ORDER BY pkcol;

if pkcol is unique but orders of magnitude slower. If you want the
maximum time for each recipient_id, you need something like ...

SELECT 
  recipient_id, 
  ...,
  MAX(time)
...
GROUP BY recipient_id;

PB

-

سيد هادی راستگوی حقی wrote:

  Dear All,
I need your suggestions please.

have to large tables with these schemas:

 Table: traffic_log
Create Table: CREATE TABLE `traffic_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `orig` varchar(13) default NULL,
  `dest` varchar(13) default NULL,
  `message` text,
  `account_id` int(11) NOT NULL default '0',
  `service_id` int(11) NOT NULL default '0',
  `dir` enum('IN','OUT') NOT NULL default 'IN',
  `plugin` varchar(30) NOT NULL default 'UNKNOWN',
  `date_entered` datetime NOT NULL default '-00-00 00:00:00',
  `replied` tinyint(4) default '0',
  KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
  KEY `account_id_2` (`account_id`,`date_entered`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Table: status_log
Create Table: CREATE TABLE `status_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `status` smallint(5) NOT NULL default '0',
  `time` datetime NOT NULL default '-00-00 00:00:00',
  `smsc` varchar(20) NOT NULL default '',
  `priority` tinyint(2) unsigned NOT NULL default '0',
  `ack` varchar(30) NOT NULL default '',
  KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
  KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I want to execute a query to find out each last message's status. So my
query is :

select * from traffic_log LEFT JOIN status_log ON
traffic_log.recipient_id=status_log.recipient_id and
traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
group by status_log.recipient_id HAVING time=max(time) order by time;

And MySQL explanation about this query is:
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: traffic_log
 type: ref
possible_keys: account_id,account_id_2
  key: account_id
  key_len: 4
  ref: const
 rows: 1049598
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: status_log
 type: ref
possible_keys: recipient_id_2
  key: recipient_id_2
  key_len: 5
  ref: smse.traffic_log.recipient_id,smse.traffic_log.mobile_retry
 rows: 2
Extra:

as you see return records are 1049598.

But it's very slow.

Do you have any suggestions to fast it?

--
Sincerely,
Hadi Rastgou
"http://www.spreadfirefox.com/?q=affiliates&id=0&t=1">Get
Firefox!
  
  

No virus found in this incoming message.
Checked by AVG Free Edition.
Version: 7.1.375 / Virus Database: 267.15.0/248 - Release Date: 2/1/2006
  



No virus found in this outgoing message.
Checked by AVG Free Edition.
Version: 7.1.375 / Virus Database: 267.15.0/248 - Release Date: 2/1/2006


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Query Speed

2006-02-01 Thread سيد هادی راستگوی حقی
Dear All,
I need your suggestions please.

have to large tables with these schemas:

 Table: traffic_log
Create Table: CREATE TABLE `traffic_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `orig` varchar(13) default NULL,
  `dest` varchar(13) default NULL,
  `message` text,
  `account_id` int(11) NOT NULL default '0',
  `service_id` int(11) NOT NULL default '0',
  `dir` enum('IN','OUT') NOT NULL default 'IN',
  `plugin` varchar(30) NOT NULL default 'UNKNOWN',
  `date_entered` datetime NOT NULL default '-00-00 00:00:00',
  `replied` tinyint(4) default '0',
  KEY `account_id` (`account_id`,`recipient_id`,`mobile_retry`),
  KEY `account_id_2` (`account_id`,`date_entered`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

Table: status_log
Create Table: CREATE TABLE `status_log` (
  `recipient_id` int(11) NOT NULL default '0',
  `retry` smallint(4) NOT NULL default '0',
  `mobile_retry` tinyint(1) NOT NULL default '0',
  `status` smallint(5) NOT NULL default '0',
  `time` datetime NOT NULL default '-00-00 00:00:00',
  `smsc` varchar(20) NOT NULL default '',
  `priority` tinyint(2) unsigned NOT NULL default '0',
  `ack` varchar(30) NOT NULL default '',
  KEY `recipient_id_2` (`recipient_id`,`mobile_retry`,`time`,`status`),
  KEY `time` (`time`,`recipient_id`,`mobile_retry`,`status`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8

I want to execute a query to find out each last message's status. So my
query is :

select * from traffic_log LEFT JOIN status_log ON
traffic_log.recipient_id=status_log.recipient_id and
traffic_log.mobile_retry=status_log.mobile_retry WHERE  account_id = 32
group by status_log.recipient_id HAVING time=max(time) order by time;

And MySQL explanation about this query is:
*** 1. row ***
   id: 1
  select_type: SIMPLE
table: traffic_log
 type: ref
possible_keys: account_id,account_id_2
  key: account_id
  key_len: 4
  ref: const
 rows: 1049598
Extra: Using temporary; Using filesort
*** 2. row ***
   id: 1
  select_type: SIMPLE
table: status_log
 type: ref
possible_keys: recipient_id_2
  key: recipient_id_2
  key_len: 5
  ref: smse.traffic_log.recipient_id,smse.traffic_log.mobile_retry
 rows: 2
Extra:

as you see return records are 1049598.

But it's very slow.

Do you have any suggestions to fast it?

--
Sincerely,
Hadi Rastgou
http://www.spreadfirefox.com/?q=affiliates&id=0&t=1";>Get
Firefox!


Re: how to increase query speed ?

2005-10-25 Thread C.R. Vegelin

Thanks (to) Brent and Alexey ...
The solution to speed up my query was the compound index on Hash+Year.
With this compound key the query time dropped dramatically.

From 30 minutes to 14 seconds !

Cor Vegelin
Business Explorer BV


- Original Message - 
From: "Brent Baisley" <[EMAIL PROTECTED]>

To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, October 25, 2005 6:05 PM
Subject: Re: how to increase query speed ?



Sorry I missed the explain part.

You are doing a full table scan on the Updates table. There really is  
no way around speeding up reading a million rows. That's going to be  
dependent mainly on the speed of your disks and then how much memory  
you have.
MySQL is optimizing your query to select from the Updates table  
instead, so you may want to change your query to match what MySQL is  
doing. Not sure if it would be faster, but it's something to try.
Select Count(*) From Update Inner Join Data ON Data.Hash =  
Updates.Hash AND Data.Year = Updates.Year


As Alexey mentioned, you should absolutely add a compound index on  
Hash+Year.


The bottom line is that you are searching Data for over a million  
matching Updates. Without fast disks and/or lots of RAM for caching,  
you're not going to get great performance. You just have a lot of  
matches to sift through.



On Oct 25, 2005, at 10:32 AM, C.R. Vegelin wrote:


Hi Brent,
Well I did post an EXPLAIN before my query,
but it got squeezed at the end of my former email.
The EXPLAIN for my query says:
==
id:1
select-type:   simple
table:Updates
type: ALL
possible_keys:  Hash, Year
key:  NULL
key_len:NULL
ref:NULL
rows:1003823
Extra:   ===
id:1
select-type:   simple
table:Data
type: ref
possible_keys:  Hash, Year
key:  Hash
key_len:8
ref:Updates.Hash
rows:2
Extra:using where
=
And yes, both columns Hash and Year and indexed in both tables.
Regards, Cor

- Original Message - From: "Brent Baisley"  
<[EMAIL PROTECTED]>

To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, October 25, 2005 4:15 PM
Subject: Re: how to increase query speed ?



How about posting the explain for your query. Just put explain  
before  it, MySQL with then tell you how it will go about  
executing the  query, like which indexes it's using. I assume you  
have both columns  indexed?

On Oct 25, 2005, at 4:46 AM, C.R. Vegelin wrote:


Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:



Hash bigint(20) unsigned default NULL
Year tinyint(4) NOT NULL default '0'



Table Data has the same 2 (non-unique) keys, defined as:



Hash bigint(20) unsigned NOT NULL default '0'
Year tinyint(4) NOT NULL default '0'


The final purpose is to insert and update Data with new /  
changed  Updates.

Before doing so, i have a "simple" query, like:
Select Count(*) From Data Inner Join Updates ON Data.Hash =   
Updates.Hash AND Data.Year = Updates.Year;

but this one takes more than 30 minutes.
Before running this query, I did: Analyze Table Updates, Data;  
to  speed up acc. the manual 7.2.3.

Does someone know how to increase the performance for this query ?
Below the results of the explain for this query.
TIA, regards Cor


  myQuery id select_type table type possible_keys key key_len  
ref  rows Extra

  1 SIMPLE Updates ALL Hash,Year


 1003823

  1 SIMPLE Data ref Hash,Year Hash 8 Updates.Hash 2 Using where



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql? 
[EMAIL PROTECTED]






--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to increase query speed ?

2005-10-25 Thread Brent Baisley

Sorry I missed the explain part.

You are doing a full table scan on the Updates table. There really is  
no way around speeding up reading a million rows. That's going to be  
dependent mainly on the speed of your disks and then how much memory  
you have.
MySQL is optimizing your query to select from the Updates table  
instead, so you may want to change your query to match what MySQL is  
doing. Not sure if it would be faster, but it's something to try.
Select Count(*) From Update Inner Join Data ON Data.Hash =  
Updates.Hash AND Data.Year = Updates.Year


As Alexey mentioned, you should absolutely add a compound index on  
Hash+Year.


The bottom line is that you are searching Data for over a million  
matching Updates. Without fast disks and/or lots of RAM for caching,  
you're not going to get great performance. You just have a lot of  
matches to sift through.



On Oct 25, 2005, at 10:32 AM, C.R. Vegelin wrote:


Hi Brent,
Well I did post an EXPLAIN before my query,
but it got squeezed at the end of my former email.
The EXPLAIN for my query says:
==
id:1
select-type:   simple
table:Updates
type: ALL
possible_keys:  Hash, Year
key:  NULL
key_len:NULL
ref:NULL
rows:1003823
Extra:   ===
id:1
select-type:   simple
table:Data
type: ref
possible_keys:  Hash, Year
key:  Hash
key_len:8
ref:Updates.Hash
rows:2
Extra:using where
=
And yes, both columns Hash and Year and indexed in both tables.
Regards, Cor

- Original Message - From: "Brent Baisley"  
<[EMAIL PROTECTED]>

To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, October 25, 2005 4:15 PM
Subject: Re: how to increase query speed ?



How about posting the explain for your query. Just put explain  
before  it, MySQL with then tell you how it will go about  
executing the  query, like which indexes it's using. I assume you  
have both columns  indexed?

On Oct 25, 2005, at 4:46 AM, C.R. Vegelin wrote:


Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:



Hash bigint(20) unsigned default NULL
Year tinyint(4) NOT NULL default '0'



Table Data has the same 2 (non-unique) keys, defined as:



Hash bigint(20) unsigned NOT NULL default '0'
Year tinyint(4) NOT NULL default '0'


The final purpose is to insert and update Data with new /  
changed  Updates.

Before doing so, i have a "simple" query, like:
Select Count(*) From Data Inner Join Updates ON Data.Hash =   
Updates.Hash AND Data.Year = Updates.Year;

but this one takes more than 30 minutes.
Before running this query, I did: Analyze Table Updates, Data;  
to  speed up acc. the manual 7.2.3.

Does someone know how to increase the performance for this query ?
Below the results of the explain for this query.
TIA, regards Cor


  myQuery id select_type table type possible_keys key key_len  
ref  rows Extra

  1 SIMPLE Updates ALL Hash,Year


 1003823

  1 SIMPLE Data ref Hash,Year Hash 8 Updates.Hash 2 Using where



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql? 
[EMAIL PROTECTED]






--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to increase query speed ?

2005-10-25 Thread Alexey Polyakov
Adding compound (hash, years) index (or even better unique index if it
fits in your business logic)  in both tables should speed up things.

--
Alexey Polyakov

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to increase query speed ?

2005-10-25 Thread C.R. Vegelin

Hi Brent,
Well I did post an EXPLAIN before my query,
but it got squeezed at the end of my former email.
The EXPLAIN for my query says:
==
id:1
select-type:   simple
table:Updates
type: ALL
possible_keys:  Hash, Year
key:  NULL
key_len:NULL
ref:NULL
rows:1003823
Extra:   
===

id:1
select-type:   simple
table:Data
type: ref
possible_keys:  Hash, Year
key:  Hash
key_len:8
ref:Updates.Hash
rows:2
Extra:using where
=
And yes, both columns Hash and Year and indexed in both tables.
Regards, Cor

- Original Message - 
From: "Brent Baisley" <[EMAIL PROTECTED]>

To: "C.R.Vegelin" <[EMAIL PROTECTED]>
Cc: 
Sent: Tuesday, October 25, 2005 4:15 PM
Subject: Re: how to increase query speed ?


How about posting the explain for your query. Just put explain before  
it, MySQL with then tell you how it will go about executing the  
query, like which indexes it's using. I assume you have both columns  
indexed?



On Oct 25, 2005, at 4:46 AM, C.R. Vegelin wrote:


Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:


Hash bigint(20) unsigned default NULL
Year tinyint(4) NOT NULL default '0'


Table Data has the same 2 (non-unique) keys, defined as:


Hash bigint(20) unsigned NOT NULL default '0'
Year tinyint(4) NOT NULL default '0'

The final purpose is to insert and update Data with new / changed  
Updates.

Before doing so, i have a "simple" query, like:
Select Count(*) From Data Inner Join Updates ON Data.Hash =  
Updates.Hash AND Data.Year = Updates.Year;

but this one takes more than 30 minutes.
Before running this query, I did: Analyze Table Updates, Data; to  
speed up acc. the manual 7.2.3.

Does someone know how to increase the performance for this query ?
Below the results of the explain for this query.
TIA, regards Cor


  myQuery id select_type table type possible_keys key key_len ref  
rows Extra

  1 SIMPLE Updates ALL Hash,Year


 1003823

  1 SIMPLE Data ref Hash,Year Hash 8 Updates.Hash 2 Using where



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to increase query speed ?

2005-10-25 Thread Brent Baisley
How about posting the explain for your query. Just put explain before  
it, MySQL with then tell you how it will go about executing the  
query, like which indexes it's using. I assume you have both columns  
indexed?



On Oct 25, 2005, at 4:46 AM, C.R. Vegelin wrote:


Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:


Hash bigint(20) unsigned default NULL
Year tinyint(4) NOT NULL default '0'


Table Data has the same 2 (non-unique) keys, defined as:


Hash bigint(20) unsigned NOT NULL default '0'
Year tinyint(4) NOT NULL default '0'

The final purpose is to insert and update Data with new / changed  
Updates.

Before doing so, i have a "simple" query, like:
Select Count(*) From Data Inner Join Updates ON Data.Hash =  
Updates.Hash AND Data.Year = Updates.Year;

but this one takes more than 30 minutes.
Before running this query, I did: Analyze Table Updates, Data; to  
speed up acc. the manual 7.2.3.

Does someone know how to increase the performance for this query ?
Below the results of the explain for this query.
TIA, regards Cor


  myQuery id select_type table type possible_keys key key_len ref  
rows Extra

  1 SIMPLE Updates ALL Hash,Year


 1003823

  1 SIMPLE Data ref Hash,Year Hash 8 Updates.Hash 2 Using where



--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



how to increase query speed ?

2005-10-25 Thread C.R. Vegelin
Hi List,
I have a performance problem I can't get solved.
I have 2 tables, called Updates (1 mln rows) and Data (5 mln rows).
Table Updates has 2 (non-unique) keys, defined as:
> Hash bigint(20) unsigned default NULL
> Year tinyint(4) NOT NULL default '0'
Table Data has the same 2 (non-unique) keys, defined as:
> Hash bigint(20) unsigned NOT NULL default '0'
> Year tinyint(4) NOT NULL default '0'
The final purpose is to insert and update Data with new / changed Updates.
Before doing so, i have a "simple" query, like:
Select Count(*) From Data Inner Join Updates ON Data.Hash = Updates.Hash AND 
Data.Year = Updates.Year;
but this one takes more than 30 minutes.
Before running this query, I did: Analyze Table Updates, Data; to speed up acc. 
the manual 7.2.3.
Does someone know how to increase the performance for this query ?
Below the results of the explain for this query.
TIA, regards Cor


  myQuery id select_type table type possible_keys key key_len ref rows Extra 
  1 SIMPLE Updates ALL Hash,Year 
 
 
 1003823 
 
  1 SIMPLE Data ref Hash,Year Hash 8 Updates.Hash 2 Using where 


Re: Query Speed

2004-04-20 Thread Robert J Taylor
Craig Gardner wrote:

Thank you very much.  That's what fixed my problem.

Robert J Taylor wrote:

Can you restrict to Not Null instead of != ""? (I.e, can you scrub 
the data not to have empty strings?).

The explain shows 3 extra where calculations per row...that's painful.


Great! Glad that solved your problem.

Robert Taylor
[EMAIL PROTECTED]
P.S. I didn't respond to the list initially, so I'm correcting that now.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Query Speed

2004-04-20 Thread Craig Gardner
I have two queries that are very similar. One of the queries takes a few 
minutes (3:43:07 last run) to complete, while the other takes less than 
a second to complete.

I know these are two different queries and shouldn't take the same 
amount of time, but I based the fast query on the slower one.  I would 
like to make the slower query faster and I don't have the slightest clue 
on how to do it.  Here are my queries and their explanations if that helps:

Please let me know if there's anything that I can do.

Thanks,
Craig


Slow Query:
SELECT insco.insconame, CONCAT_WS(", ", pt.ptlname, pt.ptfname) name, pt.id,
coverage.covpatinsno, cpt.cptcode , DATE_FORMAT(proc.procdt, 
"%m-%d-%Y") procdt,
insco.inscophone, proc.id procid, payrec.payrecamt current, 
fee.expected, coverage.covinsco, proc.proccpt
FROM payrec
LEFT JOIN procrec AS proc ON payrec.payrecpatient = proc.procpatient
LEFT JOIN patient AS pt ON payrec.payrecpatient = pt.id
LEFT JOIN coverage ON coverage.covpatient = pt.id AND (proc.proccov1 = 
coverage.id OR proc.proccurcovid = coverage.id)
LEFT JOIN insco ON coverage.covinsco = insco.id
LEFT JOIN cpt ON proc.proccpt = cpt.id
LEFT JOIN feeProfiles AS fee ON fee.cpt_ID = proc.proccpt AND 
fee.insurance_ID = coverage.covinsco
WHERE payrec.payreccat = 5 AND coverage.covinsco != ""
GROUP BY insconame, name, cptcode, procdt
ORDER BY insconame

Slow Query Explanation:
+--++---+-+-++--+--+
| table| type   | possible_keys | key | key_len | 
ref| rows | 
Extra|
+--++---+-+-++--+--+
| payrec   | ALL| NULL  | NULL|NULL | 
NULL   |  179 | Using where; Using temporary; 
Using filesort |
| coverage | ALL| PRIMARY   | NULL|NULL | 
NULL   |  935 | Using 
where  |
| proc | ALL| NULL  | NULL|NULL | 
NULL   |  420 | Using 
where  |
| pt   | eq_ref | PRIMARY   | PRIMARY |   4 | 
payrec.payrecpatient   |1 | Using 
where  |
| insco| eq_ref | PRIMARY   | PRIMARY |   4 | 
coverage.covinsco  |1 
|  |
| cpt  | eq_ref | PRIMARY   | PRIMARY |   4 | 
proc.proccpt   |1 
|  |
| fee  | eq_ref | unqq  | unqq|   8 | 
proc.proccpt,coverage.covinsco |1 
|  |
+--++---+-+-++--+--+

Fast Query:
SELECT insco.insconame, CONCAT_WS(", ", pt.ptlname, pt.ptfname) name, pt.id,
coverage.covpatinsno, cpt.cptcode , DATE_FORMAT(proc.procdt, 
"%m-%d-%Y") procdt,
pt.pthphone, proc.id procid, payrec.payrecamt current, fee.expected, 
coverage.covinsco, proc.proccpt
FROM payrec   
LEFT JOIN procrec AS proc ON payrec.payrecpatient = 
proc.procpatient   LEFT JOIN patient 
AS pt ON payrec.payrecpatient = pt.id
LEFT JOIN coverage ON coverage.covpatient = pt.id AND (proc.proccov1 = 
coverage.id OR proc.proccurcovid = coverage.id)   
LEFT JOIN insco ON coverage.covinsco = insco.id
LEFT JOIN cpt ON proc.proccpt = cpt.id
LEFT JOIN feeProfiles AS fee ON fee.cpt_ID = proc.proccpt AND 
fee.insurance_ID = coverage.covinsco
WHERE payrec.payreccat = 6 AND payrec.payrecsource = 0
GROUP BY name, cptcode, procdt
ORDER BY name, insconame

Explanaion:
+--++---+-+-++--+--+
| table| type   | possible_keys | key | key_len | 
ref| rows | 
Extra|
+--++---+-+-++--+--+
| payrec   | ALL| NULL  | NULL|NULL | 
NULL   |  179 | Using where; Using temporary; 
Using filesort |
| proc | ALL| NULL  | NULL|NULL | 
NULL   |  420 
|  |
| pt   | eq_ref | PRIMARY   | PRIMARY |   4 | 
payrec.payrecpatient   |1 
|  |
| coverage | ALL| PRIMARY   | NULL|NULL | 
NULL   |  935 
|  |
| insco| eq_ref | PRIMARY   | PRIMARY |   4 | 
coverage.covinsco

Re: Query Speed in Terminal vs. PHP

2004-03-02 Thread Saqib Ali
Is index defined on all of your tables?

Saqib Ali
-
http://validate.sf.net < (X)HTML / DocBook Validator and Transformer

On Tue, 2 Mar 2004, Chris Fowler wrote:

> I have a query that is admittedly inefficient in that it is doing
> multiple OR clauses and joining multiple tables. However, the query
> runs at an acceptable speed if I am in a terminal session and run the
> query directly in the terminal. On the other hand, when PHP performs
> the same query for use in a page, it takes 5-7 minutes for the query to
> run (the same query running in the terminal takes about 3-5 seconds). I
> can do a show processlist when the php page runs the query and I see
> "Copying to tmp table" in the State column of the show processlist
> results. Does anyone have any tips on why it is so much slower when run
> from php vs. the terminal.
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
>
>

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Speed in Terminal vs. PHP

2004-03-02 Thread Daniel Kasak




Chris Fowler wrote:
I have a query that is admittedly inefficient in that it
is doing multiple OR clauses and joining multiple tables. However, the
query runs at an acceptable speed if I am in a terminal session and run
the query directly in the terminal. On the other hand, when PHP
performs the same query for use in a page, it takes 5-7 minutes for the
query to run (the same query running in the terminal takes about 3-5
seconds). I can do a show processlist when the php page runs the query
and I see "Copying to tmp table" in the State column of the show
processlist results. Does anyone have any tips on why it is so much
slower when run from php vs. the terminal.
  

Unless there's something going on that I don't know about, it should
take the same time either way.
Have you checked that the query is EXACTLY the same?
It's possible that the query run from PHP is not using an index that
the console query is. Try doing an 'explain' on the query from the
processlist and comparing it to an 'explain' from the console query to
see if they are using the same indexes.

Dan

-- 

signature
Daniel Kasak

IT Developer

NUS Consulting Group

Level 5, 77 Pacific Highway

North Sydney, NSW, Australia 2060

T: (+61) 2 9922-7676 / F: (+61) 2 9922 7989

email: [EMAIL PROTECTED]

website: http://www.nusconsulting.com.au



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]

Query Speed in Terminal vs. PHP

2004-03-02 Thread Chris Fowler
I have a query that is admittedly inefficient in that it is doing 
multiple OR clauses and joining multiple tables. However, the query 
runs at an acceptable speed if I am in a terminal session and run the 
query directly in the terminal. On the other hand, when PHP performs 
the same query for use in a page, it takes 5-7 minutes for the query to 
run (the same query running in the terminal takes about 3-5 seconds). I 
can do a show processlist when the php page runs the query and I see 
"Copying to tmp table" in the State column of the show processlist 
results. Does anyone have any tips on why it is so much slower when run 
from php vs. the terminal.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re[2]: Improving Query speed - any suggestions?

2003-10-23 Thread Peter Buri
Hello,

one more idea, use something like this [ i hope it's correct :) ]

SELECT
content.row_id AS row_id,
content.app_id AS app_id,
CASE s1.field_id
  when 69 then "niche",
  when 70 then "type",
  when 71 then "title",
  when 72 then "description",
  when 73 then "image",
  when 74 then "last_update",
  when 76 then "content_link",
  when 84 then "unique_id",
  when 129 then "date_added",
  when 116 then "content_provider",
  when 118 then "user_hits",
  when 120 then "vote_total",
  when 121 then "vote_user_total",
  when 125 then "channel" 
END as fieldname,
s1.data 
FROM 
content JOIN
content s1 ON s1.row_id = content.row_id and JOIN
content s2 ON s2.field_id=74 AND s2.row_id = content.row_id
WHERE 
content.app_id = 11 AND 
unix_timestamp(s2.data)-unix_timestamp('2003-10-23 23:59:59') < 0 AND
s1.field_id IN (69,70,71,72,73,74,76,84,129,116,118,120,121,125)
ORDER BY
last_update desc LIMIT -1

I think it's can be fast enough, but your aplication have to collect the different 
fields...

burci

Thursday, October 23, 2003, 11:50:08 PM, you wrote:

MB> Thanks everyone for your input, I'll try the ramdisk idea, I read about
MB> someone else who tried that and had some success. Beyond, that I'm gonna
MB> take the long route and redesign the database to be a bit more
MB> conventional.

MB> Thanks!
MB> Matt

MB> On Thu, 2003-10-23 at 20:28, Peter Buri wrote:
>> Hello,
>> 
>> as i see you use one table to store all the data, but the cohesive data are
>> split into 15! different rows.
>> 
>> I think to get the best performance you shoud redesign your tabel.
>> Use at last first normal form [1NF], if the app_id is uniq this can be the
>> primary key [which will speed up the query] .
>> 
>> Data which have the same row_id should be in one row.
>> 
>> Your table definition shoud look like this:
>> 
>> create table content (
>> app_id   MEDIUMINT NOT NULL AUTO_INCREMENT,
>> niche??? , -- maybe int
>> type int,
>> titlevarchar(200),
>> description  text,
>> image???, -- maybe varchar
>> last_update  datetime,
>> content_link varchar(200),
>> unique_idint,
>> date_added   datetime,
>> content_provider int,
>> user_hitsint,
>> vote_total   int,
>> vote_user_total  int,
>> channel  int,
>> --...
>> 
>> primary key ( app_id )
>> 
>> );
>> 
>> As i see you get at last 129 different filed type [s9.field_id=129], you can
>> split the data into different tables. [Use heigher normal form.]
>> 
>> But if you don't want redesign the tables and all the different fields exists
>> then use join instead of left join and then the "group by" is needless.
>> 
>> burci
>> 
>> Thursday, October 23, 2003, 9:05:26 PM, you wrote:
>> 
>> MB> Hey All-
>> 
>> MB> I am trying to improve the speed of a website and was wondering if
>> MB> anyone had any ways I can rewrite these queries so that they actually
>> MB> run with some descent speed.
>> 
>> MB> Its a really nasty query and I'm not sure where to start, I'd like to
>> MB> now have to redo the tables and I already put some indexes on it which
>> MB> improved speed a lot but these queries are still very slow. You can most
>> MB> certainly laugh to yourselves on this one... jsut trying to get some
>> MB> opinions on what I should do with this.
>> 
>> MB> Thanks-
>> MB> Matt
>> 
>> MB> SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
>> MB> niche, s2.data AS type, s3.data AS title, s4.data AS description,
>> MB> s5.data AS image, s6.data AS last_update, s7.data AS content_link,
>> MB> s8.data AS unique_id, s9.data AS date_added, s10.data AS
>> MB> content_provider, s11.data AS user_hits, s12.data AS vote_total,
>> MB> s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
>> MB> content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
>> MB> content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
>> MB> content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
>> MB> content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
>> MB> content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
>> MB> content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
>> MB> content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
>> MB> content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
>> MB> content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
>> MB> content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
>> MB> JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
>> MB> LEFT JOIN 

Re: Improving Query speed - any suggestions?

2003-10-23 Thread Matt Babineau
Thanks everyone for your input, I'll try the ramdisk idea, I read about
someone else who tried that and had some success. Beyond, that I'm gonna
take the long route and redesign the database to be a bit more
conventional.

Thanks!
Matt

On Thu, 2003-10-23 at 20:28, Peter Buri wrote:
> Hello,
> 
> as i see you use one table to store all the data, but the cohesive data are
> split into 15! different rows.
> 
> I think to get the best performance you shoud redesign your tabel.
> Use at last first normal form [1NF], if the app_id is uniq this can be the
> primary key [which will speed up the query] .
> 
> Data which have the same row_id should be in one row.
> 
> Your table definition shoud look like this:
> 
> create table content (
> app_id   MEDIUMINT NOT NULL AUTO_INCREMENT,
> niche??? , -- maybe int
> type int,
> titlevarchar(200),
> description  text,
> image???, -- maybe varchar
> last_update  datetime,
> content_link varchar(200),
> unique_idint,
> date_added   datetime,
> content_provider int,
> user_hitsint,
> vote_total   int,
> vote_user_total  int,
> channel  int,
> --...
> 
> primary key ( app_id )
> 
> );
> 
> As i see you get at last 129 different filed type [s9.field_id=129], you can
> split the data into different tables. [Use heigher normal form.]
> 
> But if you don't want redesign the tables and all the different fields exists
> then use join instead of left join and then the "group by" is needless.
> 
> burci
> 
> Thursday, October 23, 2003, 9:05:26 PM, you wrote:
> 
> MB> Hey All-
> 
> MB> I am trying to improve the speed of a website and was wondering if
> MB> anyone had any ways I can rewrite these queries so that they actually
> MB> run with some descent speed.
> 
> MB> Its a really nasty query and I'm not sure where to start, I'd like to
> MB> now have to redo the tables and I already put some indexes on it which
> MB> improved speed a lot but these queries are still very slow. You can most
> MB> certainly laugh to yourselves on this one... jsut trying to get some
> MB> opinions on what I should do with this.
> 
> MB> Thanks-
> MB> Matt
> 
> MB> SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
> MB> niche, s2.data AS type, s3.data AS title, s4.data AS description,
> MB> s5.data AS image, s6.data AS last_update, s7.data AS content_link,
> MB> s8.data AS unique_id, s9.data AS date_added, s10.data AS
> MB> content_provider, s11.data AS user_hits, s12.data AS vote_total,
> MB> s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
> MB> content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
> MB> content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
> MB> content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
> MB> content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
> MB> content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
> MB> content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
> MB> content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
> MB> content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
> MB> content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
> MB> content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
> MB> JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
> MB> LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
> MB> content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND s13.row_id
> MB> = content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
> MB> s14.row_id = content.row_id WHERE content.app_id = 11 AND
> MB> unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0 GROUP
> MB> BY row_id ORDER BY last_update desc LIMIT -1
> 
> 
> 
> 
> -- 
> [nick]:burci [hp]:http://peter.buri.hu [mailto]:[EMAIL PROTECTED] [motto]:"Music 
> makes life easier to survive!"
> 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Improving Query speed - any suggestions?

2003-10-23 Thread Peter Buri
Hello,

as i see you use one table to store all the data, but the cohesive data are
split into 15! different rows.

I think to get the best performance you shoud redesign your tabel.
Use at last first normal form [1NF], if the app_id is uniq this can be the
primary key [which will speed up the query] .

Data which have the same row_id should be in one row.

Your table definition shoud look like this:

create table content (
app_id   MEDIUMINT NOT NULL AUTO_INCREMENT,
niche??? , -- maybe int
type int,
titlevarchar(200),
description  text,
image???, -- maybe varchar
last_update  datetime,
content_link varchar(200),
unique_idint,
date_added   datetime,
content_provider int,
user_hitsint,
vote_total   int,
vote_user_total  int,
channel  int,
--...

primary key ( app_id )

);

As i see you get at last 129 different filed type [s9.field_id=129], you can
split the data into different tables. [Use heigher normal form.]

But if you don't want redesign the tables and all the different fields exists
then use join instead of left join and then the "group by" is needless.

burci

Thursday, October 23, 2003, 9:05:26 PM, you wrote:

MB> Hey All-

MB> I am trying to improve the speed of a website and was wondering if
MB> anyone had any ways I can rewrite these queries so that they actually
MB> run with some descent speed.

MB> Its a really nasty query and I'm not sure where to start, I'd like to
MB> now have to redo the tables and I already put some indexes on it which
MB> improved speed a lot but these queries are still very slow. You can most
MB> certainly laugh to yourselves on this one... jsut trying to get some
MB> opinions on what I should do with this.

MB> Thanks-
MB> Matt

MB> SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
MB> niche, s2.data AS type, s3.data AS title, s4.data AS description,
MB> s5.data AS image, s6.data AS last_update, s7.data AS content_link,
MB> s8.data AS unique_id, s9.data AS date_added, s10.data AS
MB> content_provider, s11.data AS user_hits, s12.data AS vote_total,
MB> s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
MB> content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
MB> content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
MB> content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
MB> content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
MB> content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
MB> content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
MB> content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
MB> content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
MB> content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
MB> content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
MB> JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
MB> LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
MB> content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND s13.row_id
MB> = content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
MB> s14.row_id = content.row_id WHERE content.app_id = 11 AND
MB> unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0 GROUP
MB> BY row_id ORDER BY last_update desc LIMIT -1




-- 
[nick]:burci [hp]:http://peter.buri.hu [mailto]:[EMAIL PROTECTED] [motto]:"Music makes 
life easier to survive!"


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Improving Query speed - any suggestions?

2003-10-23 Thread mos
At 02:05 PM 10/23/2003, you wrote:
Hey All-

I am trying to improve the speed of a website and was wondering if
anyone had any ways I can rewrite these queries so that they actually
run with some descent speed.
Its a really nasty query and I'm not sure where to start, I'd like to
now have to redo the tables and I already put some indexes on it which
improved speed a lot but these queries are still very slow. You can most
certainly laugh to yourselves on this one... jsut trying to get some
opinions on what I should do with this.
Thanks-
Matt
Matt,
Instead of doing table joins on a large number of tables, in my 
application I've created a balance line procedure that would sync the 
"linked" tables with the primary table. (The tables are NOT linked using 
SQL) This works if there is a 1:1 correspondence between the tables. So 
each table has a simple Select statement for just that table and an 
optional Where clause and the table is sorted on a field that is common to 
all tables. If the key field of any of the secondary tables are less than 
the key field in the primary table, then the secondary records are skipped 
until it matches or exceeds the primary key field. It it exceeds the 
primary field then the secondary record is missing and you would use all 
null's for the secondary table.

Example:
select cust_id, col1a, col1b from table1 order by cust_id
select cust_id, col2a, col2b from table2 order by cust_id
select cust_id, col3a, col3b from table3 order by cust_id
A GetNextRcd procedure would sync the records so it returns 3 records all 
pointing to the same cust_id. A loop would then traverse through the table.

The other solution would be to create a RAM disk for your MySQL 
work directory. This will speed up the table joins. MySQL allows for 
multiple work volumes so the first volume could be the ram disk. If it 
overflows that, I'm thinking it will go to your second work directory. I 
haven't tried it, but it may be something to look into.

Mike


SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
niche, s2.data AS type, s3.data AS title, s4.data AS description,
s5.data AS image, s6.data AS last_update, s7.data AS content_link,
s8.data AS unique_id, s9.data AS date_added, s10.data AS
content_provider, s11.data AS user_hits, s12.data AS vote_total,
s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND s13.row_id
= content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
s14.row_id = content.row_id WHERE content.app_id = 11 AND
unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0 GROUP
BY row_id ORDER BY last_update desc LIMIT -1
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Improving Query speed - any suggestions?

2003-10-23 Thread Dathan Vance Pattishall
Glancing at the query, I'm already under the impression that it should
be broken up into multiple queries and code should be used to simulate
the behavior. I find that this method is faster for mysql and a bit more
portable for developers in my opinion.

Other then what is stated above I would have to run it by hand to
determine what the purpose of this query is. Then see if it can be
rewritten.


- Dathan Vance Pattishall
  - Sr. Programmer and mySQL DBA for FriendFinder Inc.
  - http://friendfinder.com/go/p40688


-->-Original Message-
-->From: Matt Babineau [mailto:[EMAIL PROTECTED]
-->Sent: Thursday, October 23, 2003 12:05 PM
-->To: [EMAIL PROTECTED]
-->Subject: Improving Query speed - any suggestions?
-->
-->Hey All-
-->
-->I am trying to improve the speed of a website and was wondering if
-->anyone had any ways I can rewrite these queries so that they actually
-->run with some descent speed.
-->
-->Its a really nasty query and I'm not sure where to start, I'd like to
-->now have to redo the tables and I already put some indexes on it
which
-->improved speed a lot but these queries are still very slow. You can
most
-->certainly laugh to yourselves on this one... jsut trying to get some
-->opinions on what I should do with this.
-->
-->Thanks-
-->Matt
-->
-->SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
-->niche, s2.data AS type, s3.data AS title, s4.data AS description,
-->s5.data AS image, s6.data AS last_update, s7.data AS content_link,
-->s8.data AS unique_id, s9.data AS date_added, s10.data AS
-->content_provider, s11.data AS user_hits, s12.data AS vote_total,
-->s13.data AS vote_user_total, s14.data AS channel FROM content LEFT
JOIN
-->content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
-->content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
-->content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
-->content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
-->content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
-->content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
-->content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
-->content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
-->content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT
JOIN
-->content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
-->JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
-->LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
-->content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND
s13.row_id
-->= content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
-->s14.row_id = content.row_id WHERE content.app_id = 11 AND
-->unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0
GROUP
-->BY row_id ORDER BY last_update desc LIMIT -1
-->
-->
-->--
-->MySQL General Mailing List
-->For list archives: http://lists.mysql.com/mysql
-->To unsubscribe:
-->http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Improving Query speed - any suggestions?

2003-10-23 Thread Matt Babineau
Hey All-

I am trying to improve the speed of a website and was wondering if
anyone had any ways I can rewrite these queries so that they actually
run with some descent speed.

Its a really nasty query and I'm not sure where to start, I'd like to
now have to redo the tables and I already put some indexes on it which
improved speed a lot but these queries are still very slow. You can most
certainly laugh to yourselves on this one... jsut trying to get some
opinions on what I should do with this.

Thanks-
Matt

SELECT content.row_id AS row_id, content.app_id AS app_id, s1.data AS
niche, s2.data AS type, s3.data AS title, s4.data AS description,
s5.data AS image, s6.data AS last_update, s7.data AS content_link,
s8.data AS unique_id, s9.data AS date_added, s10.data AS
content_provider, s11.data AS user_hits, s12.data AS vote_total,
s13.data AS vote_user_total, s14.data AS channel FROM content LEFT JOIN
content s1 ON s1.field_id=69 AND s1.row_id = content.row_id LEFT JOIN
content s2 ON s2.field_id=70 AND s2.row_id = content.row_id LEFT JOIN
content s3 ON s3.field_id=71 AND s3.row_id = content.row_id LEFT JOIN
content s4 ON s4.field_id=72 AND s4.row_id = content.row_id LEFT JOIN
content s5 ON s5.field_id=73 AND s5.row_id = content.row_id LEFT JOIN
content s6 ON s6.field_id=74 AND s6.row_id = content.row_id LEFT JOIN
content s7 ON s7.field_id=76 AND s7.row_id = content.row_id LEFT JOIN
content s8 ON s8.field_id=84 AND s8.row_id = content.row_id LEFT JOIN
content s9 ON s9.field_id=129 AND s9.row_id = content.row_id LEFT JOIN
content s10 ON s10.field_id=116 AND s10.row_id = content.row_id LEFT
JOIN content s11 ON s11.field_id=118 AND s11.row_id = content.row_id
LEFT JOIN content s12 ON s12.field_id=120 AND s12.row_id =
content.row_id LEFT JOIN content s13 ON s13.field_id=121 AND s13.row_id
= content.row_id LEFT JOIN content s14 ON s14.field_id=125 AND
s14.row_id = content.row_id WHERE content.app_id = 11 AND
unix_timestamp(s6.data)-unix_timestamp('2003-10-23 23:59:59') < 0 GROUP
BY row_id ORDER BY last_update desc LIMIT -1


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query speed issue

2003-10-02 Thread aguia

What are the configuration you are using? What's the size of your buffers?
What's your system?

Maybe increasing sort buffer and key buffer will be good.

;)
Alexis

Quoting Brad Teale <[EMAIL PROTECTED]>:

> Hello, 
> 
> The problem:
> I have the following query with is taking upwards of 2 minutes to complete
> and we need it faster, prefer less than 30 seconds (don't laugh):
> select modelhr, avg(f.temp-b.temp), avg(abs(f.temp-b.temp)),
> stddev(f.temp-b.temp), stddev(abs(f.temp-b.temp)), count(f.temp-b.temp) from
> foo as f, bar as b where f.fyearmoda=b.yearmoda and f.fhr=b.hr and
> f.stn=b.stn and b.yearmoda >= '2003-01-01' and b.yearmoda <= '2003-01-31'
> and b.stn='' group by modelhr;
> 
> When we run explain we get:
> +---+---+---+-+-+---
> ++
> | table | type  | possible_keys | key | key_len | ref
> | rows | Extra|
> +---+---+---+-+-+---
> ++
> | b | range | PRIMARY,interp_hr | PRIMARY |   7 | NULL
> |  679 | Using where; Using temporary; Using filesort |
> | f | ref   | stn,fcst  | stn |  11 |
> const,m.yearmoda,m.Hr |   26 | Using where
> |
> +---+---+---+-+-+---
> +--+--+
> 
> Is there a reasonable way to get this query from using temporary and
> filesort?  I tried dumping the data into a temporary table, and the explain
> ran the same.  Also, both MySQL setups perform the same.  Any
> ideasPlease!
> 
>  System/Table Stuff Below
> -
> System: dual Xeon 2.4GHz machine with 2G RAM
> Interconnect: QLogicFC 2200
> Disk1: Sun T3 hardware raid5 with Reiserfs (64M cache controller)
> Disk2: Sun T3 hardware raid0 with Reiserfs (64M cache controller)
> OS: Red Hat Linux release 8.0 (with qlogicfc module)
> MySQL1: 4.0.14 - prebuilt MySQL rpm uses Disk1
> MySQL2: 4.0.15a - Hand built with MySQL Manual options uses Disk2
> 
> The table structures are as follows:
> CREATE TABLE foo (
>   yearmoda date NOT NULL default '-00-00',
>   mruntime int(2) NOT NULL default '0',
>   mhr int(3) NOT NULL default '0',
>   fyearmoda date NOT NULL default '-00-00',
>   fhr int(2) NOT NULL default '0',
>   stn varchar(4) NOT NULL default '',
>   temp decimal(6,2) default NULL,
> ... more but unused data here
>   PRIMARY KEY  (yearmoda,mruntime,mhr,stn),
>   KEY stn (stn,fyearmoda,fhr),
>   KEY fcst (stn,yearmoda,mruntime)
> ) TYPE=MyISAM;
> 
> CREATE TABLE bar (
>   stn char(4) NOT NULL default '',
>   hr int(2) NOT NULL default '0',
>   min int(2) NOT NULL default '0',
>   day int(2) NOT NULL default '0',
>   temp decimal(6,2) NOT NULL default '0.00',
> ... More unused data here
>   yearmoda date NOT NULL default '-00-00',
>   PRIMARY KEY  (stn,yearmoda,hr,min),
>   KEY interp_hr (yearmoda,hr,stn)
> ) TYPE=MyISAM;
> 
> Table Stats:
> foo - 38G - Data/18G - Index (326K rows)
> bar - 24G - Data/14G - Index (35K rows)
> 
>   
> Thanks,
> Brad
> 
> -- 
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:   
> http://lists.mysql.com/[EMAIL PROTECTED]
> 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Query speed issue

2003-10-02 Thread Brad Teale
Hello, 

The problem:
I have the following query with is taking upwards of 2 minutes to complete
and we need it faster, prefer less than 30 seconds (don't laugh):
select modelhr, avg(f.temp-b.temp), avg(abs(f.temp-b.temp)),
stddev(f.temp-b.temp), stddev(abs(f.temp-b.temp)), count(f.temp-b.temp) from
foo as f, bar as b where f.fyearmoda=b.yearmoda and f.fhr=b.hr and
f.stn=b.stn and b.yearmoda >= '2003-01-01' and b.yearmoda <= '2003-01-31'
and b.stn='' group by modelhr;

When we run explain we get:
+---+---+---+-+-+---
++
| table | type  | possible_keys | key | key_len | ref
| rows | Extra|
+---+---+---+-+-+---
++
| b | range | PRIMARY,interp_hr | PRIMARY |   7 | NULL
|  679 | Using where; Using temporary; Using filesort |
| f | ref   | stn,fcst  | stn |  11 |
const,m.yearmoda,m.Hr |   26 | Using where
|
+---+---+---+-+-+---
+--+--+

Is there a reasonable way to get this query from using temporary and
filesort?  I tried dumping the data into a temporary table, and the explain
ran the same.  Also, both MySQL setups perform the same.  Any
ideasPlease!

 System/Table Stuff Below
-
System: dual Xeon 2.4GHz machine with 2G RAM
Interconnect: QLogicFC 2200
Disk1: Sun T3 hardware raid5 with Reiserfs (64M cache controller)
Disk2: Sun T3 hardware raid0 with Reiserfs (64M cache controller)
OS: Red Hat Linux release 8.0 (with qlogicfc module)
MySQL1: 4.0.14 - prebuilt MySQL rpm uses Disk1
MySQL2: 4.0.15a - Hand built with MySQL Manual options uses Disk2

The table structures are as follows:
CREATE TABLE foo (
  yearmoda date NOT NULL default '-00-00',
  mruntime int(2) NOT NULL default '0',
  mhr int(3) NOT NULL default '0',
  fyearmoda date NOT NULL default '-00-00',
  fhr int(2) NOT NULL default '0',
  stn varchar(4) NOT NULL default '',
  temp decimal(6,2) default NULL,
... more but unused data here
  PRIMARY KEY  (yearmoda,mruntime,mhr,stn),
  KEY stn (stn,fyearmoda,fhr),
  KEY fcst (stn,yearmoda,mruntime)
) TYPE=MyISAM;

CREATE TABLE bar (
  stn char(4) NOT NULL default '',
  hr int(2) NOT NULL default '0',
  min int(2) NOT NULL default '0',
  day int(2) NOT NULL default '0',
  temp decimal(6,2) NOT NULL default '0.00',
... More unused data here
  yearmoda date NOT NULL default '-00-00',
  PRIMARY KEY  (stn,yearmoda,hr,min),
  KEY interp_hr (yearmoda,hr,stn)
) TYPE=MyISAM;

Table Stats:
foo - 38G - Data/18G - Index (326K rows)
bar - 24G - Data/14G - Index (35K rows)

  
Thanks,
Brad

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query speed and key optimization

2003-07-10 Thread Brent Baisley
It sounds like you are referring to full text indexing. Whenever you 
have to put a wild card at the start of a word, you should probably 
considering using full text indexing. It's easy to implement and the 
manual pages are fairly informative.

On Thursday, July 10, 2003, at 02:41 PM, Wendell Dingus wrote:

Some way to
form a query that can tell it to ignore spaces and punctuation in text
in a field? Hmm... Thanks!
--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search & Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Query speed and key optimization

2003-07-10 Thread Wendell Dingus
I've been immersing myself in reading and trying to understand what I
can about how keys and indexes work and the flow of a query. Here's an
example of one that's not real efficient and I'm not sure if it can be
made any more efficient or done some other way to go faster though. I'm
guessing a "like '%word%'" query of a varchar(80) is going to
sequentially search everything any way you slice it? The purpose of that
row is to perform a "pure alpha" search and is just the first and last
name fields concatenated with all spaces and punctuation removed.
Someone here came up with that idea a long time ago and I'm also
wondering if there's not a more efficient way to do that? Some way to
form a query that can tell it to ignore spaces and punctuation in text
in a field? Hmm... Thanks!


mysql> describe TBL_AllNames;
+---+-+--+-+-+---+
| Field | Type| Null | Key | Default | Extra |
+---+-+--+-+-+---+
| InstNum   | varchar(8)  |  | PRI | |   |
| Year  | varchar(4)  |  | PRI | |   |
| NameType  | char(2) |  | PRI | |   |
| NameClass | char(1) | YES  | MUL | NULL|   |
| NameAP| char(1) | YES  | | NULL|   |
| Ncount| int(11) |  | PRI | 0   |   |
| LastName  | varchar(80) | YES  | MUL | NULL|   |
| FirstName | varchar(60) | YES  | MUL | NULL|   |
| TypeofName| varchar(20) | YES  | | NULL|   |
| PAName| varchar(80) | YES  | MUL | NULL|   |
| SoundKeyFirst | varchar(12) | YES  | MUL | NULL|   |
| SoundKeyLast  | varchar(12) | YES  | MUL | NULL|   |
| RecDate   | varchar(8)  |  | MUL | |   |
| InstCode  | varchar(10) |  | MUL | |   |
| IndexType | varchar(4)  |  | | |   |
| XrefGroup | varchar(8)  |  | | |   |
+---+-+--+-+-+---+
16 rows in set (0.01 sec)

mysql> explain SELECT LastName,FirstName,InstNum,NameType FROM
TBL_AllNames WHERE  PAName LIKE '%dawn%' ORDER BY RecDate;
+--+---+---+-+-+--+-
+-+
| table| type  | possible_keys | key | key_len | ref  | rows
| Extra   |
+--+---+---+-+-+--+-
+-+
| TBL_AllNames | index | NULL  | RecDate |   8 | NULL |
6083145 | Using where |
+--+---+---+-+-+--+-
+-+
1 row in set (0.00 sec)

mysql> SELECT PAName,LastName,FirstName FROM TBL_AllNames WHERE  PAName
LIKE '%dawn%' ORDER BY RecDate limit 5;
+--+-+---+
| PAName   | LastName| FirstName |
+--+-+---+
| STEELEDAWNCRESTINC   | STEELE DAWN CREST INC   |   |
| NEWDAWNAPARTMENTSLTD | NEW DAWN APARTMENTS LTD |   |
| NEWDAWNAPTSLTD   | NEW DAWN APTS LTD   |   |
| LEEDAWNEM| LEE | DAWNE M   |
| NEWDAWNAPARTMENTSLTD | NEW DAWN APARTMENTS LTD |   |
+--+-+---+
5 rows in set (3.12 sec) 
^ Without the limit of 5 returned 1200+ hits and took over 6 minutes to
run and was probably also cached the second time I ran it with the
limit.



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Conception - Tree - Recursivity -Address book - Query speed

2003-04-03 Thread Leif B. Kristensen
On Wed, 02 Apr 2003 16:53:44 +0200, Grégoire Dubois
<[EMAIL PROTECTED]> wrote:

>Creating the tree doesn't give me problem. Where I ask me some 
>questions, is the speed to get the whole tree from the database in a 
>recursive way.

I've made a PHP script doing genealogical descendancy charts that does
recursion on the application level. For an extreme example, look at
http://solumslekt.org/slekta/etterkommere.php?person=383> which
lists a very prolific family in which I've done extensive research.

I just ran it, and it lists 1100 spouses and descendants four levels
deep in a little over 5 seconds. The query is somewhat optimised, as I
check for the sex of the parent:

if ($sex == 'M')
$query = "select per_no, mother, sex from slekta 
where father = $p_no order by p_birth_date1";
elseif ($sex == 'F')
$query = "select per_no, father, sex from slekta 
where mother = $p_no order by p_birth_date1";

The table has per_no as primary key, and is indexed on (father, mother).

By the way, I wonder if there are other genealogy hackers here who'd
like to collaborate on an open-source project. 

regards,
-- 
Leif Biberg Kristensen 
http://solumslekt.org/
mail to leif at solumslekt dot org

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Conception - Tree - Recursivity -Address book - Query speed

2003-04-03 Thread Kevin Fries
I'm not sure Celko described all the queries in those articles (the url
I sent plus the followups in April and May that year.  You'll have to
hunt down those follow-ups.
But, for a much easier reference, look at Joe Celko's  _SQL For
Smarties_, where in one of the later chapters, he covers the nested set
hierarchy more completely.  I actually used that chapter when I did my
first implementation of that concept.  For me, I needed a hierarchy that
would stand up in any reasonable database.  It worked out very well,
even though I had to adapt the model to work across multiple tables.
The book is money well spent.

The following Usenet article is where I first was convince that this
would work for me.
http://groups.google.com/groups?q=celko+CONNECT+BY+from+Oracle+in+SQL-Se
rver%3F&hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=%230%23O0OiFBHA.1932%40tkmsftng
p04&rnum=4
That's a horrible URL.  If it doesn't work right, go to your favorite
usenet archive and look for :
celko "CONNECT BY from Oracle in SQL-Server?"

He posted a very nice article in July 2001 that explains a lot of the
concept, including 
How to get a complete subtree stom one node.  Reading the results is
slightly tricker, since a columnar result set can't obviously look like
a tree.

As for having to update a large portion of the tree to add or subtract
one node, yes.  That's necessary.  But for the cases I've dealt with,
this is business structure data, and it changes very rarely by
comparison to daily transactional information.  The payoff is that every
query against the tree is faster than it would have been under the
CONNECT-BY architecture particular to Oracle.  Oh, and in my case, and
in what Celko describes, the update to move the LFT/RGT values to make
room for a new node can be done in one update statement, even in mysql.
So it's tricky to write, but very simple to operate.

Kevin

-Original Message-
From: Grégoire Dubois [mailto:[EMAIL PROTECTED] 
Sent: Thursday, April 03, 2003 6:28 AM
To: Kevin Fries
Cc: [EMAIL PROTECTED]
Subject: Re: Conception - Tree - Recursivity -Address book - Query speed


Thank you very much for your answer.

I read the article.
I've understood that this nested tree model (with the "worm", I liked 
the image), permits to retrive very easiely the path of a node, and some

other things.
But : I've understood that if we want to modify the tree, all left and 
right numbers of all the nodes of the tree have to be changed. Why 
not... But biggest problem, I don't see how it permits me to get the 
whole tree in a recursive maner. I haven't seen a request that would 
give me something like that :
(root(node1, node2(node3, node4)))
corresponding to :
  root
   ||
 node1  node2
   ||
node3node4   

Kevin Fries wrote:

>My advice is to consider an alternate storage approach. Instead of 
>storing parent id's, store the trees using the Nested Set model.
> http://www.dbmsmag.com/9603d06.html
>
>About halfway down the article he presents a model where you never 
>store the parent id, but you do store two columns (LEFT and RIGHT) 
>which represent the boundaries of the child records, and their 
>respective LEFT and RIGHT information.
>
>That sounds complicated, and it is. But it is an ANSI-compliant 
>solution, and it can be used to get terrific results.
>
>His article is followed up in the april and may issues, as well, also 
>online.
>
>In addition, you can follow up his article by looking in Usenet groups 
>for the term "Nested Set Hierarchy" or "Nested Set Model".
>
>
>Good luck with it.
>Kevin
>
>-Original Message-
>From: Grégoire Dubois [mailto:[EMAIL PROTECTED]
>Sent: Wednesday, April 02, 2003 6:54 AM
>To: [EMAIL PROTECTED]
>Subject: Conception - Tree - Recursivity -Address book - Query speed
>
>
>Hello all,
>
>I am putting multiple "address book" (trees) into a MySQL database.
>These "address book" are made of "directories" and "persons".
>
>It gives something like that for the tables :
>   
>Directory
>--
>ID
>Name
>ID_father (the reference to the father directory)
>
>
>Person
>-
>ID
>Name
>ID_father (the reference to the father directory)
>
>
>Creating the tree doesn't give me problem. Where I ask me some
>questions, is the speed to get the whole tree from the database in a 
>recursive way.
>The way the tree is defined in the database, I will be obliged to make 
>for each node a SELECT to know all its childr

Re: Conception - Tree - Recursivity -Address book - Query speed

2003-04-03 Thread Grégoire Dubois
Thank you very much for your answer.

I read the article.
I've understood that this nested tree model (with the "worm", I liked 
the image), permits to retrive very easiely the path of a node, and some 
other things.
But : I've understood that if we want to modify the tree, all left and 
right numbers of all the nodes of the tree have to be changed. Why 
not... But biggest problem, I don't see how it permits me to get the 
whole tree in a recursive maner. I haven't seen a request that would 
give me something like that :
   (root(node1, node2(node3, node4)))
corresponding to :
 root
  ||
node1  node2
  ||
   node3node4   

Kevin Fries wrote:

My advice is to consider an alternate storage approach.
Instead of storing parent id's, store the trees using the Nested Set
model.
http://www.dbmsmag.com/9603d06.html
About halfway down the article he presents a model where you never store
the parent id, but you do store two columns (LEFT and RIGHT) which
represent the boundaries of the child records, and their respective LEFT
and RIGHT information.
That sounds complicated, and it is. But it is an ANSI-compliant
solution, and it can be used to get terrific results.
His article is followed up in the april and may issues, as well, also
online.
In addition, you can follow up his article by looking in Usenet groups
for the term "Nested Set Hierarchy" or "Nested Set Model".
Good luck with it.
Kevin
-Original Message-
From: Grégoire Dubois [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 02, 2003 6:54 AM
To: [EMAIL PROTECTED]
Subject: Conception - Tree - Recursivity -Address book - Query speed

Hello all,

I am putting multiple "address book" (trees) into a MySQL database. 
These "address book" are made of "directories" and "persons".

It gives something like that for the tables :
  
   Directory
   --
   ID
   Name
   ID_father (the reference to the father directory)

   Person
   -
   ID
   Name
   ID_father (the reference to the father directory)
Creating the tree doesn't give me problem. Where I ask me some 
questions, is the speed to get the whole tree from the database in a 
recursive way.
The way the tree is defined in the database, I will be obliged to make 
for each node a SELECT to know all its children, then do the same for 
the children, etc... Each SELECT will give me a ResultSet. And from each

ResultSet I will know if the node has got childrens, and if yes, I'll 
have to make new requests to the database (one for each child), etc...

As I will have a lot of clients that will ask for those trees, I am 
wondering if this technique won't take too much ressources and be too
slow.

I know that Oracle uses CONNECT BY PRIOR for this type of problem. But, 
as this functionnality isn't available yet in MySQL, I'm asking you for 
the best solution you would think of.

Thank you very much for your help.

PS : Any website talking about database conception would be very 
helpfull to me if you know some.



 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


RE: Conception - Tree - Recursivity -Address book - Query speed

2003-04-02 Thread Kevin Fries
My advice is to consider an alternate storage approach.
Instead of storing parent id's, store the trees using the Nested Set
model.
 http://www.dbmsmag.com/9603d06.html

About halfway down the article he presents a model where you never store
the parent id, but you do store two columns (LEFT and RIGHT) which
represent the boundaries of the child records, and their respective LEFT
and RIGHT information.

That sounds complicated, and it is. But it is an ANSI-compliant
solution, and it can be used to get terrific results.

His article is followed up in the april and may issues, as well, also
online.

In addition, you can follow up his article by looking in Usenet groups
for the term "Nested Set Hierarchy" or "Nested Set Model".


Good luck with it.
Kevin

-Original Message-
From: Grégoire Dubois [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 02, 2003 6:54 AM
To: [EMAIL PROTECTED]
Subject: Conception - Tree - Recursivity -Address book - Query speed


Hello all,

I am putting multiple "address book" (trees) into a MySQL database. 
These "address book" are made of "directories" and "persons".

It gives something like that for the tables :
   
Directory
--
ID
Name
ID_father (the reference to the father directory)


Person
-
ID
Name
ID_father (the reference to the father directory)


Creating the tree doesn't give me problem. Where I ask me some 
questions, is the speed to get the whole tree from the database in a 
recursive way.
The way the tree is defined in the database, I will be obliged to make 
for each node a SELECT to know all its children, then do the same for 
the children, etc... Each SELECT will give me a ResultSet. And from each

ResultSet I will know if the node has got childrens, and if yes, I'll 
have to make new requests to the database (one for each child), etc...

As I will have a lot of clients that will ask for those trees, I am 
wondering if this technique won't take too much ressources and be too
slow.

I know that Oracle uses CONNECT BY PRIOR for this type of problem. But, 
as this functionnality isn't available yet in MySQL, I'm asking you for 
the best solution you would think of.

Thank you very much for your help.

PS : Any website talking about database conception would be very 
helpfull to me if you know some.



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Conception - Tree - Recursivity -Address book - Query speed

2003-04-02 Thread Grégoire Dubois
Hello all,

I am putting multiple "address book" (trees) into a MySQL database. 
These "address book" are made of "directories" and "persons".

It gives something like that for the tables :
  
   Directory
   --
   ID
   Name
   ID_father (the reference to the father directory)

   Person
   -
   ID
   Name
   ID_father (the reference to the father directory)
Creating the tree doesn't give me problem. Where I ask me some 
questions, is the speed to get the whole tree from the database in a 
recursive way.
The way the tree is defined in the database, I will be obliged to make 
for each node a SELECT to know all its children, then do the same for 
the children, etc... Each SELECT will give me a ResultSet. And from each 
ResultSet I will know if the node has got childrens, and if yes, I'll 
have to make new requests to the database (one for each child), etc...

As I will have a lot of clients that will ask for those trees, I am 
wondering if this technique won't take too much ressources and be too slow.

I know that Oracle uses CONNECT BY PRIOR for this type of problem. But, 
as this functionnality isn't available yet in MySQL, I'm asking you for 
the best solution you would think of.

Thank you very much for your help.

PS : Any website talking about database conception would be very 
helpfull to me if you know some.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


problems with query speed when joining several tables

2002-12-19 Thread Joseph Dietz

platform: windows 2000 pro, mysql default table type myIsam, non-binary 
distribution (install version).

I am still getting very slow query results when I join multiple tables 
together. I have been trying to figure this out for days and am at a loss. I 
added an index to my cross reference tables as suggested and things worked 
very well for the first query but not the second one. Someone thought it 
might be a bug in the query optimizer. My tables contain only two records 
and are structured like this:


pk_organism_id---pk_media_idfk_media_id,fk_author_id---pk_authors_id
fk_organism_id
fk_technique_id
fk_admin_id


pk_technique_id
pk_admin_id

   .
   .
   .
   .
   .

CREATE TABLE Medias (
pk_media_id bigint unsigned not null auto_increment,
fk_organism_id int unsigned not null,
fk_technique_id int unsigned not null,
fk_admin_id int unsigned not null,
file_name_in_db char(50),
file_name char(50),
file_name_altered char(50),
file_data longblob,
creation_date date,
upload_date date,
download_date date,
modified_date date,
creation_time time,
upload_time time,
update_timestamp timestamp(14),
download_time time,
modified_time time,
pixel_size int unsigned,
file_size int unsigned,
format_in_db char(15),
format char(15),
magnification int unsigned,
label bool,
release bool,
comment varchar(250),
keyword varchar(150),
field_status char(50),
PRIMARY KEY (pk_media_id),
FOREIGN KEY(fk_organism_id) REFERENCES Organisms(pk_organism_id),
FOREIGN KEY(fk_technique_id) REFERENCES Techniques(pk_technique_id),
FOREIGN KEY(fk_admin_id) REFERENCES Admin(pk_admin_id));


CREATE TABLE MediaAuthors (
fk_media_id bigint unsigned not null,
fk_author_id int unsigned not null,
UNIQUE (fk_media_id, fk_author_id),
KEY (fk_author_id),
FOREIGN KEY(fk_media_id) REFERENCES Medias(pk_media_id),
FOREIGN KEY(fk_author_id) REFERENCES Authors(pk_author_id));

CREATE TABLE Authors (
pk_author_id int unsigned not null auto_increment,
admin_username char(50),
last_name char(50),
first_name char(50),
organization varchar(100),
title varchar(100),
position varchar(100),
email char(50),
email_1 char(50),
email_2 char(50),
phone_number char(50),
phone_number_1 char(50),
phone_number_2 char(50),
address varchar(100),
city varchar(100),
province varchar(100),
country varchar(100),
postal_code char(50),
field_status char(50),
PRIMARY KEY(pk_author_id));

mysql> select * from organisms \G;
*** 1. row ***
pk_organism_id: 1
genus: test genus
  species: test species
  common_name: test common name
   empire: test empire
  kingdom: test kingdom
   complexity: test nosebleed
  comment: test comment
  keyword: test keyword
 field_status: active
*** 2. row ***
pk_organism_id: 42
genus: test genus 2
  species: test species 2
  common_name: test common name 2
   empire: Eukaryotic
  kingdom: Plant
   complexity: Multicellular
  comment: test comment 2
  keyword: test keyword 2
 field_status: active
2 rows in set (0.00 sec)
 .
 .
 .
 .
 .


// this works well //
EXPLAIN SELECT DISTINCT Medias.pk_media_id, Organisms.common_name,
Tissues.type As tiss_type, Cells.type As cell_type, Organelles.type As 
org_type,
Macromolecules.type As macro_type, Authors.last_name,
Authors.organization, Medias.file_name, Medias.format, Medias.label
FROM Macromolecules
INNER JOIN MediaMacromolecules ON fk_macromolecule_id = pk_macromolecule_id
INNER JOIN Medias ON pk_media_id = MediaMacromolecules.fk_media_id
INNER JOIN MediaAuthors ON pk_media_id = MediaAuthors.fk_media_id
INNER JOIN Authors ON fk_author_id = pk_author_id
INNER JOIN MediaTissues ON pk_media_id = MediaTissues.fk_media_id
INNER JOIN Tissues ON fk_tissue_id = pk_tissue_id
INNER JOIN MediaCells ON pk_media_id = MediaCells.fk_media_id
INNER JOIN Cells ON fk_cell_id = pk_cell_id
INNER JOIN MediaOrganelles ON pk_media_id = MediaOrganelles.fk_media_id
INNER JOIN Organelles ON fk_organelle_id = pk_organelle_id
INNER JOIN Organisms ON fk_organism_id = pk_organism_id
INNER JOIN Techniques ON fk_technique_id = pk_technique_id
INNER JOIN Admin ON Medias.fk_admin_id = pk_admin_id
WHERE pk_macromolecule_id = 1;


*** 1. row ***
   table: Macromolecules
type: const
possible_keys: PRIMARY,pk_macromolecule_id
 key: PRIMARY
 key_len: 8
 ref: const
rows: 1
   Extra: Using temporary
*** 2. row ***
   table: MediaMacromolecules
type: ref
possible_keys: fk_media_id,fk_macromolecule_id
 key: fk_macromolecule_id
 key_len: 8
 ref: const

Re: JOINS: MySQL Query Speed

2002-05-28 Thread Roger Baklund

* [EMAIL PROTECTED]
>
> I have a question on how MySQL JOIN has effect on query (search)
> performance:
>
> Our database consists of about 250.000 datasets (rows).
> Right now, all of the 150 columns are in one big table.
>
> However, we do have certain columns that are empty for most rows (for
> example information of the status of pictures of customer).
> Out of the 250.000, only 20.000 have information on photo-status.
>
> The idea is to put the columns (amount: 5) about the photo-status in a
> seperate table to not blow up the database (to normalize the database ?).

This is called database normalization, yes.

> A search over, lets say, 15 columns, that includes one columns out of the
> photo-status-columns would then have to JOIN these two tables.
> Does that lead to performance drop ?

Probably not, because the amount of data to process is smaller (Your 230.000
x 5 empty photo-status columns are gone) and the join lookup is (should be)
indexed.

> Or is maybe even faster to search with a JOIN, because we only
> have 20.000
> rows in that seperate photo-status-table ?

Eh... right. It is probably faster with the join. :)

> What would probably happen if we did this with, lets say, 5 other
> data-sets
> (we would the have 7 tables connected through JOIN) ?

The query would probably be even faster. :)

There is an upper limit on how many joins you can do in a single statement,
but this is rarely a problem. (32 in version 3.22 and 64 in version 3.23,
according to crashme.)

> I hope someone can help me with performance issues about JOIN.

As long as all joined tables have an index (normally you join using the
primary key, which is a perfectly good index, but any column/combination of
columns could be used, just make sure the used columns are indexed!) there
are (afaik) no performance issues with JOIN. On the contrary: most mysql
databases (of some size) will be faster when normalized.

--
Roger


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




JOINS: MySQL Query Speed

2002-05-28 Thread Florian Wilken

Hello

I have a question on how MySQL JOIN has effect on query (search) performance:

Our database consists of about 250.000 datasets (rows).
Right now, all of the 150 columns are in one big table.

However, we do have certain columns that are empty for most rows (for 
example information of the status of pictures of customer).
Out of the 250.000, only 20.000 have information on photo-status.

The idea is to put the columns (amount: 5) about the photo-status in a 
seperate table to not blow up the database (to normalize the database ?).

A search over, lets say, 15 columns, that includes one columns out of the 
photo-status-columns would then have to JOIN these two tables.
Does that lead to performance drop ?
Or is maybe even faster to search with a JOIN, because we only have 20.000 
rows in that seperate photo-status-table ?

What would probably happen if we did this with, lets say, 5 other data-sets 
(we would the have 7 tables connected through JOIN) ?

I hope someone can help me with performance issues about JOIN.

Thanks.

Flo 


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




JOINS: MySQL Query Speed

2002-05-28 Thread Florian Wilken

Hello

I have a question on how MySQL JOIN has effect on query (search) performance:

Our database consists of about 250.000 datasets (rows).
Right now, all of the 150 columns are in one big table.

However, we do have certain columns that are empty for most rows (for 
example information of the status of pictures of customer).
Out of the 250.000, only 20.000 have information on photo-status.

The idea is to put the columns (amount: 5) about the photo-status in a 
seperate table to not blow up the database (to normalize the database ?).

A search over, lets say, 15 columns, that includes one columns out of the 
photo-status-columns would then have to JOIN these two tables.
Does that lead to performance drop ?
Or is maybe even faster to search with a JOIN, because we only have 20.000 
rows in that seperate photo-status-table ?

What would probably happen if we did this with, lets say, 5 other data-sets 
(we would the have 7 tables connected through JOIN) ?

I hope someone can help me with performance issues about JOIN.

Thanks.

Flo 


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




JOINS: MySQL Query Speed

2002-05-28 Thread Florian Wilken

Hello

I have a question on how MySQL JOIN has effect on query (search) performance:

Our database consists of about 250.000 datasets (rows).
Right now, all of the 150 columns are in one big table.

However, we do have certain columns that are empty for most rows (for 
example information of the status of pictures of customer).
Out of the 250.000, only 20.000 have information on photo-status.

The idea is to put the columns (amount: 5) about the photo-status in a 
seperate table to not blow up the database (to normalize the database ?).

A search over, lets say, 15 columns, that includes one columns out of the 
photo-status-columns would then have to JOIN these two tables.
Does that lead to performance drop ?
Or is maybe even faster to search with a JOIN, because we only have 20.000 
rows in that seperate photo-status-table ?

What would probably happen if we did this with, lets say, 5 other data-sets 
(we would the have 7 tables connected through JOIN) ?

I hope someone can help me with performance issues about JOIN.

Thanks.

Flo 


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




JOINS: MySQL Query Speed

2002-05-28 Thread Florian Wilken

Hello

I have a question on how MySQL JOIN has effect on query (search) performance:

Our database consists of about 250.000 datasets (rows).
Right now, all of the 150 columns are in one big table.

However, we do have certain columns that are empty for most rows (for 
example information of the status of pictures of customer).
Out of the 250.000, only 20.000 have information on photo-status.

The idea is to put the columns (amount: 5) about the photo-status in a 
seperate table to not blow up the database (to normalize the database ?).

A search over, lets say, 15 columns, that includes one columns out of the 
photo-status-columns would then have to JOIN these two tables.
Does that lead to performance drop ?
Or is maybe even faster to search with a JOIN, because we only have 20.000 
rows in that seperate photo-status-table ?

What would probably happen if we did this with, lets say, 5 other data-sets 
(we would the have 7 tables connected through JOIN) ?

I hope someone can help me with performance issues about JOIN.

Thanks.

Flo 


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: MySQL Query Speed

2002-05-23 Thread Simon Green

Sounds about right.
How have you got your my.cnf set?
Have you got RAID on you HDD?
Simon

-Original Message-
From: Florian Wilken [mailto:[EMAIL PROTECTED]]
Sent: 23 May 2002 10:17
To: [EMAIL PROTECTED]
Subject: MySQL Query Speed


Hello

our database consists of one table with approx. 150 columns.
We have a query going over 11 columns (most tinyint, int and some varchar)
Out of 217237 rows the query found 56 matches.
Without Indexing, the query took 2,55 seconds.
With Indexing, the query took 0,04 seconds.

The database and query is currently being tested on the following computer:
AMD Athlon(TM) XP1700+ (1500MHz), 768MB RAM
MySQL 3.23.41 unter SuSE Linux 7.3 (Kernel 2.4.10-4GB)

Is the speed of the query (2,55 sec, respectively 0,04 seconds) reasonable ?
With the Index, should the query even be faster than 0,04 seconds ?

Thanks for your Help.

Flo


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




MySQL Query Speed

2002-05-23 Thread Florian Wilken

Hello

our database consists of one table with approx. 150 columns.
We have a query going over 11 columns (most tinyint, int and some varchar)
Out of 217237 rows the query found 56 matches.
Without Indexing, the query took 2,55 seconds.
With Indexing, the query took 0,04 seconds.

The database and query is currently being tested on the following computer:
AMD Athlon(TM) XP1700+ (1500MHz), 768MB RAM
MySQL 3.23.41 unter SuSE Linux 7.3 (Kernel 2.4.10-4GB)

Is the speed of the query (2,55 sec, respectively 0,04 seconds) reasonable ?
With the Index, should the query even be faster than 0,04 seconds ?

Thanks for your Help.

Flo


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Query Speed

2002-02-13 Thread RBRoa

Yeah,

My tables structure

+---+---+--+-+-+
+
| id| int(11)   |  | PRI | NULL| auto_increment
|
| edate | date  | YES  | MUL | NULL|
|
| etime | int(5) unsigned zerofill  | YES  | | NULL|
|
| lec   | varchar(5)| YES  | | NULL|
|
| trunk | varchar(5)| YES  | | NULL|
|
| bound | varchar(5)| YES  | | NULL|
|
| att   | int(10) unsigned zerofill | YES  | | NULL|
|
| suc   | int(10) unsigned zerofill | YES  | | NULL|
|
| ovf   | int(10) unsigned zerofill | YES  | | NULL|
|
| fir   | int(10) unsigned zerofill | YES  | | NULL|
|
| alt   | int(10) unsigned zerofill | YES  | | NULL|
|
| ter   | int(10) unsigned zerofill | YES  | | NULL|
|
| rng   | int(10) unsigned zerofill | YES  | | NULL|
|
| tot   | float unsigned zerofill   | YES  | | NULL|
|
| cal   | float unsigned zerofill   | YES  | | NULL|
|
| mtu   | float unsigned zerofill   | YES  | | NULL|
|
| eqp   | int(10) unsigned zerofill | YES  | | NULL|
|
| srv   | int(10) unsigned zerofill | YES  | | NULL|
|
| loc   | int(10) unsigned zerofill | YES  | | NULL|
|
| flt   | int(10) unsigned zerofill | YES  | | NULL|
|
+---+---+--+-+-+
+
20 rows in set (0.00 sec)

I indexed 


Edate
Etime
Lec
Trunk
Bound
Att
Suc
Cal

The table above is filled with 24 hour data every day...so it grows up to a
million since January...
Before I got the speedbut now..it slows up if you query the maximum
value of every day in one month coverage...so how much more if get the
maximum of every month to cover a one year period...

How do I make a query on this?...I tried the group by...still I stuck on 6
minutes..



R.B.Roa 
Traffic Management Engineer
PhilCom Corporation
Tel.No. (088) 858-1028
Mobile No. (0919) 30856267


-Original Message-
From:   Anvar Hussain K.M. [SMTP:[EMAIL PROTECTED]]
Sent:   Thursday, February 14, 2002 11:38 AM
To: [EMAIL PROTECTED]
Subject:Re: Query Speed

Hi,

Your mail does not tell about the table structure or the index
available.
But try this:
Select truncate(datecol/100,0), max(colyouwantmaxof) from tablename
  group by (truncate(datecol/100,0);

Here it is assumed datecol is of date data type.  If it is datetime
you will
have to divide by 1 instead of 100.  A still better solution
is to
add another colum to the table where you can keep the yearmonth of
the date column as an integer and create an index on it.

Anvar.


At 09:40 AM 14/02/2002 +0800, you wrote:
>Hi,
>
>I got stuck on queryabout a million rows in one table...it
slows up my
>server...
>
>How can I improve my server on managing a request like this?
>
>I manually loop the monthly data query...I got a database that is
full of
>data daily...so all I need to get the maximum of every month is to
query by
>loop than to use the SELECT MAX() FROM FOO...is there any way to
improve a
>GROUP BY query..i know it can be done in group by...I done it
before...
>
>Anybody done this beforei need your help
>
>R.B.Roa
>Traffic Management Engineer
>PhilCom Corporation
>Tel.No. (088) 858-1028
>Mobile No. (0919) 30856267
>
>

>-
>Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
>
>To request this thread, e-mail <[EMAIL PROTECTED]>
>To unsubscribe, e-mail 
><[EMAIL PROTECTED]>
>Trouble unsubscribing? Try:
http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Query Speed

2002-02-13 Thread RBRoa

Hi,

I got stuck on queryabout a million rows in one table...it slows up my
server...

How can I improve my server on managing a request like this?

I manually loop the monthly data query...I got a database that is full of
data daily...so all I need to get the maximum of every month is to query by
loop than to use the SELECT MAX() FROM FOO...is there any way to improve a
GROUP BY query..i know it can be done in group by...I done it before...

Anybody done this beforei need your help

R.B.Roa 
Traffic Management Engineer
PhilCom Corporation
Tel.No. (088) 858-1028
Mobile No. (0919) 30856267


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Query Speed problem...

2001-12-28 Thread Matthew Walker

I just added a fulltext index to a table, and MATCH queries on the table
are timing out. Can anyone offer any insight on this?

The table has 2000 rows of data in it, and phpMyAdmin reports it as
having a total size of 244,228 bytes.

Table Structure:
CREATE TABLE Customers (
  ID mediumint(9) NOT NULL auto_increment,
  FirstName varchar(255) NOT NULL default '',
  LastName varchar(255) NOT NULL default '',
  Street1 varchar(255) NOT NULL default '',
  Street2 varchar(255) NOT NULL default '',
  City varchar(255) NOT NULL default '',
  State varchar(50) NOT NULL default 'Utah',
  Zipcode varchar(15) NOT NULL default '',
  Phone1 varchar(50) NOT NULL default '',
  Phone2 varchar(50) NOT NULL default '',
  Fax varchar(50) NOT NULL default '',
  Email varchar(255) NOT NULL default '',
  Type varchar(255) NOT NULL default 'Customer',
  Memo varchar(255) NOT NULL default '',
  PRIMARY KEY  (ID),
  FULLTEXT KEY Name (FirstName,LastName)
) TYPE=MyISAM;

Query:
SELECT * FROM Customers WHERE MATCH (FirstName,LastName) AGAINST
('walker')

Matthew Walker
Ecommerce Project Manager
Mountain Top Herbs



---
Outgoing mail is certified Virus Free.
Checked by AVG anti-virus system (http://www.grisoft.com).
Version: 6.0.310 / Virus Database: 171 - Release Date: 12/19/2001
 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: kaka's query speed is slow

2001-11-29 Thread Wilfred Li, Ph.D.

Hi, Philip,

Here is the query as you suggested:

SELECT ox.ensembl_id, x.dbprimary_id, x.display_id, db.db_name
FROM objectXref ox, Xref x, externalDB db
WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508')
AND x.xrefid = ox.xrefid
AND db.externalDBId = x.externalDBId;

>
> Maybe "WHERE ... IN ('7263', '7318', '8991', '17508') "
>

This change descreased the time between 20.21 sec to 23.71 sec. But repeating the 
original
query (no single quote) only used 21.63 sec. The differences may be due to
server load at execution time. So string or number datatype is not the cause.

>
>   EXPLAIN SELECT 
>

Here is the explain result:
+---++---+-+-+++-+
| table | type   | possible_keys | key | key_len | ref| 
|rows   | Extra   |
+---++---+-+-+++-+
| ox| index  | NULL  | ensembl_object_type |  45 | NULL   | 
|133764 | where used; Using index |
| x | eq_ref | PRIMARY   | PRIMARY |   4 | ox.xrefId  |
|  1 | |
| db| eq_ref | PRIMARY   | PRIMARY |   4 | x.externalDBId |
|  1 | |
+---++---+-+-+++-+
3 rows in set (0.17 sec)


mysql> show index from objectXref;
+++-+--+-+---+-+--++-+
| Table  | Non_unique | Key_name| Seq_in_index | Column_name | 
|Collation | Cardinality | Sub_part | Packed | Comment |
+++-+--+-+---+-+--++-+
| objectXref |  0 | ensembl_object_type |1 | ensembl_object_type | 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  0 | ensembl_object_type |2 | ensembl_id  | 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  0 | ensembl_object_type |3 | xrefId  | 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  1 | xref_index  |1 | objectxrefId| 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  1 | xref_index  |2 | xrefId  | 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  1 | xref_index  |3 | ensembl_object_type | 
|A |NULL | NULL | NULL   | NULL|
| objectXref |  1 | xref_index  |4 | ensembl_id  | 
|A |NULL | NULL | NULL   | NULL|
+++-+--+-+---+-+--++-+
7 rows in set (0.18 sec)

It seems that mysql is not using the index xref_index, but using ensembl_object_type.

I tried to use hint (mysql version 3.23.25 beta)

SELECT ox.ensembl_id, x.dbprimary_id, x.display_id, db.db_name
FROM Xref x, externalDB db, objectXref ox
USING INDEX (xref_index)
WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508')
AND x.xrefid = ox.xrefid
AND db.externalDBId = x.externalDBId;

ERROR 1064: You have an error in your SQL syntax near 'USING INDEX (xref_index)
WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508'' at line 3

Anyone has tried using index hint before?

Thanks

Wilfred


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: kaka's query speed is slow

2001-11-29 Thread Wilfred Li, Ph.D.

Hi,

Upon closer examination of the query, it seems that both
ensembl_object_type, and xref_index does not have ensembl_id in the 1st
position for the index. So the hints would not work any way. I would still
appreciate if someone points out the syntax error I got.

Looking at the new schemaCore table at
http://www.ensembl.org/Docs/wiki/html/EnsemblDocs/SchemaCore.html
ensembl_id is the referred by transcript_stable_id:stable_id, etc. This
points to the problem of incompatible format between ensembl_id (number
e.g. 7263)
and stable_id (string e.g. ENSX000...).

Will this be resolved?

Thanks

Wilfred



On Thu, 29 Nov 2001, Wilfred Li, Ph.D. wrote:

> Hi, Philip,
>
> Here is the query as you suggested:
>
> SELECT ox.ensembl_id, x.dbprimary_id, x.display_id, db.db_name
> FROM objectXref ox, Xref x, externalDB db
> WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508')
> AND x.xrefid = ox.xrefid
> AND db.externalDBId = x.externalDBId;
>
> >
> > Maybe "WHERE ... IN ('7263', '7318', '8991', '17508') "
> >
>
> This change descreased the time between 20.21 sec to 23.71 sec. But repeating the 
>original
> query (no single quote) only used 21.63 sec. The differences may be due to
> server load at execution time. So string or number datatype is not the cause.
>
> >
> >   EXPLAIN SELECT 
> >
>
> Here is the explain result:
> 
>+---++---+-+-+++-+
> | table | type   | possible_keys | key | key_len | ref| 
>rows   | Extra   |
> 
>+---++---+-+-+++-+
> | ox| index  | NULL  | ensembl_object_type |  45 | NULL   | 
>133764 | where used; Using index |
> | x | eq_ref | PRIMARY   | PRIMARY |   4 | ox.xrefId  |  
>1 | |
> | db| eq_ref | PRIMARY   | PRIMARY |   4 | x.externalDBId |  
>1 | |
> 
>+---++---+-+-+++-+
> 3 rows in set (0.17 sec)
>
>
> mysql> show index from objectXref;
> 
>+++-+--+-+---+-+--++-+
> | Table  | Non_unique | Key_name| Seq_in_index | Column_name 
>| Collation | Cardinality | Sub_part | Packed | Comment |
> 
>+++-+--+-+---+-+--++-+
> | objectXref |  0 | ensembl_object_type |1 | ensembl_object_type 
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  0 | ensembl_object_type |2 | ensembl_id  
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  0 | ensembl_object_type |3 | xrefId  
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  1 | xref_index  |1 | objectxrefId
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  1 | xref_index  |2 | xrefId  
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  1 | xref_index  |3 | ensembl_object_type 
>| A |NULL | NULL | NULL   | NULL|
> | objectXref |  1 | xref_index  |4 | ensembl_id  
>| A |NULL | NULL | NULL   | NULL|
> 
>+++-+--+-+---+-+--++-+
> 7 rows in set (0.18 sec)
>
> It seems that mysql is not using the index xref_index, but using ensembl_object_type.
>
> I tried to use hint (mysql version 3.23.25 beta)
>
> SELECT ox.ensembl_id, x.dbprimary_id, x.display_id, db.db_name
> FROM Xref x, externalDB db, objectXref ox
> USING INDEX (xref_index)
> WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508')
> AND x.xrefid = ox.xrefid
> AND db.externalDBId = x.externalDBId;
>
> ERROR 1064: You have an error in your SQL syntax near 'USING INDEX (xref_index)
> WHERE ox.ensembl_id IN ('7263', '7318', '8991', '17508'' at line 3
>
> Anyone has tried using index hint before?
>
> Thanks
>
> Wilfred
>



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Query speed seems wrong...

2001-11-20 Thread Anthony R. J. Ball


  key buffer is about 8M 
key_buffer_size | 8388600

I just tried bumping my settings up to these that I found in the
manual...

safe_mysqld -O key_buffer=64M -O table_cache=256 \
-O sort_buffer=4M -O record_buffer=1M &

It shaved a second off... 2.29, and later call took
only .88 seconds once it was cached...

This is running on a solaris box with... I believe 16
processors and 6 gigs of ram...

Although... it is by no means dedicated to MySQL :)


> How large is your key buffer?
> 
> Jeremy
> -- 
> Jeremy D. Zawodny, <[EMAIL PROTECTED]>
> Technical Yahoo - Yahoo Finance
> Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936
> 
> MySQL 3.23.41-max: up 74 days, processed 1,610,499,363 queries (249/sec. avg)
> 
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
> 
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
> 

-- 
 ___  __  ____  _  _  _  _     
/ __)(  )(  )  /__\( \/ )( ___)  ( \( )( ___)(_  _)
\__ \ )(__)(  /(__)\\  /  )__))  (  )__)   )(  
(___/(__)(__)(__)\/  ()()(_)\_)() (__) 
Life is fragile, handle with prayer.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Query speed seems wrong...

2001-11-19 Thread Jeremy Zawodny

On Mon, Nov 19, 2001 at 03:29:26PM -0500, Anthony R. J. Ball wrote:
> 
> 3.23.41 on Solaris
> 
>   I have an indexed table of cusips (9 character identifiers)
> which I am comparing against a lookup tables of over
> 1,000,000 securities, also keyed by cusip, both fields are
> char(9) fields.

How large is your key buffer?

Jeremy
-- 
Jeremy D. Zawodny, <[EMAIL PROTECTED]>
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936

MySQL 3.23.41-max: up 74 days, processed 1,610,499,363 queries (249/sec. avg)

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Query speed seems wrong...

2001-11-19 Thread Anthony R. J. Ball


3.23.41 on Solaris

  I have an indexed table of cusips (9 character identifiers)
which I am comparing against a lookup tables of over
1,000,000 securities, also keyed by cusip, both fields are
char(9) fields.

  My query is taking over 3 seconds, which may be the best I
can do, but I'm hoping I am missing something. Also, unless
I use STRAIGHT_JOIN, it takes MUCH longer, since the optimizer
is starting with the 1,000,000 record table (which is a MERGE 
table, BTW).

  The explains look like:
mysql> explain select lid from TEST_1006199805 AS A,
  pool_lookup.lkup AS L WHERE A.cusip = L.cusip;

+---++---+-+-+-+-+-+
| table | type   | possible_keys | key | key_len | ref | rows| Extra   
||
+---++---+-+-+-+-+-+
| L | ALL| cusip | NULL|NULL | NULL| 1296422 | 
||
| A | eq_ref | PRIMARY   | PRIMARY |   9 | L.cusip |   1 | Using index 
||
+---++---+-+-+-+-+-+

mysql> explain select lid from TEST_1006199805 AS A 
  STRAIGHT_JOIN pool_lookup.lkup AS L WHERE A.cusip = L.cusip;

+---+---+---+-+-+-+--+-+
| table | type  | possible_keys | key | key_len | ref | rows | Extra   |
+---+---+---+-+-+-+--+-+
| A | index | PRIMARY   | PRIMARY |   9 | NULL| 1995 | Using index |
| L | ref   | cusip | cusip   |   9 | A.cusip |  768 | |
+---+---+---+-+-+-+--+-+

   I'm not sure what the 768 rows in L is, since all but 4 of the
1995 records in A have matches L... though sometimes there will
be more than one match, but not more than 4 or so...

   Any suggestions on speeding this up?

-- 
 ___  __  ____  _  _  _  _     
/ __)(  )(  )  /__\( \/ )( ___)  ( \( )( ___)(_  _)
\__ \ )(__)(  /(__)\\  /  )__))  (  )__)   )(  
(___/(__)(__)(__)\/  ()()(_)\_)() (__) 
An attacker must vanquish, a defender need only survive.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Query speed

2001-11-01 Thread Paul DuBois

At 10:26 AM + 11/1/01, Leon Noble wrote:
>Hi All,
>
>Tried the following three statements and they are wither too slow or do not
>give me what I want. Basically what I want is to search for records for a
>whole month and display totals for that month for each individual day. The
>date field is indexed.

It might be indexed, but all your tests on the field involve values
that must be calculated from the date, which makes the index useless.
(Every row must be fetched so that the expressions can be evaluated
and tested.)

Since you apparently want dates in the month of August, 2001, try
using a WHERE clause of:

WHERE date >= '2001-08-01' AND date <= '2001-08-31'

or

WHERE date BETWEEN '2001-08-01' AND '2001-08-31'

Either of these involve no transformation on the date values and the index
can be used.  That may speed up your query.

>
>Tried..
>
>select count(num) as mycount from table_name where month(date) = 08 and
>year(date) = 2001 and action = 1 group by dayofmonth(date);
>
>this one takes too long
>
>
>As does this one.
>
>select dayofmonth(date) as mydate, count(num) as mycount from table_name
>where month(date)=08 and year(date)=2001 and action=1 group by
>dayofmonth(date);
>
>
>and this one just outputs the total figure for the month
>
>select dayofmonth(date) as mydate, count(num) as mycount from table_name
>where date='TO_DAYS(2001-08-01) - TO_DAYS(2001-08-31)' and action=1 group by
>dayofmonth(date);
>
>
>can anyone help to point me in the right direction.
>
>many thanks
>
>Leon.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: Query speed

2001-11-01 Thread Carl Troein


Leon Noble writes:

> select dayofmonth(date) as mydate, count(num) as mycount from table_name
> where date='TO_DAYS(2001-08-01) - TO_DAYS(2001-08-31)' and action=1 group by
> dayofmonth(date);

This query makes no sense at all. I don't think the date will
ever be equal to that string constant, unless it's the special
-00-00. Really, what you want to consider is all dates
between 2001-08-01 and 2001-08-31, right? The why not use
something like WHERE date BETWEEN '2001-08-01 AND '2001-08-31'?

//C

-- 
 Carl Troein - Círdan / Istari-PixelMagic - UIN 16353280
 [EMAIL PROTECTED] | http://pixelmagic.dyndns.org/~cirdan/
 Amiga user since '89, and damned proud of it too.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Query speed

2001-11-01 Thread Quentin Bennett

Hi,

Check the section about How MySQL Optimizes queries for you version - there
is a bit about how indices are not used when calling a function that may be
relevant.

Hope it helps

Quentin

-Original Message-
From: Leon Noble [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 1 November 2001 11:26 p.m.
To: [EMAIL PROTECTED]
Subject: Query speed


Hi All,

Tried the following three statements and they are wither too slow or do not
give me what I want. Basically what I want is to search for records for a
whole month and display totals for that month for each individual day. The
date field is indexed.

Tried..

select count(num) as mycount from table_name where month(date) = 08 and
year(date) = 2001 and action = 1 group by dayofmonth(date);

this one takes too long


As does this one.

select dayofmonth(date) as mydate, count(num) as mycount from table_name
where month(date)=08 and year(date)=2001 and action=1 group by
dayofmonth(date);


and this one just outputs the total figure for the month

select dayofmonth(date) as mydate, count(num) as mycount from table_name
where date='TO_DAYS(2001-08-01) - TO_DAYS(2001-08-31)' and action=1 group by
dayofmonth(date);


can anyone help to point me in the right direction.

many thanks


Leon.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php

The information contained in this email is privileged and confidential
and intended for the addressee only. If you are not the intended 
recipient, you are asked to respect that confidentiality and not 
disclose, copy or make use of its contents. If received in error 
you are asked to destroy this email and contact the sender immediately. 
Your assistance is appreciated.

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Query speed

2001-11-01 Thread Leon Noble

Hi All,

Tried the following three statements and they are wither too slow or do not
give me what I want. Basically what I want is to search for records for a
whole month and display totals for that month for each individual day. The
date field is indexed.

Tried..

select count(num) as mycount from table_name where month(date) = 08 and
year(date) = 2001 and action = 1 group by dayofmonth(date);

this one takes too long


As does this one.

select dayofmonth(date) as mydate, count(num) as mycount from table_name
where month(date)=08 and year(date)=2001 and action=1 group by
dayofmonth(date);


and this one just outputs the total figure for the month

select dayofmonth(date) as mydate, count(num) as mycount from table_name
where date='TO_DAYS(2001-08-01) - TO_DAYS(2001-08-31)' and action=1 group by
dayofmonth(date);


can anyone help to point me in the right direction.

many thanks


Leon.


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Query speed

2001-05-02 Thread Roger Karnouk

you are right
I tried specifying it to use the crcid index with "USE INDEX (crcindex)" in
the from clause but it made no difference. any ideas as to which indexes I
could create to speed up the second query.

Here is an explain on my test database which is significantly smaller:
mysql> explain select a.crcid, avg ( b.total/a.total ) as average
-> from server01_history a, server01_history b
-> where a.crcid = b.crcid
-> and a.tag = 100 and b.tag = 104
-> group by a.crcid;
+---+--+--+--+-+-+---+--
---+
| table | type | possible_keys| key  | key_len | ref | rows  |
Extra   |
+---+--+--+--+-+-+--
--+
| a | ALL  | PRIMARY,crcindex | NULL |NULL | NULL| 73665 |
where used; Using temporary |
| b | ref  | PRIMARY,crcindex | crcindex |   4 | a.crcid | 3 |
where used  |
+---+--+--+--+-+-+---+--
---+
2 rows in set (0.00 sec)

Roger kanrouk
-Original Message-
From: Braxton Robbason [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 11:55 AM
To: Roger Karnouk; [EMAIL PROTECTED]
Subject: RE: Query speed


seems to me that the first query uses your primary key index. Since you have
specified qualifications on crcid and tag in both aliases, it will resolve
to a small number of rows in each alias table.  The second query will join
your aliases on the crcid index, and then the tag qualifications will
resolve to a larger number of rows.  A way to verify this is to run:


select count(*) as rcount
from server01_history a, server01_history b
where a.day = b.day and a.crcid = 24 and a.tag = 100
and b.crcid = 24 and b.tag = 104 ;

and
select count(*) as rcount
from server01_history a, server01_history b
where a.crcid = b.crcid
and a.tag = 100 and b.tag = 104;

I bet the latter rcount value is much greater than the former.  Did you run
explains on these queries?

braxton


-Original Message-
From: Roger Karnouk [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 9:57 AM
To: [EMAIL PROTECTED]
Subject: Query speed


I am trying to run two queries which seem to me should execute at abut the
same speed.

My table is setup as follows:
day   -  number of days since 1970
crcid  - a number between 0 and 24
tag - a number used to identify record type
total - the value stored (the rest of the record is just to identify this
value)

the primary key is day,crcid and tag
I also have and index on crcid
and on day seperately
Table contains about 1,000,000 records

This query takes 0.02 sec to execute:

select (a.day+4)%7 as dow,avg(b.total/a.total) as average
from server01_history a, server01_history b
where a.day = b.day and a.crcid = 24 and a.tag = 100
and b.crcid = 24 and b.tag = 104
group by dow;

note: dow is "day of week" which is used to group


This query takes 1min 47 sec to execute:

select a.crcid, avg ( b.total/a.total ) as average
from server01_history a, server01_history b
where a.crcid = b.crcid
and a.tag = 100 and b.tag = 104
group by a.crcid;

Both queries are similar they both alias the same table in order to use two
separate records.
Does anyone know why one query is so much faster than the other, and what
can I do to speed up the second query without slowing down the first.

Roger Karnouk

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Query speed

2001-05-02 Thread Braxton Robbason

seems to me that the first query uses your primary key index. Since you have
specified qualifications on crcid and tag in both aliases, it will resolve
to a small number of rows in each alias table.  The second query will join
your aliases on the crcid index, and then the tag qualifications will
resolve to a larger number of rows.  A way to verify this is to run:


select count(*) as rcount
from server01_history a, server01_history b
where a.day = b.day and a.crcid = 24 and a.tag = 100
and b.crcid = 24 and b.tag = 104 ;

and
select count(*) as rcount
from server01_history a, server01_history b
where a.crcid = b.crcid
and a.tag = 100 and b.tag = 104;

I bet the latter rcount value is much greater than the former.  Did you run
explains on these queries?

braxton


-Original Message-
From: Roger Karnouk [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, May 02, 2001 9:57 AM
To: [EMAIL PROTECTED]
Subject: Query speed


I am trying to run two queries which seem to me should execute at abut the
same speed.

My table is setup as follows:
day   -  number of days since 1970
crcid  - a number between 0 and 24
tag - a number used to identify record type
total - the value stored (the rest of the record is just to identify this
value)

the primary key is day,crcid and tag
I also have and index on crcid
and on day seperately
Table contains about 1,000,000 records

This query takes 0.02 sec to execute:

select (a.day+4)%7 as dow,avg(b.total/a.total) as average
from server01_history a, server01_history b
where a.day = b.day and a.crcid = 24 and a.tag = 100
and b.crcid = 24 and b.tag = 104
group by dow;

note: dow is "day of week" which is used to group


This query takes 1min 47 sec to execute:

select a.crcid, avg ( b.total/a.total ) as average
from server01_history a, server01_history b
where a.crcid = b.crcid
and a.tag = 100 and b.tag = 104
group by a.crcid;

Both queries are similar they both alias the same table in order to use two
separate records.
Does anyone know why one query is so much faster than the other, and what
can I do to speed up the second query without slowing down the first.

Roger Karnouk

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail
<[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php



-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Query speed

2001-05-02 Thread Roger Karnouk

I am trying to run two queries which seem to me should execute at abut the
same speed.

My table is setup as follows:
day   -  number of days since 1970
crcid  - a number between 0 and 24
tag - a number used to identify record type
total - the value stored (the rest of the record is just to identify this
value)

the primary key is day,crcid and tag
I also have and index on crcid
and on day seperately
Table contains about 1,000,000 records

This query takes 0.02 sec to execute:

select (a.day+4)%7 as dow,avg(b.total/a.total) as average
from server01_history a, server01_history b
where a.day = b.day and a.crcid = 24 and a.tag = 100
and b.crcid = 24 and b.tag = 104 
group by dow;

note: dow is "day of week" which is used to group


This query takes 1min 47 sec to execute:

select a.crcid, avg ( b.total/a.total ) as average
from server01_history a, server01_history b 
where a.crcid = b.crcid
and a.tag = 100 and b.tag = 104
group by a.crcid;

Both queries are similar they both alias the same table in order to use two
separate records. 
Does anyone know why one query is so much faster than the other, and what
can I do to speed up the second query without slowing down the first.

Roger Karnouk

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




Re: query speed issues

2001-03-16 Thread

If you are repeatedly querying tables on non-key fields you can improve query speeds 
by implementing indexes on those fields...

For instance, if you had a personnel table with the following fields: id, lastname, 
firstname, etc  Where id was an auto-increment,primary key you could index the 
lastname field to aid queries based upon last names.

Dacian Herbei <[EMAIL PROTECTED]> wrote:
>
> 
> Hi,
> I have some huge databases generated from text file of a dictionary.
> I would like to know if I make a query based on a field which was not metion as 
>a key
> at the creation of the table. Is the speed of the query affected by the size of the
> records and by the number of fields a record has?
> best regards,
> dacian
> 
> 
> -
> Before posting, please check:
>http://www.mysql.com/manual.php   (the manual)
>http://lists.mysql.com/   (the list archive)
> 
> To request this thread, e-mail <[EMAIL PROTECTED]>
> To unsubscribe, e-mail <[EMAIL PROTECTED]>
> Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
> 
> 
--
===
"If you put three drops of poison into a 100 percent pure Java, you get - Windows. If 
you put a few drops of Java into Windows, you still have Windows."
-- Sun Microsystems CEO, Scott McNealy

__
Get your own FREE, personal Netscape Webmail account today at 
http://webmail.netscape.com/

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




query speed issues

2001-03-16 Thread Dacian Herbei


Hi,
I have some huge databases generated from text file of a dictionary.
I would like to know if I make a query based on a field which was not metion as a 
key
at the creation of the table. Is the speed of the query affected by the size of the
records and by the number of fields a record has?
best regards,
dacian


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail <[EMAIL PROTECTED]>
To unsubscribe, e-mail <[EMAIL PROTECTED]>
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php