Re: Large Query Question.

2008-09-04 Thread mos

At 02:49 PM 9/3/2008, Jim Leavitt wrote:

Hi Mike,

Yes, sometimes. The application is an online book selection tool with 
about 1 million titles in it. Now the queries which return 100,000 rows 
would be something like returning all titles from a given publisher. Most 
of the common searches are fairly quick (1-5 sec), but this was a 
specific example given to me. As you can imagine, we're joining on many 
tables to pull author data, publication data, etc., and displaying it all on 
a detail page. An example query is (note: this is on a development box 
with nothing else on it):


SELECT p.Title FROM products AS p LEFT JOIN productcontributors AS pc ON 
p.RecordReference = pc.RecordReference WHERE pc.rowtype = 'PR8' AND 
p.feedid = 5 GROUP BY p.id LIMIT 0,10;


returns

10 rows in set (42.12 sec).
(Total of 194557 rows found.)

Now we've never dealt with anything like this before, but there are other 
sites returning similar counts fairly quickly.  The only thing I can think 
of is hardware.  What hardware upgrades would you recommend?  Would it 
even help? Would clustering be an option here?


Any advice is greatly appreciated.

Thanks much.


Jim,
 The problem is likely that your indexes are not defined properly. Put 
EXPLAIN in front of the query to see whether it can use just one index from 
each table.


I would try building a compound index on

Products: (RecordReference, FeedId)

ProductContributors: (RecordReference, RowType)

This should get it to execute the join and WHERE clause using just one 
index from each table. Give that a try and see if it speeds things up. :)
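Mike's compound-index suggestion can be sketched end to end, with SQLite standing in for MySQL (the table and column names come from the thread; the sample data, index names, and simplified query are made up for illustration):

```python
# Demonstrate that a compound index lets the join + WHERE be satisfied
# by an index search instead of a full table scan.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE products (id INTEGER PRIMARY KEY, RecordReference TEXT,
                       FeedId INTEGER, Title TEXT);
CREATE TABLE productcontributors (RecordReference TEXT, RowType TEXT);
-- The compound indexes Mike proposes:
CREATE INDEX idx_prod ON products (RecordReference, FeedId);
CREATE INDEX idx_pc   ON productcontributors (RecordReference, RowType);
""")
cur.executemany("INSERT INTO products VALUES (?,?,?,?)",
                [(1, "r1", 5, "Book A"), (2, "r2", 5, "Book B"),
                 (3, "r3", 4, "Book C")])
cur.executemany("INSERT INTO productcontributors VALUES (?,?)",
                [("r1", "PR8"), ("r2", "XX"), ("r3", "PR8")])

query = """SELECT p.Title FROM products AS p
           LEFT JOIN productcontributors AS pc
             ON p.RecordReference = pc.RecordReference
           WHERE pc.RowType = 'PR8' AND p.FeedId = 5"""
plan = cur.execute("EXPLAIN QUERY PLAN " + query).fetchall()
# With the compound indexes in place, at least one step of the plan
# is an index search rather than a table scan.
print(cur.execute(query).fetchall())  # -> [('Book A',)]
print(any("INDEX" in row[-1] for row in plan))
```

The same idea applies in MySQL: run EXPLAIN and check that the `key` column shows one index per table.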


Mike





On 3-Sep-08, at 3:02 PM, mos wrote:


Jim,
 Retrieving 100,000 rows will always take some time. Do you really 
need to return that many rows? Are you selecting just the columns you 
need? What are the slow queries?


Mike

At 12:05 PM 9/3/2008, Jim Leavitt wrote:

Greetings List,

We have a medium-large size database application which we are trying
to optimize and I have a few questions.

Server Specs
1 Dual Core 2.6 Ghz
2GB Ram

Database Specs
51 Tables
Min 10 rows, Max 100 rows
Total size approx 2GB

My.cnf
[mysqld]
set-variable=local-infile=0
log-slow-queries=slow-queries.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
key_buffer = 512M
max_allowed_packet=4M
sort_buffer_size = 512M
read_buffer_size = 512M
read_rnd_buffer_size = 256M
record_buffer = 256M
myisam_sort_buffer_size = 512M
thread_cache = 128
query_cache_limit = 1M
query_cache_type = 1
query_cache_size = 32M
join_buffer = 512M
table_cache = 512


We are having trouble with certain queries which are returning
anywhere from 100,000 - 300,000 rows.  Total query time is taking approx
1 - 2 mins depending on load.  Is there anything in our conf file
which could improve our performance?  Are there any hardware
recommendations that could help us improve the speed?  Would more
memory help us?  Any comments or recommendations are greatly
appreciated.

Thanks much.


Jim Leavitt
Developer
Treefrog Interactive Inc. (http://www.treefrog.ca)
Bringing the Internet to Life






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]


Jim Leavitt
Developer
Treefrog Interactive Inc. (http://www.treefrog.ca)
Bringing the Internet to Life
ph: 905-836-4442 ext 104
fx: 905-895-6561





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large Query Question.

2008-09-04 Thread Jim Lyons
It's highly unlikely hardware upgrades are needed unless you're on a really
underpowered machine.  How similar are the queries on the other machines?
The limit clause won't reduce the time taken to do the join and grouping,
it will only reduce the amount of output.

Also, I assume you have indexes on p.RecordReference, pc.RecordReference,
pc.rowtype, and p.feedid; otherwise you'll be doing table scans.  Are the
indexes up-to-date, i.e. have you run ANALYZE or OPTIMIZE TABLE to be sure
they're balanced?  I have found that out-of-date stats can make a HUGE
difference in performance.

Also, look at the memory set aside for joins in join_buffer_size.
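The maintenance step mentioned above can be sketched as follows (MyISAM-era MySQL syntax; the table names are the ones from the thread, so treat this as an illustrative fragment rather than a tested script):

```sql
-- Refresh index statistics so the optimizer has up-to-date cardinality:
ANALYZE TABLE products, productcontributors;

-- For MyISAM tables, defragment the data file and rebuild the indexes:
OPTIMIZE TABLE products, productcontributors;

-- Then re-check the plan:
EXPLAIN SELECT p.Title
FROM products AS p
LEFT JOIN productcontributors AS pc
  ON p.RecordReference = pc.RecordReference
WHERE pc.rowtype = 'PR8' AND p.feedid = 5
GROUP BY p.id LIMIT 0,10;
```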




-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


RE: Large Query Question.

2008-09-04 Thread Jerry Schwartz
-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 03, 2008 5:35 PM
To: Jim Leavitt
Cc: mysql@lists.mysql.com
Subject: Re: Large Query Question.

That's a lot of data to return, make sure you factor in data load and
transfer time. You may try breaking your query into smaller parts and
recombining the results in a scripting language. If you are searching
on a range (i.e. date range), break the range into smaller parts and
run multiple queries.
Divide and conquer, it will scale better.

[JS] I'm considering changing one of my programs so that it leaves the
result set on the server and pulls one record at a time. Do you have any
sense of how much that might hurt me? We're talking about less than 100,000
records but they are relatively chunky.

In this case, it's the memory usage for the result set that is a concern. I
have to keep increasing the amount of memory available for PHP.
Brent Baisley
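The "leave the result set on the server and pull one record at a time" idea can be sketched like this, with SQLite standing in for MySQL (names and data are illustrative; with PHP of that era the analogous switch is an unbuffered query such as mysql_unbuffered_query(), which trades client memory for holding the result open on the server):

```python
# Contrast materializing a whole result set in client memory with
# iterating the cursor one row at a time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [(i, "x" * 100) for i in range(1, 10001)])

# fetchall() holds every row in client memory at once:
all_rows = conn.execute("SELECT id FROM records").fetchall()

# Iterating the cursor keeps only one row in client memory at a time:
total = 0
for (rid,) in conn.execute("SELECT id FROM records"):
    total += 1

print(len(all_rows), total)  # -> 10000 10000
```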





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large Query Question.

2008-09-04 Thread mos

Jim,
 I've re-posted your message to the list so others can join in the 
fray. :)


Mike

At 10:50 AM 9/4/2008, you wrote:

Hi Mike,

I do believe we have done the indexing properly.  Please advise if we can 
make any adjustments.  Here is the output from the explain statements;


[attached image: EXPLAIN output]


Thanks,

Jim









-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]

Re: Large Query Question.

2008-09-04 Thread David Ashley
On Thu, Sep 4, 2008 at 10:38 AM, mos [EMAIL PROTECTED] wrote:
Jim,

The problem is likely your index is not defined properly. Use an Explain
in front of the query to see if it can use just one index from each table.

I would try building a compound index on

Products: (RecordReference, FeedId)

ProductContributors: (RecordReference, RowType)

This should get it to execute the join and where clause using just one index
from each table. Give that a try and see if it speeds things up. :)

Mike



I concur.

The SELECT time is going to resemble something like:

K_1 * F_1(number_of_records_in_database) + K_2 *
F_2(number_of_records_selected)

If the indices are effective, F_1 = log(N), but if the indices are not
effective, F_1 = N.

One thing you may want to try to narrow down the problem is just
retrieving 100 records (the COUNT clause of a query) and see how that
affects the speed, then try the full set and see how it is different.

If they aren't very different, then it is an F_1 problem.

But if they are different, then it is a K_2 / F_2 problem.

As far as K_2 or F_2 problems ...

Another possibility is that you are using ORDER BY on a large result set
that isn't indexed for an effective sort.  Try dropping the ORDER BY and see
what happens.

My view of how MySQL might work internally is perhaps naive.  But sorting
can be worst case O(N**2).

Dave.




Fwd: Large Query Question.

2008-09-04 Thread David Ashley

Addendum:  I misremembered the SQL keywords.  It isn't COUNT.  It is (I
think) LIMIT.

Also, ORDER BY might be GROUP BY.

Oopsie.


Large Query Question.

2008-09-03 Thread Jim Leavitt

Greetings List,

We have a medium-large size database application which we are trying  
to optimize and I have a few questions.


Server Specs
1 Dual Core 2.6 Ghz
2GB Ram

Database Specs
51 Tables
Min 10 rows, Max 100 rows
Total size approx 2GB

My.cnf
[mysqld]
set-variable=local-infile=0
log-slow-queries=slow-queries.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
key_buffer = 512M
max_allowed_packet=4M
sort_buffer_size = 512M
read_buffer_size = 512M
read_rnd_buffer_size = 256M
record_buffer = 256M
myisam_sort_buffer_size = 512M
thread_cache = 128
query_cache_limit = 1M
query_cache_type = 1
query_cache_size = 32M
join_buffer = 512M
table_cache = 512


We are having trouble with certain queries which are returning  
anywhere from 100,000 - 300,000 rows.  Total query time is taking approx  
1 - 2 mins depending on load.  Is there anything in our conf file  
which could improve our performance?  Are there any hardware  
recommendations that could help us improve the speed?  Would more  
memory help us?  Any comments or recommendations are greatly  
appreciated.


Thanks much.


Jim Leavitt
Developer
Treefrog Interactive Inc. (www.treefrog.ca)
Bringing the Internet to Life







Re: Large Query Question.

2008-09-03 Thread mos

Jim,
  Retrieving 100,000 rows will always take some time. Do you really 
need to return that many rows? Are you selecting just the columns you need? 
What are the slow queries?


Mike

At 12:05 PM 9/3/2008, Jim Leavitt wrote:

Greetings List,

We have a medium-large size database application which we are trying
to optimize and I have a few questions.

Server Specs
1 Dual Core 2.6 Ghz
2GB Ram

Database Specs
51 Tables
Min 10 rows, Max 100 rows
Total size approx 2GB

My.cnf
[mysqld]
set-variable=local-infile=0
log-slow-queries=slow-queries.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
key_buffer = 512M
max_allowed_packet=4M
sort_buffer_size = 512M
read_buffer_size = 512M
read_rnd_buffer_size = 256M
record_buffer = 256M
myisam_sort_buffer_size = 512M
thread_cache = 128
query_cache_limit = 1M
query_cache_type = 1
query_cache_size = 32M
join_buffer = 512M
table_cache = 512


We are having trouble with certain queries which are returning
anywhere from 100,000 - 300,000 rows.  Total query time is taking approx
1 - 2 mins depending on load.  Is there anything in our conf file
which could improve our performance?  Are there any hardware
recommendations that could help us improve the speed?  Would more
memory help us?  Any comments or recommendations are greatly
appreciated.

Thanks much.


Jim Leavitt
Developer
Treefrog Interactive Inc. (www.treefrog.ca)
Bringing the Internet to Life








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large Query Question.

2008-09-03 Thread Jim Lyons
What are the queries?  Are they straightforward selects or joins?  Are the
columns you select from indexed and are the indexes up-to-date?

On Wed, Sep 3, 2008 at 12:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:

 Greetings List,

 We have a medium-large size database application which we are trying to
 optimize and I have a few questions.

 Server Specs
 1 Dual Core 2.6 Ghz
 2GB Ram

 Database Specs
 51 Tables
 Min 10 rows, Max 100 rows
 Total size approx 2GB

 My.cnf
 [mysqld]
 set-variable=local-infile=0
 log-slow-queries=slow-queries.log
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 old_passwords=1
 key_buffer = 512M
 max_allowed_packet=4M
 sort_buffer_size = 512M
 read_buffer_size = 512M
 read_rnd_buffer_size = 256M
 record_buffer = 256M
 myisam_sort_buffer_size = 512M
 thread_cache = 128
 query_cache_limit = 1M
 query_cache_type = 1
 query_cache_size = 32M
 join_buffer = 512M
 table_cache = 512


 We are having trouble with certain queries which are returning anywhere
 from 100,000 - 300,000 rows.  Total query time is taking approx 1 - 2 mins
 depending on load.  Is there anything in our conf file which could improve
 our performance?  Are there any hardware recommendations that could help us
 improve the speed?  Would more memory help us?  Any comments or
 recommendations are greatly appreciated.

 Thanks much.


 Jim Leavitt
 Developer
 Treefrog Interactive Inc. (www.treefrog.ca)
 Bringing the Internet to Life








-- 
Jim Lyons
Web developer / Database administrator
http://www.weblyons.com


Re: Large Query Question.

2008-09-03 Thread Brent Baisley
That's a lot of data to return, make sure you factor in data load and
transfer time. You may try breaking your query into smaller parts and
recombining the results in a scripting language. If you are searching
on a range (i.e. date range), break the range into smaller parts and
run multiple queries.
Divide and conquer, it will scale better.
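Brent's divide-and-conquer idea can be sketched like this, with SQLite standing in for MySQL (table, column, and function names are made up for illustration):

```python
# Split one big range query into several smaller range queries and
# recombine the results in the client.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, feedid INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(i, 5) for i in range(1, 1001)])

def fetch_in_chunks(conn, lo, hi, step):
    """Run several small range queries instead of one big one."""
    results = []
    for start in range(lo, hi + 1, step):
        end = min(start + step - 1, hi)
        rows = conn.execute(
            "SELECT id FROM products WHERE feedid = 5 AND id BETWEEN ? AND ?",
            (start, end)).fetchall()
        results.extend(r[0] for r in rows)
    return results

rows = fetch_in_chunks(conn, 1, 1000, 250)  # four queries of ~250 rows each
print(len(rows))  # -> 1000
```

Each chunk stays small enough to return quickly, and the chunks can even be run in parallel or interleaved with rendering.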

Brent Baisley

On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:
 Greetings List,

 We have a medium-large size database application which we are trying to
 optimize and I have a few questions.

 Server Specs
 1 Dual Core 2.6 Ghz
 2GB Ram

 Database Specs
 51 Tables
 Min 10 rows, Max 100 rows
 Total size approx 2GB

 My.cnf
 [mysqld]
 set-variable=local-infile=0
 log-slow-queries=slow-queries.log
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 old_passwords=1
 key_buffer = 512M
 max_allowed_packet=4M
 sort_buffer_size = 512M
 read_buffer_size = 512M
 read_rnd_buffer_size = 256M
 record_buffer = 256M
 myisam_sort_buffer_size = 512M
 thread_cache = 128
 query_cache_limit = 1M
 query_cache_type = 1
 query_cache_size = 32M
 join_buffer = 512M
 table_cache = 512


 We are having trouble with certain queries which are returning anywhere from
 100,000 - 300,000 rows.  Total query time is taking approx 1 - 2 mins
 depending on load.  Is there anything in our conf file which could improve
 our performance?  Are there any hardware recommendations that could help us
 improve the speed?  Would more memory help us?  Any comments or
 recommendations are greatly appreciated.

 Thanks much.


 Jim Leavitt
 Developer
 Treefrog Interactive Inc. (www.treefrog.ca)
 Bringing the Internet to Life







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



RE: Large Query Question.

2008-09-03 Thread Tom Horstmann
Right... and perhaps try MySQL Enterprise Monitor. A trial is available from
mysql.com. It may give you hints on your mysql.cnf.

Kind regards,

TomH

-Original Message-
From: Brent Baisley [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 03, 2008 11:35 PM
To: Jim Leavitt
Cc: mysql@lists.mysql.com
Subject: Re: Large Query Question.

That's a lot of data to return, make sure you factor in data load and
transfer time. You may try breaking your query into smaller parts and
recombining the results in a scripting language. If you are searching
on a range (i.e. date range), break the range into smaller parts and
run multiple queries.
Divide and conquer, it will scale better.

Brent Baisley

On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:
 Greetings List,

 We have a medium-large size database application which we are trying to
 optimize and I have a few questions.

 Server Specs
 1 Dual Core 2.6 Ghz
 2GB Ram

 Database Specs
 51 Tables
 Min 10 rows, Max 100 rows
 Total size approx 2GB

 My.cnf
 [mysqld]
 set-variable=local-infile=0
 log-slow-queries=slow-queries.log
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 old_passwords=1
 key_buffer = 512M
 max_allowed_packet=4M
 sort_buffer_size = 512M
 read_buffer_size = 512M
 read_rnd_buffer_size = 256M
 record_buffer = 256M
 myisam_sort_buffer_size = 512M
 thread_cache = 128
 query_cache_limit = 1M
 query_cache_type = 1
 query_cache_size = 32M
 join_buffer = 512M
 table_cache = 512


 We are having trouble with certain queries which are returning anywhere
from 100,000 - 300,000 rows.  Total query time is taking approx 1 - 2 mins
 depending on load.  Is there anything in our conf file which could improve
 our performance?  Are there any hardware recommendations that could help
us
 improve the speed?  Would more memory help us?  Any comments or
 recommendations are greatly appreciated.

 Thanks much.


 Jim Leavitt
 Developer
 Treefrog Interactive Inc. (www.treefrog.ca)
 Bringing the Internet to Life







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large Query Question.

2008-09-03 Thread David Ashley
On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:


 We are having trouble with certain queries which are returning anywhere
 from 100,000 - 300,000 rows.  Total query time is taking approx 1 - 2 mins
 depending on load.  Is there anything in our conf file which could improve
 our performance?  Are there any hardware recommendations that could help us
 improve the speed?  Would more memory help us?  Any comments or
 recommendations are greatly appreciated.

Returning 100,000 to 300,000 rows will take some time no matter how you
slice it.

A more common approach is to be sure that the database is organized for
O(log N) retrieval, then to retrieve only the records you need (the ones you
need to display, for example), then to execute a second query to get more,
and then a third query, etc.

O(log N) retrieval = indices for the columns and the database arranged so
that equality and ordering are implemented using native data types.

What is your application?

Do you really need all those rows at one time?