Jim,
I've re-posted your message to the list so others can join in the fray. :)

Mike

At 10:50 AM 9/4/2008, you wrote:
Hi Mike,

I do believe we have done the indexing properly. Please advise if we can make any adjustments. Here is the output from the EXPLAIN statements:

[Attached image: 16634be.png (EXPLAIN output)]


Thanks,

Jim

On 3-Sep-08, at 10:02 PM, mos wrote:

At 02:49 PM 9/3/2008, Jim Leavitt wrote:
Hi Mike,

Yes, sometimes. The application is an online book selection tool with about 1 million titles in it. The queries which return 100,000 rows would be something like returning all titles from a given publisher. Most of the common searches are fairly quick (1-5 sec), but this was a specific example given to me. As you can imagine, we're joining on many tables to pull author data, publication data, etc. and displaying it all on a detail page. An example query is below (note: this is on a development box with nothing else running on it):

SELECT p.Title
FROM products AS p
LEFT JOIN productcontributors AS pc ON p.RecordReference = pc.RecordReference
WHERE pc.rowtype = "PR8" AND p.feedid = "5"
GROUP BY p.id
LIMIT 0,10;

returns

10 rows in set (42.12 sec).
(Total of 194557 rows found.)

We've never dealt with anything like this before, but there are other sites returning similar counts fairly quickly. The only thing I can think of is hardware. What hardware upgrades would you recommend? Would they even help? Would clustering be an option here?

Any advice is greatly appreciated.

Thanks much.

Jim,
The problem is likely that your indexes are not defined properly. Put "EXPLAIN" in front of the query to see whether it can use just one index from each table.
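For example, using the query from your earlier message:

EXPLAIN SELECT p.Title
FROM products AS p
LEFT JOIN productcontributors AS pc ON p.RecordReference = pc.RecordReference
WHERE pc.rowtype = "PR8" AND p.feedid = "5"
GROUP BY p.id
LIMIT 0,10;

Check the "key" column in the output for each table; if it shows NULL, MySQL isn't using any index for that table.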

I would try building a compound index on

Products: (RecordReference, FeedId)

ProductContributors: (RecordReference, RowType)

This should get it to execute the join and WHERE clause using just one index from each table. Give that a try and see if it speeds things up. :)
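Something along these lines (the index names are just placeholders, use whatever fits your conventions):

ALTER TABLE products
  ADD INDEX idx_recref_feed (RecordReference, FeedId);

ALTER TABLE productcontributors
  ADD INDEX idx_recref_rowtype (RecordReference, RowType);

One side note: since your WHERE clause filters on pc.rowtype, the LEFT JOIN effectively behaves like an INNER JOIN here (rows with no matching contributor get discarded anyway), so you could write it as a plain JOIN without changing the result.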

Mike




On 3-Sep-08, at 3:02 PM, mos wrote:

Jim,
Retrieving 100,000 rows will always take some time. Do you really need to return that many rows? Are you selecting just the columns you need? What are the slow queries?

Mike
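P.S. One thing that jumps out of the my.cnf you posted: sort_buffer_size, read_buffer_size, read_rnd_buffer_size, and join_buffer are allocated per connection, not once globally, so at 256M-512M each just a handful of concurrent connections could exhaust your 2GB of RAM and push the box into swap. (record_buffer is just the old name for read_buffer_size, so you're setting that one twice.) As a rough sketch only, since the right values depend on your workload:

[mysqld]
key_buffer              = 512M  # global MyISAM key cache; reasonable for ~2GB of data
sort_buffer_size        = 2M    # per connection
read_buffer_size        = 1M    # per connection
read_rnd_buffer_size    = 1M    # per connection
join_buffer_size        = 1M    # per connection
myisam_sort_buffer_size = 64M   # only used by REPAIR TABLE / index builds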

At 12:05 PM 9/3/2008, Jim Leavitt wrote:
Greetings List,

We have a medium-to-large database application which we are trying
to optimize, and I have a few questions.

Server Specs
1 dual-core CPU @ 2.6 GHz
2 GB RAM

Database Specs
51 Tables
Min 100,000 rows, max 1,000,000 rows per table
Total size approx. 2 GB

my.cnf
[mysqld]
set-variable=local-infile=0
log-slow-queries=slow-queries.log
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
old_passwords=1
key_buffer = 512M
max_allowed_packet=4M
sort_buffer_size = 512M
read_buffer_size = 512M
read_rnd_buffer_size = 256M
record_buffer = 256M
myisam_sort_buffer_size = 512M
thread_cache = 128
query_cache_limit = 1M
query_cache_type = 1
query_cache_size = 32M
join_buffer = 512M
table_cache = 512


We are having trouble with certain queries which are returning
anywhere from 100,000 to 300,000 rows.  Total query time is approx.
1-2 minutes depending on load.  Is there anything in our conf file
which could improve our performance?  Are there any hardware
recommendations that could help us improve the speed?  Would more
memory help us?  Any comments or recommendations are greatly
appreciated.

Thanks much.


Jim Leavitt
Developer
Treefrog Interactive Inc. (www.treefrog.ca)
"Bringing the Internet to Life"






Jim Leavitt
Developer
Treefrog Interactive Inc. (www.treefrog.ca)
"Bringing the Internet to Life"
ph: 905-836-4442 ext 104
fx: 905-895-6561









-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
