Re: Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread Dan Buettner
This kind of timeframe (2 - 2.5 secs) could just be the result of running on a laptop. You've got a small amount of RAM compared to many servers, a somewhat slower processor, and a *much* slower hard disk system than most servers have. If your query has to access multiple records spread throughout the t

Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread Dan Nelson
In the last episode (Nov 25), John Kopanas said: > Sorry about these questions. I am used to working with DBs with less > than 10K rows and now I am working with tables with over 500K rows, > which changes a lot for me. I was hoping I could get some > people's advice. > > I have a 'com

Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas
If I just SELECT id: SELECT id FROM purchased_services WHERE (company_id = 1000) It takes approx 2-2.5s. When I look at the process list, its state always seems to be 'Sending data'... This is after killing the db and repopulating it again. So what is going on? On 11/25/06
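A query that stays in 'Sending data' for seconds on a 500K-row table usually means a full table scan, i.e. there is no index on the column in the WHERE clause. The effect can be sketched with SQLite as a stand-in for MySQL (only the table and column names come from the thread; the row counts and index name here are made up for illustration):

```python
import sqlite3

# SQLite stand-in for the MySQL table in the thread (a sketch, not the
# poster's real schema; only the column names come from the posts).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE purchased_services (id INTEGER PRIMARY KEY, company_id INTEGER)"
)
conn.executemany(
    "INSERT INTO purchased_services (company_id) VALUES (?)",
    [(i % 5000,) for i in range(100_000)],
)

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is the readable detail.
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

query = "SELECT id FROM purchased_services WHERE company_id = 1000"

plan_before = plan(query)   # full table scan ("SCAN purchased_services")
conn.execute("CREATE INDEX idx_company_id ON purchased_services (company_id)")
plan_after = plan(query)    # indexed lookup via idx_company_id

print(plan_before)
print(plan_after)
```

The same idea in MySQL is `ALTER TABLE purchased_services ADD INDEX (company_id)`: the lookup goes from touching every row to touching only the matching ones.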

Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas
I tried the same tests with the database replicated in a MyISAM engine. The count was instantaneous, but the following still took 3-6 seconds: SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263) The following though was instantaneous: SELECT * FROM purchased_services
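The instant count is expected: MyISAM stores the exact row count in table metadata, so an unfiltered COUNT(*) is answered without reading any rows, while InnoDB has to scan. A filtered query still needs an index under either engine (table and column names from the thread; a sketch, not the poster's exact statements):

```sql
-- O(1) under MyISAM: answered from table metadata, no rows read.
SELECT COUNT(*) FROM purchased_services;

-- Still slow under either engine unless company_id is indexed.
SELECT * FROM purchased_services WHERE company_id = 535263;
```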

Re: Master Switch (Or Write by SQL_THREAD only)

2006-11-25 Thread Mathieu Bruneau
Dominik Klein wrote: >> Is there a way to allow the >> SQL_THREAD to write while holding everything else ? > > iptables -A INPUT -p tcp --dport 3306 -s MASTER_IP -j ACCEPT > iptables -A INPUT -p tcp --dport 3306 -j REJECT > Well technically that will refuse connection. Since the switch wo

Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas
Sorry about these questions. I am used to working with DBs with less than 10K rows and now I am working with tables with over 500K rows, which changes a lot for me. I was hoping I could get some people's advice. I have a 'companies' table with over 500K rows and a 'purchased_services'
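For tables this size, the usual first diagnostic is EXPLAIN, which shows whether MySQL can use an index for the lookup or must scan every row. A sketch, using the table and column names from the thread (the index name is an assumption):

```sql
-- If the `key` column of the EXPLAIN output is NULL and `rows` is
-- close to the full table size, the query is doing a table scan.
EXPLAIN SELECT * FROM purchased_services WHERE company_id = 535263;

-- The usual fix: index the column used in the WHERE clause.
ALTER TABLE purchased_services ADD INDEX idx_company_id (company_id);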

Re: Table of type=memory is full... but not

2006-11-25 Thread John Kopanas
When I moved from type=memory to the default DB it worked fine. I am starting to think that the quantity of rows, approx. 550K, was too much for the memory on my computer to handle. Does this make sense? On 11/25/06, John Kopanas <[EMAIL PROTECTED]> wrote: I create a memory table with the fol

Table of type=memory is full... but not

2006-11-25 Thread John Kopanas
I create a memory table with the following query: CREATE TABLE company_totals type=memory SELECT company_id id, SUM(annual_service_charge) service_charge FROM purchased_services ps GROUP BY company_id; When I try this I get the following error: Mysql::Error: The table 'com
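A "table is full" error on a MEMORY (HEAP) table is usually not about the machine's RAM but about the `max_heap_table_size` server variable, which caps each MEMORY table (the default of that era was 16MB, easily exceeded by ~550K rows). A sketch of the fix, with an assumed size of 256MB (the statement below reuses the query from the post):

```sql
-- Raise the per-table cap for this session before creating the table.
-- 256MB is an assumed value; size it to the actual data.
SET SESSION max_heap_table_size = 256 * 1024 * 1024;

CREATE TABLE company_totals TYPE=MEMORY
SELECT company_id AS id, SUM(annual_service_charge) AS service_charge
FROM purchased_services ps
GROUP BY company_id;
```

Note the cap in effect at CREATE time is the one that sticks to the table, so the variable must be raised before, not after, creating it.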

FIND_IN_SET question

2006-11-25 Thread Lars Schwarz
hi all, this is what i got: SELECT find_in_set( box, '2,3,4,5,6,12' ) <1 AS YESNO, box FROM f2g_booking ORDER BY box what i need is those values from the find_in_set list that haven't been found. f2g_booking holds box values of 1,2,4,5,12, which means there are no entries with 3 and 6 in the tabl
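FIND_IN_SET tests each *row's* value against the list; getting the list values that are absent from the table is the reverse direction, which SQL of that era handles awkwardly (there is no literal table of the list values to LEFT JOIN against). The simplest route is a client-side set difference; a minimal sketch, with the values taken from the post:

```python
# Which values from the FIND_IN_SET list are missing from the table?
wanted = {2, 3, 4, 5, 6, 12}   # the list passed to FIND_IN_SET
boxes = {1, 2, 4, 5, 12}       # result of: SELECT DISTINCT box FROM f2g_booking
missing = sorted(wanted - boxes)
print(missing)  # [3, 6]
```

In pure SQL the equivalent trick is to build the list as a UNION of constants and LEFT JOIN it against `f2g_booking`, keeping rows where the join finds no match.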