On 2005-07-25 at 13:06:42 [+0200], David Fowler
<[EMAIL PROTECTED]>
wrote:
> SELECT count(*) FROM table1
> INNER JOIN table2 ON (table1.id = table2.rel_id)
> INNER JOIN table3 ON (table3.rel_id = table2.id)
> INNER JOIN table4 ON (table3.id = table4.rel_id)
> INNER JOIN table5 ON (table5.rel_id = table4.id)
> WHERE table1.value = 'value1'
> AND ((table3.value LIKE '%value3%' AND table5.value = 'value5')
> OR (table3.value LIKE '%value3%' AND table5.value = 'value5'));
>
> This query executes in 0.02s on MySQL, but about 14s on SQLite (to return
> 34 rows).
> I think the problem is the size of the temporary table the JOIN creates
> (37,434 rows); my largest table has 7,579 records. I'm not sure what my
> best option would be to get this size down, but I'm thinking along the
> lines of some nested SELECTs to get the needed rows in stages rather than
> all in one go at the end. My application's data set is probably going to
> get much larger than it already is, and MySQL is not really an option
> anymore. I could possibly go back to UNIONs if the OR operation isn't as
> efficient as two SELECTs (though I highly doubt that is the case).
Run this through the analyser (EXPLAIN), which will show where the time is
being spent. I would also suggest trying PostgreSQL, as my experience with
MySQL is that performance degrades significantly with lots of JOINs: it just
generates large Cartesian products, and I think SQLite is the same, i.e. a
huge intermediate table is generated in memory and the conditions are applied
to it row by row.
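(For the record, SQLite's analyser is invoked with EXPLAIN or EXPLAIN QUERY PLAN. A minimal sketch of reading the plan from Python, using the placeholder table names from the quoted query and an invented one-column-per-role schema, just to make the join runnable:)

```python
import sqlite3

# Sketch only: the schema below is invented (one id/rel_id/value column
# per table) so the quoted join can be run and produce a query plan.
con = sqlite3.connect(":memory:")
cur = con.cursor()
for t in ("table1", "table2", "table3", "table4", "table5"):
    cur.execute(
        f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, rel_id INTEGER, value TEXT)"
    )

plan = cur.execute("""
    EXPLAIN QUERY PLAN
    SELECT count(*) FROM table1
    INNER JOIN table2 ON (table1.id = table2.rel_id)
    INNER JOIN table3 ON (table3.rel_id = table2.id)
    INNER JOIN table4 ON (table3.id = table4.rel_id)
    INNER JOIN table5 ON (table5.rel_id = table4.id)
    WHERE table1.value = 'value1'
""").fetchall()

# One plan line per table; SCAN (rather than SEARCH ... USING INDEX) on
# the inner tables is the usual sign of row-by-row full-table scans.
for row in plan:
    print(row[-1])
```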
Certainly seems a strange query, though.
Charlie
====================================================
Well, the query isn't that strange: I have 3 values that need matching per
record returned, all in different tables (which takes care of 3 tables); the
actual data I want is stored in another related table, and 2 of the values
that need to match are not related directly, so a 5th table is required for
that relationship.
Anyway, my original query does in fact work, but it takes 10 minutes to run.
I'm guessing this is because it generates 1,530,200,100 rows of data to apply
the WHERE clause to (I just did a count(*) on it in MySQL). Why MySQL can do
this so much faster, in 1/10th of a second, I don't know (though I am using
InnoDB, not the default MyISAM engine; I imagine the BDB engine would be even
faster).
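(The "nested SELECTs in stages" idea mentioned above can be sketched as below. This is hypothetical, reusing the same invented placeholder schema; note that count(*) here counts matching table2 rows, so it only equals the join's count when the relationships are one-to-one:)

```python
import sqlite3

# Hypothetical sketch: filter each table down first so the intermediate
# results stay small, instead of letting the 5-way join build a huge
# cross product that the WHERE clause then has to filter.
con = sqlite3.connect(":memory:")
cur = con.cursor()
for t in ("table1", "table2", "table3", "table4", "table5"):
    cur.execute(
        f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, rel_id INTEGER, value TEXT)"
    )

staged = """
    SELECT count(*) FROM table2
    WHERE table2.rel_id IN (SELECT id FROM table1 WHERE value = 'value1')
      AND table2.id IN (
            SELECT rel_id FROM table3
            WHERE value LIKE '%value3%'
              AND id IN (
                    SELECT rel_id FROM table4
                    WHERE id IN (
                          SELECT rel_id FROM table5 WHERE value = 'value5')))
"""
staged_count = cur.execute(staged).fetchone()[0]
print(staged_count)  # 0 here, since the example tables are empty
```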
It's just that MySQL is too heavy to install alongside my application.
Anyway, back to the drawing board for an efficient way to get my data. Thanks
again.
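(For what it's worth, before rewriting the query it is usually worth checking whether the join columns are indexed at all: SQLite can only avoid row-by-row scans of the inner tables when it can look them up by index. A hedged sketch, again with the invented placeholder schema:)

```python
import sqlite3

# Sketch: index every join column (rel_id) plus the filtered column,
# then confirm via EXPLAIN QUERY PLAN that the planner now does indexed
# lookups (SEARCH ... USING INDEX) instead of full scans.
con = sqlite3.connect(":memory:")
cur = con.cursor()
for t in ("table1", "table2", "table3", "table4", "table5"):
    cur.execute(
        f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, rel_id INTEGER, value TEXT)"
    )
    cur.execute(f"CREATE INDEX idx_{t}_rel ON {t} (rel_id)")
cur.execute("CREATE INDEX idx_table1_value ON table1 (value)")

plan = cur.execute("""
    EXPLAIN QUERY PLAN
    SELECT count(*) FROM table1
    INNER JOIN table2 ON (table1.id = table2.rel_id)
    INNER JOIN table3 ON (table3.rel_id = table2.id)
    INNER JOIN table4 ON (table3.id = table4.rel_id)
    INNER JOIN table5 ON (table5.rel_id = table4.id)
    WHERE table1.value = 'value1'
""").fetchall()
for row in plan:
    print(row[-1])
```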