Hello Adam

Adam Wolff wrote:
> Actually runs through the table four times instead of twice, and maybe
> can't even use the index for the whole query.

Assuming my results are not typical of MySQL query times, this would explain the sqrt() relationship between returned rows and query time: if only one axis of the index can be used, the number of rows scanned grows with the linear extent of the bounding box rather than its area, i.e. with the square root of the number of rows returned.

I have tried your suggestion of using a sub-query, but have had trouble getting the syntax right. Running EXPLAIN, it seems that only 4 bytes of the index (either lat or lon) are used as a range, with a brute-force search through the index for the other constraint.
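For reference, the shape of the query and plan is roughly this (the table, column, and index names are placeholders, and lat/lon are assumed to be 4-byte FLOATs):

  EXPLAIN SELECT id
  FROM points
  WHERE lat BETWEEN 51.3 AND 51.7
    AND lon BETWEEN -0.5 AND 0.2;

  -- with a composite KEY idx_lat_lon (lat, lon), this reports
  -- something like: type: range, key: idx_lat_lon, key_len: 4
  -- key_len of 4 means only the first column (lat) is used as a
  -- range; the lon condition is applied as a filter afterwards.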

If the query returns 25,600 points from a 100M-point dataset, it is brute-searching through 1.6M records for the second part of the index: with evenly spread points, a box covering 1.6% of each axis selects 1.6% of 100M = 1.6M rows on the indexed axis, of which only 1.6% (25,600) also satisfy the other constraint.

If the alternative were to create two 1.6M-row lists (one per axis) and then look for the rows common to both, it may be faster to instead scan a single 1.6M-row list and apply the other constraint by brute force, as sketched below.
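As a sketch of that single-list approach (index name hypothetical), FORCE INDEX makes MySQL range-scan one axis and apply the other constraint as a plain row-by-row filter:

  SELECT id, lat, lon
  FROM points FORCE INDEX (idx_lat)
  WHERE lat BETWEEN 51.3 AND 51.7   -- range scan, ~1.6M index entries
    AND lon BETWEEN -0.5 AND 0.2;   -- brute-force filter on each row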

I have received suggestions to use spatial indexes, which I am looking into. Alternatively, I could optimise the queries by slicing the data set into bands across one axis and keeping a key on the other axis; MySQL 5.1's partitioning scheme may help there. Rough sketches of both approaches follow.
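A minimal spatial-index sketch, assuming MyISAM (SPATIAL indexes in MySQL 5.x require it) and placeholder names; MBRContains tests each point against the bounding box via the R-tree:

  ALTER TABLE points ADD COLUMN pt POINT;
  UPDATE points SET pt = Point(lon, lat);
  ALTER TABLE points MODIFY pt POINT NOT NULL;  -- SPATIAL INDEX needs NOT NULL
  ALTER TABLE points ADD SPATIAL INDEX (pt);

  SELECT id, lat, lon
  FROM points
  WHERE MBRContains(
          GeomFromText('POLYGON((-0.5 51.3, 0.2 51.3, 0.2 51.7,
                                 -0.5 51.7, -0.5 51.3))'),
          pt);

And a rough partitioning sketch (column names and band boundaries invented): RANGE partitions slice latitude into bands, so the optimiser prunes to the bands a query box overlaps, and the key on longitude does the rest within each band:

  CREATE TABLE points_part (
      id     INT UNSIGNED NOT NULL,
      lat_e6 INT NOT NULL,          -- latitude  * 1,000,000
      lon_e6 INT NOT NULL,          -- longitude * 1,000,000
      KEY (lon_e6)
  )
  PARTITION BY RANGE (lat_e6) (
      PARTITION p0 VALUES LESS THAN (-30000000),
      PARTITION p1 VALUES LESS THAN (0),
      PARTITION p2 VALUES LESS THAN (30000000),
      PARTITION p3 VALUES LESS THAN MAXVALUE
  );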

