On 5/6/2014 10:19 AM, Woody Wu wrote:
I observed some strange behavior while operating on a big table with 200,000 records
in it. The table has a primary key (or unique index) on (time, id1, id2), and all of
these indexed columns are integers.
The following query executed very slowly; it took 15 seconds on my ARM device:
1. select max(time) from mytable where time < 99999999 and id1 = k1 and id2 = n.
Here 99999999 is a value large enough that no record has a time field equal to it.
k1 is a nonexistent value that does not match any id1 in the table, while n is a
normal value that is matched by a subset of records.
However, if I replace k1 with another value that can be found in the table and
keep everything else unchanged, like below,
2. select max(time) from mytable where time < 999999999 and id1 = k2 and
id2 = n.
this query ran very well; it took less than 1 second on the same system.
In this query, only the "time" part of the index helps, but not the
"id1, id2" part. SQLite scans the index backwards, from highest time
down, and stops as soon as it finds a record that satisfies the two
equality checks. In the first case, no such record exists, so every
single record ends up being looked at. In the second case, apparently a
matching record is found early enough.
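
For what it's worth, here is a minimal sketch along these lines (table and column
names assumed from your description, constants made up); EXPLAIN QUERY PLAN should
show that only the time < ? term is used against the index:

  -- Assumed schema: unique key on (time, id1, id2), all integer columns.
  CREATE TABLE mytable (
    time INTEGER,
    id1  INTEGER,
    id2  INTEGER,
    PRIMARY KEY (time, id1, id2)
  );

  -- Ask SQLite how it plans the slow query. The exact output varies by
  -- version, but the plan should show the index searched on time < ? only,
  -- so the id1/id2 equalities are checked row by row during the scan.
  EXPLAIN QUERY PLAN
  SELECT max(time)
    FROM mytable
   WHERE time < 999999999 AND id1 = 1 AND id2 = 2;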
--
Igor Tandetnik