On Mon, 1 Jan 2018 10:45:50 +0100, Clemens Ladisch wrote:

> Wolfgang Enzinger wrote:
>> First, query the overall extent of your data, like this:
>> SELECT MIN(mingx),MAX(maxgx),MIN(mingy),MAX(maxgy) FROM flst_shape_index;
>
> This results in a full table scan. Instead of caching these values manually,
> it would be a better idea to read them from the index:
>
> SELECT rtreenode(2, data) FROM flst_shape_index_node WHERE nodeno = 1;
>
> (rtreenode() is undocumented; maybe you should use your own decoder.)
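For anyone following along, the two variants can be tried from Python's stdlib sqlite3 module (assuming the build has the R*Tree module compiled in). The table name mirrors the one from the thread; the sample rows are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical 2-D index shaped like flst_shape_index from the thread.
con.execute(
    "CREATE VIRTUAL TABLE flst_shape_index "
    "USING rtree(id, mingx, maxgx, mingy, maxgy)"
)
con.execute("INSERT INTO flst_shape_index VALUES (1, 0, 10, 0, 5)")
con.execute("INSERT INTO flst_shape_index VALUES (2, 5, 20, 2, 8)")

# Variant 1: MIN/MAX aggregates -- walks every row of the virtual table.
extent = con.execute(
    "SELECT MIN(mingx), MAX(maxgx), MIN(mingy), MAX(maxgy) "
    "FROM flst_shape_index"
).fetchone()
print(extent)  # (0.0, 20.0, 0.0, 8.0)

# Variant 2: decode only the root node of the r-tree via the shadow table.
# rtreenode() is undocumented test machinery; its text output (cells like
# "{id minx maxx miny maxy}") has to be parsed by hand.
root = con.execute(
    "SELECT rtreenode(2, data) FROM flst_shape_index_node WHERE nodeno = 1"
).fetchone()[0]
print(root)
```

The root node holds one bounding box per child, so for a tree of any real size the second query touches a single page instead of 160,000 rows.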
Thanks, didn't know that; I'll look into it. You're right, my query results in a full table scan, but it's pretty fast anyway: less than a second with 160,000 rows and a cold cache.

>> Let SQLite know about that likelihood in a JOIN query
>
> This does not appear to change anything with a virtual table:
>
> CREATE TABLE t(id INTEGER PRIMARY KEY, x, [...]);
> CREATE VIRTUAL TABLE i USING rtree(id, minx, maxx);
>
> SELECT t.* FROM t JOIN i USING (id) WHERE likelihood(i.minx BETWEEN 10 AND
> 20, 0.000001);
> --EQP-- 0,0,1,SCAN TABLE i VIRTUAL TABLE INDEX 2:D0B0
> --EQP-- 0,1,0,SEARCH TABLE t USING INTEGER PRIMARY KEY (rowid=?)
>
> SELECT t.* FROM t JOIN i USING (id) WHERE likelihood(i.minx BETWEEN 10 AND
> 20, 0.999999);
> --EQP-- 0,0,1,SCAN TABLE i VIRTUAL TABLE INDEX 2:D0B0
> --EQP-- 0,1,0,SEARCH TABLE t USING INTEGER PRIMARY KEY (rowid=?)

This is not surprising, because only criteria on the "i" table are in effect here, and even a likelihood of 0.999999 is more selective than a likelihood of 1.0 (that is, no filter criterion on this table at all), so the plan stays the same. However, if your query has criteria on both the "i" and the "t" table, the likelihood() hint can make a difference. Of course, anybody correct me if I'm mistaken.

Happy new year!
Wolfgang
_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users