Hi,

We are using the following schema:

CREATE TABLE [BlobLastNameTest] (
    [FieldName] CHAR(25) PRIMARY KEY,
    [Vertices]  BLOB
);

The automatic primary-key index appears in sqlite_master as:

index|sqlite_autoindex_BlobLastNameTest_1|BlobLastNameTest|3|
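For anyone who wants to verify this, the automatic index can be confirmed programmatically; here is a minimal sketch using Python's built-in sqlite3 module against an in-memory database (for illustration only):

```python
import sqlite3

# In-memory database, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE [BlobLastNameTest] "
    "([FieldName] CHAR(25) PRIMARY KEY, [Vertices] BLOB)"
)

# A non-INTEGER PRIMARY KEY is implemented as an automatic unique index,
# which shows up in sqlite_master as sqlite_autoindex_<table>_1.
row = con.execute(
    "SELECT type, name, tbl_name FROM sqlite_master WHERE type = 'index'"
).fetchone()
print(row)  # ('index', 'sqlite_autoindex_BlobLastNameTest_1', 'BlobLastNameTest')
```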
 
 
This table could potentially hold 10 to 40 million rows. We are using the
following query to obtain the minimum rowid for each unique FieldName (a
last name):

sqlite> EXPLAIN QUERY PLAN
   ...> SELECT t1.FieldName, t1.rowid
   ...> FROM BlobLastNameTest t1
   ...> GROUP BY t1.FieldName
   ...> HAVING t1.rowid = (SELECT MIN(rowid)
   ...>                    FROM BlobLastNameTest
   ...>                    WHERE FieldName = t1.FieldName);
 
0|0|0|SCAN TABLE BlobLastNameTest AS t1 USING COVERING INDEX sqlite_autoindex_BlobLastNameTest_1 (~1000000 rows)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE BlobLastNameTest USING COVERING INDEX sqlite_autoindex_BlobLastNameTest_1 (FieldName=?) (~1 rows)
 
This query runs in under 5 seconds on the latest version of SQLite with
2,700,000 rows. But what if we have 10 million to 40 million rows? Would
it be possible to optimize this query further? Thank you.
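For anyone who wants to experiment, here is a minimal, self-contained reproduction using Python's built-in sqlite3 module with a handful of made-up rows. The plain aggregate form at the end is included only as a comparison I believe returns the same pairs, not as a confirmed optimization:

```python
import sqlite3

# In-memory toy database; the three rows are made up for illustration.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE [BlobLastNameTest] "
    "([FieldName] CHAR(25) PRIMARY KEY, [Vertices] BLOB)"
)
con.executemany(
    "INSERT INTO BlobLastNameTest VALUES (?, ?)",
    [("Smith", b"\x01"), ("Jones", b"\x02"), ("Brown", b"\x03")],
)

# The query from the post: minimum rowid for each unique FieldName.
rows = con.execute(
    "SELECT t1.FieldName, t1.rowid FROM BlobLastNameTest t1 "
    "GROUP BY t1.FieldName "
    "HAVING t1.rowid = (SELECT MIN(rowid) FROM BlobLastNameTest "
    "WHERE FieldName = t1.FieldName)"
).fetchall()

# A plain aggregate form that should yield the same pairs here, since
# each group's MIN(rowid) is exactly the rowid the HAVING clause keeps.
agg = con.execute(
    "SELECT FieldName, MIN(rowid) FROM BlobLastNameTest GROUP BY FieldName"
).fetchall()
assert sorted(rows) == sorted(agg)
```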
               
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
