Hello Cesar, maybe you can have a look at ft3 (ft3.sourceforge.net), which does the same thing. Some advice:

* partition your table ft_index on wordid. I use 1024 tables and access them as ft_index[ wordid % 1024 ]. This reduces the size of each table and thus the size of its index (see the sketch below the list).

* use a large page size via PRAGMA page_size (this speeds up large reads).
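For example, one partition might look something like this (a simplified sketch; the table naming, column types, and index choice are just an illustration, not FT3's exact schema):

  -- one of the 1024 partitions; the application computes wordid % 1024
  -- and queries the matching table
  create table ft_index_0497 ( docid int, wordid int, posicion int, ranking float );
  create index ft_index_0497_word on ft_index_0497 ( wordid, docid );

  -- lookup for wordid 18929: 18929 % 1024 = 497, so the rows live in ft_index_0497
  select docid, ranking from ft_index_0497 where wordid = 18929;

Splitting on wordid % 1024 keeps each per-table index small, so a single-word lookup touches far fewer index pages.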
FT3's way is to switch to a binary index stored in a blob for each word, something like:

create table ft_index ( wordid int, scores blob );

where the blob is an array of (docid, rank, position, ...). This is a lot faster, of course, but you have to handle the join yourself (see the sketch at the end of this message).

Pierre.

P.S.: I will try to use the new virtual table interface to access the blob data.

----- Original message ----
From: Cesar David Rodas Maldonado <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Tuesday, 22 August 2006, 10:10:30
Subject: [sqlite] how can i optimize this query

I have the following table with about 10,000,000 records:

CREATE TABLE ft_index (
  docid int(11) NOT NULL default '0',
  wordid int(11) NOT NULL default '0',
  posicion int(11) NOT NULL default '0',
  ranking float NOT NULL default '0',
  lang int(11) NOT NULL default '0',
  KEY docid (docid,wordid,posicion,ranking),
  KEY lang (lang)
);

How can I optimize the following query? I need speed (this is for a full-text search project):

select t0.*, t0.ranking + t1.ranking + t2.ranking + t3.ranking + t4.ranking as ranking
from ft_index as t0
inner join ft_index as t1 on (t0.docid = t1.docid)
inner join ft_index as t2 on (t0.docid = t2.docid)
inner join ft_index as t3 on (t0.docid = t3.docid)
inner join ft_index as t4 on (t0.docid = t4.docid)
where (t0.wordid = '18929')
  AND (t1.wordid = '27283')
  AND ( t2.wordid = '4351'
        and t2.posicion + 1 = t3.posicion
        and t3.wordid = '9418'
        and t3.posicion + 1 = t4.posicion )
group by t0.docid
order by ranking;

Each inner join searches for one word; the word ids come from another table (which maps words to their numbers).
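A minimal sketch of the blob-per-word layout mentioned above (the packing format and query are just an illustration, not FT3's actual encoding):

  create table ft_index ( wordid int primary key, scores blob );

  -- the blob holds a packed array of fixed-size records, e.g.
  --   [docid | rank | position] repeated for every posting of the word
  -- a single read fetches all postings for one word:
  select scores from ft_index where wordid = 18929;

  -- the application decodes the blob and intersects the docid lists
  -- of the query words itself, instead of letting SQL do the join.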