On 5 Apr 2011, at 6:54pm, Guilherme Batista wrote:

> let's say I get the number of rows from the first table scan of the
> execution. I would compare it with the number of rows recorded for the
> table in sqlite_stat1 (I ran ANALYZE once, and there are no indexes on
> my tables). If the difference between them is too big, the optimizer
> may have built an inefficient query execution plan.
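
For reference, after ANALYZE the statistics you mention live in the
sqlite_stat1 table.  A minimal sketch of the check you describe, assuming a
hypothetical table t1 with no indexes:

    ANALYZE;
    -- sqlite_stat1 has columns (tbl, idx, stat); for a table with no
    -- indexes, idx is NULL and stat holds the approximate row count as
    -- of the last ANALYZE.
    SELECT stat FROM sqlite_stat1 WHERE tbl = 't1';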

I think you've left out a lot of considerations there.  You will get far more 
of a speed-up simply by creating appropriate indexes than from anything you 
describe here.
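
By "appropriate indexes" I mean indexes covering the columns your queries
actually search on.  A hedged sketch, assuming a hypothetical query that
filters t1 on a column c:

    -- Without an index, this query scans the whole table:
    --   SELECT * FROM t1 WHERE c = ?;
    -- An index on the searched column lets SQLite seek rows directly:
    CREATE INDEX idx_t1_c ON t1(c);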

> So I would stop the query
> execution, run ANALYZE on that table, and start the query
> optimization all over again.
> Of course this would improve almost nothing. But it's just the
> beginning of my study...

Okay, you're trying to pick apart the inner workings of SQLite, and are 
assuming that the query planner makes mistakes.  In that case I have a 
suggestion for you: add an option that runs ANALYZE on every table, and have 
your users run it overnight once every six months.
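
A minimal sketch of that option: in SQLite a bare ANALYZE gathers statistics
for every table and index in all attached databases, so the overnight job can
be a single statement:

    -- Analyze everything in one go; suitable for a periodic
    -- maintenance job run while the application is idle.
    ANALYZE;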

But first, and more importantly, make your indexes appropriate to your searches.
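
To check whether an index actually matches a given search, EXPLAIN QUERY PLAN
shows what the planner chose; using the hypothetical index from the sketch
above:

    EXPLAIN QUERY PLAN SELECT * FROM t1 WHERE c = ?;
    -- Output such as "SEARCH TABLE t1 USING INDEX idx_t1_c" means the
    -- index is used; "SCAN TABLE t1" means a full table scan.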

Simon.