>I totally disagree with you. Let's say you have 1,000,000 rows and 100
>of them contain NULL. In this situation selecting NOT NULL will select
>almost all rows which means that using index in this case doesn't give
>any performance boost. So here using full scan for NOT NULL condition
>is better and for NULL condition using index is better. Everything is
>completely legitimate.

The number of rows concerned either way is not the point.  Also, the 
NULL condition could just as well be almost any other condition.  A 
situation where the 1M and 100 figures are reversed would invalidate 
your point, so what would that prove?

select * from T where col1 like 'abc%' and col2 between 3 and 18;
uses the index on T

select * from T where col1 NOT like 'abc%' and col2 between 1 and 24;
doesn't, due to the use of NOT.
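For what it's worth, the planner's choice can be checked directly with 
EXPLAIN QUERY PLAN.  A minimal sketch, assuming a toy schema with an 
index on col1 (table and index names here are made up, and the LIKE 
optimization only applies when the indexed column uses a compatible 
collation, e.g. with PRAGMA case_sensitive_like = ON):

     CREATE TABLE T(col1 TEXT, col2 INTEGER);
     CREATE INDEX T_col1 ON T(col1);
     PRAGMA case_sensitive_like = ON;

     EXPLAIN QUERY PLAN
       select * from T where col1 like 'abc%' and col2 between 3 and 18;
     -- typically reports a SEARCH ... USING INDEX T_col1 (col1>? AND col1<?)

     EXPLAIN QUERY PLAN
       select * from T where col1 NOT like 'abc%' and col2 between 1 and 24;
     -- typically reports a SCAN of T, since NOT LIKE defeats the
     -- range transformation that makes the index usable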

That in some particular situation using an index could be better or 
worse is again not the point.  The point was that NOT <condition> 
simply inverts the outcome of a binary test, in the end, and that seems 
essentially independent of whether an index is used to determine <condition>.
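As an aside, when the negated test happens to have an index-friendly 
rewrite, the planner can still use an index; whether it chooses to 
depends on the data, not on the NOT itself.  A small sketch, assuming 
an additional (hypothetical) index on col2:

     CREATE INDEX T_col2 ON T(col2);
     EXPLAIN QUERY PLAN
       select * from T where col2 < 3 OR col2 > 18;
     -- this is the logical inverse of col2 BETWEEN 3 AND 18; depending
     -- on what it estimates, SQLite may answer it with two index
     -- searches (the OR optimization) rather than a full scan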


>BTW, when SQLite uses index on "text" field it needs to load full
>values of "text" field anyway.

Do you mean that
     select * from T where rowid = 1;
needs to load the _entire_ index when T has rowids in 1..10000000 ?
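Again EXPLAIN QUERY PLAN makes this concrete; a rowid lookup is a 
single B-tree seek, not a traversal of the whole tree.  A minimal 
sketch (the exact wording of the plan varies between SQLite versions):

     EXPLAIN QUERY PLAN select * from T where rowid = 1;
     -- typically reports something like:
     --   SEARCH T USING INTEGER PRIMARY KEY (rowid=?)
     -- i.e. one seek, regardless of how many rows T holds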


