> Right.  So you have a database with 2 000 000 rows that is 1.8GB
> So your first 1 000 000 rows take up about 1GB.
> And your test case with just 1 000 000 rows in runs really fast.
>
> So what is happening is that most of the first 1 000 000 rows fit in memory.
> Once the database gets bigger than that, the application has to keep fetching
> information from disk all the time, and that's far slower.
>
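If the working set really has outgrown RAM, one related knob is SQLite's per-connection page cache. A minimal sketch, with a purely illustrative value:

-- Illustrative only: ask for roughly 1 GB of page cache on this connection
-- (a negative value is interpreted as KiB rather than as a page count).
PRAGMA cache_size = -1000000;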

I'm afraid so... but what will it be like with 50 000 000 rows? I don't
have 100 gigabytes of memory :(
With 50 000 000 rows and a btree index, a query like "where fieldA = yyy"
returns immediately! So why isn't the same true
with an rtree index?

Thank you in advance,
Stéphane
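For a multi-range query shaped like the one quoted below, SQLite's R*Tree module is queried through its own virtual table rather than through an index on the base table. A minimal sketch, where the name hash1_rtree and the :lo/:hi parameters are illustrative, not the poster's actual schema:

-- Hypothetical 5-dimensional R*Tree mirroring the min/max pairs in HASH1
CREATE VIRTUAL TABLE hash1_rtree USING rtree(
   id,                    -- corresponds to HASH1.ID
   y1_min, y1_max,
   y2_min, y2_max,
   y3_min, y3_max,
   y4_min, y4_max,
   y5_min, y5_max
);

-- Range constraints on the virtual table's own coordinate columns can be
-- answered from the R*Tree structure instead of scanning every row.
SELECT id
  FROM hash1_rtree
 WHERE y1_min >= :lo1 AND y1_max <= :hi1
   AND y2_min >= :lo2 AND y2_max <= :hi2
   AND y3_min >= :lo3 AND y3_max <= :hi3
   AND y4_min >= :lo4 AND y4_max <= :hi4
   AND y5_min >= :lo5 AND y5_max <= :hi5;

Any payload columns stay in the ordinary HASH1 table and are joined back on id when needed.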

>> how to speed up this query:
>>
>> Select
>>    H1.ID
>> from
>>    HASH1 H1
>> where
>>    x1_y1_min >= max((<#randomnumber> % 255) - 10, 0) and
>>    x1_y1_max <= min((<#randomnumber> % 255) + 10, 255) and
>>    x1_y2_min >= max((<#randomnumber> % 255) - 10, 0) and
>>    x1_y2_max <= min((<#randomnumber> % 255) + 10, 255) and
>>    x1_y3_min >= max((<#randomnumber> % 255) - 10, 0) and
>>    x1_y3_max <= min((<#randomnumber> % 255) + 10, 255) and
>>    x1_y4_min >= max((<#randomnumber> % 255) - 10, 0) and
>>    x1_y4_max <= min((<#randomnumber> % 255) + 10, 255) and
>>    x1_y5_min >= max((<#randomnumber> % 255) - 10, 0) and
>>    x1_y5_max <= min((<#randomnumber> % 255) + 10, 255);
> If there is only one random number involved in all that, precalculate all the 
> max and mins:
>
> ll = max((<#randomnumber>  % 255)-10,0)
> mm = min((<#randomnumber>  % 255)+10,255)

In my test, all the random numbers are different...
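The quoted suggestion still applies with ten different random numbers: each pair of bounds can be computed once in the application and bound as parameters, which at least keeps the max()/min() arithmetic out of the SQL. A sketch, where the ?1 .. ?10 parameters are illustrative:

-- Each ?odd is max((r_i % 255) - 10, 0) and each ?even is
-- min((r_i % 255) + 10, 255), computed in the application before binding.
SELECT
   H1.ID
FROM
   HASH1 H1
WHERE
   x1_y1_min >= ?1 AND x1_y1_max <= ?2 AND
   x1_y2_min >= ?3 AND x1_y2_max <= ?4 AND
   x1_y3_min >= ?5 AND x1_y3_max <= ?6 AND
   x1_y4_min >= ?7 AND x1_y4_max <= ?8 AND
   x1_y5_min >= ?9 AND x1_y5_max <= ?10;

On its own this is unlikely to change the query plan; an R*Tree, as sketched above, is what lets all ten range conditions be answered from an index.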