On Sun, 4 Oct 2009, Simon Slavin wrote:

> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> From: Simon Slavin <slav...@hearsay.demon.co.uk>
> Subject: Re: [sqlite] SQLite performance with lots of data
> 
>
> On 4 Oct 2009, at 6:11pm, Cory Nelson wrote:
>
>> On Fri, Oct 2, 2009 at 12:34 PM, Cory Nelson <phro...@gmail.com>
>> wrote:
>>> On Fri, Oct 2, 2009 at 9:45 AM, Francisc Romano <fran...@gmail.com>
>>> wrote:
>>>> Wow. I did not expect such a quick answer...
>>>> Is there somewhere I can read exactly how fast SQLite is and how
>>>> large its databases can get, please?
>>>
>>> SQLite uses a b+tree internally, which is logarithmic in complexity.
>>> Every time your dataset doubles in size, worst-case performance will
>>> be halved.
>>
>> Woops, I of course meant to say performance halves every time the size
>> of your dataset is squared.
>
> But note that the fields of the row are stored in (more or less) a
> list.  So accessing the 20th column takes twice (-ish) as long as
> accessing the 10th column.  If you make a table with 100 columns it
> can take a long time to access the 100th column.
>
> Simon.
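[To put rough numbers on the exchange above: a B+tree lookup touches
about one page per tree level, so the cost grows with the logarithm of
the row count. A minimal sketch -- the fanout of 100 is an illustrative
assumption, not SQLite's actual page fanout:]

```python
def btree_height(n_rows, fanout=100):
    """Smallest number of levels such that fanout**height >= n_rows,
    i.e. how many pages a point lookup must touch (integer arithmetic
    to avoid floating-point log rounding)."""
    height, capacity = 1, fanout
    while capacity < n_rows:
        capacity *= fanout
        height += 1
    return height

# Doubling the row count adds at most one level,
# while squaring the row count doubles the height:
print(btree_height(10**6))    # 3 levels for a million rows
print(btree_height(10**12))   # 6 levels for a trillion rows
```

[That is why the corrected statement holds: going from a million rows to
a trillion (the square) doubles the lookup cost, whereas merely doubling
the row count usually costs nothing extra at all.]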

Could this not be implemented as a B-tree search algo as 
well? Maybe something like a FAT (RAT?) index stuck 
on the front of the table rows?
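[Simon's point can be illustrated with a toy model of a serially encoded
row. This is not SQLite's actual record format -- the real one uses a
serial-type header rather than per-field length bytes -- but the
consequence is the same: fields sit one after another, so reaching
column N means skipping the N-1 fields before it.]

```python
def encode_row(values):
    """Toy serialization: each column stored as a one-byte length
    followed by its UTF-8 payload, laid out back to back."""
    out = bytearray()
    for v in values:
        data = str(v).encode("utf-8")
        out.append(len(data))
        out += data
    return bytes(out)

def read_column(row, col):
    """To reach column `col`, every earlier field's length byte must be
    read and its payload skipped -- cost grows linearly with the
    column index."""
    pos = 0
    for _ in range(col):
        pos += 1 + row[pos]          # skip length byte + payload
    n = row[pos]
    return row[pos + 1:pos + 1 + n].decode("utf-8")

row = encode_row(["alice", 42, "london"])
print(read_column(row, 2))           # walks past columns 0 and 1 first
```

[A per-row offset table stuck on the front, as suggested above, would
make the skip loop unnecessary, at the price of extra bytes in every
row whether or not the later columns are ever read.]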

Keith

-----------------------------------------------------------------
Websites:
http://www.php-debuggers.net
http://www.karsites.net
http://www.raised-from-the-dead.org.uk

All email addresses are challenge-response protected with
TMDA [http://tmda.net]
-----------------------------------------------------------------
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
