Hi all!
Following the recent thread "Virtual tables used to query big external 
database", and the discussion with Mike Owens and Jay A. Kreibich, it seems 
that:

- The "old" way of dealing with dirty pages with bitmaps limited SQLite to an 
approximate maximal capacity of 10s of GBs, as opposed to therical TBs, because 
it imposed to malloc 256 bytes for every 1Mb of database during each 
transaction.

- The "new" way of dealing with dirty pages with a bitvec structure (introduced 
in SQLite v3.5.7) allows for sparse bitmaps and is then supposed to push away 
the "10s of GBs" limit.

Now the questions are:
1) What are the new practical limits with SQLite v3.5.7?
2) Does somebody have any real-life experience (or home-made tests and figures) 
with SQLite v3.5.7 and really big tables (say 100,000,000 rows)?
3) Does the new "bitvec" algorithm really help with such a big table?

I am mainly interested in the performance of INSERTs (for creating the big 
table) and SELECTs (for queries); UPDATEs, DROPs, TRIGGERs, etc. have a lower 
priority in my case. These questions really matter to me because, if SQLite is 
now able to handle really big tables, I no longer need to implement my own 
"virtual table" in order to link SQLite to a "big external database"... I could 
directly use SQLite itself for the whole application (no virtual table and no 
"external" database needed).

Thank you for any help about that subject.
Have a nice day,
Aladdin

