Hello,

I'm currently investigating how far I can go with my
favorite DB engine. For that purpose I'm testing
my application with an artificial database that is
approx. 50 times bigger than the maximum I have
seen in the field so far.
The test creates a database from scratch and just fills
the tables with random data. To speed up this operation
(which takes 3 hours) I drop all irrelevant indices
prior to running the inserts.
Afterwards I need to re-create these indices because they are
necessary for regular database operations.
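
For reference, this is roughly what that drop / bulk-insert / re-create
flow looks like through the C API. It is just a sketch; the table "t1",
the column "value" and the index name "idx_t1_value" are made-up
placeholders, not my real schema:

#include <stdio.h>
#include <sqlite3.h>

static int run(sqlite3 *db, const char *sql)
{
    char *err = 0;
    int rc = sqlite3_exec(db, sql, 0, 0, &err);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "rc=%d: %s\n", rc, err ? err : "(no message)");
        sqlite3_free(err);
    }
    return rc;
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("test.db", &db) != SQLITE_OK) return 1;

    /* drop the irrelevant index before the bulk load */
    run(db, "DROP INDEX IF EXISTS idx_t1_value;");

    /* ... bulk INSERTs of random data go here, inside one big transaction ... */

    /* re-create the index afterwards -- this is the step that fails for me */
    run(db, "CREATE INDEX IF NOT EXISTS idx_t1_value ON t1(value);");

    sqlite3_close(db);
    return 0;
}
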
Now, this (CREATE INDEX) fails after a few minutes with an error code
of 7 (SQLITE_NOMEM, "malloc failed"). I'm using the native C API.
I also specify "PRAGMA cache_size=500000;", if that matters.
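
In other words, the failure shows up roughly like this (again just a
sketch with the same placeholder names; "db" is the open sqlite3* handle):

char *err = 0;

sqlite3_exec(db, "PRAGMA cache_size=500000;", 0, 0, 0);  /* larger page cache */

int rc = sqlite3_exec(db, "CREATE INDEX idx_t1_value ON t1(value);", 0, 0, &err);
if (rc == SQLITE_NOMEM) {   /* rc == 7, the "malloc failed" code */
    fprintf(stderr, "CREATE INDEX failed: %s\n", err ? err : "out of memory");
}
sqlite3_free(err);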

The table/index in question has approx. 300 million rows...

Is there a workaround, other than having the indices defined
from the beginning? I haven't tried that yet, though.

It could be that I'll need to add indices in future versions
of the application, and I'm concerned that SQLite will not be able
to do so once the database exceeds a certain size.

Please note that SQLite can (I think) handle that DB size very well;
it's just the CREATE INDEX that has been a bit disappointing so far.

Any comments on this?

I tried with SQLite 3.7.14.1 and 3.7.8 - no difference.


Kind regards

Marcus