On 12.08.2010 13:04, TeDe wrote:
>   On 12.08.2010 12:16, Martin.Engelschalk wrote:
>> On 12.08.2010 12:08, TeDe wrote:
>>>    Hello,
>>>
>>> I want to import a big subset of data from one database to a new one. I
>>> attach the two databases together and use
>>>
>>> insert into customers select * from source.customers where name LIKE 'x%'
>>>
>>> I can approximately calculate how big the new database will grow. Is
>>> there a way to tell SQLite to reserve an initial space or number of pages
>>> instead of letting the database file grow again and again? I'm looking
>>> for a way to speed up the import.
>>>
>> Hello Thomas,
>>
>> I create a dummy table with a blob field and fill it with a very large
>> empty blob. Then I drop the table. The empty pages remain behind and can
>> then be used by the following inserts.
>>
> Hello Martin,
>
> that sounds like a good idea. Do you use it to have enough space for
> later operations or because you want to speed up the inserts?
> How big is the space you reserve by this and how much faster is it? I
> presume you have to allocate quite big BLOBs.
>
> Best regards,
>
> Thomas
Hello Thomas,

My primary goal was not a speedy insert but to avoid fragmentation of 
the resulting database file, which slows down later access to the file. 
So this is not exactly on the topic of your post.
I did not measure the changes in speed of the insert. However, later 
selects, which in my case use practically all the data in the database, 
speed up on the order of 20%.
I have to admit that this does not seem like much. However, my customer 
for some reason did not like the fragmentation and insisted on a solution.

I calculate the size of the blobs dynamically based on what I know of 
the following inserts. Also, I watch the result of PRAGMA 
freelist_count. The size varies between 1 MB and 1 GB.
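For anyone wanting to try this, here is a minimal sketch of the dummy-blob trick using Python's sqlite3 module. The file name and the 10 MB size are just placeholders; SQLite's built-in zeroblob() function creates the empty blob without building it in memory:

```python
import sqlite3

# Placeholder file name; in practice this is the target database.
conn = sqlite3.connect("target.db")
cur = conn.cursor()

# Reserve space up front: fill a throwaway table with a zero-filled blob
# (10 MB here as an example), then drop the table. With the default
# auto_vacuum=NONE, the freed pages stay on the freelist and are reused
# by later inserts instead of the file growing page by page.
cur.execute("CREATE TABLE IF NOT EXISTS _filler (b BLOB)")
cur.execute("INSERT INTO _filler VALUES (zeroblob(?))", (10 * 1024 * 1024,))
conn.commit()
cur.execute("DROP TABLE _filler")
conn.commit()

# Watch the freelist, as described above.
print(cur.execute("PRAGMA freelist_count").fetchone()[0])
conn.close()
```

Note that this only helps when auto_vacuum is off; with full or incremental auto_vacuum the dropped pages are given back to the operating system rather than kept on the freelist.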

Martin
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users