> In my case (which is certainly not typical), a large (several GB)
> database is built up in several batches, one table at a time, while in
> parallel many intermediate files are created on the disk. This resulted
> in a very fragmented database file. After that, also several times, the
> data is selected in a way that uses 80-90% of the data in the database,
> using joins of all tables and sorting.
>
> ...
>
> With the new feature available, I can remove my own workaround, which
> does not work so well anyway. Many thanks to the developers.
>
>
Martin, you gave a good example of a case where this really helps. Although
I suppose you still need some tweaking. As Dan Kennedy wrote, you have to set
the chunk-size parameter, and "From that point on, connection "db" extends and
truncates the db file in 1MB chunks". So, for example, if you have just
created a db, perhaps made some minor changes to it, and plan to grow it to a
larger size, you have to set SQLITE_FCNTL_CHUNK_SIZE with
sqlite3_file_control and then also write something new; not just any write,
but one that is not going to land on a previously freed page.

As long as cases like yours are real and occur in practice, maybe a
change to the existing freelist_count pragma is possible? If it were writable
(PRAGMA freelist_count=1024;), sqlite would compare the supplied value with
the current count and, if it is bigger, allocate the necessary space. It
seems this syntax would be straightforward and self-explanatory. What do you
think?
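To illustrate the idea, a hypothetical usage could look like the sketch
below. Only the read form exists in SQLite today; the write form is the
suggested extension and does nothing in current versions:

#include <stdio.h>
#include <sqlite3.h>

/* Print the value returned by PRAGMA freelist_count. */
static int print_row(void *unused, int argc, char **argv, char **colname){
  (void)unused; (void)colname;
  if( argc>0 ) printf("freelist_count = %s\n", argv[0] ? argv[0] : "NULL");
  return 0;
}

int main(void){
  sqlite3 *db;
  if( sqlite3_open("test.db", &db)!=SQLITE_OK ) return 1;

  /* Works today: read the current number of pages on the freelist. */
  sqlite3_exec(db, "PRAGMA freelist_count", print_row, 0, 0);

  /* Proposed write form from this message, NOT an existing feature: if 1024
  ** is larger than the current freelist count, SQLite would extend the file
  ** until 1024 free pages are available. */
  sqlite3_exec(db, "PRAGMA freelist_count = 1024", 0, 0, 0);

  sqlite3_close(db);
  return 0;
}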

Max