Hi,

Some time ago I worked with a database, repeating the same sequence of
actions many times. The steps are basically:
- create table
- populate table
- do some deletes with some criteria
- drop table

After about 20 repetitions I started to notice the usual effects of internal
fragmentation (slowness in some usually quick operations, and the VFS
reporting large seeks). I assume this has something to do with the way new
pages are allocated from the free list. I narrowed it down to a little test
that reproduces this (tested with 3.7.10):

CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT);
INSERT INTO TestTable DEFAULT VALUES;  /* repeat this 1,000,000 times */
DELETE FROM TestTable WHERE (Id/1000) % 2 = 0;
DROP TABLE TestTable;

This test makes the db very fragmented after about 10 steps.
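
In case it helps to reproduce, here is a minimal sketch of such a driver,
written in Python with the standard sqlite3 module (the file name, the host
language and the exact counts are arbitrary choices; any equivalent loop
should do):

import sqlite3

conn = sqlite3.connect("fragtest.db")  # hypothetical file name

for step in range(10):
    conn.execute("CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT)")
    # populate: one row per insert, one million times
    for _ in range(1000000):
        conn.execute("INSERT INTO TestTable DEFAULT VALUES")
    conn.commit()
    # delete every other run of 1000 consecutive ids, which leaves the
    # free list full of interleaved holes
    conn.execute("DELETE FROM TestTable WHERE (Id/1000) % 2 = 0")
    conn.execute("DROP TABLE TestTable")
    conn.commit()

conn.close()

The million inserts go into a single transaction per step, purely to keep
the run time reasonable.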

Until recently I thought that the main source of internal fragmentation was
the nature of the data being added, but apparently that is not the only one.
Even if your data is inserted sequentially, if the free list is fragmented
you will probably end up with fragmented internal data anyway. Would it be
possible to automatically sort the free list from time to time? Or maybe
some other solution, if that would cost too much?
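
For reference, the only cure I am aware of right now is a full rebuild with
VACUUM. A small sketch (same assumptions as above) that checks how large the
free list has grown and then rebuilds the file:

import sqlite3

conn = sqlite3.connect("fragtest.db")  # hypothetical file name, as above
# freelist_count reports how many unused pages are currently on the free list
free_pages = conn.execute("PRAGMA freelist_count").fetchone()[0]
print("pages on the free list:", free_pages)
# VACUUM rewrites the whole database file, which defragments the tables and
# empties the free list, but at the cost of copying the entire database
conn.execute("VACUUM")
conn.close()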

Thanks

Max
