Stop that BULLSHIT
----- Original Message -----
From: "Simon Slavin" <[email protected]>
To: "General Discussion of SQLite Database" <[email protected]>
Sent: Monday, April 23, 2012 1:35 PM
Subject: Re: [sqlite] free list performance
On 23 Apr 2012, at 11:15am, Max Vlasov <[email protected]> wrote:
Some time ago I worked with a database, repeating the same sequence of
actions multiple times. The steps were basically:
- create table
- populate table
- do some deletes with some criteria
- drop table
Okay. That's obviously pointless on its own. To help us, it would be better to tell
us whether you do any reads before or after 'do some deletes', or both.
After about 20 iterations I started to notice the usual effects of internal
fragmentation (slowness in some usually quick operations, and reports
of large seeks from the VFS). I assume this has something to do with the
way new pages are allocated from the free list. I narrowed it down to a little
test that reproduces this (tested with 3.7.10):
CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT);
INSERT INTO TestTable DEFAULT VALUES; /* do this 1,000,000 times */
DELETE FROM TestTable WHERE (Id / 1000) % 2 = 0;
DROP TABLE TestTable;
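For anyone who wants to reproduce this, one complete iteration can be scripted in the
sqlite3 shell roughly as follows. This is only a sketch; it assumes recursive CTE
support (added in 3.8.3) to generate the million rows in one statement, whereas on
3.7.10 the insert loop has to live in the host language:
/* One iteration of the fragmentation test (sketch; needs SQLite 3.8.3+ for the CTE) */
CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT);
BEGIN;
WITH RECURSIVE cnt(x) AS (
  SELECT 1 UNION ALL SELECT x + 1 FROM cnt WHERE x < 1000000
)
INSERT INTO TestTable (Id) SELECT NULL FROM cnt; /* NULL gets an auto-assigned Id */
COMMIT;
DELETE FROM TestTable WHERE (Id / 1000) % 2 = 0; /* drop every other block of 1000 rows */
DROP TABLE TestTable;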
This test makes the db very fragmented after about 10 iterations.
Until recently I thought that the main source of internal fragmentation was
the nature of the data being added, but it seems that is not the only factor.
Even if your data is sequential, if the free_list is fragmented you will
probably end up with fragmented internal data. Is it possible to automatically
sort the free_list from time to time? Or maybe there is some other solution, if that
would cost too much?
Apart from the above, that's a nice description of the problem; it makes it easier to
answer. Your guess about the free_list is correct, as far as I know.
You have two types of fragmentation: fragmentation of the database file in
your disk's file system, and fragmentation of the data inside the database
file. The first one will make a big difference only under Windows, and I
assume you can deal with it using the usual Windows tools.
For the second one, the simplest way to do something like defragmenting the
data inside the file is to use the VACUUM command:
<http://www.sqlite.org/lang_vacuum.html>
Do it whenever you like. In your case, immediately after dropping the
table might be good. An alternative to manually issuing the VACUUM
command would be to create a new database and set 'PRAGMA
auto_vacuum=FULL' for it before creating any tables. However, note that
the vacuuming done by this pragma isn't identical to what VACUUM does.
Read about it here:
<http://www.sqlite.org/pragma.html#pragma_auto_vacuum>
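A rough sketch of both approaches, using the same test table as above; note that
auto_vacuum can only be switched on while the database is still empty (no tables
created yet), or by running VACUUM once after changing the pragma:
/* Option 1: defragment manually, right after the drop */
DROP TABLE TestTable;
VACUUM;
/* Option 2: on a brand new database, before any tables exist */
PRAGMA auto_vacuum = FULL;
CREATE TABLE [TestTable] ([Id] INTEGER PRIMARY KEY AUTOINCREMENT);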
Simon.
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users