Dennis Cote wrote:
Insun Kang wrote:
The machine is iPAQ h5550 (CPU speed is about 400MHz).
Cache size = 500 pages * 2KB = 1MB
Cache size = 50 pages * 2KB = 100KB
Cache size = 25 pages * 2KB = 50KB
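For reference, those figures match SQLite's actual knobs: cache bytes = page_size × cache_size. A quick sketch of setting the first configuration, using Python's stdlib sqlite3 module (the pragma names are real; the values are the ones above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# page_size must be set before any content is written to the database.
cur.execute("PRAGMA page_size = 2048")   # 2 KB pages
cur.execute("PRAGMA cache_size = 500")   # 500 pages in the page cache
page = cur.execute("PRAGMA page_size").fetchone()[0]
cache = cur.execute("PRAGMA cache_size").fetchone()[0]
print(page * cache)  # 1024000 bytes, i.e. the ~1 MB cache above
```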
The test code is written in C, and the flow is as follows.
- The data table has 11 columns and 5 single-column indices and 5
multi-column indices.
- insert 3000 recs within a single transaction. (begin / insert 3000
recs / commit)
INSERT INTO MDS VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9,
:10, :11)
* I use the sqlite3_prepare() and sqlite3_bind_xxx() functions.
The data-binding time can be ignored because all data to be
bound are already loaded into memory before the transaction begins.
- delete 1000 recs among 3000 recs within a single transaction (begin
/ delete 1000 recs / commit )
DELETE FROM T WHERE eleven LIKE :1
* I do not think SQLite uses a drop-and-rebuild scheme for this
SQL statement.
One possibility I can guess at is that SQLite removes the records only
from the data table, not from the indices (lazy index updates). But I
am not certain. Any ideas?
-------------------------------------------
<create table & index>
create table T (
one text NOT NULL,
two text,
three int,
four text,
five int,
six int NOT NULL,
seven int,
eight int,
nine int,
ten int UNIQUE,
eleven text )
create index i1 ~ i5 (single-column indices)
create index mi1 ~ mi5 (multi-column indices, each consisting of 2 or 3
columns)
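The schema and transaction flow above can be sketched end to end with Python's stdlib sqlite3 module, which wraps the same engine as the C API (executemany reuses one prepared statement much as sqlite3_prepare()/sqlite3_bind_xxx() do). The index column choices and the row data are made-up placeholders, since the original post does not list them:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions explicitly, like the C code
cur = conn.cursor()

cur.execute("""CREATE TABLE T (
    one text NOT NULL, two text, three int, four text, five int,
    six int NOT NULL, seven int, eight int, nine int,
    ten int UNIQUE, eleven text)""")

# 5 single-column and 5 multi-column indices (column choices are assumed).
for i, cols in enumerate(["one", "two", "three", "four", "five"], 1):
    cur.execute(f"CREATE INDEX i{i} ON T({cols})")
for i, cols in enumerate(["six, seven", "eight, nine", "one, two, three",
                          "four, five, six", "seven, eight"], 1):
    cur.execute(f"CREATE INDEX mi{i} ON T({cols})")

# Insert 3000 records inside a single transaction; the INSERT is
# prepared once and re-executed with fresh bindings for each row.
rows = [(f"a{n}", f"b{n}", n, f"c{n}", n, n, n, n, n, n,
         "x" if n < 1000 else "y") for n in range(3000)]
cur.execute("BEGIN")
cur.executemany("INSERT INTO T VALUES (?,?,?,?,?,?,?,?,?,?,?)", rows)
cur.execute("COMMIT")

# Delete 1000 of the 3000 records inside a single transaction.
cur.execute("BEGIN")
cur.execute("DELETE FROM T WHERE eleven LIKE ?", ("x",))
cur.execute("COMMIT")
remaining = cur.execute("SELECT COUNT(*) FROM T").fetchone()[0]
print(remaining)  # 2000
```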
Insun,
This now looks quite strange. With your 10 indexes, each insert or
delete must update all eleven btrees. I would expect that the execution
times would be dominated by the index updates, and so should be quite
similar for an insert or a delete.
The only additional overhead I can think of for inserts is the need to
expand the file as the tables and indexes grow. Deletes simply
accumulate the released pages on a free list to be reused later, while
inserts must acquire additional space to store the new pages. For a
normal file system this is a very fast operation (simply writing beyond
the current end of the file), but for a :memory: database the
equivalent memory allocation might be slow, depending on the memory
manager. Is this a :memory: or a file-based database?
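The free-list behavior described here can be observed directly: with the default auto_vacuum=NONE, deleting rows moves pages onto an internal free list (visible via PRAGMA freelist_count) without shrinking the file. A small file-based sketch with throwaway data:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
cur = conn.cursor()
cur.execute("CREATE TABLE t (x text)")
with conn:  # fill enough pages to make the effect visible
    cur.executemany("INSERT INTO t VALUES (?)",
                    [("x" * 500,) for _ in range(2000)])
size_before = os.path.getsize(path)

with conn:
    cur.execute("DELETE FROM t")  # pages go to the free list, not the OS
freed = cur.execute("PRAGMA freelist_count").fetchone()[0]
size_after = os.path.getsize(path)
print(freed > 0, size_after == size_before)  # True True
```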
Dennis Cote
A B-tree index does a lot of work when inserting, since it grows by
splitting nodes and adding new root levels to maintain balance.
Deletions tend to be less intensive, and may even be implemented quite
simply by avoiding node merges altogether. One would therefore expect
insertions to consume more resources than deletions.
The large number of indices on this table means that index insertion
activity would be the dominant performance factor.
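A rough way to test this expectation is to time the two phases against the same heavily indexed table. This is only a sketch with made-up data, not the poster's workload, and absolute timings vary wildly by machine and build:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (a int, b int, c int, d text)")
for i, col in enumerate(["a", "b", "c", "d"], 1):
    cur.execute(f"CREATE INDEX idx{i} ON t({col})")  # every write touches these

rows = [(n, n * 2, n % 97, f"row{n}") for n in range(3000)]

t0 = time.perf_counter()
with conn:  # one transaction, committed on exit
    cur.executemany("INSERT INTO t VALUES (?,?,?,?)", rows)
insert_time = time.perf_counter() - t0

t0 = time.perf_counter()
with conn:
    cur.execute("DELETE FROM t WHERE a < ?", (1000,))
delete_time = time.perf_counter() - t0

remaining = cur.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"insert {insert_time:.4f}s, delete {delete_time:.4f}s, "
      f"{remaining} rows left")
```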