Ah! I found the fault in my test code.

The delete performance was measured incorrectly, so I re-examined it.
Delete still performs much better than insert, but the gap is now
reasonable.

[Cache: 1MB  ]  Delete 1000 recs / 3000 recs: 2.6 secs.   Insert 2000 recs: 17 secs.
[Cache: 100KB]  Delete 1000 recs / 3000 recs: 10.8 secs.  Insert 2000 recs: 28 secs.


On 6/22/06, Insun Kang <[EMAIL PROTECTED]> wrote:
Hi guys. Thank you all for your replies.

I think I should give you more details about my test bed.
The machine is an iPAQ h5550 (its CPU speed is about 400MHz).

Cache size = 500 pages * 2KB = 1MB
Cache size = 50 pages * 2KB = 100KB
Cache size = 25 pages * 2KB = 50KB

The test code is written in C, and the flow is as follows.

- The data table has 11 columns, with 5 single-column indices and 5
multi-column indices.

- Insert 3000 recs within a single transaction (begin / insert 3000
recs / commit):
       INSERT INTO MDS VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11)

       * I use the sqlite3_prepare() and sqlite3_bind_xxx() functions.
         The data-bind time can be ignored because all data to be
         bound are already loaded in memory before the transaction
         begins. A sketch of this loop is below.
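A minimal sketch of this insert loop in C (the rec_t struct is a
hypothetical stand-in for the preloaded data, only two of the eleven
binds are shown, and error handling is mostly elided):

    #include <sqlite3.h>

    typedef struct {            /* hypothetical in-memory record */
        const char *one;
        int three;
        /* ... the other 9 columns ... */
    } rec_t;

    #define NRECS 3000

    int insert_all(sqlite3 *db, const rec_t *recs)
    {
        sqlite3_stmt *stmt;
        int i, rc;

        sqlite3_exec(db, "BEGIN", 0, 0, 0);   /* single transaction */
        rc = sqlite3_prepare(db,
            "INSERT INTO MDS VALUES (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11)",
            -1, &stmt, 0);
        if (rc != SQLITE_OK) return rc;

        for (i = 0; i < NRECS; i++) {
            sqlite3_bind_text(stmt, 1, recs[i].one, -1, SQLITE_STATIC);
            sqlite3_bind_int(stmt, 3, recs[i].three);
            /* ... bind the remaining columns ... */
            sqlite3_step(stmt);               /* returns SQLITE_DONE */
            sqlite3_reset(stmt);              /* reuse the statement */
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }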

- Delete 1000 recs among the 3000 recs within a single transaction
(begin / delete 1000 recs / commit):
        DELETE FROM T WHERE eleven LIKE :1

       * I do not think sqlite3 uses a drop-and-rebuild scheme for
         this SQL statement. A sketch of this step is below.
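A corresponding sketch of the delete step (the pattern argument is
hypothetical; the actual LIKE value is not shown in this post):

    static int delete_matching(sqlite3 *db, const char *pattern)
    {
        sqlite3_stmt *stmt;
        int rc;

        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        rc = sqlite3_prepare(db, "DELETE FROM T WHERE eleven LIKE :1",
                             -1, &stmt, 0);
        if (rc != SQLITE_OK) return rc;

        sqlite3_bind_text(stmt, 1, pattern, -1, SQLITE_STATIC);
        sqlite3_step(stmt);   /* one statement removes all ~1000 matches */
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT", 0, 0, 0);
    }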

One possible scheme I can guess at is that sqlite3 removes recs only
from the data table, not from the indices (lazy index updates). But I
am not certain. Any ideas?

Thank you in advance.

-------------------------------------------
<create table & index>

create table T (
     one     text NOT NULL,
     two     text,
     three   int,
     four    text,
     five    int,
     six     int NOT NULL,
     seven   int,
     eight   int,
     nine    int,
     ten     int UNIQUE,
     eleven  text )

create index i1 ~ i5   (single-column indices)
create index mi1 ~ mi5 (multi-column indices: each consists of 2 or 3 columns)
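The exact index definitions are not given above, so the following is
only a hypothetical shape for i1 ~ i5 and mi1 ~ mi5 (the column
choices are illustrative); sqlite3_exec() runs each semicolon-separated
statement in turn:

    static const char *index_ddl =
        "CREATE INDEX i1  ON T(one);"            /* single-column */
        "CREATE INDEX i2  ON T(two);"
        /* ... i3 ~ i5 on other single columns ... */
        "CREATE INDEX mi1 ON T(one, six);"       /* multi-column */
        "CREATE INDEX mi2 ON T(two, five, ten);"
        /* ... mi3 ~ mi5 on other 2- or 3-column combinations ... */
        ;

    /* usage: sqlite3_exec(db, index_ddl, 0, 0, 0); */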


On 6/22/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
> Insun Kang wrote:
> > Hi.
> >
> > I tested big-delete performance and big-insert performance on a
> > Windows CE device in various cache size configurations
> > (1MB, 100KB, 50KB).
> >
> > Inserting 3000 records takes 23 secs, 43 secs, and 61 secs,
> > respectively, for the three cache size configurations.
> > However, deleting 1000 records among 3000 records takes about
> > 0.4 secs for all cache size configurations.
> >
> > Why does the delete operation outperform the insert operation, and
> > why is the delete operation independent of cache size?
> > I think the index-update costs are almost the same in both the
> > insert and delete operations.
> > Can anybody explain the reason?
> >
> Insun,
>
> I think that you must be doing your inserts as 3000 separate
> transactions. Try wrapping the inserts with a pair of "begin
> transaction"/"commit transaction" commands.
>
> I just wrote a test script to try doing something similar to what you
> described using the sqlite3 shell. I get times of 8 seconds for 64K +
> 128K inserts with one preexisting index, and 4 seconds to delete the
> first 64K records from that table. The script is:
>
>    create table t1(a integer, b text);
>    create table t2(a integer, b text);
>    create table t3(a integer, b text);
>    create table times(
>        id integer primary key,
>        t timestamp default current_time
>    );
>
>    -- create 64K records in t1 (one seed row, then 16 doublings)
>    begin;
>    insert into t1 values( random(), 'test');
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>    insert into t1 select random(), b from t1;
>
>    -- create 128K records in t2
>    insert into t2 select random(), b from t1;
>    insert into t2 select random(), b from t1;
>    commit;
>
>    -- index the random numbers
>    create index t3_a on t3(a);
>
>    -- insert all records into t3 with an index in one transaction
>    insert into times(id) values(1);
>    insert into t3
>        select * from t1
>        union
>        select * from t2;
>    insert into times(id) values(2);
>
>    -- delete one third of the records from t3 (those matching t1)
>    insert into times(id) values(3);
>    delete from t3 where a in (select a from t1);
>    insert into times(id) values(4);
>
>    -- display times
>    select 'inserts: ' ||
>        round(86400 *
>                (julianday((select t from times where id = 2))
>                - julianday((select t from times where id = 1))
>            ), 0) || ' seconds';
>    select 'deletes: ' ||
>        round(86400 *
>            (julianday((select t from times where id = 4))
>            - julianday((select t from times where id = 3))
>            ), 0) || ' seconds';
>
> This script produces the following results.
>
>    C:\Documents and Settings\DennisC>sqlite3 test.db3 <test.sql
>    inserts: 8.0 seconds
>    deletes: 4.0 seconds
>
> Note that my inserts into t3 all happen in one transaction. These rows
> all contain a random integer value and are indexed on that value.
>
> I'm inserting around 24K records per second and deleting around 16K
> records per second. This is compared to your rates of about 49
> inserts per second (with a 50KB cache size) and 2.5K deletes per
> second. I suspect that with a transaction around your inserts you
> will also see an insert rate of around 3K records per second.
>
> HTH
> Dennis Cote



--
Insun Kang