Insun Kang wrote:
Hi.

I tested big delete performance and big insert performance on a Windows CE
device with various cache size configurations (1MB, 100KB, 50KB).

Inserting 3000 records takes 23 sec, 43 sec, and 61 sec respectively for
those cache sizes. However, deleting 1000 of the 3000 records takes only
about 0.4 sec for all cache size configurations.

Why does the delete operation outperform the insert operation, and why is
the delete performance independent of the cache size? I would expect the
cost of updating the indices to be almost the same for inserts and deletes.
Can anybody explain the reason?

Insun,

I think that you must be doing your inserts as 3000 separate transactions. Try wrapping the inserts with a pair of "begin transaction"/"commit transaction" commands.
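
For example (a minimal sketch using a made-up table name, not your actual schema), the pattern is simply:

   -- hypothetical table standing in for your real one
   create table demo(a integer, b text);

   begin transaction;
   insert into demo values(1, 'first');
   insert into demo values(2, 'second');
   insert into demo values(3, 'third');
   -- ... remaining inserts ...
   commit transaction;

In autocommit mode each insert is its own transaction, and each transaction has to flush the journal and database files to stable storage, so you pay that cost 3000 times instead of once.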

I just wrote a test script, using the sqlite3 shell, to do something similar to what you described. I get times of 8 seconds to insert the 64K + 128K records with one preexisting index, and 4 seconds to delete 64K of those records from that table. The script is:

   create table t1(a integer, b text);
   create table t2(a integer, b text);
   create table t3(a integer, b text);
   create table times(
       id integer primary key,
       t timestamp default current_time
   );

   -- create 64K records in t1
   begin;
   insert into t1 values( random(), 'test');
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;
   insert into t1 select random(), b from t1;

   -- create 128K records in t2
   insert into t2 select random(), b from t1;
   insert into t2 select random(), b from t1;
   commit;

   -- index the random numbers
   create index t3_a on t3(a);

   -- insert all records into t3 with an index in one transaction
   insert into times(id) values(1);
   insert into t3
       select * from t1
       union
       select * from t2;
   insert into times(id) values(2);
   -- delete one third of the records from t3 (those that came from t1)
   insert into times(id) values(3);
   delete from t3 where a in (select a from t1);
   insert into times(id) values(4);

   -- display times
   select 'inserts: ' ||
       round(86400 *
               (julianday((select t from times where id = 2))
               - julianday((select t from times where id = 1))
           ), 0) || ' seconds';
   select 'deletes: ' ||
       round(86400 *
           (julianday((select t from times where id = 4))
           - julianday((select t from times where id = 3))
           ), 0) || ' seconds';

This script produces the following results.

   C:\Documents and Settings\DennisC>sqlite3 test.db3 <test.sql
   inserts: 8.0 seconds
   deletes: 4.0 seconds

Note that my inserts into t3 all happen in one transaction. These rows all contain a random integer value and are indexed on that value.

I'm inserting around 24K records per second and deleting around 16K records per second. This compares to your rates of about 49 inserts per second (with the 50KB cache size) and 2.5K deletes per second. I suspect that with a transaction around your inserts you will also see an insert rate of around 3K records per second.
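
Also, if you are setting the cache with PRAGMA cache_size, keep in mind that it is measured in pages rather than bytes. Assuming the default 1KB page size, your three configurations would map roughly as follows (the numbers below are just an illustration of that assumption):

   -- assumption: default 1024-byte pages, so pages ~= bytes / 1024
   -- (use one setting per test run)
   pragma cache_size = 1000;    -- roughly 1MB of cache
   pragma cache_size = 100;     -- roughly 100KB
   pragma cache_size = 50;      -- roughly 50KB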

HTH
Dennis Cote

