On 20 Aug 2009, at 2:33pm, pierr wrote:

>  We have a database with a very simple schema:
>    CREATE TABLE IF NOT EXISTS tblIndex(
>     frame_type  INT,
>     pts  VARCHAR(5),
>     ts_start  INT primary key,
>     ts_end  INT)
>
> And the application scenario is :
> 1. Every second usr will insert 2~50 records, and the ts_start
> fields of those records are always increasing.

If ts_start is an always increasing integer (i.e. no two records have  
the same value) then I would see it as your primary key and as your  
row id.  In other words, your structure is more like

  CREATE TABLE IF NOT EXISTS tblIndex(
     ts_start   INTEGER PRIMARY KEY,
     ts_end     INTEGER,
     frame_type INTEGER,
     pts        VARCHAR(5))

Inserting 50 records a second shouldn't be a problem.  If they're
generated in batches rather than individually, use BEGIN ... COMMIT
to speed up the insertion dramatically, but it should be fast enough
either way.
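The thread doesn't say which language the application is written in; as a sketch of the batching pattern, here it is using Python's built-in sqlite3 module (the batch contents are made up — any per-second buffer of rows works the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real application would use a file path
conn.execute("""CREATE TABLE IF NOT EXISTS tblIndex(
    ts_start   INTEGER PRIMARY KEY,
    ts_end     INTEGER,
    frame_type INTEGER,
    pts        VARCHAR(5))""")

# Batch one second's worth of records inside a single transaction:
# one commit (and at most one fsync) per batch instead of one per INSERT.
batch = [(ts, ts + 40, 1, "0") for ts in range(0, 2000, 40)]  # 50 fake records

with conn:  # issues BEGIN, then COMMIT on success (ROLLBACK on error)
    conn.executemany(
        "INSERT INTO tblIndex(ts_start, ts_end, frame_type, pts) "
        "VALUES (?, ?, ?, ?)",
        batch)

print(conn.execute("SELECT COUNT(*) FROM tblIndex").fetchone()[0])  # → 50
```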

> After 8 hours, there are at most
> 1_800_000 records. By setting the sync mode to off, the performance
> seems OK so far. And because the data of each record is only 16
> bytes, we may use some buffer even if the insert speed is not too
> fast.


Won't do any harm, but might not be worth doing either.  Test it with  
minimal programming first.

> 2. After 8 hours, usr will tell me to delete the oldest data by
> giving the upper bound ts_start, so I will do DELETE FROM tblIndex
> WHERE ts_start < upper_bound_ts_start. Deleting 90_000 records
> (which is half an hour's worth) out of the 1_800_000 now takes 17
> seconds. A little bit longer than expected.

What interface or API to SQLite are you using?  Or are you writing C
code and using the native library calls?  17 seconds is far longer
than expected if you did everything as you described above.  It should
be far faster, even without multiple threads/connections.

To test it, get your database ready to do one of the DELETE FROM  
commands, and have your program quit or abort.  Then open the database  
in the sqlite3 command-line application, and issue the exact DELETE  
FROM command your application would execute.  If the command-line  
program takes 17 seconds then the problem is in your data somewhere.   
If it doesn't, the problem is in your own application or in your use  
of the API you're using.
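As a rough baseline for what "far faster" means, here is the same shaped workload reproduced in Python's sqlite3 module (in-memory database, made-up row contents and cut-off — the point is only that an indexed delete of 90_000 rows out of 1_800_000 is nowhere near 17 seconds):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE IF NOT EXISTS tblIndex(
    ts_start   INTEGER PRIMARY KEY,
    ts_end     INTEGER,
    frame_type INTEGER,
    pts        VARCHAR(5))""")

# Populate 1_800_000 rows (about 8 hours of data in the poster's scenario).
with conn:
    conn.executemany(
        "INSERT INTO tblIndex VALUES (?, ?, ?, ?)",
        ((ts, ts + 1, 1, "0") for ts in range(1_800_000)))

upper_bound_ts_start = 90_000  # illustrative cut-off: the oldest half hour

t0 = time.perf_counter()
with conn:
    # ts_start is the INTEGER PRIMARY KEY, so this is a rowid range delete.
    conn.execute("DELETE FROM tblIndex WHERE ts_start < ?",
                 (upper_bound_ts_start,))
elapsed = time.perf_counter() - t0

remaining = conn.execute("SELECT COUNT(*) FROM tblIndex").fetchone()[0]
print(f"deleted 90_000 rows in {elapsed:.3f}s, {remaining} remain")
```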

> Any way to reduce this? We don't care if the records are synced to
> the hard disk immediately. We are thinking of starting a separate
> thread to do the delete, so as to make this call async. The thing I
> am not sure about is whether the (long-running) delete will impact
> the insert performance if they share the same connection? Or should
> I use separate connections for insert and delete? But in that way,
> do they need to be synced at the application level?

Whichever one of these is at fault, a delete command selecting on an  
indexed column and deleting 90000 records from a five column table  
should not take 17 seconds.

> 3. Search. SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ?
> AND ? - As ts_start is the primary key, the performance is OK for
> our needs now. I am thinking I should use a separate connection for
> search, right?

Your data requirements are not huge.  The system should be fast enough  
no matter whether you use one connection or many.
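For what the two-connection arrangement looks like, here is a sketch in Python's sqlite3 module — one connection for inserts, a second for range searches. The shared-cache in-memory URI stands in for the real database file, and the row contents are made up:

```python
import sqlite3

# Two connections to the same database (here a shared-cache in-memory DB;
# a real application would point both at the same file path).
uri = "file:tsdemo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True)
reader = sqlite3.connect(uri, uri=True)

writer.execute("""CREATE TABLE IF NOT EXISTS tblIndex(
    ts_start   INTEGER PRIMARY KEY,
    ts_end     INTEGER,
    frame_type INTEGER,
    pts        VARCHAR(5))""")
with writer:
    writer.executemany(
        "INSERT INTO tblIndex VALUES (?, ?, ?, ?)",
        ((ts, ts + 1, 1, "0") for ts in range(1000)))

# BETWEEN on the INTEGER PRIMARY KEY is a rowid range scan,
# so no extra index is needed for this query.
rows = reader.execute(
    "SELECT ts_start FROM tblIndex WHERE ts_start BETWEEN ? AND ?",
    (100, 109)).fetchall()
print([r[0] for r in rows])  # the ten values 100 through 109
```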

Simon.

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users