Simon Slavin-2 wrote:
> 
> If ts_start is an always increasing integer (i.e. no two records have  
> the same value) then I would see it as your primary key and as your  
> row id.  In other words, your structure is more like
> 
>   CREATE TABLE IF NOT EXISTS tblIndex(
>      ts_start INTEGER PRIMARY KEY,
>      ts_end   INTEGER,
>      frame_type       INTEGER,
>      pts              VARCHAR(5))
> 
> 
Thanks. I did not know that the order in which the fields are defined matters. This
is what I should have done.
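Concretely, the gain is less about field order than about the INTEGER PRIMARY KEY declaration: as noted above, SQLite treats such a column as an alias for the internal rowid, so no separate index is needed to look up or delete on ts_start. A minimal sketch (Python's stdlib sqlite3 module standing in for the C API; the inserted values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tblIndex(
        ts_start   INTEGER PRIMARY KEY,
        ts_end     INTEGER,
        frame_type INTEGER,
        pts        VARCHAR(5))
""")
conn.execute("INSERT INTO tblIndex VALUES (1000, 1040, 1, 'a')")

# ts_start is an alias for the rowid, so both columns return the same value
row = conn.execute("SELECT rowid, ts_start FROM tblIndex").fetchone()
print(row)  # (1000, 1000)
```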


Simon Slavin-2 wrote:
> 
> inserting 50 records a second shouldn't be a problem.  If they're  
> generated in batches rather than individually, use BEGIN ... COMMIT to  
> speed up the insertion dramatically but it should be fast enough  
> either way.
> 
> What interface or API to SQLite are you using ?  Or are you writing C  
> code and using the native library commands ?  17 seconds is far longer  
> than expected if you did everything as you described above.  It should  
> be far faster even without multiple threads/connections.
> 
> To test it, get your database ready to do one of the DELETE FROM  
> commands, and have your program quit or abort.  Then open the database  
> in the sqlite3 command-line application, and issue the exact DELETE  
> FROM command your application would execute.  If the command-line  
> program takes 17 seconds then the problem is in your data somewhere.   
> If it doesn't, the problem is in your own application or in your use  
> of the API you're using.
> 
>> Any way to reduce this ? We don't care if the records is synced to  
>> the hard
>> disk immediately. We are thinking start a seperated thread to do the  
>> delete
>> so to make this call to be async. The things I am not sure is  
>> whether the
>> (long time )delete will impact the insert performance if they share  
>> the same
>> connection? Or should I use a seperated connection for insert and  
>> delete?
>> But In this way, do they need be synced in application level?
> 
> Whichever one of these is at fault, a delete command selecting on an  
> indexed column and deleting 90000 records from a five column table  
> should not take 17 seconds.
> 
> 

I am sorry, I should have mentioned that the 17 seconds to delete 90000 records
was measured on a 350 MHz MIPS CPU. After changing to the schema you suggested,
it still takes 17 seconds to delete 90000 records.
On my 1.8 GHz Ubuntu desktop it takes 800 ms. So is this delete performance
the limit we can achieve? Are there any other options to improve it?
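One thing worth trying, per the BEGIN ... COMMIT advice above and since immediate durability is not required here, is wrapping the batch in a single transaction and relaxing PRAGMA synchronous. A hedged sketch (Python's stdlib sqlite3 module for brevity; in C the equivalent calls would be sqlite3_exec("BEGIN") / sqlite3_exec("COMMIT"), and the timings below are illustrative, not a benchmark of the MIPS board):

```python
import sqlite3
import time

# Manual transaction control; synchronous=OFF trades durability for speed,
# which is acceptable here since the records need not hit disk immediately.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("PRAGMA synchronous = OFF")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tblIndex(
        ts_start   INTEGER PRIMARY KEY,
        ts_end     INTEGER,
        frame_type INTEGER,
        pts        VARCHAR(5))
""")

# One transaction around the whole batch instead of one per INSERT
conn.execute("BEGIN")
conn.executemany("INSERT INTO tblIndex VALUES (?, ?, ?, ?)",
                 ((i, i + 40, 1, "p") for i in range(90000)))
conn.execute("COMMIT")

# The DELETE selects on the rowid-aliased primary key, so no table scan
t0 = time.time()
cur = conn.execute("DELETE FROM tblIndex WHERE ts_start < 45000")
print("%d rows deleted in %.3f s" % (cur.rowcount, time.time() - t0))
```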

BTW: I used the following options to build the library. Is there anything I
can explore here? (-O2 and -Os seem to make no big difference in performance.)
mips24k-linux-gcc -Os -fPIC -c *.c
mips24k-linux-gcc -shared  -o mips_libsqlite3.so.1 sqlite3.o 


_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
