I am working with SQLite in an embedded environment.  With synchronous
= FULL, large inserts are abysmal (though I do need the protection
that full synchronous offers).  Of course, what I call large may not
be what you call large.  Keep in mind that SQLite will create a
journal file roughly the size of the data you are moving.  Instead of
moving the data to a backup table, could you create a new table and
start dumping data there?  That is, have your program remember the
current table (DataLogX); when it comes time to roll the log over,
run "CREATE TABLE DataLog(X+1) ..." and insert into that from then
on.  A rough sketch follows.  Just one man's opinion.
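
Something like this, as a minimal sketch -- the table names and the
(ts, source, message) columns are placeholders for whatever your real
schema looks like, and your program would keep track of the current
table number itself:

   -- roll over: create the next log table with the same schema
   CREATE TABLE DataLog2 (ts INTEGER, source TEXT, message TEXT);
   -- point new INSERTs at DataLog2 in the application;
   -- the old table stays queryable until you no longer need it, then:
   DROP TABLE DataLog1;

The rollover is then just a CREATE and a DROP instead of copying every
row out and deleting it, so you avoid the large journal that the move
would generate.  If a query needs to span the rollover, it can UNION
the current and previous tables.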


On Fri, Jun 13, 2008 at 5:25 AM, Al <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I'm using sqlite to implement a fast logging system in an embedded system.
> Mainly for space, but also for performance reasons, I need to rotate the
> databases.
>
> The database is queried regularly and I need to keep at least $min rows in it.
>
> What I plan, is inside my logging loop, to do something like this.
>
> while(1) {
>    read_informations_from_several_sources();
>    INSERT(informations);
>
>    if(count > max) {
>       /* I want to move all the oldest rows into another database */
>       BEGIN;
>       INSERT INTO logs_backup
>            SELECT * FROM logs ORDER BY rowid LIMIT ($max - $min);
>
>       DELETE FROM logs WHERE rowid IN (SELECT rowid FROM logs ORDER BY rowid
>            LIMIT ($max - $min));
>       COMMIT;
>    }
> }
>
> rowid is an auto-incremented field.
> I am not an SQL expert, and would like to find the fastest solution to move
> the oldest rows into another database. Am I doing anything silly? Can it be
> improved?
>
> Thanks in advance.
>
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users