Actually my reason for writing into a separate database is more...
well, crude.  I tar several databases together, then encrypt using
OpenSSL.  Then an FTP-like program transmits the data to a central
server.  I must suspend writing into the database for the duration of
the tar operation, since tar does not abide by SQLite's file locking
rules.
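For what it's worth, that tar step might look like the sketch below in
Python (file names are made up; the caller is responsible for suspending
writes beforehand, and for the OpenSSL encryption and upload afterwards):

```python
import os
import tarfile

def archive_databases(db_paths, archive_path):
    """Bundle several SQLite database files into one tar archive.

    Writes must be suspended by the caller for the duration, since
    copying the raw files ignores SQLite's locking protocol.
    """
    with tarfile.open(archive_path, "w") as tar:
        for path in db_paths:
            tar.add(path, arcname=os.path.basename(path))
    return archive_path
```

The resulting archive can then be encrypted (e.g. piped through
`openssl enc`) and handed to the uploader, at which point database
writes can resume.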

Thanks for your input, any and all help is appreciated!

--
Rich

On Mon, May 19, 2008 at 11:50 AM, Ken <[EMAIL PROTECTED]> wrote:
> Rich,
>
> From your design it appears you are writing to a separate db while a 
> "download" is happening? I'm guessing that is to prevent read/write lock 
> contention, correct?
>
> It seems to me that any new data coming in will need to write, and you are 
> simply looking to read during a download operation while avoiding lock 
> contention and delays, correct?
>
> DownloadInfo table is used to keep track of the point where the last download 
> completed successfully.
>
> Data to download = last successful rowid to max rowid (i.e. a subset).
>
> One thought I had to avoid the contention: if this is a threaded 
> application, you could enable the shared cache and read_uncommitted 
> isolation. It might be a bit tricky in that you'll probably have to get the 
> "committed" data in a txn, then set read_uncommitted mode to avoid 
> waiting for locks.
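A sketch of that suggestion in Python - the table and file names are my
own, and it relies on both connections sharing the cache, which the
`cache=shared` URI flag provides within a single process:

```python
import os
import sqlite3

db = "demo_shared.sqlite"          # illustrative file name
if os.path.exists(db):
    os.remove(db)

# Both connections open the same file in shared-cache mode.
uri = "file:%s?cache=shared" % db
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
writer.execute("CREATE TABLE samples(val TEXT)")

reader = sqlite3.connect(uri, uri=True, isolation_level=None)
# Let this connection read past other connections' table write locks.
reader.execute("PRAGMA read_uncommitted = 1")

# The writer holds an open write transaction...
writer.execute("BEGIN")
writer.execute("INSERT INTO samples VALUES ('in flight')")

# ...yet the reader is not blocked (and sees the uncommitted row).
rows = reader.execute("SELECT val FROM samples").fetchall()

writer.execute("COMMIT")
reader.close()
writer.close()
```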
>
> Ken
>
> Rich Rattanni <[EMAIL PROTECTED]> wrote: Hi, I have a general design question. 
>  I have the following scenario...
>
> In an embedded system running Linux 2.6.2x I have an SQLite database
> constantly being updated with data acquired by the system.  I can't
> lose data (hence why I am using SQLite in the first place).  However,
> periodically I have to download the data contained within the database
> to a central server.  The system cannot stall during the download and
> must continue to record data.  Also, after the download I need to
> shrink the database size, simply because if the database is allowed to
> grow to its max size (~50MB) then every download thereafter would be
> 50MB, which is unacceptable.  I would simply vacuum the database, but
> this takes too much time and stalls the system.
>
> My solution is the following (still roughed out on scraps of paper and
> gray matter).
>
> Have two databases on the system at all times (data.sqlite.(x) and
> data.sqlite.(x+1)).
> All data is written into x.
> When a download is requested...
>     Mark the highest rowid of each table in database (x) in a table
> called DownloadInfo.
>     Begin logging data to (x+1).
> When the download is done (success or failure - downloads may be
> cancelled or time out)...
>    Attach x+1 to x.
>    Begin a transaction.
>    Delete all data in x from tables with rowid <= the rowid saved in
> DownloadInfo.
>    Move any data stored in x+1 to x.
>    If the download was successful...
>        Mark in x (in DownloadInfo) that the download was successful.
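A sketch of that merge step in Python - the schema is my own invention
(a single `samples` table plus `DownloadInfo(max_rowid, succeeded)`),
and I've guarded the delete on the download having succeeded, which I
take to be the intent:

```python
import sqlite3

def merge_after_download(live_db, overflow_db, download_ok):
    """Fold rows logged to the overflow database (x+1) back into the
    main database (x), discarding rows that were downloaded."""
    con = sqlite3.connect(live_db, isolation_level=None)
    con.execute("ATTACH DATABASE ? AS overflow", (overflow_db,))
    con.execute("BEGIN IMMEDIATE")
    # Highest rowid recorded when the download started.
    (last_id,) = con.execute(
        "SELECT max_rowid FROM DownloadInfo ORDER BY rowid DESC LIMIT 1"
    ).fetchone()
    if download_ok:
        # These rows are now safely on the central server.
        con.execute("DELETE FROM samples WHERE rowid <= ?", (last_id,))
        con.execute("UPDATE DownloadInfo SET succeeded = 1 "
                    "WHERE max_rowid = ?", (last_id,))
    # Rows written to x+1 during the download move back into x.
    con.execute("INSERT INTO samples SELECT * FROM overflow.samples")
    con.execute("DELETE FROM overflow.samples")
    con.execute("COMMIT")
    con.execute("DETACH DATABASE overflow")
    con.close()
```

Because x+1 is attached, the delete and the move happen in one
transaction, so a crash mid-merge loses nothing.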
>
> At the next powerup...
> Scan x.DownloadInfo to see if a download was successful...
>    Yes...
>        Attach x+1 to x.
>        Attach x+2 to x.
>        Begin a transaction.
>        Build new database x+2.
>        Move data from x to x+1.
>        Mark in DownloadInfo that the database has been deleted.
>        Commit.
>        Delete x (using the OS - unlink, perhaps).
>   No...
>        Do nothing.
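The power-up check might reduce to something like this (again with my
assumed `DownloadInfo` shape; the rebuild-and-rotate details are
elided):

```python
import os
import sqlite3

def cleanup_at_powerup(old_db, flag_db):
    """If the last download completed, the already-downloaded database
    file can simply be unlinked; otherwise leave everything in place."""
    con = sqlite3.connect(flag_db)
    row = con.execute(
        "SELECT succeeded FROM DownloadInfo ORDER BY rowid DESC LIMIT 1"
    ).fetchone()
    con.close()
    if row and row[0]:
        os.unlink(old_db)   # the data was delivered; reclaim the space
        return True
    return False            # download never finished; keep the file
```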
>
>
> So it's kinda complicated, but I think such things are necessary.  For
> instance, a vacuum is out of the question; it just takes too long.
> That's why the double-database scheme works well for deleting old
> databases.  I guess I want to stop here and leave some info out - that
> way I don't suppress any good ideas.
>
> And as always, I really appreciate any help I can get.  I tried to
> implement something similar, but I was copying an already-prepared
> SQLite database, which was not very reliable.  I guess another
> question, maybe one that solves this one... have any improvements to
> auto-vacuum been made?  Does anyone trust it, or can anyone attest to
> its fault tolerance?
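On the auto-vacuum question: incremental auto-vacuum lets SQLite hand
pages back a few at a time instead of in one long VACUUM stall. A quick
Python demonstration (file and table names made up; note the pragma
must be set before any tables exist, or be followed by a full VACUUM):

```python
import os
import sqlite3

db = "demo_vacuum.sqlite"
if os.path.exists(db):
    os.remove(db)

con = sqlite3.connect(db, isolation_level=None)
# Must precede table creation to take effect on a fresh database.
con.execute("PRAGMA auto_vacuum = INCREMENTAL")
con.execute("CREATE TABLE samples(payload BLOB)")
con.execute("BEGIN")
for _ in range(200):
    con.execute("INSERT INTO samples VALUES (zeroblob(4096))")
con.execute("COMMIT")

before = con.execute("PRAGMA page_count").fetchone()[0]
con.execute("DELETE FROM samples")
# Release the free pages; pass a page count to spread the work out
# over several small calls instead of one.  fetchall() makes sure
# the statement runs to completion.
con.execute("PRAGMA incremental_vacuum").fetchall()
after = con.execute("PRAGMA page_count").fetchone()[0]
con.close()
```

The file actually shrinks after `incremental_vacuum`, and calling it
with a small page count from an idle loop avoids the one big stall that
a full VACUUM causes.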
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>