I think you're trying to overcome the read/write concurrency issue with 
sqlite, correct?  You want to be able to copy data off (e.g. via FTP) 
while receiving new data into an overflow database. 

Main.db      = contains download.db and is an attachment point for ancillary DBs.
wrtdb_###.db = always write to this location.

When a download is needed, simply close the current wrtdb_###.db, create a 
new wrtdb_###.db, and add the new wrtdb to the tracking table in Main.db.... 
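
Roughly, a sketch of the rotation in Python (the registry table, schema, 
and filenames here are placeholders I made up, not anything prescribed):

import sqlite3

MAIN_DB = "main.db"

def rotate_write_db():
    # Callers close the old wrtdb connection first, then call this to
    # create and register the next one in the sequence.
    main = sqlite3.connect(MAIN_DB)
    main.execute("CREATE TABLE IF NOT EXISTS wrtdb_registry "
                 "(seq INTEGER PRIMARY KEY, filename TEXT)")
    next_seq = main.execute(
        "SELECT COALESCE(MAX(seq), 0) + 1 FROM wrtdb_registry").fetchone()[0]
    filename = "wrtdb_%03d.db" % next_seq
    # Create the new write database with whatever schema the writers need.
    wrt = sqlite3.connect(filename)
    wrt.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, payload BLOB)")
    wrt.commit()
    # Register it so the reader side knows where new data is going.
    main.execute("INSERT INTO wrtdb_registry (seq, filename) VALUES (?, ?)",
                 (next_seq, filename))
    main.commit()
    main.close()
    return wrt   # all new writes go to this connection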

It's not clear to me what your requirements are trying to accomplish.

Do you need to keep any of this data, if so for how long?
Do you need to be able to read the older data? (except for downloads)
Do you need to be able to subset the data? 

Ken




Rich Rattanni <[EMAIL PROTECTED]> wrote:
> Perhaps I've missed something or don't understand it well. Are your
> databases all in the same file, or do you have 2 separate sqlite
> sessions to 2 different database files? In the first scenario you
> must be very fast, and in the second you can switch from one database
> to the other, unbind (close) the sqlite connection, do FTP or whatever
> you want, and delete the database file.
>
Yes, in my code I was planning on two database files on the filesystem,
x and x+1.  During a download I would drop any newly generated data into
x+1 (that is to say, the system continues running normally while a
download is in progress).

> You attach x+1 to x. Why do you need it? If you delete old records on
> x after the FTP, you can trash x, work with x+1, and recreate an empty x.

I can see where I may not need it. I was just thinking that when the unit
powers back up I need to know which database is the 'main' database and
which is the 'overflow'.  I would use the rule that x is the main and x+1
is the overflow.  Strictly policy.  In case it is unclear, x and x+1 refer
to the actual filenames of the databases on disk, so I would have....
flags.sqlite.0  <- Main
flags.sqlite.1  <- Overflow
***After download and next power up***
flags.sqlite.1  <- Main
flags.sqlite.2  <- Overflow
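
The power-up rule itself would be something small like this (a sketch;
it assumes the files sit in the working directory, and the helper name
is mine):

import glob

def find_main_and_overflow(prefix="flags.sqlite."):
    # Lowest surviving number is Main, the next number up is Overflow.
    nums = sorted(int(f[len(prefix):]) for f in glob.glob(prefix + "*")
                  if f[len(prefix):].isdigit())
    base = nums[0] if nums else 0   # fresh system starts at .0
    return prefix + str(base), prefix + str(base + 1)

main_db, overflow_db = find_main_and_overflow()
# after a completed download: e.g. ('flags.sqlite.1', 'flags.sqlite.2')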


> I think you only need 2 databases: while you add data to A, you copy
> and delete B. Then switch A and B. Perhaps you need 3 databases, and
> separate the download and… On the other side you can attach the
> databases and reconstruct one big database.
>
Ah, the design process.... I thought I had a good reason for my switching
policy, but looking back, perhaps it is overly complex.  My original design
was a two-database scheme, but as mentioned, I thought the filename was a
slick way of determining which database was the primary (of course a simple
table in each database could do the same; I would query and update it to
record which is Main and which is Overflow).
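
For reference, the table-based alternative would just be a one-row stamp
inside each file, something like this (table and column names are made up):

import sqlite3

def mark_role(path, role):
    # Stamp a database file as 'Main' or 'Overflow' from the inside,
    # instead of encoding the role in the filename.
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS db_role (role TEXT NOT NULL)")
    db.execute("DELETE FROM db_role")            # keep at most one row
    db.execute("INSERT INTO db_role VALUES (?)", (role,))
    db.commit()
    db.close()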

Oh, that's right, I actually remember now why I implemented this the way
I did.  The system has file size constraints on the amount of data stored
in the database, and downloads may be interrupted.  In the event of a
cancel I wanted all the data to be in one database, hence the copy of data
from X+1 back into X.  I figured this works well because when I move data
from X+1 to X, I can check whether the storage constraints have been
violated and clear old data if necessary.
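
The copy-back itself is just an ATTACH plus INSERT ... SELECT; roughly
this sketch (the 'samples' table, the threshold, and the trim batch size
are stand-ins for my real schema and constraint):

import sqlite3

MAX_DB_BYTES = 4 * 1024 * 1024   # stand-in for the real size constraint

def bytes_in_use(db):
    # Deleted rows free pages but don't shrink the file (that's what
    # VACUUM is for), so measure used pages instead of file size.
    page_size = db.execute("PRAGMA page_size").fetchone()[0]
    page_count = db.execute("PRAGMA page_count").fetchone()[0]
    freelist = db.execute("PRAGMA freelist_count").fetchone()[0]
    return (page_count - freelist) * page_size

def merge_overflow_into_main(main_path, overflow_path):
    # On a cancelled download, pull X+1's rows back into X, then
    # enforce the size constraint by trimming the oldest rows.
    db = sqlite3.connect(main_path)
    db.execute("ATTACH DATABASE ? AS overflow", (overflow_path,))
    # Assumes both files carry the same 'samples' schema.
    db.execute("INSERT INTO samples SELECT * FROM overflow.samples")
    db.commit()
    db.execute("DETACH DATABASE overflow")
    while bytes_in_use(db) > MAX_DB_BYTES:
        cur = db.execute(
            "DELETE FROM samples WHERE rowid IN "
            "(SELECT rowid FROM samples ORDER BY rowid LIMIT 100)")
        db.commit()
        if cur.rowcount == 0:
            break   # table drained; trimming can't help further
    db.close()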

Also, I wanted to save the deletion and recreation of databases for
the next powerup, because the device is battery powered.  I have
a backup battery that allows me to run briefly after power is
removed, but this time is limited.  I figured doing this operation at
powerup is the safest bet (in the worst case, the power is removed
and I am back to relying on the backup battery, but on average the
battery is not removed immediately after insertion).

At the heart of the matter is the fact that vacuums are too costly
(time-wise), and while the device is not 'real time' per se, I must
service requests from another processor fairly quickly (<1 sec).

> If you need compression you can check any of the lzp, lzo or lzrh
> algorithms; they are very fast and compress the files "on the fly".
> These compression algorithms work well with text data and poorly with
> binary data. Take care, because sqlite already compresses data in the
> database files.

I can't reveal the nature of the data I am compressing, but on average,
with gzip, I see a 50-70% reduction in size.
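
(The gzip step itself is trivial once the database file is closed; a
sketch with placeholder filenames:)

import gzip
import shutil

def compress_for_upload(db_path):
    # gzip a *closed* database file before FTPing it off the device.
    gz_path = db_path + ".gz"
    with open(db_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path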


Thanks for your reply. I implemented something similar to this, but I end
up with corrupt databases if a download is performed, and power is removed,
and the sun and the stars align....blah blah blah.  In a word, it's buggy.
I think going behind sqlite's back and moving database files around with
OS calls is what is getting me.  I am up against a wall to design a
solution that....works.  Stupid input specs!  Anyway, that's why I posted
to the list, and I really do appreciate your input.
_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
