Three thoughts:

*First*, buy a bulk amount of cheap USB keys and start writing your data
to a key instead of the card your OS lives on.  I'm not 100% clear on how
a USB key handles wear across different parts of its memory, but you could
partition off chunks of space and just move the mount point to a different
partition periodically.  I picked up an EXTREMELY small 16 GB USB key
(smaller than a freak'n quarter) for like $30.  That particular key's life
span is going to be spent in a read-only state in my car stereo.  I'm sure
they sell smaller (volume size, not physical) for cheaper.

*Second*, instead of writing to the SD card 8640 times a day, why not once
every hour?  Since losing an hour's worth of data isn't significant to you,
and since the Pi has a decent chunk of memory (if you've got the B version),
you could just throw the data you're accumulating into an in-memory
database, then at the top of the hour, dump the data to the SD via the
Backup API using a date/time stamp as the file name, clear the in-memory
data, and start over again.  You'll save significant writes; barring swaps
to disk by the OS, you'll be writing to the card only 24 times a day
instead of 8640 times.

The only changes you'd need to make to your code are to create the table in
memory, or to keep a 'template' database sitting around that you can load
into memory via the Backup API.  Create the in-memory database with
":memory:" as the file name (excluding quotes, INCLUDING colons), and
you're off to the races.
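A minimal sketch of that flow in Python (the `readings` table name, schema, and file names are just placeholders, and `sqlite3.Connection.backup` needs Python 3.7+):

```python
import sqlite3
from datetime import datetime

# Accumulate readings in an in-memory database (schema is an assumption).
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE readings (ts TEXT, temp REAL)")

# ...every 10 seconds, just an insert into RAM, no SD card touched:
mem.execute("INSERT INTO readings VALUES (?, ?)", ("2014-02-10T08:40:00", 21.5))

# At the top of the hour: dump everything to a time-stamped file via the
# Backup API, then clear the in-memory table and start over.
stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
disk = sqlite3.connect(f"readings-{stamp}.db")
with disk:
    mem.backup(disk)        # one bulk write instead of 360 synced inserts
disk.close()
mem.execute("DELETE FROM readings")
```

The hourly file-per-dump naming also means a power failure mid-backup can only corrupt that one hour's file, never the data already on disk.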

The other thing I'd look into: because of the varying speeds of SD cards
and the volume of information you could be writing, you may run into a race
where you call the Backup API but, due to write speeds, something else
writes to the live in-memory database while the backup is still in
progress, and that new data gets expunged when the DELETE command is
executed afterwards.

So what I would do is:
- use a temporary variable to hold the current in-memory database
connection (essentially NewTempDatabase = OldLiveDatabase),
- set the OldLiveDatabase variable to nil WITHOUT freeing it (the
temporary variable now owns the connection),
- re-create a new :memory: database assigned to OldLiveDatabase,
- recreate/reload the template schema against OldLiveDatabase,
- call the Backup API on the temporary variable.
- I'm unsure if you'll get an event when the backup is complete; however,
once the backup finishes, you should be able to close the database, and
then the object will free itself as usual.
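The swap above, sketched in Python (the names are illustrative; closing the old connection stands in for the free step):

```python
import sqlite3

SCHEMA = "CREATE TABLE readings (ts TEXT, temp REAL)"  # the 'template' schema

live = sqlite3.connect(":memory:")
live.execute(SCHEMA)
live.execute("INSERT INTO readings VALUES ('2014-02-10T08:40:00', 21.5)")

def rotate(old_live):
    """Swap in a fresh in-memory database, then back up the old one."""
    fresh = sqlite3.connect(":memory:")
    fresh.execute(SCHEMA)               # recreate/reload the template
    disk = sqlite3.connect("hourly-dump.db")
    with disk:
        old_live.backup(disk)           # new inserts go to `fresh`, untouched by this
    disk.close()
    old_live.close()                    # the old connection frees itself as usual
    return fresh

live = rotate(live)                     # `live` now points at the fresh, empty database
```

Because new writes land in the fresh database the moment the swap happens, nothing can sneak into the old one mid-backup, so no DELETE (and no race) is needed at all.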

So for a short period of time, you'll have two in-memory databases: one
containing the previous hour's worth of data, and a fresh, new one.  The
only issue I can see coming up is if you're re-opening the database each
time you add a new record, which would be really bad to begin with (very
expensive to do, though since you're only writing once every 10 seconds,
not such a big deal I suppose), and which breaks entirely with :memory:,
since every open creates a brand-new, empty database.  If your functions
use a globally accessible database connection, or you're passing the
connection into your functions, etc., then swapping in a memory database
is straightforward.

*Third*, if you're thinking about using the second option, throw the backup
at a network drive, thereby ELIMINATING writes to the SD card by your
application.


On Mon, Feb 10, 2014 at 8:40 AM, Clemens Eisserer <linuxhi...@gmail.com> wrote:

> Hi,
>
> I would like to use sqlite for storing temperature data acquired every
> 10s running on my raspberry pi.
> As my first SD card died within a week with this workload, I am
> looking for opportunities to reduce write operations triggered by
> fsyncs to flash.
> For me losing 1h of data at a power failure isn't an issue, however
> the DB shouldn't be corrupt afterwards.
>
> I found the pragma "synchronous", which when set to "NORMAL" does seem
> to do exactly what I am looking for - when sqlite is used in WAL mode.
> Am I right that with this configuration, fsync is only executed very
> seldom?
>
> > In WAL mode when synchronous is NORMAL (1), the WAL file is synchronized
> before each checkpoint
> > and the database file is synchronized after each completed checkpoint
> and the WAL file header is synchronized
> > when a WAL file begins to be reused after a checkpoint, but no sync
> operations occur during most transactions.
>
> Thank you in advance, Clemens
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>