On Tue, 7 Sep 2004, Guillaume Fougnies wrote:

>Mon, Sep 06, 2004 at 11:56:21PM -0700: Darren Duncan wrote:
>>
>> What you probably saw with the 3ms is the time between when you
>> issued the insert command and when control was returned to your app,
>> but the new record was simply in RAM and not on disk.  The operating
>> system would have written it to the disk some time later.  So in
>> other words, the time is so much faster because the slower action
>> actually did something but the faster action did nothing during the
>> time.  The main risk is that your app is thinking the data is saved
>> at a certain point in time, but it actually isn't. -- Darren Duncan
>
>Using synchronous=OFF is a design choice, not a risk.
>For non-mission critical databases, you can easily have a "good"
>data reliability after a crash using a journaling FS and incremental
>backups.


Only so long as the data is slow changing. I wouldn't like to trust my
data to software built with that attitude.

And what's a few milliseconds between friends if it results in more
reliable data? If changes are so frequent that performance is vital,
then batching updates makes sense. If updates are so infrequent as to make
batching unfeasible, then single insert performance should not be a
problem in the grand scheme of things.
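
Something along these lines is what I have in mind (an untested sketch,
with the database file and table names just placeholders, and assuming
the table already exists). Wrapping the inserts in one transaction means
the journal is synced once per COMMIT instead of once per row, so you
keep the default synchronous behaviour and still get decent throughput:

  #include <stdio.h>
  #include <sqlite3.h>

  int main(void)
  {
      sqlite3 *db;
      sqlite3_stmt *stmt;
      int i;

      if (sqlite3_open("test.db", &db) != SQLITE_OK) {
          fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
          return 1;
      }

      sqlite3_exec(db, "BEGIN", 0, 0, 0);
      sqlite3_prepare(db, "INSERT INTO t VALUES(?)", -1, &stmt, 0);
      for (i = 0; i < 1000; i++) {
          sqlite3_bind_int(stmt, 1, i);    /* bind the row value */
          sqlite3_step(stmt);              /* execute the insert */
          sqlite3_reset(stmt);             /* reuse the prepared statement */
      }
      sqlite3_finalize(stmt);
      sqlite3_exec(db, "COMMIT", 0, 0, 0); /* one sync for the whole batch */

      sqlite3_close(db);
      return 0;
  }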


>If you want the speed and the fs sync, you probably need the hardware
>(less and less expensive).


Hmm, I might try some benchmarks of SQLite on different filesystems. I
think results for SQLite on an ext3 filesystem with full data journaling
(data=journal) should be interesting. A project for the weekend, maybe?
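
If I get to it, the test would be something like this rough sketch (file
and table names made up, row count arbitrary, table assumed to exist):
time a run of autocommit inserts so the per-row sync cost of the
filesystem shows up, then repeat the same run on each mount option and
with PRAGMA synchronous toggled:

  #include <stdio.h>
  #include <sys/time.h>
  #include <sqlite3.h>

  #define N 500

  static double now(void)
  {
      struct timeval tv;
      gettimeofday(&tv, 0);
      return tv.tv_sec + tv.tv_usec / 1e6;
  }

  int main(void)
  {
      sqlite3 *db;
      char sql[64];
      double t0, t1;
      int i;

      sqlite3_open("bench.db", &db);
      sqlite3_exec(db, "PRAGMA synchronous = FULL", 0, 0, 0); /* or OFF, to compare */

      t0 = now();
      for (i = 0; i < N; i++) {
          /* each INSERT commits on its own, so it pays a sync every time */
          snprintf(sql, sizeof sql, "INSERT INTO t VALUES(%d)", i);
          sqlite3_exec(db, sql, 0, 0, 0);
      }
      t1 = now();

      printf("%d single-row inserts: %.2f s (%.1f ms each)\n",
             N, t1 - t0, (t1 - t0) * 1000.0 / N);
      sqlite3_close(db);
      return 0;
  }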


>
>--
>Guillaume FOUGNIES
>

-- 
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
     X                           - AGAINST MS ATTACHMENTS
    / \
