On Jul 30, 2018, at 4:51 AM, Rob Willett <rob.sql...@robertwillett.com> wrote:
> 
> The system has been running for 2-3 years

Has performance changed over that span?  Which direction?

> we have taken the opportunity to try and prune the database from 60GB down to 
> 600MB.

SQLite’s speed is only weakly affected by database size.  A starting guess is 
that query time scales as log2(N), where N is the file size, since tables and 
indexes are stored as B-trees.

Since your data access didn’t get 10x faster from your 100x size drop, you can 
fairly guess that the speed problem isn’t due to the inherent time required to 
traverse tree-based data structures in SQLite.
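
To put rough numbers on that guess: log2(60 GB) ≈ log2(6.4 × 10^10) ≈ 36, while 
log2(600 MB) ≈ log2(6.3 × 10^8) ≈ 29, so even in the best case the 100x shrink 
buys only about 20% shallower tree lookups, nowhere near a 100x speedup.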

> The system has a restriction of IOPS as it's a Virtual Private Server. 
> Technically it's running Ubuntu 16.04 under OpenVZ. We can get 69MB/sec with 
> a disk to disk copy, which isn't brilliant if we had sustained disk traffic 
> which we don’t.

I am sensing a spinning disk.  (Otherwise, throughput should be a lot higher.)

SQLite takes data durability (the D in ACID) more seriously than most software: 
by default it flushes each transaction to disk before proceeding with further 
writes to that table.

A flush to disk takes a full disk rotation, and due to the way SQLite’s journal 
operates, each transaction requires two flushes.  That means that with a 7200 
RPM disk, you can get a maximum of 60 transactions per second per table with 
SQLite.
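
(Arithmetic: 7200 RPM ÷ 60 = 120 rotations per second, and 120 rotations/s ÷ 
2 flushes per transaction = 60 transactions per second.)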

Sound familiar?

If I’ve guessed the problem correctly, the solutions are:

1. Batch multiple writes in a transaction.

2. Switch to an SSD.

3. Use multiple tables and/or multiple DB files.  In your case, I’d suggest one 
SQLite DB per sensor, with one thread per sensor, each of which keeps one of 
the SQLite DBs open continuously.  That way, a blocked DB conn won’t block any 
other writers.

Those solutions are listed from easiest and cheapest to apply to most costly to 
implement.
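
For #1, here is a minimal sketch of what batching might look like, using 
Python's sqlite3 module; the readings table and its columns are invented purely 
for illustration:

    import sqlite3

    conn = sqlite3.connect("readings.db")

    # Hypothetical schema, for illustration only.
    conn.execute("CREATE TABLE IF NOT EXISTS readings"
                 " (sensor_id INTEGER, ts TEXT, value REAL)")

    rows = [(1, "2018-07-30T04:51:00", 42.0),
            (1, "2018-07-30T04:51:05", 42.5),
            (2, "2018-07-30T04:51:00", 17.3)]

    # One explicit transaction around the whole batch: the journal gets
    # flushed per batch rather than per INSERT.
    with conn:
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

    conn.close()

The "with conn:" block commits once at the end, so the whole batch pays the 
same two flushes that a single-row transaction would.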

> Nothing in the log files gives us any concern

Have you tried SQLite’s new .expert feature?

   https://sqlite.org/cli.html#index_recommendations_sqlite_expert_
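
Roughly, in the sqlite3 shell you turn it on and then type the query you care 
about; the shell replies with any CREATE INDEX statements it thinks would help, 
plus the plan it would use with them, without running the query or creating 
anything.  (The query below is again an invented example.)

    sqlite> .expert
    sqlite> SELECT value FROM readings WHERE sensor_id = 17 ORDER BY ts;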

> dropping a very large table is really, really, really slow.

If you put your main data table in a file of its own, you can quickly “drop” 
the table by just closing the DB and removing the DB file from disk.  

When you then recreate the DB file with a fresh schema, it’s effectively 
defragged/vacuumed as well.
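
A sketch of that pattern in Python, with invented file and table names:

    import os
    import sqlite3

    DB_PATH = "main_data.db"   # hypothetical: the big table lives alone in this file
    SCHEMA = "CREATE TABLE readings (sensor_id INTEGER, ts TEXT, value REAL)"

    def reset_main_table(conn):
        """'Drop' the big table by deleting its dedicated DB file, then
        recreate it empty.  The fresh file is as compact as a VACUUMed one."""
        conn.close()           # every connection to the file must be closed first
        os.remove(DB_PATH)
        new_conn = sqlite3.connect(DB_PATH)
        new_conn.execute(SCHEMA)
        new_conn.commit()
        return new_conn

From the rest of your application you would reach this table either through its 
own connection, as above, or by ATTACHing the file to your main DB.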