2015-07-03 5:51 GMT+07:00 Simon Slavin <slavins at bigfraud.org>:

>
> On 2 Jul 2015, at 11:16pm, Rob Willett <rob.sqlite at robertwillett.com>
> wrote:
>
> > We process this XML feed and pull out chunks of the XML, process it and
> > update our database with it. This database is currently 16GB in size. Our
> > concern is the time taken to process each file every five minutes. It has
> > been getting steadily longer: it started out at around 6 seconds
> > and is now around 22-24 seconds. Whilst we expect processing time to
> > grow, we are getting concerned about the performance and decided to have
> > a look.
>
> This suggests fragmentation.  There can be fragmentation at various levels
> for a SQLite database.  Fortunately it's simple to get rid of it.  First,
> take some timings.
>
> Use the '.dump' and '.read' commands from the command-line tool to
> create a new database.  It will be created with each row in each table in
> primary-key order.  Once you've done that, defragment the disk the database
> is stored on using your OS tools (if possible).  Then take the same
> timings and see whether anything improved.
>
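
For anyone following along, a minimal sketch of the rebuild Simon describes,
assuming the database file is called traffic.db (a placeholder name) and the
sqlite3 shell is on the PATH:

    sqlite3 traffic.db .dump > dump.sql    # write the whole database out as SQL text
    sqlite3 rebuilt.db ".read dump.sql"    # replay it into a fresh, unfragmented file
    mv rebuilt.db traffic.db               # swap in the rebuilt copy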

Isn't VACUUM the ultimate anti-fragmentation tool, though, in particular
when combined with wal_checkpoint()? Or am I missing something?
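
Concretely, the in-place route I have in mind would be something like this
(a sketch; it assumes the database is in WAL mode, hence the checkpoint first):

    PRAGMA wal_checkpoint(TRUNCATE);  -- fold the WAL back into the main db file
    VACUUM;                           -- rebuild the file, repacking pages in order

One caveat I'm aware of: VACUUM needs enough free disk space for a temporary
copy of the database, which is worth remembering with a 16GB file.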
