On 7 Jul 2010, at 10:03am, Igor Sereda wrote:

>> It's never time to VACUUM a database.
> 
> This is an interesting statement. In our application, all tables get
> heavily fragmented with time (99% or more). I was considering VACUUM
> as a means to defragment, and, presumably, improve search performance.
> Was I misguided, and search performance does not degrade significantly
> with the increased fragmentation of the DB?
> 
> (I guess it well might not on an SSD disk, but on a conventional
> rotational disk, pager could read several pages ahead with one seek -
> but does it?)

You are asking the correct questions, but only you can provide the answers.  It 
depends on things like what proportion of pages are taken up with indexes, how 
'clumpy' your data is, and so on.  Fragmentation also has large effects on some 
filesystems (e.g. NTFS) but almost none on others (e.g. ext3).  I don't think 
it's possible to give any sort of 'standard answer' to that question which would 
help the 'standard SQLite user' with a 'standard schema'.

> It would be great to see a performance comparison, if anyone has ever done one.

This mailing list tends to gather many examples of premature optimization.  The 
sort of computing we do involves so many different operations and subsystems 
that it's easy to pick one you know some technical detail about (for example, 
that it's faster to read the same sector twice) and work out how to optimize it 
in an ingenious way.  However, it often turns out that that whole operation is 
only a very minor part of the total load on the computer, and that the entire 
project already runs fast and small enough that no optimization is really needed 
anyway.  You then waste more time programming and debugging the optimization 
than you ever save by it.

Simon.
_______________________________________________
sqlite-users mailing list
[email protected]
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users