I've just updated my LMS to use the latest DBD::SQLite (1.58) and could
run some benchmarks, then downgrade to 1.34 and re-run them.
The question is how to benchmark (re-scanning the library is simple, but
not really representative of normal use) - how did you measure the 20%
performance gain?

I only tested the scan. It's very heavy on database work, with a good mix of reads and writes. If scanning the collection is 20% faster, then something has improved - optimized query paths, data processing, I/O, or whatever.

Running this in a loop

Code:
--------------------
      while true; do ./bench-lms.sh; done
--------------------

puts 100% load on the server and should be fully DB-bound.
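
For what it's worth, the actual contents of bench-lms.sh weren't posted. A minimal sketch of what such a script could look like, assuming the LMS command-line interface is listening on its default port 9090 and its rescan command is used to drive the scan:

Code:
--------------------
#!/bin/sh
# Sketch of a scan benchmark: kick off a rescan through the LMS CLI
# and time it until the scanner reports it is done. Host, port and
# polling interval are assumptions, not the real bench-lms.sh.
HOST=localhost
PORT=9090

START=$(date +%s)

# Start a rescan, then close the CLI connection
printf 'rescan\nexit\n' | nc "$HOST" "$PORT" > /dev/null

# Poll until "rescan ?" no longer reports a scan in progress
while printf 'rescan ?\nexit\n' | nc "$HOST" "$PORT" | grep -q 'rescan 1'; do
    sleep 5
done

echo "Scan took $(( $(date +%s) - START )) seconds"
--------------------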

Agreed, benchmarking is difficult to do correctly. Running this kind of stuff in a loop would likely only hit SQLite's buffers - which is not representative either. In general I've had little reason to complain about my setup in day-to-day use. But I know that e.g. some of Erland's plugins are extremely heavy on database work. If you're running one of those, benchmarking them might be interesting, too.
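
If you wanted to take at least the OS page cache out of the equation between iterations, something like this could go into the loop (Linux-only, needs root; it doesn't touch SQLite's own in-process cache, and whether a cold cache is any more representative of normal use is debatable):

Code:
--------------------
# Flush dirty pages and drop the kernel page cache so the next
# iteration has to re-read the database files from disk (Linux, as root).
sync
echo 3 > /proc/sys/vm/drop_caches
--------------------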

One other reason I was looking into updating SQLite is that it supports new fulltext-indexing features I'd like to leverage.
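
To illustrate the kind of thing newer fulltext modules offer (this is FTS5 syntax, which only exists in more recent SQLite builds; whether these are the particular features meant here is an assumption, and the table and data are made up):

Code:
--------------------
# FTS5 sketch via the sqlite3 shell (needs a build with FTS5 compiled in)
sqlite3 /tmp/fts-demo.db <<'SQL'
CREATE VIRTUAL TABLE tracks_fts USING fts5(title, album, artist);
INSERT INTO tracks_fts VALUES ('Money for Nothing', 'Brothers in Arms', 'Dire Straits');
-- prefix match plus bm25() relevance ranking (bm25() is FTS5-only)
SELECT title FROM tracks_fts WHERE tracks_fts MATCH 'noth*' ORDER BY bm25(tracks_fts);
SQL
--------------------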

--

Michael
