> The main bottleneck during a scan is disk I/O (see 'this post')
> ...
> Memory isn't really important (since the scan mostly writes to the
> database, making caching impossible)

I'm sorry to say that this is oversimplified. Many I/O issues are addressed by using more memory (caching, buffers). There are too many variables you aren't taking into account:

- memory baseline: are we talking about a NAS or an old Pi with 512 MB of RAM, or a PC with gigabytes of it?
- are we talking about an SSD vs. a USB2-connected HD?
- I/O of what? Music file storage or the DB cache location?
- scanner plugins doing remote lookups
- configuration of those plugins to use local caching or not
- memory/caching can make a huge difference in I/O
- number of playlists and playlist sizes (an important factor for the FTS indexing)
- size of your online library (again with playlists of thousands of tracks)

Ok, you could say that doing online lookups (library, covers, or whatever) is I/O, too. But that's not what you meant.

The "maximum memory" option was something I introduced as the result of a lot of scanner profiling. You're right that it mostly addressed the I/O bottleneck, but it did so by throwing more memory at the scanner. Memory definitely makes a difference. If you look at the code you'll see a lot of tweaks that configure the DB connection differently in the scanner vs. the server: more memory is used in the scanner to limit writes, while in the server it's used to limit reads from disk.
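To illustrate the idea (not the actual Squeezecenter code, which has its own DB layer): with an SQLite-style database you can spend memory very differently depending on the role. The pragma values below are invented for the sketch; only the general technique (write-side buffering in the scanner, read-side caching in the server) reflects what's described above.

```python
import sqlite3

def connect(db_path, role):
    """Open a DB connection tuned for its role.

    Hypothetical sketch: the real scanner/server use their own settings;
    the pragma values here are illustrative, not taken from the code.
    """
    conn = sqlite3.connect(db_path)
    if role == "scanner":
        # Scanner: spend memory to buffer writes and avoid hitting disk
        # on every insert while the library is being rebuilt.
        conn.execute("PRAGMA journal_mode = MEMORY")
        conn.execute("PRAGMA synchronous = OFF")
        conn.execute("PRAGMA cache_size = -65536")    # ~64 MB page cache
    else:
        # Server: spend memory on a larger read cache so lookups
        # (browsing, searches) don't go back to disk.
        conn.execute("PRAGMA cache_size = -131072")   # ~128 MB page cache
    return conn
```

The point of the sketch is just that the same resource (RAM) reduces writes in one process and reads in the other, which is why "memory doesn't matter for scanning" doesn't hold up.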

Therefore I'd say you can't generalize this problem without looking at the individual configuration.
_______________________________________________
Squeezecenter mailing list
Squeezecenter@lists.slimdevices.com
http://lists.slimdevices.com/mailman/listinfo/squeezecenter
