>Does anyone know if an ANALYZE is run after inserting every 50000
>tracks (or some other count)?
>Every major database needs this if bulk inserts and reads are
>happening at the same time (which I expect is the case for the scanner?)

One suggestion during the "new schema" discussions was to have two phases: a 
first phase to read the tags for each song into a songs table, and a second 
phase to then process that content.

If that were the case, the scanner could drop or disable all indexes while 
bulk inserting songs during the first phase, and then create the indexes at 
the end, since it wouldn't need to read any content back until then.
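That drop/bulk-insert/recreate pattern could look roughly like this; again this is only a sketch against SQLite, and the songs table, columns, and index name are made up for illustration:

```python
import sqlite3

def bulk_load_songs(conn, rows):
    """Phase 1 sketch: load raw tag data with no secondary indexes in place."""
    # Drop the secondary index so each INSERT only writes the table itself.
    conn.execute("DROP INDEX IF EXISTS idx_songs_album")
    conn.executemany(
        "INSERT INTO songs (title, album, artist) VALUES (?, ?, ?)", rows
    )
    conn.commit()
    # Recreate the index, and refresh statistics, once at the end.
    conn.execute("CREATE INDEX idx_songs_album ON songs (album)")
    conn.execute("ANALYZE")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE songs (title TEXT, album TEXT, artist TEXT)")
bulk_load_songs(conn, [("One", "Best Of", "A"), ("Two", "Best Of", "B")])
```

Building the index once over the finished table is generally much cheaper than maintaining it on every insert, which is the whole argument for the two-phase design.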

It could then work out distinct albums, compilation flags, genres, years, etc. 
by querying the songs table.
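For example, the second phase could derive compilation flags with a single aggregate query, rather than per-file lookups. A minimal sketch, with a hypothetical schema and the (assumed) rule that an album whose songs span more than one artist is a compilation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE songs (title TEXT, album TEXT, artist TEXT)")
conn.executemany(
    "INSERT INTO songs VALUES (?, ?, ?)",
    [("One", "Best Of", "A"), ("Two", "Best Of", "B"), ("Solo", "Own", "A")],
)

# An album with more than one distinct artist is flagged as a compilation.
albums = dict(conn.execute("""
    SELECT album, COUNT(DISTINCT artist) > 1
    FROM songs
    GROUP BY album
""").fetchall())
# albums == {'Best Of': 1, 'Own': 0}
```

One pass over the songs table settles every album at once, so the cost no longer grows with each file scanned.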


I'm not sure how the scanner works now, but I assume it needs to read content 
back out of the DB when deciding how to process each file scanned.  As it looks 
like there aren't any tidy-up phases any more, it must be running queries or 
reading content back out to determine, when adding a new song to an album, 
whether the album should become a compilation, etc.

If that is the case, the scanner is going to become increasingly inefficient as 
the number of songs increases.
_______________________________________________
discuss mailing list
discuss@lists.slimdevices.com
http://lists.slimdevices.com/mailman/listinfo/discuss
