2010/1/18 evan.raskob [lists] <[email protected]>
> Usually it's as Kassen said - looks like an exponential curve to me, but
> the theory behind it is that the frequency bands should be double with each
> step, like Kassen said.
I don't think they need to double; that would put a lot of the bands in an extremely low range for higher numbers of bands. If we take a grand piano and split the keyboard into 8 sections, then the frequencies do indeed roughly double per section (give or take a few keys). If we split it equally into more sections, then clearly they won't double per band any more (instead the factor becomes some number between 1 and 2). For fewer bands, the factor per band would be greater than 2.

I don't think it makes much sense to consider DC offset (0 Hz), because general-purpose ADCs can't reach that anyway; 20 Hz as a lowest frequency and the Nyquist frequency as an upper bound makes more sense to me. I assume we can poll Jack for the sample rate; 20 Hz as a lower bound is a constant, so from there the band widths follow from the desired number of bands.

Instead of trying to deal with unequal band distribution for variable band numbers, we could also make it so that setting a number of bands defaults to an equal distribution, and then have bands that are movable from code. This would enable us to focus on certain regions, for example when working with a live acoustic instrument with a limited frequency range. Conditions like that can't really be anticipated; what's reasonable for electronic dance music will not be for a solo piccolo performance.

It might also be interesting to try to get the most prominent frequency from the FFT as an exact number (instead of an approximation by band) so we could track that too. If we are already spending the CPU on an FFT analysis, we might as well try to get as much from it as we can.

Yours,
Kas.
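To make the band-spacing argument concrete, here is a minimal sketch (in Python, as an illustration only; the names `band_edges`, `f_low`, etc. are my own) of logarithmically spaced band edges from 20 Hz up to Nyquist. The per-band ratio falls out as (nyquist / 20)^(1/N): for 8 bands at 44.1 kHz it comes to roughly 2.4 (over 2, as noted above for fewer bands), and it drops below 2 once N exceeds the ~10 octaves between 20 Hz and Nyquist.

```python
import math

def band_edges(n_bands, sample_rate, f_low=20.0):
    """Return n_bands + 1 logarithmically spaced edges from f_low to Nyquist."""
    nyquist = sample_rate / 2.0
    # Each band is `factor` times wider than the one below it.
    factor = (nyquist / f_low) ** (1.0 / n_bands)
    return [f_low * factor ** i for i in range(n_bands + 1)]

edges = band_edges(8, 44100)
factor = edges[1] / edges[0]
```

The same function covers the movable-bands idea: code could start from this default distribution and then nudge individual edges toward the region of interest.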
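On getting the most prominent frequency as an exact number: one common way (a sketch, not necessarily how it would be done in ChucK) is to take the magnitude peak of the FFT and refine it with parabolic interpolation over the neighbouring bins, which gives sub-bin resolution from the same analysis. The function name `peak_frequency` and the test signal are my own for illustration.

```python
import numpy as np

def peak_frequency(signal, sample_rate):
    """Estimate the dominant frequency: FFT magnitude peak refined by
    parabolic interpolation of the log spectrum around the peak bin."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    k = int(np.argmax(spectrum))
    if 0 < k < len(spectrum) - 1:
        a, b, c = np.log(spectrum[k - 1:k + 2] + 1e-12)
        k = k + 0.5 * (a - c) / (a - 2 * b + c)  # sub-bin offset
    return k * sample_rate / len(signal)

# A 440 Hz sine at 44.1 kHz should come out close to 440,
# even though the raw bin width is ~10.8 Hz here.
sr = 44100
t = np.arange(4096) / sr
f = peak_frequency(np.sin(2 * np.pi * 440 * t), sr)
```

Since the FFT is being computed anyway for the bands, this refinement costs only a log and a couple of arithmetic operations per frame.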
