Thanks for your ideas, I'll look into those!

It's actually just a digital delay effect or a sample playback system,
where I have a playhead that has to read samples from a buffer, but the
playhead position can be modulated, so the output pitches up or down
depending on the current direction and speed. It's realtime resampling
of the original material: if the playhead moves faster than the original
sample rate, the higher frequencies fold back at Nyquist. So before
resampling I should apply an antialiasing filter to prevent that, but
since the playback rate is constantly modulated, there is no single
fixed frequency at which to place the lowpass cutoff; it changes all
the time.
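
Roughly, the playhead looks like this (just a minimal sketch with
illustrative names, linear interpolation and forward playback assumed):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Minimal sketch of a modulated playhead reading a looped buffer with
    // linear interpolation. When rate > 1 the material is effectively
    // downsampled, so anything above fs/(2*rate) in the buffer aliases
    // unless it is lowpass-filtered first. Assumes rate >= 0 (forward).
    struct Playhead {
        const std::vector<float>& buffer;
        double pos = 0.0;                    // fractional read position

        explicit Playhead(const std::vector<float>& b) : buffer(b) {}

        float read(double rate) {            // rate may change every call
            std::size_t i0 = static_cast<std::size_t>(pos) % buffer.size();
            std::size_t i1 = (i0 + 1) % buffer.size();
            double frac = pos - std::floor(pos);
            float out = static_cast<float>((1.0 - frac) * buffer[i0]
                                           + frac * buffer[i1]);
            pos += rate;                     // > 1 pitches up, < 1 down
            if (pos >= buffer.size()) pos -= buffer.size();
            return out;
        }
    };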

This is what I meant by comparing to resampling.

--
Kevin


Hello Kevin

I am not convinced that your application is fully comparable to a
continuously changing sampling rate, but anyway:

The maths stays the same, so you will have to respect Nyquist and take
the artifacts of your AA filter, as well as those of your signal
processing, into account. This means you might use a sampling rate
significantly higher than the highest frequency that has to be
represented correctly, and that frequency is the edge frequency of the
stop band of your AA filter.
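
As a sketch, the AA cutoff could follow the instantaneous playback rate
(the 0.9 back-off factor is only an illustrative value, leaving room for
the transition band of a realizable filter):

    #include <algorithm>
    #include <cmath>

    // Only rates > 1 (pitching up) reduce the usable bandwidth of the
    // source material; below that the original Nyquist limit applies.
    double aaCutoffHz(double sampleRateHz, double playbackRate) {
        double nyquist = 0.5 * sampleRateHz;
        double effective = nyquist / std::max(1.0, std::abs(playbackRate));
        return 0.9 * effective;   // back off for the transition band
    }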

For a waveform generator in an industrial device with similar demands,
we use something like DSD internally and perform continuous
downsampling / filtering. Because the representation stays fully
digital, no further aliasing occurs; there is only the aliasing from
the primary sampling process, which is kept low by the high input rate.
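
In generic terms, the downsampling step is just lowpass filtering
followed by decimation; this is only a textbook FIR sketch, not our
actual implementation:

    #include <cstddef>
    #include <vector>

    // Lowpass-filter the high-rate stream with FIR taps whose stop band
    // starts below the output Nyquist, then keep every `factor`-th sample.
    // The taps themselves are assumed to be designed elsewhere.
    std::vector<float> decimate(const std::vector<float>& highRate,
                                const std::vector<float>& firTaps,
                                std::size_t factor) {
        std::vector<float> out;
        for (std::size_t n = 0; n + firTaps.size() <= highRate.size();
             n += factor) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < firTaps.size(); ++k)
                acc += firTaps[k] * highRate[n + k];
            out.push_back(acc);
        }
        return out;
    }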

What you can / must do is internal upsampling, since I expect you
operate with normal 192 kHz / 24 bit input (?)
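
As a sketch of that upsampling step (zero-stuffing only; the
interpolation lowpass, with its cutoff at the original Nyquist, would
use the same FIR machinery as the decimation sketch above, run at the
higher rate):

    #include <cstddef>
    #include <vector>

    // Insert (factor - 1) zeros between input samples; a lowpass at the
    // higher rate then removes the spectral images. Purely illustrative.
    std::vector<float> zeroStuff(const std::vector<float>& in,
                                 std::size_t factor) {
        std::vector<float> out(in.size() * factor, 0.0f);
        for (std::size_t n = 0; n < in.size(); ++n)
            out[n * factor] = in[n] * static_cast<float>(factor); // gain
        return out;
    }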

Regarding your concerns: it makes a difference whether you play back
the stream at a multiple of the sampling frequency (in particular at
the same frequency) and perform the modulation mathematically, or
whether you slightly vary the output frequency itself, such as with an
analog PLL whose modulation takes its values from a FIFO. In the first
case the signal is convolved with the filter behaviour of your
processing; in the second case there is additionally a spectral
spreading, according to the individual ratio to the new sampling
frequency.

From the point of view of a musical application, case 2 is preferable,
because any harmonics included in the stream, such as those of the wave
table, can be preprocessed, are easier to control and remain "musical"
harmonics. In one of my synths I work this way: all primary frequencies
come from a PLL-buffered two-stage DDS that accesses the wave table
with 100% coverage at each stage, so there are no gaps and jumps in the
wave table as with classical DDS.
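
For contrast, classical DDS with a plain phase accumulator and linear
interpolation looks roughly like this (the PLL-buffered two-stage
structure itself is not shown; names and fixed-point width are only
illustrative):

    #include <cmath>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Classical phase-accumulator DDS reading one period of a waveform.
    // A non-integer effective increment steps through the table at a
    // varying stride -- the "gaps and jumps" -- which interpolation only
    // partly hides. Assumes f0 < fs.
    struct ClassicDDS {
        const std::vector<float>& table;
        std::uint32_t phase = 0;          // 32-bit fixed-point phase
        std::uint32_t increment;          // f0 / fs scaled to 2^32

        ClassicDDS(const std::vector<float>& t, double f0, double fs)
            : table(t),
              increment(static_cast<std::uint32_t>(f0 / fs * 4294967296.0)) {}

        float next() {
            double idx = phase / 4294967296.0 * table.size();
            std::size_t i0 = static_cast<std::size_t>(idx) % table.size();
            std::size_t i1 = (i0 + 1) % table.size();
            double frac = idx - std::floor(idx);
            phase += increment;           // wraps modulo 2^32
            return static_cast<float>((1.0 - frac) * table[i0]
                                      + frac * table[i1]);
        }
    };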

j
