On Wed, May 26, 2010 at 10:43 PM, arthur de conihout <arthurdeconih...@gmail.com> wrote:
> Hi,
> I am trying to implement a real-time convolution module refreshed by the
> listener's head location (angle from a reference point). Convolving a
> monophonic wav file with binaural filters (HRTFs) lets me spatialize it.
> I am having trouble with this, since my convolution doesn't seem to work
> properly:
> - np.convolve() doesn't convolve the entire input signal
> - trouble extracting numpy arrays from the wav audio: the filters and the
>   monophonic input
> - trouble then with encapsulating the resulting array in a proper wav
>   file; it is not read by Audacity
> Do you have any idea how this could work, or do you have an
> implementation of stereo filtering by impulse response you could share?
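[The "not read by Audacity" symptom is commonly a dtype problem: convolving int16 samples produces float64 (or int64) arrays, and writing those bytes out verbatim does not yield valid 16-bit PCM. As a minimal stdlib-only sketch (using Python's built-in wave module rather than audiolab, and assuming samples normalized to [-1, 1]), one way to write the result correctly is:]

```python
import wave

import numpy as np


def write_wav_int16(path, data, rate=44100):
    """Write a float array in [-1, 1] as a 16-bit PCM wav file.

    data: 1-D array for mono, or (n_frames, n_channels) for multichannel.
    """
    data = np.clip(np.asarray(data, dtype=np.float64), -1.0, 1.0)
    pcm = (data * 32767).astype(np.int16)  # scale floats to int16 range
    n_channels = 1 if pcm.ndim == 1 else pcm.shape[1]
    with wave.open(path, "wb") as f:
        f.setnchannels(n_channels)
        f.setsampwidth(2)        # 2 bytes per sample = 16-bit
        f.setframerate(rate)
        # C-order tobytes() interleaves channels frame by frame,
        # which is exactly the layout PCM wav expects.
        f.writeframes(pcm.tobytes())
```

[The key step is the explicit conversion to np.int16 after scaling; without it the file header and the sample data disagree and players reject the file.]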
For reading audio files into numpy, I suggest you use audiolab. It uses
libsndfile underneath, which handles the various wav formats *really* well
(and is most likely the library Audacity uses as well):

http://pypi.python.org/pypi/scikits.audiolab/

Concerning the filtering part, filtering in the time domain is far too
expensive, because HRTF impulse responses are quite long, so you should
use the FFT instead (with windowing, of course, using overlap-add methods).

David
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
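[The FFT/overlap-add approach suggested above can be sketched in plain numpy as follows. The block size and function name are illustrative, not from the thread; scipy.signal.fftconvolve offers a ready-made alternative.]

```python
import numpy as np


def overlap_add_convolve(signal, ir, block_size=1024):
    """Convolve a long signal with an impulse response via FFT overlap-add.

    Each block of the input is FFT-convolved with the impulse response,
    and the (block_size + len(ir) - 1)-long results are summed with the
    proper offsets, so the tails of adjacent blocks overlap and add.
    """
    n_ir = len(ir)
    # FFT size: next power of two holding one full linear convolution block.
    n_fft = 1
    while n_fft < block_size + n_ir - 1:
        n_fft *= 2
    H = np.fft.rfft(ir, n_fft)  # transform the filter once, reuse per block

    out = np.zeros(len(signal) + n_ir - 1)
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        y = np.fft.irfft(np.fft.rfft(block, n_fft) * H, n_fft)
        n_valid = len(block) + n_ir - 1  # length of the linear convolution
        out[start:start + n_valid] += y[:n_valid]
    return out
```

[For real-time binaural rendering, the per-block structure is what makes this usable: only the current block needs to be transformed, and the HRTF spectrum H can be swapped when the head angle changes.]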