Hello,

I'd like to be able to analyze incoming audio from a sound card using Python, and I'm trying to establish a correct architecture for this.

Getting the audio is OK (using PyAudio), as is the calculation side, so I won't discuss those here. My question is about the general idea of doing both at (roughly) the same time: getting audio and performing calculations on it, without losing any incoming audio. I'm also assuming that the calculations on a chunk of audio will finish faster than it takes to capture the next chunk, so that the application stays close to real time.


So far my idea (which works according to the small tests I've done) consists of using a Queue object as a buffer for the incoming audio and two threads, one to feed the queue and the other to consume it.


The queue would store the audio as a collection of numpy arrays of x samples each.
The first thread's job would be to put() new chunks of audio into the queue as they are received from the sound card, while the second would get() chunks from the queue and perform the necessary calculations on them.
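Here is a minimal sketch of that layout. The details are assumptions on my part rather than anything fixed: 16-bit mono input at 44.1 kHz, 1024-sample chunks, a sentinel value to stop the consumer, and a placeholder process_chunk() standing in for the real calculations.

# Producer/consumer sketch: one thread feeds the queue from the sound card,
# the other drains it and runs the analysis on each chunk.

import queue
import threading

import numpy as np
import pyaudio

RATE = 44100          # assumed sample rate
CHUNK = 1024          # assumed samples per queue item
SENTINEL = None       # pushed by the producer to tell the consumer to stop

audio_queue = queue.Queue()

def producer(n_chunks=200):
    """Read chunks from the sound card and put() them into the queue."""
    pa = pyaudio.PyAudio()
    stream = pa.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                     input=True, frames_per_buffer=CHUNK)
    try:
        for _ in range(n_chunks):
            data = stream.read(CHUNK)
            audio_queue.put(np.frombuffer(data, dtype=np.int16))
    finally:
        stream.stop_stream()
        stream.close()
        pa.terminate()
        audio_queue.put(SENTINEL)

def consumer():
    """get() chunks from the queue and run the calculation on each one."""
    while True:
        chunk = audio_queue.get()
        if chunk is SENTINEL:
            break
        process_chunk(chunk)

def process_chunk(chunk):
    """Placeholder calculation: RMS level of the chunk."""
    print(np.sqrt(np.mean(chunk.astype(np.float64) ** 2)))

if __name__ == "__main__":
    t_in = threading.Thread(target=producer)
    t_out = threading.Thread(target=consumer)
    t_in.start()
    t_out.start()
    t_in.join()
    t_out.join()

queue.Queue handles the locking internally, so neither thread needs any extra synchronization.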

Am I heading in the right direction, or is there a better general approach?

Thanks!
