Re: [Alsa-user] Constant delay between starting a playback and a capture stream
Hi, just for correction, if someone ever reads my question again: of course it's microseconds and not milliseconds. Sorry for the disturbance, and regards.
___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user
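The unit mix-up being corrected here is easy to verify with a quick conversion. A minimal sketch in Python (the application itself presumably uses the ALSA C API; this is just the arithmetic relating sample counts to time at a given sample rate):

```python
def samples_to_us(samples, rate_hz):
    """Convert a sample count to microseconds at the given sample rate."""
    return samples * 1_000_000 / rate_hz

# The jitter from the original post: 2 to 4 samples at 48 kHz.
print(f"{samples_to_us(2, 48000):.1f}")   # 41.7  -> microseconds, not ms
print(f"{samples_to_us(4, 48000):.1f}")   # 83.3
# The measured loopback delay of 80 samples at 48 kHz:
print(f"{samples_to_us(80, 48000):.1f}")  # 1666.7 (about 1.7 ms)
```

So a 2–4 sample jitter at 48 kHz is roughly 42–83 µs, confirming the correction.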
Re: [Alsa-user] Constant delay between starting a playback and a capture stream
Hi Dominique, thanks for your answer! Yes, I had already thought of that as well. I just thought it might not be appropriate to ask there, since I'm not involved in the ALSA project. But I'll give it a try. The JACK API was already on my mind, too. I just don't know whether it is meant for small delays between streams (synchronous, but different after each restart of the application), or whether it can also guarantee that the delay between streams is always the same, even after a restart. Maybe I'll give it a try, too. Thanks again and cheers!

Sent: Thursday, 26 June 2014 at 12:48
From: Dominique Michel dominique.mic...@vtxnet.ch
To: alsa-user@lists.sourceforge.net
Subject: Re: [Alsa-user] Constant delay between starting a playback and a capture stream

Hi, I am not an audio developer, but I think the LAD list is a better place for such highly technical issues. I also think most developers who want constant audio latency in their application use the JACK API instead of the ALSA API. http://lists.linuxaudio.org/listinfo/linux-audio-dev

On Mon, 23 Jun 2014 18:20:43 +0200, Max Schmidt schmidti...@web.de wrote:
> [original question quoted in full; snipped]
Re: [Alsa-user] Constant delay between starting a playback and a capture stream
Hi, I am not an audio developer, but I think the LAD list is a better place for such highly technical issues. I also think most developers who want constant audio latency in their application use the JACK API instead of the ALSA API. http://lists.linuxaudio.org/listinfo/linux-audio-dev

On Mon, 23 Jun 2014 18:20:43 +0200, Max Schmidt schmidti...@web.de wrote:
> [original question quoted in full; snipped]
[Alsa-user] Constant delay between starting a playback and a capture stream
Hey all, first a big hello to everyone, since I'm new to this mailing list. I've looked for this issue here and in general on the internet, but didn't find anything related, so sorry if I overlooked something.

I've got a BeagleBone Black (ARM Cortex-A8 µP) with an Audio Cape (using McASP and the ALSA DaVinci drivers) and wrote an mmap-based playback/capture application (both devices hw:0,0). The important thing is that the application needs a constant delay (not necessarily a small one) between the start of the playback stream and the capture stream. So when looping the signal back to the microphone, the first played sample always (even after a restart of the BeagleBone or the app) has a delay in the capture stream of, e.g., exactly 80 samples (when played at 48 kHz), and once measured this delay can be treated as constant. To realize that, I use the ALSA API function snd_pcm_link(c_handle, p_handle). When I start the playback stream manually (and therefore the linked capture stream as well), its buffer is already filled. There is no buffer underrun/overrun recovery. It already works quite well, but I still have some questions, and it would be really nice if someone could help:

1) Looking at a plot of multiple measurements of looped-back and captured square waves (or sines), there is still a jitter of about two to four samples (at 48 kHz, which is about 40 to 80 µs), no matter whether I run it as an RT app (energy-save modes etc. disabled, only important processes such as EDMA have a higher priority) or as a normal app. Please correct me if I'm wrong, but as far as I understand it, when the streams are linked, at start the processor walks a linked list triggering all linked streams, and the start trigger is an atomic operation, so it shouldn't get interrupted. Shouldn't it then always take the same time between starting the playback and the capture stream? And if yes, where could the variable start delay come from?

2) Looking at the timestamps of both streams, the difference between the start triggers is normally between 2 and 7 µs. That, I think, does not really fit the observation above, since then there should be no big sample jitter at 48 kHz. Are the timestamp values not precise enough (how precise are the timestamps, actually? Just because they can show microseconds does not imply that the resolution really is microseconds, or am I wrong?), or are they correct and the difference in latency seen in the measured plots comes from somewhere else? Delay due to hardware should be constant, so the issue must have something to do with ALSA or Linux.

It would be nice if someone could help or has an idea! Tips for improvement are welcome as well! Many thanks and cheers!
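The "once measured" delay between the played and captured signal can be estimated by cross-correlating the two buffers and taking the lag with the best match. A toy Python sketch (not the poster's code; in the real application the two buffers would come from the mmap'ed ALSA playback and capture streams opened on hw:0,0 and linked with snd_pcm_link):

```python
def estimate_delay(played, captured):
    """Return the lag (in samples) at which the captured signal best
    matches the played one, via brute-force cross-correlation."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(captured) - len(played) + 1):
        score = sum(p * captured[lag + i] for i, p in enumerate(played))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# Toy check: a short square wave appearing 80 samples later in the
# capture buffer, as in the loopback measurement described above.
played = [1.0, 1.0, -1.0, -1.0] * 8          # 32-sample square wave
captured = [0.0] * 80 + played + [0.0] * 40  # loopback with 80-sample delay
print(estimate_delay(played, captured))      # 80
```

Repeating this over many start/stop cycles is one way to quantify the two-to-four-sample jitter the post describes.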
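On question 2: in the C API, the per-stream start timestamps can be read from snd_pcm_status() via snd_pcm_status_get_trigger_tstamp(), which yields seconds/fraction pairs; whether their effective resolution is really microseconds depends on the clock the driver uses, not on the printed field width. A minimal Python sketch of the difference computation only, with hypothetical timestamp values (the real values would come from the ALSA status calls):

```python
def tstamp_diff_us(t0, t1):
    """Difference t1 - t0 in microseconds between two (sec, nsec)
    timestamps, e.g. the trigger timestamps of two linked streams."""
    return (t1[0] - t0[0]) * 1_000_000 + (t1[1] - t0[1]) / 1000

# Hypothetical trigger timestamps for the linked playback/capture pair:
playback_trigger = (1403797680, 120_000)      # (sec, nsec)
capture_trigger = (1403797680, 3_620_000)
print(tstamp_diff_us(playback_trigger, capture_trigger))  # 3500.0
```

A sanity check on the clock itself (clock_getres() in C reports the advertised resolution) would help decide whether the 2–7 µs trigger differences are trustworthy or below the measurable granularity.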