Hey all,
 
First, a big hello to everyone, since I'm new to this mailing list.
I've looked for this issue both here and on the internet in general but didn't find anything related, so sorry if I overlooked something.
 
I've got a BeagleBone Black (ARM Cortex-A8 µP) with an audio cape (using McASP and the ALSA DaVinci drivers) and wrote an mmap-based playback/capture application (both devices "hw:0,0"). The important point is that the application needs a constant delay (not necessarily a small one) between the start of the playback stream and the capture stream. So when the signal is looped back to the microphone, the first played sample always (even after a restart of the BeagleBone or of the app) shows up in the capture stream with a delay of, e.g., exactly 80 samples (when played at 48 kHz), and once measured that delay can be treated as constant.
To achieve that I use the ALSA API function snd_pcm_link(c_handle, p_handle). When I start the playback stream manually (and therefore the linked capture stream as well), its buffer is already filled; there is no buffer underrun/overrun recovery. The setup looks roughly like the sketch below.
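For reference, a stripped-down sketch of what I do (not my real code: the S16_LE/stereo/48 kHz parameters and the silence pre-fill are just placeholders, and error handling is omitted):

#include <alsa/asoundlib.h>

static void setup(snd_pcm_t *h)
{
    snd_pcm_hw_params_t *hw;
    unsigned int rate = 48000;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(h, hw);
    snd_pcm_hw_params_set_access(h, hw, SND_PCM_ACCESS_MMAP_INTERLEAVED);
    snd_pcm_hw_params_set_format(h, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(h, hw, 2);
    snd_pcm_hw_params_set_rate_near(h, hw, &rate, NULL);
    snd_pcm_hw_params(h, hw);
}

int main(void)
{
    snd_pcm_t *p_handle, *c_handle;
    const snd_pcm_channel_area_t *areas;
    snd_pcm_uframes_t offset, frames;

    snd_pcm_open(&p_handle, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0);
    snd_pcm_open(&c_handle, "hw:0,0", SND_PCM_STREAM_CAPTURE, 0);
    setup(p_handle);
    setup(c_handle);

    /* link capture to playback so that one trigger starts both streams */
    snd_pcm_link(c_handle, p_handle);

    snd_pcm_prepare(p_handle);
    snd_pcm_prepare(c_handle);

    /* pre-fill the playback ring buffer before the start is issued
       (silence here; real signal data in the actual app) */
    frames = snd_pcm_avail_update(p_handle);
    snd_pcm_mmap_begin(p_handle, &areas, &offset, &frames);
    snd_pcm_areas_silence(areas, offset, 2, frames, SND_PCM_FORMAT_S16_LE);
    snd_pcm_mmap_commit(p_handle, offset, frames);

    /* one explicit start; the linked capture stream is triggered with it */
    snd_pcm_start(p_handle);

    /* ... mmap transfer loop ... */
    return 0;
}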
It already works quite well, but I still have some questions, and it would be really nice if someone could help:
 
1) Looking at a plot of multiple measurements of looped-back and captured square waves (or sines), there is still a jitter of about two to four samples (at 48 kHz that is roughly 40 to 80 µs), no matter whether I run it as an RT app (energy-saving modes etc. disabled, only the important processes such as EDMA given a higher priority) or as a normal app.
Please correct me if I'm wrong, but as far as I understand, when the streams are linked, the processor walks through a linked list at start time and triggers all linked streams, and this start trigger is an atomic operation, so it shouldn't get interrupted. Shouldn't the time between starting the playback stream and the capture stream then always be the same? And if so, where could the variable start delay come from?
 
2) Looking at the time stamps of both streams, the difference between the start triggers is normally between 2 and 7 µs. That, I think, doesn't really fit the observation above, since then there shouldn't be any big sample jitter at 48 kHz. Are the time stamp values just not precise enough (how precise are they actually? The fact that they can show microseconds doesn't imply that the resolution really is microseconds, or am I wrong?), or are they correct and the latency difference seen in the measured plots comes from somewhere else? Any delay due to the hardware should be constant, so the issue must have something to do with ALSA or Linux.
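For what it's worth, this is roughly how I read the trigger time stamps (a simplified sketch, not my exact code). As far as I can tell, snd_pcm_status_get_trigger_tstamp() returns a struct timeval (µs fields), and newer alsa-lib also has snd_pcm_status_get_trigger_htstamp() with a struct timespec (ns fields), though I don't know whether the underlying clock is really that precise:

/* difference between capture and playback trigger time stamps, in µs */
static long trigger_diff_us(snd_pcm_t *p_handle, snd_pcm_t *c_handle)
{
    snd_pcm_status_t *p_status, *c_status;
    snd_htimestamp_t p_ts, c_ts;   /* struct timespec */

    snd_pcm_status_alloca(&p_status);
    snd_pcm_status_alloca(&c_status);
    snd_pcm_status(p_handle, p_status);
    snd_pcm_status(c_handle, c_status);

    snd_pcm_status_get_trigger_htstamp(p_status, &p_ts);
    snd_pcm_status_get_trigger_htstamp(c_status, &c_ts);

    return (c_ts.tv_sec - p_ts.tv_sec) * 1000000L
         + (c_ts.tv_nsec - p_ts.tv_nsec) / 1000L;
}

At 48 kHz one sample period is about 20.8 µs, which is how I translate the time stamp difference into samples.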
 
 
It would be nice if someone could help or has an idea!
Tips for improvements are welcome as well!
Many thanks and cheers!