Pete Black wrote:
> Perhaps I can phrase my question a little differently.
> 
> I understand that synchronous, sample-accurate minimal-latency operation 
> is in the realm of JACK etc., and for this to be properly implemented 
> requires a change of application architecture to fit with a callback 
> approach. This is good, and one day I hope we can look forward to seeing 
> all 'professional' audio apps on Linux use the JACK API.
> 
> I understand that simple, low-latency, application-transparent operation 
> is probably a tough one given the audio application architecture that is 
> predominant across most modern operating systems.
> 
> I am not overly concerned about latency; the main thing is that it 
> sounds good enough - i.e. I don't require perfection, I just want to get 
> it 'as good as I can' without rewriting the applications to use, for 
> example, JACK.
> 
> So, while I wait for JACK support to be mainstream, what I would like to 
> know is:
> 
> 1. Can ALSA be configured to provide an application-transparent 
> mechanism to mix multiple independent stereo sound streams (generated by 
> different, independent applications) and route the resulting mixed 
> stream to a single stereo output on a soundcard?
> 
> 2. If the answer to question 1 is 'yes', then how can I configure my 
> system to support this capability?
> 
> I have found references to the enigma that is 'aserver', the mythical 
> 'smix' plugin, and the mysterious 'share' plugin, and in each case there 
> is either no documentation whatsoever (aserver), the only documentation 
> is the two words 'unknown reference' (smix plugin), or the documentation 
> clearly states that the component is not fit for the purpose it has been 
> suggested for (share plugin).
> 
> I would be very willing to write a HOWTO on making this work, as the 
> question has come up under different guises many times on the ALSA 
> lists, and there has never been an actual resolution or a categorical 
> 'it's just not doable currently'.
> 
> Can someone help me (and the others who are as confused as I am on this 
> issue)?
> 

Even if you could directly access the plugin you require, you would still 
have to code ALSA support into every app, or write code for the OSS 
emulation layer to deal with this scenario. I don't think it would be a 
wasted effort to do the latter. After all, that is what WDM/MME does.
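To make that concrete, "coding ALSA support into an app" roughly means 
the app opens a logical PCM name through alsa-lib instead of /dev/dsp, 
so an asoundrc definition can later point that name at whatever mixing 
plugin we end up with. Here is a minimal sketch of that idea - not code 
from any real app, and the snd_pcm_set_params() convenience call is used 
purely to keep the example short:

#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;

    /* Open a logical PCM by name; asoundrc decides what "default" maps to. */
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;

    /* 16-bit interleaved stereo at 44.1 kHz, ~0.5 s of buffering. */
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000) < 0) {
        snd_pcm_close(pcm);
        return 1;
    }

    /* Write 0.1 s of stereo silence; snd_pcm_writei() counts frames. */
    short buf[4410 * 2] = { 0 };
    snd_pcm_writei(pcm, buf, 4410);

    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}

The point is only that the device name is a string resolved through the 
ALSA configuration, which is exactly the hook a transparent mixing plugin 
needs; an app that opens /dev/dsp directly never reaches that hook unless 
the OSS emulation layer takes it there.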

IIUC the share plugin is not even intended to do what you want (as you 
mention above): it is written to share audio between different devices 
specified in the asoundrc file, not to mix multiple streams from 
multiple apps. Paul, you should stop suggesting the share plugin as an 
almost-possible option. It is a red herring.
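To be clear about what it does do, a share definition (this is from 
memory of the plugin docs, so treat it as a sketch rather than a tested 
asoundrc - the exact keys may differ) looks something like two logical 
devices bound to different channels of a single slave, inside one process:

pcm.left_out {
    type share
    slave {
        pcm "hw:0,0"
        channels 2
    }
    bindings {
        0 0     # client channel 0 -> slave channel 0
    }
}

pcm.right_out {
    type share
    slave {
        pcm "hw:0,0"
        channels 2
    }
    bindings {
        0 1     # client channel 0 -> slave channel 1
    }
}

That is channel sharing within one client, which is a different problem 
from letting two separate applications open the card at the same time and 
having their streams mixed.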

The deal is that Abramo got the smix plugin working (probably only in his 
personal tree), then got laid off from SuSE, and has since not had the 
time or interest to contribute to the code base.

It sure would be nice if he would release that old code so that other 
people could use it as a base to work from. I don't know what is 
stopping him apart from an extremely busy workload over the past year - 
so busy that he hasn't been able to find 10 minutes to make the patch 
and post it to the list or put it online (hint hint, nudge nudge).

-- 
Patrick Shirkey - Boost Hardware Ltd.
For the discerning hardware connoisseur
http://www.boosthardware.com
http://www.djcj.org - The Linux Audio Users guide
========================================

"Um...symbol_get and symbol_put... They're
kindof like does anyone remember like get_symbol
and put_symbol I think we used to have..."
- Rusty Russell in his talk on the module subsystem


