On Mon, 10 Jun 2002 01:38:10 -0400 Paul Davis <[EMAIL PROTECTED]> wrote:
> > Here's a problem I commonly find in existing audio apps or in
> > programming audio apps: Audio routing.
> >
> > The way things work now, it's hard for apps to implement a standard
> > way of:
>
> First, you can't do any better on MacOS or Windows, because ReWire
> or DirectConnect are the only (low latency) options and many
> programs don't support them. This is not a defense of the status
> quo, just an observation about how cutting edge this idea is, and to
> point out how much progress we are making on it.

Yes, that does suck on such OSs too :\

> > 1-The application has to be able to "provide" inputs and outputs,
> > which may be used or not. By default an app may connect directly to
> > the output or just not connect at all, expecting YOU to define the
> > audio routes. Most of the time, unless using a direct output or
> > known audio modifier objects, an app will not want to connect
> > somewhere from within itself. You will do that from an abstracted
> > interface.
>
> Is there some problem here? The app should be able to save its
> state. If you invoke it again and the destinations/sources exist, it
> can restore its own state. If they don't, then you're asking it to
> do the impossible.

Yeah, but what I mean is two things: first, that you should be able to
transparently change where the app is sending its data, without the app
noticing; and second, that such configuration shouldn't be stored by the
app but by an abstracted app/interface that handles connections.

> We discussed this issue of audio routing for at least 2 years before
> JACK was written. No ideas other than the one represented by JACK
> emerged that I recall until Abramo suggested extending alsa-lib.
> Very few people liked that idea, so now we have JACK as a viable
> option (well, the folks on jackit-dev think so, anyway), and
> alsa-lib remains without any options in this area. As they say,
> "show me the code!" :)

Ahh, I think I didn't make myself clear enough, so I'll try to go into
more specific detail. What I propose is:

-Audio routing/data sharing for ALL programs. For most of them it
 should be audio routing via API calls.
-Low latency/synchronous execution only for the ones that _need_ it.
-Of course, taking audio from routes at different latencies will
 introduce lag.
-Transparency!

What I mean is something like

    program -> alsa-lib -> jack -> alsa-lib

for normal apps, and

    program <-> jack -> alsa-lib

for low latency apps. Does ALSA need to be modified for such a thing?
Can't JACK work as an intermediate driver for this?

I would basically like to be able to take the sound output of any
program and do whatever I want with it, such as connecting it as the
sound input of another program, establishing a network of those. The
idea is to do this from a program external to the ones in use. It's
just like what the sequencer API does with MIDI, but with audio. This
would fit well even within ALSA, because the same tools could be used
to connect stuff. If you have ever used PD/jMax you might be used to
this concept; this proposal is simpler, but it still gives a great
amount of versatility to your programs/environment.
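To make the app side of this concrete, here is a rough sketch (mine, not
part of the original proposal) of what "providing" audio slots could look
like with JACK's C client API. The client name, the port names and the
pass-through processing are only illustrative; the point is that the app
registers ports and never decides where they connect:

    /* Sketch: an app that only registers audio "slots" and leaves all
     * routing to an external connection manager. */
    #include <string.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <jack/jack.h>

    static jack_port_t *in_port;
    static jack_port_t *out_port;

    /* Called by the audio server for every block of frames; the app
     * never knows where the data comes from or where it goes. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

        /* real DSP would go here; this just passes audio through */
        memcpy(out, in, sizeof(jack_default_audio_sample_t) * nframes);
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("someapp", JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "could not connect to the audio server\n");
            return 1;
        }

        /* "provide" one input slot and one output slot; nothing is
         * connected yet */
        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);

        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        /* the wiring happens later, from outside (a patchbay/router) */
        for (;;)
            sleep(1);
    }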
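And the external side, the "same tools could be used to connect stuff"
part, could be little more than an audio equivalent of aconnect: a tiny
throwaway client that wires a named output port to a named input port.
Again just a sketch of mine; the port names in the comment are
hypothetical, and jack_connect() is what does the actual work:

    /* Sketch: an external "audio aconnect". Wires one program's output
     * to another program's input, without either program knowing. */
    #include <stdio.h>
    #include <jack/jack.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <source-port> <dest-port>\n", argv[0]);
            return 1;
        }

        jack_client_t *client = jack_client_open("router", JackNullOption, NULL);
        if (!client) {
            fprintf(stderr, "could not connect to the audio server\n");
            return 1;
        }
        jack_activate(client);

        /* e.g. argv[1] = "quake3:out", argv[2] = "reverb:in"
         * (hypothetical port names) */
        if (jack_connect(client, argv[1], argv[2]) != 0)
            fprintf(stderr, "could not connect %s -> %s\n", argv[1], argv[2]);

        jack_client_close(client);
        return 0;
    }

Run it as "router quake3:out reverb:in" (with whatever port names the
programs actually export) and you have chained two programs that know
nothing about each other.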
The advantages I see are:

1-Saves a _huge_ amount of time for programmers, since the only thing
they have to do is register audio in/out slots (as in the sketch above)
and then route the external sources. How many times do we see programs
use the same code over, over and over again?
(freeverb/chorus/flange/ladspa chains/equalizers/normalizers/mixers/vu
bars/etc.) Well, this would put an end to that, and audio programming
becomes a lot easier.

2-Saves enormous time for the user. Why capture/dump/edit if your CPU
can do everything at once? Just chain your favorite programs! It even
gives you the ability to build your own chains of modifiers from
program to program.

3-Encourages program interoperability, the good old Unix way. It's
easy: most programmers, and especially the new ones, don't care about
side libs such as JACK/aRts. They want to go straight to the official
API first. Many don't even care/bother/know how to write support for
multiple output drivers, so they just go to the official API. Many
don't even bother to find out whether something besides it exists. Or
worse, many will just not bother to write a program at all because of
how much it takes to do certain things that, even when they may be
provided by other programs/modules, have no easy/standard way to be
used.

4-Works for any application! Put some groove into Quake III, filter
out the noise of those old MPGs, raise the treble on those cheapo
speakers and add some reverb on the way.

5-Humiliate the poor Windows/Mac users that can't do that on their
OSs ;)

I understand that "show me the code" is worth more than a proposal.
I'm trying to give my view on the subject as an application
programmer. Having written sequencers, trackers and softsynth apps,
I'm giving you a perspective on how I think things would have been
much easier when programming, and also on how I'd like programs to
communicate with each other for certain tasks.

A nice example is that a lot of people have mailed me saying "Could
you add a plugin API to CheeseTracker so I can use more effects than
chorus and reverb and chain them?". Others have asked me "Can you get
CheeseTracker to open multiple devices, one per track, so I can mix
the output on my Roland mixing workstation?". Sure, I could go and
write that, even LADSPA support, but I think it's far from the
optimum solution.

I already have some plans for messing with the ALSA internals, but I
don't have much time, so I'd rather first ask the developers here,
who are familiar with the API, how realistic/crazy my proposal is ;)

Regards

Juan Linietsky