I raise this question out of curiosity about how people deal with putting their own and other people's "plugins", processing blocks, or software pieces together to form a whole.

Say you wrote a filter and you want to add a source and a sink. Once you've chosen an operating system and an audio interface, you implement your audio filter, say at 44.1 kHz/16-bit, and connect it to a source: maybe non-real-time from a file, maybe demand- or supply-driven in some form of flow. Subsequently, or more or less in parallel, you connect the output of your filter to a sink, maybe a sound card.
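
To make that concrete, here is a minimal sketch of the non-realtime, file-driven variant in plain C. It assumes raw signed 16-bit mono PCM at 44.1 kHz on stdin and stdout, and the one-pole low-pass is just a stand-in for "your filter":

    /* source -> filter -> sink, non-realtime, pulled along by the read loop.
       Assumes raw signed 16-bit mono PCM at 44.1 kHz on stdin and stdout. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t in, out;
        float state = 0.0f;     /* one-pole low-pass state */
        const float a = 0.1f;   /* smoothing coefficient, placeholder "filter" */

        while (fread(&in, sizeof in, 1, stdin) == 1) {   /* source: file or pipe */
            state += a * ((float)in - state);            /* filter */
            out = (int16_t)state;
            fwrite(&out, sizeof out, 1, stdout);         /* sink: file or pipe */
        }
        return 0;
    }

Piped between something like sox and aplay, the same program gets the sound card as its sink without knowing anything about the audio interface itself.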

You could do all that by hand, you could use existing streams (like Unix pipes or sockets) or OO classes to implement this game in some way, or you could use the Steinberg plugin kit (I forget what it's called, even though I downloaded one recently), or some other ready-made audio streaming regime.
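
As an illustration of the "OO classes / roll it yourself" end of that spectrum, the interface can be as small as a struct with a process function pointer, so that source, filter and sink all look the same to whatever drives the chain. A hypothetical sketch (all names made up for the example):

    #include <stdio.h>

    /* Hypothetical minimal "processing block" interface: every block, whether
       source, filter or sink, exposes the same per-buffer process() call. */
    typedef struct block {
        void (*process)(struct block *self, float *buf, int nframes);
        void *state;                          /* block-private data */
    } block_t;

    /* Example block: in-place gain; state points at the gain factor. */
    static void gain_process(struct block *self, float *buf, int nframes)
    {
        float g = *(float *)self->state;
        for (int i = 0; i < nframes; i++)
            buf[i] *= g;
    }

    /* Driving a chain is just calling each block in order on one shared buffer. */
    static void run_chain(block_t **chain, int nblocks, float *buf, int nframes)
    {
        for (int i = 0; i < nblocks; i++)
            chain[i]->process(chain[i], buf, nframes);
    }

    int main(void)
    {
        float g = 0.5f;
        block_t gain = { gain_process, &g };
        block_t *chain[] = { &gain };
        float buf[4] = { 1.0f, -1.0f, 0.25f, 0.0f };

        run_chain(chain, 1, buf, 4);
        for (int i = 0; i < 4; i++)
            printf("%f\n", buf[i]);
        return 0;
    }

Plugin SDKs like VST or LADSPA are, roughly speaking, industrial-strength versions of the same idea, with negotiated buffer formats, parameters and lifecycle added on top.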

Myself, I often use the Free and Open Source Jack/LADSPA tools on Linux, which offer realtime streams without much of an upper bound (at least a few hundred streams aren't a problem on a good computer) between audio callback routines in one or more processes/threads. Jack does a smart schedule such that, throughout the audio processing graph, the illusion holds that each callback routine works on a given buffer size and is never running for nothing, and, as long as there's enough compute power, servicing the parts of a larger flow remains correct and predictable. Now, I've been into PhD-level work around these kinds of subjects, so I know the main underpinnings of Unix streams and of solving schedules for functional decomposition with intermediate-result re-use. So I'm not fishing for solutions, just wondering what people here think about this subject, would like to work on, etc.
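
For reference, the callback side of that Jack picture looks roughly like the sketch below: a pass-through client against the standard libjack API, where the server drives process() once per period at whatever buffer size it has negotiated. The client name and the 0.5 gain are arbitrary placeholders, and error handling is omitted:

    /* Minimal Jack client: register one input and one output port and let the
       server call process() once per period at the server-chosen buffer size. */
    #include <jack/jack.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;

    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port,  nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);

        for (jack_nframes_t i = 0; i < nframes; i++)
            out[i] = 0.5f * in[i];              /* placeholder "filter": -6 dB */
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("simple_filter", JackNullOption, NULL);

        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput,  0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);

        sleep(60);                              /* real work happens in process() */
        jack_deactivate(client);
        jack_client_close(client);
        return 0;
    }

Connecting such clients into a larger graph (jack_connect, qjackctl, a patchbay) is then the part where the scheduling described above does its job.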

Ir. (M.Sc.) T. Verelst
