[EMAIL PROTECTED] wrote:

As I said many times, it's not that I hate Linux Audio, but mainly that I

mmh, i love it,

believe that it is too poor
in features/API for my taste. I tried very hard for several years to make my apps play together with jack/alsa, but I find myself very limited in many areas as a programmer and user:

-Alsa/Jack integration in timestamping is poor, synchronizing audio to MIDI is a pain

so use Jack MIDI


Just found out it's been released! Awesome!

-I have no way, from a sequencer, to ask a softsynth to list its available patch names

i do this all the time, grabbing lists of automatable params and patches from 
my sequencer, using OSC-aware apps like PD, Om etc as the LADSPA/DSSI hosts. 
sounds like a sequencer limitation if it's not exposing these things 
programmatically for you..

OSC-aware apps? wait, wait, wait.. you can for sure use OSC, but using jack and OSC separately? That is, like, a bad hack.. I mean, how do you know which host is what? Much less sync properly
(read answers below)
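
For reference, a rough liblo sketch of that kind of query. The /patches/get and /patches/item paths are invented for this example; every OSC-aware host defines its own namespace, which is part of the problem being described:

    #include <lo/lo.h>
    #include <stdio.h>
    #include <unistd.h>

    /* ask a synth for its patch list over OSC. the /patches/get and
       /patches/item paths are made up for this sketch. */

    static int patch_handler(const char *path, const char *types,
                             lo_arg **argv, int argc, lo_message msg,
                             void *user)
    {
        printf("patch %d: %s\n", argv[0]->i, &argv[1]->s);
        return 0;
    }

    int main(void)
    {
        /* listen for replies on port 7771 */
        lo_server_thread st = lo_server_thread_new("7771", NULL);
        lo_server_thread_add_method(st, "/patches/item", "is",
                                    patch_handler, NULL);
        lo_server_thread_start(st);

        /* ask the synth (assumed to listen on 7770) to send its patches */
        lo_address synth = lo_address_new("localhost", "7770");
        lo_send(synth, "/patches/get", "s", "osc.udp://localhost:7771");

        sleep(1); /* crude: give the replies time to arrive */
        return 0;
    }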

-Jack's lack of MIDI

false..


Right, just checked on the jack ML.. this is pretty amazing!
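
For concreteness, a minimal sketch of what the jack MIDI API looks like from a client's process callback; client setup and port registration are left out, and printf is only there to show the fields (it is not realtime-safe):

    #include <jack/jack.h>
    #include <jack/midiport.h>
    #include <stdio.h>

    /* reading jack MIDI inside the process callback: every event carries a
       frame offset within the current period, so MIDI shares the audio
       clock sample-accurately. midi_in is assumed to be a port registered
       elsewhere with JACK_DEFAULT_MIDI_TYPE. */

    static jack_port_t *midi_in;

    static int process(jack_nframes_t nframes, void *arg)
    {
        void *buf = jack_port_get_buffer(midi_in, nframes);
        uint32_t i, n = jack_midi_get_event_count(buf);
        for (i = 0; i < n; i++) {
            jack_midi_event_t ev;
            jack_midi_event_get(&ev, buf, i);
            printf("event at frame offset %u, %u bytes, status 0x%02x\n",
                   (unsigned)ev.time, (unsigned)ev.size, ev.buffer[0]);
        }
        return 0;
    }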

-Jack's lack of OSC or any way to do parameter automation from the sequencer

i'm glad jack concentrates on audio and not "OSC parameter automation". it's the 
UNIX way, a utility doing one thing and doing it well. if Jack could deliver 
sample-accurate OSC messages in a more convenient manner than UDP, i'm all for it, 
but the UI thing you speak of is a level above that.
What are you talking about? Without frame-based precision you are relying on whatever the jack server period size is, which means unreasonably forcing your apps to _really_ low latency just for "decent" accuracy. It is the same problem you have when you use alsaseq+jack. The typical hack to fix this is to have a separate thread receiving and timestamping events, but this is plain SHIT because you still have to rely on the low-latency behavior of the kernel to keep the alsa/osc IPC from jittering. I'm _NOT_ glad that jack concentrates only on audio, because then I can't share tempo maps or do many of the other things described here.
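
To make that hack concrete, a sketch. 'client' is an activated jack client and ring_write() is a hypothetical lock-free fifo into the process callback, both assumed to exist elsewhere; the receiving thread stamps each event with jack_frame_time(), and the stamp is only as good as the kernel's willingness to schedule that thread:

    #include <jack/jack.h>

    extern jack_client_t *client;

    struct stamped_event {
        jack_nframes_t frame;   /* arrival time on jack's frame timeline */
        unsigned char  data[3];
    };

    extern void ring_write(const struct stamped_event *ev);

    /* called from the alsaseq/OSC receiver thread */
    void on_incoming_event(const unsigned char *data)
    {
        struct stamped_event ev;
        ev.frame = jack_frame_time(client); /* estimate of "now" in frames */
        ev.data[0] = data[0];
        ev.data[1] = data[1];
        ev.data[2] = data[2];
        ring_write(&ev); /* process() plays it back relative to this stamp */
    }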

-It is impossible to do any sort of offline render, or high-quality render of a song (like, in 32-bit/192kHz) using JACK/Alsa

so you want to be able to run the JACK samplerate at 'infinite', or at least as 
high as possible, until processing finishes. an interesting idea, to have 
offline render across the entire chain.. but last i asked about even so much as 
multiple samplerates or vari-speed scrub across all jack apps, they had 
reasons, not the least of which was it would be useless without everyone 
supporting it..

Come on!! *ANY* commercial-grade sequencer/DAW/etc supports offline render. For example, when you compose for movie/DVD you need to use very high sampling rates/channel counts/etc that, if you tried to play in realtime, would grind your computer to a halt. So you just offline render the whole thing. But for more normal composer uses, you probably want to render at high sampling rates and then downsample, to reduce the aliasing produced by several DSP techniques. Currently this is impossible with jack.
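
For what it's worth, jack's freewheel call covers the faster-than-realtime half of this (as far as i know it's how ardour does its exports): the server drops the soundcard clock and calls process() as fast as the clients can go. It doesn't address running the chain at a different samplerate, though. A minimal sketch, assuming 'client' is an activated client:

    #include <jack/jack.h>

    /* flip the whole graph into freewheel mode: the server stops following
       the soundcard and runs the graph as fast as clients can compute.
       the logic that decides when the song has finished rendering is
       left out. */
    void render_offline(jack_client_t *client)
    {
        jack_set_freewheel(client, 1);  /* detach from the audio interface */
        /* ... let a capture client record until the end of the song ... */
        jack_set_freewheel(client, 0);  /* back to realtime */
    }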

-Saving/restoring your project is just painfully hard. LASH doesn't help, even though I came up with the idea for it in the first place.

this could be improved, but monolithic Cubase-esque hosts are not the way i'd prefer 
it to be done. personally i just have my sequencer spawn off a few shell processes 
coupled with rohan drape's excellent patching utils to handle app launches, while 
storing all the params&patches i'd want to recall in the sequence database..

I don't really mind if it is monolithic or not; I'm just noting that I need it, and I can't do it.

-Adding/removing softsynths, making connections, etc. takes a while, having to use qjackctl, etc

Sure, you have to go and connect everything yourself every time you start up: go to the shell, find an empty console, run fluidsynth or whatever, have everything organized, etc. Basically the same as the point above.
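
For reference, the connection step at least can be scripted against libjack instead of clicked through qjackctl; a minimal sketch, with the port names as examples only (run jack_lsp to see the real ones):

    #include <jack/jack.h>

    /* script the connections instead of clicking through qjackctl */
    int main(void)
    {
        jack_client_t *client = jack_client_open("patcher",
                                                 JackNoStartServer, NULL);
        if (!client)
            return 1;
        jack_connect(client, "fluidsynth:left",  "system:playback_1");
        jack_connect(client, "fluidsynth:right", "system:playback_2");
        jack_client_close(client);
        return 0;
    }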

-Lack of send%.. I just can't have a jack client doing a very high-quality reverb as wet-only processing and have clients send different amounts of the signal to it, thus saving CPU

so you want jack to do mixing + connections, instead of just connections. take 
it up with them, but variable wet/dry requires a 100% connection, so i'm not 
sure that's not a UI issue a level above jack too..


Well, seeing how jack works I'm not sure if it does zero-copy, because at some point it has to add the connections together, so I don't think adding sends per connection is impossible. Yes, I can probably make an in-between process that does a send, but this is annoying to use and probably reduces performance a lot more too.
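
A sketch of that in-between process, for concreteness: a trivial jack client that scales its input by a fixed gain before the reverb. A usable one would expose the gain via OSC or MIDI, and each extra hop costs you context switches and buffer copies, which is part of why it's a poor substitute for real per-connection sends:

    #include <jack/jack.h>
    #include <unistd.h>

    static jack_port_t *in_port, *out_port;
    static float gain = 0.3f; /* the "send amount" */

    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        for (jack_nframes_t i = 0; i < nframes; i++)
            out[i] = in[i] * gain;
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("send", JackNoStartServer, NULL);
        if (!client)
            return 1;
        in_port  = jack_port_register(client, "in", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);
        jack_set_process_callback(client, process, NULL);
        jack_activate(client);
        for (;;)
            sleep(1); /* keep the client alive; real code would handle signals */
    }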

-Lack of tempo-map based transport. I can't adapt my MIDI-only sequencer, which works in bars, beats, etc, to a frame-based transport. Say I want to run my sequencer, then go through softsynths
and record/edit in ardour.. no go.

confused here.. as to why it wouldn't work, other than your unwillingness to 
make a simple function to convert between beats/bars and samples..

Because ardour, for example, also has its own tempo map, and what I record on it won't fit. If I then change the tempo of my song or decide to make other changes (signature/etc), everything screws up.
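
The "simple function" is indeed trivial for a constant tempo; a sketch below. The trouble starts when the tempo map changes: the conversion becomes a piecewise sum over tempo segments, and every app involved must agree on the same map, which is exactly the complaint:

    #include <jack/jack.h>

    /* constant-tempo version: musical beats -> frames. a real tempo map
       turns this into a piecewise sum over segments. */
    jack_nframes_t beat_to_frame(double beat, double bpm,
                                 jack_nframes_t sample_rate)
    {
        double seconds = beat * 60.0 / bpm;
        return (jack_nframes_t)(seconds * sample_rate);
    }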

But overall, what mostly annoys me about linux audio is that most API programmers just implement the features THEY use and need, and not what others may need. And since they maintain the thing,

is this not what the economics dictate? if someone sat down with major backing 
to implement a 'grand vision' API, wouldn't we be talking about Apple?

even adding them yourself is pointless, as they will most certainly not accept patches. Ok, that's

not true, even my small trivial bugfix and usability patches have been 
incorporated into various projects.. if you're talking about something enormous, 
maybe a branch is better, at least until its utility/superiority/whatever is 
proven..

fine, they are within their rights to do it; after all, I'm not paying them to do it, they work for themselves.

All this has simply led me to decide not to use such APIs anymore and to integrate everything I do into big, monolithic apps, such as Reason, Cubase, etc, and not care

have fun, i'm personally glad to get away from that..

Yes, see what I mean? You have your point of view on the subject, and so do API/lib/app developers.
You could just go and argue about a lot of topics like:
-Should jack support OSC?
-Should apps be monolithic or a modular set of them?
-Should audio/midi plugins separate UI from core into two processes, or should they be integrated?
-Should transport work frame-based or BarBeatTick/etc?
-Should we make a more advanced plugin standard and adapt it to an app, or wait for GMPI?
-Should we use plugins at all?
-Should we advocate for low latency, or for better timing/synchronization so low latency is not needed as much?
-etc, etc

Everyone has their point of view. It's not like you will tell someone "I want to add this feature to your app/api" and they will say "Ok". You will simply get answers like:
-No, sorry, I won't accept that patch, I'd rather the library concentrate only on this.
-Why don't you do it as a separate library?
-Feel free to fork this and add it yourself.
-Yeah, I recognize it's useful, but I think it's out of place, inconsistent with the rest, which I try to keep simple.

etc, etc, etc.

And forking of course is not an option. I mean, it's a lot of work, because you have to fork the lib, maintain it, recompile all the apps that use it against yours instead of the main one, and merge bugfixes and improvements to stay compatible. Also, if you are using a distro, you can't use the binary packages because they are not compiled with your version of the lib, etc. So forking is not really worth it in most cases.

about the outside world anymore.
After all, it takes me less time to write the features I need for myself, into my own apps, than to deal with people's religious software views to get them integrated into other projects.

wish i could code that fast!


Well, I don't have much time, nor do I code very fast, but it's more like this:
I made sequencer A and softsynth B. They communicate through library X. I want sequencer A to do the things I want with softsynth B; both are my work. But library X doesn't support it, so I go talk with library X's developer, and it seems there is no chance of that being implemented in the near future, or ever. So what do I do? I'm not left with much choice: either I fork library X, or I make A and B a monolithic app. The second is the easier/faster, so I go for that.

Juan

