Re: [linux-audio-dev] Poll about linux music audio app usability

2002-06-09 Thread Paul Davis

>I think this raises some questions. My feeling is that most people
>aiming to write music on this OS expect apps with super easy and
>intuitive interfaces, where you only go through displays, knobs,
>sliders and paintable areas.
>Why don't we have apps such as Reason, Reaktor, Sonar, Sound Forge,
>etc.? I don't mean full apps, but at least projects aiming for that
>kind of thing.

Because they are really, really, really hard to write (well) and they
take a long time either way. They are classic examples of the 80/20
rule (also known as the 90/10 rule): 80% of the functionality takes
20% of the time, but the remaining 20% takes 80% of the time *and*
covers 80% of the most visible and cool features :) Ardour could
record 24 tracks of audio simultaneously more than 2 years ago - what
I originally thought was a major milestone turned out to be a tiny
pebble on the beach.

This is not like Apache, which, through its ancestral line back to
httpd, actually *invented* HTTP service, and was in turn connected to
the various FTP servers before it. We don't have *any* open source
examples of these kinds of programs to study. We have to (re)invent it
all as we go, and that takes time. That's partly why MusE and Ardour
are so important - even if they ultimately are not the best tools for
Linux, they (and many other tools) start to provide large pools of
code for other programmers to see "how it's done (so far)". "It" is not
the simplistic kind of playback offered by the many soundfile players,
but the complex stuff done by the non-linear, EDL-based engines used by
the most desirable proprietary programs.

>We do have very powerful tools, but I have to admit that for most of
>them we have to learn some script programming.

Some people think this is a good thing because the tools are
ultimately more capable and less limiting. Others disagree. 

>Do we lack good APIs? The ALSA MIDI API is the best I have seen of its
>kind. Also, should Linux apps really take this Windows approach of
>making huge bloated interfaces with lots of eye candy, or should we
>try to improve on making our apps intercommunicate with each other,
>while still giving some importance to ease of use?

Part of the point of JACK was not being forced to make this choice.

--p



RE: [linux-audio-dev] Poll about linux music audio app usability

2002-06-09 Thread Ivica Bukvic

However, I forgot to mention that it would still be nice to see
user-friendliness become a standard in Linux ;-)

> -Original Message-
> From: Ivica Bukvic [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 10, 2002 1:42 AM
> To: '[EMAIL PROTECTED]'
> Subject: RE: [linux-audio-dev] Poll about linux music audio app usability
> 
> What I think is that this is great, since there is less likelihood that
> someone else will be using the same tools I do, and hence it is less
> likely that my music will sound like thousands of others' :-)
> 
> Ivica Ico Bukvic, composer, multimedia sculptor,
> programmer, webmaster & computer consultant
> http://meowing.ccm.uc.edu/~ico/
> [EMAIL PROTECTED]
> 
> "To be is to do"   - Socrates
> "To do is to be"   - Sartre
> "Do be do be do"   - Sinatra
> "I am" - God
> 
> > -Original Message-
> > From: [EMAIL PROTECTED] [mailto:linux-audio-dev-
> > [EMAIL PROTECTED]] On Behalf Of Juan Linietsky
> > Sent: Monday, June 10, 2002 1:10 AM
> > To: [EMAIL PROTECTED]; linux-audio-
> > [EMAIL PROTECTED]
> > Subject: [linux-audio-dev] Poll about linux music audio app usability
> >
> > I thought this might be of interest to the list.
> > In a k5 poll about the usability of Linux audio apps
> > ( http://www.kuro5hin.org/poll/1023512126_OSelOkZS ),
> > the results so far, out of 38 answers, are:
> >
> > -How do you like music software for Linux?
> >
> > 2 % - Great! It has everything I need.
> >
> > 13 % - Good, but I wish apps were more user-friendly (like Reaktor or
> > SoundForge)
> >
> > 31 % - Could be better, I think the apps are not yet mature enough for
> > my needs.
> >
> > 15 % - It's unusable, the apps plain suck.
> >
> > 10 % - Don't care about composing on computers.
> >
> > 26 % - Don't care about composing.
> >
> > 
> >
> > I think this raises some questions. My feeling is that most people
> > aiming to write music on this OS expect apps with super easy and
> > intuitive interfaces, where you only go through displays, knobs,
> > sliders and paintable areas.
> > Why don't we have apps such as Reason, Reaktor, Sonar, Sound Forge,
> > etc.? I don't mean full apps, but at least projects aiming for that
> > kind of thing.
> > We do have very powerful tools, but I have to admit that for most of
> > them we have to learn some script programming.
> > Do we lack good APIs? The ALSA MIDI API is the best I have seen of its
> > kind. Also, should Linux apps really take this Windows approach of
> > making huge bloated interfaces with lots of eye candy, or should we
> > try to improve on making our apps intercommunicate with each other,
> > while still giving some importance to ease of use?
> >
> > What do you think about this issue?
> >
> >
> > Juan Linietsky





RE: [linux-audio-dev] Poll about linux music audio app usability

2002-06-09 Thread Ivica Bukvic

What I think is that this is great, since there is less likelihood that
someone else will be using the same tools I do, and hence it is less
likely that my music will sound like thousands of others' :-)

Ivica Ico Bukvic, composer, multimedia sculptor, 
programmer, webmaster & computer consultant 
http://meowing.ccm.uc.edu/~ico/ 
[EMAIL PROTECTED] 

"To be is to do"   - Socrates
"To do is to be"   - Sartre
"Do be do be do"   - Sinatra
"I am" - God

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:linux-audio-dev-
> [EMAIL PROTECTED]] On Behalf Of Juan Linietsky
> Sent: Monday, June 10, 2002 1:10 AM
> To: [EMAIL PROTECTED]; linux-audio-
> [EMAIL PROTECTED]
> Subject: [linux-audio-dev] Poll about linux music audio app usability
> 
> I thought this might be of interest to the list.
> In a k5 poll about the usability of Linux audio apps
> ( http://www.kuro5hin.org/poll/1023512126_OSelOkZS ),
> the results so far, out of 38 answers, are:
> 
> -How do you like music software for Linux?
> 
> 2 % - Great! It has everything I need.
> 
> 13 % - Good, but I wish apps were more user-friendly (like Reaktor or
> SoundForge)
> 
> 31 % - Could be better, I think the apps are not yet mature enough for
> my needs.
> 
> 15 % - It's unusable, the apps plain suck.
> 
> 10 % - Don't care about composing on computers.
>
> 26 % - Don't care about composing.
> 
> 
> 
> I think this raises some questions. My feeling is that most people
> aiming to write music on this OS expect apps with super easy and
> intuitive interfaces, where you only go through displays, knobs,
> sliders and paintable areas.
> Why don't we have apps such as Reason, Reaktor, Sonar, Sound Forge,
> etc.? I don't mean full apps, but at least projects aiming for that
> kind of thing.
> We do have very powerful tools, but I have to admit that for most of
> them we have to learn some script programming.
> Do we lack good APIs? The ALSA MIDI API is the best I have seen of its
> kind. Also, should Linux apps really take this Windows approach of
> making huge bloated interfaces with lots of eye candy, or should we
> try to improve on making our apps intercommunicate with each other,
> while still giving some importance to ease of use?
> 
> What do you think about this issue?
> 
> 
> Juan Linietsky





Re: [linux-audio-dev] Audio routing issues for linux..

2002-06-09 Thread Paul Davis

>Here's a problem I commonly find in existing audio apps or in
>programming audio apps: Audio routing.
>
>The way things work now, it's hard for apps to implement a standard
>way of:

First, you can't do any better on MacOS or Windows, because ReWire or
DirectConnect are the only (low latency) options and many programs
don't support them. This is not a defense of the status quo, just an
observation about how cutting edge this idea is, and to point out how
much progress we are making on it.

>1-The application has to be able to "provide" inputs and outputs,
>which may or may not be used. By default an app may connect directly to
>the output or just not connect at all, expecting YOU to define the
>audio routes. Most of the time, unless using a direct output or known
>audio modifier objects, an app will not want to connect anywhere from
>within itself. You will do that from an abstracted interface.

Is there some problem here? The app should be able to save its
state. If you invoke it again and the destinations/sources exist, it
can restore its own state. If they don't, then you're asking it to do
the impossible.

>4-I know JACK likes to give root privileges to apps that need low
>latency. I imagine that for normal apps this isn't an issue, so this
>should be considered.

This is solved if you use capabilities. Since capabilities are known
to be The Future of POSIX (which includes Linux), this is the correct
solution. If you have a kernel with capabilities enabled, JACK can be
used without any root permissions for any program except "jackstart",
which is used to start the server itself.

Otherwise, this is just a basic problem with any POSIX-like multiuser
OS. You can't grant the right to run SCHED_FIFO to just anyone, because
it implies the capability to DoS the machine. If Linux were a single-user
system, perhaps this would be OK (though perhaps not). But anyway,
Linux isn't, and so one way or another, there are hoops to jump
through here.

And to clarify things a little better: it's not applications
that need low latency. All JACK clients get whatever latency the
server provides them. It's *users* who need low latency, and it's
therefore users who choose to run jackd with or without realtime
scheduling.
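To make the privilege question concrete, this is essentially all that
running with realtime scheduling asks of the kernel - and it is the call
that fails with EPERM when you have neither root nor (on a
capabilities-aware kernel) CAP_SYS_NICE. A minimal sketch, not JACK's
actual code:

/* sketch.c - request SCHED_FIFO; minimal sketch, not JACK's actual code */
#include <sched.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sched_param param;

    memset(&param, 0, sizeof(param));
    param.sched_priority = 10;   /* arbitrary example priority */

    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        /* EPERM here means: no root privileges and no CAP_SYS_NICE */
        fprintf(stderr, "SCHED_FIFO refused: %s\n", strerror(errno));
        return 1;
    }
    printf("running with realtime scheduling\n");
    return 0;
}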

>Probably the easiest and most natural approach to this is just
>integrating JACK into ALSA in some way.

Look, LAD has been over this dozens of times:

 * ALSA already has the "share" PCM device type which allows
 multiple access to the same underlying hardware. It hasn't
 been tested very much, but it does basically work.

 * such a design is like artsd and esd in that it cannot guarantee
 synchronous execution of all participants in the "sharing".
 This might be OK for some purposes, but it's not OK for
 serious audio work.

 * aserver by itself doesn't permit inter-app routing, but alsa-lib
 could support such a thing. The LAD archives are full of a raging
 debate (mostly between Abramo and myself) about the appropriateness
 of such a design strategy.

 * the only known design that enforces synchronous execution is the one
 used by JACK, PortAudio, CoreAudio, VST, LADSPA, ReWire, TDM and
 many others.

So, yes, we can carry on promoting APIs that don't enforce synchronous
execution and end up with a complete mess, or we can promote APIs that
will allow this (even if a given implementation might not use that
feature). 
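To show what "synchronous execution" means in practice: every JACK client
hands the server a callback, and the server runs all of those callbacks in
lock-step, once per period. A minimal pass-through client looks roughly
like this (a rough sketch only, not reference code - the client and port
names are arbitrary and error handling is omitted):

#include <jack/jack.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *in_port, *out_port;

/* Called by jackd once per period, in lock-step with every other client. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    memcpy(out, in, sizeof(*in) * nframes);   /* straight pass-through */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_new("passthru");  /* arbitrary name */
    if (!client) {
        fprintf(stderr, "jackd not running?\n");
        return 1;
    }
    jack_set_process_callback(client, process, NULL);
    in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsInput,  0);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);   /* process() now runs inside the server's cycle */
    sleep(60);               /* routing is done externally, e.g. by jack_connect */
    jack_client_close(client);
    return 0;
}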

As I've said before, Linux is not like MacOS, where Apple have been
able to say "CoreAudio is the way". All we can do is to advocate
certain design approaches. There are a bunch of people out there who
seem to think that using OSS and/or ALSA makes sense. We can either
persuade them that they are wrong, or make do with non-synchronous
execution, with or without inter-app data sharing. But neither you nor
I nor anyone else can make that choice for other developers.

We discussed this issue of audio routing for at least 2 years before
JACK was written. No ideas other than the one represented by JACK
emerged, as far as I recall, until Abramo suggested extending
alsa-lib. Very few people liked that idea, so now we have JACK as a
viable option (well, the folks on jackit-dev think so, anyway), and
alsa-lib remains without any options in this area. As they say, "show
me the code!" :)

--p





[linux-audio-dev] Poll about linux music audio app usability

2002-06-09 Thread Juan Linietsky

I thought this might be of interest to the list.
In a k5 poll about the usability of Linux audio apps
( http://www.kuro5hin.org/poll/1023512126_OSelOkZS ),
the results so far, out of 38 answers, are:

-How do you like music software for Linux?

2 % - Great! It has everything I need.

13 % - Good, but I wish apps were more user-friendly (like Reaktor or
SoundForge)

31 % - Could be better, I think the apps are not yet mature enough for
my needs. 

15 % - It's unusable, the apps plain suck.  

10 % - Don't care about composing on computers.

26 % - Don't care about composing.



I think this raises some questions. My feeling is that most people
aiming to write music on this OS expect apps with super easy and
intuitive interfaces, where you only go through displays, knobs,
sliders and paintable areas.
Why don't we have apps such as Reason, Reaktor, Sonar, Sound Forge,
etc.? I don't mean full apps, but at least projects aiming for that
kind of thing.
We do have very powerful tools, but I have to admit that for most of
them we have to learn some script programming.
Do we lack good APIs? The ALSA MIDI API is the best I have seen of its
kind. Also, should Linux apps really take this Windows approach of
making huge bloated interfaces with lots of eye candy, or should we
try to improve on making our apps intercommunicate with each other,
while still giving some importance to ease of use?

What do you think about this issue? 


Juan Linietsky


[linux-audio-dev] Audio routing issues for linux..

2002-06-09 Thread Juan Linietsky

Here's a problem I commonly find in existing audio apps or in
programming audio apps: Audio routing.

The way things work now, it's hard for apps to implement a standard
way of:

-Routing audio from one app to another
-Sharing audio devices/routes
-Applying audio modifiers (effects)

LADSPA is great for integrating into your programs, and very fast too,
but it's still not what I'm referring to.

For example, what would you think of the following audio setup?


I choose my favorite audio composing app, say MusE.
Now I choose my favorite softsynth, iiwusynth.
ALSA works great for this... Now,
using a theoretical audio routing API, iiwusynth will
provide me with the following audio sources: a stereo channel (global
mix), 16 more stereo channels (for the instruments) and 2 more channels
(effect send buffers). Now, I could create an addition object and connect
channels 1-5 of iiwusynth to it. I could also take one of the effect
send buffer channels, connect it to a reverb object, and connect that
to the final mix. I could also take channel 6 of iiwusynth, connect it
to a distortion object and connect it back to the mix.

Now let's say that, since iiwusynth's performance isn't that great and
I'm using so many channels that I'm running out of CPU (don't kill me,
Josh/Peter ;), it seems that I want to do a typical multichannel dump:
MusE will provide me with an output plug to which I can connect all
the output of the network I built before. This way I don't need a
sound card that provides recording from the output (and even if it
does, many do it using a DA/AD conversion where there is a certain
quality loss, or it just adds noise/distortion).
Now let's say that I want to use saturno besides iiwusynth as a synth
output (with the same buffer approach on it). This helps me, because I
just couldn't do it if I had a sound card without multichannel output.

OK, done with the output. Now let's say I have a nice base going and I
want to play my guitar over it. Using the same approach, I'll connect
the guitar to the sound card's line in, then, in the audio network,
connect the line in to an object or program that provides a special
kind of distortion, then a flanger, etc. Maybe the line in of my
computer is a bit noisy, so I'll probably want to go through a noise
gate first. At the end of the chain, I'll plug it into an input in MusE.
Now I can play my guitar over what I'm doing!

For a final touch, I can connect all the outputs to a mixer, and
adjust everything until I like it.
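Just to make the idea concrete: with something like JACK, the whole
guitar chain above would be wired up from outside the apps, roughly like
this (an untested sketch - every client and port name here is invented,
and the helper clients are assumed to exist):

#include <jack/jack.h>
#include <stdio.h>

int main(void)
{
    /* "patchbay" is an arbitrary name; the other clients/ports are assumed */
    jack_client_t *patchbay = jack_client_new("patchbay");
    if (!patchbay) {
        fprintf(stderr, "is the JACK server running?\n");
        return 1;
    }
    jack_activate(patchbay);
    /* line in -> noise gate -> distortion -> flanger -> MusE input */
    jack_connect(patchbay, "system:capture_1", "noisegate:in");
    jack_connect(patchbay, "noisegate:out",    "distortion:in");
    jack_connect(patchbay, "distortion:out",   "flanger:in");
    jack_connect(patchbay, "flanger:out",      "muse:input_1");
    jack_client_close(patchbay);
    return 0;
}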

Yes, I know programs such as aRts/JACK can do this kind of
thing, but there are some issues with this.

1-The application has to be able to "provide" inputs and outputs,
which may or may not be used. By default an app may connect directly to
the output or just not connect at all, expecting YOU to define the
audio routes. Most of the time, unless using a direct output or known
audio modifier objects, an app will not want to connect anywhere from
within itself. You will do that from an abstracted interface.

2-JACK is great, but if you want to run a certain synthesizer that
doesn't use JACK together with one that does, and you have a consumer
sound card that doesn't support multichannel output, you are dead.

3-You may also want to put just any program that uses native OSS/ALSA
through this. Imagine running xmms and wanting to put the sound through
a better equalizer than the one included. Instead of bothering to
write a SPECIALIZED equalizer plugin for xmms, you just redirect the
output to an equalizer program that takes many inputs/outputs.
Or better yet, imagine you want to play a game or watch a movie and
you want special audio settings; again, you just redirect to such an
object.

4-I know JACK likes to give root privileges to apps that need low
latency. I imagine that for normal apps this isn't an issue, so this
should be considered.

5-You know you can't force application owners to convert their stuff to
JACK/aRts/etc. You'd also rather not waste your time converting their
applications yourself, and the application owners would rather not have
to support multiple APIs. So, this saves time for all of us.

Probably the easiest and most natural approach to this is just
integrating JACK into ALSA in some way.

What do you think?

Regards!

Juan Linietsky 



Re: [linux-audio-dev] Low latency and X11 Direct Rendering

2002-06-09 Thread Fernando Pablo Lopez-Lezcano

> I've been running a kernel with the DRM patches since January and it has
> been rock solid. However, my machines are UP only, so if someone happens
> to be running a kernel with the patches on an SMP machine, I would like
> to know if it works in real life.

I've been using them with a Radeon card for about a month in a dual Athlon
machine and so far it looks good. I was getting latency hits of about 18
msecs without the DRI patches.

[But I'm still getting latency hits of around 8-10 msecs when using the
3ware SCSI RAID driver (3w-). I tried to find out what is causing them
but eventually gave up, at least for now.]

BTW, thanks a lot for the patch...
-- Fernando




Re: [linux-audio-dev] Low latency and X11 Direct Rendering

2002-06-09 Thread Jussi Laako

Vincent Touquet wrote:
> 
> Do you know if these
> patches will make it into
> the mainline kernel ?

I offered those to Andrew Morton months ago.
I think it's up to him to decide, Andrew?

> I don't know what objections there
> could be to a conditional_reschedule() ?

I have read the DRM driver sources and didn't find any spinlocks being held
at that time.

The very first version of the patch had some hacks to ethernet drivers,
and those had some spinlock issues. I have removed the ethernet parts
since then (January 19). I am still tempted to make some changes to the
3Com, Intel and Realtek ethernet drivers, which have bad latency
behaviour in some situations.
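To make the spinlock point concrete: each patched spot is basically a
conditional reschedule dropped into a long-running loop, and schedule()
must never be called with a spinlock held, which is why I checked the DRM
sources for that first. The shape is roughly this (an illustrative
2.4-style sketch, not a literal excerpt from the patch; copy_chunk() and
MAX_CHUNK are made up for the example):

while (bytes_left) {
        chunk = bytes_left > MAX_CHUNK ? MAX_CHUNK : bytes_left;
        copy_chunk(dst, src, chunk);    /* made-up helper doing the long work */
        dst += chunk; src += chunk; bytes_left -= chunk;

        if (current->need_resched)      /* conditional reschedule point */
                schedule();             /* never valid under a spinlock */
}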

I've been running a kernel with the DRM patches since January and it has
been rock solid. However, my machines are UP only, so if someone happens
to be running a kernel with the patches on an SMP machine, I would like
to know if it works in real life.

The original motivation for the patch set was to create a kernel that is
able to run my signal analysis software properly under full load. It does
heavy DSP operations on 8-32 channels of audio, together with heavy
network and graphics load.


- Jussi Laako

-- 
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B  39DD A4DE 63EB C216 1E4B
Available at PGP keyservers




Re: [linux-audio-dev] Low latency and X11 Direct Rendering

2002-06-09 Thread Vincent Touquet

Do you know if these
patches will make it into
the mainline kernel ?

I don't know what objections there
could be to a conditional_reschedule() ?

regards
Vincent

On Sun, Jun 09, 2002 at 11:00:53PM +0300, Jussi Laako wrote:
>Yes, I have made some lowlatency additions to Matrox and ATI drivers.
>At least latencies caused by my ATI Radeon dropped from ~100 ms to < 1 ms.
>The -ll version contains full lowlatency + lowlatency DRM.
>
>See http://uworld.dyndns.org/projects/linux/
>   - Jussi Laako



Re: [linux-audio-dev] Low latency and X11 Direct Rendering

2002-06-09 Thread Jussi Laako

Enrique Robledo Arnuncio wrote:
> 
> Running latencytest I have found quite bad behaviour in the high
> X11 load test when the X server has the DRI module loaded and active.
>
> Is anyone there using DRI and the lowlatency patch at the same time?
> 
> Have you experienced this kind of problems?
> 
> Any known solution? I use OpenGL for real time visualization of audio,
> and it is not nice if I need to disable HW accel!!!

Yes, I have made some lowlatency additions to Matrox and ATI drivers.
At least latencies caused by my ATI Radeon dropped from ~100 ms to < 1 ms.
The -ll version contains full lowlatency + lowlatency DRM.

See http://uworld.dyndns.org/projects/linux/


- Jussi Laako

-- 
PGP key fingerprint: 161D 6FED 6A92 39E2 EB5B  39DD A4DE 63EB C216 1E4B
Available at PGP keyservers




[linux-audio-dev] How to "connect" two audio devices with alsa?

2002-06-09 Thread Richard Guenther

Hi!

I'd like to create a virtual 2-(stereo)-channel ALSA device from one
ISA SB AWE and one on-board VIA ALSA device. Has anyone figured out
how to do this using .asoundrc magic? [I know Jaroslav knows and told
Joern, but I think this is of greater interest.]

Joern/Jaroslav, can you post a quick howto on this topic? Preferably
including some .asoundrc quoting...
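My own guess is that the "multi" PCM type is involved, something along
these lines, but this is an untested sketch: the hw:0/hw:1 card numbers
are just placeholders, and since nothing synchronises the two cards'
sample clocks I would expect them to drift apart:

# ~/.asoundrc (untested sketch; card numbers are placeholders)
pcm.both {
    type multi
    slaves.a.pcm "hw:0,0"              # e.g. the SB AWE
    slaves.a.channels 2
    slaves.b.pcm "hw:1,0"              # e.g. the on-board VIA
    slaves.b.channels 2
    bindings.0 { slave a; channel 0; }
    bindings.1 { slave a; channel 1; }
    bindings.2 { slave b; channel 0; }
    bindings.3 { slave b; channel 1; }
}

ctl.both {
    type hw
    card 0
}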

Thanks, Richard.

--
Richard Guenther <[EMAIL PROTECTED]>
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/
The GLAME Project: http://www.glame.de/