Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 12:33 +0900, michael noble wrote:
 
 Speaking of existing work, I vaguely recall mention of a
 plugin with a
 Qt GUI? Where is this, I need one for testing...
 
 
 Take a look at latest svn of CLAM Network Editor. It is apparently
 able to export networks as LV2 with a Qt GUI. See
 http://clam-project.org/wiki/Development_screenshots

Interesting. Judging by the fact that they're shown in Ardour, they must
be doing the wrapping in the UI code. It would be nice if the next CLAM
release just exposed Qt UIs directly, and Gtk hosts (e.g. Ardour)
switched to a library to do the embedding.

-dr



___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Rui Nuno Capela
On Wed, 23 Feb 2011 02:58:56 -0500, David Robillard d...@drobilla.net 
wrote:

On Wed, 2011-02-23 at 12:33 +0900, michael noble wrote:


Speaking of existing work, I vaguely recall mention of a
plugin with a Qt GUI? Where is this, I need one for 
testing...



Take a look at latest svn of CLAM Network Editor. It is apparently
able to export networks as LV2 with a Qt GUI. See
http://clam-project.org/wiki/Development_screenshots


Interesting. Judging by the fact that they're shown in Ardour, they 
must
be doing the wrapping in the UI code. It would be nice if the next 
CLAM

release just exposed Qt UIs directly, and Gtk hosts (e.g. Ardour)
switched to a library to do the embedding.



red herring alert! :)

this features a qt-widget embedded *in* a gtk-widget via gtk-socket 
w/e--the lv2 ui plugin produced by the clam framework implicitly assumes 
the lv2_gtk_ui (pseudo)extension and for that matter it is a plain 
gtk-gnostic ui :)--the host must still get to libgtk et al. to handle 
the gtk widget/socket--i'm afraid this is not what Dave really asked for 
:/
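
A rough sketch of the GtkSocket/XEmbed technique being described (GTK2 + Qt4
era), for illustration only -- this is not CLAM's actual wrapper code, and it
glosses over the Qt event loop that must also be pumped inside the Gtk host:

  /* Rough sketch of the GtkSocket/XEmbed wrapping described above (GTK2 +
   * Qt4 era).  Illustrative only -- NOT CLAM's actual wrapper code, and the
   * widget setup is an assumption. */
  #include <gtk/gtk.h>
  #include <QtGui/QX11EmbedWidget>
  #include <QtGui/QLabel>

  static void on_socket_realized(GtkWidget* socket, gpointer)
  {
      // Once the socket has an X window, embed the Qt side into it.
      QX11EmbedWidget* embed = new QX11EmbedWidget();
      new QLabel("Qt UI lives here", embed);      // stand-in for the real Qt UI
      embed->embedInto(gtk_socket_get_id(GTK_SOCKET(socket)));
      embed->show();
  }

  GtkWidget* make_wrapped_ui()
  {
      GtkWidget* socket = gtk_socket_new();       // a plain Gtk widget
      g_signal_connect(socket, "realize",
                       G_CALLBACK(on_socket_realized), NULL);
      return socket;  // handed to the host as the lv2_gtk_ui LV2UI_Widget
  }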


cheers
--
rncbc aka Rui Nuno Capela
rn...@rncbc.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] On LAD (WAS: Re: [OT] IR: LV2 Convolution Reverb)

2011-02-23 Thread Alexandre Prokoudine
On 2/23/11, David Robillard wrote:

 Lignux will never replace OSX as The music production platform until our
 plugin technology is at least as capable.

That makes it sound as if plug-ins were the only obstacle :) You
probably didn't mean it that way :)

Also, the notion of replacing Mac with Linux is somewhat, er, weird.
Do we really do all this to get Mac out of the way, or do we do things
because we happen to have our own ideas we want to try? I'd say the
latter.

Other than that, completely agreed.

Alexandre Prokoudine
http://libregraphicsworld.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Alexandre Prokoudine
On 2/23/11, Philipp Überbacher hollun...@lavabit.com wrote:

 Again I disagree; in my opinion web UIs have exactly one benefit and
 many drawbacks. The benefit is that they can be accessed from anywhere
 with an internet connection and a sufficiently capable browser (which is
 pretty much everywhere these days) without installing anything. The
 drawbacks are too many to list really, but I'll try to show some with an
 example or two:

 Example number one is the CUPS web interface, accessible using the
 obvious address http://localhost:631. First of all it gives me the
 creeps every time I have to use it, because I have to use the browser to
 modify my system.

So problem number one is that you have an old-fashioned view of system
configuration.

 Besides that the interface is slow and buggy, despite running on the
 same machine. I wouldn't call it a good interface in general.

So problem number two is that because CUPS's UI is bad, you
extrapolate that to other web UIs. Very interesting.

 The other example is google docs/spreadsheet which I have to use
 sometimes. There are the obvious privacy concerns; it should be clear
 that giving your possibly sensitive data to what's probably the world's
 biggest ad company isn't a good idea.

So problem number three is conspiracy theories.

 the way of the user interface. You want keyboard shortcuts to make your
 life easier? Forget it, chances are the browser will chew them, all you
 get is the mouse.

So problem number four is that you have no idea whatsoever about the
possibility of using hotkeys in a web app. Just FYI, Gmail has lots of
shortcuts for replying, forwarding and navigating between mails.
I use it all the time. Why you have no idea it is possible with AJAX
-- I really couldn't say. But you said something about
short-sightedness :-P

 Accessibility? Forget it, text browsers don't do JS.

So problem number five is being one of the few hundred people around
the globe who still use text browsers in a world of Firefox, Chrome,
Opera, Safari and IE.

Ever heard of http://www.w3.org/WAI/ btw?

In other words, most of your points are made on the basis of you not
being up to date with modern technologies.

 Sorry, I could rant on forever.

Pray continue. I love reading stuff like that.

Alexandre Prokoudine
http://libregraphicsworld.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread torbenh
On Wed, Feb 23, 2011 at 09:03:03AM +, Rui Nuno Capela wrote:
 On Wed, 23 Feb 2011 02:58:56 -0500, David Robillard d...@drobilla.net
 wrote:
 On Wed, 2011-02-23 at 12:33 +0900, michael noble wrote:
 
 Speaking of existing work, I vaguely recall mention of a
 plugin with a Qt GUI? Where is this, I need one for
 testing...
 
 
 Take a look at latest svn of CLAM Network Editor. It is apparently
 able to export networks as LV2 with a Qt GUI. See
 http://clam-project.org/wiki/Development_screenshots
 
 Interesting. Judging by the fact that they're shown in Ardour,
 they must
 be doing the wrapping in the UI code. It would be nice if the next
 CLAM
 release just exposed Qt UIs directly, and Gtk hosts (e.g. Ardour)
 switched to a library to do the embedding.
 
 
 red herring alert! :)
 
 this features a qt-widget embedded *in* a gtk-widget via gtk-socket
 w/e--the lv2 ui plugin produced by the clam framework implicitly
 assumes the lv2_gtk_ui (pseudo)extension and for that matter it is a
 plain gtk-gnostic ui :)--the host must still get to libgtk et al. to
 handle the gtk widget/socket--i'm afraid this is not what Dave
 really asked for :/

i think this IS what dave asked for :)
he can just take the gtk shell code, move it into his library, and
it will be a qt plug :)

 
 cheers
 -- 
 rncbc aka Rui Nuno Capela
 rn...@rncbc.org
 ___
 Linux-audio-dev mailing list
 Linux-audio-dev@lists.linuxaudio.org
 http://lists.linuxaudio.org/listinfo/linux-audio-dev

-- 
torben Hohn
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime threads and security

2011-02-23 Thread Olivier Guilyardi
Hello Daniel,

thanks for sharing your experience.

On 02/18/2011 02:43 PM, Daniel Poelzleithner wrote:
 On 02/17/2011 09:40 PM, Olivier Guilyardi wrote:
 
 I have written a dynamic system optimizer for linux called ulatencyd. I
 stumbled over rt issues more by accident myself. I use dynamic cgroups
 to adjust the resources of the system to what heuristic rules think the
 user expects. At least in the default desktop configuration, but you can
 actually optimise every system for every load.

I assume it's here:
https://github.com/poelzi/ulatencyd/

 Now, what you can do if you have no trust in processes at all is to
 create dynamic cgroups with carefully calculated cpu.rt_runtime_us
 values for the processes. If you have trusted processes, you may give
 them separate cgroups with fixed runtimes, like for example your mixer
 process. But remember: you can't overcommit rt_runtime_us and they will
 not borrow their rt if they don't use it.

The dynamic cgroups solution seems good.

On Android there basically is one privileged and trusted audio process,
audioflinger, the sound server, which performs mixing and accesses the hardware.
All audio clients are untrusted. I think there is a need to both assign a
careful cpu.rt_runtime_us to each client and also to limit the number of clients
which can run at the same time. This would keep the CPU time assigned
to all clients within a certain quota, and also prevent clients from fighting
with each other for realtime bandwidth.
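
As a rough illustration of the per-client budgeting idea, something like the
following could be done (the cgroup mount point, group layout and numbers are
assumptions, not how ulatencyd actually does it):

  /* Rough sketch of confining one untrusted audio client to its own cgroup
   * with a fixed realtime budget.  The mount point, group layout and budget
   * values are assumptions for illustration, not ulatencyd's behaviour. */
  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/types.h>

  static void write_value(const char* dir, const char* file, long value)
  {
      char path[256];
      snprintf(path, sizeof(path), "%s/%s", dir, file);
      FILE* f = fopen(path, "w");
      if (f) { fprintf(f, "%ld\n", value); fclose(f); }
  }

  void confine_client(pid_t pid)
  {
      char group[256];
      snprintf(group, sizeof(group),
               "/sys/fs/cgroup/cpu/audio_clients/client_%d", (int)pid);
      mkdir(group, 0755);

      // At most 50 ms of RT execution per 1 s period for this client.  The
      // sum of all children's rt_runtime_us must stay within the parent's
      // budget; it cannot be overcommitted, as noted above.
      write_value(group, "cpu.rt_period_us",  1000000);
      write_value(group, "cpu.rt_runtime_us", 50000);

      write_value(group, "tasks", pid);  // move the client into the group
  }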

Can you limit the maximum number of realtime processes with ulatencyd?

--
  Olivier
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Alexandre Prokoudine
On 2/23/11, Alexandre Prokoudine wrote:

 I'm thinking mostly about blind users when I talk about accessibility,
 and I'm not sure how usable graphical browsers are for the blind.

 Again, it's up to web developers how much effort they put into making
 their apps accessible.

Oh, and speaking of accessibility, both GTK+ and Qt are somewhat
broken: GTK+ on Windows doesn't support accelerators in non-Latin
locales, and Qt on (at least) Linux doesn't do it either. How about
fixing that first? :)

Alexandre Prokoudine
http://libregraphicsworld.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LAD Activity (WAS: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 13:01 +0200, Stefano D'Angelo wrote:
 2011/2/23 Alexandre Prokoudine alexandre.prokoud...@gmail.com:
  On 2/22/11, David Robillard wrote:
 
  I have a working plugin (called dirg) that provides a UI by hosting a
  web server which you access in the browser. It provides a grid UI either
  via a Novation Launchpad, or in the browser if you don't have a
  Launchpad. Web UIs definitely have a ton of wins (think tablets, remote
  control (i.e. network transparency), etc.)
 
  I also have a complete LV2 message system based on Atoms which is
  compatible with / based on the event extension.  Atoms, and thus
  messages, can be serialised to/from JSON (among other things,
  particularly Turtle).
 
  Any of them available to have a look at?
 
  Currently dirg provides the web server on its own with no host
  involvement, but every plugin doing this obviously doesn't scale, so
  some day we should figure this out... first we need an appropriately
  high-level/powerful communication protocol within LV2 land (hence the
  messages stuff).
 
  Where do you stand with priorities now? That sounds like something
  very much worth investing time in.
 
  You see, one thing I'm puzzled about is that you have beginnings of
  what could be significant part of a potentially successful cloud
  computing audio app, and then you talk about how donations don't even
  pay your rent :)
 
 Before I totally forget about it... I think it might be a very clever
 thing to do to have some web-based thing (wiki or whatever, ideally a
 social network kind of thing) where LAD people can announce what they
 are working on and what their plans are, so that it's easier to: a.
 know about it and b. start collaborating, etc.

There's Planet LAD, made for this reason a while ago. RSS and a planet
is definitely The Way to do this, IMO. I am subscribed to it in my feed
reader and keep up to date. If everything interesting going on were
pushed to the feed, it would indeed be very nice...

 For example, Dave is doing lots of stuff that I plan to reuse, but I
 only know it because I happen to lurk on #lv2 on freenode from time to
 time, and the same goes for lots of stuff I'm seeing coming out
 lately.

Yeah, been meaning to blog more, what can I say :)

I'll throw one out today about the UI stuff, anyway. Qt plugins embedded
in Ingen working at least somewhat via a library, with minimal nuisance
on either end.

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LAD Activity (WAS: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread Philipp Überbacher
Excerpts from Alexandre Prokoudine's message of 2011-02-23 14:01:10 +0100:
 On 2/23/11, Philipp Überbacher wrote:
 
  I do live behind the moon when it comes to web technology, but isn't rss
  meant for notifications? Maybe simpler, email?
 
 Nope. I'll elaborate below.
 
  If it needs a social networking thing for some reason, maybe diaspora
  will do the trick? With diaspora chances are better that someone will
  write a good user interface of some sort.
 
 Yes, diaspora is much closer to what's required.

But the question is whether all that writing is necessary and
adds something. Social networks are general purpose, not special
purpose. Writing stuff there is additional effort.

  Right now github and the like seem to be used as a social network
  thing among developers, but I don't think it's a good idea to rely on
  such a service for communication.
 
 Yes, a good idea IMO would be a service on top of existing services
 like github, twitter, etc. They all expose APIs after all, no?

Each of those will be used by some people at most, so do you want to tie
them all together? Basing everything on a single service such as github
will force people to choose between exclusion and adoption of said
service, a really bad idea. The benefit of github and similar is that
commits and other stuff that happens can be monitored, so the dev doesn't
need to expend additional effort to let people know what he's doing.

 Now, here is why rss, email et al don't do a good enough job: they
 don't provide perspective and they don't expose connections between
 people right away.

There could be a catch-all mailing list. For rss and the like, there
are aggregators like planet (which is in use already, for example:
http://planet.linuxaudio.org/ [but includes stuff like Traktor...])

 I've served several years as a social hub for free
 graphics software developers and I can tell you that while email and
 Jabber and IRC and whatnot, as well as F2F meetings at LGM, LAC etc
 are the ultimate communication means, it's very important to stay
 tuned to all things happening. For the same reason I wouldn't limit such a
 dream service to audio developers, because audio is related to video
 (audio effects in NLE, JACK compatibility), and video is related to
 things like static graphics and video drivers (likewise audio is
 related to kernel, ALSA and FFADO), and so it all is intertwined.
 
 AFAIK, Linux.com was supposed to become a kind of social service.
 Maybe it's worth investigating what their plans are.
 
 Alexandre Prokoudine
 http://libregraphicsworld.org

I do agree that it should be open to related fields, but there should
be some barriers IMHO, simply because I don't think closed-source
non-Linux development would add anything (just an example, because of
that Traktor review; I'd be pissed to see stuff like that).

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 11:03 +, Rui Nuno Capela wrote:
 On Wed, 23 Feb 2011 10:57:25 +0100, torbenh torb...@gmx.de wrote:
  On Wed, Feb 23, 2011 at 09:03:03AM +, Rui Nuno Capela wrote:
  On Wed, 23 Feb 2011 02:58:56 -0500, David Robillard d...@drobilla.net
  wrote:
  On Wed, 2011-02-23 at 12:33 +0900, michael noble wrote:
  
  Speaking of existing work, I vaguely recall mention of a
  plugin with a Qt GUI? Where is this, I need one for
  testing...
  
  
  Take a look at latest svn of CLAM Network Editor. It is apparently
  able to export networks as LV2 with a Qt GUI. See
  http://clam-project.org/wiki/Development_screenshots
  
  Interesting. Judging by the fact that they're shown in Ardour,
  they must
  be doing the wrapping in the UI code. It would be nice if the next
  CLAM
  release just exposed Qt UIs directly, and Gtk hosts (e.g. Ardour)
  switched to a library to do the embedding.
  
 
  red herring alert! :)
 
  this features a qt-widget embedded *in* a gtk-widget via gtk-socket
  w/e--the lv2 ui plugin produced by the clam framework implicitly
  assumes the lv2_gtk_ui (pseudo)extension and for that matter it is a
  plain gtk-gnostic ui :)--the host must still get to libgtk et al. to
  handle the gtk widget/socket--i'm afraid this is not what Dave
  really asked for :/
 
  i think this IS what dave asked for :)
  he can just move the gtk shell code, and move it into his library, 
  and
  it will be a qt plug :)
 
 
  oh come on. do you mean Dave's library will have a so called 
  specialized gtk shell for each toolkit out there? wrapping everything 
  under gtk is not what i would call a pretty good solution at least the 
  one we've agreed about earlier.
 
  Fons is right suggesting a common-denominator term: the lv2_ui 
  descriptor should have carried a system window-id instead, in 
  alternative to, a plain toolkit-dependent widget pointer that 
  lv2_gtk_ui's been doing all this time as LV2UI_Widget*. on X11 based 
  systems it would cast to a Window type; on windows it would be a HWND; 
  i'm sure there's something native and equivalent on macosx/carbon/cocoa 
  w/e... depending on the system the plugins are built/targeted then the 
  host will/must know what to do with that window-id--embed, show, hide, 
  realize, destroy, trap and send events, etc... look, it is this 
  window-id in fact the corner stone for the gtk-socket to xembed a 
  qt-widget on the clam example.
 
  imnsho, a GtkWidget* is not, cannot and never will be the way to 
  toolkit agnosticism :) why is that not obvious to you?

You seem to have forgotten, or decided to ignore, every single solitary
point brought up in this conversation over the night ;)

You are wrong, but why that is so, and what the correct solution is, has
been described plenty enough already, so I won't waste time doing so
once again. If you're just interested in blissfully ignorant trolling
about Gtk gnosticism or whatever, I'll adjust my mental ignore list
accordingly.  If you're actually interested in making the solution
happen, then let's do it:

This is Qt LV2 UIs embedded in Ingen:

http://drobilla.net/files/qt_in_ingen.png

Ingen has absolutely no idea about anything Qt or X11 related
whatsoever, and the float plugin has absolutely no idea about anything
Gtk or X11 related whatsoever. They both just do exactly what their
developers want, without any of the PITA of dealing with foreign
toolkits and window systems and embedding and whatever else. In other
words, this solution is superior.

This, of course, means I wrote that library:

http://svn.drobilla.net/lad/trunk/suil/

I have not tested the Gtk-in-Qt direction yet. You're a Qt host author.
Hint, hint. I got stuck in qtractor autohell and gave up last night.

The relevant code in Ingen is here:

http://svn.drobilla.net/lad/trunk/ingen/src/client/PluginUI.cpp

The "make a UI set" thing is a bit tedious, but is needed to avoid an SLV2
=> Suil dependency. I am thinking that maybe I should make SLV2 depend
on this library(*) and provide a simpler interface there (basically just
suil_instance_new, which is the real meat). The next SLV2 release will
break API slightly anyway. Feedback from you or any other SLV2 users is
welcome; I am inclined to break the SLV2 UI related API if it makes it
obvious and trivial for hosts to do the right thing.
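
For illustration, host-side use of such a function might look roughly like
this; the exact signature is an assumption based on the later released Suil
API, and the plugin/UI URIs and paths are hypothetical:

  /* Hypothetical sketch of host-side UI instantiation via suil_instance_new().
   * The signature shown is an assumption based on the later released Suil
   * API; the plugin/UI URIs and paths are made up for illustration. */
  #include <suil/suil.h>

  SuilInstance* open_plugin_ui(SuilHost* host, void* controller)
  {
      return suil_instance_new(
          host, controller,
          "http://lv2plug.in/ns/extensions/ui#GtkUI",  /* type the host can embed */
          "http://example.org/some-plugin",            /* plugin URI (hypothetical) */
          "http://example.org/some-plugin#qt-ui",      /* UI URI (hypothetical) */
          "http://lv2plug.in/ns/extensions/ui#Qt4UI",  /* the UI's actual type */
          "/usr/lib/lv2/some-plugin.lv2/",             /* UI bundle path (hypothetical) */
          "/usr/lib/lv2/some-plugin.lv2/ui.so",        /* UI binary path (hypothetical) */
          NULL);                                       /* LV2 features */
  }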

-dr

(* The library itself depends on no toolkits; it uses dynamically loaded
modules for all the wrapping, but this depends on packagers doing it
right.)

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime threads and security

2011-02-23 Thread Daniel Poelzleithner
On 02/23/2011 11:10 AM, Olivier Guilyardi wrote:

 The dynamic cgroups solution seems good.
 
 On Android there basically is one priviledged and trusted audio process,
 audioflinger, the sound server, which performs mixing and access the hardware.
 All audio clients are untrusted. I think there is a need to both assign a
 careful cpu.rt_runtime_us to each client but also to limit the number of 
 clients
 which can run at the same time. This would allow to keep the CPU time assigned
 to all clients within a certain quota, but also to prevent clients from 
 fighting
 with each other for realtime bandwidth.
 
 Can you limit the maximum number of realtime processes with ulatencyd?

Yes, that should be possible, even if I never used that. So, you want a
first-come-first-served policy?

There is no nice api for detecting the number of processes in a group,
but I will add one, as I think it's a valid decision to consider. There
is no per-se limit on processes in a group; in fact, there is no way to
limit the number of processes at all, unfortunately. Makes it very hard to
write a fork bomb protector ;-)

But, you can of course just stop moving processes into a group.

Yesterday I added an instant filter functionality that allows you, with
whitelists, to move processes extremely quickly. This is required for some
daemons that request realtime and then measure the cpu time they get to
test if it's enough for running smoothly. Those problematic processes
must be whitelisted because the normal delay mechanism would schedule
them too late, as many processes started are just short-running and
looking at them is just a waste of cpu time.

kind regards
 Daniel





___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Realtime threads and security

2011-02-23 Thread Robin Gareus
On 02/23/2011 11:22 AM, Olivier Guilyardi wrote:
 Hi,
 
 On 02/17/2011 10:53 PM, Robin Gareus wrote:
 
 /proc/sys/kernel/sched_rt_runtime_us
 /proc/sys/kernel/sched_rt_period_us

 kernel-source/Documentation/scheduler/sched-rt-group.txt
 
 There's something which confuses me: I'm not sure how these realtime period
 settings relate to the audio I/O period.

It only does iff the audio-I/O thread (or process) is executed with
realtime privileges (see `man chrt` and `man pthread_setschedparam`).

if you have an RT_PREEMPT kernel you can also raise the priority of the
audio-device interrupt handler.

Well, it's a bit more complex than that... not sure if you can go that
way on Android easily.
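
A minimal sketch of the pthread route mentioned above (the priority value is
arbitrary, and on a stock kernel this needs the appropriate rtprio rlimit or
privileges):

  /* Minimal sketch: request SCHED_FIFO for the calling (audio I/O) thread.
   * Priority value is arbitrary; needs the rtprio rlimit or privileges. */
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>
  #include <string.h>

  int make_current_thread_realtime(int priority)
  {
      struct sched_param param;
      memset(&param, 0, sizeof(param));
      param.sched_priority = priority;  // e.g. 70

      // Only SCHED_FIFO/SCHED_RR threads are governed by the
      // sched_rt_runtime_us / sched_rt_period_us (and cgroup) budgets.
      int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
      if (err)
          fprintf(stderr, "no realtime scheduling: %s\n", strerror(err));
      return err;
  }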

 On Android, the closest that one can get to hardware in a more or less 
 portable
 way is libaudio [1]. It's Android's audio HAL. This API exposes blocking 
 read()
 and write() calls, with fixed buffer sizes (input and output buffer sizes
 generally do not match, but that's another problem).
 
 So, this may be a silly/newbie question, but can one access this blocking API
 from a realtime thread? What will happen when it blocks? How does the 
 read/write
 period relate to sched_rt_period_us?
 
 [1] http://source.android.com/porting/audio.html
 
 --
   Olivier
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] libsuil (was: IR: LV2 Convolution Reverb)

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 19:03 +, Rui Nuno Capela wrote:
 On 02/23/2011 05:22 PM, David Robillard wrote:
 snip
  
  http://svn.drobilla.net/lad/trunk/suil/
  
  I have not tested the Gtk-in-Qt direction yet. You're a Qt host
  author. Hint, hint. I got stuck in qtractor autohell and gave up last
  night.
  
 snip
  
  (* The library itself depends on no toolkits, it uses dynamically loaded
  modules for all the wrapping, but this depends on packagers doing it
  right)
 
 i see. and these dlload'ed modules which do all the wrapping have these
 revealing names like libsuil_qt4_in_gtk2 and libsuil_gtk2_in_qt4... :/

I have no idea why this is something to :/ at...

 nice, but i figure it's a solution in a world where the host and the
 plugin are either of those 2 toolkits and only under an x11 umbrella.
 what if a plugin developer wishes to do it on fltk, juce, plain xlib,
 win32/64, carbon, cocoa, whatever? (lv2_external_ui already allows that
 *grin*)

Yes, it allows that by shifting the burden onto the plugin authors (which
is even worse than doing it to host authors), encouraging half-assed
solutions, rampant copy/paste code duplication, bugs, and a high barrier
of entry for writing a plugin UI. On top of all that, it throws out
embedding and other niceties entirely. This is why it is a poor
solution.

  aha, you'll probably say there will be a plethora of
 combinations on those modules like
 libsuil_$(plugin-toolkit)_in_$(host-toolkit)... is that it?

Yes, that is precisely the idea. It all gets implemented in one place,
and neither the host nor plugin authors have to worry about any of it.
This is the only solution where that is true. Making all the UI authors
deal with it (repeatedly, via half-assed solutions and copy paste code
duplication) is not a good solution.

People probably will start using these new toolkits, and it will Just
Work in all the hosts, for free, as soon as it's implemented here. This
is a Good Thing. (The same applies to a UI exposing a low-level window
ID, by the way, but there is no longer any good reason to do that.)

The point that has been repeatedly missed (which is really starting to
get irritating) is that all this IMPLEMENTATION DETAIL has become just
that. The burden on you, as a host author, to worry about the plethora
of combinations has been removed. That burden has also been removed
from the UI authors (which is not true of the solutions you keep
championing). As for the plethora, there simply aren't that many
toolkits, and the modules are neither large nor complicated, but
again... implementation detail. As far as you are concerned, the magic
library just works.
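
To make the naming-scheme idea concrete, here is a hedged sketch of how such a
wrapper module might be located and loaded; the directory, file names and the
entry-point symbol are assumptions for illustration, not Suil's actual
internals:

  /* Hedged sketch of loading a libsuil_<ui>_in_<host> style wrapper module.
   * Directory, naming scheme and the "wrap_init" symbol are assumptions for
   * illustration, not Suil's actual internals. */
  #include <dlfcn.h>
  #include <stdio.h>

  typedef void* (*WrapInitFunc)(void* host_container, void* plugin_widget);

  WrapInitFunc load_wrapper(const char* ui_toolkit, const char* host_toolkit)
  {
      char path[256];
      snprintf(path, sizeof(path), "/usr/lib/suil/libsuil_%s_in_%s.so",
               ui_toolkit, host_toolkit);      // e.g. "qt4" in "gtk2"

      void* lib = dlopen(path, RTLD_NOW);      // load only the one module needed
      if (!lib)
          return NULL;                         // combination not supported

      return (WrapInitFunc)dlsym(lib, "wrap_init");  // hypothetical entry point
  }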

Suppose Fltk plugin UIs do come along. Wonderful. Implement that in
Suil, and it will just magically work in e.g. Qtractor and Ardour with
zero changes required.

In short: Problem Solved.

 sorry to be such a troll:) maybe i'll shut up now.
 
 anyway, i'm still looking forward to this libsuil project, by all means
 an excellent effort. sincerely agree that it will do a lot better than
 the current lv2_gtk_ui situation.

No future tense required, there it is. Make it happen. If I had a Qt
host to test with right now, I'd make sure Gtk2-in-Qt4 actually works,
release, and that's that.

That said, I think I will just modify the SLV2 API accordingly so you
don't have to use Suil directly, so maybe wait a day (but switching
would be easy, and the sooner I have something to test with, the sooner
this problem is solved, so don't let that stop you).

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] libsuil

2011-02-23 Thread Rui Nuno Capela
On 02/23/2011 07:37 PM, David Robillard wrote:
 On Wed, 2011-02-23 at 19:03 +, Rui Nuno Capela wrote:

 anyway, i'm still looking forward to this libsuil project, by all means
 an excellent effort. sincerely agree that it will do a lot better than
 the current lv2_gtk_ui situation.
 
 No future tense required, there it is. Make it happen. If I had a Qt
 host to test with right now, I'd make sure Gtk2-in-Qt4 actually works,
 release, and that's that.
 
 That said, I think I will just modify the SLV2 API accordingly so you
 don't have to use Suil directly, so maybe wait a day (but switching
 would be easy, and the sooner I have something to test with, the sooner
 this problem is solved, so don't let that stop you).
 

i won't :)

one question,

(btw, i know my english is weird, even in my mother's language i'm lousy:)

are you saying that this suil api will get implicitly integrated
into slv2? in a matter of days? i've looked into the suil.h and it
makes perfect sense...

yep, i might arrange some time to test this gtk2_in_qt4 stuff (granted i
don't fall into wafhell over more than a week-end ;))

cheers
-- 
rncbc aka Rui Nuno Capela
rn...@rncbc.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] libsuil

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 20:11 +, Rui Nuno Capela wrote:
 On 02/23/2011 07:37 PM, David Robillard wrote:
  On Wed, 2011-02-23 at 19:03 +, Rui Nuno Capela wrote:
 
  anyway, i'm still looking forward to this libsuil project, by all means
  an excellent effort. sincerely agree that it will do a lot better than
  the current lv2_gtk_ui situation.
  
  No future tense required, there it is. Make it happen. If I had a Qt
  host to test with right now, I'd make sure Gtk2-in-Qt4 actually works,
  release, and that's that.
  
  That said, I think I will just modify the SLV2 API accordingly so you
  don't have to use Suil directly, so maybe wait a day (but switching
  would be easy, and the sooner I have something to test with, the sooner
  this problem is solved, so don't let that stop you).
  
 
 i won't :)
 
 one question,
 
 (btw, i know my english is weird, even on my mothers language i'm lousy:)
 
 are you saying that this suil api will get it implicit and integrated
 into slv2? in a matter of days? i've looked into the suil.h and it
 makes perfect sense...

Well, I'm just thinking it might be a bit less of a hassle, and make
doing the right thing extremely easy/obvious, if SLV2 just had a
function like suil_instance_new. It also allows me to deprecate (or just
outright remove) the old SLV2UIInstance stuff which encourages the wrong
thing (poking through the UIs and instantiating them yourself, i.e.
caring about toolkits).

The independent suil API is just slightly more annoying because you have
to take your SLV2UIs and stick its contents in a SuilUIs... it's just a
little loop, not a huge deal, but it's not pretty. Have one Do The Right
Thing UI instantiation function in SLV2 is nice and idiot-proof, but
perhaps the dependency isn't worth it.

I could, of course, just literally implement all of this in SLV2 itself,
but I figured a zero-dependency library would be a good thing, and in
general I like to keep UI things separate...

 yep, i might arrange some time to test this gtk2_in_qt4 stuff (granted i
 don't fall into wafhell over more than a week-end ;))

From the user POV it's the usual pkg-config routine.

(I can't even begin to describe what a relief it is to trade up to the
sensible prettiness of waf and escape the convoluted layer upon
convoluted layer of inconsistent fugly line noise that is autohell, but
that's another conversation entirely)

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 20:21 +, Chris Cannam wrote:
 On 9 February 2011 16:49, David Robillard d...@drobilla.net wrote:
  new librdf-free slv2
 
 Entirely Redland-free, or still using Raptor?

Entirely Redland free. I hand-wrote a Turtle parser and serialiser.

 And: why?

In short, it's been a PITA for everyone in numerous ways since day one.
Some abandoned SLV2 entirely because of it. Others were in the process
of doing so, until I decided enough was enough. Clearly SLV2 was
deficient somehow if people were abandoning it, and they were abandoning
it explicitly because of Redland...

Redland is great if you really need a fully featured RDF implementation,
and I still use it in such cases. To simply implement LV2, however, you
don't, and such a heavyweight dependency certainly doesn't induce the
best knee-jerk reaction. Often the librdf packages would pull in
ridiculously massive mysql libraries and such - to implement a simple
LADSPA based plugin API?! That this left a bad taste in people's mouths
is completely understandable. It has definitely hurt LV2 adoption.

(Because of historical reasons, RDF can seem bloatey, but it's really
just an elegant abstract data model, and we are using a terse and simple
syntax for it. The new lean-and-mean SLV2 implementation shows that
there is no bloat inherent in LV2, and it's all a much easier pill to
swallow in practice).

Some less hand-wavey practical reasons: there were mysterious and very
un-fun problems with librdf-in-librdf that cropped up when you have plugins
that load plugins (e.g. Ingen, NASPRO(*)). Portability was also an
issue. Stefano D'Angelo (of NASPRO) and I are now cooperating on
LV2 implementation rather than duplicating effort because of Redland
related problems (e.g. he'll be helping with win32 portability, and
Ingen now depends on NASPRO for LADSPA support). I am all about
resolving any fragmentation that has happened in the LV2 world, and
dropping Redland has been a big positive step in that regard.

The new implementation is thousands of times smaller, lighter, and
faster. The entire thing is much smaller than libxml2 alone, for
example. I should have just written one like this from the get-go, and
the initial reception of LV2 would have been a lot better. Oh well, live
and learn.

SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
store). Both are roughly 2 thousand lines of C, solid and thoroughly
tested (about 95% code coverage, like SLV2 itself). Serd has zero
dependencies, Sord depends only on Glib (for the time being, possibly
not in the future). There is still some optimization to be done, but
it's already so much leaner it's not a huge priority for me.

The new SLV2 should be appropriate for, say, implementing LV2 on
embedded hardware with limited resources. The old one, frankly, smelled
of bloat even on a desktop system.

Unfortunately, this ground-up reimplementation thing consumed the
majority of my January, but I am very happy with the outcome.

-dr

(* For the unfamiliar, NASPRO is a bridge which transparently exposes
LADSPA, VST, etc. plugins as LV2 plugins, among other things)

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] LAD Activity (WAS: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 20:05 +0100, Robin Gareus wrote:
 On 02/23/2011 12:49 PM, Luis Garrido wrote:
  On 02/23/2011 12:01 PM, Stefano D'Angelo wrote:
  
  Before I totally forget about it... I think it might be a very clever
  thing to do to have some web-based thing (wiki or whatever, ideally a
  social network kind of thing) were LAD people can notify of what they
  are working on and what are their plans, so that it's easier to: a.
  know about it and b. start cooperations, etc.
  
  You could use a search engine and, if nothing pops out, just ask here.
  That would be a more legitimate and pleasant use of this list than
  others, IMHO. ;-)
  Luis
 
 I quite agree; besides there is the linux-audio-announce [LAA]
 email-list: http://lists.linuxaudio.org
 
 If you want to reach out to the public (announce new projects, new
 releases, events, etc) just post there. One can get a good idea of what
 people are working on by actually reading LAA, too.
 
 Note: all posts to LAA are moderated. Once a post makes it through
 moderation, it will get on the linuxaudio.org front page and is
 automatically added to Planet LAD. If you prefer to blog and would like
 the blog to be included in planet.linuxaudio.org: read the sidebar of
 Planet LAD.

For the record, I have found it frustrating that if you announce a
release on LAA, and blog it, it ends up on Planet LAD twice.

I suppose I just shouldn't push those announcements to the RSS feed
picked up by Planet LAD, but.. well, it /is/ LAD :)

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


[LAD] RDF libraries, was Re: [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Chris Cannam
On 23 February 2011 22:11, David Robillard d...@drobilla.net wrote:
 SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
 store). Both are roughly 2 thousand lines of C, solid and thoroughly
 tested (about 95% code coverage, like SLV2 itself). Serd has zero
 dependencies, Sord depends only on Glib (for the time being, possibly
 not in the future).

Can you point me at the API or code?  I couldn't see it in a quick
browse on your SVN server.

I have a library (Dataquay,
http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
release of it at the moment, so if anyone wants to try it, go for the
repository rather than the old releases) which provides a Qt4 wrapper
for librdf and an object-RDF mapper.

It's intended for applications whose developers like the idea of RDF
as an abstract data model and Turtle as a syntax, but are not
particularly interested in being scalable datastores or engaging in
the linked data world.

For my purposes, Dataquay using librdf is fine -- I can configure it
so that bloat is not an issue (and hey! I'm using Qt already) and some
optional extras are welcome.  But I can see the appeal of a more
limited, lightweight, or at least less configuration-dependent
back-end.

I've considered doing LV2 as a simple example case for Dataquay, but
the thought of engaging in more flamewars about LV2 and GUIs is really
what has put me off so far.  In other words, I like the cut of your
jib here.


Chris
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread Dominique Michel
Le Tue, 22 Feb 2011 22:11:27 +,
Fons Adriaensen f...@linuxaudio.org a écrit :

  2) more and more apps able to take advantage of v-blank sync to
  reduce computational load due to unnecessary redraws. instead, the
  whole system will be a lot like a video-framebuffer version of
  JACK: the vblank interrupt arrives. everything with a surface gets
  a chance to redraw if it needs to, the surfaces are composited
  together, and boom, its on the display.
 
 Two remarks on this:
 
 1. Syncing updates to the video frame rate of course makes sense,
 and there is no reason why it couldn't be done in X. All it takes
 is some support from the driver to generate an event at the right
 time.
 
 2. But at the same time this is sort of backwards. There is no reason
 today why a computer display should be driven by a 'video' signal that
 refreshes the complete screen at a fixed rate. *That* itself is very
 old technology and completely useless in this age.

Another problem is the hardware. All the PC video cards are video
driven. That implies that the card has to refresh the whole screen in
order to change one pixel. That is not old technology, that is PC
technology. At the same time as the first PC there were computers like the
Amiga or the Atari.

In the Amiga, the video card was vectorial: to change one pixel, all
that was needed was the new pixel value and its x,y coordinates. To
change a part of the screen, the Amiga used vectorial
objects called sprites. So, even for complex visual objects, the
computational time was much lower than with the video approach, and
2D on such old machines is still competitive against 2D on the most
powerful PCs of today.

At that time, 3D was almost non-existent. To develop 3D
capabilities, most of the manufacturers' effort was spent on
improving the video-based cards. Now, the situation is that the 2D part
of a video card is so small that the manufacturers are considering
removing it and using the 3D part to provide 2D from the card.

I don't get the advantage of this approach for a workstation. A
workstation is not about 3D gaming but about getting work done. 3D
cards are very hungry for electricity, and they are overkill for
anyone who is not working on some kind of 3D development. The
electricity providers will certainly like them very much, but my wallet
and the environment don't.

So, I think that a complete discussion on that matter should include
the hardware part, that is, how to make power- and computationally
efficient 2D video cards.

Dominique
  
 
 Ciao,
 


-- 
We have the heroes we deserve.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Gordon JC Pearce
On Wed, 2011-02-23 at 19:55 +0300, Alexandre Prokoudine wrote:

 For how many years did we have to use Rosegarden/MusE and Ardour *and*
 Hydrogen simultaneously just to get *basic* DAW functionality only
 because everyone went on saying things like Oh, this is UNIX
 philosophy -- do one thing and do it well or Divide and conquer? So,
 we divided, and what have we conquered? :) After so many years we are
 ending up with A3 approaching that has integrated MIDI and audio
 anyway, a decade (too?) late.

... and this is why I've stopped using computers for music, and why the
nekosynth plugins haven't progressed in two years.

There is no way in hell I'm going near the utterly fundamentally
retarded mess of shit and fail that is Ardour 3.

It's a DAW.  It shouldn't have *any* MIDI beyond control automation and
some idea of sync.  Leave that to a sequencer.

Of course, there are no *usable* PC-based sequencers, so after gathering
dust for some ten years my 1/4-inch tape machine and Alesis MMT-8 are having
all the fun, and the PC just sits with pidgin, evolution and an ssh
session to my IRC client.

Linux audio is nowhere.  There isn't a usable sample editor, there are a
couple of brave attempts at sequencers that lack pretty fundamental
features, and we have one DAW that seems to be going down the do
everything, even if badly route.

It's 2011.  I've been at this for a decade.  It's just as bad as it was
when I started trying to use PCs for music.  I give up.

Gordon MM0YEQ

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread Fons Adriaensen
On Wed, Feb 23, 2011 at 11:47:33PM +0100, Dominique Michel wrote:
 
 In the Amiga, the video card was vectorial, to change one pixel, all
 that was needed was the new pixel value and its x y coordinates.

That is still the case with even the most simple display hardware
today.

 To change a part of the screen, the Amiga was using vectorial
 objects called sprites. 

Sprites are not 'vectorial', they were what are called pixmaps
or images in X11, just blocks of pixel values stored in memory.
They are used to avoid having to repeat drawing operations, by
precomputing the result. The same thing is still routine today.

 So, even for complex visual objects, the
 computational time was much lower than with the video approach,

How the graphics card is controlled has nothing to do with what
sort of signal it outputs to the screen, video or whatever else.
You seem to be mixing up the two. In the Amiga days, displays
were CRTs and video output was the only way.


Ciao,

-- 
FA

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] RDF libraries, was Re: [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 22:35 +, Chris Cannam wrote:
 On 23 February 2011 22:11, David Robillard d...@drobilla.net wrote:
  SLV2 is now based on two new libraries: Serd (RDF syntax) and Sord (RDF
  store). Both are roughly 2 thousand lines of C, solid and thoroughly
  tested (about 95% code coverage, like SLV2 itself). Serd has zero
  dependencies, Sord depends only on Glib (for the time being, possibly
  not in the future).
 
 Can you point me at the API or code?  I couldn't see it in a quick
 browse on your SVN server.

They're all in my LAD meta-repository:

http://svn.drobilla.net/lad/trunk/

Or, individually:

http://svn.drobilla.net/lad/trunk/slv2
http://svn.drobilla.net/serd/trunk/
http://svn.drobilla.net/sord/trunk/

All the usual disclaimers about unreleased software apply.

 I have a library (Dataquay,
 http://code.breakfastquay.com/projects/dataquay -- preparing a 1.0
 release of it at the moment, so if anyone wants to try it, go for the
 repository rather than the old releases) which provides a Qt4 wrapper
 for librdf and an object-RDF mapper.
 
 It's intended for applications whose developers like the idea of RDF
 as an abstract data model and Turtle as a syntax, but are not
 particularly interested in being scalable datastores or engaging in
 the linked data world.

More implementation alternatives for working with that micro-stack
would be great, it's a good one...

 For my purposes, Dataquay using librdf is fine -- I can configure it
 so that bloat is not an issue (and hey! I'm using Qt already) and some
 optional extras are welcome.  But I can see the appeal of a more
 limited, lightweight, or at least less configuration-dependent
 back-end.

You can compile a far more lightweight librdf yourself, but, well, users
don't, nor should they have to. That said, sure, it's still a good
option sometimes, but I am shooting for an extremely low barrier of
entry. Small C libraries with no dependencies are an easy sell, because
they're not a pain in anyone's ass.

The sord API is vaguely reminiscent of the specific subset of the librdf
API I needed at the time, but I made it as part of a hyper pragmatic
mission to get a Redland-free SLV2, not a librdf replacement in general.
The API probably still needs a bit of polish. In other words, I'm not
pitching Sord as a viable Redland replacement in general (nor is making
it one a priority), but if you're just interested in Turtle + in-memory
model, it may be. Let me know what you think.

 I've considered doing LV2 as a simple example case for Dataquay, but
 the thought of engaging in more flamewars about LV2 and GUIs is really
 what has put me off so far.  In other words, I like the cut of your
 jib here.

So don't engage in it. Virtually all such nonsense on here is nothing
but the peanut gallery. Mentioning what you plan to do here before doing
it is usually not a good idea (dumbass on mailing list hinders progress,
film at eleven). Don't let L-A-D deter you from working on LV2 things.
If you need useful feedback before proceeding, the LV2 mailing list
http://lists.lv2plug.in/listinfo.cgi or IRC channel (#lv2 on
irc.freenode.net) are productive, on-topic, flame-free venues for LV2
related discussion/coordination. You can of course also just work
directly with another host/plugin author, which is typical, and is why
most of the LV2 noise here is just that.

Anyway, as for that simple example, a Qt based LV2 host based on a new
RDF stack would be great to see. It's a useful real-world example. I
don't know exactly how a flame against writing that would go, but I do
know it would be painfully stupid, and not worth your attention. Anchors
aweigh!

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Paul Davis
On Wed, Feb 23, 2011 at 5:58 PM, Gordon JC Pearce gordon...@gjcp.net wrote:

 There is no way in hell I'm going near the utterly fundamentally
 retarded mess of shit and fail that is Ardour 3.

gordon, we love you too. honest.

 It's a DAW.  It shouldn't have *any* MIDI beyond control automation and
 some idea of sync.  Leave that to a sequencer.

it's weird how the overwhelming majority of folks out there in the
world don't seem to feel that this distinction is relevant to their
working style, and even more, whenever anybody does bring out a cool
new sequencer (e.g. nodal, or some of the hex- or octagonal sequencers
that have appeared recently) everyone starts wondering about how it
can be integrated into their DAW.

 Linux audio is nowhere.  There isn't a usable sample editor, there are a
 couple of brave attempts at sequencers that lack pretty fundamental
 features, and we have one DAW that seems to be going down the do
 everything, even if badly route.

we haven't done OSC sequencing yet, so we still have plenty of room to
screw things up even more. oh, i forgot, *video*. yep, once we're done
with that, the stink will be so bad you'll have to wear a class 5
volatile vapor breathing apparatus to even sit down in front of your
computer. yeah, we are going to fuck with your mind so much you'll wish
could just get back to DOS. which we'll run in a virtualbox instance
reparented inside an ardour window, just so that dave phillips can run
sequencer gold without any hassles. the future's so bright i've got to
wear a straightjacket! frizz me down with that kielbasse, officer!

 It's 2011.  I've been at this for a decade.  It's just as bad as it was
 when I started trying to use PCs for music.  I give up.

/me slaps on the peter gabriel and  hugs .

really, gordon, its OK. its going to be OK.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread David Robillard
On Wed, 2011-02-23 at 22:58 +, Gordon JC Pearce wrote:
 On Wed, 2011-02-23 at 19:55 +0300, Alexandre Prokoudine wrote:
 
  For how many years did we have to use Rosegarden/MusE and Ardour *and*
  Hydrogen simultaneously just to get *basic* DAW functionality only
  because everyone went on saying things like Oh, this is UNIX
  philosophy -- do one thing and do it well or Divide and conquer? So,
  we divided, and what have we conquered? :) After so many years we are
  ending up with A3 approaching that has integrated MIDI and audio
  anyway, a decade (too?) late.
 
 ... and this is why I've stopped using computers for music, and why the
 nekosynth plugins haven't progressed in two years.
 
 There is no way in hell I'm going near the utterly fundamentally
 retarded mess of shit and fail that is Ardour 3.
 
 It's a DAW.  It shouldn't have *any* MIDI beyond control automation and
 some idea of sync.  Leave that to a sequencer.

LOL. Let me see if I have this straight: I don't personally want a
'sequencer', therefore this program is a 'DAW', and therefore it
obviously should not be doing MIDI anything because I have conveniently
defined it as something that shouldn't. Solid argument.

Did you happen to notice how virtually every single popular PC DAW in
existence doesn't agree with your take on what they should do? How you
completely disregard overwhelming user demand (with no actual argument
behind it, no less)?

I'm not sure how you would attempt to justify this hilariously
curmudgeonly opinion to a user who wants to work with audio and MIDI on a
timeline, but I'd sure like to hear it for entertainment's sake. Or is
working with audio and MIDI on a timeline somehow an inherently invalid
goal? Why, exactly? If I want to arrange some MIDI and audio on a
timeline (i.e. make music) I'm supposed to deal with the massively
clunky PITA of using separate programs to do so? What, exactly, is the
win there? What, exactly, is the user gaining?

If you're into do one thing and do it well, there is actually a
logical argument in saying Ardour shouldn't have a mixer: particularly
with Jack, a process barrier between timeline and mixer/effects/etc
actually makes some sense. A process barrier between two timelines based
on the irrelevant detail of what kind of data you can stick in them does
not. The obscene amount of code duplication involved in that scenario is
pretty telling evidence that something is crap. What you are saying
simply does not make any sense whatsoever, and, coincidentally,
virtually everybody who has made music on a computer in the past several
decades disagrees with it. Hm. Everyone is wrong and you are right, eh?
Must be rough. Kids these days!

-dr


___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Gordon JC Pearce
On Wed, 2011-02-23 at 19:11 -0500, Paul Davis wrote:
 On Wed, Feb 23, 2011 at 5:58 PM, Gordon JC Pearce gordon...@gjcp.net wrote:
 
  There is no way in hell I'm going near the utterly fundamentally
  retarded mess of shit and fail that is Ardour 3.
 
 gordon, we love you too. honest.

Oh, that reminds me, it's just about donation time again, isn't it?

  It's a DAW.  It shouldn't have *any* MIDI beyond control automation and
  some idea of sync.  Leave that to a sequencer.
 
 its wierd how the overwhelming majority of folks out there in the
 world don't seem to feel that this distinction is relevant to their
 working style, and even more, whenever anybody does bring out a cool
 new sequencer (e.g. nodal, or some of the hex- or octagonal sequencers
 that have appeared recently) everyone starts wondering about how it
 can be integrated into theirDAW.

What happened to the idea of doing one thing, and doing it well?  I'm
not even totally sold on the idea of having the recorder and mixer in
the same app...

To that end, why has no-one managed to produce a PC (by which I mean the
general case of modern personal computer, not x86/PCI cards/beige box
PC) sequencer that doesn't suck overweight elephants through extremely
fine mesh?  Cubase 3 on the Atari was simple, intuitive (well, apart
from the Interactive Phrase Synthesizer, I never met anyone that could
figure that out except for one guy from Orkney who lived in a house full
of cats and ate only yoghurt and green tea) and reliable.  Surely it
cannot be beyond the wit of man to come up with something as good as
20-year-old software on hardware a million times as fast?

Gordon MM0YEQ



___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Paul Davis
On Wed, Feb 23, 2011 at 7:33 PM, Gordon JC Pearce gordon...@gjcp.net wrote:

 What happened to the idea of doing one thing, and doing it well?  I'm
 not even totally sold on the idea of having the recorder and mixer in
 the same app...

you probably want ayyi then. except ... oh well.

 To that end, why has no-one managed to produce a PC (by which I mean the
 general case of modern personal computer, not x86/PCI cards/beige box
 PC) sequencer that doesn't suck overweight elephants through extremely
 fine mesh?  Cubase 3 on the Atari was simple, intuitive

... and couldn't do shit with audio sequencing. so shall we move on?

 cannot be beyond the wit of man to come up with something as good as
 20-year-old software on hardware a million times as fast?

what is beyond the wit of man is to define the meaning of "as good" in
a context where every neighbour has different intents and purposes.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Paul Davis
On Wed, Feb 23, 2011 at 7:39 PM, Paul Davis p...@linuxaudiosystems.com wrote:
 On Wed, Feb 23, 2011 at 7:33 PM, Gordon JC Pearce gordon...@gjcp.net wrote:

 What happened to the idea of doing one thing, and doing it well?

oh, and to answer that question: what happened was huge great
boatloads of data that need to be shovelled around between all the
relevant components, complicated synchronization in both the backend
models and the user interfaces of all the relevant components, and a
general disdain for complex *systems* when one can settle for merely
complex programs.

i mean, good god, why do i need to deal with freaking color channels
in the GIMP when all i want to do is a posterization, rotation,
surface mask and illumination effect?
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] [ANN] IR: LV2 Convolution Reverb

2011-02-23 Thread Alexandre Prokoudine
On 2/24/11, Gordon JC Pearce wrote:

 It's a DAW.  It shouldn't have *any* MIDI beyond control automation and
 some idea of sync.  Leave that to a sequencer.

I think I know your next argument: we don't need virtual instruments
as plug-ins, eh?  And while we're at it, let's dump the lash/ladish crap
altogether. Session management is for n00bs, real musicians have sound
engineers to do it for them, so they can focus on actual music :)

(On reflection, it provides a new dimension to my recent little visual
joke about Dream Theater's approach to music:
http://prokoudine.info/files/dream-theater.png (says "What's the
rush?") Paul, how about a visualization of little pixies doing JACK
transport or pitch-shifting in A3? I know Gordon would love it :))

 Of course, there are no *usable* PC-based sequencers, so after gathering
 dust for some ten years my 1/4" tape machine and Alesis MMT-8 are having
 all the fun, and the PC just sits with pidgin, evolution and an ssh
 session to my IRC client.

Gordon, there's no shame admitting you make a good use of hardware for
making music. We all did it, honestly. Some of us still do. Hardware
is joy to use.

 Linux audio is nowhere.  There isn't a usable sample editor, there are a

Sample editor as in Swami or gigedit? I wouldn't mind seeing them
merged, actually.

 It's 2011.  I've been at this for a decade.  It's just as bad as it was
 when I started trying to use PCs for music.  I give up.

Giving up is easy. Patching A3 to remove the offensive MIDI tracks so that
the sight of the word "MIDI" in a few parts of the UI doesn't give you the
willies is a real task for a real man. Be a man, Gordon, control your
software :)

Alexandre Prokoudine
http://libregraphicsworld.org
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread David Olofson
On Wednesday 23 February 2011, at 23.47.33, Dominique Michel 
dominique.mic...@vtxnet.ch wrote:
[...]
 Another problem is the hardware. All the PC video cards are video
 driven. That implies that the card has to refresh the whole screen in
 order to change one pixel. That is not old technology, that is PC
 technology. At the same time as the first PC, there were computers like
 the Amiga or the Atari.

Though it would be theoretically possible to do partial refreshing of some 
types of displays, that would be very, very implementation specific - and 
quite pointless. In applications where you care about frequent updates with 
accurate timing, you'll usually want that for the whole screen anyway. In 
other applications, you just use a frame buffer and some $2 of RAMDAC hardware 
to repeatedly pump that out in some standard serialized format.

This has nothing whatsoever to do with the way graphics is rendered.

(Well, true vector displays as seen in some ancient arcade games would be a 
gray zone - but I believe even those had some sort of frame buffer 
somewhere; they just stored a table of coordinates rather than a 2D array of 
color codes.)


 In the Amiga, the video hardware was vectorial: to change one pixel, all
 that was needed was the new pixel value and its x/y coordinates.

I still do exactly that in various projects, via SDL, on Windows, Linux, Mac 
OS X and a dozen other platforms. It works with OpenGL and Direct3D as well, 
though it can be a bit tricky to get right, as buffering and page flipping can 
be set up in a number of different ways.
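
Something along these lines, if anyone wonders what I mean - a stripped-down, 
untested SDL 1.2 style sketch (not code from any of those projects; error 
checking omitted) that pokes one pixel and refreshes only that 1x1 rectangle:

/* Minimal sketch: write one pixel into a software surface and push
 * just that 1x1 rectangle to the display. Assumes a 32 bpp surface. */
#include <SDL/SDL.h>

static void put_pixel(SDL_Surface *s, int x, int y, Uint32 color)
{
    if (SDL_MUSTLOCK(s)) SDL_LockSurface(s);
    Uint32 *row = (Uint32 *)((Uint8 *)s->pixels + y * s->pitch);
    row[x] = color;                         /* single 32 bit store      */
    if (SDL_MUSTLOCK(s)) SDL_UnlockSurface(s);
    SDL_UpdateRect(s, x, y, 1, 1);          /* refresh only that pixel  */
}

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_SWSURFACE);
    put_pixel(screen, 320, 240, SDL_MapRGB(screen->format, 255, 255, 255));
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}

Whether the backend actually transfers just that rectangle is of course up to 
the driver/target, but the API at least lets you express it.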


 To change a part of the screen, the Amiga used vectorial
 objects called sprites.

The sprites were just tiny hardware overlays, not actually changing anything 
persistently - which is the very point of them. Lots of restrictions, though: 
there were only 8 of them, although you could multiplex them vertically.


 So, even for complex visual objects, the
 computational time was much lower than with the video approach, and the
 2D on such old machines is still competitive with the 2D on the most
 powerful PCs of today.

As someone who's done quite a bit of to-the-metal graphics programming on the 
Amiga and PC/VGA, as well as the usual DirectDraw, GDI, X11, OpenGL etc, I 
don't quite see what you mean here. Even the C64 did pretty much what we do 
today; it just had a lot less data to move around!

Indeed, the C64, Amiga (and VGA) had hardware scrolling, and the former two 
had those hardware sprites. Those features could indeed save loads of cycles - 
but only in some very special cases. Many Amiga trackers used hardware 
scrolling of a single bitplane for low-cost scrolling of the pattern view, but 
that's a clever trick with many limitations. The sprites were used for slider 
knobs and VU meters, but as there were only 8 sprite channels, that required some 
clever coding in any non-trivial application. And, it would actually have a 
*higher* cost (due to DMA stealing CPU cycles, among other things), except for 
the moments when the user is actually dragging a knob...!

So, for the most part, it was the usual pixel pushing we're still doing today 
- and not only that; it was dog slow and awkward due to the bitplane 
memory layout: for each pixel you had to twiddle individual bits in multiple 
locations. Same deal with anything before AGA, chunky pixels (256 color VGA) 
and the HighColor/TrueColor era.
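
To make the pain concrete, a rough illustration in plain C (not actual Amiga 
hardware code; 4 bitplanes and a 320 pixel wide screen just picked for the 
example):

/* Setting one pixel in a 4-bitplane layout means a read-modify-write in
 * four separate memory locations, vs. a single byte store in a chunky
 * 8-bit framebuffer. */
#include <stdint.h>

#define WIDTH_BYTES 40                         /* 320 pixels / 8 bits      */
#define HEIGHT      256
#define PLANES      4

/* planar: one bit per pixel in each of PLANES separate bitmaps */
void planar_put_pixel(uint8_t planes[PLANES][HEIGHT][WIDTH_BYTES],
                      int x, int y, uint8_t color)
{
    uint8_t mask = 0x80 >> (x & 7);            /* bit within the byte      */
    for (int p = 0; p < PLANES; ++p) {         /* one read-modify-write    */
        uint8_t *byte = &planes[p][y][x >> 3]; /* ...per plane             */
        if (color & (1 << p)) *byte |= mask;
        else                  *byte &= (uint8_t)~mask;
    }
}

/* chunky: the whole color index lives in one byte */
void chunky_put_pixel(uint8_t fb[HEIGHT][320], int x, int y, uint8_t color)
{
    fb[y][x] = color;                          /* single store, done       */
}

int main(void)
{
    static uint8_t planes[PLANES][HEIGHT][WIDTH_BYTES];
    static uint8_t chunky[HEIGHT][320];
    planar_put_pixel(planes, 100, 50, 9);
    chunky_put_pixel(chunky, 100, 50, 9);
    return 0;
}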

The Amiga had the blitter of course, but that's just a simple precursor of the 
3D accelerators we have now. That is, it did the same thing as you'd do with 
the CPU, only a bit faster (provided you were on the standard 7.14 MHz 68000) 
and more restricted.


 At that time, 3D was almost non-existent. To develop the 3D
 capabilities, most of the manufacturers' efforts were spent on
 improving the video-based cards. Now, the situation is that the 2D part
 of a video card is so small that the manufacturers are considering
 removing it and using the 3D part to get the 2D from the card.

So, what's wrong with that? The alternatives are to use ancient, limited 2D 
APIs, or tax the CPU with custom software rendering. 2D rendering APIs 
essentially just cover random crippled subsets of 3D accelerator 
functionality.

I long for the day when OpenGL (or similar) is the single, obvious answer to 
any realtime graphics rendering needs, so we can just forget about all this 
"rectangular blits and limited or no blending" nonsense!

(And this is coming from some weirdo who actually likes to play around with 
software rendering and other low level stuff...! :-)
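
For what it's worth, about the smallest possible sketch of "2D as a subset of 
3D" - an alpha blended rectangle through legacy OpenGL, with SDL 1.2 providing 
the context (untested, just to show the idea; exactly the kind of blended blit 
classic 2D APIs struggle with):

#include <SDL/SDL.h>
#include <GL/gl.h>

int main(void)
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 640, 480, 0, -1, 1);            /* plain 2D pixel coords    */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glClear(GL_COLOR_BUFFER_BIT);
    glColor4f(1.0f, 0.5f, 0.0f, 0.5f);         /* 50% translucent orange   */
    glBegin(GL_QUADS);
    glVertex2f(100, 100);
    glVertex2f(300, 100);
    glVertex2f(300, 200);
    glVertex2f(100, 200);
    glEnd();

    SDL_GL_SwapBuffers();
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}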


 I don't get the advantage of this approach for a workstation.

It's just so much quicker and easier to get the job done! As a bonus, even a 
dirt-cheap integrated GPU will do it many times faster than a 
software implementation, and without stealing cycles from your DSP code.


 A workstation is not about 3D gaming but about getting some work done.

2D rendering is just a subset of 3D... Where does 3D 

Re: [LAD] do any of the jack client examples show playing a file from disk?

2011-02-23 Thread drew Roberts
On Wed, Feb 16, 2011 at 9:19 PM, Erik de Castro Lopo
mle...@mega-nerd.com wrote:
 drew Roberts wrote:

 do any of the jack client examples show playing a file from disk?
 if so, which?
 if not, any links to simple code that does this?
 c++ or c?

 sndfile-jackplay:

    http://www.mega-nerd.com/libsndfile/tools/#jackplay

 from the sndfile-tools package:

    http://www.mega-nerd.com/libsndfile/files/sndfile-tools-1.03.tar.gz

I finally got it to compile and run. Cool. Then I chopped it up and
put bits and pieces here and there in my class and got it to play a
file when I clicked the button. The problem is that the playback loop
blocks (right?) and the GUI is then unresponsive. I need it to be
responsive and to be able to play multiple files at once by clicking
each file's button in the grid.
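
If it helps, this is the rough structure I *think* I need - a separate
disk thread feeding a jack_ringbuffer so the process callback never
blocks and the GUI thread stays free - but I haven't tried it yet, so
treat it as a sketch (mono float file assumed, hardly any error checking):

/* Sketch: disk thread keeps a lock-free ringbuffer topped up from
 * libsndfile; the JACK process callback only ever copies from the ring. */
#include <jack/jack.h>
#include <jack/ringbuffer.h>
#include <sndfile.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static jack_port_t *out_port;
static jack_ringbuffer_t *rb;           /* holds mono float samples        */
static volatile int playing = 1;

/* Realtime callback: memcpy from the ringbuffer, never touch the disk. */
static int process(jack_nframes_t nframes, void *arg)
{
    jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
    size_t want = nframes * sizeof(float);
    size_t got  = jack_ringbuffer_read(rb, (char *)out, want);
    memset((char *)out + got, 0, want - got);   /* underrun -> silence     */
    return 0;
}

/* Non-realtime disk thread: refill the ringbuffer until end of file. */
static void *disk_thread(void *arg)
{
    SF_INFO info = {0};
    SNDFILE *sf = sf_open((const char *)arg, SFM_READ, &info);
    float buf[4096];
    while (playing && sf) {
        size_t space = jack_ringbuffer_write_space(rb) / sizeof(float);
        size_t n = space < 4096 ? space : 4096;
        if (n == 0) { usleep(10000); continue; }        /* ring full, wait */
        sf_count_t got = sf_readf_float(sf, buf, (sf_count_t)n);
        if (got <= 0) break;                            /* end of file     */
        jack_ringbuffer_write(rb, (const char *)buf, got * sizeof(float));
    }
    if (sf) sf_close(sf);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    jack_client_t *client = jack_client_open("sketchplay", JackNullOption, NULL);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    rb = jack_ringbuffer_create(1 << 18);               /* ~256 KB of audio */
    jack_set_process_callback(client, process, NULL);
    jack_activate(client);

    pthread_t t;
    pthread_create(&t, NULL, disk_thread, argv[1]);
    pthread_join(t, NULL);

    jack_deactivate(client);
    jack_client_close(client);
    jack_ringbuffer_free(rb);
    return 0;
}

In the real app I guess the GUI button would just start one such disk
thread (with its own ringbuffer and port) per file, so nothing blocks
the Qt event loop.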

Do you know of any simple-to-understand GUI-based JACK file players?
Qt4 preferred. Or can you give me a hint on how to structure things to
get what I want above?

For now, I have switched to playing through jack via a process that
uses mplayer. You can take a peek here to get a better idea of what I
am after if you like.

https://github.com/zotz/jSoundz


 Erik

all the best,

drew
-- 
http://freemusicpush.blogspot.com/
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] do any of the jack client examples show playing a file from disk?

2011-02-23 Thread Paul Davis
On Wed, Feb 23, 2011 at 10:21 PM, drew Roberts zotz...@gmail.com wrote:

 I finally got it to compile and run. Cool. Then I chopped it up and
 put bits and pieces here and there in my class and got it to play a
 file then I clicked the button. The problem is that the loop blocks
 (right?) and the gui is then unresponsive. I need it to be responsive
 and to be able to play multiple files at once but clicking on each
 files button in the grid.

 Do you know of any simple to understand gui based jack file players?

i'm pretty confused. first it appeared that you wanted a simple,
non-GUI JACK audio file player. now it appears that you want a fairly
complex, GUI-driven audio file player that happens to use JACK.
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] Call for alpha testers: FLAM

2011-02-23 Thread Sean Bolton

On Feb 23, 2011, at 2:21 AM, Luis Garrido wrote:

I am preparing FLAM's (Front-ends for Linux Audio Modules) first
release. Among other things, FLAM intends to allow programmers and
non-programmers alike to create their own (external) GUIs for audio
plugins. At this moment only Rosegarden as a host and LADSPA as plugin
type are supported, but this is hopefully just a first step.

Project page:

http://vagar.org/code/projects/flam

...


I don't have time at the moment to test this out, but it looks
cool.  Note that Rosegarden is not the only host able to load
DSSI-style GUIs for LADSPA plugins: both jack-dssi-host and
ghostess can as well.

-Sean

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] the future of display/drawing models (Was: Re: [ANN] IR: LV2 Convolution Reverb)

2011-02-23 Thread Loki Davison
On Thu, Feb 24, 2011 at 12:24 PM, David Olofson da...@olofson.net wrote:

 I don't see how one could realistically design anything that'll come close to
 a down-clocked low end 3D accelerator in power efficiency. What are you going
 to remove, or implement more efficiently...?

 Also, 3D accelerators are incredibly complex beasts, with ditto drivers. (Part
 because of many very clever optimizations that both save power and increase
 performance!) But, hardcore gamers and other power users need or want them, so
 they get developed no matter how insanely overkill and pointless they may
 seem. As a result, slightly downscaled versions of that technology are
 available dirt cheap to everyone. Why not just use it and be done with it?


I am often amazed by the broad range of skills on LAD. Well written
and easy to understand. Now I think I should have got something
chunkier than my very nice GTX 460... speaking of energy, I can always
get another 2 for 3X SLI... ;)

Loki
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev