The way VST does it however, that wouldn't be needed, since
timestamps are related to buffers. 0 == start of this buffer.
Might look nice to plugins, but I foresee minor nightmares in
multithreaded hosts, hosts that want to split buffers, hosts that
support different buffer sizes in
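The buffer-splitting worry above can be made concrete. This is a minimal sketch (not VST's actual API; the `Event` type and `rebase_event` helper are hypothetical) of what a host has to do to events whose timestamps are relative to the start of the current buffer when it splits that buffer in two:

```c
#include <stdint.h>

typedef struct { uint32_t frame; /* offset within the current buffer */ } Event;

/* When the host splits a buffer at 'split' frames, events at or after
 * the split belong to the second half, with the offset re-based.
 * Returns 1 if the event moved to the second half. */
static int rebase_event(Event *e, uint32_t split)
{
    if (e->frame < split)
        return 0;          /* stays in the first half, unchanged */
    e->frame -= split;     /* now relative to the second half */
    return 1;
}
```

Harmless in isolation, but every split means walking and rewriting the whole event queue, which is where the "minor nightmares" come from.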
Hi everybody. I've been reading this list for a week. Thought I'd pitch
in here because I'm also writing a softstudio; it's pretty far already and
the first public release is scheduled Q1/2003.
First, I don't understand why you want to design a synth API. If you
want to play a note, why not
On Tue, 2002-12-10 at 08:38, Sami P Perttu wrote:
Hi everybody. I've been reading this list for a week. Thought I'd pitch
in here because I'm also writing a softstudio; it's pretty far already and
the first public release is scheduled Q1/2003.
for Linux, obviously? ;-)
First, I don't
On Tue, Dec 10, 2002 at 10:38:52 +0200, Sami P Perttu wrote:
First, I don't understand why you want to design a synth API. If you
want to play a note, why not instantiate a DSP network that does the job,
connect it to the main network (where system audio outs reside), run it
for a while and
On Tue, Dec 10, 2002 at 01:07:48 +0100, Tim Goetze wrote:
Steve Harris wrote:
Linuxsampler is using a similar approach, but it's not blockless (and
it wouldn't be noticeably better if it was).
Still if anyone ever wants to tackle this they can have a fair chunk of my
spare brain cycles.
On Tue, Dec 10, 2002 at 02:03:42 +0100, Tim Goetze wrote:
Not if it generates machine code. That way, it could (theoretically)
in realtime? we'll need everybody's spare cpu cycles!
No, it really isn't that slow.
SyncModular does a similar trick.
- Steve
On Mon, Dec 09, 2002 at 10:56:42 -0800, Tim Hockin wrote:
* Must get enough so that if something else asks you can dole some out
I'm not sure why this is necessary. Can someone explain? The others seem
reasonable (it's not like you're going to run out of ints).
- Steve
On Tue, Dec 10, 2002 at 03:56:32 +0100, David Olofson wrote:
linear_pitch = note_pitch * (12.0 / 16.0);
That is, stretch the scale so you need 16.0 note_pitch units to
span one octave. Now, all of a sudden, your synths - apparently
written for 12tET - can play 16tET. They don't
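The scale-stretching trick described above can be written down directly. A minimal sketch (the function name is mine, not from any proposed API): `note_pitch` counts scale steps, `linear_pitch` is what a synth written against 12tET expects (12.0 per octave), and scaling by 12.0/16.0 makes 16 note_pitch units span exactly one octave:

```c
/* Stretch a 16-notes-per-octave scale onto a 12.0/octave
 * linear_pitch axis: 16 note_pitch units come out as 12.0
 * linear units, i.e. exactly one octave. */
static double to_linear_pitch_16tet(double note_pitch)
{
    return note_pitch * (12.0 / 16.0);
}
```

So a synth that only understands 12.0/octave ends up playing 16tET without knowing it.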
On Tue, Dec 10, 2002 at 01:13:26 +0100, David Olofson wrote:
Then you're missing the point. My 12.0/octave linear_pitch is
*exactly* the same thing as your 1.0 - except that it's 12.0 instead
of 1.0. (See previous post.)
Yeah, and that's why it's bad.
They will as long as you don't try to
On Monday 09 December 2002 11:40 pm, David Olofson wrote:
I would be happy to see a clean solution for this, but so far, these
are the only alternatives we have managed to come up with:
1. 1.0/note for note_pitch, 1.0/octave for linear_pitch.
Converter plugins required
The argument against C++ has been a constantly changing ABI, but with
the release of GCC 3.2 it finally looks like G++ will have a stable ABI.
Time will tell, I guess.
i have my doubts about this. with the flexibility that c++
compile-time flags provide, i'm not sure one can ever talk about a
Steve Harris wrote:
On Tue, Dec 10, 2002 at 02:03:42 +0100, Tim Goetze wrote:
Not if it generates machine code. That way, it could (theoretically)
in realtime? we'll need everybody's spare cpu cycles!
No, it really isn't that slow.
SyncModular does a similar trick.
it'd still be
On Tue, Dec 10, 2002 at 09:14:36 -0500, Paul Davis wrote:
So time starts at some point decided by the host. Does the host pass the
current timestamp to process(), so plugins know what time it is? I assume
that if the host loops, or the user jumps back in song-position, time does
not jump
On Tue, Dec 10, 2002 at 03:08:01 +0100, Tim Goetze wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
modifications to be atomic. an indirect jump instruction is
not guaranteed to work ok: a pointer on x86 is 32
Paul Davis wrote:
no, atomic is 32 bits on x86. it's only 24 bits on sparc, where
the need to provide a spinlock to cover cache write-back effects
forces it to 24 bits. you can do atomic exchange and compare-and-swap
on pointers for x86.
thanks for the correction. sacrificing portability, this
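The portable way out of this thread's raw-instruction-patching debate is to swap a pointer rather than patch code. A sketch using C11 atomics (the names `Graph`, `publish_graph`, `current_graph` are mine; `Graph` stands in for a compiled DSP network), assuming single-writer publication and leaving reclamation of the old graph to the caller:

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct Graph { int id; } Graph;  /* stand-in for compiled DSP code */

static _Atomic(Graph *) active_graph;    /* NULL until first publish */

/* Non-RT thread: atomically swap in the new graph. The old pointer
 * comes back so the caller can free it once the RT thread is
 * guaranteed to have moved on to the new one. */
static Graph *publish_graph(Graph *next)
{
    return atomic_exchange(&active_graph, next);
}

/* RT thread: one atomic load at the top of each process() call. */
static Graph *current_graph(void)
{
    return atomic_load(&active_graph);
}
```

This sidesteps the per-architecture atomicity questions above at the cost of one indirection per block.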
Steve Harris wrote:
On Tue, Dec 10, 2002 at 03:08:01 +0100, Tim Goetze wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
By sync problem do you mean loop latency? They're not solved exactly, it's
nope, i meant dynamic
On Mon, Dec 09, 2002 at 10:56:42 -0800, Tim Hockin wrote:
* Must get enough so that if something else asks you can dole some out
I'm not sure why this is necessary. Can someone explain? The others seem
reasonable (it's not like you're going to run out of ints).
There was talk of plugins
Hi,
ams-1.5.5 is available from http://www.suse.de/~mana/kalsatools.html.
It fixes a serious bug in synth.cpp which causes the machine to freeze
when ams is started as root.
Some example patches for the bode frequencer LADSPA plugin included in
the new 0.3.3 version of Steve's plugin set have
On Tue, Dec 10, 2002 at 05:06:29 +0100, Tim Goetze wrote:
Steve Harris wrote:
On Tue, Dec 10, 2002 at 03:08:01 +0100, Tim Goetze wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
By sync problem do you mean
On Tuesday 10 December 2002 12:27 am, Bob Ham wrote:
Hi again,
I just completed a huge introduction to ladcca. I hope this will help
generate some interest. It's part of a very incomplete manual, which
I've put up on a webpage for the thing at http://pkl.net/~node/ladcca.html
The
On Tuesday 10 December 2002 07.17, Tim Hockin wrote:
a Channel has p Controls and q Ports
Well, a Channel can have p Controls OR p Audio Ports. I would say
that a Channel can have p *Slots* - where a slot can be one of:
Audio Input Slot
Audio Output Slot
Control Input
On Tuesday 10 December 2002 07.48, Tim Hockin wrote:
[All sorts of stuff about get_event_port() and returning a cookie
...]
Is this ok?
I think it sounds good - We'll need a constant so the host can ask
for the port which is to receive control-agnostic events, like
VOICE_ON.
Yes. Those
Steve Harris wrote:
nope, i meant dynamic updates on a realtime (lock-free)
code path; it's an interesting problem with, afaict, no
obviously elegant solutions.
Argh! I was thinking of dumping the code and rebuilding (hopefully keeping
the state). Doing it that way would be interesting, but
-Original Message-
From: Joshua Haberman [mailto:[EMAIL PROTECTED]]
Paul Davis [EMAIL PROTECTED] wrote:
Has anybody actually tried to get gtk+ and qt working in the same
application?
its been done.
it was ugly as sin.
This is a strong counterexample to the oft-repeated
Nathaniel Virgo said:
people with inclinations toward non-GPL open source licences
To pick a nit, code under non-GPL OSS licenses /can/ be linked to GPLed
(not LGPLed) libraries as long as the OSS license is a GPL compatible Free
Software License.
See
On Tuesday 10 December 2002 07.56, Tim Hockin wrote:
the RT engine - *unless* you decide on a number of VVIDs to
allocate for each Channel of every plugin, right when they're
instantiated.
That sounds most sensible. The instrument has to allocate voice
table space, so there is likely
On Tuesday 10 December 2002 08.00, Tim Hockin wrote:
It doesn't have to, unless it actually cares. If you save a
preset with a bad value in some field, the plugin will just fix
it when you load the preset.
There's just one problem: In what order do you write the controls
back to ensure
Just an observation about an alternative path on softsynths: a LADSPA plugin
or network can be used easily enough as a softsynth using control-voltage
(CV) approaches (a few already exist). It's just a matter of agreeing the
conventions - implementation is trivial.
I've been meaning to finish
On Tuesday 10 December 2002 08.56, Tim Hockin wrote:
[...timestamps...]
Wrapping is not a problem, so why avoid it? :-)
So time starts at some point decided by the host. Does the host
pass the current timestamp to process(), so plugins know what time
it is?
In Audiality, there is a host
I assume that if the host loops, or the user jumps back in
song-position, time does not jump with it, it just keeps on
ticking?
Yes. You can't rewind *time*, can you? ;-)
Seriously though, the reason to do it this way is that timestamp time
is directly related to audio time (ie sample count)
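The "wrapping is not a problem" claim above rests on how unsigned arithmetic behaves. A sketch (the helper name is mine): with 32-bit unsigned timestamps, `b - a` is correct modulo 2^32, so reinterpreting the difference as signed gives the right ordering as long as two stamps are less than half the range (~2^31 samples) apart:

```c
#include <stdint.h>

/* Signed distance from a to b on a wrapping 32-bit timeline.
 * Positive means b is later than a, even across the wrap point. */
static int32_t timestamp_diff(uint32_t a, uint32_t b)
{
    return (int32_t)(b - a);
}
```

So the host can let the sample counter run forever and never rewind it, and plugins only ever compare differences.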
On Tuesday 10 December 2002 09.38, Sami P Perttu wrote:
Hi everybody. I've been reading this list for a week. Thought I'd
pitch in here because I'm also writing a softstudio; it's pretty
far already and the first public release is scheduled Q1/2003.
Sounds interesting! :-)
First, I don't
On Tuesday 10 December 2002 11.38, nick wrote:
[...]
For a complete contrast, please look over
http://amsynthe.sourceforge.net/amp_plugin.h which i am still
toying with as a(nother) plugin api suitable for synths. I was
hoping to wait until i had a nicely written host and plugins to
On Tuesday 10 December 2002 12.36, Steve Harris wrote:
On Mon, Dec 09, 2002 at 10:56:42 -0800, Tim Hockin wrote:
* Must get enough so that if something else asks you can dole
some out
I'm not sure why this is necessary. Can someone explain? The others
seem reasonable (it's not like you're
On Tuesday 10 December 2002 12.32, Steve Harris wrote:
On Tue, Dec 10, 2002 at 01:07:48 +0100, Tim Goetze wrote:
Steve Harris wrote:
Linuxsampler is using a similar approach, but it's not blockless
(and it wouldn't be noticeably better if it was).
Still if anyone ever wants to tackle
i will be talking more about this issue at the LAD meeting in
karlsruhe (plug, plug :)
which is impossible for Californians to attend on a budget :(
On Tuesday 10 December 2002 13.00, Steve Harris wrote:
[...pseudocode and stuff...]
I know which I prefer. There are other solutions to the scaling
problem, but AFAICT they all involve actually using 1.0/octave
really and just scaling it up and down every time you want to use
it. Pointless.
On Tuesday 10 December 2002 13.15, Steve Harris wrote:
[...]
If you just represent pitch, then I can create a virtual
instrument (connected to a physical one if necessary) that can
create the right pitches for the scale (or be analogue).
I *am* suggesting to represent pitch; just that
On Tuesday 10 December 2002 14.48, Nathaniel Virgo wrote:
On Monday 09 December 2002 11:40 pm, David Olofson wrote:
I would be happy to see a clean solution for this, but so far,
these are the only alternatives we have managed to come up with:
1. 1.0/note for note_pitch, 1.0/octave for
Tim Goetze wrote:
Steve Harris wrote:
On Tue, Dec 10, 2002 at 03:08:01 +0100, Tim Goetze wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
By sync problem do you mean loop latency? They're not solved
On Tuesday 10 December 2002 6:14 pm, Bob Ham wrote:
LGPL would still protect your code in that any alterations to ladcca
itself would still have to be released under LGPL - it just means that if
commercial companies (or people with inclinations toward non-GPL open
source licences) were
On Tuesday 10 December 2002 15.08, Tim Goetze wrote:
Steve Harris wrote:
On Tue, Dec 10, 2002 at 02:03:42 +0100, Tim Goetze wrote:
Not if it generates machine code. That way, it could
(theoretically)
in realtime? we'll need everybody's spare cpu cycles!
No, it really isn't that
On Tuesday 10 December 2002 15.54, Steve Harris wrote:
On Tue, Dec 10, 2002 at 09:14:36 -0500, Paul Davis wrote:
So time starts at some point decided by the host. Does the host
pass the current timestamp to process(), so plugins know what
time it is? I assume that if the host loops, or
On Tuesday 10 December 2002 20.31, Paul Davis wrote:
I assume that if the host loops, or the user jumps back in
song-position, time does not jump with it, it just keeps on
ticking?
Yes. You can't rewind *time*, can you? ;-)
Seriously though, the reason to do it this way is that
Yes. Event/audio time is one thing, and musical time is something
completely different, although related.
you've just defined event time to be the same as audio time. that's
a mistake, i think. there are some definite benefits to being able to
define events' time in musical time as well.
Musical
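The musical-time-vs-audio-time argument above always comes down to a conversion somewhere. A minimal sketch of that conversion (the function name is mine), assuming a constant tempo; a real sequencer would use a piecewise tempo map:

```c
#include <stdint.h>

/* beats * (60 / bpm) seconds, times the sample rate,
 * rounded to the nearest audio frame. */
static uint32_t beats_to_frames(double beats, double bpm, double rate)
{
    return (uint32_t)(beats * 60.0 / bpm * rate + 0.5);
}
```

Whether this lives in the host (plugins see only frames) or in every plugin (events carry musical time) is exactly what is being debated.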
Tim Hockin wrote:
i will be talking more about this issue at the LAD meeting in
karlsruhe (plug, plug :)
which is impossible for Californians to attend on a budget :(
which in turn calls for an audio recording to be streamed over the
net
/me needs to dig around a little for streaming
On Tuesday 10 December 2002 23.02, Paul Davis wrote:
Yes. Event/audio time is one thing, and musical time is something
completely different, although related.
you've just defined event time to be the same as audio time.
that's a mistake, i think. there are some definite benefits to being
able
[representing pitch]
i'm a classically trained musician, and i even failed to learn csound
properly, but even so it strikes me as highly arbitrary and somewhat
anachronistic to stick to the 12 semitones/octave model.
so i'd strongly second steve's suggestion to have 1.0f per octave and
nothing
Hi,
Richard Furse hat gesagt: // Richard Furse wrote:
Just an observation about an alternative path on softsynths: a LADSPA plugin
or network can be used easily enough as a softsynth using control-voltage
(CV) approaches (a few already exist). It's just a matter of agreeing the
conventions -
On Tue, Dec 10, 2002 at 07:16:52 +0100, Tim Goetze wrote:
Argh! I was thinking of dumping the code and rebuilding (hopefully keeping
the state). Doing it that way would be interesting, but much harder. You'd
have to either use a lot of function calls or do some hard code relocation
stuff I
On Tue, Dec 10, 2002 at 07:51:58 +0100, David Olofson wrote:
Hmm... IIRC, someone initially misunderstood my design and thought
the VVIDs were a common resource maintained by the host.
That was me I think. Which is why I thought it was a good idea :)
This is optional for plugins, of course.
On Tue, Dec 10, 2002 at 08:53:02 +0100, David Olofson wrote:
You could have an interpreted mode, where it tries to evaluate it
normally, but that /will/ be slow.
Or an intermediate mode, where the GUI generates unoptimized machine
code more or less directly, by pasting micro plugins
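The "lots of function calls" alternative to generating machine code can be sketched concretely (all names here are hypothetical, not from any of the projects discussed): the "compiled" network is just an ordered array of micro-plugin function pointers, each with its own state, run back to back over the buffer:

```c
#include <stddef.h>

typedef void (*MicroOp)(float *buf, size_t n, void *state);

typedef struct {
    MicroOp  op;
    void    *state;
} Node;

/* Example micro op: multiply the buffer by a gain held in *state. */
static void gain_op(float *buf, size_t n, void *state)
{
    float g = *(const float *)state;
    for (size_t i = 0; i < n; ++i)
        buf[i] *= g;
}

/* The interpreter: run each node of the chain in order. */
static void run_chain(const Node *chain, size_t count,
                      float *buf, size_t n)
{
    for (size_t i = 0; i < count; ++i)
        chain[i].op(buf, n, chain[i].state);
}
```

Swapping networks then reduces to swapping the array pointer, which is much easier to do atomically than patching generated code.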
On Tue, Dec 10, 2002 at 10:28:29 +0100, David Olofson wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
modifications to be atomic. an indirect jump instruction is
not guaranteed to work ok: a pointer on x86 is
David Olofson wrote:
that's a mistake, i think. there are some definite benefits to being
able to define events' time in musical time as well.
Like what? Since we're talking about sample accurate timing, isn't
asking the host about the musical time for an event timestamp
sufficient for when
On Tue, Dec 10, 2002 at 07:11:51PM -, Richard Furse wrote:
pure-LADSPA networks. BTW, is anyone doing this already? If so, 50% of the
code is already done. ;-) I'm thinking in terms of defining a synth using
two patches - one to define the per-note network required (e.g.
CV-osc-filter-OUT)
On Tue, Dec 10, 2002 at 09:10:01PM +0100, David Olofson wrote:
What I'm trying to say is that (1/12)/note certainly doesn't look
nice in note oriented code, considering that there's *not*
necessarily 12 notes per octave:
No, but if there's not 12 notes per octave then you don't need a
On Tue, Dec 10, 2002 at 09:22:13PM +0100, David Olofson wrote:
It *does* hurt the 12tET case, at least unless you're suggesting
that sequencers should always store 1.0/octave...?
I thought sequencers were going to send note numbers?
Sequencers are going to store pitch in the form of
On Tue, Dec 10, 2002 at 10:56:41PM +, Steve Harris wrote:
Yeah, saol .so's would rock. Some problem with global variables IIRC, I
think there might be an ELF hack to get round it but I never looked into
it too closely.
This came up a long time ago when I looked into making
LADSPA a
On Wednesday 11 December 2002 00.05, Steve Harris wrote:
On Tue, Dec 10, 2002 at 10:28:29 +0100, David Olofson wrote:
it'd still be interesting to know how the sync problems this
method poses are solved: you cannot rely on executable code
modifications to be atomic. an indirect jump
On Wednesday 11 December 2002 00.01, Steve Harris wrote:
On Tue, Dec 10, 2002 at 07:51:58 +0100, David Olofson wrote:
Hmm... IIRC, someone initially misunderstood my design and
thought the VVIDs were a common resource maintained by the host.
That was me I think. Which is why I thought it was
On Tue, Dec 10, 2002 at 11:18:53PM +, Steve Harris wrote:
I'm not quite sure how either of them handle that newfangled poly-phoney
that seems so popular these days ;)
AFAICT, they both punt and do everything monophonic.
PD can handle polyphony, and is about as modular as they come;
but I
Yep, pd and suchlike excellent environments for putting together networks.
However, other applications don't import pd patches or instruments and there
is no way to share softsynths on Linux. Hence the current API debate.
What I'd like to see is a simple XML format for LADSPA plugin networks and
On Wednesday 11 December 2002 00.08, Tim Goetze wrote:
David Olofson wrote:
that's a mistake, i think. there are some definite benefits to
being able to define events' time in musical time as well.
Like what? Since we're talking about sample accurate timing, isn't
asking the host about the
On Tue, Dec 10, 2002 at 03:49:14PM -0800, Paul Winkler wrote:
Then JACK came along, and I decided to drop that idea and pursue
getting sfront to compile JACK clients. It works, mostly...
and one day I'll clean it up enough to submit to John L. to
distribute with sfront... really, I will...
On Wed, Dec 11, 2002 at 12:53:44AM +0100, David Olofson wrote:
The solution in SyncModular is much simpler, there's a big Compile
button and when you press it the sound goes away for a second or so
:)
Well, yeah - but I was under the impression that the idea was to
*avoid* that. :-)
On Wednesday 11 December 2002 00.29, Steve Harris wrote:
On Tue, Dec 10, 2002 at 09:10:01PM +0100, David Olofson wrote:
What I'm trying to say is that (1/12)/note certainly doesn't look
nice in note oriented code, considering that there's *not*
necessarily 12 notes per octave:
No, but
See how I handle this in Audiality. Originally, I thought it would be
a nice idea to be able to queue events ahead of the current buffer,
but it turned out to be a very bad idea for various reasons.
And normal plugins don't generate and output audio or control data
an arbitrary number of
On Tuesday 10 December 2002 8:39 pm, David Olofson wrote:
More ideas, anyone?
4. Raw frequency in Hz.
How would that make anything easier?
I'm not saying it necessarily would, I was just suggesting an alternative
that hadn't been mentioned at the time I started typing.
It's
On Wednesday 11 December 2002 00.33, Steve Harris wrote:
On Tue, Dec 10, 2002 at 09:22:13PM +0100, David Olofson wrote:
It *does* hurt the 12tET case, at least unless you're
suggesting that sequencers should always store 1.0/octave...?
I thought sequencers were going to send note
Steve Harris [EMAIL PROTECTED] writes:
Yeah, please do that would be damn useful. For rapid prototyping if
nothing else
FYI, making sfront produce code suitable for .so's is at the
top of the list of things to do these days, because AudioUnits
support awaits it. But, that's the sfront
I'm not convinced Bay has the correct connotation...
Well, the intention is that it should be thought of something like a
physical panel or area on a real device, where you have a number of
jacks, all of the same kind.
Maybe there's a better word for it.
There has to be - see below for
On Wednesday 11 December 2002 01.44, Paul Davis wrote:
See how I handle this in Audiality. Originally, I thought it would
be a nice idea to be able to queue events ahead of the current
buffer, but it turned out to be a very bad idea for various
reasons.
And normal plugins don't generate
David Olofson wrote:
And normal plugins don't generate and output audio or control data
an arbitrary number of buffers ahead. Why should they do that with
events?
you may have an algorithm written in a scripting (non-rt
capable) language to generate events for example. or you
don't want to
John Lazzaro wrote:
Basically, many Logic users would like to use SAOL as a scripting
language for their own plugins ... thus, AudioUnits support.
make that 'Logic and Linux users' please.
This could actually be a catalyst for SAOL becoming more popular
generally, if it works out ...
i'm
unsunscribe
you are discussing an API that is intended to support
*instruments*.
And very few instruments understand musical time, and practically
none *should* think in terms of notes.
i didn't say anything about notes (which is why i deliberately used a
non-MIDI number to stand for a pitch code of some
Hi,
I would like to request that if there are any users of the new RME HDSP
9652 card that are able to successfully install and use this card, would you
please get in touch with me and let me know what your system configurations
are? I understand that there are at least a couple of you out
have you tried doing it manually?
modprobe -v snd-hammerfall-mem
modprobe -v snd-hdsp
what happens? Make sure you load the snd-hammerfall-mem modules before
any other module.
also you will have to set the output levels on the channels. If you run
the command amixer contents you should see
D R,
Hi. Actually, I tried that last night, but I didn't load the
snd-hammerfall-mem. It complained about some sort of missing references.
I'll give this a try later today. Thanks.
I haven't tried amixer. I did try alsamixer and received a message 'no
mixer elems found', or something to
Hi. Actually, I tried that last night, but I didn't load the
snd-hammerfall-mem. It complained about some sort of missing references.
I'll give this a try later today. Thanks.
in the message you sent recently, snd-hammerfall-mem worked just fine,
and reported allocating buffers for the card.
Paul,
I think that if the dmesg / /var/log/messages output said there were
buffers, then it was getting loaded automatically. Most likely this is
because it is in modules.conf.
However, when I ran insmod snd-hdsp by hand, I got some error messages
that were a bit different than the
Apparent success! The HDSP 9652 is now recognized and Jack is running with
no xruns. (Under KDE no less...)
I have not had a chance to test audio yet, but thanks to Fernando I now have
the HDSP 9652 up and running under what will soon be standard in the Planet
flow.
It appears that either we did
On Wednesday 11 December 2002 01.42, Tim Hockin wrote:
I'm not convinced Bay has the correct connotation...
Well, the intention is that it should be thought of something
like a physical panel or area on a real device, where you have
a number of jacks, all of the same kind.
Maybe
On Wednesday 11 December 2002 02.06, Tim Goetze wrote:
David Olofson wrote:
And normal plugins don't generate and output audio or control
data an arbitrary number of buffers ahead. Why should they do
that with events?
you may have an algorithm written in a scripting (non-rt
capable)
Joern Nettingsmeier wrote:
[representing pitch]
i'm a classically trained musician, and i even failed to learn csound
properly, but even so it strikes me as highly arbitrary and somewhat
anachronistic to stick to the 12 semitones/octave model.
so i'd strongly second steve's suggestion to have
On Wednesday 11 December 2002 02.43, Paul Davis wrote:
you are discussing an API that is intended to support
*instruments*.
And very few instruments understand musical time, and practically
none *should* think in terms of notes.
i didn't say anything about notes (which is why i
ShezZan wrote:
unsunscribe
congrats. not only did you fail to read the instructions, you also
outwitted the administrivia filter.
sigh.
--
Jörn Nettingsmeier
Kurfürstenstr 49, 45138 Essen, Germany
http://spunk.dnsalias.org (my server)
http://www.linuxdj.com/audio/lad/ (Linux