[LAD] current state of the art for python to midi bindings?

2009-09-05 Thread Iain Duncan
Hi folks, I haven't done any linux audio dev in a few years, and I'm
wondering what I should be using for a simple cross-platform, real-time
capable python library for handling midi. I was previously using portmidi,
but noticed there have been more recent releases of rtmidi. Any advice
appreciated.

thanks
Iain

___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/mailman/listinfo/linux-audio-dev


[LAD] pyqt vs wxpython for audio apps?

2009-09-05 Thread Iain Duncan
Hi everyone, I'm wondering what most linux audio developers think about
pyqt vs wxpy for writing audio app guis now that qt is gpl'd.
Specifically I'm interested in tightly controlling the timing of the event
loop ( ie making some accurate external clock source like the csound
engine be the time boss ) and making decent fader widgets.

thanks!
iain



Re: [LAD] pyqt vs wxpython for audio apps?

2009-09-06 Thread Iain Duncan
On Sun, 2009-09-06 at 11:02 +0100, victor wrote:
> Hi Ian,
> 
> I have seen good results with wxPython. The guys in Montreal are using it
> with great success, see for instance http://code.google.com/p/ounk/
> We'll also be hearing from them soon on the new version of a
> certain Grand Lady, which uses wxPython too.

Hey Victor, good to hear from you! Thanks for the tips. Does anyone know
if there are any decent audio widgets that can be added to wx now? Last
time I checked it was some work to get audio style faders in there, but
that was several years ago.

Iain




Re: [LAD] pyqt vs wxpython for audio apps?

2009-09-06 Thread Iain Duncan
On Sun, 2009-09-06 at 11:01 -0400, Joshua D. Boyd wrote:
> Paul Davis wrote:
> > On Sat, Sep 5, 2009 at 5:33 PM, Iain Duncan wrote:
> >> Hi everyone, I'm wondering what most linux audio developers think about
> >> pyqt vs wxpy for writing audio app guis now that qt is gpl'd.
> >> Specifically I'm interested in tightly controlling the timing of event
> >> loop ( ie making some accurate external clock source like the csound
> >> engine be the time boss ) and making decent faders widgets.
> > 
> > my recommendation is that you rethink whatever architecture you're
> > imagining. you will not, and almost certainly should not, drive a GUI
> > event loop from anything audio related. you should (IMHO) be thinkng
> > about two different loops: a GUI event loop driven by mouse, keyboard
> > and system time(r|out) events, and an audio engine loop driven by the
> > "clock" of the audio API (JACK, ALSA, whatever). the GUI doesn't need
> tight timing (remember that your display almost certainly only
> refreshes at no more than 100 times per second, and quite possibly
> more in the range of 60-80 times per second).

Thanks. How do you communicate to the gui loop when it should update
itself based on audio activity? 

thanks
Iain




Re: [LAD] pyqt vs wxpython for audio apps?

2009-09-06 Thread Iain Duncan
On Sun, 2009-09-06 at 17:36 -0400, Paul Davis wrote:
> On Sun, Sep 6, 2009 at 2:38 PM, Iain Duncan wrote:
> >> Paul Davis wrote:
> >> > my recommendation is that you rethink whatever architecture you're
> >> > imagining. you will not, and almost certainly should not, drive a GUI
> >> > event loop from anything audio related. you should (IMHO) be thinkng
> >> > about two different loops: a GUI event loop driven by mouse, keyboard
> >> > and system time(r|out) events, and an audio engine loop driven by the
> >> > "clock" of the audio API (JACK, ALSA, whatever). the GUI doesn't need
> >> > tight timing (remember that your display almost certainly only
> >> > refreshes at no more than 100 times per second, and quite possibly
> >> > more in the range  of 60-80 times per second.
> >
> > Thanks. How do you communicate to the gui loop when it should update
> > itself based on audio activity?
> 
> why would it do that? most of what happens in the GUI as far as a
> display is driven by timers, since the screen update rate is
> relatively low. there is no point, for example, trying to display peak
> meters at "audio rate" - this is hundreds of times faster than the
> screen refresh. similarly for waveform displays (e.g. oscilloscopes).
> the data is flowing by at a much faster rate than the screen can
> display it, so you pick an update based mostly on the screen, not what
> is happening in the data. in an ideal world, GUI's would be driven by
> the video interface's "sync to vblank" signal, in the same way that we
> drive audio via the interrupt from the audio interface. without
> openGL, this concept doesn't exist, alas.
> 
> for specific notifications between the audio engine and GUI, you will
> want some kind of (relatively) lock free communication method. there
> are a variety of ways to do this, some better than others. ardour
> currently tends to use a FIFO sometimes read from a timeout in the
> GUI, and sometimes coupled to a pthread_cond_t (in this latter case,
> the audio engine will signal the GUI that something has happened).
> this latter technique technically violates RT programming guidelines
> because to raise a condition variable (pthread_cond_t) you need to
> take a lock. however, contention on the lock is almost non-existent
> and so for practical purposes it ends up not being an issue. you can
> also register callbacks with the engine to be called when things
> happen there - the callback must be realtime safe, but can queue up
> some kind of further action in the GUI that it knows will be "picked
> up" in the "near future".
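The FIFO-plus-timeout scheme Paul describes can be sketched as follows. This is an editor's illustrative sketch, not Ardour's actual code; `queue.Queue` takes a lock internally, where a real engine would use a lock-free FIFO, and the function names are invented for illustration:

```python
import queue

# Engine thread pushes small notification records into a FIFO; the GUI
# drains the FIFO from a periodic timeout instead of redrawing at audio
# rate. queue.Queue locks internally -- a real-time engine would use a
# lock-free FIFO, but the shape of the pattern is the same.

updates = queue.Queue(maxsize=1024)

def engine_push_peak(peak):
    """Called from the audio thread; must never block."""
    try:
        updates.put_nowait(("peak", peak))
    except queue.Full:
        pass  # dropping a meter update is harmless

def gui_timeout():
    """Called ~30 times/s by the GUI toolkit's timer; drains everything
    and keeps only the most recent value, which is all a meter needs."""
    latest = None
    while True:
        try:
            latest = updates.get_nowait()
        except queue.Empty:
            break
    return latest

# simulate a burst of audio-rate updates followed by one screen refresh
for p in (0.1, 0.5, 0.9, 0.7):
    engine_push_peak(p)
print(gui_timeout())  # -> ('peak', 0.7)
```

The key property is that the audio side only ever does a non-blocking push, while the GUI side absorbs bursts at its own, much slower, refresh rate.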

Thanks Paul, that should give me lots to think about. =)
iain



[LAD] prototyping callback based architecture in python?

2011-11-01 Thread Iain Duncan
Hi, I'm working on a project that I intend to do using the STK in a
callback style, but am hoping I can prototype the architecture in python
until I've figured out the various components and their responsibilities
and dependencies. Does anyone know of any kind of python library ( or
method? ) that would let me simulate the way callback-based STK apps using
RtAudio work? I.e. I want to have a python master callable that gets called
once per audio sample and has a way of sending out its results to the
audio subsystem.
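The pull-style structure described above can be simulated entirely offline in plain Python, without any audio library. This is an editor's sketch; `SineGen` and `run_offline` are invented names, and (as noted later in this thread) RtAudio actually calls back once per buffer, not once per sample:

```python
import math
import struct
import wave

SR = 44100
BLOCK = 256  # frames per callback, similar to RtAudio's bufferFrames

class SineGen:
    """Stateful unit whose process() fills one block, mimicking the
    per-block pull of an RtAudio callback."""
    def __init__(self, freq=440.0):
        self.phase = 0.0
        self.inc = 2.0 * math.pi * freq / SR

    def process(self, frames):
        out = []
        for _ in range(frames):
            out.append(math.sin(self.phase))
            self.phase += self.inc
        return out

def run_offline(gen, seconds, path):
    """Drive the callback once per block and write the result to a wav
    file, simulating the audio subsystem pulling blocks from us.
    Returns the number of frames written."""
    nblocks = int(seconds * SR) // BLOCK
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        for _ in range(nblocks):
            block = gen.process(BLOCK)
            w.writeframes(b"".join(
                struct.pack("<h", int(s * 32767)) for s in block))
    return nblocks * BLOCK

run_offline(SineGen(), 0.5, "sine.wav")
```

Running offline like this lets you work out component boundaries first, then swap `run_offline` for a real audio callback later.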

I've found a bunch of python audio libs, but it doesn't seem like they work
that way; maybe I'm missing something?

thanks!
Iain


Re: [LAD] prototyping callback based architecture in python?

2011-11-02 Thread Iain Duncan
Thanks guys. From what I could see on the PortAudio page it looked like
only non-blocking was supported, but Gary said on the stk list that it
might be possible with the python wrappers in the rtaudio package. I
realize it's probably not going to be practical as a long-term solution (
though I sure wish it were possible ), but as I actually earn my living
coding python and am a total C++ amateur, it's probably worth saving some
frustration by figuring out the architecture in a python prototype. I'm ok
with high latency for now.

Kjetil, do you know if anyone has experimented with a real time memory
allocator for Python?

Thanks
Iain

On Wed, Nov 2, 2011 at 6:37 AM, Kjetil Matheussen  wrote:

> > On Wed, Nov 2, 2011 at 9:24 AM, Kjetil Matheussen
> >  wrote:
> >
> >> I also think I remember someone using Python for real time sample
> >> by sample signal processing in Pd...
> >
> > right, but not sample-by-sample, or am i misremembering Pd internals?
> >
>
> It is possible (and quite simple) to write a wrapper for letting python do
> sample-by-sample processing in Pd. I remember someone mentioning
> someone doing it, but this was in 2005, and the performance was
> so bad it wasn't useful. But I might remember wrong.
>
>
>


[LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-02 Thread Iain Duncan
I looked into this about five years ago, but didn't get too far. Wondering
if anyone on here has experience splitting apps up into:

- realtime low latency engine in C++ using per sample callback audio (
either RTAudio or Jack or PortAudio )

- user interfaces and algo composition routines in python that either
communicate with the engine by shared memory or over a queue

Basically I want to be able to do the gui and data-transforming code in
Python whenever possible, and allow plugins to be written in Python to
work on the data.

I'm hoping to arrive at some sort of design that ultimately lets the engine
act as an audio server with multiple user interface clients, possibly not
even on the same machines, but definitely not on the same cores. If anyone
has tips, war stories, suggestions on reading or projects to look at, I'd
love to hear them.

Thanks
Iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-02 Thread Iain Duncan
On Wed, Nov 2, 2011 at 10:25 AM, Paul Davis wrote:

> On Wed, Nov 2, 2011 at 1:09 PM, Iain Duncan 
> wrote:
> > - realtime low latency engine in C++ using per sample callback audio (
> > either RTAudio or Jack or PortAudio )
>
> this conflicts with this:
>
> > Basically I want to be able to do the gui and data transforming code in
> > Python whenever possible and allow plugins to be written to work on the
> data
> > in Python
>
> the GUI is one thing; transforming data *in general* with python isn't
> going to fit into a low latency engine.
>
> now, of course, if you mean "performing edits to high level data
> structures", which you might, then there isn't really a problem
> (though you'll likely want to get into RCU to manage things). but if
> you are talking about DSP processing with python plugins, i *doubt*
> that it will work reliably.
>

Thanks Paul, I think I was unclear. If I understand you correctly, then I
meant transforming high-level data. I'm making a CV-style step sequencer
for live looping, so I'm talking about having Python be used to do things
like apply transformative routines to material that is in the sequences but
is not yet in the audio output chain. Stuff like: when I hit this midi
key, run this routine over the sequence data. These Python transformations
are meant to be run at lower priority, i.e. if the engine wants the
processor to spit out the next sample while the transformation is part way
through, that's fine, it gets interrupted. I wasn't intending to use Python
to apply dsp to signals going out in realtime; that would likely be
accomplished using either the STK or embedded Csound instances.

Does that sound more feasible? BTW, excuse my ignorance, but what is RCU?

I found some blog posts on Ross Bencina's (sp?) site about making sure
communication between the high priority engine and lower priority processes
works right, but I'm hoping to find more concrete examples of this, and
figure out how to do it between python and c++.

thanks for your help,
Iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-02 Thread Iain Duncan
Thanks guys, that's very helpful. I'll no doubt have further questions once
I get to prototyping that part. Paul, if you could let me know where to
look for examples in the ardour code that would be cool too.

thanks
iain

On Wed, Nov 2, 2011 at 10:45 AM, Jeff Koftinoff wrote:

> A good starting point for reading is on Software Transactional Memory
> (STM):
>http://en.wikipedia.org/wiki/Software_transactional_memory
>
> Jeff
>
> On 2011-11-02, at 10:40 AM, Paul Davis wrote:
>
> > On Wed, Nov 2, 2011 at 1:34 PM, Iain Duncan 
> wrote:
> >
> >> Does that sound more feasible? BTW, excuse my ignorance, but what is
> RCU?
> >
> > Yes, that's certainly feasible. RCU = Read-Copy-Update is a software
> > pattern for dealing with situations where you need to update a complex
> > data structure while it is in use. The general approach is to make a
> > copy, modify the copy, and the atomically (normally) swap a pointer to
> > the original for a pointer to the new one. somehow you have to clean
> > up the old one.
> >
> > you won't find this written up in any texts on programming, i think.
> > its in wide use in the linux kernel, and there have been attempts to
> > create some general purpose user space libraries that do it too. my
> > own sense is that almost every implementation of RCU will end up being
> > incredibly context (app) dependent. we use it a lot in ardour. its
> > quite complex.
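The RCU pattern Paul describes can be shown with a toy sketch. This is an editor's illustration, not Ardour's implementation; in particular, Python's garbage collector stands in for the reclamation step ("somehow you have to clean up the old one") that a C++ implementation must solve explicitly:

```python
import copy
import threading

class RcuCell:
    """Toy Read-Copy-Update cell. Readers grab the current snapshot with
    a single reference read; the writer mutates a private copy and then
    publishes it with one atomic pointer swap. Old snapshots are freed by
    Python's GC once the last reader drops them."""

    def __init__(self, value):
        self._value = value
        self._write_lock = threading.Lock()  # serialises writers only

    def read(self):
        return self._value  # readers never take a lock

    def update(self, mutate):
        with self._write_lock:           # one writer at a time
            new = copy.deepcopy(self._value)
            mutate(new)                  # edit the copy off to the side
            self._value = new            # atomic publish

playlist = RcuCell(["intro.wav", "verse.wav"])
snapshot = playlist.read()               # RT thread: cheap read
playlist.update(lambda p: p.append("outro.wav"))
print(snapshot)          # old readers still see ["intro.wav", "verse.wav"]
print(playlist.read())   # new readers see the updated list
```

The point of the pattern is that a reader in the RT thread is never exposed to a half-modified structure: it sees either the old version or the new one, never something in between.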


Re: [LAD] prototyping callback based architecture in python?

2011-11-02 Thread Iain Duncan
For anyone interested, it turns out there is an alpha package in rtaudio
that wraps RTAudio's callback facility in python, added this year by one of
Gary Scavone's students, Antoine Lefebvre.  It's incomplete (doesn't yet
support all the rtaudio options) so it's hard to tell whether it's working
super well on my machine, but it compiles and runs and makes a sine wave
ok. It'll give me something to port with for now at any rate, and maybe I
can help complete it.

it's in rtaudio-4.0.10/contrib/python/pyrtaudio
iain

On Wed, Nov 2, 2011 at 10:02 AM, Iain Duncan wrote:

> Thanks guys, it looked from what I could see on the port audio page that
> only non-blocking was supported, but Gary said on the stk list that it
> might be possible with the python wrappers in the rtaudio package. I
> realize it's probably not going to be practical as a long term solution (
> though I sure with it were possible )  but as I actually earn my living
> coding python and am a total C++ amateur, it's probably worth saving some
> frustration figuring out architecture in a python prototype. I'm ok with
> high latency for now.
>
> Kjetil, do you know if anyone has experimented with a real time memory
> allocator for Python?
>
> Thanks
> Iain
>
> On Wed, Nov 2, 2011 at 6:37 AM, Kjetil Matheussen <
> k.s.matheus...@notam02.no> wrote:
>
>> > On Wed, Nov 2, 2011 at 9:24 AM, Kjetil Matheussen
>> >  wrote:
>> >
>> >> I also think I remember someone using Python for real time sample
>> >> by sample signal processing in Pd...
>> >
>> > right, but not sample-by-sample, or am i misremembering Pd internals?
>> >
>>
>> It is possible (and quite simle) to write a wrapper for letting python do
>> sample-by-sample processing in Pd. I remember someone mentioning
>> someone doing it, but this was in 2005, and the performance was
>> so bad it wasn't useful. But I might remember wrong.
>>
>>
>>
>
>


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-02 Thread Iain Duncan
> If the python stuff is only for the gui and non-realtime stuff, this is a
> very
> practical approach. There are quite a few people doing that. I believe
> Fons'
> session managment and assorted apps are running that way (altough he
> doesn't
> seem to release them). Some of my prototype apps for the next-generation
> JackMix are built that way. And I would have done this for my
> university-job
> project had I learned python earlier.
>
> Doing applications in python with the sound-stuff happening in a separate
> C-
> compiled thread) gives that advantage that you can implement the apps as
>  modules and run them either stand-alone or within a bigger controlling
> app.
>

Thanks. If you have any, or know of any, examples I'd love to look at them.

Iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-02 Thread Iain Duncan
> Yes, that's certainly feasible. RCU = Read-Copy-Update is a software
> pattern for dealing with situations where you need to update a complex
> data structure while it is in use. The general approach is to make a
> copy, modify the copy, and the atomically (normally) swap a pointer to
> the original for a pointer to the new one. somehow you have to clean
> up the old one.
>
> you won't find this written up in any texts on programming, i think.
> its in wide use in the linux kernel, and there have been attempts to
> create some general purpose user space libraries that do it too. my
> own sense is that almost every implementation of RCU will end up being
> incredibly context (app) dependent. we use it a lot in ardour. its
> quite complex.
>

Thanks Paul. I'm wondering if it might be easier to begin with to have my
python side only read the shared data and send messages to the c engine for
writes. Is there a relatively straightforward way to have a python process
have read access to a block of memory that the c engine has full access to?
Would this simplify the concurrency issues?
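For what it's worth, Python's standard library can attach to a named shared-memory block (an editor's sketch using `multiprocessing.shared_memory`, available from Python 3.8; here one process plays both the engine and UI roles). Note that read-only access by itself does not remove all concurrency issues: the reader can still observe a half-written multi-word record unless the writer publishes updates atomically, e.g. behind a sequence counter:

```python
from multiprocessing import shared_memory
import struct

# "Engine writes, Python reads" over POSIX shared memory. In the real
# design the C/C++ engine would create and write this segment; the
# Python UI would attach by name and only ever read.

shm = shared_memory.SharedMemory(create=True, size=8)
try:
    # engine side: write one little-endian float64 control value
    struct.pack_into("<d", shm.buf, 0, 0.75)

    # python UI side: attach by name and read, never write
    reader = shared_memory.SharedMemory(name=shm.name)
    value = struct.unpack_from("<d", reader.buf, 0)[0]
    print(value)  # -> 0.75
    reader.close()
finally:
    shm.close()
    shm.unlink()
```

For values that fit in a single machine word (like one float here) a torn read is not a concern, which is why simple per-parameter layouts are attractive for this split.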

thanks for the tips everyone.
Iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-03 Thread Iain Duncan
Thanks everyone. Yeah, I did think of using osc to make some kind of
message-passing protocol, but also wondered if that would get restrictive.
Wow, this thread gives me a lot to think about.

Paul, would it be practical/possible to have the python process allowed to
read the shared memory only? Does that simplify the locking problems, or
not really? I could imagine that I could do what I want pretty well by
doing read-copy-send messages back to engine. But obviously some of you
have thought a lot harder about this.

thanks
iain


On Thu, Nov 3, 2011 at 2:47 PM, Harry van Haaren wrote:

> Paul C,
>
> I seem to remember getting into Jack programming using a Jack module for
> python... just looked it up from my backup of programming projects,
> "pyjack" is the name of the module, i was using version 0.1 at the time. It
> allows capture / playback of "standard" python arrays of floats.
>
> Project is currently located at:
> http://sourceforge.net/projects/py-jack/files/py-jack/  and is at version
> 0.5.2.
>
> HTH, -Harry
>


[LAD] timing the processing of queues between engine and ui threads?

2011-11-03 Thread Iain Duncan
Further to the conversation about Python to C++ ( with many helpful
responses, thanks everyone! ).

For my particular case, no drop outs is critical, and I really really want
to be able to run multiple UIs on lots of cheap machines talking to the
same engine over something (osc I expect). So I'm ok with the fact that
user input and requests for user interface updates may lag, as the queue is
likely to be really busy sometimes. I'm imagining:

Engine thread, which owns all the data actually getting played ( sequences,
wave tables, mixer/synth/effect controls, the works )
- gets called once per sample by audio subsystem ( rtaudio at the moment )
- does its audio processing, sends out audio
- loop by Magical Mystery Number 1:
   - get message off input queue describing change to a data point (
sequence or sample data )
   - updates data point
- loop by mystery number 2:
  - get message off 2nd UI queue requesting the state of a data point
  - sends back a message with that data to the requester
done
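One way to tame the "magical mystery numbers" in the engine loop above is to replace them with an explicit per-callback work budget: instead of draining the whole queue (which makes callback time unbounded), process at most N messages per block and let any backlog carry over. An editor's sketch, with an invented budget constant:

```python
from collections import deque

# Cap the message-handling work per audio callback with a fixed budget.
# The mystery number becomes one explicit, tunable constant, and a burst
# of UI edits simply takes a few extra blocks to absorb.
MAX_MSGS_PER_BLOCK = 8  # made-up value; tune against worst-case timing

data_points = {}
inbox = deque()  # stand-in for the real lock-free input queue

def engine_callback():
    # ... audio processing for this block would happen first ...
    for _ in range(MAX_MSGS_PER_BLOCK):
        if not inbox:
            break
        key, value = inbox.popleft()
        data_points[key] = value  # the simple "lock-get-write" update

# a UI burst of 20 edits takes 3 callbacks to absorb, never stalling audio
for i in range(20):
    inbox.append((f"step{i}", i))
engine_callback()
print(len(inbox))  # -> 12
```

With this shape, the worst-case callback cost is audio processing plus a bounded number of cheap updates, which is much easier to measure than "drain everything".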

GUI thread
- keeps its own copy of whatever data is pertinent to that particular gui
at that point
- sends a bunch of requests if user changes the view
- sends messages data change requests according to user actions

Here's my question: how do I determine the magical mystery numbers? I need
to make sure the engine callback is always done in time, no matter how many
messages are in the queue, which could be very high if someone is dropping
in a sample of audio. By making the data point messages very simple, I hope
that I'll have a pretty good idea of how long one takes to process; it's
just a lock-get-write to a simple data structure. But how much audio
processing has happened before that point will be variable. Anyone have
suggestions on that? Is the system clock accurate enough to check the time,
see how much of a sample period is left, and make some safe calculation
with headroom left over? It is totally ok for the queue and the inputs to
lag if the audio number crunching goes through a spike.

suggestions  most welcome. (including 'that design sucks and here's why')

thanks
iain


Re: [LAD] timing the processing of queues between engine and ui threads?

2011-11-03 Thread Iain Duncan
Thanks Dave, that's what I was looking for! Have you used this technique
yourself? Do you have any suggestions on how that is done with non-JACK
systems? And any open source code that uses that technique?

thanks so much.
iain

On Thu, Nov 3, 2011 at 5:42 PM, David Robillard  wrote:

> On Thu, 2011-11-03 at 16:29 -0700, Iain Duncan wrote:
> > Further to the conversation about Python to C++ ( with many helpful
> > responses, thanks everyone! ).
> >
> >
> >
> > For my particular case, no drop outs is critical, and I really really
> > want to be able to run multiple UIs on lots of cheap machines talking
> > to the same engine over something (osc I expect). So I'm ok with the
> > fact that user input and requests for user interface updates may lag,
> > as the queue is likely to be really busy sometimes. I'm imagining:
> >
> >
> > Engine thread, which owns all the data actually getting played
> > ( sequences, wave tables, mixer/synth/effect controls, the works )
> > - gets called once per sample by audio subsystem ( rtaudio at the
> > moment )
> > - does it's audio processing, sends out audio
> > - loop by Magical Mystery Number 1:
> >- get message off input queue describing change to a data point
> > ( sequence or sample data )
> >- updates data point
> > - loop by mystery number 2:
> >   - get message off 2nd UI queue requesting the state of a data point
> >   - sends back a message with that data to the requester
> > done
> >
> >
> > GUI thread
> > - keeps it's own copy of whatever data is pertinent to that particular
> > gui at that point
> > - sends a bunch of requests if user changes the view
> > - sends messages data change requests according to user actions
> >
> >
> > Here's my question, how do I determine the magical mystery numbers? I
> > need to make sure engine callback is always done in time, no matter
> > how many messages are in the queue, which could be very high if
> > someone is dropping in a sample of audio. By making the data point
> > messages very simple, I hope that I'll have a pretty good idea of how
> > long one takes to process. It's just a lock-get-write to simple data
> > structure. But how much audio processing has happened before that
> > point will be variable. Anyone have suggestions on that? Is the system
> > clock accurate enough to check the time and see how much a sample
> > period is left and make some safe calculation with headroom left over
> > there? It is totally ok for the queue and the inputs to lag if the
> > audio number crunching goes through a spike.
> >
> >
> > suggestions  most welcome. (including 'that design sucks and here's
> > why')
>
> Time stamp the events as they come in (e.g. with jack_frame_time()), and
> aim to execute them at (time + block_size_in_frames).  This avoids
> jitter, and keeps the rate that you execute them bounded by the rate
> they come in.
>
> You'll also probably want some kind of hard upper limit to ensure
> realtimeyness when things get crazy.  That truly is a Magical Mystery
> Number and will depend greatly on how expensive your events are.  Make
> one up.
>
> -dr
>
>
>
>
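David's timestamping suggestion can be sketched as follows (editor's illustration; a plain frame counter stands in for `jack_frame_time()`, and the function names are invented):

```python
# Stamp each UI message with the engine's frame clock on arrival, then
# execute it one block later: execution rate is bounded by arrival rate,
# and arrival jitter within a block is absorbed.

BLOCK_SIZE = 128
now_frames = 0   # would come from jack_frame_time() in a JACK app
pending = []     # list of (due_frame, message)

def receive(msg):
    """Non-RT side: stamp on arrival, schedule one block in the future."""
    pending.append((now_frames + BLOCK_SIZE, msg))

def engine_block():
    """RT side: run everything whose due time falls inside this block."""
    global now_frames
    end = now_frames + BLOCK_SIZE
    ran = [m for due, m in pending if due < end]
    pending[:] = [(due, m) for due, m in pending if due >= end]
    now_frames = end
    return ran

receive("set-tempo")                     # stamped at frame 0, due at 128
print(engine_block())  # block [0, 128): not due yet -> []
print(engine_block())  # block [128, 256): -> ['set-tempo']
```

On top of this you would still add the hard per-block cap David mentions, so a flood of due events can't blow the callback deadline.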


Re: [LAD] timing the processing of queues between engine and ui threads?

2011-11-03 Thread Iain Duncan
> For my particular case, no drop outs is critical, and I really really want
> to be able to run multiple UIs on lots of cheap machines talking to the
> same engine over something (osc I expect). So I'm ok with the fact that
> user input and requests for user interface updates may lag, as the queue
> is likely to be really busy sometimes. I'm imagining:

> you're going to want at least 3 threads:
>
>  1) inside the engine, something to handle requests from a UI that
> cannot be done under RT constraints, and route others that can into ...
>  2) the engine thread
>  3) some UI (not necessarily in the same process)
>

Thanks, can you elaborate on the first two? ( I appreciate the time,
understand if not ). Is thread 1 spawned by thread 2? Is the idea that the
engine thread can then start stuff that it allows to be interrupted but
still owns all the data for? And how would that be handled if that thread
is being handled by the audio subsystem and I'm just writing a callback
function that runs once a sample?

thanks again
iain


Re: [LAD] prototyping callback based architecture in python?

2011-11-04 Thread Iain Duncan
On Fri, Nov 4, 2011 at 10:36 AM, Emanuel Rumpf  wrote:

> 2011/11/2 Iain Duncan :
> > Hi, I'm working on an a project that I intend to do using the STK in a
> > callback style, but am hoping I can prototype the architecture in python
> > ... ... ...
> > until I've figured out the various components and their responsibilities
> and
> > dependencies. Does anyone know of any kind of python library ( or
> method? )
> > that would let me simulate the way callback based STK apps using RTAudio
> > work? IE
>
> You could implement the callbacks (and link it to STK) with Cython.
> (not CPython)
> This would require you to write a Cython-Header file for the called
> STK functions.
> Cython works very well for this, although you have to learn it, because
> it's neither real C nor real Python, but it's close.
>
> With some attentiveness, you could write rt- functions in Cython,
> because they compile to pure C / binary.
>

I was wondering about that, has anyone here had real success with Cython?


> I want to have a python master callable that gets called once per
> audio sample

> No, Once per audio buffer, consisting of many (e.g. 128) samples.
> That's a better choice, usually, even if you do sample-by-sample
> processing within the function/callback.
>

Oops, yeah, I realized I was mistaken there after sending it. That's what
I'm doing, thanks.


>
> > and has a way of sending out it's results to the audio
> > subsystem.
> >
> > I've found a bunch of python audio libs but it doesn't seem like they
> work
> > that way,
>
> Note:
>  Don't use python threads (as implemented in CPython), they do not
> work for this.
>  You might have more luck with the more recent multiprocessing module,
>  ( http://docs.python.org/library/multiprocessing.html )
>  It was introduced to circumvent some of PythonThreads limitations.
>
>
thanks for the tips!

iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-04 Thread Iain Duncan
On Fri, Nov 4, 2011 at 10:23 AM, Paul Davis wrote:

> On Fri, Nov 4, 2011 at 1:16 PM, Emanuel Rumpf  wrote:
>
> > While this is restrictive, in the way you mention, I think it's a
> > welcome simplification
> > ( compared to implementing a real-time-capable-linked-list + other
> > rt-structures ),
> > if your use-case doesn't require direct calls to the list (for any
> reason),
> > then you could request insert/remove/update operations through the
> > rt-ring-buffer.
>
> you can't perform insert/remove/update operations on a "normal" linked
> list in an RT thread.
>

Sorry, what do you need instead? ( trying hard to absorb all this.. )

thanks
iain


[LAD] prototyping (wrong) communication between python and rt threads?

2011-11-04 Thread Iain Duncan
Sounds like there are a number of options for me to look at, and I'll spend
some time getting the communication right. In the meantime, I'm wondering
if anyone can suggest how I can start experimenting with object design
while *deferring* the question of getting communication *right*, while I
study options. I'd like to come up with a well-encapsulated API, and
wondered if anyone has ideas for what would 'sort of work' for now while
I'm writing experimental code, but still be layered properly so that when
it's time to examine the threading and timing issues in detail, I can. Or
maybe this isn't possible?

For example, what kind of queuing system would one suggest for just getting
started, where occasional blocking is ok? Does anyone use boost queues, or
is it strictly a roll-your-own endeavor?

Is planning on sending messages with an osc protocol realistic as a
starting point?
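One answer to the layering question is to hide the transport behind a tiny facade so experimental code only ever calls `send()` and `poll()`. This is an editor's sketch with invented names; for prototyping, a locking `queue.Queue` is fine, and later the same interface can be backed by a lock-free ring buffer or an OSC socket without touching callers. The `(address, args)` message shape deliberately mirrors OSC:

```python
import queue

class Transport:
    """The seam to design against: callers never see the implementation."""
    def send(self, address, *args):
        raise NotImplementedError
    def poll(self):
        raise NotImplementedError  # returns a message or None

class LocalQueueTransport(Transport):
    """Good enough for a prototype where occasional blocking is ok."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, address, *args):
        self._q.put((address, args))
    def poll(self):
        try:
            return self._q.get_nowait()
        except queue.Empty:
            return None

t = LocalQueueTransport()
t.send("/seq/step", 3, 0.5)      # UI side
print(t.poll())                  # engine side -> ('/seq/step', (3, 0.5))
```

When the threading and timing work starts in earnest, only subclasses of `Transport` need to change.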

thanks
iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-04 Thread Iain Duncan
>> > While this is restrictive, in the way you mention, I think it's a
>> > welcome simplification
>> > ( compared to implementing a real-time-capable-linked-list + other
>> > rt-structures ),
>> > if your use-case doesn't require direct calls to the list (for any
>> > reason),
>> > then you could request insert/remove/update operations through the
>> > rt-ring-buffer.
>>
>> you can't perform insert/remove/update operations on a "normal" linked
>> list in an RT thread.
>
> Sorry, what do you need instead? ( trying hard to absorb all this.. )

either
>
>(1) a lock free data structure
>(2) perform the modifications on a copy in a non-RT context and
> then make the result
>   available to the RT context (i.e. RCU)
>

Sounds like it will be worth learning about both. I gather the Ardour
source is a good example of RCU, correct? Can anyone point me at
documentation or examples of a lock-free data structure?

thanks
Iain
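The classic lock-free structure for this job is the single-producer/single-consumer ring buffer (JACK's `jack_ringbuffer_t` works this way). A toy Python version for illustration only, editor's sketch; its correctness relies on exactly one writer thread and one reader thread, each owning its own index, and in CPython the GIL stands in for the atomic loads/stores a C or C++ version would need:

```python
class SpscRing:
    """Toy single-producer/single-consumer ring buffer."""
    def __init__(self, size=16):
        self.buf = [None] * size
        self.size = size
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def push(self, item):
        nxt = (self.tail + 1) % self.size
        if nxt == self.head:
            return False          # full: drop, never block the RT thread
        self.buf[self.tail] = item
        self.tail = nxt           # publish only after the slot is written
        return True

    def pop(self):
        if self.head == self.tail:
            return None           # empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % self.size
        return item

r = SpscRing(size=4)              # holds size - 1 == 3 items
assert all(r.push(i) for i in range(3))
assert not r.push(99)             # full
print([r.pop(), r.pop(), r.pop(), r.pop()])  # -> [0, 1, 2, None]
```

Because each side only writes its own index, no lock is needed; the order of "write slot, then advance tail" is what makes the handoff safe.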


[LAD] audio architecture wiki/docs/e-book idea?

2011-11-04 Thread Iain Duncan
Hey everyone, especially those who have been helping me with me
architecture questions. I'm wondering whether some of you would be
interested in helping in a simple advisory/editorial capacity if I were to
try to write up a wiki or e-book of some kind on real time audio
architecture. It seems to me like there is a lot of knowledge here on it,
and not very many (if any) good starting points for such things. I've found
decent references on the dsp side of audio coding, but not seen anything on
'how to put together a real time audio app from a-z' kind of thing. I find
writing docs helps me clarify things in my head, I'd be interested in doing
some writing if I know that people who know what they are doing would be
interested in advising and correcting. I figured if I put it online it
might be a good source of publicity for your work and we could link back to
projects ( Ardour, etc. )

It would take a while of course, but might also help people new to these
lists and give us all something to point at and say: there's a good write-up
on that here ->

thoughts?
iain


Re: [LAD] prototyping callback based architecture in python?

2011-11-04 Thread Iain Duncan
On Fri, Nov 4, 2011 at 11:32 AM, Kjetil Matheussen <
k.s.matheus...@notam02.no> wrote:

> >> Thanks guys, it looked from what I could see on the port audio page that
> >> only non-blocking was supported, but Gary said on the stk list that it
> >> might be possible with the python wrappers in the rtaudio package. I
> >> realize it's probably not going to be practical as a long term solution
> >> (
> >> though I sure wish it were possible )
> >> coding python and am a total C++ amateur, it's probably worth saving
> >> some
> >> frustration figuring out architecture in a python prototype. I'm ok with
> >> high latency for now.
> >>
> >> Kjetil, do you know if anyone has experimented with a real time memory
> >> allocator for Python?
> >>
> >
> > No, I don't know.
> > Are you sure you need to use Python? There are other high level languages
> > letting you do this, which are much faster than Python.
> > LuaAV for Lua is the most obvious since Lua is the same
> > type of language as Python. Other alternatives I know of are Faust,
> > Kronos (if it's available now) and Snd-RT.
> > (Of those last three, you should look at Faust first.)
> >
>

Well, I code in Python for a living, so as far as maximizing developer time
it makes sense! And there are a lot of nice libraries out there for doing
things in Python ( gui, midi, osc interfaces, etc )

iain


Re: [LAD] prototyping callback based architecture in python?

2011-11-04 Thread Iain Duncan
On Fri, Nov 4, 2011 at 1:42 PM, Emanuel Rumpf  wrote:

> 2011/11/4 Iain Duncan :
> >
> > On Fri, Nov 4, 2011 at 11:32 AM, Kjetil Matheussen
> >> > Are you sure you need to use Python? There are other high level
> >> > languages
> >> > letting you do this, which are much faster than Python.
> >> > LuaAV for Lua is the most obvious since Lua is the same
> >> > type of language as Python. Other alternatives I know of are Faust,
> >> > Kronos (if it's available now) and Snd-RT.
> >> > (Of those last three, you should look at Faust first.)
> >> >
> >
> >
> > Well, I code in Python for a living, so as far as maximizing developer
> time
> > it makes sense! And there are a lot of nice libraries out there for doing
> > things in Python ( gui, midi, osc interfaces, etc )
> >
>
> While I can understand your desire to code in Python,
> what you are trying to accomplish is not simple with Python
> (if possible at all), not even with C.
> All your questions are just the beginning of a very huge problem ;)
> I presume: by trying to use Python for this, dev-time will grow
> exponentially ...
> Do you have any experience in connecting python to C, "wrapping" ?
>
> I would advise (if asked ;) to skip
> Python for the real-time part.
> Use any language that is capable.
>
> (Apparently, most "Python Modules" are actually Wrapper-Classes for C
> - would you call that a capable language ? ;)
>
> You could still write the GUI in Python,
> choosing any applicable protocol (OSC, D-Bus, ...) l in order to
> connect to the non-rt part of the engine.
>

Thanks for the opinion Emanuel, very helpful. Yeah, it may be better to
just start off in C++ for the engine side and bite the bullet there.

iain


Re: [LAD] timing the processing of queues between engine and ui threads?

2011-11-05 Thread Iain Duncan
On Thu, Nov 3, 2011 at 8:05 PM, David Robillard  wrote:

> On Thu, 2011-11-03 at 18:32 -0700, Iain Duncan wrote:
> > thanks Dave, that's what I was looking for! Have you used this
> > technique yourself? Do you have any suggestions on how that is done
> > with non jack systems? And any open source code that uses that
> > technique?
>
> Yes, I use it in Ingen, where all control is via Events.
>
> Event has three main methods:
>
> pre_process() - ("prepare") Execute everything in a non-realtime thread
> that has to happen before execution in the audio thread
>
> execute() - ("apply") Execute/apply command in the audio thread
>
> post_process() - ("finalize") Execute anything that needs to happen
> after execution in a non-realtime thread, like clean up and notifying
> the UI(s) about changes.
>
> The only difference non-jack would make is you need some function to
> tell you roughly what audio time it is you can call from another thread.
>

Does one use the system clock for that? Is it accurate enough? Any further
elaboration would be great. (But I appreciate all the help so far either
way.)
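The three-phase event pattern described above could be prototyped roughly like this (the SetVolumeEvent and Engine classes here are hypothetical, not Ingen's code):

```python
# Hypothetical sketch of the pre_process / execute / post_process pattern.
# All allocation happens in the non-RT phases; execute() only assigns.
class Engine:
    def __init__(self):
        self.volume = 1.0

class SetVolumeEvent:
    def __init__(self, value):
        self.value = value
        self.old = None

    def pre_process(self, engine):   # non-RT: prepare anything costly
        self.old = engine.volume

    def execute(self, engine):       # RT audio thread: apply, no allocation
        engine.volume = self.value

    def post_process(self, notify):  # non-RT: clean up, notify the UI
        notify(("volume", self.old, self.value))
```

The event object itself is allocated in the non-RT thread and travels through a queue, so the audio thread only ever calls execute() on a fully prepared object.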

thanks
Iain


>
> -dr


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-05 Thread Iain Duncan
> Basically one "RtQueue" instance can pass messages back and forth between
> two threads lock free.
> I'd like to improve it so that one get / set method is there, and its
> thread-aware so it will automatically push / pull to the right ringbuffer.
> But that's a touch hard for me as I'm using GLib threads and finding which
> thread is running is something i can't do yet :D

> this approach requires that all your data structures can be
> representable as POD (plain old data) and be effectively and easily
> de/serialized from/to a bytestream.
>
> this can be quite restrictive. linked lists, for example ...
>

Just wondering if I can get a clarification: is it the internal
audio/sequence data for the engine that needs to be representable as POD?
The reason for this is so that when inserts happen the RT thread is not
dealing with memory management for a linked list? Am I correct in
understanding that it's the memory allocation part of creating new pointers
that needs to be avoided in the RT thread in case it blocks or gets stalled
by the OS?

Does this mean that if I can get away with some kind of data scheme for the
engine that is not dependent on pointer management, it will be a lot easier
to update based on incoming queued messages?
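The POD idea above can be sketched with Python's struct module (the field layout here is invented for illustration): an event flattened to fixed-size bytes can travel through a byte ring buffer with no pointers or allocation on the RT side.

```python
import struct

# Sketch of the POD idea: a note event flattened to fixed-size bytes, so
# it can travel through a byte ring buffer with no pointers or allocation
# on the RT side. This field layout is invented for illustration.
NOTE_FMT = "<Ihhf"                       # frame time, pitch, velocity, duration
NOTE_SIZE = struct.calcsize(NOTE_FMT)    # 12 bytes, "<" means no padding

def pack_note(frame, pitch, velocity, duration):
    return struct.pack(NOTE_FMT, frame, pitch, velocity, duration)

def unpack_note(data):
    return struct.unpack(NOTE_FMT, data)
```

Because every event is the same size, the reader always knows how many bytes to pull off the ring before decoding.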

thanks
Iain


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-05 Thread Iain Duncan
On Thu, Nov 3, 2011 at 2:40 PM, Harry van Haaren wrote:

> On Thu, Nov 3, 2011 at 8:25 PM, Paul Davis wrote:
>
>> this can be quite restrictive. linked lists, for example ...
>>
>
> Good point, and in some places I do find myself iterating lists into a
> vector in a non-RT context, and then sending the vector as a blob via OSC.
> It is a drawback of this separation.
>

What kind of context requires that?

thanks


Re: [LAD] Communicating between python UI and C++ engine for real time audio?

2011-11-05 Thread Iain Duncan
> @ Iain and others
> Before you start to implement your own, which can take as much time as
> your whole project ...
> Here is a list of some libs I found, that implement lock-free
> structures, STM, etc.
> http://wiki.linuxaudio.org/wiki/programming_libraries#lockfree_non-blocking_data_structures_-_libraries
> If anything is missing, let me know, or edit yourself.

Do you have a suggestion for which to use to 'just get started'? I plan to
revisit the threading in detail down the road, and keep it well wrapped up
in specific modules for that purpose, but also want to avoid spending too
much time exploring all the options before figuring out the rest of the
app. I need to be able to run on Linux and OS X; Windows is optional at
this point.

thanks!
Iain


Re: [LAD] timing the processing of queues between engine and ui threads?

2011-11-06 Thread Iain Duncan
>>> The only difference non-jack would make is you need some function to
>>> tell you roughly what audio time it is you can call from another thread.
>>
>> Does one use the system clock for that?
> I think frame time (a frame of samples) is meant here  ? That time is
> delivered in the jackd process callback.
>
>> Is it accurate enough?
> Depends on the system clock used, I presume.
> For best accuracy, you have to configure your kernel to support HPET
> (high precision event) timers
> and make ALSA use it as default.

> the clock used for the system clock is less important than using a DLL
> to "link" the audio clock and the system clock. this enables you to
> answer the question "if its time T on clock1, what time is it on
> clock2?"
>
> fons wrote the canonical paper on this for a Linux Audio conference a
> few years ago, and JACK contains a DLL for this purpose
> (jack_get_microseconds() will return a prediction of the current time
> according to the audio clock, based on the system clock and the DLL).
>

thanks, it's sounding increasingly like I should be using jack for the time
being.
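For a rough picture of what such a clock link does (this is only a crude linear fit, not the filtered DLL from Fons's paper): keep the last (frames, seconds) observation and the measured rate, then extrapolate between the two clocks.

```python
# Crude sketch of linking two clocks (NOT Fons Adriaensen's DLL): store
# the last (frames, seconds) observation published by the audio thread
# and the measured rate, then predict frame time from system time.
class ClockLink:
    def __init__(self, nominal_rate=48000.0):
        self.rate = nominal_rate     # frames per second, refined by updates
        self.ref = None              # last (frames, seconds) observation

    def update(self, frames, seconds):
        # Called whenever the audio thread publishes its frame time.
        if self.ref is not None:
            df = frames - self.ref[0]
            dt = seconds - self.ref[1]
            if dt > 0:
                self.rate = df / dt
        self.ref = (frames, seconds)

    def frames_at(self, seconds):
        # Any thread: predict the audio frame time at a system time.
        return self.ref[0] + (seconds - self.ref[1]) * self.rate
```

The real DLL does this with low-pass filtering so jitter in the observations doesn't shake the prediction; JACK exposes the result via jack_get_microseconds().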

Iain


Re: [LAD] audio architecture wiki/docs/e-book idea?

2011-11-11 Thread Iain Duncan
Thanks! Looking forward to those! Have you considered having links to them
added to the aforementioned pages? It's hard to find this stuff. =)

iain

On Fri, Nov 11, 2011 at 8:57 AM, Harry van Haaren wrote:

> On Thu, Nov 10, 2011 at 9:08 PM, David García Garzón wrote:
>
>> http://xavier.amatriain.net/PFC/dgarcia-master.pdf
>>
>
>
>> http://parumi.org/thesis/pauarumi_thesis.pdf
>>
>
> Thanks!! I wasn't aware of either of these, and both shed light on the
> design of RT audio apps, very much appreciated :)
>
> -Harry
>


[LAD] platform choices, jack for sequencing?

2011-11-13 Thread Iain Duncan
Thanks everyone for all the help on my architecture questions. It seems
like a lot of the best-practice functionality has tools/components for it
already in Jack. I *was* planning on using rtaudio in order to be cross
platform, but if it's a lot easier to get things done in Jack, I could live
with being limited to Linux and OS X.

Just wondered if I could poll opinions: for a real time step sequencer
meant to do super tight timing and be syncable with other apps, is Jack
going to be a lot easier to work with? Should I just lay into the jack
tutorials?

And is it straightforward to use the Perry Cook STK in a jack app?

thanks everyone
iain


Re: [LAD] platform choices, jack for sequencing?

2011-11-13 Thread Iain Duncan
btw, the tools/components I was referring to are the jack ring buffer and
jack clock functions mentioned by some folks here. The project is meant for
live shows, so stable timing, low latency, and no glitching out are
essential. More essential than anything, really. =)

thanks
Iain


Re: [LAD] platform choices, jack for sequencing?

2011-11-13 Thread Iain Duncan
Thanks for the opinions!

iain

On Sun, Nov 13, 2011 at 8:20 PM, Patrick Shirkey  wrote:

>
> > Thanks everyone for all the help on my architecture questions. It seems
> > like a lot of the best practise functionality has tools/components for it
> > already in Jack. I *was* planning on using rtaudio in order to be cross
> > platform, but if it's a lot easier to get things done in Jack, i could
> > live
> > with being limited to linux and OS X.
> >
>
> Jack2 runs on windows too. Just that it hasn't seen as much adoption as
> most of us round here refuse to work with MS tech unless paid a lot of
> money to do so. Some of us just refuse outright. But Stephan and his team
> have put in a lot of effort to make it work on MS platforms.
>
> > Just wondered if I could poll opinions, for a real time step sequencer
> > meant to do super tight timing and by syncable with other apps, is Jack
> > going to be a lot easier to work with? Should I just lay into the jack
> > tutorials?
> >
>
> It doesn't take long to get a jack app up and running. Its the front end
> that will consume the vast majority of your time.
>
> > And is it straightforward to use the perry cook stk in a jack app?
> >
>
> https://ccrma.stanford.edu/software/stk/usage.html
>
> Several options can be supplied to the configure script to customize the
> build behavior:
>
> --disable-realtime to only compile generic non-realtime classes
> --enable-debug to enable various debug output
> --with-alsa to choose native ALSA API support (default, linux only)
> --with-oss to choose native OSS audio API support (linux only, no native
> OSS MIDI support)
> --with-jack to choose native JACK API support (linux and Macintosh OS-X)
> --with-core to choose Core Audio API support (Macintosh OS-X)
>
>
>
> > thanks everyone
> > iain
> >
>
>
> --
> Patrick Shirkey
> Boost Hardware Ltd
>


[LAD] tutorial or example for using jack ring buffer between threads?

2011-11-13 Thread Iain Duncan
Can anyone point me at what they consider the best thing to look at for an
introduction to communication between threads in a jack app using the
ringbuffer?

I found some, but as docs appear a bit scattered, wondered if there was a
known best-first-reference type thing.
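In the meantime, the core idea behind the JACK ringbuffer (single producer, single consumer, no locks) can be sketched in a few lines. This is a teaching toy, not the real jack_ringbuffer API:

```python
# Toy single-producer/single-consumer ring, shaped like the idea behind
# jack_ringbuffer_* (one writer thread, one reader thread, no locks).
# In CPython this relies on the GIL; the real thing is C with memory
# barriers. Size must be a power of two.
class Ring:
    def __init__(self, size=16):
        self.buf = [None] * size
        self.mask = size - 1
        self.write_idx = 0   # advanced only by the producer
        self.read_idx = 0    # advanced only by the consumer

    def write(self, item):
        if self.write_idx - self.read_idx > self.mask:
            return False     # full: drop rather than block the producer
        self.buf[self.write_idx & self.mask] = item
        self.write_idx += 1
        return True

    def read(self):
        if self.read_idx == self.write_idx:
            return None      # empty
        item = self.buf[self.read_idx & self.mask]
        self.read_idx += 1
        return item
```

The key property: each index is written by exactly one thread, so neither side ever needs a lock, and the writer can never block the RT reader.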

thanks
iain


[LAD] RAUL?

2011-11-13 Thread Iain Duncan
I found it on Dave's site, but other than that, couldn't find much
mention of it. Do many people use it? Would it be wise to dig into RAUL for
writing a real time jack app?

Dave, any comments on it?

http://drobilla.net/software/raul/


thanks
iain


Re: [LAD] tutorial or example for using jack ring buffer between threads?

2011-11-14 Thread Iain Duncan
On Mon, Nov 14, 2011 at 2:03 AM, Harry van Haaren wrote:

> On Mon, Nov 14, 2011 at 6:29 AM, Iain Duncan wrote:
>
>> Can anyone point me at what they consider the best thing to look at for
>> an introduction to communication between threads in a jack app using the
>> ringbuffer?
>>
>
> There's an example client in the JACK source called "capture client",
> that's a pretty common response to said question. It writes audio data to
> the ring, and a disk thread takes it and does the writes in a non-RT
> context.
>
> I will do a tutorial style blog post on "extreme" simple ringbuffer usage
> in the near future. -Harry
>

Thanks, I'd be interested in seeing your tutorial; perhaps I can contribute
by helping you make sure it's clear to the newbie.

Thanks
Iain


Re: [LAD] RAUL?

2011-11-14 Thread Iain Duncan
> If you want to use it but the license is a problem, I can be convinced
> to change it to LGPL3+.  I simply default to GPL (as IMO everyone
> should) in the absence of specific arguments why that is not best in a
> given scenario.
>
> It never hurts to ask ;)
>
> > That said, the library seems a lot more feature complete than the
> > current code I have for the same functions, so there's something to
> > say for the efforts Dave Robbillard put into the library. Also in my
> > experience of librarys DR they're of high standard and I found no
> > bugs.
>

Hi Dave, I would def be more interested in checking it out if it were LGPL
or MIT or somesuch. As I'm sure you know, Csound went LGPL a number of
years ago now, and that definitely increased uptake in the long run. Like
Harry, I just never know what use I might put some code to, so have a bit
of a kneejerk be-careful reaction to GPL.

Thanks
Iain


Re: [LAD] RAUL?

2011-11-14 Thread Iain Duncan
On Mon, Nov 14, 2011 at 10:28 AM, Paul Davis wrote:

> On Mon, Nov 14, 2011 at 1:12 PM, Iain Duncan 
> wrote:
>
> > HI Dave, I would def be more interested in checking it out if it were
> LGPL
> > or MIT or somesuch. As I'm sure you know, Csound went LGPL a number of
> years
> > ago now, and that definitely increased uptake in the long run. Like
> Harry, I
> > just never know what use I might put some code to, so have a bit of a
> > kneejerk be-careful reaction to GPL.
>
> just to be clear, CSound went LGPL *from* the ridiculous "MIT
> educational license". that was a license that made its source code all
> but unusable (and at the very least, extremely unclear as to its
> usability).
>

Yes, the old csound license was a source of pain and angst for sure. I just
meant to point out that LGPL was chosen specifically so that the engine &
new API could be used in commercial products, and I think that has been a
really good move. I'm sure that the csound~ object for Max resulted in a
lot of interest, and I expect Csound For Live to do the same.

iain


Re: [LAD] RAUL?

2011-11-14 Thread Iain Duncan
Harry has a good point. If Dave had solicited opinions on it, I wouldn't
have dreamed of asking! I'd just assume you chose GPL for your reasons and
that they should be respected. =)

Iain

On Mon, Nov 14, 2011 at 6:10 PM, Harry van Haaren wrote:

> On Mon, Nov 14, 2011 at 4:38 PM, David Robillard  wrote:
>
>> It never hurts to ask ;)
>>
>
> Yes I suppose you're right. I'll note though, that I'd consider myself
> hesitant to request an author of GPL software to re-license as LGPL. It's
> basically asking "mind if *I* earn money from *your* work?", and that's not
> something that I'd like to promote.
>
> On the other hand, a LGPL library that would allow easy usage of threads,
> ringbuffers, etc would definitely benefit the project I'm doing.
>
> Somewhere on the GNU site I read a pretty good paragraph on "choosing the
> right license", and basically it comes down to this:
> if there's a commercial version of library X available, and an LGPL
> alternative of X appears, it will promote the use of open libraries, and
> hopefully the use of the library will improve it.
> If an GPL version appears, it will be used in open-source software, but
> most likely never be used in commercial / closed software solutions, and
> hence its "publicity" and user-base will be smaller...
>
> (Overcame my dose of laziness:
> http://www.gnu.org/licenses/why-not-lgpl.html )
>
> End of the day, I think respect for the authors work is the most important
> thing, and I'd personally be slightly agitated if somebody emailed "Hey can
> I take your efforts and make money without promising anything back?" as it
> would imply that they don't want to respect the license that I'd chosen for
> the efforts I'd done.
>
> Rant over I think :) -Harry
>


Re: [LAD] RAUL?

2011-11-16 Thread Iain Duncan
> The bottom line here, for this paragraph, is that if you don't like the
> license terms, you are perfectly free to write your own version of the
> wheel, just do it in a clean room, you cannot have ever seen a copy of that
> source code.  If, OTOH, you are not capable of doing that, and the only way
> to get the job done is to use something that has a license that is
> distasteful to you, then you should retrain your taste buds and comply with
> the terms.  The license & copyright notices the author chooses to put on
> his output ARE what he/she puts on it and you have zero rights to decide
> otherwise.
>

In Canada, at least, the above is not quite true. I don't know about
elsewhere, but we are fortunate to still have protections for educational
use, so I definitely can learn from other people's code, regardless of
license, and then re-implement it myself. Where the line is between copying
and re-implementing based on education is for the courts to decide, but
just thought I should point out that 'never having seen the source code'
would only be an issue IFF the infringement was already judged to be
copying.

Iain


[LAD] RAUL or other libraries for real this time? ;-)

2011-11-16 Thread Iain Duncan
Still curious about RAUL. As I have no immediate plans beyond learning how
to write a proper audio app, even if license restrictions prevent me from
using RAUL in a hypothetical commercial product years down the road, it may
well be worth using for my own personal needs in the meantime.

Would love to hear feedback on the technical merits of RAUL, minus the
license conversation, which now has its own thread. ;-)

heck I'd like to hear about any other libraries worth looking into too. I'm
leaning at the moment toward STK, embedded Csound, Jack, and Qt for the gui.
The part I need library help with is likely synchronization and
interprocess/interthread communication. ( ie do I use the jack ringbuffer?
Do I look at boost queue implementations? does RAUL have a higher level
convenience ring buffer? )

thanks!
iain


Re: [LAD] tutorial or example for using jack ring buffer between threads?

2011-11-16 Thread Iain Duncan
Thanks! Did you just write it? I'll go through it in the next few days and
can send comments if you want help making it more accessible/understandable
to the student.

much appreciated Harry!
Iain

On Wed, Nov 16, 2011 at 11:13 AM, Harry van Haaren wrote:

> Hey,
>
> Small tutorial here that writes integers into a ringbuffer, and the JACK
> thread reads them out:
> http://harryhaaren.blogspot.com/2011/11/tutorial-jack-ringbuffers.html
>
> Comments / improvements welcomed! -Harry
>


Re: [LAD] RAUL?

2011-11-16 Thread Iain Duncan
> The bottom line here, for this paragraph, is that if you don't like the
> license terms, you are perfectly free to write your own version of the
> wheel, just do it in a clean room, you cannot have ever seen a copy of that
> source code.  If, OTOH, you are not capable of doing that, and the only way
> to get the job done is to use something that has a license that is
> distasteful to you, then you should retrain your taste buds and comply with
> the terms.  The license & copyright notices the author chooses to put on
> his output ARE what he/she puts on it and you have zero rights to decide
> otherwise.
>

In Canada, at least, the above is not quite true. I don't know about
elsewhere, but we are fortunate to still have protections for educational
use, so I definitely can learn from other people's code, regardless of
license, and then re-implement it myself. Where the line is between copying
and re-implementing based on education is for the courts to decide, but
just thought I should point out that 'never having seen the source code'
would only be an issue IFF the infringement was already judged to be
copying.

By which I meant to say, if copying is determined to be infringing. IE, it
doesn't matter that a bebop contrafact is obviously a copy, because
copyright law already states that chord progressions aren't copyrightable.

Iain


Re: [LAD] RAUL or other libraries for real this time? ;-)

2011-11-17 Thread Iain Duncan
Thanks Tim, I'll def check it out. What's the difference to a newbie like
myself between yours and the one in Jack?

Congrats btw, that's awesome that your work will be in boost!

iain

On Thu, Nov 17, 2011 at 2:09 AM, Tim Blechmann  wrote:

> > part I need library help with is likely synchronization and
> > interprocess/interthread communication. ( ie do I use the jack
> ringbuffer?
> > Do I look at boost queue implementations? does RAUL have a higher level
> > convenience ring buffer?
>
> my boost.lockfree library has been accepted and will be shipped with future
> boost releases. it contains an mpmc-stack, an mpmc-queue and a wait-free
> spsc
> ringbuffer (same algorithm as the jack/kernel/supercollider ringbuffer).
>
> git repo: http://tim.klingt.org/git?p=boost_lockfree.git;a=summary
> (note that the addressing_reviews branch will be the one that will go into
> boost)
>
> cheers, tim
>
>
>


Re: [LAD] RAUL or other libraries for real this time? ;-)

2011-11-17 Thread Iain Duncan
Great, I'll likely be a user then! Was planning on using boost whenever
possible anyway.

iain

On Thu, Nov 17, 2011 at 11:48 AM, Tim Blechmann  wrote:

> > Thanks Tim, I'll def check it out. What's the difference to a newbie like
> > myself between yours and the one in Jack?
> >
> > Congrats btw, that's awesome that your work will be in boost!
>
> the main difference is prbly that the jack ringbuffer is plain c and prbly
> needs libjack, while boost.lockfree is a c++ library and header-only.
>
> cheers, tim
>
> > On Thu, Nov 17, 2011 at 2:09 AM, Tim Blechmann  wrote:
> > > > part I need library help with is likely synchronization and
> > > > interprocess/interthread communication. ( ie do I use the jack
> > >
> > > ringbuffer?
> > >
> > > > Do I look at boost queue implementations? does RAUL have a higher
> > > > level
> > > > convenience ring buffer?
> > >
> > > my boost.lockfree library has been accepted and will be shipped with
> > > future boost releases. it contains an mpmc-stack, an mpmc-queue and a
> > > wait-free spsc
> > > ringbuffer (same algorithm as the jack/kernel/supercollider
> ringbuffer).
> > >
> > > git repo: http://tim.klingt.org/git?p=boost_lockfree.git;a=summary
> > > (note that the addressing_reviews branch will be the one that will go
> > > into boost)


Re: [LAD] RAUL or other libraries for real this time? ;-)

2011-11-19 Thread Iain Duncan
On Fri, Nov 18, 2011 at 9:42 PM, David Robillard  wrote:

> On Thu, 2011-11-17 at 20:48 +0100, Tim Blechmann wrote:
> > > Thanks Tim, I'll def check it out. What's the difference to a newbie
> like
> > > myself between yours and the one in Jack?
> > >
> > > Congrats btw, that's awesome that your work will be in boost!
> >
> > the main difference is prbly that the jack ringbuffer is plain c and
> prbly
> > needs libjack, while boost.lockfree is a c++ library and header-only.
>
> If you happen to need a C ringbuffer in a program with no libjack
> dependency, here is mine:
>
> http://svn.drobilla.net/zix/trunk/zix/ring.h
> http://svn.drobilla.net/zix/trunk/src/ring.c
>
> (No, "zix" isn't a released library, I am doing the copy-paste code
> reuse thing for now to avoid having yet another stable ABI to worry
> about.  It might be some day.  There's a few other things in there that
> might be useful...)
>

Thanks! Is this something that is also in RAUL, or does RAUL have a
different ring buffer implementation? (Or not have one?)

Iain


Re: [LAD] "bleeding edge html5" has interesting Audio APIs

2011-11-19 Thread Iain Duncan
On Sat, Nov 19, 2011 at 3:24 AM, Stefano D'Angelo wrote:

> 2011/11/19 David Robillard :
> > On Fri, 2011-11-18 at 12:01 +0100, Adrian Knoth wrote:
> >> On Thu, Nov 17, 2011 at 11:48:28AM -0800, Niels Mayer wrote:
> >>
> >> > http://kinlan-presentations.appspot.com/bleeding/index.html#42
> >>
> >> Another step towards "What is an OS? I do everything in the browser."
> >> I don't really like it, though I see large-scale advantages when people
> >> don't have to install office anymore. ;)
> >>
> >> > Simple low-latency, glitch-free, audio playback and scheduling
> >> > Real-time processing and analysis
> >> > Low-level audio manipulation
> >> > Effects: spatial panning, low/high pass filters, convolution, gain,
> ...
> >>
> >> GUI-wise, using HTML5 sounds sane to me. Definitely for static UI
> >> elements, no idea about meters.
> >>
> >> Thanks to browsers, the Javascript compilers are damn good these days.
> >> If they add decent ways to do DSP with it, I don't see a reason why the
> >> whole concept shouldn't fly.
> >
> > I have every intention of moving as much GUI into the browser as
> > possible, FWIW.  Whatever isn't good enough now will be soon enough.
> >
> > Writing to native toolkits has always been the worst part of programming
> > an app, by far the biggest hindrance to true portability, and encourages
> > lack of UI/engine separation.  I will not miss it one little bit.
> >
> > There are things I don't like about it, and I'm sure a large number of
> > fellow retro curmudgeons around here feel likewise... but sometimes you
> > have to take a look around and acknowledge reality.  How many people
> > reading this keep a device with a full blown web browser in their
> > pocket?  When is the last time you used a computer that couldn't display
> > a web page?  QED.
> >
> > Writing one UI that works on all reasonable devices for free with zero
> > software installation?  Free "remote control" with any PC or tablet or
> > phone with wifi?  Yes please.  Whatever cons there are, they don't even
> > come close to trumping that very tangible user-visible win.
>

Hmm, I'd have to say though, as someone who does RIA apps for a living
(mostly in Python + Dojo and jQuery), it's still a freaking pain. Compared
to using PyQt or PyWx, all of the javascript widget libraries still really
hurt. Mind you, I'm sure that will change. Probably in less than a few
years too!

What I'd like to see is something that fills the same role as the browser,
but is a clean break from the sorry kludgey state of javascript. I'm sure
the Android guys are on it though, so I'd agree it's likely the path of the
future. Maybe we will at last see the arrival of the universal appliance
platform that java was supposed to give us 20 years ago? ;-)

iain
___
Linux-audio-dev mailing list
Linux-audio-dev@lists.linuxaudio.org
http://lists.linuxaudio.org/listinfo/linux-audio-dev


Re: [LAD] tutorial or example for using jack ring buffer between threads?

2011-11-19 Thread Iain Duncan
On 11/16/2011 01:16 PM, harryhaa...@gmail.com wrote:

> On , Iain Duncan  wrote:
> > Thanks! Did you just write it?
>
> Yup. As in literally just there. And I was reading your post in the new
> RAUL thread as you were typing that :D
> All the best, -Harry
>
Thanks Harry, that was really helpful, the audio capture one has a lot of
code in there specific to recording to disk, it was helpful to just see the
ring buffer and nothing else. I hope you expand on it later to make a next
step!

Iain


[LAD] jack position/timebase tutorial/starting points?

2011-11-19 Thread Iain Duncan
Just because everyone's tips here were so helpful for the ringbuffer
conversation, does anyone have any pointers for where to start
understanding jack transport and clocking, other than the transport client
example?

thanks!
Iain


[LAD] jack transport change accuracy for looping

2011-11-19 Thread Iain Duncan
Just wondering if I understand this correctly. I'm making a loop-based app
for step sequencing. When I previously did this in Csound, I clocked it off
a phasor, so the timing was sample accurate ( but that brought all its own
issues, to be sure ). I'm wondering whether I should do the same thing in a
jack app, or use the jack transport clock, or some hybrid.

My question: am I correct in understanding that if I use the jack transport
position to rewind in time, I'll get:

A) any other clients with running audio looping back too ( may or may not be
desirable )

B) a jitter based on the amount of time left between when the loop should
end and the end of the frame buffer in which the loop length runs out?

Has anyone solved B? Could it be done by some complex tempo-cheating trick?

Does anyone have any methods they've used for tight timing of looping in a
jack app?

Pointers at code appreciated of course. =)

thank!
Iain


Re: [LAD] jack transport change accuracy for looping

2011-11-19 Thread Iain Duncan
On Sat, Nov 19, 2011 at 7:19 PM, Paul Davis wrote:

> On Sat, Nov 19, 2011 at 10:15 PM, Iain Duncan 
> wrote:
> > Just wondering if I understand this correctly. I making a loop based app
> for
> > step sequencing. When I previously did this in Csound, I clocked it off a
> > phasor, so the timing was sample accurate ( but that brought all it's own
> > issues to be sure ). I'm wondering whether I should do the same thing in
> > jack app, or use the jack transport clock, or some hybrid.
>
> there was a proposal many (*many*) years ago from jesse chappell that
> fully covered looping with JACK. it was never implemented. as it
> stands, it is not possible to get seamless looping with jack
> transport. in practice it might sound right for a given user with a
> given set of clients, but change any aspect of the configuration and
> it would no longer be seamless.
>

Thanks, I know you did some stuff with this with your old step sequencer.
Do you think it's better to just ignore moving around the jack transport
and do my looping internally to my app only? If there are known issues that
I'm just going to discover the hard way, might as well save time not
barking up the wrong tree.

thanks
iain


Re: [LAD] jack transport change accuracy for looping

2011-11-19 Thread Iain Duncan
On Sat, Nov 19, 2011 at 7:23 PM, Iain Duncan wrote:

> On Sat, Nov 19, 2011 at 7:19 PM, Paul Davis wrote:
>
>> On Sat, Nov 19, 2011 at 10:15 PM, Iain Duncan 
>> wrote:
>> > Just wondering if I understand this correctly. I making a loop based
>> app for
>> > step sequencing. When I previously did this in Csound, I clocked it off
>> a
>> > phasor, so the timing was sample accurate ( but that brought all it's
>> own
>> > issues to be sure ). I'm wondering whether I should do the same thing in
>> > jack app, or use the jack transport clock, or some hybrid.
>>
>> there was a proposal many (*many*) years ago from jesse chappell that
>> fully covered looping with JACK. it was never implemented. as it
>> stands, it is not possible to get seamless looping with jack
>> transport. in practice it might sound right for a given user with a
>> given set of clients, but change any aspect of the configuration and
>> it would no longer be seamless.
>>
>
>
Also, is that proposal a dead duck now? Seems to me like if it worked, it
would be a pretty killer feature given the popularity of the Ableton Live
style of working these days.

Has anyone else looked into it since then?

thanks
iain


Re: [LAD] jack transport change accuracy for looping

2011-11-20 Thread Iain Duncan
On Sun, Nov 20, 2011 at 5:10 AM, Paul Davis wrote:

> On Sat, Nov 19, 2011 at 10:34 PM, Iain Duncan 
> wrote:
>
> > Also, is that proposal a dead duck now?
>
> No, its not a dead duck. But there is little to no manpower around to
> do an initial implementation so that it can be evaluated beyond paper.
>
> >Seems to me like if it worked, it
> > would be a pretty killer feature given the popularity of the Ableton Live
> > style of working these days.
>
> If you think that JACK transport looping has anything to do with the
> Live workflow, then sad to say but you either don't get jack transport
> or Live or both :)
>
> Many people look at Live and do not realize the sophistication of what
> they are doing. It has very little to do with the ability to set a
> global loop.
>

I probably was not clear enough, in my attempts at brevity. In my csound
app, and to the best of my knowledge in Live, you can have more than one
layer of looping going at once. In my case, I had a global master loop ( that
acted somewhat like x0x song mode ) and channels with independent loops
able to change loop length, phasor speed, and offset on the fly. The global
loop was useful for doing things like saying, 'on to the next song'.

This is what I was imagining as the use case for jack transport looping.
All the independent looping might run simultaneously while jack sat there
thinking it was playing the first 32 bars of a piece, allowing one to use
ardour or other apps to play one shot tracks of that length using whatever
they have going.

So, yeah, I'm aware that at the least I'm going to need phasor calculations
for the individual tracks, but was wondering whether I could use some of
what was already there in jack transport for the master song-mode sequencer.

thanks
iain


Re: [LAD] jack transport change accuracy for looping

2011-11-20 Thread Iain Duncan
> Can you use the same approach you did in csound, using
> jack_transport's BBT info to run your phasor?  It would require that
> some app set a tempo and time signature, of course.  I use klick, or
> gtklick (which has a tap tempo feature), to do this.
>

That's precisely what I'm trying to figure out right now, and that was one
of the things I was looking at jack transport for. In my csound-only
version, the master clock was an 8- ( or 16-, or 32- ) bar phasor with
modulo calculations for subloops, so 120 bpm over 32 bars became the
frequency of the master phasor clock. This was primitive in some ways, but
was a happy surprise musically, as it turned out to be awesome that one
could change loop length and start points willy-nilly: if it didn't add up
nicely, you got some truncation at the end but a hard reset at the top of
the phasor, and that *sounded great*. ( Actually that effect is the whole
reason I'm redoing it; it was a super great way to screw with loops, and
when I got a demo of Live, I was like, ok, we gotta do this with that cool
phasor technique again. )
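The phasor clock just described maps onto a buffer callback fairly
directly: keep a 0..1 phase for the master loop, advance it by nframes each
process() call, and derive sub-loops with modulo arithmetic. A minimal
sketch follows — all names and numbers here are illustrative, not from the
original csound app:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a master-phasor clock. One phasor spans the whole master loop
// (e.g. 8 bars); sub-loops are derived from it with fmod, so a loop that
// "doesn't add up nicely" gets truncated and hard-reset at the top of the
// phasor, as described above.
struct PhasorClock {
    double sample_rate;    // e.g. 48000.0
    double loop_seconds;   // master loop length in seconds
    double phase = 0.0;    // position in the master loop, 0.0 .. 1.0

    // advance by one audio buffer of nframes samples
    void tick(int nframes) {
        phase += nframes / (sample_rate * loop_seconds);
        phase -= std::floor(phase);          // hard reset at the loop top
    }

    // phase of a sub-loop occupying 1/divisions of the master loop
    double subloop_phase(int divisions) const {
        return std::fmod(phase * divisions, 1.0);
    }
};
```

Running this against jack just means calling tick(nframes) once per
process() callback; as long as loop_seconds is derived from the same tempo
and sample rate jack transport reports, the phasor's "8 bars" and jack's
"8 bars" stay in agreement.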

Where my brain is hurting right now is making the jump between the way I
thought about it in Csound, and putting it into buffer callbacks in Jack
( or RtAudio or PortAudio for that matter ).

What I do know, is that if I'm using Jack, I might as well make sure that
the 8 bar phasor is using the exact same version of '8 bars' as jack
transport.

I suppose one option is to do it STK style and just treat it like single
sample calculations until I get some draft thing running and revisit later.

Any suggestions welcome, I can see doing the math on running counts of
individual samples, or doing it on time taken from the jack transport
clock. Not sure which to try first,

thanks for the input!


Re: [LAD] jack transport change accuracy for looping

2011-11-20 Thread Iain Duncan
> i think you may be confused about what JACK transport offers. its a
> global transport. that means that when you locate (which includes to
> looping) to a new position, *all* clients must be ready to continue
> processing audio before it can roll again after the locate. For some
> kinds of clients, the transport is totally irrelevant (e.g. a software
> synthesizer). For other clients, they can be ready immediately that
> the locate happens (e.g. a MIDI sequencer running instrument plugins).
> For other clients, there is a delay between the locate and them being
> ready to play because of the need to load any amount of data from disk
> (imagine locating to a new position in a 100 track session in a DAW).
>
> now, if the clients all knew that (a) JACK transport was in a looping
> state (b) where the loop points were, they could prepare for this, and
> everyone could be ready immediately after the locate. Ardour can
> already do this without JACK, for example (its called "Seamless
> looping" in ardour, and rather than actually locate and then do the
> data i/o, it understands ahead of time the the loop point is coming up
> and gets the next chunk of data from the right place).
>
> however, JACK currently does not have a mechanism to mark the
> transport state as "looping" or to convey the loop points to clients.
> that's why in Ardour, for example, seamless looping is not allowed if
> you are synced to JACK transport (or MTC or MIDI clock or ...)
>

Thanks, that clarifies things a lot for me. I didn't think about the disk
seeking issue. (For my app, my intent is to have everything in RAM for live
playing ). Was the intent of the jack looping transport proposal to allow
clients to know that they were looping and do seamless looping too if they
are able?

iain


Re: [LAD] jack transport change accuracy for looping

2011-11-20 Thread Iain Duncan
On Sun, Nov 20, 2011 at 9:27 AM, Paul Davis wrote:

> On Sun, Nov 20, 2011 at 12:26 PM, Iain Duncan 
> wrote:
>
> > Thanks, that clarifies things a lot for me. I didn't think about the disk
> > seeking issue. (For my app, my intent is to have everything in RAM for
> live
> > playing ). Was the intent of the jack looping transport proposal to allow
> > clients to know that they were looping and do seamless looping too if
> they
> > are able?
>
> yes.
>

Off the official ardour-jack roadmap record, do you think there's a
realistic chance of this still happening in the next few years? At least
for Ardour? I might code such that I can use ardour with it when available
if that's in the works because that would be super cool, but I totally get
that it may be a very low priority for Ardour dev.

iain


Re: [LAD] Question 0

2011-11-24 Thread Iain Duncan
Depending on your experience, if you're new to developing audio and aren't
already an expert C programmer, the new book "The Audio Programming Book"
by Boulanger and Lazzarini and others is good, with lots of examples of
audio coding techniques.

Also the tutorials on the Perry Cook STK and RTAudio site are good too.

Iain

On Thu, Nov 24, 2011 at 7:47 AM, Celeven  wrote:

> Yes, it's a good beginning.
>
> Thanks
>
>
> On 24/11/11 15:31, Louigi Verona wrote:
>
>> Hey!
>> You might want to start with JACK: http://jackaudio.org/
>>
>> --
>> Louigi Verona
>> http://www.louigiverona.ru/
>>
>


Re: [LAD] Question 0

2011-11-24 Thread Iain Duncan
On Thu, Nov 24, 2011 at 11:21 AM, Renato  wrote:

> On Thu, 24 Nov 2011 11:07:16 -0800
> Iain Duncan  wrote:
>
> > Depending on your experience, if you're new to developing audio and
> > aren't already an expert C programmer, the new book "The Audio
> > Programming Book" by Boulanger and Lazzarini and others is good. lots
> > of examples of audio coding techniques.
> >
>
> Hi, does it focus particularly on linux? I.e. the jack API, lv2 and so
>

No, it's more a general introduction to making audio with C. It does have
examples with PortAudio and PortMidi, and how to make a VST plugin, but no
jack or LV2 specific stuff.

Iain


Re: [LAD] Linux Audio Documentaion Effort : (Was "Question 0")

2011-11-27 Thread Iain Duncan
I also think it's a much needed idea. I'd be happy to do some contributing
too, but like Harry, will need my contributions looked over by experts!

iain

On Sun, Nov 27, 2011 at 4:44 PM, Andrew C  wrote:

> Hiya Harry,
>
> On Mon, Nov 28, 2011 at 12:31 AM, Harry van Haaren
>  wrote:
> > I think some "beginner" coding documentation on Linux Audio would be a
> great
> > asset to the community, and I'm willing to contribute to such an effort.
> As
> > Robin Gareus mentioned in another thread, a "FLOSS" manual is probably
> the
> > best way to go for a community effort on documenting.
>
> I rather like this idea, for what it's worth.
>
> "if you think it should be taught that way, write the tutorial"... the
> downside of this is that if one tutorial uses toolkit X and the next
> toolkit Y, the average beginning coder is going to get lost in
> implementation details and that defeats the purpose of documentation :D
>
> Easier said than done, but why not use the most popular toolkits for
> each tutorial, that way you can follow each tutorial using whichever
> toolkit you want.
>
> Andrew.
>
> (hurray for me ignoring the amount of work the above statement would
> involve, but it's an idea.)


[LAD] opinions wanted on a couple of architecture questions

2011-12-09 Thread Iain Duncan
As I slowly figure out audio architecture, I've gotten to here:

main()
  - sets up jack, instantiates an Engine class and a Controller class

Engine
- the engine is passed to the jack audio callback, and has a .tick() to
calculate audio
- engine will receive messages over ToEngineQueue ( with a ringbuffer )
- engine will send messages back to controller over a second queue,
ToControllerQueue

Controller
- started in main thread
- will handle all input and output and instantiate any guis
- sends messages to engine over queue
- receives messages from engine over the ToControllerQueue

Message
- struct for passing messages, holds simple numbers, keeping individual
messages of known size for now

I've seen a few ways now in tutorials of handling some things and would
love opinions.

- Should the ToEngineQueue be a part of Engine? ie, do we pass messages to
engine with engine->newMessage( msg );
- or should the queues be instantiated in main and should Engine and
Controller each get pointers to the queues?
- I think the stk examples do the former and SuperCollider the latter

- is it a bad idea to have both engine and controller need pointers to each
other?
- is this an example of an undesired circular dependency?
- is avoiding that by having them each only depend on Queue and Message a
good plan?
- or should I avoid it by using an ABC for MessageReceivingComponent and
allow them each to have pointers
of type MessageReceivingComponent?
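For concreteness, the second option could look like this — plain std::queue
standing in for the real ringbuffer-backed queues, and every name made up
for illustration. main() owns both queues and hands each component a
reference, so Engine and Controller depend only on the queue and message
types and never on each other:

```cpp
#include <cassert>
#include <queue>

struct Message { int type; float value; };
using MessageQueue = std::queue<Message>;  // stand-in for a ringbuffer queue

class Engine {
    MessageQueue& in_;   // ToEngineQueue
    MessageQueue& out_;  // ToControllerQueue
public:
    Engine(MessageQueue& in, MessageQueue& out) : in_(in), out_(out) {}
    void tick() {                        // drain control messages, reply
        while (!in_.empty()) {
            Message m = in_.front(); in_.pop();
            out_.push({m.type, m.value * 2});   // placeholder "work"
        }
    }
};

class Controller {
    MessageQueue& to_engine_;
    MessageQueue& from_engine_;
public:
    Controller(MessageQueue& te, MessageQueue& fe)
        : to_engine_(te), from_engine_(fe) {}
    void send(Message m) { to_engine_.push(m); }
    bool receive(Message& m) {
        if (from_engine_.empty()) return false;
        m = from_engine_.front(); from_engine_.pop();
        return true;
    }
};
```

Wired this way there is no circular dependency to break, and no abstract
MessageReceivingComponent base class is needed.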

- how should one handle memory management of the messages? Is it ok for
Engine to allocate memory for new
messages and have them destroyed on receipt,
- or should I have queue delete the memory and return by value when
messages are fetched?
- what is the crustimony proceedcake for allocated memory from an engine?

thanks to anyone who feels like they have the time to answer these!
iain


Re: [LAD] opinions wanted on a couple of architecture questions

2011-12-09 Thread Iain Duncan
> - how should one handle memory management of the messages? Is it ok for
> Engine to allocate memory for new
> messages and have them destroyed on receipt,
> - or should I have queue delete the memory and return by value when
> messages are fetched?
> - what is the crustimony proceedcake for allocated memory from an engine?
>

Oops, perhaps I missed the boat on the ringbuffer. Is the whole point that
with a ring buffer the memory for the max number of messages is
pre-allocated, and we can just pass by value in and out of the ringbuffer
without any memory management?

thanks
Iain
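For illustration, the pass-by-value pattern being described can be sketched
like this. This is a hand-rolled stand-in, not the jack API — real code
would use jack_ringbuffer_create/write/read — but it shows the point: all
storage for the maximum number of messages is allocated up front, and
fixed-size POD messages are copied in and out by value, so neither thread
ever allocates or frees memory.

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

struct DataMessage {        // plain old data: safe to copy byte-for-byte
    int   type;
    int   track;
    int   step;
    float value;
};

// Single-producer single-consumer ring of whole messages. Capacity is kept
// a power of two so the monotonically increasing indices stay correct when
// they eventually wrap.
template <std::size_t N>
class MessageRing {
    DataMessage buf_[N];                  // preallocated, fixed size
    std::atomic<std::size_t> head_{0};    // written by producer
    std::atomic<std::size_t> tail_{0};    // written by consumer
public:
    bool push(const DataMessage& m) {     // producer thread
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h - tail_.load(std::memory_order_acquire) == N)
            return false;                 // full: refuse, never block
        buf_[h % N] = m;                  // copy by value, no allocation
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
    bool pop(DataMessage& out) {          // consumer thread
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (head_.load(std::memory_order_acquire) == t)
            return false;                 // empty
        out = buf_[t % N];
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
};
```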


Re: [LAD] opinions wanted on a couple of architecture questions

2011-12-09 Thread Iain Duncan
On Fri, Dec 9, 2011 at 12:12 PM, Paul Davis wrote:

> On Fri, Dec 9, 2011 at 3:07 PM, Iain Duncan 
> wrote:
>
> > Oops, perhaps I missed the boat on the ringbuffer, is the whole point
> that
> > with a ring buffer the memory for the max number of messages is
> > pre-allocated and we can just call by value in and out of the ringbuffer
> > without any memory management?
>
> correct.
>

Thanks. Does it make any difference who owns the ringbuffer? Or is it cool
to just make them first from main and pass references in to the
constructors of whatever will use them?

iain


[LAD] easiest way to serialize messages for sending over a ringbuffer?

2011-12-12 Thread Iain Duncan
Hi everyone, I have a fairly simple, known-size message format: it's just
five numbers, either ints or floats, wrapped in a structure. I guess I need
some kind of serialization to send this over a jack ringbuffer, but I've
zero experience with serialization in C++. Can anyone tell me what the
easiest or best way to do this is? Should I use the boost serialization
library? FWIW, I would like to eventually add osc messaging in too if that
affects the best choice.

Thanks!
Iain


Re: [LAD] easiest way to serialize messages for sending over a ringbuffer?

2011-12-12 Thread Iain Duncan
On Mon, Dec 12, 2011 at 7:31 PM, Harry van Haaren wrote:

> On Tue, Dec 13, 2011 at 3:24 AM, Iain Duncan wrote:
>
>> I guess I need some kind of serialization to send this over a jack
>> ringbuffer, but I've zero experience with serialization in C++.
>>
>
> I don't really understand what your asking here, you want to be able to
> set the order of the messages in the ringbuffer?
>


> Its a FIFO queue, as in First In - First Out. So the order of the messages
> is the same as you write them...
> Perhaps I'm misunderstanding you :S
>

Yup, what I'm talking about is being able to put a data structure onto the
ring buffer. It needs to be castable to a const char *, so the structure
needs a way to be converted to a string. I can't just put my DataMessage
structure on there, because there is no automatic conversion from my own
struct to a string. I know how to do this in Python, but not sure what the
best way to do it in C or C++ is.

thanks
Iain


Re: [LAD] easiest way to serialize messages for sending over a ringbuffer?

2011-12-13 Thread Iain Duncan
On Tue, Dec 13, 2011 at 5:53 AM, Paul Davis wrote:

> On Mon, Dec 12, 2011 at 11:44 PM, Iain Duncan 
> wrote:
>
> > Yup, what I'm talking about is being able to put a data structure on to
> the
> > ring buffer. It needs be castable to a  const *char, so the structure
> needs
> > a way to be converted to a string.
>
> these two statements are not related. in an awful lot of C code,
> "pointer to char" means "pointer". in newer better C code, one uses
> void*. in newer, better code than that, one doesn't use raw pointers
> much at all.
>
> are you working in C or C++ ?
>

Thanks Paul. I'm working in C++, but I'm using the jack C api, which from
the docs I see has a signature for
size_t jack_ringbuffer_write ( jack_ringbuffer_t * rb,  const char * src,
size_t cnt )

My DataMessage structure is just a simple C structure for now. Is there a
recommended way of writing it to the ringbuffer given that I want to do
something like this:

void MessageQueue::push( DataMessage msg ){
    // write to the ring buffer, converting DataMessage to a string
    unsigned int written = jack_ringbuffer_write( mRingBuffer,
        (char *) &msg, sizeof(DataMessage) );
    // etc
}


Thanks
Iain


Re: [LAD] easiest way to serialize messages for sending over a ringbuffer?

2011-12-13 Thread Iain Duncan
> My DataMessage structure is just a simple C structure for now. Is there a
> recommended way of writing it to the ringbuffer given that I want to do
> something like this:
>
> void MessageQueue::push( DataMessage msg ){
>     // write to the ring buffer, converting DataMessage to a string
>     unsigned int written = jack_ringbuffer_write( mRingBuffer,
>         (char *) &msg, sizeof(DataMessage) );
>     // etc
> }

> as long as the struct is POD (Plain Old Data - no embedded pointers,
> etc), this will work fine.
>
> however, you need to keep in mind that under some easily encounterable
> circumstances, the write may not return sizeof(DataMessage). this is a
> very easy mistake to make with ringbuffers (ditto for read).
>
> it can be avoided via careful sizing of the ringbuffer and always
> read/writing "whole objects" OR by carefully checking the results of
> read/write.
>

Thanks for the tips. What should one do if one detects a partial write? Is
it best to have integrity checks on both ends of the ringbuffer?

the jack ringbuffer design is particular bad in this respect because
> it can only hold size-1 bytes (where size is its actual size).
>

Does this mean a good way to initialize it is to make the ringbuffer some
multiple of the sizeof(myMessageStruct) plus one byte?

Thanks again for all the help.
iain


Re: [LAD] easiest way to serialize messages for sending over a ringbuffer?

2011-12-14 Thread Iain Duncan
> > Thanks for the tips. What should one do if one detects a partial
> > write? Is it best to have integrity checks on both ends of the ringbuffer?
>
> Avoid them.  There should be some calls for checkRingBufferWriteSpace(),
> make sure it is large enough to cater for your object size. You still have
> a few hoops and jumps to go through if the buffer is filling up but it is
> definitely going to be easier that catering for partial writes. Consider
> also logging messages when it is full and if it happens (too much or in my
> opinion at all) then look into why the reader is going too slow. If it is a
> general issue then you need to rearchitect what you are doing, if it is
> just a transient issue then bang more capacity on the ringbuffer.
>

Thanks Nick, so if I understand what you're saying, the producer should
check for sizeof(MyStruct) space before writing, and just delay writing
that message if there isn't enough space. I guess that's where I would log
that a write failed, then?

thanks
iain
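The discipline described above — check space, then write only whole
objects — can be sketched against a jack-style byte ringbuffer. The ring
type below is a minimal stand-in (the real calls are
jack_ringbuffer_write_space() and jack_ringbuffer_write()); like jack's,
it can hold at most size-1 bytes, which is why sizing it as a multiple of
sizeof(msg) plus one byte works out.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstring>

// Minimal jack-style byte ringbuffer (single-threaded sketch): usable
// capacity is size-1 bytes, mimicking the jack ringbuffer's behaviour.
struct ByteRing {
    char*       buf;
    std::size_t size;          // total allocation; usable is size-1
    std::size_t w = 0, r = 0;  // write and read indices

    explicit ByteRing(std::size_t sz) : buf(new char[sz]), size(sz) {}
    ByteRing(const ByteRing&) = delete;
    ~ByteRing() { delete[] buf; }

    std::size_t write_space() const {        // bytes writable right now
        return (r + size - w - 1) % size;
    }
    void write(const char* src, std::size_t n) {  // caller checked space
        for (std::size_t i = 0; i < n; ++i) { buf[w] = src[i]; w = (w + 1) % size; }
    }
    void read(char* dst, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) { dst[i] = buf[r]; r = (r + 1) % size; }
    }
};

struct DataMessage { int a, b; float c; };

// Never issue a partial write: if a whole message doesn't fit, drop it and
// log, rather than writing a torn message the reader can't decode.
bool push_message(ByteRing& rb, const DataMessage& m) {
    if (rb.write_space() < sizeof(DataMessage)) {
        std::fprintf(stderr, "message dropped: ringbuffer full\n");
        return false;
    }
    rb.write(reinterpret_cast<const char*>(&m), sizeof(DataMessage));
    return true;
}
```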


[LAD] how to store deferred events in a real time audio app?

2011-12-19 Thread Iain Duncan
Hi, I'm sure others have tackled this and have some wisdom to share. My
project is principally a monosynth step sequencer. This is nice and simple
to do in real time because resolution is very limited and there can be only
one note per track. So step sequenced note data is stored in simple
multi-dimensional arrays, making reading and writing very easy, and
messaging simple between audio and gui threads.

However, I would like to add the ability for the user to send a message and
have it get executed later, where later gets figured out by the engine ( ie
on the top of the next 8 bar phrase ). To do this, I need some way of
storing deferred events and having the engine check on each step whether
there were any deferred events stored for 'now'. I can think of a few ways
to do this, and all of them raise red flags for a real time thread.

- I could use a hash table, hashed by time, with a linked list of all the
events for that time. The engine looks up the current time and gets all the
events. I don't know much about hashing so I'd prob just use Boost, is that
a bad idea?

- I could make a linked list of all deferred events and iterate through
them checking if time is now. There wouldn't be any hashing, but maybe this
list would be really big.

Anyone have any suggestions for how to safely do the above or some better
alternative?

thanks!
iain


Re: [LAD] how to store deferred events in a real time audio app?

2011-12-19 Thread Iain Duncan
I guess what I'm really interested in is how others have anchored events to
a timeline. Another way of doing it would be to trade RAM and a bit of
accuracy for speed of execution by having an array that is PartsPerQuarter
* max_length_in_quarters long, and using that array to hold events,
allowing more than one event to be anchored to the same place by using a
linked list.
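A sketch of that PPQ-array idea (all names illustrative): each tick gets a
preallocated slot holding every event anchored there, so the audio thread's
lookup for 'now' is a single index operation, and wrapping the index gives
looping for free. Note that push_back on a vector can allocate, so in a
real-time app the add() side would run on the control thread:

```cpp
#include <cassert>
#include <vector>

struct Event { int type; int value; };

// One slot per tick over the whole loop; a slot holds every event anchored
// at that tick (the "linked list" role, played here by a vector).
class Timeline {
    std::vector<std::vector<Event>> slots_;  // preallocated, one per tick
public:
    Timeline(int ppq, int max_quarters) : slots_(ppq * max_quarters) {}

    // control thread: anchor an event at an absolute tick (may allocate)
    void add(int tick, Event e) { slots_[tick % slots_.size()].push_back(e); }

    // audio thread: everything stored for 'now'; wrapping gives looping
    const std::vector<Event>& at(int tick) const {
        return slots_[tick % slots_.size()];
    }
};
```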

Feedback welcome. =-/

iain


Re: [LAD] how to store deferred events in a real time audio app?

2011-12-20 Thread Iain Duncan
Either I'm misunderstanding the answers, or I have not done a good job of
asking my question.

In more detail, here's what I'm curious about how people have done:

The sequencer has a clock; it knows what time 'now' is in bars:beats:ticks.
Events are stored somehow, encoded to a time in bars:beats:ticks. These may
be added on the fly to any time, and the sequencer must be able to hop
around non-linearly in time ( looping, jumping to marks, etc). How does the
sequencer engine find events stored for 'now', quickly enough that we can
be somewhat deterministic about making sure it can get all the events for
any time? ( 'now' may even be different on a track by track basis ).

Does it look up 'now' in some kind of hashed pile of events, where events
are keyed by a time? This makes me worry about hashing algorithms, but
would sure be the easiest to implement.

Is there some kind of master timeline array that events get attached to?
This seems like it would be quick to seek to a point, but use up a lot of
RAM for the timeline array, and I'm not sure how one would handle unlimited
length timelines.

I'm not clear what the above has to do with communicating between threads
using ringbuffers; I'm just talking about how the audio callback stores
events for a given time and then finds them quickly at that time. But maybe
I'm totally missing something here.

Would love to hear in pseudo code how others have tackled storing and
finding events in time.
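In compilable pseudo-code, one answer to the lookup question: keep events
in an ordered multimap keyed by absolute tick, so finding everything stored
for 'now' is an equal_range() query rather than a scan of the whole list,
and it keeps working after non-linear jumps because the lookup depends only
on the key. (Caveat: std::multimap insertion allocates, so in a real-time
app inserts would happen on the control thread, with the audio thread doing
only lookups. All names here are illustrative.)

```cpp
#include <cassert>
#include <map>

struct Event { int type; int value; };
using Schedule = std::multimap<long, Event>;   // key: time in absolute ticks

// Copy out every event anchored at 'now', up to 'max' of them.
int events_at(const Schedule& s, long now, Event* out, int max) {
    int n = 0;
    auto range = s.equal_range(now);           // O(log n), no full scan
    for (auto it = range.first; it != range.second && n < max; ++it)
        out[n++] = it->second;
    return n;
}
```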

thanks!
Iain


[LAD] handling midi input in a jack app?

2011-12-28 Thread Iain Duncan
Hey folks, what is the easiest way to deal with midi input in a jack app?
I'm confused by the difference between jack midi and alsa midi, because I have
two midi inputs, one is a usb input, so it appears at a low level as an
alsa device, but the other is the midi input on a firewire unit, and it
appears as a jack midi device. I'd like to make sure that whatever I do is
easy to port to other systems. Does it make sense to use portmidi or rtmidi
to get input or should I stick to the jack api entirely?

thanks
Iain


Re: [LAD] handling midi input in a jack app?

2011-12-28 Thread Iain Duncan
On Wed, Dec 28, 2011 at 11:35 AM, Paul Davis wrote:

> On Wed, Dec 28, 2011 at 1:46 PM, Iain Duncan 
> wrote:
> > Hey folks, what is the easiest way to deal with midi input in a jack app?
> > I'm confused by the difference in jack midi and alsa midi, because I have
> > two midi inputs, one is a usb input, so it appears at a low level as an
> alsa
> > device, but the other is the midi input on a firewire unit, and it
> appears
> > as a jack midi device. I'd like to make sure that whatever I do is easy
> to
> > port to other systems. Does it make sense to use portmidi or rtmidi to
> get
> > input or should I stick to the jack api entirely?
>
> JACK's MIDI API is substantially different from all others in that you
> receive MIDI data in the same thread (and same callback) that you
> receive audio data. In this sense its much more like the APIs used by
> plugin APIs to retrieve MIDI for use during the plugin API's
> equivalent of JACK's process() call.
>
> None of the other APIs that you've mentioned have this property, and
> nor do any of the Windows MIDI APIs or CoreMIDI.
>
> this means that you face opposing issues depending on which API you
> choose to use:
>
>  * if you use JACK:
>- MIDI data is trivially available to alter synthesis done
> during process()
>- MIDI data needs to be moved across thread boundaries to be
> useful outside of process()
>
>  * if you use ALSA, portmidi, rtmidi, CoreMIDI or anything else
>   - MIDI data to be used for synthesis has to be moved across
> thread boundaries
>   - MIDI data used for other purposes can often be used in the
> same thread it was received in,
> though not always.
>
> this issue is far more substantive than the question of whether you
> can access a firewire-based MIDI port or a USB-based MIDI port.
>
>
Thanks for that explanation. In my case, I believe I will have two kinds of
midi input: one best served by the first set of tradeoffs and the other by
the second. Namely, the user may be playing a synth, or the midi input may
be used to control the app. Is it reasonable to use both jack midi and a
non-jack midi api in the same app, with different midi input devices?

thanks
Iain

> --p
>


Re: [LAD] handling midi input in a jack app?

2011-12-28 Thread Iain Duncan
On Wed, Dec 28, 2011 at 12:04 PM, David Robillard  wrote:

> On Wed, 2011-12-28 at 14:35 -0500, Paul Davis wrote:
> [...]
> > None of the other APIs that you've mentioned have this property, and
> > nor do any of the Windows MIDI APIs or CoreMIDI.
> >
> > this means that you face opposing issues depending on which API you
> > choose to use:
> >
> >   * if you use JACK:
> > - MIDI data is trivially available to alter synthesis done
> > during process()
> > - MIDI data needs to be moved across thread boundaries to be
> > useful outside of process()
> >
> >   * if you use ALSA, portmidi, rtmidi, CoreMIDI or anything else
> >- MIDI data to be used for synthesis has to be moved across
> > thread boundaries
> >- MIDI data used for other purposes can often be used in the
> > same thread it was received in,
> >  though not always.
>
> Also, in both cases, there is a strong and precise correlation between
> MIDI time stamps and audio time for Jack MIDI, and not for the other
> APIs (which may be useful even if you do your processing in another
> thread).
>

Thanks for all the input everyone. It sounds like my best plan is for the
jack build to use jack midi in the audio process callback and send it to
the rest of the app over a ringbuffer. A few things I hope to achieve:

- keep all jack dependencies, with the exception of the jack ringbuffer, in
one file. So far so good: the only use of jack right now is in main.cpp and
my wrapper class for the ring buffer. Everything else communicates using an
internal message format that is non-midi on purpose; I want to make no
assumptions about anyone using midi.
- keep my porting task to a minimum in case I want to get this to run on a
non-jack platform ( Raspberry Pi for instance, no idea if jack runs on ARM? )
- allow handing off input data to python processes so that users can easily
write algorithmic transforms. These *could* receive midi input directly, or
they could get midi input from jack:
  jack_midi_in -> passed to engine -> ringbuffered to nrt-controller ->
controller maps to correct child process -> socket/pipes/?? to python
processes

Basically, I want power users to be able to write input plugins with a
really simple api in python, for cases where latency is not an issue
(step-sequencing). So I need to make sure that getting messages to Python
is as quick as possible, but also as reliable and manageable as possible.

Appreciate all the input everyone.
Iain
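The flow sketched above — fixed-size messages pushed from the RT side into a ringbuffer and drained by a non-RT controller — can be illustrated in Python, the eventual plugin language here. `SPSCRing` and the message fields are illustrative stand-ins, not code from this thread:

```python
from collections import namedtuple

# illustrative stand-in for the app's raw midi message (status byte,
# two data bytes, arrival time in samples)
MidiMessage = namedtuple("MidiMessage", "status data_1 data_2 time")

class SPSCRing:
    """Bounded single-producer/single-consumer queue. The producer only
    advances the write index and the consumer only advances the read
    index, so neither side needs a lock -- the same idea as jack's
    ringbuffer."""

    def __init__(self, capacity):
        self._buf = [None] * (capacity + 1)  # one slot kept empty
        self._read = 0
        self._write = 0

    def push(self, msg):
        nxt = (self._write + 1) % len(self._buf)
        if nxt == self._read:
            return False  # full: drop rather than block in the RT thread
        self._buf[self._write] = msg
        self._write = nxt
        return True

    def try_pop(self):
        if self._read == self._write:
            return None  # empty
        msg = self._buf[self._read]
        self._read = (self._read + 1) % len(self._buf)
        return msg

# "jack_midi_in -> engine" side: push fixed-size messages
ring = SPSCRing(8)
ring.push(MidiMessage(0x90, 60, 100, 0))
ring.push(MidiMessage(0x80, 60, 0, 4410))

# "nrt-controller" side: drain whatever has arrived
drained = []
msg = ring.try_pop()
while msg is not None:
    drained.append(msg)
    msg = ring.try_pop()
```

The key property, as with jack's ringbuffer, is that each index is written by exactly one thread, so neither side ever blocks.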


[LAD] segfaulting using my ringbuffer queue, stuck!

2011-12-30 Thread Iain Duncan
Hi folks, wondering if anyone might be able to point me at the way to sort
this out: my latest queue is segfaulting when written to or read from,
while the rest ( which are supposed to be identical except for message type
) are working great, and have been tested fine in full-stack runs. I'm
banging my head on the desk at this point trying to find the difference,
but perhaps others have seen similar behaviour?

I've made a template class for a queue that internally uses a jack
ringbuffer. I have four of them: two for my data message struct, one for a
csound note message struct, and a new one for my raw midi message struct,
which looks like this:

struct MidiMessage {
   char status;
   char data_1;
   char data_2;
   int time; // time in samples when midi message arrived
};

I instantiate them before anything else and pass them into the components
that need them in their constructors:

// note: the template arguments below, and the DataMessage / NoteMessage
// names, are reconstructed -- the archive stripped the angle brackets
MessageQueue<DataMessage> *toEngineDataQueue = new MessageQueue<DataMessage>();
MessageQueue<DataMessage> *fromEngineDataQueue = new MessageQueue<DataMessage>();
MessageQueue<NoteMessage> *toEngineNoteQueue = new MessageQueue<NoteMessage>();
MessageQueue<MidiMessage> *fromEngineMidiQueue = new MessageQueue<MidiMessage>();

All instantiation is working fine, the app starts up, and the first three
queues are working. As soon as I either write to or read from the midi
queue, I segfault. Not sure how to debug this, hints welcome! Below is the
queue code in case anyone wants to look at it. I can't see anything wrong,
but maybe I've been doing something wrong and just gotten lucky so far??

thanks
Iain

template <typename Type>
class MessageQueue {

   private:
   // pointer to the jack ring buffer
   jack_ringbuffer_t *mRingBuffer;
   int mQueueLength;

   public:
MessageQueue();
~MessageQueue();

   // put a msg on the queue
   void push( Type msg );
   // store a message in msg, returns true if a message was read
   bool tryPop( Type *msg );
};

template <typename Type>
MessageQueue<Type>::MessageQueue(){

   mQueueLength = DEFAULT_QUEUE_LENGTH;
   // create our ringbuffer, sized by Type
   mRingBuffer = jack_ringbuffer_create( mQueueLength * sizeof(Type) );

   // lock the buffer into memory, this is *NOT* realtime safe
   int errorLocking = jack_ringbuffer_mlock(mRingBuffer);
   if( errorLocking ){
 std::cout << "MessageQueue - Error locking memory when creating ringbuffer\n";
   // XXX raise an exception or something?? how do we fail here??
   }

}

template <typename Type>
MessageQueue<Type>::~MessageQueue(){
   cout << "MessageQueue destructor\n";
   // free the memory allocated for the ring buffer
   jack_ringbuffer_free( mRingBuffer );
}

template <typename Type>
void MessageQueue<Type>::push( Type msg ){
   // write the message to the ring buffer as raw bytes
   size_t written = jack_ringbuffer_write( mRingBuffer, (const char *) &msg, sizeof(Type) );

   // XXX: what to do if it fails anyway??
   if( written < sizeof(Type) ){
 cout << "Error, unable to write full message to ring buffer\n";
 // do something else here yo!
   }
}

// if a message is on the queue, get it
// returns true if it got a message
template <typename Type>
bool MessageQueue<Type>::tryPop( Type *msgBuf ){

   // if there is a message on the ring buffer, copy contents into msgBuf
   if( jack_ringbuffer_read_space( mRingBuffer ) >= sizeof(Type) ){
  jack_ringbuffer_read( mRingBuffer, (char *)msgBuf, sizeof(Type) );
  // return true because a msg was read
  return true;
   }else{
  // return false, no msg read
  return false;
   }
}
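One thing worth noting about this design: push() and tryPop() copy messages byte-for-byte through the ring buffer, which is only safe because the message structs are flat, fixed-size records with no pointers. The same property makes them trivial to serialize later. A quick Python sanity check of the MidiMessage layout (the struct format string, with its endianness and pad byte, is an assumption, not something specified in the thread):

```python
import struct

# one plausible packed encoding of the MidiMessage struct above:
# little-endian, three signed chars, one explicit pad byte, one int
# (the exact padding/endianness is an assumption, not from the thread)
FMT = "<3bxi"

def pack_midi(status, data_1, data_2, time):
    return struct.pack(FMT, status, data_1, data_2, time)

def unpack_midi(raw):
    return struct.unpack(FMT, raw)

# 0x90 (note-on, channel 1) is -112 when stored in a signed char
raw = pack_midi(-112, 60, 100, 4410)
assert len(raw) == struct.calcsize(FMT)  # flat and fixed-size: 8 bytes
assert unpack_midi(raw) == (-112, 60, 100, 4410)
```

Types with pointers, virtual functions, or heap-owning members would not survive the byte-wise copy; in C++ a `static_assert` on `std::is_trivially_copyable` would enforce this at compile time.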


Re: [LAD] segfaulting using my ringbuffer queue, stuck!

2011-12-30 Thread Iain Duncan
Found the problem, of course: bad pointer handling. But any feedback on how
my code looks would still be welcome.

Thanks
Iain

On Fri, Dec 30, 2011 at 11:16 AM, Iain Duncan wrote:

> Hi folks, wondering if anyone might be able to point me at the way to sort
> this out, my latest queue is segfaulting when written to or read from,
> while the rest ( which are supposed to be identical except for message type
> ) are working great, and have been tested with full stack runs fine. I'm
> banging my head on the desk at this point trying to find the difference,
> but perhaps others have seen similar behaviour?
>
> [rest of the quoted message, including the queue code, snipped]
>


[LAD] memory allocation for the real time thread, opinions wanted

2012-02-25 Thread Iain Duncan
Hi everyone, hoping to get opinions from the gurus on here, who have been
*incredibly* helpful in getting my project to where it's at. A million thank
yous!

Ok, the situation so far, which is working well:
- the app uses a generalized 'message' structure; all the different forms
of messages fit into this structure by having it act like a union ( ie, a
message always takes up the same amount of space, no matter the type )
- messages do not contain pointers, in order that they be simple to send
and receive over network clients, hardware, etc
- there are ringbuffers between my audio thread and non-real-time thread

What I'm tackling:
- I want to add the capability for messages to have deferred execution, so
they can be sent with a 'process at 4:3:1' kind of thing
- I think the best tradeoff for my app so far will be to use a hybrid of a
timeline array and a linked list. there will be coarse time values stored
by raw array indexing, speeding up lookup, and fine time values will be
stored in the messages themselves
- so, when the engine is processing deferred messages, it will go and check
timelineArray for all messages at bar 1: beat 1, which will be a linked list
of all the messages with start time between bar 1: beat 1 and bar 1: beat 2
( time resolution may change, this is just for example )
- then the engine iterates on every tick through that list of messages.
This way, iteration on every tick is limited to a reasonably sized linked
list and I can play with the cpu vs data storage equation by simply
changing the resolution of the timeline array

Issues:
- I need to allocate memory for new linked list items in the realtime thread
- the timeline array needs to be able to grow in the real time thread

Thoughts:
- I don't need to get it perfect *right now* but I need to be able to
change it to Really Good later
- I checked out some resources, like the paper Design Patterns for Real-Time
Computer Music Systems (
http://www.cs.cmu.edu/~rbd/doc/icmc2005workshop/real-time-systems-concepts-design-patterns.pdf
) and the supercollider book chapter, and see there are a lot of options
- I could pre-allocate a giant list of messages and pluck the data off that
list when I need to make a new one
- I could pre-allocate a block of memory and allocate off that
- I could allocate in the non-realtime thread and then pass memory over in
queues.

Would love to hear opinions on how others would solve these, including
tradeoffs of each.

thanks!
iain
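Of the options listed, the pre-allocated pool is the most common answer: grab everything up front in the non-RT thread, and 'allocating' in the RT thread becomes just popping a free list. A minimal Python sketch of the scheme (names are illustrative; a real C++ version would use an intrusive linked free list rather than Python lists):

```python
class MessagePool:
    """Pre-allocated free list: every message the engine will ever use is
    created up front, so the RT-thread 'allocation' is just a pop."""

    def __init__(self, size):
        # in C++ this would be one malloc'd block threaded into an
        # intrusive free list; dicts in a Python list stand in here
        self._free = [{"type": None, "time": 0, "data": None}
                      for _ in range(size)]

    def acquire(self):
        # O(1); returns None when the pool is exhausted so the caller
        # can degrade gracefully instead of blocking or allocating
        return self._free.pop() if self._free else None

    def release(self, msg):
        msg["type"] = msg["data"] = None  # scrub before reuse
        self._free.append(msg)

pool = MessagePool(4)
held = [pool.acquire() for _ in range(4)]
exhausted = pool.acquire()   # None: pool is empty, nothing blocked
for m in held:
    pool.release(m)          # recycled, never freed
```

The tradeoff is fixed capacity: the pool has to be sized for the worst case, and "the timeline array needs to grow" turns into swapping in a bigger pre-allocated array from the non-RT thread.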


Re: [LAD] memory allocation for the real time thread, opinions wanted

2012-02-25 Thread Iain Duncan
On Sat, Feb 25, 2012 at 2:25 PM, Iain Duncan wrote:

> Hi everyone, hoping to get opinions from gurus on here, who have been
> *incredibly* helpful in getting my project to where its at. A million thank
> yous!
>
>
> [rest of the quoted message snipped]
>
Also, any thoughts on 'good enough to start with' techniques welcome!

iain


Re: [LAD] Tutorial for programming with JACK

2012-02-25 Thread Iain Duncan
Just thought I'd mention something I found, which has been working well.
PythonQt lets you embed a Python interpreter, with bi-directional data
passing, in a C++ Qt app. It's a bit under-documented, but once you get it
working it's a really nice way to hand off non-rt-critical tasks to Python
from within a C++ app.

I have no affiliation with it, I just think it's dead cool because I didn't
have to tackle inter-process communication to do high level data mangling
in python!
http://pythonqt.sourceforge.net/

iain


Re: [LAD] memory allocation for the real time thread, opinions wanted

2012-02-29 Thread Iain Duncan
Thanks for all the comments everyone!
iain

On Tue, Feb 28, 2012 at 1:34 PM, Paul Coccoli  wrote:

> On Mon, Feb 27, 2012 at 8:43 PM, James Morris  wrote:
> > On Mon, 27 Feb 2012 20:01:18 -0500
> > Paul Coccoli  wrote:
> [Mass snippage]
> >> Why not just use 2 ringbuffers: one to send pointers to the RT thread,
> >> and a second to send them back to the low prio thread (so it can free
> >> them).  You probably need a semaphore for the return ringbuffer, but
> >> that should be RT-safe.
> >
> > That's what I thought... would be better for someone who is new to real
> > time threads and memory allocation... and is what I decided on... minus
> > the semaphore.
> >
> > So why is a semaphore needed? If the RT thread only sends an item back
> > when it absolutely no longer will use it?
>
> I suppose the semaphore isn't strictly necessary, but I think it's an
> easy way to tell the main thread that it has a message to process.
> Although one that probably doesn't integrate well with most
> main/non-RT threads.
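The two-ringbuffer scheme described above can be sketched as follows. Deques stand in for the lock-free ringbuffers; the point is the ownership discipline: the non-RT side is the only one that allocates and reclaims, and the RT side only moves objects between the queues:

```python
from collections import deque

# deques stand in for the two ringbuffers of the scheme above; the real
# thing would be lock-free SPSC ringbuffers, one in each direction
to_rt = deque()    # non-RT -> RT: freshly allocated events
from_rt = deque()  # RT -> non-RT: events the RT thread is finished with

def non_rt_tick(event):
    """Non-RT side: the only place that allocates or reclaims."""
    to_rt.append(event)  # hand a newly built event to the RT thread
    reclaimed = []
    while from_rt:       # this is where free()/reuse would happen
        reclaimed.append(from_rt.popleft())
    return reclaimed

def rt_tick():
    """RT side: never allocates or frees, only moves events between queues."""
    while to_rt:
        ev = to_rt.popleft()
        # ... use ev for synthesis during this process() cycle ...
        from_rt.append(ev)

non_rt_tick({"note": 60})         # nothing to reclaim yet
rt_tick()                         # RT consumes the event, sends it back
done = non_rt_tick({"note": 62})  # reclaims the first event
```

As discussed, the semaphore is only needed if the non-RT side should wake up promptly when the return queue is non-empty, rather than polling it on its next tick.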