Actually, there is no need to use a clock for every scheduled LISP
function. You can also maintain a separate scheduler, which is just a
priority queue for callback functions. In C++, you could use a
std::map<double, callback_type>. "double" is the desired (future) system
time, which you can get with "clock_getsystimeafter".
Then you create a *single* clock in the setup function *) with a tick
method that reschedules itself periodically (e.g. clock_delay(x, 1) ). In
the tick method, you get the current logical time with
"clock_getlogicaltime", walk over the priority queue and dispatch + remove
all items whose time is equal to or lower than the current time. You have to be careful about
possible recursion, though, because calling a scheduled LISP function might
itself schedule another function. In the case of std::map, however, it is
safe, because insertion doesn't invalidate iterators.
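A minimal self-contained sketch of this dispatch loop, outside of Pd (queue_type, dispatch, and the timestamps are illustrative, not part of the Pd API; I use std::multimap rather than std::map so that two callbacks may share the same timestamp):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <vector>

// key: absolute logical time (e.g. in ms), value: the scheduled callback.
// std::multimap keeps the keys sorted and allows duplicate timestamps.
using queue_type = std::multimap<double, std::function<void()>>;

// Dispatch and remove every item whose time is equal to or lower than 'now'.
// A running callback may insert new items -- safe, since insertion into a
// std::(multi)map doesn't invalidate iterators -- but note that a callback
// which keeps inserting items at or before 'now' would loop forever.
void dispatch(queue_type &queue, double now) {
    while (!queue.empty() && queue.begin()->first <= now) {
        auto fn = std::move(queue.begin()->second);
        queue.erase(queue.begin()); // remove before calling, so fn may reschedule
        fn();
    }
}
```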
Some more ideas:
Personally, I like to have both one-shot functions and repeated
functions, being able to change the time/interval and also cancel them. For
this, it is useful that the API returns some kind of identifier for each
callback (e.g. an integer ID). This is what JavaScript does with
"setTimeout"/"clearTimeout" and "setInterval"/"clearInterval". I use a very
similar system for the Lua scripting API of my 2D game engine, but I also
have "resetTimeout" and "resetInterval" functions.
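Here is a rough self-contained sketch of such an ID-based API (timer_scheduler and all its method names are invented for illustration; a real implementation would also keep a time-ordered index instead of scanning every timer on each tick):

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <map>

// One-shot timers identified by an integer ID, in the spirit of
// JavaScript's setTimeout/clearTimeout (all names here are made up).
struct timer_scheduler {
    struct entry { double time; std::function<void()> fn; };
    std::map<uint64_t, entry> timers; // ID -> pending timer
    uint64_t next_id = 0;

    // schedule fn at absolute time 'when', return its ID
    uint64_t set_timeout(double when, std::function<void()> fn) {
        timers[next_id] = entry{when, std::move(fn)};
        return next_id++;
    }
    // cancel a pending timer; returns false if the ID is unknown
    bool clear_timeout(uint64_t id) { return timers.erase(id) == 1; }
    // move a pending timer to a new absolute time ("resetTimeout")
    bool reset_timeout(uint64_t id, double when) {
        auto it = timers.find(id);
        if (it == timers.end()) return false;
        it->second.time = when;
        return true;
    }
    // fire and remove every timer that is due; re-scan after each call,
    // so a callback may safely set/clear/reset other timers
    void dispatch(double now) {
        for (;;) {
            auto due = timers.end();
            for (auto it = timers.begin(); it != timers.end(); ++it)
                if (it->second.time <= now) { due = it; break; }
            if (due == timers.end()) break;
            auto fn = std::move(due->second.fn);
            timers.erase(due);
            fn();
        }
    }
};
```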
On the other hand, you could also have a look at the scheduling API of
SuperCollider, which is a bit different: if a routine yields a number
N, it means that the routine will be scheduled again after N seconds.
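A toy version of that yield-style scheduling might look like this (routine, sched, and run_until are invented names; here a routine simply returns the delay until its next call, or a negative number to stop):

```cpp
#include <cassert>
#include <functional>
#include <map>

// A "routine" returns the delay (in seconds) until its next call,
// or a negative number to stop -- loosely modeled on SuperCollider.
using routine = std::function<double()>;
using sched = std::multimap<double, routine>; // absolute time -> routine

// run everything scheduled up to 'end_time'
void run_until(sched &q, double end_time) {
    while (!q.empty() && q.begin()->first <= end_time) {
        double now = q.begin()->first;
        routine r = std::move(q.begin()->second);
        q.erase(q.begin());
        double next = r();
        if (next >= 0.0)
            q.emplace(now + next, std::move(r)); // yield N => run again in N seconds
    }
}
```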
Generally, having periodic timers is very convenient in a musical
environment :-)
Christof
*) Don't just store the clock in a global variable, because Pd can have
several instances. Instead, put the clock in a struct which you allocate in
the setup function. The clock gets this struct as the owner.
typedef struct _myscheduler { t_clock *clock; } t_myscheduler;
// this would also be a good place to store the priority queue
t_myscheduler *x = (t_myscheduler *)getbytes(sizeof(t_myscheduler));
t_clock *clock = clock_new(x, (t_method)myscheduler_tick);
x->clock = clock;
On 25.10.2020 02:02, Iain Duncan wrote:
Thanks Christof, that's very helpful.
iain
On Sat, Oct 24, 2020 at 5:53 PM Christof Ressi <[email protected]>
wrote:
But if you're still worried, creating a pool of objects of the same size
is actually quite easy: just use a free list
(https://en.wikipedia.org/wiki/Free_list).
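For illustration, a minimal fixed-size pool built on such a free list might look like this (a generic sketch, not Pd code; allocate() hands out raw storage, so the caller would placement-new into it and run the destructor before returning it):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-size object pool backed by a free list: freed slots are chained
// through the slot memory itself, so allocate/deallocate are O(1) and
// never touch the system allocator after construction.
template <typename T>
class pool {
    union slot {
        slot *next;                                  // valid while on the free list
        alignas(T) unsigned char storage[sizeof(T)]; // valid while allocated
    };
    std::vector<slot> slots;
    slot *free_head = nullptr;
public:
    explicit pool(std::size_t n) : slots(n) {
        for (std::size_t i = 0; i < n; ++i) { // push every slot onto the free list
            slots[i].next = free_head;
            free_head = &slots[i];
        }
    }
    // returns raw storage (no constructor run), or nullptr if exhausted
    T *allocate() {
        if (!free_head) return nullptr;
        slot *s = free_head;
        free_head = s->next;
        return reinterpret_cast<T *>(s->storage);
    }
    // returns the slot to the free list (caller must destroy the object first)
    void deallocate(T *p) {
        slot *s = reinterpret_cast<slot *>(p);
        s->next = free_head;
        free_head = s;
    }
};
```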
Christof
On 25.10.2020 02:45, Christof Ressi wrote:
A) Am I right, both about being bad, and about clock pre-allocation and
pooling being a decent solution?
B) Does anyone have tips on how one should implement and use said clock
pool?
ad A), basically yes, but in Pd you can get away with it. Pd's scheduler
doesn't run in the actual audio callback (unless you run Pd in "callback"
mode) and is more tolerant towards operations that are not exactly realtime
friendly (e.g. memory allocation, file IO, firing lots of messages, etc.).
The audio callback and scheduler thread exchange audio samples via a
lockfree ringbuffer. The "delay" parameter actually sets the size of this
ringbuffer, and a larger size allows it to absorb larger CPU spikes.
In practice, allocating a small struct is pretty fast even with the
standard memory allocator, so in the case of Pd it's nothing to worry
about. In Pd land, external authors don't really care too much about
realtime safety, simply because Pd itself doesn't either.
---
Now, in SuperCollider things are different. Scsynth and Supernova are
quite strict regarding realtime safety because DSP runs in the audio
callback. In fact, they use a special realtime allocator in case a plugin
needs to allocate memory in the audio thread. SuperCollider also has a
separate non-realtime thread where you would execute asynchronous commands,
like loading a soundfile into a buffer.
Finally, all sequencing and scheduling runs in a different program
(sclang). Sclang sends OSC bundles to scsynth, with timestamps in the near
future. Conceptually, this is a bit similar to Pd's ringbuffer scheduler,
with the difference that DSP itself never blocks. If Sclang blocks, OSC
messages will simply arrive late at the Server.
Christof
On 25.10.2020 02:10, Iain Duncan wrote:
Hi folks, I'm working on an external for Max and PD embedding the S7
scheme interpreter. It's mostly intended to do things at event level, (algo
comp, etc) so I have been somewhat lazy around real time issues so far. But
I'd like to make sure it's as robust as it can be, and can be used for as
much as possible. Right now, I'm pretty sure I'm being a bad
real-time-coder. When the user wants to delay a function call, i.e. (delay
100 foo-fun), I'm doing the following:
- callable foo-fun gets registered in a scheme hashtable with a gensym
unique handle
- C function gets called with the handle
- C code makes a clock, storing it in a hashtable (in C) by the handle,
and passing it a struct (I call it the "clock callback info struct") with
the references it needs for its callback
- when the clock callback fires, it gets passed a void pointer to the
clock-callback-info-struct, uses it to get the cb handle and the ref to the
external (because the callback only gets one arg), calls back into Scheme
with said handle
- Scheme gets the callback out of its registry and executes the stashed
function
This is working well, but.... I am both allocating and deallocating
memory in those functions: for the clock, and for the info struct I use to
pass around the reference to the external and the handle. Given that I want
to be treating this code as high priority, and having it execute as
timing-accurate as possible, I assume I should not be allocating and
freeing in those functions, because I could get blocked on the memory
calls, correct? I think I should probably have a pre-allocated pool of
clocks and their associated info structs so that when a delay call comes
in, we get one from the pool, and only do memory management if the pool is
empty. (and allow the user to set some reasonable config value of the clock
pool). I'm thinking RAM is cheap, clocks are small, people aren't likely to
have more than 1000 or so delay functions running concurrently, and they can
be allocated from the init routine.
My questions:
A) Am I right, both about being bad, and about clock pre-allocation and
pooling being a decent solution?
B) Does anyone have tips on how one should implement and use said clock
pool?
I suppose I should probably also be ensuring the Scheme hash-table
doesn't do any unplanned allocation too, but I can bug folks on the S7
mailing list for that one...
Thanks!
iain
_______________________________________________
Pd-dev mailing list
[email protected]
https://lists.puredata.info/listinfo/pd-dev