Thanks for clarifying!
> I think currently only readsf~ / writesf~ use threaded disk I/O and
> they manage their own threads/locks. I think
> https://github.com/pure-data/pure-data/pull/1357 could also help with
> those.
I don't think so. PR #1357 is about asynchronous tasks, which can take
an arbitrary amount of time. Disk streaming, on the other hand, has to
meet deadlines. Both have to be completely decoupled, otherwise you run
into problems like this:
https://github.com/supercollider/supercollider/issues/5511
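To make the problem concrete, here is a toy C program (not Pd or
SuperCollider code; all numbers are invented) showing how a single
shared FIFO lets an open-ended task starve a deadline-bound one:

    #include <stdio.h>

    typedef struct { const char *name; double seconds; } task_t;

    int main(void)
    {
        /* one shared FIFO: an open-ended task happens to be queued first */
        task_t queue[] = {
            { "decode whole soundfile (open-ended)",  2.00 },
            { "refill disk-stream buffer (deadline)", 0.01 },
        };
        double clock = 0.0, deadline = 0.74;  /* e.g. half a 64k-frame ring */
        for (int i = 0; i < 2; i++) {
            clock += queue[i].seconds;
            printf("%-40s finished at %.2f s%s\n", queue[i].name, clock,
                (i == 1 && clock > deadline) ?
                    "  <-- deadline missed, audio dropout" : "");
        }
        return 0;
    }

The fix is exactly the decoupling above: give the deadline-bound work
its own queue and thread.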
---
> my patch wouldn't affect disk I/O as it is. However, it provides
> infrastructure which could be reused to add threaded behaviour for
> disk I/O
That's an interesting idea! I think your patch could indeed be
generalized for different types of I/O (a rough sketch follows the list):
1) "sys_registersocket" (-> "sys_registerfd") would also take the I/O
type, such as "socket" or "file".
2) there would be a thread for each I/O type. Otherwise a send() on a
TCP socket could block a read() on a soundfile, for example.
3) "poll_fds" would use the I/O type to determine if the fd is ready.
For example, on Windows select() only works on sockets, but not on files!
4) The size of the ring buffer depends on the I/O type: for UDP sockets
it must be large enough to hold the largest UDP packet; for disk
streaming we actually want to be able to set the size ourselves, like
the "buffer size" creation argument for [readsf~].
Just some ideas.
Christof
On 17.07.2021 23:26, Giulio Moro wrote:
What Christof is saying is right: my patch wouldn't affect disk I/O as
it is. However, it provides infrastructure which could be reused to
add threaded behaviour for disk I/O. I think currently only readsf~ /
writesf~ use threaded disk I/O and they manage their own
threads/locks. I think
https://github.com/pure-data/pure-data/pull/1357 could also help with
those. Apologies for the earlier misleading statement.
Best,
Giulio
Edwin van der Heide wrote on 17/07/2021 16:30:
Dear Christof,
Your distinction of the three categories makes a lot of sense.
Very nice to see the PR for the ‘asynchronous tasks API’ that
provides a general infrastructure.
Best! Edwin
On 17 Jul 2021, at 15:49, Christof Ressi <[email protected]> wrote:
> Am I right to assume that this would include writing arrays to disk
> without interfering with the audio thread?
I don't think so, but there's already a separate PR for that:
https://github.com/pure-data/pure-data/pull/1357
Actually, I am not sure what Giulio meant by "disk I/O". Maybe that
the I/O thread could also be used to stream audio to/from disk?
Personally, I would keep networking and audio streaming on different
threads as they don't necessarily have the same priority.
---
Generally, I see three categories of worker threads:
1) the task has to meet a certain deadline. One example is streaming
audio to/from disk: it eventually has to happen within a certain time
frame, but the time frame can be made larger with additional buffering
(see the sketch after this list).
2) the task has no deadline but should be completed as fast as
possible. One example is network I/O.
3) the task has no time constraints at all. One example is loading
soundfiles from disk: you don't really care how much time it takes,
the only important thing is that it doesn't block other threads.
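For category 1, the relation between buffering and deadline is easy to
quantify. A back-of-the-envelope sketch (the numbers are made up, not
[readsf~]'s actual defaults):

    #include <stdio.h>

    /* with a double-buffered ring of RINGFRAMES frames at SR Hz, the
       disk thread must refill one half while the other half plays */
    #define SR         44100.0
    #define RINGFRAMES 65536.0  /* invented example size, in frames */

    int main(void)
    {
        double deadline = (RINGFRAMES / 2.0) / SR;
        printf("refill deadline: %.3f s\n", deadline);  /* ~0.743 s */
        printf("doubling the ring doubles it: %.3f s\n", RINGFRAMES / SR);
        return 0;
    }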
SuperCollider's original "scsynth" Server has a thread for disk
streaming (1) plus a thread for asynchronous tasks, which includes
outgoing network packets (2 + 3). The latter is not ideal because it
means that networking can be temporarily blocked by time-consuming
tasks. For this reason, the more recent "supernova" Server has a
dedicated thread for outgoing network packets.
Christof
On 17.07.2021 14:45, Edwin van der Heide wrote:
Dear Giulio,
Am I right to assume that this would include writing arrays to disk
without interfering with the audio thread?
Best! Edwin
On 16 Jul 2021, at 00:18, Giulio Moro via Pd-dev
<[email protected]> wrote:
I have had this PR open for a while:
https://github.com/pure-data/pure-data/pull/1261 . It adds threaded
behaviour for disk and network I/O, making the Pd audio thread
"real-time safer". I went through quite a few revisions courtesy of
umlaeute and Spacechild1, but haven't heard from you directly. I am
happy to do some more work if there is interest in merging. The CI
build currently fails because of the weird "support" of ssize_t on VS
(happy to hear about workarounds).
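For the record, a shim along these lines is a common workaround for
MSVC, which doesn't define ssize_t (just a guess at a fix, not what
the PR currently does):

    /* MSVC has no ssize_t; map it to the SDK's SSIZE_T. */
    #if defined(_MSC_VER)
    #include <BaseTsd.h>
    typedef SSIZE_T ssize_t;
    #else
    #include <sys/types.h>  /* POSIX ssize_t */
    #endif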
Best,
Giulio
Miller Puckette via Pd-dev wrote on 13/07/2021 19:22:
(re-send - I had sent to [email protected] but that now seems to be
defunct...)
To Pd dev -
I'm going to try to get the next Pd release (0.52) out over the
next month
or two. My personal priorities for this release would be putting
in a message
backtrace mechanism (by overriding canvas_connect and pd_bind to
go through
small proxy objects; this will have to be done at load time I
think) and
to go back and try to figure out how to do tooltips without adding
cruft to
the inlet structure. (There's an ancient source-patch to provide
tooltips by Chris McCormick and Guenter Geiger that I plan to start
with - https://sourceforge.net/p/pure-data/patches/264/).
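A minimal sketch of the proxy idea against the public API in m_pd.h
(the class name, wrapper function and logging hook are all made up;
the real design is still open):

    #include "m_pd.h"

    static t_class *msgproxy_class;

    typedef struct _msgproxy {
        t_pd p_pd;       /* minimal (non-graphical) Pd object header */
        t_pd *p_target;  /* the object the message is really meant for */
    } t_msgproxy;

    static void msgproxy_anything(t_msgproxy *x, t_symbol *s,
        int argc, t_atom *argv)
    {
        /* ...record sender/receiver here for the backtrace... */
        pd_typedmess(x->p_target, s, argc, argv);  /* then pass it on */
    }

    /* bind a proxy to "sym" so messages to it are traced; a real
       version would also pd_unbind() the target first so it doesn't
       receive the message twice */
    static t_msgproxy *msgproxy_wrap(t_pd *target, t_symbol *sym)
    {
        t_msgproxy *x = (t_msgproxy *)pd_new(msgproxy_class);
        x->p_target = target;
        pd_bind(&x->p_pd, sym);
        return x;
    }

    void msgproxy_setup(void)
    {
        msgproxy_class = class_new(gensym("msgproxy"), 0, 0,
            sizeof(t_msgproxy), CLASS_PD, 0);
        class_addanything(msgproxy_class, (t_method)msgproxy_anything);
    }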
Before doing that I want to do some reorganizing - in porting Pd
to FreeRTOS
(so I can run it on an Espressif LyraT board, which I think takes
only about
10 or 20% of the current that a Pi needs) I found out that I had
to move
a few functions from one file to another.
This might break some PRs, so... first of all would be to identify
whatever
PRs are ready to merge so I can do that before I make incompatible
changes.
Of course "stable development branch" first... then Dan's
soundfile updates...
then what?
PS more ideas of mine (among many):
hot-reloading externs via a message to Pd
use a "unix binding" socket between Pd and pd-gui instead of
localhost
generalize number/symbol box to allow displaying entire messages
or lists
cheers
Miller
_______________________________________________
Pd-dev mailing list
[email protected]
https://lists.puredata.info/listinfo/pd-dev