Hi,

Messages can
be sent to a subprocess from a parent--but I had to start using
udpsend to send messages back.
You can send messages from the subprocess to the parent process with [stdout]; check the [pd~] help file.

I'm looking at long-duration signals analysis on command from a
real-time process, and I'd like that analysis to run with minimal
disruption to the real-time process.
Note that the subprocess is slaved to the parent process, so if the subprocess blocks, the parent also blocks. You can absorb occasional CPU spikes by setting "-fifo" to a high value, but you can never make it 100% non-blocking.
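The blocking relationship can be pictured as a bounded FIFO: the parent enqueues signal blocks and the subprocess dequeues them, and once the buffer fills because the consumer stalls, the producer has nowhere to put the next block. A rough Python sketch of that model (illustrative only, not pd~'s actual implementation):

```python
import queue

def make_fifo(nblocks):
    """A bounded FIFO standing in for pd~'s "-fifo" buffer (size in blocks)."""
    return queue.Queue(maxsize=nblocks)

def parent_writes(fifo, block):
    """Parent process: enqueue one signal block.
    Blocks (stalling the parent) once the FIFO is full."""
    fifo.put(block)  # put() with block=True waits for free space

def subprocess_reads(fifo):
    """Subprocess: dequeue one signal block for processing."""
    return fifo.get()
```

A larger "-fifo" value only deepens the buffer, so occasional CPU spikes are absorbed, but a subprocess that stays stuck still fills it eventually.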

Now that you've described your setup, I think you should rather run a separate instance of Pd (probably with the "-nrt" flag) and communicate with [netsend]/[netreceive], as you're already doing. There's no Vanilla way to start external processes (yet), but you can either find an external for that (there are some, but I keep forgetting the names) or start both instances together with a simple shell script.
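For illustration, a Python stand-in for that launcher script could look roughly like this. The "pd" binary name, patch file names, and flag choices are assumptions, not a tested setup:

```python
import subprocess

def pd_command(patch, *flags):
    """Build the command line for one Pd instance ("pd" assumed on PATH)."""
    return ["pd", *flags, patch]

def launch_pair():
    """Start the non-real-time analysis instance, then the real-time patch.
    Both run as independent processes and can talk over [netsend]/[netreceive]."""
    worker = subprocess.Popen(pd_command("analysis.pd", "-nogui", "-nrt"))
    main = subprocess.Popen(pd_command("main.pd"))
    return main, worker
```

Starting the worker first gives it a moment to open its [netreceive] port before the real-time patch tries to connect.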

Finally, you can simplify the communication by using TCP, because with [netreceive] - without "-u" - you just have to listen for an incoming message and then you can send a reply with the [send( method - no need to open another socket!
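In plain socket terms, that TCP round trip looks like the sketch below: one connection carries both the request and the reply, whereas with UDP each side would need its own listening socket. This is a generic model, not [netreceive]'s internals:

```python
import socket
import threading

def make_server(port=0):
    """Create a listening TCP socket (port 0 lets the OS pick a free port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    return srv

def serve_one_request(srv):
    """Accept one client, read its message, and reply on the same connection."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"got " + request)  # reply travels back on the same socket

def send_request(port, msg):
    """Connect, send one message, and wait for the reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect(("127.0.0.1", port))
        c.sendall(msg)
        return c.recv(1024)
```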

Christof

On 28.01.2020 06:22, Charles Z Henry wrote:
Thanks for your replies, Christof

On Mon, Jan 27, 2020 at 4:40 AM Christof Ressi <christof.re...@gmx.at> wrote:
Hi,

I'm guessing that mrpeach/udpreceive and netreceive have different
polling behavior that can explain this, but I don't see it yet.  Is
there anyone reading, who's dealt with this issue before?
They actually use the same polling mechanism. I tried to receive OSC messages 
with [mrpeach/udpreceive] while running in batch mode and it doesn't work, just 
as I expected. Are you saying this works for you? I would be quite surprised...

As the ticket on sourceforge mentions, there's no explicit call to 
sys_pollgui() in batch mode *) - which by the way is a misnomer because it 
polls *all* sockets -, so I don't see how [mrpeach/udpreceive] could work under 
such circumstances.
I think I was unclear. As I'm editing the test patch to compare
udpsend/netsend, the key difference I'm finding is how the process
gets started from pd~,
and it looks like I was wrong about this being true batch mode.
When pd~ starts up a new subprocess with "-batch", it suppresses the
GUI as expected, but it also starts the new process with "-schedlib",
which swaps in pd~'s own subprocess scheduler.

I have been writing a patch to send messages to/from a subprocess in
batch mode.  First, I wrote the patch with mrpeach's
udpsend/udpreceive and got it working.  It's just a simple handshake:
I think the real problem is that you're using batch mode for the wrong job. In 
batch mode, Pd will run as fast as possible to get a certain task done. If you 
wait for incoming network traffic while in batch mode, you're just busy waiting 
and wasting lots of CPU cycles. Just use Pd in realtime mode instead!
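The contrast, in generic socket terms (not Pd's scheduler code): a blocking wait sleeps in select() until data arrives and costs no CPU while idle, while a batch-mode style loop polls non-blocking as fast as it can and burns cycles doing nothing:

```python
import select
import socket

def blocking_wait(sock, timeout=None):
    """Sleep in select() until a datagram arrives; no CPU is spent while idle."""
    ready, _, _ = select.select([sock], [], [], timeout)
    return sock.recvfrom(1024)[0] if ready else None

def busy_wait(sock, max_spins=100000):
    """Batch-mode style: poll as fast as possible, wasting cycles until data shows up."""
    sock.setblocking(False)
    for _ in range(max_spins):
        try:
            return sock.recvfrom(1024)[0]
        except BlockingIOError:
            pass  # nothing yet -- spin again immediately
    return None
```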

Or is there a specific reason why you think you need to receive messages while 
running in batch mode?
I'm looking at long-duration signals analysis on command from a
real-time process, and I'd like that analysis to run with minimal
disruption to the real-time process.
At first, I was just buffering signals in, but decided to try using
shmem to copy tables back/forth.  That requires some coordination
between the processes to know when to read from shmem.  Messages can
be sent to a subprocess from a parent--but I had to start using
udpsend to send messages back.

That's why it occurred to me to start trying batch mode and see if I
could use udpsend or netsend as out-of-band communication to control
it.  It wouldn't have to start/stop an entire process or access the
disk at all.  It could all stay in memory waiting to run the long
analysis until the parent asks it to.

Here the cpu load goes to 100%
This is totally expected, as you want to run your task as fast as possible.

Christof

*) To be precise, there is a hidden call to sys_pollgui() if there are more 
than 5000 clock timeouts in a given scheduler tick (see sched_tick())
Thank you, I'll read through more of the sched_tick() and
sys_pollgui() code next.


Gesendet: Montag, 27. Januar 2020 um 02:20 Uhr
Von: "Charles Z Henry" <czhe...@gmail.com>
An: Pd-List <pd-list@lists.iem.at>
Betreff: [PD] netreceive vs mrpeach/udpreceive in batch mode

Hi list,

I have been writing a patch to send messages to/from a subprocess in
batch mode.  First, I wrote the patch with mrpeach's
udpsend/udpreceive and got it working.  It's just a simple handshake:

the toplevel process starts listening on port 16000 for a message [1
1(.  When it receives that message, it sends back a message [1
subprocess#( to localhost port 15999.

The subprocess starts up, listens on port 15999 and sends a message [1
1( to localhost port 16000.  When it gets a message [1 n( on port
15999, it outputs n as the subprocess #, and opens a new port 16000+n
(and closes 15999).
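The handshake above can be sketched in plain UDP sockets like this. It is a generic model of the patch logic, with the Pd message syntax simplified to text datagrams:

```python
import socket
import threading

BASE_PORT = 16000

def parent_assign(parent_sock, child_addr, n):
    """Toplevel side: wait for the '1 1' hello, then send back the subprocess number."""
    data, _ = parent_sock.recvfrom(1024)
    if data.strip() == b"1 1;":
        parent_sock.sendto(b"1 %d;" % n, child_addr)

def child_handshake(child_sock, parent_addr):
    """Subprocess side: say hello, receive its number n, and derive its new port."""
    child_sock.sendto(b"1 1;", parent_addr)
    data, _ = child_sock.recvfrom(1024)
    n = int(data.split()[1].rstrip(b";"))
    return n, BASE_PORT + n  # the subprocess would now reopen on port 16000+n
```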

This was fine, except udpsend/udpreceive pairs exchange raw byte
values (0-255).  It works, but it makes the patches less easily
readable.  It would still be possible to pass integers larger than 255
with a little patching, but some flexibility would be nice.
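For reference, splitting a larger integer into 0-255 byte values and reassembling it is just shift-and-mask arithmetic; the Pd-side patching would mirror this logic (a generic sketch, not an existing patch):

```python
def int_to_bytes(n):
    """Split a non-negative integer into big-endian byte values (each 0-255)."""
    out = []
    while True:
        out.insert(0, n & 0xFF)  # take the low byte, prepend it
        n >>= 8
        if n == 0:
            return out

def bytes_to_int(bs):
    """Reassemble the integer: shift in one byte at a time."""
    value = 0
    for b in bs:
        value = (value << 8) | b
    return value
```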

I thought "netsend -u"/"netreceive -u" would make a good replacement
with text instead.

It runs fine during the first part of testing with the GUI.  I test
with "-nogui": also fine.  Then I add "-batch".  Here the CPU load goes
to 100% (which didn't happen with mrpeach udpsend/udpreceive).  I'm able
to strace and see the first message [1 1( sent.  Then the process
keeps on going but doesn't receive any further UDP messages.

I'm able to find a sourceforge ticket from 2012:
https://sourceforge.net/p/pure-data/bugs/943/
and basically, I'm looking at the same use case: batch-mode
processing under supervision from another Pd process.

I'm guessing that mrpeach/udpreceive and netreceive have different
polling behavior that can explain this, but I don't see it yet.  Is
there anyone reading, who's dealt with this issue before?

Chuck



_______________________________________________
Pd-list@lists.iem.at mailing list
UNSUBSCRIBE and account-management -> 
https://lists.puredata.info/listinfo/pd-list




