Re: Non-GUI, single processor inter-process messaging - how?

2018-07-24 Thread Chris Green
Dennis Lee Bieber  wrote:
> On Mon, 23 Jul 2018 22:14:22 +0100, Chris Green  declaimed the
> following:
> 
> >Anders Wegge Keller  wrote:
> >> 
> >>  If your update frequency is low enough that it won't kill the filesystem
> >> and the amount of data is reasonably small, atomic writes to a file are
> >> easy to work with:
> >> 
> >Yes, I think you're right, using a file would seem to be the best
> >answer.  The sample rate is only once a second or slower and there's not a
> >huge amount of data involved.
> >
> 
> If the data is small enough, putting the file into the small shared
> memory filesystem (I forget if that is /dev/shm or /run/shm on the BBB) would
> even avoid wearing out the eMMC/SD card.
> 
That is also a very good idea.  There's /dev/shm and /run on my BBB;
/dev/shm seems to have more space.
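
For the record, a rough sketch of what that could look like (the path and
JSON payload are assumptions, nothing agreed in this thread): write each
sample to a temporary file on the same tmpfs and rename it into place, so
readers never see a partial update and nothing touches the eMMC/SD card.

  import json, os, tempfile, time

  TARGET = "/dev/shm/sensor_status.json"   # hypothetical tmpfs-backed path

  def publish(reading):
      # Write the sample to a temp file on the same tmpfs, then atomically
      # rename it over the target so clients never see a partial file.
      fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(TARGET))
      with os.fdopen(fd, "w") as f:
          json.dump({"reading": reading, "stamp": time.time()}, f)
      os.replace(tmpname, TARGET)

  # e.g. called once a second from the sampling loop:
  #   publish(read_sensor())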

-- 
Chris Green
·


Re: Non-GUI, single processor inter-process messaging - how?

2018-07-23 Thread Chris Green
Anders Wegge Keller  wrote:
> On Sat, 21 Jul 2018 09:07:23 +0100
> Chris Green  wrote:
> 
> > So - what's the best approach to this?  I've done some searching and
> > most/many of the solutions seem rather heavyweight for my needs. Am I
> > overlooking something obvious or should I try rethinking the original
> > requirement and look for another approach?
> 
>  What do you consider heavyweight? Number of dependencies, memory footprint,
> amount of code, or something else? Also, which platform will the code
> eventually run on?
> 
>  If your update frequency is low enough that it won't kill the filesystem and
> the amount of data is reasonably small, atomic writes to a file are easy to
> work with:
> 
Yes, I think you're right, using a file would seem to be the best
answer.  The sample rate is only once a second or slower and there's not a
huge amount of data involved.

Thanks.

-- 
Chris Green
·


Re: Non-GUI, single processor inter-process messaging - how?

2018-07-23 Thread Anders Wegge Keller
On Sat, 21 Jul 2018 09:07:23 +0100
Chris Green  wrote:

> So - what's the best approach to this?  I've done some searching and
> most/many of the solutions seem rather heavyweight for my needs. Am I
> overlooking something obvious or should I try rethinking the original
> requirement and look for another approach?

 What do you consider heavyweight? Number of dependencies, memory footprint,
amount of code, or something else? Also, which platform will the code
eventually run on?

 If your update frequency is low enough that it won't kill the filesystem and
the amount of data is reasonably small, atomic writes to a file are easy to
work with:

  import os, tempfile

  def atomic_write(filename, data):
      # Write the data (bytes) to a temporary file, flush it, then
      # atomically rename it over the target so readers never see a
      # partially written file.
      handle, tmpname = tempfile.mkstemp()
      os.write(handle, data)
      os.fsync(handle)
      os.close(handle)
      os.replace(tmpname, filename)

 If you have multiple filesystems, you may have to read into the details of
calling mkstemp: os.replace can only rename atomically within a single
filesystem, so the temporary file has to be created on the same filesystem
as the target.
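
 One way to arrange that, as a rough sketch (passing dir= here is an
assumption about your layout, not something the snippet above relies on):

  import os, tempfile

  def atomic_write_same_fs(filename, data):
      # Create the temp file in the target's own directory so the final
      # os.replace is a same-filesystem, atomic rename.
      handle, tmpname = tempfile.mkstemp(dir=os.path.dirname(filename) or ".")
      os.write(handle, data)
      os.fsync(handle)
      os.close(handle)
      os.replace(tmpname, filename)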


 If you have an update frequency that will kill the filesystem (for instance
an RPi on an SD card), or have a larger data set where you only want to do
partial updates, the mmap module could be an option. It requires some more
boilerplate. You will probably also have to consider a semaphore to
guarantee that the clients read a consistent result set.
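
 A rough sketch of that idea (the path and record layout are assumptions, and
it uses an advisory flock() in place of a semaphore to keep readers
consistent):

  import fcntl, mmap, os, struct

  PATH = "/dev/shm/sensor_state"      # hypothetical tmpfs-backed file
  RECORD = struct.Struct("<dI")       # e.g. one float reading plus a counter

  def open_map():
      # Create/size the backing file and map it into memory.
      fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
      os.ftruncate(fd, RECORD.size)
      return fd, mmap.mmap(fd, RECORD.size)

  def write_record(fd, mm, value, counter):
      fcntl.flock(fd, fcntl.LOCK_EX)          # exclusive: block readers
      try:
          mm[:RECORD.size] = RECORD.pack(value, counter)
      finally:
          fcntl.flock(fd, fcntl.LOCK_UN)

  def read_record(fd, mm):
      fcntl.flock(fd, fcntl.LOCK_SH)          # shared: many readers at once
      try:
          return RECORD.unpack(mm[:RECORD.size])
      finally:
          fcntl.flock(fd, fcntl.LOCK_UN)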

-- 
//Wegge


Re: Non-GUI, single processor inter-process messaging - how?

2018-07-22 Thread Jan Claeys
On Sat, 2018-07-21 at 11:50 -0400, Dennis Lee Bieber wrote:
> Each client would use urllib (the module names differ between
> Python 2 and 3) to submit requests to the server, and process the returned
> "page".

Or the client could even be a bash script using curl, or any other HTTP
client...
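
A minimal sketch of that setup, with everything here (port, JSON payload,
use of http.server) being an assumption rather than anything agreed in the
thread: the data-collecting process serves its latest reading over local
HTTP, and any HTTP client can fetch it.

  import json, threading
  from http.server import BaseHTTPRequestHandler, HTTPServer

  latest = {"reading": None}          # updated elsewhere by the sampling loop

  class Handler(BaseHTTPRequestHandler):
      def do_GET(self):
          # Return the latest reading as a small JSON "page".
          body = json.dumps(latest).encode()
          self.send_response(200)
          self.send_header("Content-Type", "application/json")
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  def serve():
      HTTPServer(("127.0.0.1", 8300), Handler).serve_forever()

  threading.Thread(target=serve, daemon=True).start()

A client is then just "curl http://127.0.0.1:8300/" from bash, or
urllib.request.urlopen("http://127.0.0.1:8300/") from Python.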

-- 
Jan Claeys


Re: Non-GUI, single processor inter-process messaging - how?

2018-07-21 Thread Chris Green
Steven D'Aprano  wrote:
> On Sat, 21 Jul 2018 09:07:23 +0100, Chris Green wrote:
> 
> [...]
> > I want to be able to interrogate the server process from several client
> > processes, some will interrogate it multiple times, others once only. 
> > They are mostly (all?) run from the command line (bash).
> 
> 
> This sounds like a good use case for signals. Your server script sets up
> one or more callbacks that print the desired information to stdout, or
> write it to a file, whichever is more convenient, and then you send the
> appropriate signal to the server process from the client processes.
> 
[snip useful sample scripts]

Yes, maybe, though I was hoping for something a bit more sophisticated.

At the moment it's the 'client' processes which manage the output side
of things; pushing that across to the 'server' would mean that the
client does nothing except send a signal.  As the outputs go to quite
a variety of things (e.g. terminal screen, serial connection, LCD
display) this would push a lot of code into the server which is
currently quite nicely separated into client modules.

Communicating by file is a possible approach I had considered, though:
the server can do its smoothing etc. of the results and write them to a
file which is then read as required by the client processes.  The only
issue then is the need for some sort of locking, as one doesn't want a
client to read while the server is writing.  One could overcome this by
writing to a temporary file and renaming it into place; mv/rename is
supposed to be atomic.

Going back to my original request, the clients don't really need to be
able to 'interrogate' the server; they just need to be able to access
the results produced by the server.  So maybe simply having the server
write the values to a file is all that's really needed.
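
As a sketch of how small the client side then becomes (the path and JSON
format are assumptions on my part): because the server only ever renames a
complete temporary file into place, a plain open() always sees a whole,
consistent snapshot.

  import json

  def read_latest(path="/dev/shm/sensor_status.json"):   # hypothetical path
      # The rename trick guarantees this file is always complete.
      with open(path) as f:
          return json.load(f)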

-- 
Chris Green
·


Re: Non-GUI, single processor inter-process messaging - how?

2018-07-21 Thread Steven D'Aprano
On Sat, 21 Jul 2018 09:07:23 +0100, Chris Green wrote:

[...]
> I want to be able to interrogate the server process from several client
> processes, some will interrogate it multiple times, others once only. 
> They are mostly (all?) run from the command line (bash).


This sounds like a good use case for signals. Your server script sets up
one or more callbacks that print the desired information to stdout, or
write it to a file, whichever is more convenient, and then you send the
appropriate signal to the server process from the client processes.

At the bash command line, you use the kill command: see `man kill` for 
details.


Here's a tiny demo:

# === cut ===

import signal, os, time

state = 0

def sig1(signum, stack):
    # SIGUSR1 handler: report the current time.
    print(time.strftime('it is %H:%M:%S'))

def sig2(signum, stack):
    # SIGUSR2 handler: report the server's current state.
    print("Current state:", stack.f_globals['state'])

# Register signal handlers
signal.signal(signal.SIGUSR1, sig1)
signal.signal(signal.SIGUSR2, sig2)

# Print the process ID.
print('My PID is:', os.getpid())

while True:
    state += 1
    time.sleep(0.2)

# === cut ===


Run that in one terminal, and the first thing it does is print the
process ID. Let's say it prints 12345; then, over in another terminal,
you can run:

kill -USR1 12345
kill -USR2 12345

to send the appropriate signals.

To do this programmatically from another Python script, use the os.kill() 
function.
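
A minimal sketch of such a client (the PID file path is an assumption; the
demo above only prints its PID rather than writing it anywhere):

# === cut ===

import os, signal

# Hypothetical: assume the server wrote its PID to this file at startup.
with open("/tmp/server.pid") as f:
    pid = int(f.read())

os.kill(pid, signal.SIGUSR2)   # ask the server to report its current state

# === cut ===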


https://docs.python.org/3/library/signal.html

https://pymotw.com/3/signal/




-- 
Steven D'Aprano
"Ever since I learned about confirmation bias, I've been seeing
it everywhere." -- Jon Ronson
