Re: Supervising a pipeline?
Laurent Bercot:
> You can't supervise a pipeline per se; you need to supervise both
> processes in the pipeline independently, and make sure the pipe isn't
> broken when one of them dies. So, have "exec inotifywait /dev/disk" as
> foobar/run, and have "exec automounter.py" as foobar/log/run. This will
> work with daemontools, runit and s6. (You can accomplish the same goal
> with perp and nosh too; the syntax will just be different.)

Actually, the syntax for nosh can be exactly as described: something/run and something/log/run. It's not ideal, because of course there is then no properly separated logging of the automounter.

Laurent Bercot:
> Alternatively, you could use s6-rc and create the "inotifywait" and
> "automounter" longrun services in a pipeline; your compiled database
> would then include instructions to set up the supervised pipeline for
> you. This is more complex to set up than just using the integrated pipe
> management in svscan and runsvdir, but it's also more powerful, because
> you can pipeline an arbitrary number of processes that way (this is
> also what nosh does).

Yes: the nosh toolkit makes this extensible. The aforegiven configuration is enacted by the daemontools service scanner. The underlying service management supports arbitrarily long pipelines of services, each one's standard output and standard error feeding into the standard input of the next one along. The system-control utility looks at the "log" symbolic links of whatever collection of services it is operating upon and sends a "plumb services together" order to the service manager. So one could construct a three-service pipeline: inotifywait into the Python program into a cyclog instance.
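The three-service pipeline described above can be sketched as daemontools-style service directories whose "log" symbolic links name the next service along. The directory names below are hypothetical, and this is only an illustrative sketch of the layout, not a tested nosh configuration:

```shell
#!/bin/sh
# Hypothetical three-service pipeline: inotifywait -> automounter -> cyclog.
# Each service directory's "log" symlink names the next service along,
# which is what system-control reads when plumbing services together.
mkdir -p inotifywait-svc automounter-svc cyclog-svc

printf '#!/bin/sh\nexec inotifywait /dev/disk\n' > inotifywait-svc/run
printf '#!/bin/sh\nexec automounter.py\n' > automounter-svc/run
printf '#!/bin/sh\nexec cyclog /var/log/automounter/\n' > cyclog-svc/run
chmod +x inotifywait-svc/run automounter-svc/run cyclog-svc/run

# Point each stage's standard output at the next stage.
ln -sfn ../automounter-svc inotifywait-svc/log
ln -sfn ../cyclog-svc automounter-svc/log
```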
Supervising a pipeline?
Hi all,

I'm making a thumb drive automounter using inotifywait piped into my Python program, which detects the relevant CREATE and DELETE events and automounts and auto-unmounts accordingly.

Here's what I thought the run script would look like (runit dialect):

#!/bin/sh
exec /usr/bin/inotifywait /dev/disk | /usr/local/bin/automounter.py

However, it was pointed out to me that if automounter.py crashed, inotifywait would keep spinning and shooting its voluminous stdout messages into the ether. And presumably the service wouldn't crash and restart.

Is this doable in a way consistent with supervision suites?

I completely understand that 95% of you think what I've suggested is a no-style kludge that shouldn't be done, and that I should use Python's inotify framework. That's a completely different discussion: I'm limiting my question to whether such a pipeline is, or can be made, consistent with things like daemontools-encore, s6, and runit.

Thanks,

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
Re: Supervising a pipeline?
Two things: I'm not at a computer, but I'm *pretty* sure "exec foo | bar" doesn't do what you expect (the exec applies only inside the subshell running the first pipeline element, not to the run script's own shell). But more importantly, if the Python program dies, inotifywait will get a SIGPIPE the next time it writes, and then almost assuredly crash itself. You could kill the Python program and see.

--
sent from a rotary phone, pardon my brevity

On Dec 26, 2015 11:07 AM, "Steve Litt" wrote:
> Here's what I thought the run script would look like (runit dialect):
>
> #!/bin/sh
> exec /usr/bin/inotifywait /dev/disk | /usr/local/bin/automounter.py
>
> However, it was pointed out to me that if automounter.py crashed,
> inotifywait would keep spinning and shooting its voluminous stdout
> messages into the ether.
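The SIGPIPE behaviour is easy to demonstrate from a shell. In this sketch (bash, for its PIPESTATUS array), head stands in for the dying consumer: once it exits, the writer's next write raises SIGPIPE (signal 13), and bash reports the signal death as 128 + 13:

```shell
#!/usr/bin/env bash
# "head" exits after one line; "yes" then takes SIGPIPE on its next
# write and dies. Bash reports a signal death as 128 + signal number.
yes | head -n 1 > /dev/null
echo "writer status: ${PIPESTATUS[0]}"   # prints "writer status: 141"
```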
Re: Supervising a pipeline?
On 2015-12-26 18:09, Steve Litt wrote:
> #!/bin/sh
> exec /usr/bin/inotifywait /dev/disk | /usr/local/bin/automounter.py

You can't supervise a pipeline per se; you need to supervise both processes in the pipeline independently, and make sure the pipe isn't broken when one of them dies.

So, have "exec inotifywait /dev/disk" as foobar/run, and have "exec automounter.py" as foobar/log/run. This will work with daemontools, runit and s6. (You can accomplish the same goal with perp and nosh too; the syntax will just be different.)

Alternatively, you could use s6-rc and create the "inotifywait" and "automounter" longrun services in a pipeline; your compiled database would then include instructions to set up the supervised pipeline for you. This is more complex to set up than just using the integrated pipe management in svscan and runsvdir, but it's also more powerful, because you can pipeline an arbitrary number of processes that way (this is also what nosh does).

--
Laurent
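The run/log/run arrangement can be sketched as follows — a minimal, untested sketch assuming a daemontools/runit-style scan directory, with "foobar" taken from the example above (inotifywait is given -m here, an assumption, so that it keeps monitoring instead of exiting after one event):

```shell
#!/bin/sh
# Producer/consumer pair in one service directory: the supervisor holds
# a pipe from foobar/run's stdout to foobar/log/run's stdin, and
# restarts either process independently without breaking the pipe.
mkdir -p foobar/log

cat > foobar/run <<'EOF'
#!/bin/sh
exec /usr/bin/inotifywait -m /dev/disk
EOF

cat > foobar/log/run <<'EOF'
#!/bin/sh
exec /usr/local/bin/automounter.py
EOF

chmod +x foobar/run foobar/log/run
```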
Re: Supervising a pipeline?
On 12/26/2015 12:09 PM, Steve Litt wrote:
> Is this doable in a way consistent with supervision suites?

Pipelines can form graphs of practically unlimited extent, owing to the nature of file descriptors (modulo process-control limitations and space constraints). You might be interested in pipexec, which exploits this as a generic tool:

https://github.com/flonatel/pipexec
Re: Supervising a pipeline?
On Sat, 26 Dec 2015 18:25:22 +0100
Laurent Bercot wrote:

> You can't supervise a pipeline per se; you need to supervise both
> processes in the pipeline independently, and make sure the pipe
> isn't broken when one of them dies.

I've verified that the preceding paragraph is true. Killing the Python program left inotifywait running, and the service was not restarted. That's unacceptable.

> So, have "exec inotifywait /dev/disk" as foobar/run, and have
> "exec automounter.py" as foobar/log/run. This will work with
> daemontools, runit and s6. (You can accomplish the same goal with
> perp and nosh too; the syntax will just be different.)

:-) Congrats Laurent: you've suggested a kludge that the King of Kludges, Steve Litt, cannot abide. I understand it would probably work, but I'm too much of a prude to use a facility meant for logging in this way. Nice one, though: the kludgicity is a thing of panoramic beauty. If it had been my idea, I might have done it just for that reason :-)

> Alternatively, you could use s6-rc and create the "inotifywait" and
> "automounter" longrun services in a pipeline; your compiled database
> would then include instructions to set up the supervised pipeline
> for you. This is more complex to set up than just using the integrated
> pipe management in svscan and runsvdir, but it's also more powerful,
> because you can pipeline an arbitrary number of processes that way
> (this is also what nosh does).

I'll keep the preceding in mind when contemplating:

exec a | b | c | d | e

But I can't begin to imagine what would be in the "compiled database" to which you refer. I can't actually do this, for the following reasons:

1) I have absolutely no idea how.
2) Documentation for deployment would be a nightmare.
3) My intended audience would not only laugh, but use such a setup as a reason not to use my automounter.
At this point I should mention the pipexec described by post-sysv. It looks promising in a different situation, but it would be a dependency that would belie my automounter's claim to "simplicity."

So what I did was something like this, in the Python program:

import subprocess

proc = subprocess.Popen(['/usr/bin/inotifywait',
                         '-m', '-r', '/dev/disk/by-id'],
                        stdout=subprocess.PIPE, bufsize=1)
EOF = False
while not EOF:
    line = proc.stdout.readline().decode('utf8')
    if line == '':
        EOF = True
    else:
        line = line.strip()
        process_line(line)
print('\nCaller loop done, aborting program')

The runit run command becomes:

exec /usr/local/bin/amounter.py

While not as kludgificent as using the logging facility, the preceding does have a certain charmful kludgicity. And, as tested with runit, if amounter.py dies, they both die and the service gets rerun; and if inotifywait dies, they both die and the service gets rerun.

Thanks everybody!

SteveT

Steve Litt
November 2015 featured book: Troubleshooting Techniques
of the Successful Technologist
http://www.troubleshooters.com/techniques
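For reference, the s6-rc "compiled database" Laurent mentioned is built with s6-rc-compile from source definition directories. A hedged sketch, with hypothetical service names, assuming s6-rc's producer-for/consumer-for/pipeline-name pipeline files (s6-rc run scripts are usually execline, plain sh is used here for brevity):

```shell
#!/bin/sh
# Hypothetical s6-rc source definitions for the two-stage pipeline.
# s6-rc-compile would turn the src/ tree into the compiled database.
mkdir -p src/watcher src/automounter

echo longrun > src/watcher/type
printf '#!/bin/sh\nexec inotifywait -m /dev/disk\n' > src/watcher/run
echo automounter > src/watcher/producer-for

echo longrun > src/automounter/type
printf '#!/bin/sh\nexec automounter.py\n' > src/automounter/run
echo watcher > src/automounter/consumer-for
echo automount-pipeline > src/automounter/pipeline-name

# Then something like: s6-rc-compile /etc/s6-rc/compiled src
```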
Re: Supervising a pipeline?
On 12/26/15 7:09 PM, Steve Litt wrote:
> I'm making a thumb drive automounter using inotifywait piped into my
> Python program, which detects the proper CREATES and DELETES and
> automounts and autoumounts accordingly.
> [...]
> Is this doable in a way consistent with supervision suites?

I'm using rundeux from perp for exactly the same thing:

http://b0llix.net/perp/site.cgi?page=rundeux.8

On one side there is the inotifywait process, and on the other the script that processes the events. It works perfectly.

--
Georgi Chorbadzhiyski | http://georgi.unixsol.org/ | http://github.com/gfto/