So, as I'm not in write-only mode, here are some possible alternatives we could use (maybe this shows better how and why I arrived at my approach):

1) netlink with a private pipe to long lived handler:

  establish netlink socket
  spawn a pipe to handler process
  write a "netlink, no timeout" message to pipe
  wait for event messages
     gather event information
     write message to pipe

The initial pipe message lets the parser / handler know that we are in netlink operation and disables its timeout functionality, resulting in both processes being long lived. This won't harm the system much, as the memory of sleeping processes is usually swapped out, but resources still lie around unused.
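The flow above can be sketched in C. This is a minimal sketch, not busybox code: the netlink socket is replaced by a fixed list of fake event strings so it can run anywhere, and the names `run_alternative1` / `handler_loop` are mine. The parent plays the netlink reader, the child the long lived handler fed through the pipe.

```c
/* Sketch of alternative 1: long lived handler behind a private pipe.
 * Assumption: fake event strings stand in for real netlink messages. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Handler side: read messages until EOF; exit code = number of event
 * messages seen after the initial "netlink" control message. */
static int handler_loop(int rfd)
{
    FILE *in = fdopen(rfd, "r");
    char line[256];
    int events = 0;

    while (in && fgets(line, sizeof(line), in)) {
        if (strcmp(line, "netlink\n") == 0)
            continue;             /* control message: disable timeout */
        events++;                 /* would parse and act on the event here */
    }
    return events;
}

/* Reader side: spawn the long lived handler once, send the control
 * message, then forward every event into the pipe. */
int run_alternative1(const char **events, int n)
{
    int pfd[2], status, i;
    pid_t pid;

    if (pipe(pfd) < 0)
        return -1;
    pid = fork();
    if (pid == 0) {                        /* the handler process */
        close(pfd[1]);
        _exit(handler_loop(pfd[0]));
    }
    close(pfd[0]);
    dprintf(pfd[1], "netlink\n");          /* "no timeout" control message */
    for (i = 0; i < n; i++)                /* wait for event messages ...  */
        dprintf(pfd[1], "%s\n", events[i]); /* ... write message to pipe   */
    close(pfd[1]);                          /* only on reader exit; in the
                                             * real design both live on    */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Note the sketch closes the write end at the end so the demo terminates; in the real alternative 1 both processes would keep running, which is exactly the resource cost described above.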

@Laurent: You know the race conditions that force the handler process to be long lived here; otherwise we would need complex pipe management with handler re-spawning, and all that stuff. You told me about them.

This would indeed be the simplest solution when splitting netlink reader and handler. Other mechanisms may still create a named pipe and use the same handler for their purpose. With the caveat of two long lived processes, one of which I call big.

So, look forward to the second alternative ...


2) netlink with a private pipe but on demand start of handler (avoiding the race):

   create a pipe and hold both ends open (but never read)
   establish netlink socket
   wait for event message
      gather event information
      if no handler process running
         spawn a new handler process, redirecting stdin from read end of pipe
      write message to pipe

  with a SIGCHLD handling of:
     get status of process
     do failure management
     check for data still pending in pipe
        re-spawn a handler process, redirecting stdin from read end of pipe

The netlink reader is a long lived process; the handler is started on demand when required and may die after some timeout. Races won't happen this way, as the pipe does not vanish, and data written into the pipe while an old handler exits does not get lost (the next handler will get the message).
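The race-avoiding core of alternative 2 can be shown in a few lines of C. Again a hedged sketch with names of my own (`run_alternative2`, `oneshot_handler`) and a fake event string: the reader holds BOTH pipe ends open, so a message written while no handler is running simply waits in the pipe for the handler that is spawned afterwards.

```c
/* Sketch of alternative 2: on-demand handler, pipe ends held by reader.
 * Assumption: one fake event string stands in for a netlink message. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* On-demand handler: read exactly one message, exit with its length,
 * standing in for "handler may die after some timeout". */
static int oneshot_handler(int rfd)
{
    char line[256];
    FILE *in = fdopen(rfd, "r");

    if (!in || !fgets(line, sizeof(line), in))
        return 0;
    return (int)strlen(line);
}

/* Write one event while NO handler exists, then spawn one on demand;
 * returns the length of the line the late-started handler received. */
int run_alternative2(const char *event)
{
    int pfd[2], status;
    pid_t pid;

    if (pipe(pfd) < 0)                 /* both ends stay open in reader */
        return -1;
    dprintf(pfd[1], "%s\n", event);    /* event arrives, no handler yet */

    pid = fork();                      /* spawn handler only now */
    if (pid == 0) {
        close(pfd[1]);
        _exit(oneshot_handler(pfd[0]));
    }
    /* reader keeps pfd[0] open: even when this handler dies, the pipe
     * and any pending data survive for the re-spawned handler, which is
     * what the SIGCHLD path above relies on */
    waitpid(pid, &status, 0);
    close(pfd[0]);
    close(pfd[1]);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The handler receives the message that was written before it even existed — that is the whole point: no data loss, no race, because the pipe outlives any individual handler.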

... better?

This is what I want to do, with an additional choice for more clarity: let the netlink reader do its job, and split off the pipe management and handler start into a separate thread, but otherwise perform exactly the same operation. At *no* extra cost, the pipe management and the handler startup may then be used for other mechanism(s).

... still afraid of using a named pipe? Would you still prefer a private pipe for netlink?

... ok, look at the next alternative (I came up with this one taking your fears into account).


3) netlink spawning an external supervisor for on-demand handler startup

netlink reader:
   establish netlink socket
   create a pipe, save write end for writing to pipe
   spawn "fifosvd - xdev parser", redirecting stdin from read end of pipe
   close read end of pipe
   wait for event messages
      gather event information
      write message to pipe

fifosvd:
   save and hold read end of pipe open (but never read)
   wait until data arrive in pipe (poll for read)
       spawn handler process, handing over the pipe read end to stdin
       wait for exit of process
       failure management

A novice may think that this way we added another process to the data flow, but no, the data flow is still the same: netlink -> pipe -> handler. The extra process is a small helper containing the code for the on-demand start of the handler and for the failure management; it never touches the data passed through the pipe.
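The fifosvd core loop can be sketched as follows. To keep it self-contained and testable, this sketch (my names: `fifosvd_round`, `pipe_handler`) forks a C function instead of exec'ing the real "xdev parser"; everything else follows the pseudocode: poll the read end until data arrives, hand the read end to the handler's stdin, wait for its exit.

```c
/* Sketch of one round of the fifosvd loop.
 * Assumption: a forked C function stands in for the exec'ed handler. */
#define _POSIX_C_SOURCE 200809L
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Stand-in handler: consume one message from stdin, exit with its length. */
static int pipe_handler(int fd)
{
    char line[256];
    FILE *in = fdopen(fd, "r");

    if (!in || !fgets(line, sizeof(line), in))
        return 0;
    return (int)strlen(line);
}

/* One round: wait until data arrives on the pipe, spawn the handler with
 * stdin redirected from the read end, wait for its exit, report status. */
int fifosvd_round(int rfd)
{
    struct pollfd pfd = { .fd = rfd, .events = POLLIN };
    pid_t pid;
    int status;

    if (poll(&pfd, 1, -1) < 0)          /* wait until data arrive in pipe */
        return -1;
    pid = fork();                        /* spawn handler process */
    if (pid == 0) {
        dup2(rfd, STDIN_FILENO);         /* hand pipe read end to stdin */
        _exit(pipe_handler(STDIN_FILENO));
    }
    waitpid(pid, &status, 0);            /* wait for exit of process */
    /* failure management (restart limits, logging) would go here */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The supervisor itself never reads from the pipe — `poll()` only tells it that data is waiting — so, as stated above, it never gets in contact with the data flowing from netlink to the handler.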


This approach allows simple reuse of code for other mechanism(s), and fifosvd may be of general use: when the first argument is a single dash ("-"), it uses the pipe from stdin; otherwise it creates and opens a named pipe. It may also be used for on-demand start of other jobs:

   process producing sometimes data | script to process the data

 may be changed to:

   process producing data | fifosvd - script to process data

Now the script starts on demand when data arrives in the pipe, and when the script dies, it restarts as soon as more data is in the pipe.

This is an extra benefit of my approach, at no extra cost.


I hope this helps to resolve some fears.

--
Harald

_______________________________________________
busybox mailing list
busybox@busybox.net
http://lists.busybox.net/mailman/listinfo/busybox
