> >
> > Hi List.
> >
> > I'm writing a script to automate some system maintenance tasks, and I
> > want to connect over SSH to several remote computers and do stuff on
> > them. I'm using ssh -f to background ssh so I can run the same operation
> > on multiple machines in parallel; otherwise it would be too slow - the
> > maintenance job may take up to a few minutes to run, and the script is
> > not supposed to be fully automatic: a human is supposed to monitor the
> > process.
> >
> > But I don't want to just fire and forget the SSH processes - I want to
> > exit the script only when all the SSH processes have completed. I
> > could do that by monitoring the process IDs of the backgrounded SSH
> > processes, if I knew them - which is what I'm having a hard time
> > figuring out.
> >
> > I'm writing in bash, and ideally it would look something like this:
> >
> > for server in 1 2 ...; do
> >    ssh -f [EMAIL PROTECTED] 'run maintenance task'
> >    pids="$pids $(getSSHpid)"
> > done
> >
> > while kill -0 $pids 2>/dev/null; do echo "Waiting.."; sleep 1; done
> >
> > but I haven't managed to find a way to get the process ID of the ssh
> > process after it goes into the background, other than by 'ps'ing for it.
> >
> > How can I go about doing this?
> >
> > --
> >
> > Oded
> >

Sorry for the OT, but some kind of distributed shell seems to me more
suitable for this task -
http://www.linux-mag.com/microsites.php?site=business-class-hpc&sid=build&p=4658
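
If you go the distributed-shell route, pdsh is one well-known tool of
that kind: it runs the same command on many hosts in parallel and
returns only when all of them have finished. The hostnames and the
command below are just placeholders:

    pdsh -w server1,server2 'run maintenance task'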
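
That said, if you want to stay in plain bash: the reason $! doesn't
help with ssh -f is that -f makes ssh fork and background itself after
authentication, so the shell never puts it in the background and never
records its PID. A common workaround is to drop -f and background ssh
with & yourself; then $! is exactly the PID you want. A minimal sketch
(user, the hostnames, and the remote command are placeholders):

    #!/bin/bash
    pids=""
    for server in server1 server2; do
        # -n: redirect stdin from /dev/null, so the backgrounded
        # ssh doesn't compete for the terminal's input
        ssh -n user@$server 'run maintenance task' &
        pids="$pids $!"   # $! is the PID of the ssh just backgrounded
    done

    # Report progress until every ssh child has exited;
    # kill -0 sends no signal, it only tests that the PID still exists
    for pid in $pids; do
        while kill -0 $pid 2>/dev/null; do
            echo "Waiting for $pid..."
            sleep 1
        done
    done

A plain "wait $pids" would also block until all of them finish, if you
don't need the progress messages.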
