Sorry for not expressing myself clearly.

Master
|-> target1
|-> target2
|-> target3
.
.
+-> targetN


I want to execute myscript.sh via ssh once on each of the target1~N
servers. But the Master server lacks the capacity to fork many ssh
processes at the same time.

$ parallel ssh user@{} bash < /home/user/myscript.sh ::: target1 target2 ... targetN

So I want to distribute the load of forking ssh across several worker
servers with the -S option, and myscript.sh should still be executed
ONCE on each of the target1~N servers. The worker servers are just
distributed gateways for running myscript.sh via ssh on the remote
target servers.

Master
|-> worker1 ---> |
|-> worker2 ---> |  target1~N
|-> worker3 \/-> |
+-> workerN /\-> |

       distributed by parallel
   execute script once on each target1~N


$ parallel --basefile /home/user/myscript.sh --controlmaster -S worker1,worker2,...,workerN ssh user@{} bash < /home/user/myscript.sh ::: target1 target2 ... targetN
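One thing worth double-checking (my own guess, not something confirmed in
this thread): in the command above, the unquoted `< /home/user/myscript.sh`
redirect is performed by the shell on Master before parallel even starts,
so it feeds parallel's stdin and never becomes part of the command template
that runs on the workers. A small stand-in demo without any ssh (the
`user@host` name and /tmp path are illustrative only) shows the difference:

```shell
# Create a throwaway "script" to redirect from.
echo 'echo hello' > /tmp/redirect_demo.sh

# Unquoted: the invoking shell consumes "< file"; the command's argv
# only ever contains ssh / user@host / bash.
sh -c 'printf "argv:%s\n" "$@"' demo ssh user@host bash < /tmp/redirect_demo.sh

# Quoted: the redirect survives inside the single command-string
# argument, so it would later be evaluated on the worker side.
sh -c 'printf "argv:%s\n" "$@"' demo 'ssh user@host bash < /tmp/redirect_demo.sh'
```

If that diagnosis applies here, quoting the remote part so the redirect is
evaluated on the worker (reading the copy that --basefile transferred
there) may be what you want, e.g.:

$ parallel --basefile /home/user/myscript.sh --controlmaster -S worker1,worker2,...,workerN 'ssh user@{} bash < /home/user/myscript.sh' ::: target1 target2 ... targetN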

On Mon, Feb 20, 2017 at 10:00 AM, Ole Tange <[email protected]> wrote:

> On Sun, Feb 19, 2017 at 2:28 AM, aero <[email protected]> wrote:
>
> > I want to run /home/user/myscript.sh once on remote [SERVER LIST...]
> > -S worker1,worker2 servers are only gateways to executed ssh command on
> > remote servers.
> > All needed env. variables in myscript.sh, so I don't think --env
> variable is
> > necessary.
> >
> > MY TEST:
> >
> > No 1)
> >
> > $ parallel --tag -k --nonall -S worker1,worker2 echo {} ::: s1 s2 s3 s4
> > worker2        s1
> > worker2        s2
> > worker2        s3
> > worker2        s4
> > worker1 s1
> > worker1 s2
> > worker1 s3
> > worker1 s4
> >
> > No 2)
> >
> > $ parallel --tag -k -S worker1,worker2 echo {} ::: s1 s2 s3 s4
> > s1      s1
> > s2      s2
> > s3      s3
> > s4      s4
> >
> > What I want is No 2.
> > Am I doing wrong ?
>
> I am sorry, but I do not understand the problem you are describing.
>
> Does it have anything to do with ssh via a jumphost? Then this may be
> helpful:
>
> https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Using-remote-computers-behind-NAT-wall
>
>
> /Ole
>
