Hmm,
the "--controlmaster -S worker1,worker2,..." option from my previous mail
doesn't work as I expected.
The master server still forks an ssh connection to a worker server for every
job it dispatches to the workers,
so the process-forking load on the master server is not reduced.


I am going to try the following instead:
send a divided list to the workers, and let each worker run its own parallel
against the part of the list it receives from the master.

cat SERVERLIST.txt | parallel --pipe -N[ int(lines count of SERVERLIST.txt/
number of workers) ] -S worker1,worker2,... 'cat | parallel ...'
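As a concrete sketch of that idea (the worker names, the worker count of 3, and the paths are assumptions for illustration, not from a real setup), the chunk size can be computed with ceiling division so no targets are dropped, and --basefile can push myscript.sh to the workers:

```shell
#!/bin/sh
# Sketch only: split SERVERLIST.txt evenly across the workers and let each
# worker run its own inner parallel. Worker names and paths are made up.

# Sample target list, for illustration.
printf 'target%d\n' 1 2 3 4 5 6 7 > SERVERLIST.txt

WORKERS=3                               # assumed number of worker servers
TOTAL=$(wc -l < SERVERLIST.txt)
# Ceiling division: round up so an uneven split does not leave targets behind.
CHUNK=$(( (TOTAL + WORKERS - 1) / WORKERS ))
echo "$CHUNK"    # 7 targets over 3 workers -> chunks of 3

# The actual run might then look like this (not executed here; it needs GNU
# parallel on the workers and ssh keys in place). --basefile copies
# myscript.sh to each worker; the quoted redirect is evaluated on the worker,
# so each ssh job gets the script on stdin while the inner parallel reads
# target names from the piped chunk:
#
# cat SERVERLIST.txt |
#   parallel --pipe --basefile /home/user/myscript.sh -N"$CHUNK" \
#     -S worker1,worker2,worker3 \
#     'parallel "ssh user@{} bash < /home/user/myscript.sh"'
```

Note the redirect in the inner command must be quoted so it runs on the worker, not on the master; otherwise the inner parallel's stdin (the chunk of target names) would be clobbered by the script file.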

Is there any other good way to achieve a job like this?



On Mon, Feb 20, 2017 at 3:05 PM, aero <[email protected]> wrote:

> Sorry for my lack of expressiveness.
>
> Master
> |-> target1
> |-> target2
> |-> target3
> .
> .
> +-> targetN
>
>
> I want to execute myscript.sh via ssh once on each of the target1~N servers.
> But the master server lacks the capacity to fork that many ssh processes at
> the same time.
>
> $ parallel ssh user@{} bash < /home/user/myscript.sh ::: target1 target2
> ... targetN
>
> So I want to distribute the load of forking ssh to several worker servers
> with -S option.
> and myscript.sh should be executed ONCE on each of the target1~N servers.
> The worker servers are just distributed gateways for executing myscript.sh via
> ssh on the remote target servers.
>
> Master
> |-> worker1 ---> |
> |-> worker2 ---> |  target1~N
> |-> worker3 \/-> |
> +-> workerN /\-> |
>
>        distributed by parallel
>    execute script once on each target1~N
>
>
> $ parallel --basefile /home/user/myscript.sh --controlmaster -S
> worker1,worker2,...,workerN ssh user@{} bash < /home/user/myscript.sh
> ::: target1 target2 ... targetN
>
> On Mon, Feb 20, 2017 at 10:00 AM, Ole Tange <[email protected]> wrote:
>
>> On Sun, Feb 19, 2017 at 2:28 AM, aero <[email protected]> wrote:
>>
>> > I want to run /home/user/myscript.sh once on the remote [SERVER LIST...]
>> > servers.
>> > The -S worker1,worker2 servers are only gateways used to execute the ssh
>> > command on the remote servers.
>> > All the needed env. variables are set in myscript.sh, so I don't think
>> > --env is necessary.
>> >
>> > MY TEST:
>> >
>> > No 1)
>> >
>> > $ parallel --tag -k --nonall -S worker1,worker2 echo {} ::: s1 s2 s3 s4
>> > worker2        s1
>> > worker2        s2
>> > worker2        s3
>> > worker2        s4
>> > worker1 s1
>> > worker1 s2
>> > worker1 s3
>> > worker1 s4
>> >
>> > No 2)
>> >
>> > $ parallel --tag -k -S worker1,worker2 echo {} ::: s1 s2 s3 s4
>> > s1      s1
>> > s2      s2
>> > s3      s3
>> > s4      s4
>> >
>> > What I want is No 2.
>> > Am I doing something wrong?
>>
>> I am sorry, but I do not understand the problem you are describing.
>>
>> Does it have anything to do with ssh via a jumphost? Then this may be
>> helpful:
>>
>> https://www.gnu.org/software/parallel/man.html#EXAMPLE:-Using-remote-computers-behind-NAT-wall
>>
>>
>> /Ole
>>
>
>
