My understanding is that you can use, for example, -j50% with --sshlogin to
use half the cores on each machine in your cluster.  You could also look
into --load and/or --nice, or running your jobs with niceload(1), to
dynamically use all available CPU while also leaving capacity for others.
These and other good options are explained in the manual.
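
For example, something like the following should work (the hostnames and the
do_work command are just placeholders for your own nodes and job):

  parallel -j50% --sshlogin node1,node2,node3 --load 80% --nice 17 do_work {} ::: inputs/*

Here -j50% starts jobs on half the detected cores of each sshlogin, --load 80%
holds back new jobs while a machine's load is above 80%, and --nice 17 runs the
jobs at low priority so other users of the node are not starved.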


On Tue, Feb 5, 2013 at 3:19 PM, yacob sen <[email protected]> wrote:

> Dear All,
>
> I would like to use GNU Parallel on an HPC machine (cluster) with several
> nodes and a handful of cores attached to each node.
>
> Up until now I have been using my local machine or a server with multiple
> processor cores, which I can count using
>
> cat /proc/cpuinfo | grep "processor" | wc -l
>
> and supply this number, depending on how intensive the work is, so as not to
> slow down the server and to free up some processor cores for other users:
>
>  parallel --eta -jn
>
> where n is the number of processor cores that I want to use.
>
> How does this extend to a cluster with several nodes and cores? Can the
> same command be used? If someone else in the cluster and I are assigned the
> same node, can parallel be adversely affected in terms of speed?
>
> I am looking forward to hearing from you.
>
> Regards,
>
> Yacob
>
