[OMPI users] How does binding option affect network traffic?
On Sep 5, 2014, at 11:49 PM, Ralph Castain wrote:
> It would be about the worst thing you can do, to be honest. Reason is that
> each socket is typically a separate NUMA region, and so the shared memory
> system would be sub-optimized in that configuration. It would be much better
> to map-by [...]
[...] work out so that no process was bound across a socket
boundary, as that is really bad.
HTH
Ralph
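As a concrete illustration of the boundary problem Ralph describes, on a hypothetical 2-socket, 6-cores-per-socket node with sequential core numbering (the flags are standard Open MPI 1.8 options; the executable name and PE counts are illustrative, not from the thread):

```shell
# Risky: packing 4 PEs per rank by core can leave a rank straddling the
# socket boundary (e.g. a rank bound to cores 4-7 spans sockets 0 and 1),
# so its shared-memory traffic crosses NUMA regions.
mpirun -np 3 --map-by core:pe=4 --report-bindings ./a.out

# Better: map by socket so each rank's cores stay inside one NUMA region.
mpirun -np 3 --map-by socket:pe=4 --report-bindings ./a.out
```

`--report-bindings` prints each rank's binding mask, which is the quickest way to check whether any rank crosses a socket boundary on your own hardware.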
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Jeff Squyres
(jsquyres)
Sent: Friday, September 05, 2014 10:37 AM
To: Open MPI User's List
Subject: Re: [OMPI users] How does binding option affect network traffic?

I'm confused, then: why wouldn't you want to minimize the number of servers
that a single job runs on?

I ask because it sounds to [...]

[...] If we just fill up two nodes as you suggest, we overload the RAM on those
two nodes.
Ah, ok -- I think I missed this part of the thread: each of your individual MPI
processes sucks up huge gobs of memory.

So just to be clear, in general: you don't intend to run more [...]

> -Original Message-
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of
> tmish...@jcity.maeda.co.jp
> Sent: Friday, August 29, 2014 5:24 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] How does binding option affect network traffic?
Hi,

Your cluster is very similar to ours, where Torque and OpenMPI are installed.
I would use this cmd line:

#PBS -l nodes=2:ppn=12
mpirun --report-bindings -np 16

Here --map-by socket:pe=1 and -bind-to core is assumed as the default setting.
Then, you can run 10 jobs independently and simultaneously [...]
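Spelled out with those defaults made explicit, the job script above would look roughly like this (the explicit `--map-by`/`--bind-to` flags restate what tmishima says Open MPI 1.8 assumes by default; the executable name `my_app` is an assumption, not from the post):

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=12

# Equivalent to the stated Open MPI 1.8 defaults: map round-robin by
# socket with one processing element per rank, bind each rank to a core.
mpirun --report-bindings -np 16 \
       --map-by socket:pe=1 \
       --bind-to core \
       ./my_app
```

With 16 ranks on 2 x 12-core nodes this leaves 8 cores free, which is what allows several such jobs to share the allocation.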
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Friday, August 29, 2014 3:26 PM
To: Open MPI Users
Subject: Re: [OMPI users] How does binding option affect network traffic?

On Aug 29, 2014, at 10:51 AM, McGrattan, Kevin B. Dr.
<kevin.mcgrat...@nist.gov> wrote:

> Thanks for the tip. I understand how using the --cpuset option would help
> me [...]

[...] run a maximum of 6 mpirun's at a time across a given set of
nodes. So you'd need to stage your allocations correctly to make it work.
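Staging concurrent mpirun invocations onto disjoint cores might look like the sketch below. This is an assumption-laden illustration, not Ralph's exact recipe: the core ranges presume a 2-socket, 6-cores-per-socket node with sequential numbering, the job names are invented, and you should confirm the exact spelling of the CPU-set option (`--cpu-set` in Open MPI 1.8-era `mpirun --help`) on your installation:

```shell
# Job 1: confine its ranks to cores 0-5 (socket 0) on each allocated node.
mpirun -np 16 --cpu-set 0-5 --bind-to core ./job1 &

# Job 2: cores 6-11 (socket 1) on the same nodes, so the two jobs
# never compete for the same cores.
mpirun -np 16 --cpu-set 6-11 --bind-to core ./job2 &
wait
```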
Date: Thu, 28 Aug 2014 13:27:12 -0700
From: Ralph Castain
To: Open MPI Users
Subject: Re: [OMPI users] How does binding option affect network traffic?
Hi,

On 28.08.2014 at 20:50, McGrattan, Kevin B. Dr. wrote:

> My institute recently purchased a linux cluster with 20 nodes; 2 sockets per
> node; 6 cores per socket. OpenMPI v 1.8.1 is installed. I want to run 15
> jobs. Each job requires 16 MPI processes. For each job, I want to use two
> cores on each node, mapping by socket. If I use these options:
>
> #PBS -l [...]
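The layout the poster describes (16 ranks, two cores per node, one per socket) can be sketched as a job script. The `ppr:1:socket` form is Open MPI 1.8 mapping syntax; the node count of 8 follows from 16 ranks at 2 per node, and the executable name is an assumption since the original `#PBS -l` line is truncated:

```shell
#!/bin/bash
# 8 nodes x 2 cores per node = 16 cores for 16 ranks
#PBS -l nodes=8:ppn=2

# One rank per socket on every node, each bound to a single core,
# spreading each job thinly across the cluster.
mpirun -np 16 --map-by ppr:1:socket --bind-to core --report-bindings ./my_app
```

Whether spreading thinly like this or packing onto few nodes is better is exactly the trade-off debated above: packing favors shared-memory communication, spreading favors per-node RAM headroom.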