Re: [OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-07-26 Thread MM
On 16 June 2016 at 00:46, Gilles Gouaillardet wrote:
> Here is the idea on how to get the number of tasks per node
>
> MPI_Comm intranode_comm;
> int tasks_per_local_node;
> MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &intranode_comm);
> MPI_Comm_size(intranode_comm, &tasks_per_local_node);
> MPI_Comm_free(&intranode_comm);

Re: [OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-06-15 Thread Gilles Gouaillardet
Here is the idea on how to get the number of tasks per node:

MPI_Comm intranode_comm;
int tasks_per_local_node;
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &intranode_comm);
MPI_Comm_size(intranode_comm, &tasks_per_local_node);
MPI_Comm_free(&intranode_comm);

Re: [OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-06-15 Thread MM
On 14 June 2016 at 13:56, Gilles Gouaillardet wrote:
On Tuesday, June 14, 2016, MM wrote:
> Hello,
> I have the following 3 1-socket nodes:
>
> node1: 4GB RAM, 2-core: rank 0, rank 1
> node2: 4GB RAM, 4-core: rank 2, rank 3, rank 4, rank 5
> node3: 8GB RAM, 4-core: rank 6, rank 7, rank 8, rank 9

Re: [OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-06-14 Thread Gilles Gouaillardet
Note that if your program is synchronous, it will run at the speed of the slowest task (e.g. the tasks on node2, with 1 GB per task, will make the other tasks, which have 2 GB per task, wait). You can use MPI_Comm_split_type to create intra-node communicators; from those you can then work out how much memory is available per task.

[OMPI users] scatter/gather, tcp, 3 nodes, homogeneous, # RAM

2016-06-14 Thread MM
Hello,
I have the following 3 1-socket nodes:

node1: 4GB RAM, 2-core: rank 0, rank 1
node2: 4GB RAM, 4-core: rank 2, rank 3, rank 4, rank 5
node3: 8GB RAM, 4-core: rank 6, rank 7, rank 8, rank 9

I have a model that takes an input and produces an output, and I want to run this model for N possible combinations