This might be a bug that has been fixed - can you try the 1.10.3rc? If it doesn’t work, I’ll try to quickly fix it.
> On Apr 29, 2016, at 10:59 AM, Scott Shaw <ss...@sgi.com> wrote:
>
> I am using an --app file to run a serial application on N compute nodes,
> and each compute node has 24 cores available. If I only want to use one
> core to execute the serial app, I get a "not enough slots available" error
> when running OMPI. How do you define the slots parameter to inform OMPI
> that a total of 24 cores are available per node when using an app file?
> I need to contain all parameters in the --app file, since any additional
> options passed on the mpirun command line are ignored.
>
> io/jobs> mpirun -V
> mpirun (Open MPI) 1.10.2
>
> io/jobs> mpirun --app cmd.file
> --------------------------------------------------------------------------
> There are not enough slots available in the system to satisfy the 2 slots
> that were requested by the application:
>   uptime
>
> Either request fewer slots for your application, or make more slots
> available for use.
> --------------------------------------------------------------------------
>
> io/jobs> cat cmd.file
> --host hosta -np 1 convertslice input1 output1
> --host hosta -np 1 convertslice input2 output2
> --host hostb -np 1 convertslice input3 output3
> --host hostb -np 1 convertslice input4 output4
>
> Following is the lscpu output from one of the compute nodes, showing 24
> cores and 24 hyper-threads available.
>
> io/jobs> lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                48
> On-line CPU(s) list:   0-47
> Thread(s) per core:    2
> Core(s) per socket:    12
> Socket(s):             2
> NUMA node(s):          2
> Vendor ID:             GenuineIntel
> CPU family:            6
> Model:                 63
> Stepping:              2
> CPU MHz:               2500.092
> BogoMIPS:              4999.93
> Virtualization:        VT-x
> L1d cache:             32K
> L1i cache:             32K
> L2 cache:              256K
> L3 cache:              30720K
> NUMA node0 CPU(s):     0-11,24-35
> NUMA node1 CPU(s):     12-23,36-47
>
> Any guidance would be greatly appreciated.
>
> Thanks,
> Scott
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/04/29055.php
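
For anyone hitting this before trying the 1.10.3rc, a hedged workaround sketch: declare the per-node slot counts in a hostfile rather than relying on mpirun to infer them from the `--host` entries in the app file. `slots=N` hostfile entries and the `--default-hostfile` option are documented mpirun features, but whether they interact cleanly with `--app` on 1.10.2 is not verified here (the hostnames `hosta`/`hostb` and the `convertslice` binary come from the original post):

```shell
# Sketch only: declare 24 slots per node in a hostfile so mpirun knows
# the full core count, instead of counting one slot per --host mention.
cat > hostfile.txt <<'EOF'
hosta slots=24
hostb slots=24
EOF

# App file reproduced from the original post:
cat > cmd.file <<'EOF'
--host hosta -np 1 convertslice input1 output1
--host hosta -np 1 convertslice input2 output2
--host hostb -np 1 convertslice input3 output3
--host hostb -np 1 convertslice input4 output4
EOF

# Untested on 1.10.2: the original post reports that extra mpirun
# command-line options are ignored when --app is used, so this may
# still require the 1.10.3rc suggested above.
# mpirun --default-hostfile hostfile.txt --app cmd.file
```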