Anne, the output of "cat /proc/cpuinfo" on the node where you are running "hostname" may help those trying to answer.
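As a quick way to pull the socket count Tom is asking about out of /proc/cpuinfo, you can count the unique "physical id" values. A minimal sketch; the heredoc below stands in for a real /proc/cpuinfo (the sample lines are invented for illustration, showing a two-socket node):

```shell
# Count physical sockets by tallying unique "physical id" lines.
# On a real node, replace the heredoc with:
#   grep "physical id" /proc/cpuinfo | sort -u | wc -l
sockets=$(sort -u <<'EOF' | wc -l
physical id : 0
physical id : 0
physical id : 1
physical id : 1
EOF
)
echo "sockets: $sockets"
```

For this sample input the count is 2, matching Ralph's guess that node65 has two sockets.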
-Tom

> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Ralph Castain
> Sent: Monday, July 16, 2012 2:47 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] openmpi tar.gz for 1.6.1 or 1.6.2
>
> I gather there are two sockets on this node? So the second cmd line is
> equivalent to leaving "num-sockets" off of the cmd line?
>
> I haven't tried what you are doing, so it is quite possible this is a bug.
>
>
> On Jul 16, 2012, at 1:49 PM, Anne M. Hammond wrote:
>
> > Thanks!
> >
> > Built the latest snapshot. Still getting an error when trying to run
> > on only one socket (see below). Is there a workaround?
> >
> > [hammond@node65 bin]$ ./mpirun -np 4 --num-sockets 1 --npersocket 4 hostname
> > --------------------------------------------------------------------------
> > An invalid physical processor ID was returned when attempting to
> > bind an MPI process to a unique processor.
> >
> > This usually means that you requested binding to more processors than
> > exist (e.g., trying to bind N MPI processes to M processors, where
> > N > M). Double check that you have enough unique processors for all
> > the MPI processes that you are launching on this host.
> >
> > Your job will now abort.
> > --------------------------------------------------------------------------
> > --------------------------------------------------------------------------
> > mpirun was unable to start the specified application as it
> > encountered an error:
> >
> > Error name: Fatal
> > Node: node65.cl.corp.com
> >
> > when attempting to start process rank 0.
> > --------------------------------------------------------------------------
> > 4 total processes failed to start
> >
> >
> > [hammond@node65 bin]$ ./mpirun -np 4 --num-sockets 2 --npersocket 4 hostname
> > node65.cl.corp.com
> > node65.cl.corp.com
> > node65.cl.corp.com
> > node65.cl.corp.com
> > [hammond@node65 bin]$
> >
> >
> > On Jul 16, 2012, at 12:56 PM, Ralph Castain wrote:
> >
> >> Jeff is at the MPI Forum this week, so his answers will be delayed.
> >> Last I heard, it was close, but no specific date has been set.
> >>
> >>
> >> On Jul 16, 2012, at 11:49 AM, Michael E. Thomadakis wrote:
> >>
> >>> When is the expected date for the official 1.6.1 (or 1.6.2?) to be
> >>> available?
> >>>
> >>> mike
> >>>
> >>> On 07/16/2012 01:44 PM, Ralph Castain wrote:
> >>>> You can get it here:
> >>>>
> >>>> http://www.open-mpi.org/nightly/v1.6/
> >>>>
> >>>> On Jul 16, 2012, at 10:22 AM, Anne M. Hammond wrote:
> >>>>
> >>>>> Hi,
> >>>>>
> >>>>> For benchmarking, we would like to use openmpi with --num-sockets 1
> >>>>>
> >>>>> This fails in 1.6, but Bug Report #3119 indicates it is changed in
> >>>>> 1.6.1.
> >>>>>
> >>>>> Is 1.6.1 or 1.6.2 available in tar.gz form?
> >>>>>
> >>>>> Thanks!
> >>>>> Anne
> >>>>>
> >>>>> _______________________________________________
> >>>>> users mailing list
> >>>>> us...@open-mpi.org
> >>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> > Anne M. Hammond - Systems / Network Administration - Tech-X Corp
> > hammond_at_txcorp.com 720-974-1840
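For anyone hitting the same binding error: Open MPI's mpirun supports a --report-bindings option that prints where each rank was actually bound, which can help confirm whether the socket restriction is being honored or whether this is the bug Ralph suspects. A hedged sketch, mirroring the failing invocation from the thread (this assumes it is run on the cluster node itself):

```shell
# Ask mpirun to print the binding applied to each rank; the rest of
# the command mirrors Anne's failing invocation above.
mpirun -np 4 --num-sockets 1 --npersocket 4 --report-bindings hostname
```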