Hi,
I'm submitting a job through Torque/PBS; the head node also runs the
Moab scheduler. The .pbs file has this on the resource line:
#PBS -l nodes=2:ppn=4
I've also tried something like:
#PBS -l procs=56
and at the end of the script I'm running:
mpirun -np 8 cat /dev/urandom > /dev/null
or …
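The message is cut off here in the archive. For context, a minimal sketch of the submission script in question (the walltime directive and working-directory change are my assumptions; the resource line and mpirun command come from the message above, and an Open MPI built with Torque/tm support should pick up the allocation automatically):

    #!/bin/bash
    #PBS -l nodes=2:ppn=4
    #PBS -l walltime=00:10:00    # assumed placeholder
    cd $PBS_O_WORKDIR            # run from the submission directory
    # With tm support, mpirun reads the Torque allocation itself, so
    # -np 8 should spread ranks across both nodes (4 per node):
    mpirun -np 8 cat /dev/urandom > /dev/null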
How did you configure OMPI? If you add --display-allocation to your cmd line,
does it show all the nodes?
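For example (a sketch; hostname stands in for the real application):

    # Print the allocation Open MPI detected before launching anything:
    mpirun --display-allocation -np 8 hostname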
On Jan 24, 2013, at 6:34 AM, Sabuj Pattanayek wrote:
> Hi,
>
> I'm submitting a job through Torque/PBS; the head node also runs the
> Moab scheduler. The .pbs file has this on the resource line: …
Aha, with --display-allocation I'm getting:
mca: base: component_find: unable to open
/sb/apps/openmpi/1.6.3/x86_64/lib/openmpi/mca_mtl_psm:
libpsm_infinipath.so.1: cannot open shared object file: No such file
or directory (ignored)
I think the system I compiled it on has different IB libs than …
Or do I just need to compile two versions, one with IB and one without?
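One run-time workaround (a sketch, not a confirmed fix for this cluster; ./app is a placeholder) is to exclude the PSM MTL so the missing libpsm_infinipath.so.1 is never needed. Note that the "(ignored)" in the message means the component is already being skipped, so this mainly silences the warning:

    # '^psm' selects every MTL except psm:
    mpirun --mca mtl ^psm -np 8 ./app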
On Thu, Jan 24, 2013 at 9:09 AM, Sabuj Pattanayek wrote:
> ahha, with --display-allocation I'm getting :
>
> mca: base: component_find: unable to open
> /sb/apps/openmpi/1.6.3/x86_64/lib/openmpi/mca_mtl_psm:
> libpsm_infinipath.so.1: cannot open shared object file: No such file
> or directory (ignored)
On Jan 24, 2013, at 10:10 AM, Sabuj Pattanayek wrote:
> or do i just need to compile two versions, one with IB and one without?
You should not need to; we have OMPI compiled for openib/psm and run that
same install on psm-, tcp-, and verbs (openib)-based gear.
All the nodes assigned to your job have …
I've looked in more detail at the current two MPI_Alltoallv algorithms
and wanted to raise a couple of ideas.
Firstly, the new default "pairwise" algorithm.
* There is no optimisation for sparse/empty messages, compared to the old
basic "linear" algorithm.
* The attached "pairwise-nop" patch adds …
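For anyone who wants to compare the two algorithms directly, the tuned collective component lets you force one at run time (a sketch; the algorithm numbering is as I understand the 1.6-era coll_tuned parameters, and the benchmark binary is a placeholder):

    # 0 = let Open MPI decide, 1 = basic linear, 2 = pairwise
    mpirun --mca coll_tuned_use_dynamic_rules 1 \
           --mca coll_tuned_alltoallv_algorithm 1 \
           -np 8 ./alltoallv_benchmark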
Sure - just add --with-openib=no --with-psm=no to your config line and we'll
ignore it
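That is, something along these lines (the prefix and make options are placeholders):

    ./configure --prefix=/opt/openmpi \
        --with-openib=no --with-psm=no
    make -j4 && make install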
On Jan 24, 2013, at 7:09 AM, Sabuj Pattanayek wrote:
> ahha, with --display-allocation I'm getting :
>
> mca: base: component_find: unable to open
> /sb/apps/openmpi/1.6.3/x86_64/lib/openmpi/mca_mtl_psm:
> libpsm_infinipath.so.1: …
This is for reference and suggestions, as this took me several hours to track
down and the previous discussion on "mpivars.sh" failed to cover this point
(nothing in the FAQ):
I successfully built and installed OpenMPI 1.6.3 using the following on Debian
Linux:
./configure --prefix=/opt/openmpi
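The mpivars.sh point boils down to making the install visible in the environment; a minimal sketch for the prefix above:

    # Without these (or an equivalent sourced mpivars.sh), mpirun and
    # the Open MPI shared libraries won't be found at run time:
    export PATH=/opt/openmpi/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH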
On 01/24/2013 12:40 PM, Michael Kluskens wrote:
> This is for reference and suggestions, as this took me several hours to
> track down and the previous discussion on "mpivars.sh" failed to cover
> this point (nothing in the FAQ):
>
> I successfully built and installed OpenMPI 1.6.3 using the following on …
Dear users,
Maybe something went wrong while I was compiling OpenMPI; I am very new to Linux.
When I try to run LAMMPS using the following command:
/usr/lib64/openmpi/bin/mpirun -n 16 /opt/lammps-21Jan13/lmp_linux < zigzag.in
I get the following errors:
[NTU-2:28895] [[INVALID],INVALID] O…
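A couple of quick sanity checks may help narrow this down (sketches; they assume the /usr/lib64/openmpi prefix from the command above):

    # Confirm which mpirun is actually first on the PATH:
    which mpirun
    # Confirm the version and build settings of this install:
    /usr/lib64/openmpi/bin/ompi_info | head -20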
How was OMPI configured? What type of system are you running on (i.e., what is
the launcher - ssh, lsf, slurm, ...)?
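For instance, the launchers a given build supports can be listed from the install itself (a sketch):

    # The plm framework entries show the available launchers,
    # e.g. rsh/ssh, slurm, tm (Torque):
    ompi_info | grep plm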
On Jan 24, 2013, at 6:35 PM, #YEO JINGJIE# wrote:
> Dear users,
>
> Maybe something went wrong while I was compiling OpenMPI; I am very new to
> Linux. When I try to run LAMMPS …
I built the current 1.6 branch (which hasn't seen any changes that would impact
this function) and was able to execute it just fine on a single-socket machine.
I then gave it your slot-list, which of course failed as I don't have two
active sockets (one is empty), but it appeared to parse the li…