Ted Yu wrote:
I'm trying to run a job based on Open MPI.  For some reason, the program and the 
global communicator are not in sync: MPI reports that there is only one 
processor, whereas there should be 2 or more.  Any advice on where to look?  
Here is my PBS script.  Thanks!

PBS SCRIPT:
#!/bin/sh
### Set the job name
#PBS -N HH
### Declare myprogram non-rerunable
#PBS -r n
### Combine standard error and standard out to one file.
#PBS -j oe
### Have PBS mail you results
#PBS -m ae
#PBS -M ted...@wag.caltech.edu
### Set the queue name, given to you when you get a reservation.
#PBS -q workq
### Specify the number of cpus for your job.  This job will run on 2 cpus
### using 1 node with 2 processes per node.
#PBS -l nodes=1:ppn=2,walltime=70:00:00
# Switch to the working directory; by default PBS launches processes from your home directory.
# Jobs should only be run from /home, /project, or /work; PBS returns results via NFS.
PBS_O_WORKDIR=/temp1/tedhyu/HH
export CODE=/project/source/seqquest/seqquest_source_v261j/hive_CentOS4.5_parallel/build_261j/quest_ompi.x

echo Working directory is $PBS_O_WORKDIR
mkdir -p $PBS_O_WORKDIR
cd $PBS_O_WORKDIR
rm -rf *
cp /ul/tedhyu/fuelcell/HOH/test/HH.in ./lcao.in
cp /ul/tedhyu/atom_pbe/* .
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This jobs runs on the following processors:
echo `cat $PBS_NODEFILE`
Number=`wc -l $PBS_NODEFILE | awk '{print $1}'`

export Number
echo ${Number}
# Define number of processors
NPROCS=`wc -l < $PBS_NODEFILE`
# And the number of hosts
NHOSTS=`cat $PBS_NODEFILE|uniq|wc -l`
echo This job has allocated $NPROCS cpus
echo This job has allocated $NHOSTS hosts
#mpirun -machinefile $PBS_NODEFILE ${CODE} >/ul/tedhyu/fuelcell/HOH/test/HH.out
#mpiexec -np 2  ${CODE} >/ul/tedhyu/fuelcell/HOH/test/HH.out
/opt/mpich-1.2.5.10-ch_p4-gcc/bin/mpirun -machinefile $PBS_NODEFILE -np $NPROCS ${CODE} >/ul/tedhyu/fuelcell/HOH/test/HH.out
cd ..
rm -rf HH



Please note that you are mixing Open MPI (the API/library your code was built against) with MPICH's mpirun. That is a mistake I tend to make, too. If you launch with Open MPI's own mpirun or mpiexec instead, it will probably work.
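A quick way to confirm the mismatch is to check which MPI library the binary actually links against, and then launch with the mpirun/mpiexec from that same installation. A minimal sketch (the helper function is hypothetical, and /bin/ls stands in for quest_ompi.x only so the snippet runs standalone):

```shell
#!/bin/sh
# Sketch: report whether a binary links an MPI library.
# On the real system you would pass the full path to quest_ompi.x.
mpi_linkage() {
    # prints "mpi" if the binary links libmpi, "none" otherwise
    if ldd "$1" 2>/dev/null | grep -q 'libmpi'; then
        echo mpi
    else
        echo none
    fi
}

mpi_linkage /bin/ls   # stand-in binary with no MPI linkage; prints "none"

# If the output names libmpi from an Open MPI prefix, use that same
# prefix's launcher (path below is an assumption, not a verified install):
#   /opt/openmpi/bin/mpirun -np "$NPROCS" "$CODE"
```

When an MPICH mpirun starts an Open MPI binary, MPI_Init never sees the launcher's process information, so every rank believes it is rank 0 of a single-process world — exactly the "only one processor" symptom above.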

Dorian



_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
