Just to help separate out the issues, you might try running the hello_c program
in the OMPI examples directory - this will verify whether the problem is in the
mpirun command or in your program.
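A minimal sketch of that check might look like the following. The examples path is an assumption (using the openmpi-1.6.5 tree mentioned later in this thread); adjust it to wherever your Open MPI source lives.

```shell
# Build the hello_c example that ships in the Open MPI "examples" directory.
# The path below is illustrative -- point it at your own source tree.
cd /home/SWcbbc/openmpi-1.6.5/examples
mpicc hello_c.c -o hello_c

# Run it the same way you run your own program. If this works under SGE
# but your program does not, the problem is in the program, not in mpirun.
mpirun -np 4 ./hello_c
```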
On Sep 4, 2014, at 6:26 AM, Donato Pera wrote:
> Hi,
>
> the text was
Hi,
the text was in the file.err file; in the file.out file I get only the name
of the node where the program runs.
Thanks, Donato.
On 04/09/2014 15:14, Reuti wrote:
> Hi,
>
> On 04.09.2014 at 14:43, Donato Pera wrote:
>
>> using this script :
>>
>> #!/bin/bash
>> #$ -S /bin/bash
>> #$ -pe orte
Hi,
On 04.09.2014 at 14:43, Donato Pera wrote:
> using this script :
>
> #!/bin/bash
> #$ -S /bin/bash
> #$ -pe orte 64
> #$ -cwd
> #$ -o ./file.out
> #$ -e ./file.err
>
> export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
> export OMP_NUM_THREADS=1
>
>
Hi,
using this script :
#!/bin/bash
#$ -S /bin/bash
#$ -pe orte 64
#$ -cwd
#$ -o ./file.out
#$ -e ./file.err
export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
export OMP_NUM_THREADS=1
CPMD_PATH=/home/tanzi/myroot/X86_66intel-mpi/
PP_PATH=/home/tanzi
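For comparison, a complete job script of this shape might end with an mpirun line like the one below. This is a sketch, not the poster's actual script: "./your_program" is a placeholder, and the mpirun invocation assumes a gridengine-aware Open MPI build (confirmed later in this thread via ompi_info), which reads the host allocation from SGE so that no hostfile is needed. $NSLOTS is filled in by SGE from the "-pe orte 64" request.

```shell
#!/bin/bash
#$ -S /bin/bash
#$ -pe orte 64
#$ -cwd
#$ -o ./file.out
#$ -e ./file.err

export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
export OMP_NUM_THREADS=1

# With SGE support built in, mpirun takes the node list from the
# gridengine allocation; $NSLOTS is the slot count SGE granted.
mpirun -np $NSLOTS ./your_program
```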
On 03.09.2014 at 13:11, Donato Pera wrote:
> I get
>
> ompi_info | grep grid
> MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
Good.
> and using this script
>
> #!/bin/bash
> #$ -S /bin/bash
> #$ -pe orte 64
> #$ -cwd
> #$ -o ./file.out
> #$ -e ./file.err
>
>
Hi,
I get
ompi_info | grep grid
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.6.5)
and using this script
#!/bin/bash
#$ -S /bin/bash
#$ -pe orte 64
#$ -cwd
#$ -o ./file.out
#$ -e ./file.err
export LD_LIBRARY_PATH=/home/SWcbbc/openmpi-1.6.5/lib:$LD_LIBRARY_PATH
export
Hi,
On 03.09.2014 at 12:17, Donato Pera wrote:
> I'm using Rocks 5.4.3 with SGE 6.1. I installed
> a new version of Open MPI (1.6.5). When I run
> a script using SGE + Open MPI (1.6.5) on a single node
> I don't have any problems, but when I try to use more nodes
> I get this error:
>
>
> A hostfile
Thanks for your reply. My problem was actually caused by still having
include 'mpif.h' in the code, rather than use mpi. But the info about
NSLOTS, etc. is good to know.
Cheers,
Jason
P.S. I had originally used $MPI_DIR in mpirun call, but changed it to
the explicit directory in the course of
On 04/06/2011 07:09 PM, Jason Palmer wrote:
> Hi,
> I am having trouble running a batch job in SGE using openmpi. I have read
> the faq, which says that openmpi will automatically do the right thing, but
> something seems to be wrong.
>
> Previously I used MPICH1 under SGE without any
OK, the problem was apparently that I was still including mpif.h instead of
using "use mpi". It seems to be working now.
-Original Message-
From: Jason Palmer [mailto:japalme...@gmail.com]
Sent: Wednesday, April 06, 2011 5:01 PM
To: 'Open MPI Users'
Subject: RE: SGE and openmpi
Btw, I did
Are you able to run non-MPI programs like "hostname"?
I ask because that error message indicates that everything started just fine,
but there is an error in your application.
On Apr 6, 2011, at 6:01 PM, Jason Palmer wrote:
> Btw, I did compile openmpi with the --with-sge flag.
>
> I am able
Btw, I did compile openmpi with the --with-sge flag.
I am able to compile a test program using openf90 with no errors or
warnings. But when I try to run a test program that just calls
MPI_INIT(ierr), then MPI_COMM_RANK(ierr), I get the following, whether
statically or dynamically linked, and whether run with
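As suggested earlier in the thread, the --with-sge build can be verified directly. A sketch, assuming the install prefix used elsewhere in this thread (component versions will differ per installation):

```shell
# Rebuild Open MPI with SGE support if the gridengine component is missing.
# The prefix below is illustrative, matching the path used in this thread.
./configure --prefix=/home/SWcbbc/openmpi-1.6.5 --with-sge
make -j4 install

# Verify: a gridengine "ras" component should appear in the output.
ompi_info | grep gridengine
```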
Hi,
I am having trouble running a batch job in SGE using openmpi. I have read
the faq, which says that openmpi will automatically do the right thing, but
something seems to be wrong.
Previously I used MPICH1 under SGE without any problems. I'm avoiding MPICH2
because it doesn't seem to support
I have sent the following experiences to the SGE mailing list, but I
thought I would also try here...
I have been trying out version 1.2b2 for its integration with SGE. The
simple "hello world" test program works fine by itself, but there are
issues when submitting it to SGE.
For small