On 17 Aug 2010, at 21:20, Steve Wise wrote:
> [ompi@hpc-hn1 ~]$ padb --show-jobs --config-option rmgr=orte
> 65427
> [ompi@hpc-hn1 ~]$ padb --all --proc-summary --config-option rmgr=orte
> Warning, failed to locate ranks [0-3]
>
> Any ideas on what I am doing wrong?
Nothing that springs to mind
Hi,
I'm trying to use padb 3.0 to get stack traces of Open MPI / IMB1 runs.
While the job is running, I run the following, but get an error:
[ompi@hpc-hn1 ~]$ padb --show-jobs --config-option rmgr=orte
65427
[ompi@hpc-hn1 ~]$ padb --all --proc-summary --config-option rmgr=orte
Warning, failed to locate ranks [0-3]
I am trying to get OpenMPI built on a Windows machine using Dev
Studio, and I'm not having any luck. I'm hoping someone can point me
in the right direction.
Here are the details:
Environment: Windows 7 (64-bit OS, but I'm performing a 32-bit
build); attempting to build under Dev Studio 2010.
Steps
Hey Yong,
This is very helpful ...
I have spent the morning verifying that the OCTOPUS 3.2 code is
correct and that even other sections of the code that use
MPI_IN_PLACE
are compiled without a problem. Both the working and the non-working
routines properly include the "use mpi_h" module, which is built fr
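For reference, the reduction in question looks like this in the MPI C API
(a minimal, untested sketch; Octopus itself is Fortran, so this only
illustrates the MPI_IN_PLACE semantics and is not the Octopus code):

// in_place.cpp - illustrative MPI_IN_PLACE example, not from Octopus.
// Build with: mpic++ in_place.cpp -o in_place
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // With MPI_IN_PLACE the receive buffer doubles as the send buffer,
    // so each rank's contribution is read from, and the result written
    // to, the same location.
    int sum = rank;
    MPI_Allreduce(MPI_IN_PLACE, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: sum = %d\n", rank, sum);
    MPI_Finalize();
    return 0;
}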
On 17 August 2010 04:16, Manoj Vaghela wrote:
> Hi,
>
> I am compiling a C++ program that makes MPI C function calls, using mpic++.
>
> Does this have any effect on the efficiency/speed of the parallel program?
>
No.
--
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colectora R
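To expand on that "No": the mpic++ wrapper only adds the include and
library flags for the underlying C++ compiler, so calling the MPI C API
from C++ compiles to the same code as from C. A minimal example (an
untested sketch; the file name is made up):

// hello.cpp - C++ program calling the MPI C API directly.
//   mpic++ hello.cpp -o hello
//   mpirun -np 4 ./hello
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::cout << "rank " << rank << " of " << size << std::endl;
    MPI_Finalize();
    return 0;
}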
Hi Richard,
We have reported this to Intel as a bug in 11.1.072. If I understand
it correctly, you are also compiling Octopus with Intel 11.1.072. As we
have tested, Intel compilers 11.1.064 and all the 10.x, GNU, PGI,
etc., do not exhibit this issue at all. We are still waiting for word
from Inte
Hi Gijsbert
This may be more on the Torque side, but not necessarily so.
ClusterResources has decent documentation:
http://www.clusterresources.com/pages/products/torque-resource-manager.php
1) To verify Torque+OpenMPI functionality/support, first try
a non-MPI executable, e.g.:
#PBS -lnodes=4:p
On Aug 17, 2010, at 11:29 , Gijsbert Wiesenekker wrote:
> I have a four-node quad core cluster. I am running OpenMPI (version 1.4.2)
> jobs with Torque (version 2.4.8). I can submit jobs using
> #PBS -lnodes=4:ppn=4
> And 16 processes are launched. However if I use
> #PBS -lnodes=4:ppn=1
> or
>
Hi Nysal,
There is only one thread invoking MPI functions in our applications. The other
threads are related to flexlm protection routines and some self-diagnostic
routines that don't use any MPI functions. I built a version of our
application, just to be sure, without any other thread than the
Hi Eloi,
> Do you think that a thread race condition could explain the hdr->tag value?
Are there multiple threads invoking MPI functions in your application? The
openib BTL is not yet thread safe in the 1.4 release series. There have been
improvements to openib BTL thread safety in 1.5, but it is s
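One quick way to see which thread level the library actually provides is
MPI_Init_thread (a small, untested sketch; what you get back depends on
how Open MPI was configured):

// thread_level.cpp - report the thread support Open MPI grants.
// mpic++ thread_level.cpp -o thread_level
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = MPI_THREAD_SINGLE;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    const char* name = "unknown";
    if (provided == MPI_THREAD_SINGLE)          name = "MPI_THREAD_SINGLE";
    else if (provided == MPI_THREAD_FUNNELED)   name = "MPI_THREAD_FUNNELED";
    else if (provided == MPI_THREAD_SERIALIZED) name = "MPI_THREAD_SERIALIZED";
    else if (provided == MPI_THREAD_MULTIPLE)   name = "MPI_THREAD_MULTIPLE";
    printf("requested MPI_THREAD_MULTIPLE, provided %s\n", name);

    MPI_Finalize();
    return 0;
}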
I have a four-node quad core cluster. I am running OpenMPI (version 1.4.2) jobs
with Torque (version 2.4.8). I can submit jobs using
#PBS -lnodes=4:ppn=4
And 16 processes are launched. However if I use
#PBS -lnodes=4:ppn=1
or
#PBS -lnodes=4
The call to MPI_Init is successful, but the call to
MPI_
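A minimal program to narrow this down, calling MPI_Init, the basic
communicator queries, and a barrier (an untested sketch, to be submitted
under each of the #PBS settings above and the output compared):

// ppn_check.cpp - print rank/size/host, then synchronize.
// mpic++ ppn_check.cpp -o ppn_check
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0, len = 0;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(host, &len);
    printf("rank %d of %d on %s\n", rank, size, host);

    // If MPI_Init succeeds but a later call hangs or fails, this barrier
    // is the first collective to show it.
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}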
Hi Nysal,
This is what I was wondering: whether hdr->tag was expected to be null or not.
I'll soon send a valgrind output to the list, hoping it will help locate the
invalid memory access and understand why reg->cbfunc / hdr->tag are null.
Do you think that a thread race condition coul
On Monday 16 August 2010 19:14:47 Jeff Squyres wrote:
> On Aug 16, 2010, at 10:05 AM, Eloi Gaudry wrote:
> > I did run our application through valgrind but it couldn't find any
> > "Invalid write": there is a bunch of "Invalid read" (I'm using 1.4.2
> > with the suppression file), "Use of uninitial
Hi,
I am compiling a C++ program that makes MPI C function calls, using mpic++.
Does this have any effect on the efficiency/speed of the parallel program?
Thanks.
--
Manoj Vaghela