Thanks,
The messages are small and frequent (they flash metadata across the cluster).
The current approach works fine for small to medium clusters, but I want it to
be able to scale much further, perhaps to several hundred or even a thousand nodes.
It's these larger deployments that concern me. The cu
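For concreteness, the exchange pattern boils down to something like the sketch below. The record layout and names (meta_t, node_id, and so on) are placeholders made up for illustration, and the choice of MPI_Allgather is just one possible way to move the metadata:

#include <mpi.h>

/* Sketch only: every rank contributes one small fixed-size record and
   receives the records of all other ranks in a single collective.
   The field names are placeholders, not the real metadata. */
typedef struct {
    int    node_id;    /* which node this record describes */
    int    load;       /* some small load/health indicator  */
    double timestamp;  /* when the record was produced      */
} meta_t;

/* 'all' must have room for one meta_t per rank in 'comm'. */
static void exchange_metadata(MPI_Comm comm, const meta_t *mine, meta_t *all)
{
    /* MPI_BYTE keeps the sketch short; a real version would probably
       build a matching derived datatype with MPI_Type_create_struct. */
    MPI_Allgather(mine, (int)sizeof(meta_t), MPI_BYTE,
                  all,  (int)sizeof(meta_t), MPI_BYTE, comm);
}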
I would appreciate it if someone with experience with MPI-IO would look at the
simple Fortran program gzipped and attached to this note. It is
embedded in a script, so all that is necessary to run it is to invoke
'testio' from the command line. The program generates a small 2-D input
array, sets up an MPI-IO e
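The attached source isn't reproduced here, but the idea is roughly the following. This is a C sketch of the same pattern (generate a small 2-D array, open a shared file with MPI-IO, and write each rank's piece), not the attached Fortran code; the file name and layout are made up:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    enum { NCOLS = 8 };
    double row[NCOLS];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank owns one row of the small 2-D array. */
    for (int j = 0; j < NCOLS; j++)
        row[j] = rank * NCOLS + j;

    /* Open a shared file and write each row at its own offset, collectively. */
    MPI_File_open(MPI_COMM_WORLD, "testio.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, (MPI_Offset)rank * NCOLS * sizeof(double),
                          row, NCOLS, MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    MPI_Finalize();
    return 0;
}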
On May 10, 2011, at 08:10 , Tim Prince wrote:
> On 5/10/2011 6:43 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
>>
>> Hi,
>>
>> I compile a parallel program with OpenMPI 1.4.1 (compiled with Intel
>> compilers 12 from the Composer XE package). This program is linked to the MUMPS
>> library 4.9.2, compi
On 5/10/2011 6:43 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
Hi,
I compile a parallel program with OpenMPI 1.4.1 (compiled with Intel
compilers 12 from the Composer XE package). This program is linked to the MUMPS
library 4.9.2, compiled with the same compilers and linked with Intel MKL.
The OS is lin
Good day,
I am new to the Open MPI package, and so am starting at the beginning. I
have little if any desire to build the binaries, so I was glad to see a
Windows binary release.
I started with what I think is the minimum program:
#include "mpi.h"
int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);   /* start up MPI */
    MPI_Finalize();           /* and shut it down again */
    return 0;
}
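(As a general note, and not specific to the Windows package: with a typical Open MPI install a program like this is built with the mpicc wrapper and launched with mpirun, e.g. "mpicc hello.c -o hello" and then "mpirun -np 2 ./hello". The Windows binary release may instead expect you to compile against its headers and import libraries with your own compiler.)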
On May 10, 2011, at 2:30 AM, hi wrote:
>> You didn't answer my prior questions. :-)
> I am observing this crash using MPI_ALLREDUCE() in a test program
> which does not have any memory corruption issues. ;)
Can you send the info listed on the help page?
>> I ran your test program with -np 2 a
Hi,
I compile a parallel program with OpenMPI 1.4.1 (compiled with Intel
compilers 12 from the Composer XE package). This program is linked to the MUMPS
library 4.9.2, compiled with the same compilers and linked with Intel
MKL. The OS is Linux Debian.
No error occurs when compiling or running the job, but the
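For context, a program that drives MUMPS over MPI usually has roughly the shape below. This is only a sketch modelled on the small C example distributed with MUMPS, not the actual application (which is presumably Fortran), and the tiny test matrix is made up:

#include "mpi.h"
#include "dmumps_c.h"

int main(int argc, char *argv[])
{
    DMUMPS_STRUC_C id;
    int myid;
    /* Tiny diagonal system [2 0; 0 3] x = [4 9], so x should come back
       as [2 3]. Indices are 1-based, as MUMPS follows Fortran conventions. */
    int irn[] = {1, 2};
    int jcn[] = {1, 2};
    double a[] = {2.0, 3.0};
    double rhs[] = {4.0, 9.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    id.comm_fortran = -987654;  /* USE_COMM_WORLD: solve on MPI_COMM_WORLD */
    id.par = 1;                 /* the host rank also takes part in the work */
    id.sym = 0;                 /* unsymmetric matrix */
    id.job = -1;                /* JOB = -1: initialize the MUMPS instance */
    dmumps_c(&id);

    if (myid == 0) {            /* matrix and right-hand side live on the host */
        id.n = 2;  id.nz = 2;
        id.irn = irn;  id.jcn = jcn;
        id.a = a;  id.rhs = rhs;
    }
    id.job = 6;                 /* JOB = 6: analysis + factorization + solve */
    dmumps_c(&id);

    id.job = -2;                /* JOB = -2: release the instance */
    dmumps_c(&id);
    MPI_Finalize();
    return 0;
}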
Dear all,
we succeeded in building several versions of Open MPI, from 1.2.8 to 1.4.3,
with Intel Composer XE 2011 (aka 12.0).
However, we found a threshold in the number of cores (depending on
the application: IMB, xhpl, or user applications,
and on the number of required cores) above which the a
Hi Jeff,
> You didn't answer my prior questions. :-)
I am observing this crash using MPI_ALLREDUCE() in a test program
which does not have any memory corruption issues. ;)
> I ran your test program with -np 2 and -np 4 and it seemed to work ok.
Can you please let me know what environment (incl
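For reference, the kind of test being discussed boils down to something like the sketch below; this is not the actual attached test program, just a minimal MPI_Allreduce check:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Every rank contributes rank+1; the sum should be size*(size+1)/2. */
    local = rank + 1.0;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d of %d: sum = %g (expected %g)\n",
           rank, size, global, size * (size + 1) / 2.0);

    MPI_Finalize();
    return 0;
}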