[OMPI users] 1.4.2 build problem

2010-06-01 Thread John R. Cary
After patching, I get:

make[3]: Entering directory `/scr_iter/cary/facetspkgs/builds/openmpi-1.4.2/nodl/ompi/contrib/vt/vt'
make[3]: *** No rule to make target `/scr_iter/cary/facetspkgs/builds/openmpi/ompi/contrib/vt/vt/m4/acinclude.compinst.m4', needed by `/scr_iter/cary/facetspkgs/builds/o
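The missing target is a VampirTrace m4 macro, so make is stumbling over the vt contrib package. One possible workaround, a sketch only (the thread does not confirm this is the fix), is to leave VampirTrace out of the build with the configure switch the 1.4 series provides, then rebuild; "[other configure options]" is a placeholder for whatever flags were already in use:

$ ./configure --enable-contrib-no-build=vt [other configure options]
$ make clean && make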

Re: [OMPI users] heterogeneous cluster setup

2010-06-01 Thread Shiqing Fan
Hi, Unfortunately, we don't have such support at the moment. Regards, Shiqing

On 2010-6-1 8:49 PM, Jeff Squyres wrote:
> Shiqing -- Do we support mixed Windows + Linux jobs?
>
> On Jun 1, 2010, at 1:17 PM, Hicham Lahlou wrote:
>> Hi all, I have a question regarding the support for heterogeneou

Re: [OMPI users] heterogeneous cluster setup

2010-06-01 Thread Jeff Squyres
Shiqing -- Do we support mixed Windows + Linux jobs?

On Jun 1, 2010, at 1:17 PM, Hicham Lahlou wrote:
> Hi all,
>
> I have a question regarding the support for heterogeneous systems. Using the
> latest version of OpenMPI, is it possible to have a single MPI application
> running across a clu

[OMPI users] heterogeneous cluster setup

2010-06-01 Thread Hicham Lahlou
Hi all, I have a question regarding the support for heterogeneous systems. Using the latest version of OpenMPI, is it possible to have a single MPI application running across a cluster of mixed operating systems, say, Windows and Linux? Thanks, Hicham

Re: [OMPI users] Segmentation fault in MPI_Finalize with IB hardware and memory manager.

2010-06-01 Thread Jeff Squyres
Are you running on nodes with both MX and OpenFabrics? I don't know if this is a well-tested scenario -- there may be some strange interactions in the registered memory management between MX and OpenFabrics verbs. FWIW, you should be able to disable Open MPI's memory management at run time in
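For reference, the run-time switch being alluded to is presumably the ptmalloc2 memory hooks; a minimal sketch, assuming the 1.4-series memory manager, is to set the disable flag in the environment before launching (it has to be an environment variable rather than an mpirun --mca option, because the hooks are installed when the executable is loaded; ./your_app is a placeholder):

$ export OMPI_MCA_memory_ptmalloc2_disable=1
$ mpirun -np 4 ./your_app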

[OMPI users] reasons for jitter other than paffinity

2010-06-01 Thread Javier Fernández
Hello, I was running the pi demo on multicore machines (Pentium Dual, Core i7) to see it scale, but sometimes the time measurements return disparate results. The FAQ suggests processor affinity as one possible reason for that. For instance, the demo takes 3s on one core of the Pentium Dual $ mpirun -h
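One way to test the affinity hypothesis, as a sketch assuming the 1.4-series MCA parameters (./pi stands in for the demo binary), is to pin each rank to a core and compare timings across repeated runs:

$ mpirun --mca mpi_paffinity_alone 1 -np 2 ./pi

The 1.4-series mpirun can also bind ranks to cores and print the resulting bindings, which helps confirm where each process actually landed:

$ mpirun -bind-to-core -report-bindings -np 2 ./pi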