Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-30 Thread Ray Muno via users
: nvc++ C++ compiler absolute: /stage/opt/NV_hpc_sdk/Linux_x86_64/21.9/compilers/bin/nvc++ Fort compiler: nvfortran On 9/30/21 12:18 PM, Ray Muno via users wrote: OK, starting clean. OS CentOS 7.9  (7.9.2009) mlnxofed 5.4-1.0.3.0 UCX 1.11.0 (from mlnxofed) hcoll-4.7.3199 (from mlnxofed
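A configure invocation consistent with the component list above would look roughly like the sketch below; the compiler path comes from the post, but the UCX/HCOLL locations and install prefix are illustrative guesses, not taken from the thread.

    NVHPC=/stage/opt/NV_hpc_sdk/Linux_x86_64/21.9
    ./configure CC=$NVHPC/compilers/bin/nvc \
                CXX=$NVHPC/compilers/bin/nvc++ \
                FC=$NVHPC/compilers/bin/nvfortran \
                --with-ucx=/usr \
                --with-hcoll=/opt/mellanox/hcoll \
                --prefix=/opt/openmpi/4.1.1-nvhpc-21.9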

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-30 Thread Ray Muno via users
='nvfortran -fPIC' (which is kludgey). -Ray Muno On 9/30/21 8:13 AM, Gilles Gouaillardet via users wrote: Ray, there is a typo, the configure option is --enable-mca-no-build=op-avx Cheers, Gilles - Original Message - Added --enable-mca-no-build=op-avx to the configure line. Still
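Putting the two workarounds from this thread together (wrapping -fPIC into the compiler names and skipping the op/avx component), a configure line would look something like the following; the install prefix is illustrative.

    ./configure CC='nvc -fPIC' CXX='nvc++ -fPIC' FC='nvfortran -fPIC' \
                --enable-mca-no-build=op-avx \
                --prefix=/opt/openmpi/4.1.1-nvhpc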

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
this helps, -- bennet On Wed, Sep 29, 2021 at 12:29 PM Ray Muno via users <users@lists.open-mpi.org> wrote: I did try that and it fails at the same place. Which version of the nVidia HPC-SDK are you using? I am using 21.7. I see there is an upgrade to 21.9, w

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
with the -fPIC in place, then remake and see if that also causes the link error to go away, that would be a good start. Hope this helps, -- bennet On Wed, Sep 29, 2021 at 12:29 PM Ray Muno via users <users@lists.open-mpi.org> wrote: I did try that and it fails at the s

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
to indicate any major changes. -Ray Muno On 9/29/21 10:54 AM, Jing Gong wrote: Hi, Before Nvidia persons look into details, probably you can try to add the flag "-fPIC" to the nvhpc compiler, like cc="nvc -fPIC", which at least worked

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
add '-fPIC' to the CFLAGS, CXXFLAGS, FCFLAGS (maybe not needed for all of those)." Tried adding these, still fails at the same place. -- Ray Muno IT Systems Administrator e-mail: m...@umn.edu University of Minnesota Aerospace Engineering and Mechanics
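For reference, the suggestion amounts to leaving the compiler names alone and passing the flag through the *FLAGS variables, roughly as below (which, per the report above, still failed in the same place).

    ./configure CC=nvc CXX=nvc++ FC=nvfortran \
                CFLAGS=-fPIC CXXFLAGS=-fPIC FCFLAGS=-fPIC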

Re: [OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
Thanks, I looked through previous emails here in the user list. I guess I need to subscribe to the Developers list. -Ray Muno On 9/29/21 9:58 AM, Jeff Squyres (jsquyres) wrote: Ray -- Looks like this is a dup of https://github.com/open-mpi/ompi/issues/8919

[OMPI users] OpenMPI 4.1.1, CentOS 7.9, nVidia HPC-SDk, build hints?

2021-09-29 Thread Ray Muno via users
/OpenMPI/BUILD/4.1.1/ROME/NV-HPC/21.7/ompi' make: *** [all-recursive] Error 1 -- Ray Muno IT Systems Administrator e-mail: m...@umn.edu University of Minnesota Aerospace Engineering and Mechanics

Re: [OMPI users] openmpi/pmix/ucx

2020-02-07 Thread Ray Muno via users
users using GCC, PGI, Intel and AOCC compilers with this config. PGI was the only one that was a challenge to build due to conflicts with HCOLL. -Ray Muno On 2/7/20 10:04 AM, Michael Di Domenico via users wrote: i haven't compiled openmpi in a while, but i'm in the process of upgrading our
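For context, a UCX/HCOLL-enabled build of this sort is typically configured along these lines; the HCOLL path is the usual MLNX_OFED location but should be checked on the local system, and the compiler choice is just one of the toolchains mentioned above.

    ./configure CC=gcc CXX=g++ FC=gfortran \
                --with-ucx=/usr \
                --with-hcoll=/opt/mellanox/hcoll \
                --with-pmix=internal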

Re: [OMPI users] OpenMPI 4.0.2 with PGI 19.10, will not build with hcoll

2020-01-28 Thread Ray Muno via users
I opened a case with pgroup support regarding this. We are also using Slurm along with HCOLL. -Ray Muno On 1/26/20 5:52 AM, Åke Sandgren via users wrote: Note that when built against SLURM it will pick up pthread from libslurm.la too. On 1/26/20 4:37 AM, Gilles Gouaillardet via users wrote
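A PGI 19.10 build against Slurm and HCOLL of the kind being discussed would be configured roughly as follows; the HCOLL path is illustrative.

    ./configure CC=pgcc CXX=pgc++ FC=pgfortran \
                --with-slurm \
                --with-hcoll=/opt/mellanox/hcoll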

Re: [OMPI users] OpenMPI 1.4.2 with Myrinet MX, mpirun seg faults

2010-10-28 Thread Ray Muno
dule takes care of the issue. Thank you... -- Ray Muno University of Minnesota

Re: [OMPI users] OpenMPI 1.4.2 with Myrinet MX, mpirun seg faults

2010-10-28 Thread Ray Muno
different version of Open MPI? (ignored) -- Ray Muno University of Minnesota

Re: [OMPI users] OpenMPI and SGE

2009-06-25 Thread Ray Muno
As a follow-up, the problem was with host name resolution. The error was introduced with a change to the Rocks environment that broke reverse lookups for host names. -- Ray Muno
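A quick sanity check for this failure mode is to confirm that forward and reverse lookups agree on every node; the address below is a placeholder.

    getent hosts compute-6-25.local   # forward lookup: name -> address
    getent hosts 10.1.255.254         # reverse lookup: address -> name (should match)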

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
[compute-6-25.local:10810] ERROR: The daemon exited unexpectedly with status 1. Establishing /usr/bin/ssh session to host compute-6-25.local ... -- Ray Muno

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
Ray Muno wrote: > Tha give me How about "That gives me" > > PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required > environment variable: MPIRUN_RANK > PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task: > Missing required envir

Re: [OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
Rolf Vandevaart wrote: > Ray Muno wrote: >> Ray Muno wrote: >>> We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily). >>> Scheduling is done through SGE. MPI communication is over InfiniBand.

[OMPI users] OpenMPI and SGE

2009-06-23 Thread Ray Muno
e caused this but I have not found where the actual problem lies. -- Ray Muno University of Minnesota

[OMPI users] OpenMPI 1.3RC2 job startup issue

2008-12-22 Thread Ray Muno
aemon did not report back when launched -- Ray Muno University of Minnesota
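One way to get more detail on this kind of launch failure is to run a trivial job with daemon debugging enabled and to verify that passwordless ssh to the remote node works; node names below are placeholders.

    mpirun --debug-daemons -mca plm_base_verbose 5 -np 2 -host node01,node02 hostname
    ssh node01 which orted   # the remote shell must be able to find orted in its PATH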

Re: [OMPI users] /dev/shm

2008-11-20 Thread Ray Muno
John Hearns wrote: 2008/11/19 Ray Muno <m...@aem.umn.edu> Thought I would revisit this one. We are still having issues with this. It is not clear to me what is leaving the user files behind in /dev/shm. This is not something users are doing directly, they are just compiling thei

Re: [OMPI users] /dev/shm

2008-11-19 Thread Ray Muno
, new jobs do not launch. -- Ray Muno

Re: [OMPI users] /dev/shm

2008-11-19 Thread Ray Muno
daemon that is silently launched in v1.2 jobs should ensure that files under /tmp/openmpi-sessions-@ are removed. On Nov 10, 2008, at 2:14 PM, Ray Muno wrote: Brock Palen wrote: on most systems /dev/shm is limited to half the physical ram. Was the user someone filling up /dev/shm so
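A per-node cleanup of the stale files being discussed can be scripted along the lines of the sketch below (e.g. from a batch-system epilog); this is a suggestion, not something from the original thread.

    find /tmp -maxdepth 1 -name 'openmpi-sessions-*' -user "$USER" -exec rm -rf {} +
    find /dev/shm -user "$USER" -type f -delete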

Re: [OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
Jeff Squyres wrote: On Nov 11, 2008, at 2:54 PM, Ray Muno wrote: See http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0. OK, that tells me lots of things ;-) Should I be running configure with --with-wrapper-cflags, --with-wrapper-fflags etc, set to include

Re: [OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
Jeff Squyres wrote: See http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0. OK, that tells me lots of things ;-) Should I be running configure with --with-wrapper-cflags, --with-wrapper-fflags etc, set to include -i_dynamic -- Ray Muno
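Baking the flag into the wrapper compilers at configure time, as asked above, would look roughly like this sketch.

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
                --with-wrapper-cflags=-i_dynamic \
                --with-wrapper-cxxflags=-i_dynamic \
                --with-wrapper-fflags=-i_dynamic \
                --with-wrapper-fcflags=-i_dynamic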

Re: [OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
Steve Jones wrote: Are you adding -i_dynamic to base flags, or something different? Steve I brought this up to see if something should be changed with the install. For now, I am leaving that to users. -- Ray Muno

Re: [OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
ent. Seems strange that OpenMPI built without these being set at all. I could also compile test codes with the compilers, just not with mpicc and mpif90. -Ray Muno

Re: [OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
Ray Muno wrote: I updated the LD_LIBRARY_PATH to point to the directories that contain the installed copies of libimf.so. (this is not something I have not had to do for other compiler/OpenMpi combinations) How about... (this is not something I have had to do for other compiler/OpenMpi
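The runtime workaround being corrected above amounts to something like the following; the Intel 10.1 library paths are illustrative and depend on the local install.

    export LD_LIBRARY_PATH=/opt/intel/cce/10.1/lib:/opt/intel/fce/10.1/lib:$LD_LIBRARY_PATH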

[OMPI users] Trouble with OpenMPI and Intel 10.1 compilers

2008-11-11 Thread Ray Muno
. Is there something I should be doing at OpenMPI configure time to take care of these issues? -- Ray Muno University of Minnesota Aerospace Engineering

Re: [OMPI users] /dev/shm

2008-11-10 Thread Ray Muno
. -- Ray Muno

Re: [OMPI users] /dev/shm

2008-11-10 Thread Ray Muno
trying to determine why they are left behind. -- Ray Muno University of Minnesota Aerospace Engineering and Mechanics

[OMPI users] /dev/shm

2008-11-10 Thread Ray Muno
files, they can run. -- Ray Muno University of Minnesota Aerospace Engineering and Mechanics

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ray Muno
. > > This benchmark is run on an AMD dual-core, dual-Opteron processor. Both have been > compiled with default configurations. > > The job is run on 2 nodes - 8 cores. > > OpenMPI - 25 m 39 s. > MPICH2 - 15 m 53 s. > > Any comments ..? > > Thanks, > Sangamesh > -Ray Muno Aerospace Engineering.