> Can you run an MPI hello world program with
> mpirun --mca pml ob1 --mca btl ofi ...
>
> on one and several nodes?
>
> Cheers,
>
> Gilles
>
> On Mon, Mar 31, 2025 at 3:48 PM Sangam B wrote:
>
>> Hello Gilles,
>>
>> The gromacs-2024.4 build at cmake stag
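A minimal sketch of the hello world test suggested above, assuming a
hello_world binary already built with mpicc and a hostfile listing the
compute nodes (both the binary name and the hostfile are assumptions):

# single node, 4 ranks, forcing the ob1 PML and the ofi BTL (plus self for loopback)
mpirun --mca pml ob1 --mca btl ofi,self -np 4 ./hello_world

# several nodes, same MCA settings, ranks spread via a hostfile
mpirun --mca pml ob1 --mca btl ofi,self -np 8 --hostfile ./hosts ./hello_world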
> cmake/GROMACS that failed to detect it?
>
> What if you "configure" GROMACS with
> cmake -DGMX_FORCE_GPU_AWARE_MPI=ON ...
>
> If the problem persists, please open an issue at
> https://github.com/open-mpi/ompi/issues and do provide the required
> information.
>
>
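For reference, a hedged sketch of a GROMACS configure line along those lines,
assuming a CUDA build with an MPI compiler wrapper in PATH (the install prefix
and source layout are placeholders, not taken from the original report):

cmake .. \
  -DGMX_MPI=ON \
  -DGMX_GPU=CUDA \
  -DGMX_FORCE_GPU_AWARE_MPI=ON \
  -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-2024.4    # placeholder prefix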
Hi,
OpenMPI-5.0.5 or 5.0.6 versions fail with the following error during the
"make" stage of the build procedure:
In file included from ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.h:51,
from ../../../../../../ompi/mca/mtl/ofi/mtl_ofi.c:13:
../../../../../../ompi/mca/mtl/ofi/mtl_ofi
Hi Team,
My application fails with the following error [compiled with
openmpi-5.0.7, ucx-1.18.0, cuda-12.8, gdrcopy-2.5]:
Caught signal 11 (Segmentation fault: invalid permissions for mapped object
at address 0x14bd8f464160)
backtrace (tid:1104544)
0 0x0006141c ucs_callba
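One way to sanity-check that the UCX in use was really built with CUDA and
gdrcopy support (a diagnostic sketch only, not a confirmed fix for this crash):

# print the UCX build configuration; look for --with-cuda and --with-gdrcopy
ucx_info -v

# list the available transports; cuda_copy, cuda_ipc and gdr_copy should appear
ucx_info -d | grep -i -e cuda -e gdr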
> Maybe check LD_LIBRARY_PATH after sourcing the Intel vars.sh file?
> I'm using OpenMPI 5.0.6 but in a Slurm context and it works fine.
>
> Patrick
>
> On 14/02/2025 at 19:00, Sangam B wrote:
>
> Hi Patrick,
>
> Thanks for your reply.
> Of course, the
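A quick way to do that check, assuming the standard oneAPI setvars.sh
location (the exact path is an assumption):

# load the Intel oneAPI environment
source /opt/intel/oneapi/setvars.sh

# confirm the Intel runtime libraries are now on the library path
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -i intel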
On Fri, Feb 14, 2025 at 6:30 PM Patrick Begou <
patrick.be...@univ-grenoble-alpes.fr> wrote:
> On 14/02/2025 at 13:22, Sangam B wrote:
> > Hi,
> >
> > OpenMPI-5.0.6 is compiled with ucx-1.18 and Intel oneAPI 2024 v2.1
> > compilers. An MPI program is compiled with this
Hi,
OpenMPI-5.0.6 is compiled with ucx-1.18 and Intel oneAPI 2024 v2.1 compilers.
An MPI program is compiled with this openmpi-5.0.6.
While submitting a job through PBS on a Linux cluster, the Intel compiler
environment is sourced and the resulting library path is passed through
OpenMPI's mpirun command option: " -x LD_LIBRARY_PAT
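A hedged sketch of that kind of PBS job script (node counts, paths and the
binary name are placeholders, not taken from the original report):

#!/bin/bash
#PBS -l select=2:ncpus=64:mpiprocs=64
cd $PBS_O_WORKDIR
# load the Intel compiler environment inside the job
source /opt/intel/oneapi/setvars.sh
# forward the resulting library path to all remote ranks
mpirun -np 128 -x LD_LIBRARY_PATH ./a.out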
Hello OMPI Users,
UCX version:
https://github.com/openucx/ucx/releases/download/v1.16.0
OpenMPI version: 5.0.5
OpenMPI is installed with UCX, PMIx, Libevent & hwloc.
The job, which runs on 4 nodes with 192 ranks per node, fails with the
following UCX error:
ucp_context.c:1112 U
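If it helps to narrow this down, UCX and the ucx PML can be made more verbose
for one run (a diagnostic sketch only; the relevant output depends on the
actual failure):

# 4 nodes x 192 ranks, with UCX debug logging and verbose ucx PML output
mpirun -np 768 \
  -x UCX_LOG_LEVEL=debug \
  --mca pml_ucx_verbose 100 \
  ./app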
Hi,
The application compiled with OpenMPI-5.0.2 or 5.0.3 runs fine only if the
"mpirun -mca pml ob1" option is used.
If any other option is used, such as "-mca pml ucx" or some other btl options,
or if no options are given at all, then it fails with the following error:
[n1:0] *** An error occurred in MPI_
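For comparison, the working and failing invocations described above look
roughly like this (the binary name and rank count are placeholders):

# works: force the ob1 PML
mpirun -mca pml ob1 -np 64 ./app

# fails: force the ucx PML, or leave the PML selection to Open MPI
mpirun -mca pml ucx -np 64 ./app
mpirun -np 64 ./app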
Thanks Jeff for the resolution.
On Mon, Aug 19, 2019 at 7:45 PM Jeff Squyres (jsquyres)
wrote:
> On Aug 19, 2019, at 6:15 AM, Sangam B via users
> wrote:
> >
> > subroutine recv(this,lmb)
> > class(some__example6), intent(inout) :: this
> >
erflow.com/help/minimal-reproducible-example ?
>
> Cheers,
>
> Gilles
>
> On Mon, Aug 19, 2019 at 7:19 PM Sangam B via users
> wrote:
> >
> > Hi,
> >
> > Here is the sample program snippet:
> >
> >
> > #include "in
reproducer is ?
>
> Cheers,
>
> Gilles
>
> On Mon, Aug 19, 2019 at 6:42 PM Sangam B via users
> wrote:
> >
> > Hi,
> >
> > OpenMPI is configured as follows:
> >
> > export CC=`which clang`
> > export CXX=`which clang++`
> > export
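A hedged sketch of such a clang-based build, with the truncated parts stood in
by placeholders (the prefix is an assumption; the --with-verbs and --without-lsf
options appear later in this thread):

export CC=`which clang`
export CXX=`which clang++`
export FC=`which flang`    # assumption: the clang/AOCC toolchain's Fortran compiler
./configure --prefix=$HOME/openmpi-3.1.1 --with-verbs=/usr --without-lsf
make -j 8 && make install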
producer is ?
> >
> > Cheers,
> >
> > Gilles
> >
> > On Mon, Aug 19, 2019 at 6:42 PM Sangam B via users
> > wrote:
> > >
> > > Hi,
> > >
> > > OpenMPI is configured as follows:
> > >
> > > export CC=`
-libfabric --without-lsf --with-verbs=/usr
--with-mxm=/sw/hpcx/hpcx-v2.1.0-gcc-MLNX_OFED_LINUX-4.3-1.0.1.0-redhat7.4-x86_64/mxm
..
On Mon, Aug 19, 2019 at 2:43 PM Sangam B wrote:
> Hi,
>
> I get the following error if the application is compiled with openmpi-3.1.1:
>
> mpifort -O
Hi,
I get the following error if the application is compiled with openmpi-3.1.1:
mpifort -O3 -march=native -funroll-loops -finline-aggressive -flto
-J./bin/obj_amd64aocc20 -std=f2008 -O3 -march=native -funroll-loops
-finline-aggressive -flto -fallow-fortran-gnu-ext -ffree-form
-fdefault-real-8 exampl
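Without the rest of the error text it is hard to say more, but checking which
compiler the openmpi-3.1.1 mpifort wrapper actually invokes is a cheap first
step (a generic check, not a confirmed fix):

# show the underlying Fortran compiler and the full wrapper command line
mpifort --version
mpifort --showme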
15 matches