[OMPI users] Libnl bug in openmpi v3.0.0?

2017-09-20 Thread Stephen Guzik
v3... ibverbs nl-3 so I wonder if perhaps something more serious is going on. Any suggestions? Thanks, Stephen Guzik

Re: [OMPI users] Libnl bug in openmpi v3.0.0?

2017-09-22 Thread Stephen Guzik
Yes, I can confirm that openmpi 3.0.0 builds without issue when libnl-route-3-dev is installed. Thanks, Stephen Stephen Guzik, Ph.D. Assistant Professor, Department of Mechanical Engineering Colorado State University On 09/21/2017 12:55 AM, Gilles Gouaillardet wrote: > Stephen,
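
For anyone hitting the same build failure: on a Debian-based system (an assumption here, libnl-route-3-dev being the Debian/Ubuntu package name), installing the development package before rebuilding Open MPI 3.0.0 should be all that is needed, e.g.

    sudo apt-get install libnl-route-3-dev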

[OMPI users] Missing data with MPI I/O and NFS

2017-10-12 Thread Stephen Guzik
running the job across the two workstations seems to work fine. - on a single node, everything works as expected in all cases.  In the case described above where I get an error, the error is only observed with processes on two nodes. - code follows. Thanks, Stephen Guzik -- #include #incl
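
The attached code is cut off in this archive view. Purely as an illustration of the kind of collective write described (not the original attachment), a sketch might look like the following; the file name, chunk size, and data are hypothetical.

    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // Each rank writes 'count' ints to its own, non-overlapping offset
      const int count = 4;                  // hypothetical chunk size
      std::vector<int> buf(count, rank);    // filled with the rank number

      MPI_File fh;
      MPI_File_open(MPI_COMM_WORLD, "testfile",  // hypothetical path on the NFS mount
                    MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
      MPI_Offset offset = (MPI_Offset)rank * count * sizeof(int);
      MPI_File_write_at_all(fh, offset, buf.data(), count, MPI_INT, MPI_STATUS_IGNORE);
      MPI_File_close(&fh);

      MPI_Finalize();
      return 0;
    }

Running such a test with ranks placed on both workstations, versus on a single node, is the comparison described above.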

Re: [OMPI users] Missing data with MPI I/O and NFS

2017-10-13 Thread Stephen Guzik
and working on, that might trigger this behavior (although it should actually work for collective I/O even in that case). Try to set something like: mpirun --mca io romio314 ... Thanks, Edgar On 10/12/2017 8:26 PM, Stephen Guzik wrote:
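
In other words, the suggested workaround is to select the romio314 component of the io framework on the mpirun command line. A full invocation would look something like this (process count and executable are placeholders):

    mpirun --mca io romio314 -np 4 ./a.out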

Re: [OMPI users] Installation of openmpi-1.10.7 fails

2018-01-19 Thread Stephen Guzik
penib... cgio_open_file:H5Dwrite:write to node data failed. The file system is NFS and the build is openmpi-v3.0.x-201711220306-2399e85. Stephen Stephen Guzik, Ph.D. Assistant Professor, Department of Mechanical Engineering Colorado State University On 01/18/2018 04:17 PM, Jeff Squyres (jsquyres)

[OMPI users] How to map to sockets with 1 per core, but bind to a single hwthread

2016-02-11 Thread Stephen Guzik
Hi, I would like to divide n processes between the sockets on a node, with one process per core, and bind them to a hwthread. Consider a system with 2 sockets, 10 cores per socket, and 2 hwthreads per core. If I enter -np 20 --map-by ppr:1:core --bind-to hwthread then this works as I intend.
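
Written out in full, the invocation that behaves as intended on that machine (2 sockets x 10 cores x 2 hwthreads), with a placeholder executable, would be:

    mpirun -np 20 --map-by ppr:1:core --bind-to hwthread ./a.out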

Re: [OMPI users] How to map to sockets with 1 per core, but bind to a single hwthread

2016-02-12 Thread Stephen Guzik
e4:30795] Signal code: Address not mapped (1) Stephen On 02/11/2016 05:30 PM, Stephen Guzik wrote: > Hi, I would like to divide n processes between the sockets on a node, with one process per core, and bind them to a hwthread. Consider a system with 2 sockets, 10 cores per sock

[OMPI users] MPI::BOTTOM vs MPI_BOTTOM

2007-10-10 Thread Stephen Guzik
Hi, To the Devs. I just noticed that MPI::BOTTOM requires a cast. Not sure if that was intended. Compiling 'MPI::COMM_WORLD.Bcast(MPI::BOTTOM, 1, someDataType, 0);' results in: error: invalid conversion from ‘const void*’ to ‘void*’ error: initializing argument 1 of ‘virtual void MPI::Comm::
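
Given the errors quoted (MPI::BOTTOM apparently being a const void* while Bcast expects a void* buffer), the cast being referred to would presumably look like the line below; whether that const_cast is really intended is exactly what is being asked.

    MPI::COMM_WORLD.Bcast(const_cast<void*>(MPI::BOTTOM), 1, someDataType, 0);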

[OMPI users] Coordinating (non-overlapping) local stores with remote puts when using passive RMA synchronization

2020-05-30 Thread Stephen Guzik via users
Hi, I'm trying to get a better understanding of coordinating (non-overlapping) local stores with remote puts when using passive synchronization for RMA.  I understand that the window should be locked for a local store, but can it be a shared lock?  In my example, each process retrieves and in
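
The example is truncated here, but the pattern being asked about (local stores and remote puts to non-overlapping window locations under passive-target synchronization) could be sketched roughly as follows. This is only an illustration of the question, using a shared lock via MPI_Win_lock_all, not an authoritative answer on whether that lock is sufficient; the window size and layout are hypothetical.

    #include <mpi.h>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      // One int per rank in each process's window
      int* base;
      MPI_Win win;
      MPI_Win_allocate(size * sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &base, &win);

      // Shared (passive-target) access epoch on every rank
      MPI_Win_lock_all(0, win);

      // Local store into my own slot of my window...
      base[rank] = rank;
      // ...and a put into a different, non-overlapping slot of a neighbor's window
      int right = (rank + 1) % size;
      MPI_Put(&rank, 1, MPI_INT, right, rank, 1, MPI_INT, win);

      MPI_Win_flush_all(win);       // complete the puts
      MPI_Win_unlock_all(win);

      MPI_Barrier(MPI_COMM_WORLD);  // everyone done before the window goes away
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
    }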

[OMPI users] Issues with MPI_Win_Create on Debian 11

2022-02-08 Thread Stephen Guzik via users
Hi all, There are several bug reports on 4.1.x describing MPI_Win_create failing for various architectures.  I too am seeing the same for 4.1.0-10, which is packaged for Debian 11, just on a standard workstation where at least vader, tcp, self, and sm are identified (not sure which are being used
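
For reference, a minimal reproducer of the kind such reports usually revolve around might look like this (the buffer size is arbitrary; nothing here is taken from the cited bug reports):

    #include <mpi.h>

    int main(int argc, char** argv)
    {
      MPI_Init(&argc, &argv);
      int buf[64];               // arbitrary user buffer exposed as a window
      MPI_Win win;
      MPI_Win_create(buf, sizeof(buf), sizeof(int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);
      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
    }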

[OMPI users] OSC UCX error using MPI_Win_allocate

2024-03-19 Thread Stephen Guzik via users
Hi, For development purposes, I built and installed Open MPI 5.0.2 on my workstation.  As I understand it, to use OpenSHMEM, one has to include ucx.  I configured with ./configure --build=x86_64-linux-gnu --prefix=/usr/local/openmpi/5.0.2_gcc-12.2.0 --with-ucx --with-pmix=internal --with-li
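
The configure line is cut off above. As a general check (not something prescribed in this thread), whether UCX support actually made it into such a build can be confirmed by grepping the component listing:

    ompi_info | grep -i ucx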