Re: [hwloc-users] Please help interpreting reported topology - possible bug?

2018-05-17 Thread Brice Goglin
Hello Hartmut, The mailing list address changed a while ago; there's an additional "lists." in the domain name. Regarding your question, I would assume you are running in a cgroup with the second NUMA node disallowed (while all the corresponding cores are allowed). lstopo with --whole-system
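
A minimal sketch (not from the original message) of requesting the same whole-system view programmatically with the hwloc C API; the flag name below is the hwloc 1.x spelling (newer hwloc releases rename it), so treat this as an illustration rather than the poster's setup:

#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    hwloc_topology_init(&topology);
    /* Also report objects the cgroup disallows, like lstopo --whole-system.
     * (HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM in hwloc 1.x; renamed in hwloc 2.x.) */
    hwloc_topology_set_flags(topology, HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM);
    hwloc_topology_load(topology);

    /* Count NUMA nodes visible in the whole-system view. */
    int nnodes = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_NUMANODE);
    printf("NUMA nodes (including disallowed): %d\n", nnodes);

    hwloc_topology_destroy(topology);
    return 0;
}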

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-17 Thread Nathan Hjelm
The invalid writes in uGNI are nothing. I suggest adding any GNI_ call to a suppression file. The RB tree invalid write looks like a bug. I will take a look and see what might be causing it. BTW, you can add --with-valgrind(=DIR) to configure. This will suppress some uninitialized value errors
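
As a rough illustration of the suppression-file approach suggested here, a hypothetical Valgrind suppression entry that hides reports whose innermost frame is any GNI_* call (the entry name and the Addr8 error kind are placeholders; copy the real entries from Valgrind's --gen-suppressions output):

# ugni.supp - hypothetical suppression for uGNI invalid writes
{
   ignore-ugni-invalid-writes
   Memcheck:Addr8
   fun:GNI_*
}

It would then be passed to Valgrind with --suppressions=ugni.supp.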

Re: [OMPI users] slurm configuration override mpirun command line process mapping

2018-05-17 Thread Nicolas Deladerriere
Gilles, Adding an ess component list that excludes slurm and slurmd, I ran into connection issues. I guess I need slurm and slurmd in my runtime context! Anyway, as you mentioned, that is not a good solution regarding remaining MPI processes when using scancel, and I guess I will also lose some

Re: [OMPI users] slurm configuration override mpirun command line process mapping

2018-05-17 Thread Nicolas Deladerriere
"mpirun takes the #slots for each node from the slurm allocation." Yes this is my issue and what I was not expected. But I will stick with --bynode solution. Thanks a lot for your help. Regards, Nicolas 2018-05-17 14:33 GMT+02:00 r...@open-mpi.org : > mpirun takes the #slots

Re: [OMPI users] slurm configuration override mpirun command line process mapping

2018-05-17 Thread r...@open-mpi.org
mpirun takes the #slots for each node from the slurm allocation. Your hostfile (at least, what you provided) retained that information and shows 2 slots on each node. So the original allocation _and_ your constructed hostfile are both telling mpirun to assign 2 slots on each node. Like I
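
To make the slot accounting concrete, here is a hypothetical hostfile of the kind being discussed (hostnames invented):

node01 slots=2
node02 slots=2

With this file, or equivalently with a SLURM allocation of 2 slots per node, mpirun -np 4 ./my_app maps by slot: it fills node01's two slots before placing any rank on node02, which is the behavior described above.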

Re: [OMPI users] slurm configuration override mpirun command line process mapping

2018-05-17 Thread Gilles Gouaillardet
Nicolas, This looks odd at first glance, but as stated before, 1.6 is an obsolete series. A workaround could be to mpirun --mca ess ... and replace ... with a comma-separated list of ess components that excludes both slurm and slurmd. Another workaround could be to remove SLURM related

Re: [OMPI users] slurm configuration override mpirun command line process mapping

2018-05-17 Thread Nicolas Deladerriere
Hi all, Thanks for your feedback about using "mpirun --mca ras ^slurm --mca plm ^slurm --mca ess ^slurm,slurmd ...". I am a bit confused since the syntax sounds good, but I keep getting the following error at run time:

Re: [OMPI users] MPI-3 RMA on Cray XC40

2018-05-17 Thread Joseph Schuchart
Nathan, I am trying to track down some memory corruption that leads to crashes in my application running on the Cray system using Open MPI (git-6093f2d). Valgrind reports quite a few invalid reads and writes inside Open MPI when running the benchmark that I sent you earlier. There are plenty

Re: [OMPI users] MPI cartesian grid : cumulate a scalar value through the procs of a given axis of the grid

2018-05-17 Thread Pierre Gubernatis
Yes, you are right.. I didn't know MPI_Scan and I finally jumped in, thanks. On Mon, May 14, 2018 at 20:11, Nathan Hjelm wrote: > Still looks to me like MPI_Scan is what you want. Just need three > additional communicators (one for each direction). With a recursive doubling >
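
For readers finding this thread in the archive, a minimal sketch (not the code discussed here) of the suggestion above: build one sub-communicator per grid direction with MPI_Cart_sub and run MPI_Scan along it. The 2D layout and the local scalar are placeholders; for a 3D grid you would create three such sub-communicators, one per axis:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[2] = {0, 0}, periods[2] = {0, 0}, coords[2];
    int nprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);

    MPI_Comm grid, line;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &grid);
    MPI_Comm_rank(grid, &rank);
    MPI_Cart_coords(grid, rank, 2, coords);

    /* Keep dimension 1: each sub-communicator is one grid line along axis 1. */
    int keep[2] = {0, 1};
    MPI_Cart_sub(grid, keep, &line);

    /* Running (inclusive) sum of a per-rank scalar along that axis. */
    double my_value = (double)(coords[1] + 1);  /* placeholder local scalar */
    double running_sum;
    MPI_Scan(&my_value, &running_sum, 1, MPI_DOUBLE, MPI_SUM, line);

    printf("coords (%d,%d): cumulative value along axis 1 = %g\n",
           coords[0], coords[1], running_sum);

    MPI_Comm_free(&line);
    MPI_Comm_free(&grid);
    MPI_Finalize();
    return 0;
}

MPI_Scan is inclusive; MPI_Exscan gives the sum of the preceding ranks only.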