Re: [hwloc-users] Question about hwloc_bitmap_singlify

2018-08-28 Thread Brice Goglin
Hello. If you bind a thread to a newset that contains 4 PUs (4 bits), the operating system scheduler is free to run that thread on any of these PUs. It means it may run on one PU, then migrate to another PU, then migrate back, etc. If these PUs do not share all caches, you will see a
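A minimal C sketch of the singlify-then-bind pattern described above (assuming a topology that is already loaded and a cpuset "newset" containing the 4 PUs; error checking omitted):

#include <hwloc.h>

static void bind_to_single_pu(hwloc_topology_t topology, hwloc_const_cpuset_t newset)
{
    /* Work on a copy so the caller's set is left untouched. */
    hwloc_cpuset_t one = hwloc_bitmap_dup(newset);

    /* Keep only a single bit (one PU) so the OS cannot migrate the
       thread between the PUs of the original set. */
    hwloc_bitmap_singlify(one);

    /* Bind the calling thread to that single PU. */
    hwloc_set_cpubind(topology, one, HWLOC_CPUBIND_THREAD);

    hwloc_bitmap_free(one);
}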

[hwloc-users] Question about hwloc_bitmap_singlify

2018-08-28 Thread Junchao Zhang
Hi, On CPU binding, the hwloc manual says "It is often useful to call hwloc_bitmap_singlify() first so that a single CPU remains in the set. This way, the process will not even migrate between different CPUs inside the given set". I don't understand it. If I do not do hwloc_bitmap_singlify, what

Re: [OMPI users] What happened to orte-submit resp. DVM?

2018-08-28 Thread Ralph H Castain
You must have some stale code because those tools no longer exist. Note that we are (gradually) replacing orte-dvm with PRRTE: https://github.com/pmix/prrte See the “how-to” guides for PRRTE towards the bottom of this page: https://pmix.org/support/how-to/

Re: [OMPI users] lists.open-mpi.org appears to be back

2018-08-28 Thread Jeff Squyres (jsquyres) via users
I originally sent this mail on Saturday, but it looks like lists.open-mpi.org was *not* actually back at this time. I'm finally starting to see all the backlogged messages on Tuesday, around 5pm US Eastern time. So I think lists.open-mpi.org is finally back in service. Sorry for the

Re: [hwloc-users] How to combine bitmaps on MPI ranks?

2018-08-28 Thread Brice Goglin
This question was answered off-list while the mailing lists were down. We had things like hwloc_bitmap_set_ith_ulong() and hwloc_bitmap_from_ith_ulong() for packing/unpacking but they weren't very convenient unless you know how many ulongs are actually needed to store the bitmap. We added new
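A sketch of the pack / exchange / unpack approach with the per-ulong calls, assuming an upper bound NWORDS on the cpuset size (the point of the newer functions is precisely that you no longer have to guess this bound); hwloc_bitmap_to_ith_ulong() is used for packing and hwloc_bitmap_set_ith_ulong() for unpacking:

#include <hwloc.h>
#include <mpi.h>

#define NWORDS 4  /* assumed upper bound on cpuset size, in unsigned longs */

/* Bitwise-OR the cpusets of all ranks into "result" on rank 0. */
static void reduce_cpusets(hwloc_const_cpuset_t local, hwloc_cpuset_t result)
{
    unsigned long in[NWORDS], out[NWORDS];
    unsigned i;
    int rank;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Pack the local cpuset into an array of ulongs. */
    for (i = 0; i < NWORDS; i++)
        in[i] = hwloc_bitmap_to_ith_ulong(local, i);

    /* Combine the packed sets with a bitwise OR on rank 0. */
    MPI_Reduce(in, out, NWORDS, MPI_UNSIGNED_LONG, MPI_BOR, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        /* Unpack the combined words back into a bitmap. */
        hwloc_bitmap_zero(result);
        for (i = 0; i < NWORDS; i++)
            hwloc_bitmap_set_ith_ulong(result, i, out[i]);
    }
}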

[OMPI users] What happened to orte-submit resp. DVM?

2018-08-28 Thread Reuti
Hi, Should orte-submit/ompi-submit still be available in 3.x.y? I can spot the source, but it's neither built nor is any man page included. -- Reuti

Re: [OMPI users] MPI_MAXLOC problems

2018-08-28 Thread Nathan Hjelm via users
Yup. That is the case for all composed datatypes, which is what the tuple types are. Predefined composed datatypes. -Nathan On Aug 28, 2018, at 02:35 PM, "Jeff Squyres (jsquyres) via users" wrote: I think Gilles is right: remember that datatypes like MPI_2DOUBLE_PRECISION are actually 2

Re: [OMPI users] MPI advantages over PBS

2018-08-28 Thread Gustavo Correa
Hi Diego I (still) have Torque/PBS version 4.something in old clusters. [Most people at this point already switched to Slurm.] Torque/PBS comes with a tool named "pbsdsh" (for PBS distributed shell): http://docs.adaptivecomputing.com/torque/4-1-3/Content/topics/commands/pbsdsh.htm

[OMPI users] lists.open-mpi.org appears to be back

2018-08-28 Thread Jeff Squyres (jsquyres) via users
The lists.open-mpi.org server went offline due to an outage at our hosting provider sometime in the evening on Aug 22 / early morning Aug 23 (US Eastern time). As of yesterday morning (Saturday, Aug 25), the list server now appears to be back online; I've seen at least a few backlogged emails

[OMPI users] Fwd: MPI advantages over PBS

2018-08-28 Thread Gustavo Correa
Hi, The message below may not matter much. It was my two-cent attempt to help and to clarify for Diego the difference between MPI and simpler solutions to "embarrassingly parallel" problems. Somehow a few copies of this message that I sent yesterday never made it to the list, or were

Re: [OMPI users] MPI advantages over PBS

2018-08-28 Thread Diego Avesani
Dear all, thank you for your answers. I will try to explain my situation better. I have written a code and I have parallelized it with Open MPI. In particular, I have a two-level parallelization. The first takes care of a parallel code program and the second runs the parallel code with different
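One common way to express this kind of two-level layout in MPI itself is to split MPI_COMM_WORLD into sub-communicators, one per independent run of the parallel code; a rough sketch (the group size of 4 and the run_one_case() call are placeholders, not Diego's actual setup):

#include <mpi.h>

int main(int argc, char **argv)
{
    int world_rank, color;
    MPI_Comm inner;                          /* communicator for one run */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Placeholder: 4 ranks per independent run of the parallel code. */
    color = world_rank / 4;

    /* Ranks with the same color end up in the same sub-communicator. */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &inner);

    /* Each group runs the same parallel code with a different input,
       e.g. run_one_case(inner, color);  (hypothetical function) */

    MPI_Comm_free(&inner);
    MPI_Finalize();
    return 0;
}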

[hwloc-users] How to combine bitmaps on MPI ranks?

2018-08-28 Thread Junchao Zhang
Hello, Suppose I call hwloc on two MPI ranks and get a bitmap on each. On rank 0, I want to bitwise OR the two. How do I do that? I did not find bitmap APIs to pack/unpack bitmaps to/from ulongs for MPI send/recv purposes. Thank you. --Junchao Zhang

[OMPI users] Fwd: MPI advantages over PBS

2018-08-28 Thread Gustavo Correa
The message may not matter, but somehow two copies of this sent earlier today didn't make it to the list. Gus Correa > Begin forwarded message: > > From: Gustavo Correa > Subject: Re: [OMPI users] MPI advantages over PBS > Date: August 25, 2018 at 14:41:36 EDT > To: Open MPI Users > > Hi

[OMPI users] Fwd: MPI advantages over PBS

2018-08-28 Thread Gustavo Correa
Somehow two copies of this sent earlier today didn't make it to the list. Gus Correa > Begin forwarded message: > > From: Gustavo Correa > Subject: Re: [OMPI users] MPI advantages over PBS > Date: August 25, 2018 at 20:16:49 EDT > To: Open MPI Users > > Hi Diego > > I (still) have

Re: [OMPI users] MPI advantages over PBS

2018-08-28 Thread Jeff Squyres (jsquyres) via users
On Aug 22, 2018, at 11:49 AM, Diego Avesani wrote: > > I have a philosophical question. > > I am reading a lot of papers where people use Portable Batch System or job > scheduler in order to parallelize their code. > > What are the advantages in using MPI instead? It depends on the code in

Re: [OMPI users] MPI_MAXLOC problems

2018-08-28 Thread Jeff Squyres (jsquyres) via users
I think Gilles is right: remember that datatypes like MPI_2DOUBLE_PRECISION are actually 2 values. So if you want to send 1 pair of double precision values with MPI_2DOUBLE_PRECISION, then your count is actually 1. > On Aug 22, 2018, at 8:02 AM, Gilles Gouaillardet > wrote: > > Diego, > >
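The same rule holds for the predefined C pair types; a small sketch with MPI_DOUBLE_INT showing why one value/rank pair means a count of 1:

#include <mpi.h>

struct pair { double value; int rank; };   /* layout matching MPI_DOUBLE_INT */

static struct pair global_maxloc(double value)
{
    struct pair in, out;

    MPI_Comm_rank(MPI_COMM_WORLD, &in.rank);
    in.value = value;

    /* count is 1: MPI_DOUBLE_INT already describes the whole pair. */
    MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, MPI_COMM_WORLD);

    return out;   /* out.value = global max, out.rank = rank that owns it */
}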

[OMPI users] lists.open-mpi.org appears to be back

2018-08-28 Thread Jeff Squyres (jsquyres) via users
The lists.open-mpi.org server went offline due to an outage at our hosting provider sometime in the evening on Aug 22 / early morning Aug 23 (US Eastern time). The list server now appears to be back online; I've seen at least a few backlogged emails finally come through. If you sent a mail in