Re: [OMPI users] Communicating MPI processes running in Docker containers in the same host by means of shared memory?

2017-03-29 Thread Jordi Guitart
...r HPC with containers. Ralph. On Mar 25, 2017, at 8:07 AM, Jordi Guitart wrote: Hi, I don't have previous expertise with the OpenMPI source code, so I don't have a clear idea of the changes needed to implement this feature. This probably requires some preliminary brainstorming to decide ...

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-28 Thread Jordi Guitart
Hi, On 27/03/2017 17:51, Jeff Squyres (jsquyres) wrote: 1. Recall that sched_yield() has effectively become a no-op in newer Linux kernels. Hence, Open MPI's "yield when idle" may not do much to actually de-schedule a currently-running process. Yes, I'm aware of this. However, this should imp...
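
A minimal sketch of forcing the yield-when-idle behaviour discussed above, assuming the mpi_yield_when_idle MCA parameter from the 1.x series; the benchmark binary name is only an example:

    # Ask idle ranks to yield the CPU instead of hard-spinning
    mpirun -np 36 --mca mpi_yield_when_idle 1 ./bt.B.36
    # Check what the local installation actually exposes
    ompi_info --param mpi all --level 9 | grep -i yield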

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-27 Thread Jordi Guitart
...=1 is precisely the “oversubscribed” setting. So why would you expect different results? On Mar 27, 2017, at 3:52 AM, Jordi Guitart <jordi.guit...@bsc.es> wrote: Hi Ben, Thanks for your feedback. As described here (https://www.open-mpi.org/faq/?category=running#oversubscrib ...
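
The FAQ entry cited above ties oversubscription to the slot counts declared in the hostfile; a sketch under those assumptions (node name and binary are illustrative, and newer releases may additionally require --oversubscribe):

    # hostfile: declare 28 slots on the node
    node01 slots=28
    # 36 ranks on 28 declared slots -> Open MPI treats the node as
    # oversubscribed and switches ranks to degraded ("yield when idle") mode
    mpirun -np 36 --hostfile ./hostfile ./bt.B.36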

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-27 Thread Jordi Guitart
...(P#54) L2 L#27 (256KB) + L1d L#27 (32KB) + L1i L#27 (32KB) + Core L#27 PU L#54 (P#27) PU L#55 (P#55) [truncated lstopo topology output] On 26/03/2017 9:37, Ben Menadue wrote: On 26 Mar 2017, at 2:22 am, Jordi Guitart <jordi.guit...@bsc.es> wrote: However, what is puzzling me is the performance diffe...
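
The fragment above is truncated lstopo output; it can be regenerated on the node to check which PUs (hyperthreads) share a physical core, for example:

    # Print the cache/core/PU hierarchy (requires hwloc)
    lstopo --no-io
    # text-only alternative
    hwloc-ls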

Re: [OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-25 Thread Jordi Guitart
...g HT is not a performance win for MPI/HPC codes that are designed to run processors at 100%. On Mar 24, 2017, at 6:45 AM, Jordi Guitart wrote: Hello, I'm running experiments with the BT NAS benchmark on OpenMPI. I've identified a very weird performance degradation of OpenMPI...
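
As a hedged illustration of sidestepping hyperthread sharing, one rank can be bound per physical core (options available since the 1.8 series; the rank count and binary are only examples, and BT requires a square number of ranks):

    # One rank per physical core, printing the resulting bindings
    mpirun -np 25 --map-by core --bind-to core --report-bindings ./bt.B.25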

Re: [OMPI users] Communicating MPI processes running in Docker containers in the same host by means of shared memory?

2017-03-25 Thread Jordi Guitart
...share memory even if they have different IP addresses. On 24/03/2017 20:10, Jeff Squyres (jsquyres) wrote: On Mar 24, 2017, at 6:41 AM, Jordi Guitart wrote: Docker containers have different IP addresses, indeed, so now we know why it does not work. I think that this could be a nice feature for OpenMPI...
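
To confirm whether two ranks actually take the shared-memory path, the BTL selection can be restricted and logged; a sketch using standard MCA parameters (the binary name is a placeholder):

    # Allow only self, shared-memory and TCP transports, and log the selection
    mpirun -np 2 --mca btl self,sm,tcp --mca btl_base_verbose 100 ./a.out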

[OMPI users] Performance degradation of OpenMPI 1.10.2 when oversubscribed?

2017-03-24 Thread Jordi Guitart
Hello, I'm running experiments with the BT NAS benchmark on OpenMPI. I've identified a very weird performance degradation of OpenMPI v1.10.2 (and later versions) when the system is oversubscribed. In particular, note the performance difference between 1.10.2 and 1.10.1 when running 36 MPI processes...
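
A sketch of the kind of side-by-side comparison being described, assuming both versions are installed under illustrative prefixes:

    # Same oversubscribed workload (36 ranks) under each installation
    /opt/openmpi-1.10.1/bin/mpirun -np 36 ./bt.B.36
    /opt/openmpi-1.10.2/bin/mpirun -np 36 ./bt.B.36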

Re: [OMPI users] Communicating MPI processes running in Docker containers in the same host by means of shared memory?

2017-03-24 Thread Jordi Guitart
...ning: no work has been done to make Open MPI understand Docker shared memory (i.e., you're the first person to ask about it). Pull requests would always be appreciated. ;-) On Mar 24, 2017, at 5:47 AM, Jordi Guitart wrote: Hello John, Yes, in fact, I'm comparing Docker with...

Re: [OMPI users] Communicating MPI processes running in Docker containers in the same host by means of shared memory?

2017-03-24 Thread Jordi Guitart
...answer to your question. However, have you looked at Singularity: http://singularity.lbl.gov/ On 24 March 2017 at 08:54, Jordi Guitart <jordi.guit...@bsc.es> wrote: Hello, Docker allows several containers running in the same host to share the same IPC namespace, thus they can share memory (see the example here: https://github.com/docker/docker/pull/8211#issuecomment-56873448). I assume this could be used by OpenMPI to communicate MPI processes running in different containers...
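
For context, the usual Singularity pattern keeps mpirun on the host and starts each rank inside the image, so ranks on the same node can still use the ordinary shared-memory path; a sketch with an illustrative image name:

    # Host-side mpirun launching every rank inside the Singularity image
    mpirun -np 36 singularity exec ./centos7-ompi.img ./bt.B.36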

[OMPI users] Communicating MPI processes running in Docker containers in the same host by means of shared memory?

2017-03-24 Thread Jordi Guitart
Hello, Docker allows several containers running in the same host to share the same IPC namespace, thus they can share memory (see the example here: https://github.com/docker/docker/pull/8211#issuecomment-56873448). I assume this could be used by OpenMPI to communicate MPI processes running in different containers...
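
A minimal sketch of the --ipc sharing described in the referenced pull request (container and image names are made up):

    # First container owns the IPC namespace
    # (newer Docker releases may require adding --ipc=shareable here)
    docker run -d --name mpi0 my-mpi-image sleep infinity
    # Second container joins mpi0's IPC namespace
    docker run -d --name mpi1 --ipc=container:mpi0 my-mpi-image sleep infinity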