[OMPI users] --oversubscribe option

2018-06-06 Thread Mahmood Naderan
Hi, On a Ryzen 1800x, which has 8 cores and 16 threads, when I run "mpirun -np 16 lammps..." I get an error that there are not enough slots. It seems that the --oversubscribe option will fix that. The odd thing is that when I run "mpirun -np 8 lammps" it takes about 46 minutes to complete the job, while with…
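
For reference, the two invocations being compared, spelled out (a sketch; "lammps ..." stands in for the LAMMPS command line elided above):

    # Fails by default: mpirun counts 8 slots (one per physical core) on this node
    mpirun -np 16 lammps ...

    # Permits more ranks than slots, at the cost of leaving processes unbound
    mpirun --oversubscribe -np 16 lammps ...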

[OMPI users] Sockets half-broken in Open MPI 2.0.2?

2018-06-06 Thread Alexander Supalov
Hi everybody, I noticed that sockets do not seem to work properly in the Open MPI version mentioned above. Intranode runs are OK. Internode, over 100-MBit Ethernet, I can go only as high as 32 KiB in a simple MPI ping-pong kind of benchmark. Before I start composing a full bug report: is this anot…

[OMPI users] error building openmpi-master-201806060243-64a5baa on Linux with Sun C

2018-06-06 Thread Siegmar Gross
Hi, I've tried to install openmpi-master-201806060243-64a5baa on my "SUSE Linux Enterprise Server 12.3 (x86_64)" with Sun C 5.15 (Oracle Developer Studio 12.6). Unfortunately I still get the following error that I already reported on April 12th and May 5th. loki openmpi-master-201806060243-64a5b…

Re: [OMPI users] Sockets half-broken in Open MPI 2.0.2?

2018-06-06 Thread Gilles Gouaillardet
Alexander, Note the v2.0 series is no longer supported, and you should upgrade to v3.1, v3.0, or v2.1. You might have to force the TCP buffer sizes to 0 for optimal performance, IIRC: mpirun --mca btl_tcp_sndbuf_size 0 --mca btl_tcp_rcvbuf_size 0 ... (I am afk, so please confirm both parameter names an…
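
Spelled out (the parameter names above are given from memory in the message itself; in some releases they are btl_tcp_sndbuf and btl_tcp_rcvbuf, without the _size suffix, so it is worth checking what your build understands):

    # force Open MPI's TCP buffer-size settings to 0 (defer to kernel autotuning)
    mpirun --mca btl_tcp_sndbuf_size 0 --mca btl_tcp_rcvbuf_size 0 ...

    # list the TCP BTL buffer parameters your install actually has
    ompi_info --all | grep btl_tcp | grep buf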

Re: [OMPI users] Sockets half-broken in Open MPI 2.0.2?

2018-06-06 Thread Alexander Supalov
Thanks. This was not my question. I want to know if 2.0.2 was indeed faulty in this area. On Wed, Jun 6, 2018 at 1:22 PM, Gilles Gouaillardet < gilles.gouaillar...@gmail.com> wrote: > Alexander, > > Note the v2.0 series is no longer supported, and you should upgrade to v3.1, > v3.0 or v2.1 > > You…

Re: [OMPI users] Sockets half-broken in Open MPI 2.0.2?

2018-06-06 Thread Jeff Squyres (jsquyres) via users
Alexander -- I don't know offhand if 2.0.2 was faulty in this area. We usually ask users to upgrade to at least the latest release in a given series (e.g., 2.0.4) because various bug fixes are included in each sub-release. It wouldn't be much use to go through all the effort to make a proper…

Re: [OMPI users] Sockets half-broken in Open MPI 2.0.2?

2018-06-06 Thread Alexander Supalov
Thanks. Fair enough. I will mark 2.0.2 as faulty for myself, and try the latest version when I have time for this. On Wed, Jun 6, 2018 at 2:40 PM, Jeff Squyres (jsquyres) via users < users@lists.open-mpi.org> wrote: > Alexander -- > > I don't know offhand if 2.0.2 was faulty in this area. We usu…

[OMPI users] Issue With Setting Btl Parameters

2018-06-06 Thread Sam Powell-Gill
Hi All, I am having some issues with setting the btl parameter. I have both Open MPI 2.1.1 and 1.6.5 installed on the system; the 2.1.1 version was installed by Rocks. I am using SGE to schedule jobs. I have been setting the btl parameter on the execution command line, which works fine, but I would…
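
For anyone hitting the same question: besides --mca on the command line, Open MPI also reads MCA parameters from the environment and from a per-user file, which makes the setting persistent (the btl value below is only an illustration, not a recommendation for this system):

    # per-shell: any MCA parameter can be set as OMPI_MCA_<name>
    export OMPI_MCA_btl=tcp,self,sm

    # persistent: per-user parameter file read by Open MPI at startup
    echo "btl = tcp,self,sm" >> $HOME/.openmpi/mca-params.conf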

Re: [OMPI users] error building openmpi-master-201806060243-64a5baa on Linux with Sun C

2018-06-06 Thread Nathan Hjelm
I put in a PR to "fix" this, but if you look at the standard, it has both intent(in) and asynchronous. Might be a compiler problem? -Nathan > On Jun 6, 2018, at 5:11 AM, Siegmar Gross > wrote: > > Hi, > > I've tried to install openmpi-master-201806060243-64a5baa on my "SUSE Linux > Enterprise…
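
A standalone sketch of the construct at issue (this mirrors the declaration style MPI-3.1 uses for choice buffers such as MPI_Accumulate's origin_addr; the module and routine names here are made up for the reproducer):

    module repro
    contains
      subroutine accum_like(origin_addr)
        ! MPI-3.1 puts both attributes on the same dummy argument;
        ! the question in the thread is whether a compiler may reject that.
        type(*), dimension(..), intent(in), asynchronous :: origin_addr
      end subroutine accum_like
    end module repro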

Re: [OMPI users] --oversubscribe option

2018-06-06 Thread r...@open-mpi.org
I’m not entirely sure what you are asking here. If you use --oversubscribe, we do not bind your processes, and you suffer some performance penalty for it. If you want to run one process per hardware thread and retain binding, then do not use --oversubscribe and instead use --use-hwthread-cpus. > On Jun 6, 2018…
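
As command lines, the contrast being drawn (a sketch, reusing the elided lammps invocation from the original post):

    # 16 ranks on 8 cores: runs, but Open MPI leaves the processes unbound
    mpirun --oversubscribe -np 16 lammps ...

    # count each hardware thread as a slot, so 16 ranks fit with binding kept
    mpirun --use-hwthread-cpus -np 16 lammps ...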

Re: [OMPI users] --oversubscribe option

2018-06-06 Thread Gilles Gouaillardet
Mahmood, By default 1 slot = 1 core; that is why you need --oversubscribe or --use-hwthread-cpus to run 16 MPI tasks. It seems your lammps job benefits from hyperthreading. Some applications behave like this, and it is not odd a priori. Cheers, Gilles On Wednesday, June 6, 2018, r...@open-mpi…
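
To check how the node's physical cores and hardware threads line up with the slot count (generic Linux, nothing Open MPI specific):

    # physical cores vs hardware threads on the node
    lscpu | grep -E 'Socket|Core|Thread|^CPU\(s\)'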

Re: [OMPI users] error building openmpi-master-201806060243-64a5baa on Linux with Sun C

2018-06-06 Thread Jeff Squyres (jsquyres) via users
Siegmar -- I asked some Fortran gurus, and they don't think that there is any restriction on having ASYNCHRONOUS and INTENT on the same line. Indeed, Open MPI's definition of MPI_ACCUMULATE seems to agree with what is in MPI-3.1. Is this a new version of a Fortran compiler that you're using, p…

Re: [OMPI users] error building openmpi-master-201806060243-64a5baa on Linux with Sun C

2018-06-06 Thread Siegmar Gross
Hi Jeff, I asked some Fortran gurus, and they don't think that there is any restriction on having ASYNCHRONOUS and INTENT on the same line. Indeed, Open MPI's definition of MPI_ACCUMULATE seems to agree with what is in MPI-3.1. Is this a new version of a Fortran compiler that you're using, per…

Re: [OMPI users] error building openmpi-master-201806060243-64a5baa on Linux with Sun C

2018-06-06 Thread Nathan Hjelm
The bindings in v3.1.0 are incorrect. They are missing the asynchronous attribute. That will be fixed in v3.1.1. > On Jun 6, 2018, at 12:06 PM, Siegmar Gross > wrote: > > Hi Jeff, > >> I asked some Fortran gurus, and they don't think that there >> is any restriction on having ASYNCHRONOUS…