The issue was already fixed in v2.0.4.
>
>
> > On Jun 6, 2018, at 7:40 AM, Alexander Supalov <
> alexander.supa...@gmail.com> wrote:
> >
> > Thanks. This was not my question. I want to know if 2.0.2 was indeed
> faulty in this area.
> >
> > On Wed,
In the latest version, the default parameter is already optimal.
>
> Last but not least, the btl/tcp component uses all the available
> interfaces by default, so you might want to first restrict it to a
> single interface or subnet:
> mpirun --mca btl_tcp_if_include 192.168.0.0/24 ...
>
> Hope this helps.
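To expand on the interface restriction above, here is a command-line sketch (the subnet, the interface names, and the ./ping_pong binary are placeholders; note that btl_tcp_if_include and btl_tcp_if_exclude are mutually exclusive, so set only one of them):

```shell
# Restrict the TCP BTL to a single subnet, given in CIDR notation
mpirun --mca btl_tcp_if_include 192.168.0.0/24 -np 2 ./ping_pong

# Or name the interface directly instead of the subnet
mpirun --mca btl_tcp_if_include eth0 -np 2 ./ping_pong

# Alternatively, exclude interfaces known to be irrelevant
# (the loopback is excluded by default; keep it that way)
mpirun --mca btl_tcp_if_exclude lo,virbr0 -np 2 ./ping_pong
```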
Hi everybody,
I noticed that sockets do not seem to work properly in the Open MPI version
mentioned above. Intranode runs are OK. Internode, over 100-Mbit Ethernet,
I can go only as high as 32 KiB in a simple MPI ping-pong kind of
benchmark. Before I start composing a full bug report: is this another
known issue?
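A sketch of the kind of internode reproduction a bug report would need, assuming the OSU micro-benchmarks are built on both nodes (the node names and the benchmark path are placeholders; osu_latency sweeps message sizes on its own):

```shell
# One rank per node, forcing the TCP transport over the Ethernet link
mpirun -np 2 --host nodeA,nodeB --mca btl tcp,self ./osu_latency

# If sizes above 32 KiB stall, retrying with a larger eager limit can
# help show whether the eager/rendezvous switch-over is involved
mpirun -np 2 --host nodeA,nodeB --mca btl tcp,self \
       --mca btl_tcp_eager_limit 65536 ./osu_latency
```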
PS. No need to: apt-get install libnuma-dev resolved the issue. The
diagnostics and advice output were very helpful. Thanks!
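For anyone hitting the same build failure, a sketch of the fix above plus a quick check that process binding works afterwards (the install prefix is an assumption; hostname merely gives each rank something to run):

```shell
# Install the NUMA development headers that the binding code needs,
# then reconfigure and rebuild Open MPI
sudo apt-get install libnuma-dev
./configure --prefix=$HOME/openmpi && make -j4 && make install

# Verify binding: each rank reports the cores it was bound to
mpirun -np 2 --bind-to core --report-bindings hostname
```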
> On May 23, 2018, at 19:23, Alexander Supalov wrote:
>
> OK, I'll send it directly to your business address then. :)

On Wed, May 23, 2018 at 6:59 PM, Jeff Squyres (jsquyres) wrote:
> Any way you want to send it is fine. :-)
>
> > On May 23, 2018, at 12:06 PM, Alexander Supalov <
> alexander.supa...@gmail.com> wrote:
>
> Can you provide some more detail? The information listed here would be
> helpful:
>
> https://www.open-mpi.org/community/help/
>
>
>
> > On May 23, 2018, at 7:38 AM, Alexander Supalov <
> alexander.supa...@gmail.com> wrote:
> >
Hi everybody,
I've observed the process binding subpackage refusing to build, and hence
not working, on my perfectly sound Ubuntu 16.04 LTS with Open MPI 2.0.2
and 2.1.0. Is this a known old issue? A hint would be appreciated.
Best regards.
Alexander