The latest patch also causes a segfault...
By the way, I found a typo, shown below: &ca_pml_ob1.use_all_rdma in the last
line should be &mca_pml_ob1.use_all_rdma:
+mca_pml_ob1.use_all_rdma = false;
+(void) mca_base_component_var_register
(&mca_pml_ob1_component.pmlm_version, "use_all_rdma",
+
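For reference, with that fixed the registration would read roughly like the
sketch below; the description string, info level, and scope are my guesses at
typical values, not necessarily what the patch actually uses:

  mca_pml_ob1.use_all_rdma = false;
  /* register pml_ob1's "use_all_rdma" boolean; the storage pointer (last
     argument) is the corrected &mca_pml_ob1.use_all_rdma */
  (void) mca_base_component_var_register (&mca_pml_ob1_component.pmlm_version,
                                          "use_all_rdma",
                                          "Use all RDMA capable btls for the RDMA protocol"
                                          " (illustrative description)",
                                          MCA_BASE_VAR_TYPE_BOOL, NULL, 0, 0,
                                          OPAL_INFO_LVL_5, MCA_BASE_VAR_SCOPE_GROUP,
                                          &mca_pml_ob1.use_all_rdma);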
No problem. Thanks for reporting this. Not all platforms see a slowdown so we
missed it before the release. Let me know if that latest patch works for you.
-Nathan
> On Aug 8, 2016, at 8:50 PM, tmish...@jcity.maeda.co.jp wrote:
>
> I understood. Thanks.
>
> Tetsuya Mishima
>
> 2016/08/09 11:3
I understood. Thanks.
Tetsuya Mishima
On 2016/08/09 11:33:15, "devel" wrote in "Re: [OMPI devel] sm BTL performace of
the openmpi-2.0.0":
> I will add a control to have the new behavior of using all available RDMA
btls or just the eager ones for the RDMA protocol. The flags will remain as
they are. And
I will add a control to have the new behavior of using all available RDMA btls
or just the eager ones for the RDMA protocol. The flags will remain as they
are. And, yes, for 2.0.0 you can set the btl flags if you do not intend to use
MPI RMA.
New patch:
https://github.com/hjelmn/ompi/commit/43
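Once that is in, the control should be settable like any other MCA parameter,
presumably under the usual framework_component_variable name, e.g. something
like this (parameter name inferred from the registration snippet above, osu_bw
just as an example):

  mpirun --mca pml_ob1_use_all_rdma 1 -np 2 osu_bw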
Then, my understanding is that you will restore the default value of
btl_openib_flags to the previous one (= 310) and add a new MCA parameter to
control HCA inclusion for such a situation. The workaround so far for
openmpi-2.0.0 is setting those flags manually. Right?
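In other words, something like this on 2.0.0 (osu_bw just as an example here):

  mpirun --mca btl_openib_flags 310 -np 2 osu_bw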
Tetsuya Mishima
2016/08/09 9:5
Hmm, not good. So we have a situation where it is sometimes better to include
the HCA when it is the only RDMA btl. Will have a new version up in a bit that
adds an MCA parameter to control the behavior. The default will be the same as
1.10.x.
-Nathan
> On Aug 8, 2016, at 4:51 PM, tmish...@jci
Hi, unfortunately it doesn't work well. The previous one was much
better ...
[mishima@manage OMB-3.1.1-openmpi2.0.0]$ mpirun -np 2 -report-bindings
osu_bw
[manage.cluster:25107] MCW rank 0 bound to socket 0[core 0[hwt 0]], socket 0[core 1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]],
Ok, there was a problem with the selection logic when only one RDMA-capable btl
is available. I changed the logic to always use the RDMA btl over pipelined
send/recv. This works better for me on an Intel Omni-Path system. Let me know if
this works for you.
https://github.com/hjelmn/ompi/commit/d
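Conceptually the change amounts to something like the sketch below; this is not
the actual ob1 code and the helper names are made up:

  /* rough sketch of the revised selection idea (hypothetical helpers) */
  if (num_rdma_btls > 0) {
      /* at least one btl supports RDMA put/get: use the RDMA protocol
         for large messages instead of the pipelined protocol */
      use_rdma_protocol (sendreq);
  } else {
      /* no RDMA-capable btl available: fall back to pipelined send/recv */
      use_pipelined_send_recv (sendreq);
  }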
On Aug 08, 2016, at 05:17 AM, Paul Kapinos wrote:
Dear Open MPI developers,
there is already a thread about 'sm BTL performace of the openmpi-2.0.0'
https://www.open-mpi.org/community/lists/devel/2016/07/19288.php
and we also see 30% bandwidth loss, on communication *via InfiniBand*.
And we also have a
It would be useful to enable Java apps to interact with the resource manager,
if possible - I’d hate to see the Java environment singled out as lacking
capability.
> On Aug 8, 2016, at 8:08 AM, Pritchard Jr., Howard wrote:
>
> HI Ralph,
>
> If the java bindings are of use, I could see if my
HI Ralph,
If the java bindings are of use, I could see if my student who did a lot
of the recent work in the Open MPI java bindings would be interested.
He doesn't have a lot of extra cycles at the moment though.
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 8/7/16,
We have established that the bug does not occur unless an RDMA network is being
used (openib, ugni, etc). The fix has been identified and will be included in
the 2.0.1 release.
-Nathan
> On Aug 8, 2016, at 3:20 AM, Christoph Niethammer wrote:
>
> Hello Howard,
>
>
> If I use tcp I get sligh
Dear Open MPI developers,
there is already a thread about 'sm BTL performace of the openmpi-2.0.0'
https://www.open-mpi.org/community/lists/devel/2016/07/19288.php
and we also see 30% bandwidth loss, on communication *via InfiniBand*.
And we also have a clue: the IB buffers seem not to be aligned
Hello Howard,
If I use tcp I get slightly better results:
mpirun -np 2 --mca btl self,vader,tcp osu_bw
# OSU MPI Bandwidth Test
# Size          Bandwidth (MB/s)
1               5.23
2               11.27
4               22.46
8               44.88
16