Hello,
While investigating the memory management implementation in Open MPI, I found
that opal's memory module is licensed under Lesser GPL terms. This subsystem
is linked into the Open MPI library. As far as I know, this should impose the
Lesser GPL license on libopen-rte.so and libopen-pal.so. Could anybody
explain this?
On Feb 14, 2012, at 6:09 AM, Denis Nagorny wrote:
> Investigating memory management implementation in OpenMPI I found that opal's
> memory module licensed under Lesser GPL terms.
I assume you're referring to the ptmalloc implementation under
opal/mca/memory/linux, right?
If so, please read it.
2012/2/14 Jeff Squyres
> On Feb 14, 2012, at 6:09 AM, Denis Nagorny wrote:
>
> I assume you're referring to the ptmalloc implementation under
> opal/mca/memory/linux, right?
>
Yes, you are right.
> Specifically, see opal/mca/memory/linux/README-ptmalloc.txt
>
It seems that I was misled by the copyright notices.
I've built Open MPI 1.5.5rc1 (tarball from the Web) with CFLAGS=-O3.
Unfortunately, that also had no effect.
Here are some results with binding reports enabled:
$ mpirun --bind-to-core --report-bindings -np 2 ./all2all_ompi1.5.5
[n043:61313] [[56788,0],0] odls:default:fork binding child [[56788,1],1]
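As a cross-check on what mpirun reports, each rank can also print its own
affinity mask from inside the application. Here is a minimal sketch, assuming
Linux (sched_getaffinity); the program is illustrative only, not from this
thread:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    cpu_set_t mask;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Query this process's CPU affinity mask and list the cores it
     * is bound to, so the binding can be compared against the
     * --report-bindings output from mpirun. */
    if (sched_getaffinity(0, sizeof(mask), &mask) == 0) {
        printf("rank %d bound to cpus:", rank);
        for (int c = 0; c < CPU_SETSIZE; c++)
            if (CPU_ISSET(c, &mask))
                printf(" %d", c);
        printf("\n");
    }

    MPI_Finalize();
    return 0;
}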
See pp. 38-40: MVAPICH2 outperforms Open MPI on every test. Is there
something they are doing to optimize for CUDA and GPUs that is not in
OMPI, or did they specifically tune MVAPICH2 to make it shine?
http://hpcadvisorycouncil.com/events/2012/Israel-Workshop/Presentations
There are several things going on here that make their library perform better.
With respect to inter-node performance, both MVAPICH2 and Open MPI copy the GPU
memory into host memory first. However, they are using special host buffers
and a code path that allows them to copy the data asynchronously.
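To illustrate that technique (a sketch only; this is not Open MPI's or
MVAPICH2's actual code path): a pinned host buffer from cudaHostAlloc lets
cudaMemcpyAsync queue the device-to-host copy on a stream so it can overlap
with other host-side work:

/* Sketch of staging GPU data through pinned host memory so the copy
 * proceeds asynchronously -- an illustration of the technique
 * described above, not either library's implementation. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t len = 1 << 20;
    void *dev_buf, *host_buf;
    cudaStream_t stream;

    cudaMalloc(&dev_buf, len);
    /* Pinned (page-locked) host buffer: required for truly
     * asynchronous device-to-host copies. */
    cudaHostAlloc(&host_buf, len, cudaHostAllocDefault);
    cudaStreamCreate(&stream);

    /* The copy is queued on the stream; a real MPI library would
     * hand host_buf to its network layer in pipelined chunks while
     * the next chunk is still copying. */
    cudaMemcpyAsync(host_buf, dev_buf, len, cudaMemcpyDeviceToHost, stream);

    /* ... other host-side work could run here ... */

    cudaStreamSynchronize(stream);
    printf("copy complete\n");

    cudaFreeHost(host_buf);
    cudaFree(dev_buf);
    cudaStreamDestroy(stream);
    return 0;
}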
There was recently a fair amount of work done in hwloc to get configure
to work correctly for a probe that was intended to determine how many
arguments appear in a specific function prototype. The "issue" was that
the C spec doesn't require that the C compiler issue an error for either
too-many or too-few arguments.
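Roughly, such a probe works like this (a hypothetical sketch, not hwloc's
actual configure test; probed_func is a stand-in for the real function):

/* Hypothetical configure-style probe: compile one small program per
 * guessed argument count and take the guess that compiles cleanly.
 * The catch noted above: for a mismatched call the C standard only
 * mandates a diagnostic, which a compiler may emit as a warning
 * rather than an error, so a wrong guess can still "compile". */
int probed_func(int a, int b, int c) { return a + b + c; }

int main(void)
{
    /* Guess: the three-argument variant. */
    return probed_func(1, 2, 3) == 6 ? 0 : 1;
}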
I have configured the ompi-trunk (from last night's tarball:
1.7a1r25913) with --without-hwloc.
Having done so, I see the following failure at build time:
CC rmaps_rank_file_component.lo
/home/hargrove/OMPI/openmpi-trunk-linux-mips64el//openmpi-trunk/orte/mca/rmaps/rank_file/rmaps_rank_fi
On 2/14/2012 5:10 PM, Paul H. Hargrove wrote:
> I have configured the ompi-trunk (from last night's tarball:
> 1.7a1r25913) with --without-hwloc.
> Having done so, I see the following failure at build time:
> CC rmaps_rank_file_component.lo
> /home/hargrove/OMPI/openmpi-trunk-linux-mips64el//ope
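My guess (an assumption, not a diagnosis of the truncated error above) is
that the rank_file component references hwloc symbols without the usual
conditional guard. A standalone sketch of that guard pattern, with
OPAL_HAVE_HWLOC defined by hand so it compiles outside the tree:

#include <stdio.h>

/* In the real tree OPAL_HAVE_HWLOC comes from configure via
 * opal_config.h; define it here so the sketch builds standalone. */
#ifndef OPAL_HAVE_HWLOC
#define OPAL_HAVE_HWLOC 0   /* pretend we configured --without-hwloc */
#endif

int main(void)
{
#if OPAL_HAVE_HWLOC
    /* hwloc-based topology lookups and binding would go here. */
    printf("hwloc available: rank-file binding supported\n");
#else
    /* --without-hwloc build: no topology information is available,
     * so the component must take this path instead of referencing
     * hwloc symbols unconditionally. */
    printf("hwloc not available: rank-file binding unsupported\n");
#endif
    return 0;
}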