On Nov 28, 2014, at 11:58 AM, George Bosilca wrote:
> The same functionality can be trivially achieved at the user level using
> Adam's approach. If we provide a shortcut in Open MPI, we should emphasize
> this is an MPI extension, and offer the opportunity to other MPI to
No worries :)
2014-11-27 14:20 GMT+01:00 Jeff Squyres (jsquyres) :
Many thanks!
Note that it's a holiday week here in the US -- I'm only on for a short time
here this morning; I'll likely disappear again shortly until next week. :-)
On Nov 27, 2014, at 8:12 AM, Nick Papior Andersen wrote:
Sure, I will make the changes and commit to make them OMPI specific.
I will post forward my problems on the devel list.
I will keep you posted. :)
2014-11-27 13:58 GMT+01:00 Jeff Squyres (jsquyres) :
On Nov 26, 2014, at 2:08 PM, Nick Papior Andersen wrote:
> Here is my commit-msg:
> "
> We can now split communicators based on hwloc full capabilities up to BOARD.
> I.e.:
> HWTHREAD,CORE,L1CACHE,L2CACHE,L3CACHE,SOCKET,NUMA,NODE,BOARD
> where NODE is the same as SHARED.
>
Dear Ralph (all ;))
In regard to these posts, and since you have added this to your todo list:
I wanted to do something similar and implemented a "quick fix".
I wanted to create a communicator per node, and then create a window to
allocate an array in shared memory; however, I came up short in the