I am quite definitely of the opinion that a novice user should be able to
"./configure --prefix=blah; make -j 32 install && mpicc my_mpi_app.c && mpirun
a.out", and OMPI should generally do the Right Thing.
I'm not opposed to being able to set configure-time defaults, but I (fairly
strongly) be
On Oct 21, 2015, at 11:09 AM, Jeff Squyres (jsquyres) wrote:
> REVISION 2 (based on feedback in last 24 hours).
>
> Changes:
>
> - NETWORK instead of NETWORK_TYPE
> - Shared memory and process loopback are not affected by this CLI
> - Change the OPAL API usage.
>
> I actually like points 1-8
I’m not sure you are correct in stating that the simplest case “obviously”
still needs to work. In fact, there were several proposals on the last telecon
to the contrary, as it was unclear that the community would be able to agree on
defaults. So people suggested either requiring configuring with
On Oct 21, 2015, at 12:05 PM, Ralph Castain wrote:
>
> It seemed like this topic was straying, so I’m glad to hear that specifying
> nothing means we still execute.
Yes, I'm not trying to change the simple/easiest case. That obviously still
needs to work.
> My question remains, though: what
It seemed like this topic was straying, so I’m glad to hear that specifying
nothing means we still execute.
My question remains, though: what is the default? What are the default values
of the networks/qualifiers?
> On Oct 21, 2015, at 8:56 AM, Jeff Squyres (jsquyres) wrote:
>
> On Oct 21
On Oct 21, 2015, at 11:32 AM, Ralph Castain wrote:
>
> With all due respect, I think this still dodges the key question. Are we now
> saying that every user will be *required* to provide this info? If not, then
> what is the default?
>
> Let’s face it: the default is what 90+% of the world is
With all due respect, I think this still dodges the key question. Are we now
saying that every user will be *required* to provide this info? If not, then
what is the default?
Let’s face it: the default is what 90+% of the world is going to use. This all
seems rather complex to expect the averag
REVISION 2 (based on feedback in last 24 hours).
Changes:
- NETWORK instead of NETWORK_TYPE
- Shared memory and process loopback are not affected by this CLI
- Change the OPAL API usage.
I actually like points 1-8 below quite a bit. If implemented in ALL
BTLs/MTLs/etc., it can solve the "how d
On Oct 20, 2015, at 6:37 PM, Paul Hargrove wrote:
>
>
> I am suggesting that a user wishes to NOT USE a specific port at all.
> In other words, they want to "obstruct" all of the API paths that might reach
> that port.
> However, they do want to use some other port of the same type - which means
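For the record, that kind of per-port/per-interface exclusion is roughly what
the existing interface-selection MCA params already express (the device and
interface names below are made up):

  mpirun --mca btl_tcp_if_exclude lo,eth1 ./a.out      # never use this TCP interface
  mpirun --mca btl_openib_if_include mlx4_0:1 ./a.out  # only use port 1 of this HCA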
On Oct 21, 2015, at 8:27 AM, Atchley, Scott wrote:
>
>> 2. --enable would work similarly to our "include" MCA params: OMPI will *only*
>> use the network type(s) listed.
>
> In this scenario, will the user still need to “enable” off-node network, sm,
> and self? Or do you assume sm and self?
I
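For comparison, the existing include/exclude-style MCA params that the proposal
is being likened to behave like this today (as I understand them):

  mpirun --mca btl tcp,self ./a.out   # include list: *only* the listed BTLs are used,
                                      # so self (and sm/vader) must be named explicitly
  mpirun --mca btl ^tcp ./a.out       # exclude list: everything except the listed BTLs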
Hi Gilles,
My main concern is whether, if the user specifies InfiniBand per Jeff’s
proposal, they will get sm (of any flavor) and self.
Scott
On Oct 21, 2015, at 9:00 AM, Gilles Gouaillardet wrote:
> Scott and all,
>
> two btl are optimized (and work only) for intra node communicatio
Scott and all,
two BTLs are optimized (and work only) for intra-node communications: sm
and vader.
By "sm", I am not sure whether you mean the sm BTL specifically, or either/both
of the sm and vader BTLs.
From a user's point of view, and to disambiguate this, maybe we should use
the term "shm"
(which means the sm and/or vader BTL for
On Oct 20, 2015, at 4:45 PM, Jeff Squyres (jsquyres) wrote:
> On Oct 20, 2015, at 3:42 PM, Jeff Squyres (jsquyres) wrote:
>>
>> I'm guessing we'll talk about this at the Feb dev meeting, but we need to
>> think about this a bit beforehand. Here's a little more fuel for the fire:
>> let's
Hi Jeff, Hi Gilles,
Many thanks for your input and work. It is good to hear your opinion, and if
someone reports a related issue in the future, it may help that we already know
about this.
Best,
-Tobias
--
Dr.-Ing. Tobias Hilbrich
Research Assistant
Technische Universitaet Dresden, Germany
Tel.: +49 (351) 46
Andrej,
A load average of 700 is very curious.
I guess you already made sure the load average is zero when the system is idle ...
Are you running a hybrid app (e.g. MPI + OpenMP)?
One possible explanation is that you run 48 MPI tasks and each task has 48
OpenMP threads, and that kills performance.
wh
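If that 48 x 48 oversubscription is indeed what is happening, a quick sanity
check is to pin the OpenMP thread count explicitly (the app name is a placeholder):

  export OMP_NUM_THREADS=1   # one OpenMP thread per MPI task while testing
  mpirun -np 48 ./my_app     # 48 tasks x 1 thread instead of 48 x 48 = 2304 threads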