I have got to say I like the name ...
On Nov 17, 2011, at 11:34 AM, Barrett, Brian W wrote:
> On 11/17/11 6:29 AM, "Ralph Castain" wrote:
>
>> Frankly, the only vote that counts is Nathan's - it's his btl, and we
>> have never forcibly made someone rename their component. I would suggest
>> we
I agree with Brian
-Original Message-
From: Barrett, Brian W [mailto:bwba...@sandia.gov]
Sent: Friday, November 04, 2011 07:46 PM Eastern Standard Time
To: Open MPI Developers
Cc: Christopher Yeoh
Subject: Re: [OMPI devel] [OMPI svn-ful
I am in favor.
Rich
On Aug 16, 2011, at 11:51 AM, Jeff Squyres wrote:
> We talked about this on the call today.
>
> No one seemed to have any objections; I generally think that this is a good
> idea.
>
> Any other comments?
>
>
> On Aug 12, 2011, at 4:09 PM, Edgar Gabriel wrote:
>
>> WHA
The MPI forum is in the process of defining this - the work going on at ORNL is
in this context.
Rich
- Original Message -
From: N.M. Maclaren [mailto:n...@cam.ac.uk]
Sent: Friday, April 22, 2011 01:20 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] Adaptive or fault-tolerant MPI
Why go to all this effort, and not just fork 1.7 from the trunk, skipping the
whole merge process ? Seems like it would be much more prudent to spend time
on improving the code base, adding missing MPI support, etc., rather than
spending the time on a merge.
Rich
On 10/8/10 6:34 PM, "Jeff
Please add Cray XT-5
On 10/6/10 8:01 PM, "Jeff Squyres (jsquyres)" wrote:
Folks -- I kinda need an answer on this ASAP -- particularly the Solaris and
Windows parts.
Thanks.
On Oct 5, 2010, at 9:06 PM, Jeff Squyres wrote:
> Developers -- I'm updating the 1.5 README file. Are these section
laris but I don't imagine I will see
anything that will change my mind.
--td
Samuel K. Gutierrez wrote:
Hi Rich,
It's a modification to the existing common sm component. The modifications do
include the addition of a new POSIX shared memory facility, however.
Sam
On Aug 11, 2
Is this a modification of the existing component, or a new component ?
Rich
On 8/10/10 10:52 AM, "Samuel K. Gutierrez" wrote:
Hi,
I wanted to give everyone a heads-up about a new POSIX shared memory
component
that has been in the works for a while now and is ready to be pushed
into the
trunk.
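For those who have not used the POSIX facility referred to here, the core of it
is the shm_open()/ftruncate()/mmap() sequence; a minimal sketch (illustrative
only - the segment name, sizing, and error handling are not taken from the
component) looks like:

    #include <stddef.h>
    #include <fcntl.h>        /* O_CREAT, O_RDWR */
    #include <sys/mman.h>     /* shm_open(), mmap() */
    #include <sys/stat.h>     /* mode constants */
    #include <unistd.h>       /* ftruncate(), close() */

    void *map_segment(size_t size)
    {
        /* create or attach to a named POSIX shared memory object */
        int fd = shm_open("/example_seg", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return NULL;

        /* size it, then map it into this process's address space */
        if (ftruncate(fd, (off_t) size) != 0) { close(fd); return NULL; }
        void *base = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        close(fd);            /* the mapping remains valid after close() */
        return (base == MAP_FAILED) ? NULL : base;
    }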
Why do we need an RFC for this sort of component ? Seems self-contained.
Rich
On 8/3/10 6:59 AM, "Terry Dontje" wrote:
WHAT: Add new Solaris sysinfo component
WHY: To allow OPAL access to chip type and model information when running on
Solaris OS.
WHERE: opal/mca/sysinfo/solaris
WHEN:
Can you be a bit more explicit, please ?
I do not want this on our systems, so as long as this is a compile time
decision, and as long as this does not degrade the performance of the current
sm device, I will not object.
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Op
Jeff,
If I can't have that type of fine-grained control, it means that any new
development we do will have to replicate this functionality. While most users
don't need this sort of capability, it is essential for experimenting with
some algorithmic ideas as new code is being developed. We ac
: ABI break between 1.4 and 1.5 / .so versioning
On Feb 23, 2010, at 12:58 PM, Graham, Richard L. wrote:
> Will we still have the option to build individual libraries, if we opt for
> this ?
You will still have individual libraries; it's just that libopen-rte will
include libopen-pal,
Will we still have the option to build individual libraries, if we opt for
this ?
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Tue Feb 23 12:31:20 2010
Subject: Re: [OMPI devel] RFC: ABI break between 1.4 and 1.5 / .so versioning
No one has
Has someone managed to build ompi on Snow Leopard ? I am trying to build, and
it looks like configure does not detect the support for htonl and friends, so
it adds the definition.
static inline uint32_t htonl(uint32_t hostvar) { return hostvar; }
with the compiler proceeding to do a macro subst
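The fallback pattern in question looks roughly like the sketch below (the guard
macro names are illustrative, not the actual Open MPI symbols); the trouble is
that the system header supplies htonl as a macro, so when the configure check
misfires the macro gets substituted into the local definition:

    #include <stdint.h>
    #ifdef HAVE_ARPA_INET_H
    #include <arpa/inet.h>      /* system htonl()/ntohl(), often macros */
    #endif

    #ifndef HAVE_HTONL          /* illustrative configure-generated guard */
    /* big-endian-only fallback; breaks if the system macro is still visible,
     * because the macro is expanded inside this definition */
    static inline uint32_t htonl(uint32_t hostvar) { return hostvar; }
    #endif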
This makes sense to me, unless there are features on the trunk that
absolutely should not be in 1.5. This seems to be a far more manageable way
to handle 1.5 - much less error prone, and much less time consuming.
Rich
On 12/10/09 8:55 AM, "Keller, Rainer H." wrote:
>
> WHAT: Branch (again) f
I am running into a situation that I don’t understand, so I thought I would toss
it out and see if someone can give me a hint how to deal with what I am seeing.
I am making a call to MPI_Wait(), which ends up with the following call
sequence:
- ompi_request_default_wait()
- ompi_request_wait_
What happens if $sysconfdir/openmpi-priv-mca-params.conf is missing ?
Can the file name ( openmpi-priv-mca-params.conf ) also be configurable ?
Rich
On 9/3/09 5:23 AM, "Nadia Derbey" wrote:
What: Define a way for the system administrator to prevent users from
overwriting the default s
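For readers who have not looked at one, the MCA parameter files involved are
plain key = value text; the entries below are examples only, not part of the
proposal:

    # $sysconfdir/openmpi-priv-mca-params.conf (illustrative contents)
    # values set here would take precedence over per-user settings
    btl = ^tcp
    mpi_leave_pinned = 1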
I have several questions here - since process migration is an open research
question,
and there is more than one way to address the issue -
- Is this being implemented as a component, so that other approaches can be
used ?
- If so, what sort of component interface is being considered ?
- What
A question about library dependencies in the ompi build system. I am creating
a new ompi component that uses routines out of ompi/common/a and
ompi/common/b. How do I get routines from ompi/common/a to pick up the
symbols in ompi/common/b ? The symbol I am after is clearly in
libmca_com
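The usual Automake answer is to list the second convenience library in the
first one's LIBADD so libtool carries the dependency along; a hypothetical
Makefile.am fragment (the library names are placeholders modeled on the
ompi/common/a and ompi/common/b paths in the question, not real targets):

    # in ompi/common/a/Makefile.am: have a's library pull in b's symbols
    libmca_common_a_la_LIBADD = \
            $(top_builddir)/ompi/common/b/libmca_common_b.la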
re btl left, we abort the job. Next step is to
be able to re-establish the connection when the network is back.
Mouhamed
Graham, Richard L. wrote:
> What is the impact on sm, which is by far the most sensitive to latency? This
> really belongs in a place other than ob1. Ob1 is supposed to pr
ctionality resides in the
base, then perhaps we can avoid this problem.
Is it possible?
Ralph
On Aug 2, 2009, at 3:25 PM, Graham, Richard L. wrote:
>
>
>
> On 8/2/09 12:55 AM, "Brian Barrett" wrote:
>
> While I agree that performance impact (latency in this case) is
s that local
>> completion
>> implies remote delivery, the problem is simple to solve. If not, heavier
>> weight protocols need to be used to cover the range of ways failure
>> may manifest itself.
Rich
Thanks,
Brian
On Aug 1, 2009, at 6:21 PM, Graham, Richard L. wrote:
What is the impact on sm, which is by far the most sensitive to latency? This
really belongs in a place other than ob1. Ob1 is supposed to provide the
lowest latency possible, and other pml's are supposed to be used for heavier
weight protocols.
On the technical side, how do you distinguish be
This should go ahead right before the 1.5 changes, as discussed. Greg put out
the changes for testing several months ago, so there has been quite a
while to test, and he has done quite a bit of testing himself.
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Op
e
> performance impact.
>
> Brian
>
> On Feb 1, 2009, at 12:14 PM, Graham, Richard L. wrote:
>
>> Brian,
>> Just FYI, there is a weekly call - Thursdays at 4 EST where we have
>> been discussing these issues.
>> Let's touch base at the forum
Brian,
Just FYI, there is a weekly call - Thursdays at 4 EST where we have been
discussing these issues.
Let's touch base at the forum.
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Sun Feb 01 10:36:33 2009
Subject: Re: [OMPI devel] RFC: M
If all write to the same destination at the same time - yes. On older systems
you could start to see degradation around 6 procs, but things held up ok
further out. My guess is that you want one such queue per n procs, where n
might be 8 (have to experiment), so polling costs are low and memor
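To make that concrete, with one receive queue shared by every n senders the
receiver polls roughly nlocal/n queues instead of one per peer; a small sketch
of the index arithmetic (names made up, n fixed at the 8 floated above):

    #define PROCS_PER_QUEUE 8   /* the "n" above; would need experimentation */

    /* queue a sender with a given on-node rank writes into */
    static inline int queue_index(int sender_local_rank)
    {
        return sender_local_rank / PROCS_PER_QUEUE;
    }

    /* number of queues the receiver has to poll */
    static inline int num_queues(int nlocal)
    {
        return (nlocal + PROCS_PER_QUEUE - 1) / PROCS_PER_QUEUE;
    }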
No specific test, just an idea of how this might impact an app. I am guessing
it won't even be noticeable.
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Thu Dec 18 07:13:08 2008
Subject: Re: [OMPI devel] RFC: make predefined handles extern to poi
I have not looked at the code in a long time, so not sure how many things have
changed ... In general what you are suggesting is reasonable. However,
especially on large machines you also need to worry about memory locality, so
you should allocate from memory pools that are appropriately located.
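As one illustration of the locality point, on Linux a pool can be placed on the
NUMA node of the processes that will touch it, e.g. via libnuma (a sketch under
that assumption, not anything from the Open MPI code base):

    #include <numa.h>         /* libnuma; link with -lnuma */
    #include <stddef.h>

    /* allocate a pool on the NUMA node closest to its consumers */
    void *alloc_local_pool(size_t size, int node)
    {
        if (numa_available() < 0)
            return NULL;                       /* no NUMA support here */
        return numa_alloc_onnode(size, node);  /* release with numa_free() */
    }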
We have seen the same thing.
Rich
- Original Message -
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Fri Aug 22 13:32:32 2008
Subject: [OMPI devel] Still seeing hangs in OMPI 1.3
George:
We are still seeing hangs in OMPI 1.3 which I assume are due to the PML
issue.
Terry,
Are the performance numbers still with debugging turned on ? The sm latency
(trunk and tmp) is about 2.5x higher than I typically see. BTW, if the tmp
branch is coming in essentially the same, it looks like there is no
performance problem with the changes.
Rich
- Origina
I would second this - thread safety should be a 1.3 item, unless someone has a
lot of spare time.
Rich
-Original Message-
From: devel-boun...@open-mpi.org
To: Open MPI Developers
Sent: Mon Jun 11 10:44:33 2007
Subject: Re: [OMPI devel] threaded builds
On Jun 11, 2007, at 8:25 AM, Jef