Future developers? Code? What are you talking about???
This isn't in the code base, nor is it "code" - it is config options in the
private platform files for configuring clusters of contributors. We -never-
review what is in that area, leaving it up to their respective owners. The
contents of t
No big deal one way or the other. It's a symbolic gesture against bit
rot, I suppose. The fact is that there are different pieces of the code
base that move forward while vestiges of old stuff get left behind
elsewhere. At first, it's easier to leave that stuff in. With time,
the history ge...

On Mar 10, 2011, at 5:54 PM, Eugene Loh wrote:
> Ralph Castain wrote:
>
>> Just stale code that doesn't hurt anything
>>
> Okay, so it'd be all right to remove those lines. Right?
They are in my platform files - why are they a concern?
Just asking - we don't normally worry about people's platform files...

Ralph Castain wrote:
Just stale code that doesn't hurt anything
Okay, so it'd be all right to remove those lines. Right?
- frankly, I wouldn't look at platform files to try to get a handle on such
things as they tend to fall out of date unless someone needs to change it.
We always hard-code...

The idea would be to hardwire support for MPI_THREAD_MULTIPLE to be off,
just as we have done for progress threads. Threads might still be used
for other purposes -- e.g., ORTE, openib async thread, etc.
Ralph Castain wrote:
Can't speak to the MPI layer, but you definitely cannot hardwire thread support to "off" for ORTE.
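
As a side note on observable behavior, this kind of hardwiring is visible to applications through the "provided" argument of MPI_Init_thread. A minimal sketch using only the standard MPI API (not code from this thread):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for the highest thread level... */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* ...and check what the library actually granted.  A build with
       MPI_THREAD_MULTIPLE hardwired off would simply report a lower
       level (e.g. MPI_THREAD_SINGLE) rather than fail. */
    if (provided < MPI_THREAD_MULTIPLE) {
        printf("MPI_THREAD_MULTIPLE not available (provided=%d)\n", provided);
    }

    MPI_Finalize();
    return 0;
}
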
Can't speak to the MPI layer, but you definitely cannot hardwire thread support
to "off" for ORTE.
On Mar 10, 2011, at 10:57 AM, George Bosilca wrote:
>
> On Mar 10, 2011, at 11:23, Eugene Loh wrote:
>
>> Any comments on this?
>
> Good luck?
>
> george.
>
>
>> We wanted to clean up MPI_THREAD_MULTIPLE support...

Just stale code that doesn't hurt anything - frankly, I wouldn't look at
platform files to try to get a handle on such things as they tend to fall out
of date unless someone needs to change it.
We always hard-code progress threads to off because the code isn't thread safe
in key areas involving...

In the trunk, we hardwire progress threads to be off. E.g.,
% grep progress configure.ac
# Hardwire all progress threads to be off
enable_progress_threads="no"
[Hardcode the ORTE progress thread to be off])
[Hardcode the OMPI progress thread to be off])
So, h...
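
As an illustration of the mechanics (an assumption about the usual autoconf pattern, not an excerpt from the Open MPI tree): a hardwired configure variable like the one above typically feeds an AC_DEFINE, and the C code then compiles the thread out behind a preprocessor guard. The macro and function names below are made up for the sketch:

#include <pthread.h>

/* Illustrative macro name; the real define would come from configure. */
#if ENABLE_PROGRESS_THREADS
static pthread_t progress_thread;

static void *progress_loop(void *arg)
{
    /* Poll the progress engine here. */
    return arg;
}
#endif

int start_progress_thread(void)
{
#if ENABLE_PROGRESS_THREADS
    return pthread_create(&progress_thread, NULL, progress_loop, NULL);
#else
    return 0;   /* hardwired off: nothing to start */
#endif
}
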
On Mar 10 2011, Eugene Loh wrote:
Any comments on this? We wanted to clean up MPI_THREAD_MULTIPLE
support in the trunk and port these changes back to 1.5.x, but it's
unclear to me what our expectations should be about any
MPI_THREAD_MULTIPLE test succeeding. How do we assess (test) our
changes?...

On Mar 10, 2011, at 11:23, Eugene Loh wrote:
> Any comments on this?
Good luck?
george.
> We wanted to clean up MPI_THREAD_MULTIPLE support in the trunk and port
> these changes back to 1.5.x, but it's unclear to me what our expectations
> should be about any MPI_THREAD_MULTIPLE test succeeding...

If you're trying to make THREAD_MULTIPLE support better, I think that would be
great. If your simple test seems to fail over TCP with THREAD_MULTIPLE, then I
think it's pretty clear that it's broken / needs debugging.
Specifically: if we could have higher confidence in at least a few BTLs'
support...
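
A minimal sketch of the kind of "simple test" meant here, under the assumption of two ranks each running two threads doing concurrent tagged ping-pongs (standard MPI and pthreads calls only; not the actual test from the thread):

#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 2

static int rank;

/* Each thread does one tagged ping-pong with the peer rank; concurrent
   sends/recvs from several threads are exactly what MPI_THREAD_MULTIPLE
   promises to make safe. */
static void *pingpong(void *arg)
{
    int tag = (int)(long)arg;   /* one tag per thread */
    int buf = tag;
    int peer = 1 - rank;

    if (rank == 0) {
        MPI_Send(&buf, 1, MPI_INT, peer, tag, MPI_COMM_WORLD);
        MPI_Recv(&buf, 1, MPI_INT, peer, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        MPI_Recv(&buf, 1, MPI_INT, peer, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Send(&buf, 1, MPI_INT, peer, tag, MPI_COMM_WORLD);
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, size, i;
    pthread_t th[NTHREADS];

    /* Run with exactly two ranks, e.g.: mpirun -np 2 --mca btl tcp,self a.out */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size == 2 && provided >= MPI_THREAD_MULTIPLE) {
        for (i = 0; i < NTHREADS; i++)
            pthread_create(&th[i], NULL, pingpong, (void *)(long)i);
        for (i = 0; i < NTHREADS; i++)
            pthread_join(th[i], NULL);
        if (rank == 0) printf("concurrent ping-pong completed\n");
    } else if (rank == 0) {
        printf("need 2 ranks and MPI_THREAD_MULTIPLE (provided=%d)\n", provided);
    }

    MPI_Finalize();
    return 0;
}
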
Any comments on this? We wanted to clean up MPI_THREAD_MULTIPLE
support in the trunk and port these changes back to 1.5.x, but it's
unclear to me what our expectations should be about any
MPI_THREAD_MULTIPLE test succeeding. How do we assess (test) our
changes? Or, should we just hardwire thread support...

Ok.
I re-targeted all remaining 1.5.2 and 1.5.3 tickets to 1.5.4 and opened a
single blocker ticket for 1.5.3. I'm investigating the failure now...
(clearly we need to add a test for this MPI extension in MTT!)
On Mar 9, 2011, at 4:10 PM, Ken Lloyd wrote:
> Please, do.
>
> On Wed, 2011-03-0...

On Wed, 9 Mar 2011, George Bosilca wrote:
One gets multiple non-overlapping BTLs (in terms of peers), each with its
own set of parameters and eventually accepted protocols. Mainly, there
will be one BTL per memory hierarchy.
Pretty cool :-)
I'll clean up the code and send you a patch.
We'd be...