Gleb Natapov wrote:
On Thu, Sep 06, 2007 at 06:50:43AM -0600, Ralph H Castain wrote:
WHAT: Decide upon how to handle MPI applications where one or more
processes exit without calling MPI_Finalize
WHY: Some applications can abort via an exit call instead of
calling MPI_Abo
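For context, a minimal sketch (not from the RFC itself) of the scenario in question: one rank leaves via exit() without telling the MPI library, and the run time has to decide what to do with the ranks left behind. The rank number and barrier are illustrative assumptions.

/* One rank exits without MPI_Finalize or MPI_Abort; the rest block. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        exit(1);                 /* no MPI_Finalize, no MPI_Abort */
    }

    /* MPI_Abort(MPI_COMM_WORLD, 1) would instead tear the job down
     * explicitly; here the remaining ranks simply wait. */
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}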
http://dast.nlanr.net/Projects/Iperf2.0/patch-iperf-linux-2.6.21.txt
Scott
On Aug 31, 2007, at 1:36 PM, Terry D. Dontje wrote:
Ok, I have an update to this issue. I believe there is an
implementation difference of sched_yield between Linux and
Solaris. If
I change the sched_yield in opal_pro
nd CT-6 to not being
completely killed by the yield differences.
--td
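For what it's worth, a minimal sketch (not the actual Open MPI progress code) of the difference being discussed: on Linux 2.6, sched_yield() may return almost immediately when no other runnable thread shares the CPU, whereas a short explicit sleep always gives up the processor. The 100 microsecond interval below is an arbitrary illustration value.

#include <sched.h>
#include <time.h>

/* Hypothetical idle back-off for a polling progress loop. */
static void idle_backoff(int use_sleep)
{
    if (use_sleep) {
        /* nanosleep reliably yields the CPU for the requested interval. */
        struct timespec ts = { .tv_sec = 0, .tv_nsec = 100 * 1000 };
        nanosleep(&ts, NULL);
    } else {
        /* sched_yield semantics differ between Linux and Solaris; on Linux
         * a busy poller may keep the CPU and starve its peers. */
        sched_yield();
    }
}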
Li-Ta Lo wrote:
On Thu, 2007-08-30 at 12:45 -0400, terry.don...@sun.com wrote:
Li-Ta Lo wrote:
On Thu, 2007-08-30 at 12:25 -0400, terry.don...@sun.com wrote:
Li-Ta Lo wrote:
On Wed, 2007-08-29 at
hmmm, interesting since my version doesn't abort at all.
--td
Li-Ta Lo wrote:
On Wed, 2007-08-29 at 11:36 -0400, Terry D. Dontje wrote:
To run the code I usually do "mpirun -np 6 a.out 10" on a 2 core
system. It'll print out the following and then hang:
Targ
To run the code I usually do "mpirun -np 6 a.out 10" on a 2 core
system. It'll print out the following and then hang:
Target duration (seconds): 10.00
# of messages sent in that time: 589207
Microseconds per message: 16.972
--td
Terry D. Dontje wr
00, Gleb Natapov wrote:
Is this trunk or 1.2?
Oops. I should read more carefully :) This is trunk.
On Wed, Aug 29, 2007 at 09:40:30AM -0400, Terry D. Dontje wrote:
I have a program that does a simple bucket brigade of sends and receives
where rank 0
Trunk.
--td
Gleb Natapov wrote:
Is this trunk or 1.2?
On Wed, Aug 29, 2007 at 09:40:30AM -0400, Terry D. Dontje wrote:
I have a program that does a simple bucket brigade of sends and receives
where rank 0 is the start and repeatedly sends to rank 1 until a certain
amount of time has
I have a program that does a simple bucket brigade of sends and receives
where rank 0 is the start and repeatedly sends to rank 1 until a certain
amount of time has passed and then it sends an all-done packet.
Running this under np=2 always works. However, when I run with greater
than 2 usin
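A minimal sketch of the bucket-brigade pattern described above (not the original test program; the message size, tag, and all-done encoding are assumptions for illustration):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, msg = 0, done = -1;
    double duration = (argc > 1) ? atof(argv[1]) : 10.0;
    long count = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Rank 0 repeatedly sends to rank 1 until time runs out, then
         * sends the all-done packet. */
        double start = MPI_Wtime();
        while (MPI_Wtime() - start < duration) {
            MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            count++;
        }
        MPI_Send(&done, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("Target duration (seconds): %.2f\n", duration);
        printf("# of messages sent in that time: %ld\n", count);
        printf("Microseconds per message: %.3f\n",
               duration * 1e6 / (double)count);
    } else {
        /* Every other rank forwards what it receives to the next rank,
         * so each message travels down the brigade. */
        while (1) {
            MPI_Recv(&msg, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (rank + 1 < size) {
                MPI_Send(&msg, 1, MPI_INT, rank + 1, 0, MPI_COMM_WORLD);
            }
            if (msg == done) break;
        }
    }

    MPI_Finalize();
    return 0;
}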
I've tried to do a vpath configure on the vt-integration tmp branch and
get the following:
configure: Entering directory './tracing/vampirtrace'
/workspace/tdd/ct7/ompi-ws-vt//ompi-vt-integration/builds/ompi-vt-integration/configure:
line 144920: cd: ./tracing/vampirtrace: No such file or direc
Maybe a clarification of the SM BTL implementation is needed. Does the
SM BTL not set a limit based on np, using the max allowable as a
ceiling? If not, and all jobs are allowed to use up to the max allowable,
I see the reason for not wanting to raise the max allowable.
That being said it seems to
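In other words, the policy being asked about would look roughly like the following sketch (function and parameter names are made up for illustration):

#include <stddef.h>

/* Hypothetical sizing policy: scale the shared-memory allocation with the
 * number of local processes, but never exceed a fixed ceiling. */
static size_t sm_size_for_job(size_t per_proc_bytes, int np,
                              size_t max_allowable)
{
    size_t wanted = per_proc_bytes * (size_t)np;
    return (wanted < max_allowable) ? wanted : max_allowable;
}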
George Bosilca wrote:
Looks like I'm the only one barely excited about this idea. The
system that you described is well known. It has been around for about
10 years, and it's called PMI. The interface you have in the tmp
branch, as well as the description you gave in your email, are more
than
Nevermind my message below, things seem to be working for me now. Not
sure what happened.
--td
Terry D. Dontje wrote:
Rainer Keller wrote:
Hi Terry,
On Wednesday 22 August 2007 16:22, Terry D. Dontje wrote:
I thought I would run this by the group before trying to unravel the
Rainer Keller wrote:
Hi Terry,
On Wednesday 22 August 2007 16:22, Terry D. Dontje wrote:
I thought I would run this by the group before trying to unravel the
code and figure out how to fix the problem. It looks to me from some
experimentation that when a process matches an unexpected
I thought I would run this by the group before trying to unravel the
code and figure out how to fix the problem. It looks to me from some
experimentation that when a process matches an unexpected message,
the PERUSE framework incorrectly fires a
PERUSE_COMM_MSG_MATCH_POSTED_REQ in additio
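For context, a small sketch (plain MPI only, no PERUSE calls) of the two matching cases the events distinguish: a message that matches an already-posted receive versus one that lands in the unexpected queue first. The barriers are only an approximate way to force the ordering.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Case 1: the receive exists before the message arrives, so the message
     * matches a posted request (PERUSE_COMM_MSG_MATCH_POSTED_REQ). */
    if (rank == 1) MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    if (rank == 1) MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* Case 2: the message is sent before the matching receive is posted, so
     * it typically sits in the unexpected queue; the later receive matches an
     * unexpected message, and the posted-request event should not also fire. */
    if (rank == 0) MPI_Send(&buf, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 1) MPI_Recv(&buf, 1, MPI_INT, 0, 1, MPI_COMM_WORLD,
                            MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}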
I think the concept is a good idea. A few questions that come to mind:
1. Do you have a set of APIs you plan on supporting?
2. Are you planning on adding new APIs (not currently supported by ORTE)?
3. Do any of the ORTE replacement APIs differ in how they work?
4. Will RSL change in how we
Jeff Squyres wrote:
With Mellanox's new HCA (ConnectX), extremely low latencies are
possible for short messages between two MPI processes. Currently,
OMPI's latency is around 1.9us while all other MPIs (HP MPI, Intel
MPI, MVAPICH[2], etc.) are around 1.4us. A big reason for this
differ
Ralph Castain wrote:
WHAT: Proposal to add two new command line options that will allow us to
replace the current need to separately launch a persistent daemon to
support connect/accept operations
WHY: Remove problems of confusing multiple allocations, provide a cleaner
I think I've found a problem that is causing at least some of my runs of
the MT tests to abort or hang. The issue is that in the OB1 request
structure there is a req_send_range_lock that is never initialized with
the appropriate (pthread_)mutex_init call. I've put in the following
patch (give
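A minimal sketch of the kind of fix described (not the actual OB1 patch; the struct layout and constructor/destructor names here are illustrative), showing the lock being initialized when the request is constructed and torn down when it is released:

#include <pthread.h>

struct send_request {
    pthread_mutex_t req_send_range_lock;
    /* ... other request fields ... */
};

static void send_request_construct(struct send_request *req)
{
    /* Without this, the mutex holds whatever garbage was in memory, which
     * can show up as hangs or aborts in multi-threaded runs. */
    pthread_mutex_init(&req->req_send_range_lock, NULL);
}

static void send_request_destruct(struct send_request *req)
{
    pthread_mutex_destroy(&req->req_send_range_lock);
}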
This announcement is to request links to Binary Distributions of Open
MPI that our community may have on the web for users to download. We'd
like to take those links and post them on our download page to make it
easier for those who are interested in getting binaries to install and
not the so
Jeff Squyres wrote:
On Jul 10, 2007, at 1:26 PM, Ralph H Castain wrote:
2. It may be useful to have some high-level parameters to specify a
specific run-time environment, since ORTE has multiple, related
frameworks (e.g., RAS and PLS). E.g., "orte_base_launcher=tm", or
somesuch.
I
2007, at 6:38 AM, Terry D. Dontje wrote:
I am ok with the following as long as we can have some sort of
documentation describing what changed, like which old functions
are replaced with newer functions, and any description of changed
assumptions.
--td
Brian Barrett wrote:
On Jun 26, 2007, at
I am ok with the following as long as we can have some sort of
documentation describing what changed, like which old functions
are replaced with newer functions, and any description of changed
assumptions.
--td
Brian Barrett wrote:
On Jun 26, 2007, at 6:08 PM, Tim Prins wrote:
Some time ago yo
Rainer Keller wrote:
Hello dear all,
with the current numbering in mpif-common.h, the optional ddt
MPI_REAL2 will
break the binary compatibility of the Fortran interface from v1.2 to
v1.3
(see r15133).
Now, apart from MPI_REAL2 being of, let's say, rather minor importance,
the group
may fea
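A purely hypothetical illustration of the concern (the values below are made up and not the actual contents of mpif-common.h): if the Fortran handle values are consecutive integers, inserting MPI_REAL2 in the middle renumbers every constant after it, so code compiled against v1.2 would resolve to the wrong datatypes under v1.3.

/* Made-up handle values, for illustration only. */
#define HYPOTHETICAL_MPI_REAL    13
/* Inserting MPI_REAL2 here as 14 would shift everything below ...     */
#define HYPOTHETICAL_MPI_REAL4   14  /* ... so a v1.2 binary holding 14 */
#define HYPOTHETICAL_MPI_REAL8   15  /* would now mean a different type */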
Open MPI wrote:
#898: Move MPI exception man page fixes to v1.2
-------------------------------------+---------------
 Reporter: jsquyres                  | Owner:
     Type: changeset move request    | Status: new
 Priority: major                     |
-------------------------------------+---------------