Okay, this should finally be fixed. See the commit message for r23045 for an
explanation.
It really wasn't anything in the cited changeset that caused the problem. The
root cause is that $#@$ abort file we dropped in the session dir to indicate
you called MPI_Abort vs trying to thoroughly clean
On Apr 26, 2010, at 9:05 PM, Leo P. wrote:
> Hi Ralph,
>
> Is there some reason why you don't just use MPI_Comm_spawn? This is precisely
> what it was created to do. You can still execute it from a singleton, if you
> don't want to start your first process via mpirun (and is there some reason
Hi Ralph,
> Is there some reason why you don't just use MPI_Comm_spawn? This is precisely
> what it was created to do. You can still execute it from a singleton, if you
> don't want to start your first process via mpirun (and is there some reason why
> you don't use mpirun???).
The reason why I am u
I sincerely hope you are kidding :-)
Is there some reason why you don't just use MPI_Comm_spawn? This is precisely
what it was created to do. You can still execute it from a singleton, if you
don't want to start your first process via mpirun (and is there some reason why
you don't use
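As a minimal sketch of what that could look like from a singleton (the "./worker" executable and the process count are made-up placeholders, not anything from this thread):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;

    MPI_Init(&argc, &argv);

    /* Ask the runtime to start 4 copies of ./worker; they share a new
     * intercommunicator with this (parent) process. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

    /* ... exchange messages with the children over intercomm ... */

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}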
The ibm/final test does not call MPI_Abort directly. It is calling
MPI_Barrier after MPI_Finalize is called, which is a no-no. This is
detected, and eventually the library calls ompi_mpi_abort(). This is
very similar to MPI_Abort(), which ultimately calls ompi_mpi_abort() as
well. So, I guess I
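A minimal sketch, not the actual ibm/final source, of the illegal pattern being described here, an MPI call made after MPI_Finalize:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    MPI_Finalize();

    /* Erroneous: no MPI calls are allowed after MPI_Finalize.  Open MPI
     * detects this and ends up in ompi_mpi_abort(), much as an explicit
     * MPI_Abort() would. */
    MPI_Barrier(MPI_COMM_WORLD);
    return 0;
}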
I'll try to keep it in mind as I continue the errmgr work. I gather these tests
all call MPI_Abort?
On Apr 26, 2010, at 12:31 PM, Rolf vandeVaart wrote:
>
> With our MTT testing we have noticed a problem that has cropped up in the
> trunk. There are some tests that are supposed to return a n
With our MTT testing we have noticed a problem that has cropped up in
the trunk. There are some tests that are supposed to return a non-zero
status because they are getting errors, but are instead returning 0.
This problem does not exist in r23022 but does exist in r23023.
One can use the
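As an illustration only, not one of the actual MTT tests, a program like this is expected to make mpirun report a non-zero exit status rather than 0:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Abort with a non-zero error code; the launcher should propagate a
     * non-zero exit status back to the caller. */
    MPI_Abort(MPI_COMM_WORLD, 1);

    MPI_Finalize();  /* never reached */
    return 0;
}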
Hi Ralph,
Thank you for your response. Really appreciate it as usual. :)
> It depends - if you have an environment like slurm, sge, or torque, then we use
> that to launch our daemons on each node. Otherwise, we default to using ssh.
> Once the daemons are launched, we then tell the daemons what p
Just delete the offending line - the 1.5 ESS API doesn't contain it.
On Apr 26, 2010, at 6:22 AM, Jeff Squyres wrote:
> https://svn.open-mpi.org/trac/ompi/changeset/23025 broke the v1.5 branch; I
> get compile failures on Linux.
>
> -
> CC ess_singleton_module.lo
> ess_singleton_module
It depends - if you have an environment like slurm, sge, or torque, then we use
that to launch our daemons on each node. Otherwise, we default to using ssh.
Once the daemons are launched, we then tell the daemons what processes each is
to run. So it is a two-stage launch procedure.
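As a toy sketch of that two-stage procedure (every name below is a hypothetical illustration, not the real ORTE/PLM interface):

#include <stdio.h>

/* Stage 1: start one daemon per node, either through the resource
 * manager (slurm/sge/torque) or by falling back to ssh. */
static void start_daemon(const char *node, int have_resource_manager)
{
    if (have_resource_manager)
        printf("launch daemon on %s via the resource manager\n", node);
    else
        printf("launch daemon on %s via ssh\n", node);
}

/* Stage 2: tell each daemon which application processes it is to run. */
static void send_process_map(const char *node, int nprocs)
{
    printf("tell the daemon on %s to start %d local processes\n", node, nprocs);
}

int main(void)
{
    const char *nodes[] = { "node0", "node1" };
    for (int i = 0; i < 2; i++)
        start_daemon(nodes[i], 0);
    for (int i = 0; i < 2; i++)
        send_process_map(nodes[i], 4);
    return 0;
}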
On Apr 26,
https://svn.open-mpi.org/trac/ompi/changeset/23025 broke the v1.5 branch; I get
compile failures on Linux.
-
CC ess_singleton_module.lo
ess_singleton_module.c:89: error: ‘orte_ess_base_query_sys_info’ undeclared
here (not in a function)
ess_singleton_module.c:91: warning: excess elemen
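For illustration, a schematic of the fix suggested earlier in the thread ("just delete the offending line"), using hypothetical struct and function names rather than the real ESS interface: the initializer entry naming a hook that the v1.5 API never declares is removed, which clears both the undeclared-identifier error and the excess-elements warning.

/* Schematic only: hypothetical names, not the real ORTE ESS module type. */
typedef struct {
    int (*init)(void);
    int (*finalize)(void);
} example_ess_module_t;

static int singleton_init(void)     { return 0; }
static int singleton_finalize(void) { return 0; }

example_ess_module_t example_singleton_module = {
    singleton_init,
    singleton_finalize
    /* removed: an entry that named a query function the v1.5 ESS API
     * does not declare; it caused the error and warning shown above */
};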
Hi everyone,
I wanted to know how Open MPI launches an MPI process in a cluster environment.
I am assuming that for process lifecycle management it will be using rsh.
Any help would be greatly appreciated.