I apologize - I had to revert this myself as it broke support for backend
launches on Torque and SLURM, which a number of us are actively using.
Please see the commit message for r21488 for a complete explanation.
Thanks
Ralph
On Fri, Jun 19, 2009 at 8:36 PM, Ralph Castain wrote:
> I'm sorry, b
Having gone around in circles on hostfile-related issues for over five years
now, I honestly have little motivation to re-open the entire discussion
again. It doesn't seem to be that daunting a requirement for those who are
using it, so I'm inclined to just leave well enough alone.
:-)
On Fri, Ju
I'm sorry, but this change is incorrect.
If you look in orte/mca/ess/base/ess_base_std_orted.c, you will see that
-all- orteds, regardless of how they are launched, open and select the PLM.
This change causes rsh launched daemons to
doubly-open/select the PLM, which is a very bad idea.
Would you
A few of us have been discussing OMPI's performance competitiveness
recently, and the possibility of organizing an effort within the OMPI
community to better assess where we stand (potentially on an automated
basis, such as through MTT).

Does anyone have licenses for other MPI implementations
Ralph Castain wrote:
The two files have a slightly different format
Agreed.
and completely different meaning.
Somewhat agreed. They're both related to mapping processes onto a
cluster.
The hostfile specifies how many slots are on a node. The
rankfile specifies a rank and what node/slot it is t
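For readers following the distinction being drawn here, the two file formats look roughly like this (the hostnames are placeholders; the syntax matches Open MPI's hostfile and rankfile conventions of this era):

```
# hostfile: one node per line, each with a slot count
node0 slots=4
node1 slots=2

# rankfile: pins each MPI rank to a specific node and slot
rank 0=node0 slot=0
rank 1=node0 slot=1
rank 2=node1 slot=0
```

So the hostfile describes the resources available, while the rankfile prescribes an exact placement of each rank onto those resources, which is why the two carry different meanings even though both concern mapping.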
Hi Jeff,
Jeff Squyres wrote:
Greetings David.
I think we should have a more explicit note about MPI_REAL16 support in
the README.
This issue has come up before; see
https://svn.open-mpi.org/trac/ompi/ticket/1603.
If you read through that ticket, you'll see that I was unable to find a
C e
Greetings David.
I think we should have a more explicit note about MPI_REAL16 support
in the README.
This issue has come up before; see https://svn.open-mpi.org/trac/ompi/ticket/1603.
If you read through that ticket, you'll see that I was unable to find
a C equivalent type for REAL*16 w
Hi all,
I have compiled Open MPI 1.3.2 with Intel Fortran and C/C++ 11.0
compilers. Fortran Real*16 seems to be working except for MPI_Allreduce.
I have attached a simple program to show what I mean. I am not an MPI
programmer but I work for one and he actually wrote the attached
program. The
Ok; let me know what you find.
Thanks!
On Jun 19, 2009, at 7:16 AM, Sylvain Jeaugey wrote:
On Thu, 18 Jun 2009, Jeff Squyres wrote:
> On Jun 18, 2009, at 11:25 AM, Sylvain Jeaugey wrote:
>
>> My problem seems related to library generation through RPM, not with
>> 1.3.2, nor the patch.
>>
On Thu, 18 Jun 2009, Jeff Squyres wrote:
On Jun 18, 2009, at 11:25 AM, Sylvain Jeaugey wrote:
My problem seems related to library generation through RPM, not with
1.3.2, nor the patch.
I'm not sure I understand -- is there something we need to fix in our SRPM?
I need to dig a bit, but her
Hi Jeff,
All the information provided here helps me a lot.
Thank you, I really, really appreciate it. :)
Regards,
Leo P.
From: Jeff Squyres
To: Open MPI Developers
Sent: Friday, 19 June, 2009 5:05:59 AM
Subject: Re: [OMPI devel] some question about OMP