On Mar 23, 2011, at 4:20 PM, Gus Correa wrote:
> However, now when I do "ompi_info -a",
> the output shows the non-default value 1 twice in a row,
> then later it shows the default value 0 again!
It's because we wanted to confuse you!
;-)
Sorry about that; this is a legitimate bug. I've fixed
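As an aside for anyone chasing a similar discrepancy: instead of scanning the full "ompi_info -a" dump, a single framework's parameters can be queried directly. The framework name below is only an example, not the one from this thread:

```
# List the parameters of one framework instead of the full -a dump
# (1.4-era ompi_info syntax; "mpi" is just an example framework):
ompi_info --param mpi all
```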
Dear OpenMPI Pros
Why am I getting the parser error below?
It seems not to recognize comment lines (#).
This is OpenMPI 1.4.3.
The same error happens with the other compiler wrappers too.
However, the wrappers compile and produce an executable.
Thank you,
Gus Correa
Parser error:
$ mpicc hell
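For context on where that parser error comes from: the compiler wrappers read their flags from plain-text data files under the installation's share/openmpi directory (e.g. mpicc-wrapper-data.txt), and '#' lines in those files are supposed to be treated as comments. The sketch below only illustrates the general shape of such a file; the exact keys and libs vary by installation and should not be taken verbatim:

```
# mpicc-wrapper-data.txt -- lines starting with '#' are comments
project=Open MPI
language=C
compiler=gcc
libs=-lmpi -lopen-rte -lopen-pal -ldl -lm
```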
Ralph Castain wrote:
On Mar 21, 2011, at 9:27 PM, Eugene Loh wrote:
Gustavo Correa wrote:
Dear OpenMPI Pros
Is there an MCA parameter that would do the same as the mpiexec switch
'-bind-to-core'?
I.e., something that I could set up not in the mpiexec command line,
but for the whole cluster,
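As a sketch of the cluster-wide approach being asked about: in the 1.4 series, binding can, to my understanding, be enabled through an MCA parameter in the system-wide parameter file rather than on each mpiexec command line. The parameter name and file path below are my assumptions for 1.4.x, so verify them against ompi_info on the actual installation:

```
# $prefix/etc/openmpi-mca-params.conf  (system-wide MCA defaults; assumed path)
# Assumed 1.4-era equivalent of the mpiexec switch '-bind-to-core':
orte_process_binding = core
```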
Hi,
Thanks for your feedback and advice.
SELinux is currently disabled at runtime on all nodes as well as on the head
node.
So I don't believe this is the issue here.
I have indeed compiled Open MPI myself and haven't specified anything peculiar
other than a --prefix and --enable-mpirun-