Dear Jeff,
These new versions of the tgz files replace the previous ones:
I had used an outdated session environment. However, the
configuration and installation were OK again in each case.
Sorry for the noise caused by the previous tgz files.
Regards,
Jorge.
- Original message -
>
I was very confused by this bug report, because in my mind, a) this
functionality is on the SVN trunk, and b) we had moved the gcc functionality to
the v1.8 branch long ago.
I just checked the SVN/Trac records:
1. I'm right: this functionality *is* on the trunk. If you build the
I filed the following ticket:
https://svn.open-mpi.org/trac/ompi/ticket/4857
On Aug 12, 2014, at 12:39 PM, Jeff Squyres (jsquyres) wrote:
> (please keep the users list CC'ed)
>
> We talked about this on the weekly engineering call today. Ralph has an idea
> what is happening -- I need
Dear Jeff,
- Original message -
> From: "Jeff Squyres (jsquyres)"
> To: "Open MPI User's List"
> Sent: Monday, 11 August 2014 11:47:29
> Subject: Re: [OMPI users] Open MPI 1.8.1: "make all" error: symbol `Lhwloc1'
> is already defined
>
> The problem appears to be occurring in the
(please keep the users list CC'ed)
We talked about this on the weekly engineering call today. Ralph has an idea
what is happening -- I need to do a little investigation today and file a bug.
I'll make sure you're CC'ed on the bug ticket.
On Aug 12, 2014, at 12:27 PM, Timur Ismagilov wrote:
Hi Jeff,
On Tue, 2014-08-12 at 16:18 +0000, Jeff Squyres (jsquyres) wrote:
> Can you send the output from configure, the config.log file, and the
> ompi_config.h file?
Attached. configure.log comes from
(./configure --prefix=/usr/projects/eap/tools/openmpi/1.8.2rc3 2>&1) >
configure.log
Seem
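For anyone gathering the same diagnostics, a minimal sketch of capturing the
configure output and bundling it with config.log and ompi_config.h (the
archive name and the find-based paths are assumptions, not Open MPI
conventions):

  # capture everything configure prints (stdout and stderr)
  $ ./configure --prefix=/usr/projects/eap/tools/openmpi/1.8.2rc3 2>&1 | tee configure.log
  # config.log and ompi_config.h are generated in the build tree; locate and bundle them
  $ tar czf ompi-diagnostics.tgz configure.log $(find . -name config.log -o -name ompi_config.h)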
Can you send the output from configure, the config.log file, and the
ompi_config.h file?
On Aug 12, 2014, at 12:11 PM, Daniels, Marcus G wrote:
> On Tue, 2014-08-12 at 15:50 +0000, Jeff Squyres (jsquyres) wrote:
>> It should be in the 1.8.2rc tarball (i.e., to be included in the
>> soon-to-be
On Tue, 2014-08-12 at 15:50 +0000, Jeff Squyres (jsquyres) wrote:
> It should be in the 1.8.2rc tarball (i.e., to be included in the
> soon-to-be-released 1.8.2).
>
> Want to give it a whirl before release to let us know if it works for you?
>
> http://www.open-mpi.org/software/ompi/v1.8/
>
It should be in the 1.8.2rc tarball (i.e., to be included in the
soon-to-be-released 1.8.2).
Want to give it a whirl before release to let us know if it works for you?
http://www.open-mpi.org/software/ompi/v1.8/
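A rough sketch of giving an rc a whirl from that page (the exact tarball name
and URL path below are assumptions; use whichever rc is currently posted):

  $ wget http://www.open-mpi.org/software/ompi/v1.8/downloads/openmpi-1.8.2rc4.tar.gz
  $ tar xzf openmpi-1.8.2rc4.tar.gz && cd openmpi-1.8.2rc4
  $ ./configure --prefix=$HOME/openmpi-1.8.2rc4 2>&1 | tee configure.log
  $ make -j4 all && make install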
On Aug 12, 2014, at 11:44 AM, Daniels, Marcus G wrote:
> Hi,
>
> Looks lik
Hi,
Looks like the check for "GCC$ ATTRIBUTES NO_ARG_CHECK" is not there yet
-- a prerequisite for activating mpi_f08. Could it be added?
https://bitbucket.org/jsquyres/mpi3-fortran/commits/243ffae9f63ffc8fcdfdc604796ef290963ea1c4
Marcus
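As a quick (non-authoritative) way to see whether a given build picked up the
mpi_f08 bindings, one can grep the configure output; the exact wording of the
configure messages is an assumption and varies between versions:

  # look for the mpi_f08 / ignore-TKR / NO_ARG_CHECK results in the configure output
  $ grep -iE "mpi_f08" configure.log
  $ grep -iE "ignore tkr|no_arg_check" configure.log config.log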
You can improve performance by using --bind-to socket or --bind-to numa, as
this will keep each process inside the same memory region. You can also help
separate the jobs by using the --cpuset option to tell each job which CPUs it
should use - we'll stay within that envelope.
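A hedged example of what that could look like (the option is spelled --cpu-set
in the 1.8-series mpirun help; the core ranges and the ./job1 / ./job2
binaries are made up for illustration):

  # job 1: bind within a socket, restricted to cores 0-7
  $ mpirun --bind-to socket --cpu-set 0-7 -np 8 ./job1
  # job 2: a disjoint core set, so the two jobs never overlap
  $ mpirun --bind-to socket --cpu-set 8-15 -np 8 ./job2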
On Tue, Aug 12, 2014 at 8:33
On 12.08.2014 at 16:57, Antonio Rago wrote:
> Brilliant, this works!
> However I have to say that it seems the code becomes slightly
> less performant.
> Is there a way to instruct mpirun on which core to use, and maybe create this
> map automatically with grid engine?
In the open
I filed https://svn.open-mpi.org/trac/ompi/ticket/4856 to apply these ROMIO
patches.
Probably won't happen until 1.8.3.
On Aug 6, 2014, at 2:54 PM, Rob Latham wrote:
>
>
> On 08/06/2014 11:50 AM, Mohamad Chaarawi wrote:
>
>> To replicate, run the program with 2 or more procs:
>>
>> mpirun
Brilliant, this works!
However I have to say that it seems the code becomes slightly less
performant.
Is there a way to instruct mpirun on which core to use, and maybe create this
map automatically with grid engine?
Thanks in advance
Antonio
On 12 Aug 2014, at 14:10, Jeff Squyres
The quick and dirty answer is that in the v1.8 series, Open MPI started binding
MPI processes to cores by default.
When you run 2 independent jobs on the same machine in the way you
described, the two jobs won't have knowledge of each other, and therefore they
will both start binding
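One way to see that overlap (a sketch; ./a.out stands in for whatever the two
jobs actually run) is to ask each mpirun to report its bindings:

  # each independent mpirun starts binding from the same cores by default in 1.8
  $ mpirun --report-bindings -np 4 ./a.out
  $ mpirun --report-bindings -np 4 ./a.out    # second job on the same node lands on the same cores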
Are you running any kind of firewall on the node where mpirun is invoked? Open
MPI needs to be able to use arbitrary TCP ports between the servers on which it
runs.
This second mail seems to imply a bug in OMPI's oob_tcp_if_include param
handling, however -- it's supposed to be able to handle
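For reference, a sketch of the two things to check (the interface name and the
CentOS-style firewall command are assumptions; adapt them to the actual
cluster):

  # 1. make sure arbitrary TCP ports are open between the nodes, e.g. temporarily
  #    stop the firewall while testing (CentOS 6 style)
  $ service iptables stop
  # 2. restrict the out-of-band and TCP BTL traffic to a known-good interface
  $ mpirun --mca oob_tcp_if_include ib0 --mca btl_tcp_if_include ib0 -np 2 hello_c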
Lenny --
Since this is about the development trunk, how about sending these kinds of
mails to the de...@open-mpi.org list? Not all the OMPI developers are on the
user's list, and this very definitely is not a user-level question.
On Aug 12, 2014, at 6:12 AM, Lenny Verkhovsky wrote:
>
> Hi
When I add --mca oob_tcp_if_include ib0 (the InfiniBand interface) to mpirun (as
it was here: http://www.open-mpi.org/community/lists/users/2014/07/24857.php
), I got this output:
[compiler-2:08792] mca:base:select:( plm) Querying component [isolated]
[compiler-2:08792] mca:base:select:( plm) Quer
Hello!
I have Open MPI v1.8.2rc4r32485
When I run hello_c, I get this error message:
$mpirun -np 2 hello_c
An ORTE daemon has unexpectedly failed after launch and before
communicating back to mpirun. This could be caused by a number
of factors, including an inability to create a connection back
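If it helps with diagnosing, a hedged way to get more detail out of the failed
launch (the verbosity levels are arbitrary examples):

  # show how the remote daemons are launched and keep their output attached
  $ mpirun -np 2 --debug-daemons --mca plm_base_verbose 10 hello_c
  # trace the out-of-band TCP connections back to mpirun
  $ mpirun -np 2 --mca oob_base_verbose 10 hello_c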
Dear mailing list
I’m running into trouble in the configuration of the small cluster I’m managing.
I’ve installed openmpi-1.8.1 with gcc 4.7 on a Centos 6.5 with infiniband
support.
Compilation and installation were all OK and I can compile and actually run
parallel jobs, both directly and by submitting
Hi,
Config:
./configure --enable-openib-rdmacm-ibaddr --prefix /home/sources/ompi-bin
--enable-mpirun-prefix-by-default --with-openib=/usr/local --enable-debug
--disable-openib-connectx-xrc
Run:
/home/sources/ompi-bin/bin/mpirun -np 65 --host
ko0067,ko0069,ko0070,ko0074,ko0076,ko0079,ko0080,k