>
> I assume this
> executable doesn't have to be on the node - that would be silly
>
Not silly at all - we don't preposition the binary for you. It has to be
present on the node where it is to be executed.
I have added an option to preposition binaries in the OMPI developer's
trunk, but that fea
Hi again,
Since last time I made progress: I compiled Open MPI 1.3.3 from sources,
and now I'm trying to run it on one of my nodes. I am using the same
software on the master, but the master is Ubuntu 9.04 (NOT using Open MPI
1.3.2 from the repos) and the node is my own Linux system - it lacks many
features so
Hi Josh,
Thanks for the response. I am actually testing it on a single node
(though in the near future I will run it on a set of nodes). Therefore, my
application is running on the same machine as mpirun.
When I run the application and trigger the checkpointing mechanism from a
seper
On Fri, Sep 11, 2009 at 6:35 PM, Doug Reeder wrote:
> Andreas,
> Have you checked that ifort is creating 64-bit objects? If I remember
> correctly, with 10.1 the default was to create 32-bit objects.
>
>
I have not done so yet. I was under the impression that it was building
32-bit initially. Then
We built openmpi/1.3.2 with NAGWare 5.1; to make it build we used
the following options:
./configure --prefix=/home/software/rhel5/openmpi-1.3.2/nag-5.1 --with-tm=/usr/local/torque --with-openib=/usr CC=gcc CXX=g++ FC=f95 F77=f95 FCFLAGS="-w -dusty" --disable-shared --enable-static --mandir
Hi Jeff,
Jeff Squyres wrote:
On Sep 8, 2009, at 1:06 PM, Shaun Jackman wrote:
My INBOX has been a disaster recently. Please ping me repeatedly if
you need quicker replies (sorry! :-( ).
(btw, should this really be on the devel list, not the user list?)
It's tending that way. I'll keep the
The configuration looks fine, but from the stack it seems that the
segv is coming from an invalid free in BLCR (which seems odd to me).
Are you able to get a gdb backtrace from a core file generated from
this run? That would provide a bit more detail on where things are
going wrong.
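For anyone unfamiliar with pulling a backtrace out of a core file, the usual sequence looks roughly like this (the binary and core-file names are placeholders, not from the original report):

```shell
# Allow core dumps in the shell that launches the job, then re-run to crash:
ulimit -c unlimited

# Load the resulting core into gdb non-interactively and dump every
# thread's stack (useful here since the FT thread may be involved):
gdb -batch -ex "thread apply all bt full" ./my_mpi_app core
```

The full per-thread backtrace is what would show whether the invalid free is really happening inside BLCR or somewhere in the Open MPI C/R layer.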
What
Is your application running on the same machine as mpirun?
How did you configure Open MPI? Note that this program will not work
without the FT thread enabled, which would be one reason why it would
seem to hang (since it is waiting for the application to enter the MPI
library):
--enable-ft
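For reference, a 1.3-series checkpoint/restart build with the FT thread is typically configured along these lines; the BLCR path and prefix below are placeholders, and the exact flag spellings should be confirmed against `./configure --help` for your version:

```shell
# Hedged sketch: build Open MPI 1.3.x with BLCR checkpoint/restart
# support and the fault-tolerance thread enabled.
./configure --with-ft=cr \
            --enable-ft-thread \
            --enable-mpi-threads \
            --with-blcr=/usr/local/blcr \
            --prefix=/opt/openmpi-1.3.3-ft
make && make install
```

Afterwards, `ompi_info | grep crs` should list the BLCR component if it was built.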
The config.log looked fine, so I think you have fixed the configure
problem that you previously posted about.
Though the config.log indicates that the BLCR component is scheduled
for compile, ompi_info does not indicate that it is available. I
suspect that the error below is because the CRS
Hi Jeff,
Given that 16 nodes each have eth0, eth1 and eth2 interfaces, I tried
to run data transfers among them using mpirun, but without specifying
"btl_tcp_if_include". I got only a 15% increase in uni-directional data
transfer rate when using 3 links. But if I run two such processes
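For comparison, pinning the TCP BTL to specific interfaces is done with an MCA parameter on the mpirun command line; the interface names and benchmark program here are placeholders:

```shell
# Use all three NICs explicitly for the TCP BTL (sketch; adjust names):
mpirun --mca btl_tcp_if_include eth0,eth1,eth2 -np 16 ./my_benchmark

# Or instead exclude only the loopback/management interfaces:
mpirun --mca btl_tcp_if_exclude lo,eth0 -np 16 ./my_benchmark
```

Running both with and without the restriction makes it easier to tell whether Open MPI is actually striping traffic across all three links.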
Open MPI now has a new sub-project: Portable Hardware Locality
("hwloc").
hwloc represents the merger of libtopology and Portable Linux
Processor Affinity (PLPA). I described the project in a blog post here:
http://blogs.cisco.com/ciscotalk/performance/comments/announcing_hwloc_portable_ha
Dear All,
I'm having the following problem. If I execute the exact same
application using both openmpi and mpich2, the former takes more than
2 times as long. When we compared the ganglia output we could see that
openmpi generates more than 60 percent System CPU whereas mpich2 only
has about 5, th
With IFORT 10.x the compiler defaulted to 64-bit at install, but
there's a script that can be run to switch the compiler to 32-bit mode.
You may want to double check to make sure that it's in 64-bit mode by
executing "ifort -V".
Warner Yuen
Scientific Computing
Consulting Engineer
Apple, Inc.