Steve --
Hypothetically, there shouldn't be much you need to do. Open MPI and
the other MPIs all conform to the same user-level API, so
recompiling your app with Open MPI *should* be sufficient.
That being said, there are a few disclaimers...
1. Command line syntax for certain tools will l
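As a rough sketch of that recompile-and-run step (the file name and
process count below are placeholders, not taken from this thread):

  mpicc hello.c -o hello      # Open MPI's C wrapper compiler
  mpirun -np 4 ./hello        # launch with Open MPI's mpirun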
Wow! Thanks!! I didn't even see that a new version was available.
Thanks again!!
--jason
On 2/5/07, Brian Barrett wrote:
This was fixed in 1.1.4, along with some shared memory performance
issues on Intel Macs (32 or 64 bit builds).
Brian
On Feb 5, 2007, at 1:22 PM, Jason Martin wrote:
Hi All,
Using openmpi-1.1.3b3, I've been attempting to build Open MPI in
64-bit mode on a Mac Pro (dual Xeon 5150 2.66GHz
I managed to make it run by disabling the parameter checking.
I added --mca mpi_param_check 0 to mpirun and it works ok, so maybe
the problem is with the parameter checking code.
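For anyone trying the same workaround, the full command line would look
something like this (the executable name and process count here are only
placeholders):

  mpirun --mca mpi_param_check 0 -np 2 ./my_mpi_app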
On 2/2/07, Ivan de Jesus Deras Tabora wrote:
I've been checking the OpenMPI code, trying to find something, but
stil
Hi All,
Using openmpi-1.1.3b3, I've been attempting to build Open MPI in
64-bit mode on a Mac Pro (dual Xeon 5150 2.66GHz with 1G RAM).
Using the following configuration options:
./configure --prefix=/usr/local/openmpi-1.1.3b3 \
--build=x86_64-apple-darwin \
CFLAGS=-m64 CXXFLAGS=-m64 \
LDFLA
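One quick way to confirm that a build along these lines really came out
64-bit is to compile a trivial pointer-size check with the resulting
wrapper compiler; this sketch is an assumption, not part of the original
report:

/* checksize.c: report the pointer width the compiler produced */
#include <stdio.h>

int main(void)
{
    printf("%d-bit pointers\n", (int)(sizeof(void *) * 8));
    return 0;
}

Built with the installed wrapper (for example
/usr/local/openmpi-1.1.3b3/bin/mpicc checksize.c -o checksize), it should
print "64-bit pointers" if the -m64 flags took effect.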
Hello list,
I'm having difficulty running a simple hello world Open MPI
program over the Myrinet gm interconnect - please see the log at the end
of this email. The error is tripped by a call to the function
gm_global_id_to_node_id(
gm_btl->port,
gm_endpoint->endpoint_addr.global_id,
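For context, the kind of "hello world" MPI program being described is
usually just a few calls; the following is a sketch of what such a test
might look like, not the reporter's actual code:

/* hello.c: minimal MPI "hello world" of the sort described above */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("Hello from rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}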
Jeff,
Jeff Squyres wrote:
On Feb 5, 2007, at 12:19 PM, Joe Griffin wrote:
Thanks. As always ... you are a pearl.
FWIW -- credit Brian on this one. He did 98% of the work; I only did
the final validation of it. :-)
Thanks to Brian then also.
I will try that with 1.2.
Are you saying you'd like me to ship you the patch? Or you'll wait
f
On 2/5/07, Tom Rosmond wrote:
Have you looked at the self-scheduling algorithm described in "Using
MPI" by Gropp, Lusk, and Skjellum?
Yes, a master-slave mode should be better for my module. It would work like this:
1. The master is started on node 0 and executes the Python script; the
master sends command
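A minimal sketch of that self-scheduling (master/worker) pattern in C,
assuming integer task indices and integer results as stand-ins for
whatever the Python-driven master would actually dispatch:

/* Sketch of the self-scheduling (master/worker) pattern from "Using MPI".
 * Task indices and results are hypothetical ints standing in for real work. */
#include <mpi.h>
#include <stdio.h>

enum { TAG_WORK = 1, TAG_STOP = 2 };

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                     /* master runs on rank 0 */
        int ntasks = 100, task = 0, result, busy = 0;
        MPI_Status st;
        /* seed every worker with one task (or tell it to stop right away) */
        for (int w = 1; w < size; ++w) {
            if (task < ntasks) {
                MPI_Send(&task, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                ++task; ++busy;
            } else {
                MPI_Send(&task, 0, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }
        /* collect results; hand the next task to whichever worker is free */
        while (busy > 0) {
            MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            --busy;
            if (task < ntasks) {
                MPI_Send(&task, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                ++task; ++busy;
            } else {
                MPI_Send(&task, 0, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
    } else {                             /* workers loop until told to stop */
        int task, result;
        MPI_Status st;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP)
                break;
            result = task * task;        /* placeholder for real work */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}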
Jeff,
Thanks. As always ... you are a pearl.
I will try that with 1.2.
Joe
Jeff Squyres wrote:
Greetings Joe.
What we did was to make 2 sets of environment variables that you can
use to modify OMPI's internal path behavior:
1. OPAL_DESTDIR: If you want to use staged builds (a la RPM builds),
you can set the environment variable OPAL_DESTDIR. For example:
./configure --prefix=/
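As a hedged example of the staged-build usage described above (the
staging directory /tmp/stage is purely hypothetical):

  make DESTDIR=/tmp/stage install    # standard automake staged install
  export OPAL_DESTDIR=/tmp/stage     # point Open MPI at the staged tree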
Hi Ralph,
Thanks for the reply. The OpenMPI version is 1.2b2 (because I would like
to integrate it with SGE).
Here is what is happening:
(1) When I run with -debug-daemons (but WITHOUT -d), I get "Daemon
[0,0,27] checking in as pid 7620 on host blade28" (for example) messages
for mo
This is very odd. The two error messages you are seeing are side
effects of the real problem, which is that Open MPI is segfaulting
when built with the Intel compiler. We've had some problems with
bugs in various versions of the Intel compiler -- just to be on the
safe side, can you make
Thanks for the suggestion, and I should have mentioned it, but there is
no firewall set up on any of the compute nodes. The only firewall
restrictions are on the head node, on the eth interface to the outside world.
Also, as I mentioned, I was able to build and run this test app
successfully usin