Hi all,
I am experiencing a rather unpleasant issue with a simple OpenMPI app. I have 4
nodes communicating with a central node. Performance is good and the
application behaves as it should (i.e., performance steadily decreases as I
increase the work size). My problem is that immediately after
From: Jeff Squyres
To: adrian sabou
Sent: Friday, February 3, 2012 12:30 PM
Subject: Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking
On Feb 3, 2012, at 5:21 AM, adrian sabou wrote:
> There is no iptables in my /etc/init.d.
It might be different in dif
out. I am grateful!
Adrian
From: Jeff Squyres
To: adrian sabou; Open MPI Users
Sent: Thursday, February 2, 2012 11:19 PM
Subject: Re: [OMPI users] OpenMPI / SLURM -> Send/Recv blocking
When you run without a hostfile, you're likely only running on
cluster are allowed to open TCP connections from any port to any other port.
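For reference, an Open MPI hostfile is just a list of node names, one per line
(the names below are hypothetical), passed to mpirun with --hostfile:

    node01 slots=1
    node02 slots=1
    node03 slots=1

And to check whether a firewall is in the way, you can list the active rules on
each node with "iptables -L -n" (run as root); an empty rule set or a blanket
ACCEPT policy means iptables is not blocking Open MPI's TCP connections.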
On Feb 2, 2012, at 4:49 AM, adrian sabou wrote:
> Hi,
>
> The only example that works is hello_c.c. All others that use MPI_Send and
> MPI_Recv (connectivity_c.c and ring_c.c) block after the first MPI_Send.
2.1.0. It is also worth mentioning that all examples work when not using SLURM
(launching with "mpirun -np 5 <executable>"). Blocking occurs only when I try
to run on multiple hosts with SLURM ("salloc -N5 mpirun <executable>").
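To make the two launch modes concrete (assuming the compiled examples sit in
the current directory), the working and failing invocations look like:

    # works: plain mpirun, all 5 processes on the local host
    mpirun -np 5 ./ring_c

    # blocks after the first MPI_Send: mpirun inside a 5-node SLURM allocation
    salloc -N5 mpirun ./ring_c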
Adrian
From: Jeff Squyres
To: adrian sabou
Hi All,
I'm having this weird problem when running a very simple OpenMPI application.
The application sends an integer from the rank 0 process to the rank 1 process.
The sequence of code that I use to accomplish this is the following:
if (rank == 0)
{
    printf("Process %d - Sending...\n", rank);
    MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
}
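Below is a minimal, self-contained sketch of the whole exchange the message
describes (the variable name "value", the payload, and the tag are illustrative
choices, not necessarily the original code):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        int value = 42; /* illustrative payload */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* rank 0 sends one integer to rank 1 with tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            printf("Process %d - Sent %d.\n", rank, value);
        } else if (rank == 1) {
            /* rank 1 blocks here until the matching send arrives */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Process %d - Received %d.\n", rank, value);
        }

        MPI_Finalize();
        return 0;
    }

Compiled with "mpicc send_recv.c -o send_recv" and launched as above, ranks 0
and 1 should complete immediately; if MPI_Send blocks here, a blocked TCP
connection between the two hosts is a likely culprit, as suggested earlier in
the thread.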