[OMPI users] SGE and OpenMPI integration

2007-01-29 Thread Heywood, Todd
I have sent the following experiences to the SGE mailing list, but I
thought I would also try here...

 

I have been trying out version 1.2b2 for its integration with SGE. The
simple "hello world" test program works fine by itself, but there are
issues when submitting it to SGE.
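
(For context, the job script is just a thin wrapper around mpirun. A minimal
sketch of the sort of script being submitted -- the actual hello.sh may differ,
$NSLOTS is the slot count SGE hands the job, and the program path is a
placeholder:)

#!/bin/sh
#$ -cwd
#$ -S /bin/sh
# SGE sets NSLOTS to the number of slots granted by the "mpi" parallel environment
mpirun -np $NSLOTS ./hello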

 

For small numbers of tasks, for SOME runs, I get errors for each of the
non-master tasks, and they are all one of the following:

 

error: commlib error: got read error (closing
"blade27.bluehelix.cshl.edu/execd/1")

 

error: commlib error: can't read general message size header (GMSH)
(closing "blade221.bluehelix.cshl.edu/execd/1")

 

When I repeat runs, these errors tend to go away, as if the first time a
node runs a job it coughs on it, but it is OK for subsequent jobs.
I do get the correct output.

 

Things change when I try a large job, say 400 tasks. I get loads of GMSH
errors, but NO output, and SGE's qstat command aborts:

 

[heywood@blade1 ompi]$ qsub -pe mpi 400 hello.sh

Your job 8239 ("hello.sh") has been submitted

[heywood@blade1 ompi]$ qstat -t

critical error: unrecoverable error - contact systems manager

Aborted

[heywood@blade1 ompi]$

 

I then have to qdel the job from another window.

 

If anyone has seen anything like this, I'd be interested in hearing.
Since the errors are coming from SGE's communication library, I did
increase the file descriptor limit (ulimit -n 65536), but it made no
difference.
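
(For the record, the adjustment was of this flavor; the value 65536 is just the
one used here, and the limits.conf lines are only a sketch of how such a limit
is usually made persistent, not something verified to fix this:)

ulimit -n 65536        # per-shell limit, set before launching the job
# persistent per-user limits normally live in /etc/security/limits.conf, e.g.:
#   *   soft   nofile   65536
#   *   hard   nofile   65536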

 

Thanks,

 

Todd Heywood

 



[OMPI users] large jobs hang on startup (deadlock?)

2007-02-02 Thread Heywood, Todd
I have OpenMPI running fine for a small/medium number of tasks (simple
hello or cpi program). But when I try 700 or 800 tasks, it hangs,
apparently on startup. I think this might be related to LDAP, since if I
try to log into my account while the job is hung, I get told my username
doesn't exist. However, I tried adding -debug to the mpirun, and got the
same sequence of output as for successful smaller runs, until it hung
again. So I added --debug-daemons and got this (with an exit, i.e. no
hanging):

...

[blade1:31733] [0,0,0] wrote setup file


--

The rsh launcher has been given a number of 128 concurrent daemons to
launch and is in a debug-daemons option. However, the total number of
daemons to launch (200) is greater than this value. This is a scenario that
will cause the system to deadlock.

To avoid deadlock, either increase the number of concurrent daemons, or
remove the debug-daemons flag.


--

[blade1:31733] [0,0,0] ORTE_ERROR_LOG: Fatal in file
../../../../../orte/mca/rmgr/urm/rmgr_urm.c at line 455
[blade1:31733] mpirun: spawn failed with errno=-6
[blade1:31733] sess_dir_finalize: proc session dir not empty - leaving

 

Any ideas or suggestions appreciated.

 

Todd Heywood

 

 

 

 



Re: [OMPI users] large jobs hang on startup (deadlock?)

2007-02-05 Thread Heywood, Todd
Hi Ralph,

 

Thanks for the reply. The OpenMPI version is 1.2b2 (because I would like
to integrate it with SGE).

 

Here is what is happening:

 

(1) When I run with -debug-daemons (but WITHOUT -d), I get "Daemon
[0,0,27] checking in as pid 7620 on host blade28" (for example) messages
for most but not all of the daemons that should be started up, and then
it hangs. I also notice "reconnecting to LDAP server" messages in
various /var/log/secure files, and cannot login while things are hung
(with "su: pam_ldap: ldap_result Can't contact LDAP server" in
/var/log/messages). So apparently LDAP is hitting some limit on the number of
ssh sessions being opened, and I'm not sure how to address this.

(2) When I run with -debug-daemons AND the debug option -d, all
daemons start up and check in, albeit slowly (the debug output must slow
things down enough for LDAP to handle all the requests?). Then apparently, the
cpi process is started for each task, but it then hangs:

 

[blade1:23816] spawn: in job_state_callback(jobid = 1, state = 0x4)

[blade1:23816] Info: Setting up debugger process table for applications

  MPIR_being_debugged = 0

  MPIR_debug_gate = 0

  MPIR_debug_state = 1

  MPIR_acquired_pre_main = 0

  MPIR_i_am_starter = 0

  MPIR_proctable_size = 800

  MPIR_proctable:

(i, host, exe, pid) = (0, blade1, /home4/itstaff/heywood/ompi/cpi, 24193)

...

(i, host, exe, pid) = (799, blade213, /home4/itstaff/heywood/ompi/cpi, 4762)

 

A "ps" on the head node shows 200 open ssh sessions, and 4 cpi processes
doing nothing. A ^C gives this:

 

mpirun: killing job...

 


--

WARNING: A process refused to die!

 

Host: blade1

PID:  24193

 

This process may still be running and/or consuming resources.

 

 

 

Still got a ways to go, but any ideas/suggestions are welcome!

 

Thanks,

 

Todd

 

 



From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Friday, February 02, 2007 5:20 PM
To: Open MPI Users
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

 

Hi Todd

To help us provide advice, could you tell us what version of OpenMPI you
are using?

Meantime, try adding "-mca pls_rsh_num_concurrent 200" to your mpirun
command line. You can up the number of concurrent daemons we launch to
anything your system will support - basically, we limit the number only
because some systems have limits on the number of ssh calls we can have
active at any one time. Because we hold stdio open when running with
-debug-daemons, the number of concurrent daemons must match or exceed
the number of nodes you are trying to launch on.
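
For example, something along these lines (program name, process count, and
hostfile are placeholders):

mpirun -np 800 -hostfile myhosts -mca pls_rsh_num_concurrent 200 --debug-daemons ./cpi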

I have a "fix" in the works that will help relieve some of that
restriction, but that won't come out until a later release.

Hopefully, that will allow you to obtain more debug info about why/where
things are hanging.

Ralph


On 2/2/07 11:41 AM, "Heywood, Todd"  wrote:

I have OpenMPI running fine for a small/medium number of tasks (simple
hello or cpi program). But when I try 700 or 800 tasks, it hangs,
apparently on startup. I think this might be related to LDAP, since if I
try to log into my account while the job is hung, I get told my username
doesn't exist. However, I tried adding -debug to the mpirun, and got the
same sequence of output as for successful smaller runs, until it hung
again. So I added --debug-daemons and got this (with an exit, i.e. no
hanging):
...
[blade1:31733] [0,0,0] wrote setup file

--
The rsh launcher has been given a number of 128 concurrent daemons to
launch and is in a debug-daemons option. However, the total number of
daemons to launch (200) is greater than this value. This is a scenario
that
will cause the system to deadlock.
 
To avoid deadlock, either increase the number of concurrent daemons, or
remove the debug-daemons flag.

--
[blade1:31733] [0,0,0] ORTE_ERROR_LOG: Fatal in file
../../../../../orte/mca/rmgr/urm/
rmgr_urm.c at line 455
[blade1:31733] mpirun: spawn failed with errno=-6
[blade1:31733] sess_dir_finalize: proc session dir not empty - leaving
 
Any ideas or suggestions appreciated.
 
Todd Heywood
 
 

 



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

 



Re: [OMPI users] large jobs hang on startup (deadlock?)

2007-02-06 Thread Heywood, Todd
Hi Ralph,

Thanks for the reply. This is a tough one. It is OpenLDAP. I had thought that I
might be hitting a file descriptor limit for slapd (the LDAP daemon), which ulimit
-n does not affect (you have to rebuild OpenLDAP with a different FD_SETSIZE
value). However, I simply turned on more verbose logging to
/var/log/slapd, and that resulted in smaller jobs (which successfully ran
before) hanging. Go figure. It appears that the daemons are up and running (from
ps), and everything hangs in MPI_Init. Ctrl-C gives

[blade1:04524] ERROR: A daemon on node blade26 failed to start as expected.
[blade1:04524] ERROR: There may be more information available from
[blade1:04524] ERROR: the remote shell (see above).
[blade1:04524] ERROR: The daemon exited unexpectedly with status 255.

I'm interested in any suggestions, semi-fixes, etc. which might help get to the
bottom of this. Right now I'd like to determine whether the daemons are indeed all
up and running, or whether some are not (causing MPI_Init to hang).

Thanks,

Todd

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Ralph H Castain
Sent: Tuesday, February 06, 2007 8:52 AM
To: Open MPI Users 
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

Well, I can't say for sure about LDAP. I did a quick search and found two
things:

1. there are limits imposed in LDAP that may apply to your situation, and

2. that statement varies tremendously depending upon the specific LDAP
implementation you are using

I would suggest you see which LDAP you are using and contact the respective
organization to ask if they do have such a limit, and if so, how to adjust
it.

It sounds like maybe we are hitting the LDAP server with too many requests
too rapidly. Usually, the issue is not starting fast enough, so this is a
new one! We don't currently check to see if everything started up okay, so
that is why the processes might hang - we hope to fix that soon. I'll have
to see if there is something we can do to help alleviate such problems -
might not be in time for the 1.2 release, but perhaps it will make a
subsequent "fix" or, if you are willing/interested, I could provide it to
you as a "patch" you could use until a later official release.

Meantime, you might try upgrading to 1.2b3 or even a nightly release from
the trunk. There are known problems with 1.2b2 (which is why there is a b3
and soon to be an rc1), though I don't think that will be the problem here.
At the least, the nightly trunk has a much better response to ctrl-c in it.

Ralph



Re: [OMPI users] large jobs hang on startup (deadlock?)

2007-02-06 Thread Heywood, Todd
Hi Ralph,

It looks that way. I created a user local to each node, with local 
authentication via /etc/passwd and /etc/shadow, and OpenMPI scales up just fine 
for that.
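
(In case it helps anyone else, the workaround amounts to something like the
following on every node; the username is arbitrary and this is only a sketch:)

useradd -m mpitest     # local account in /etc/passwd, not LDAP
passwd mpitest
# then submit/run the MPI job as that local user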

I know this is an OpenMPI list, but does anyone know how common or uncommon 
LDAP-based clusters are? I would have thought this issue would have arisen 
elsewhere, but Googling MPI+LDAP (and similar) doesn't turn up much.

I'd certainly be willing to test any patch. Thanks.

Todd

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Ralph H Castain
Sent: Tuesday, February 06, 2007 9:54 AM
To: Open MPI Users 
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

It sounds to me like we are probably overwhelming your slapd - your test
would seem to indicate that slowing down the slapd makes us fail even with
smaller jobs, which tends to support that idea.

We frankly haven't encountered that before since our rsh tests have all been
done using non-LDAP authentication (basically, we ask that you setup rsh to
auto-authenticate on each node). It sounds like we need to add an ability to
slow down so that the daemon doesn't "fail" due to authentication timeout
and/or slapd rejection due to the queue being full.

This may take a little time to fix due to other priorities, and will almost
certainly have to be released in a subsequent 1.2.x version. Meantime, I'll
let you know when I get something to test - would you be willing to give it
a shot if I provide a patch? I don't have access to an LDAP-based system.

Ralph



Re: [OMPI users] large jobs hang on startup (deadlock?)

2007-02-07 Thread Heywood, Todd
Hi Ralph,

Unfortunately, adding "-mca pls_rsh_num_concurrent 50" to mpirun (with just -np 
and -hostfile) has no effect. The number of established connections for slapd 
grows to the same number at the same rate as without it. 

BTW, I upgraded from 1.2b2 to 1.2b3.

Thanks,

Todd

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Ralph Castain
Sent: Tuesday, February 06, 2007 6:48 PM
To: Open MPI Users
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

Hi Todd

Just as a thought - you could try not using --debug-daemons or -d and
instead setting "-mca pls_rsh_num_concurrent 50" or some such small number.
This will tell the system to launch 50 ssh calls at a time, waiting for each
group to complete before launching the next. You can't use it with
--debug-daemons as that option prevents the ssh calls from "closing" so that
you can get the output from the daemons. You can still launch as big a job
as you like - we'll just do it 50 ssh calls at a time.

If we are truly overwhelming the slapd, then this should alleviate the
problem.

Let me know if you get to try it...
Ralph



Re: [OMPI users] large jobs hang on startup (deadlock?)

2007-02-07 Thread Heywood, Todd
Hi Ralph,

Patience is not an issue since I have a workaround (a locally authenticated 
user), and other users are not running large enough MPI jobs to hit this 
problem.

I'm a bit confused now though. I thought that setting this switch would set off 
50 ssh sessions at a time, or 50 connections to slapd. I.e. a second group of 
50 connections wouldn't initiate until the first group "closed" their sessions, 
which should be reflected by a corresponding decrease in the number of 
established connections for slapd. So my conclusion was that no sessions are 
"closing".

There's also the observation that when slapd is slowed down by (extensive)
logging, things hang with a smaller number of established connections (open
ssh sessions). I don't see how this fits with a limit on the total number of
connections.

Thanks,

Todd

-Original Message-
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf 
Of Ralph Castain
Sent: Wednesday, February 07, 2007 1:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

Hi Todd

I truly appreciate your patience. If the rate was the same with that switch
set, then that would indicate to me that we aren't having trouble getting
through the slapd - it probably isn't a problem with how hard we are driving
it, but rather with the total number of connections being created.
Basically, we need to establish one connection/node to launch the orteds
(the app procs are just fork/exec'd by the orteds so they shouldn't see the
slapd).

The issue may have to do with limits on the total number of LDAP
authentication connections allowed for one user. I believe that is settable,
but will have to look it up and/or ask a few friends that might know.

I have not seen an LDAP-based cluster before (though authentication onto the
head node of a cluster is frequently handled that way), but that doesn't
mean someone hasn't done it.

Again, appreciate the patience.
Ralph




[OMPI users] MPI processes swapping out

2007-03-21 Thread Heywood, Todd
I noticed that my OpenMPI processes are using larger amounts of system time
than user time (via vmstat, top). I'm running on dual-core, dual-CPU
Opterons, with 4 slots per node, where the program has the nodes to
themselves. A closer look showed that they are constantly switching between
run and sleep states with 4-8 page faults per second.
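
(For reference, the commands used to watch this are nothing exotic; the 2-second
interval is arbitrary:)

vmstat 2     # user/system/idle split and fault counts, refreshed every 2 seconds
top          # per-process run (R) vs. sleep (S) state; press 'u' to filter by user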

Why would this be? It doesn't happen with 4 sequential jobs running on a
node, where I get 99% user time, maybe 1% system time.

The processes have plenty of memory. This behavior occurs whether I use
processor/memory affinity or not (there is no oversubscription).

Thanks,

Todd



Re: [OMPI users] MPI processes swapping out

2007-03-21 Thread Heywood, Todd
P.s. I should have said that this is a pretty coarse-grained application,
and netstat doesn't show much communication going on (except in stages).


On 3/21/07 4:21 PM, "Heywood, Todd"  wrote:

> I noticed that my OpenMPI processes are using larger amounts of system time
> than user time (via vmstat, top). I'm running on dual-core, dual-CPU
> Opterons, with 4 slots per node, where the program has the nodes to
> themselves. A closer look showed that they are constantly switching between
> run and sleep states with 4-8 page faults per second.
> 
> Why would this be? It doesn't happen with 4 sequential jobs running on a
> node, where I get 99% user time, maybe 1% system time.
> 
> The processes have plenty of memory. This behavior occurs whether I use
> processor/memory affinity or not (there is no oversubscription).
> 
> Thanks,
> 
> Todd
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] MPI processes swapping out

2007-03-22 Thread Heywood, Todd
Yes, I'm using SGE. I also just noticed that when 2 tasks/slots run on a
4-core node, the 2 tasks are still cycling between run and sleep, with
higher system time than user time.

Ompi_info shows the MCA parameter mpi_yield_when_idle to be 0 (aggressive),
so that suggests the tasks aren't swapping out on blocking calls.
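
(The value was checked with something like the following; the grep pattern is
just illustrative:)

ompi_info --param mpi all | grep yield_when_idle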

Still puzzled.

Thanks,
Todd


On 3/22/07 7:36 AM, "Jeff Squyres"  wrote:

> Are you using a scheduler on your system?
> 
> More specifically, does Open MPI know that you have four process slots
> on each node?  If you are using a hostfile and didn't specify
> "slots=4" for each host, Open MPI will think that it's
> oversubscribing and will therefore call sched_yield() in the depths
> of its progress engine.
> 
> 
> On Mar 21, 2007, at 5:08 PM, Heywood, Todd wrote:
> 
>> P.s. I should have said this this is a pretty course-grained
>> application,
>> and netstat doesn't show much communication going on (except in
>> stages).
>> 
>> 
>> On 3/21/07 4:21 PM, "Heywood, Todd"  wrote:
>> 
>>> I noticed that my OpenMPI processes are using larger amounts of
>>> system time
>>> than user time (via vmstat, top). I'm running on dual-core, dual-CPU
>>> Opterons, with 4 slots per node, where the program has the nodes to
>>> themselves. A closer look showed that they are constantly
>>> switching between
>>> run and sleep states with 4-8 page faults per second.
>>> 
>>> Why would this be? It doesn't happen with 4 sequential jobs
>>> running on a
>>> node, where I get 99% user time, maybe 1% system time.
>>> 
>>> The processes have plenty of memory. This behavior occurs whether
>>> I use
>>> processor/memory affinity or not (there is no oversubscription).
>>> 
>>> Thanks,
>>> 
>>> Todd
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 



Re: [OMPI users] MPI processes swapping out

2007-03-22 Thread Heywood, Todd
Ralph,

Well, according to the FAQ, aggressive mode can be "forced", so I did try
setting OMPI_MCA_mpi_yield_when_idle=0 before running. I also tried turning
processor/memory affinity on. Effects were minor. The MPI tasks still cycle
between run and sleep states, driving up system time well over user time.
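
(Both attempts to force it were of this flavor; the program name, process count,
and hostfile are placeholders, and the environment variable and command-line
forms should be equivalent ways of setting the same MCA parameter:)

export OMPI_MCA_mpi_yield_when_idle=0
# or, equivalently, on the mpirun command line:
mpirun -mca mpi_yield_when_idle 0 -np 800 -hostfile myhosts ./xhpl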

Mpstat shows SGE is indeed giving 4 or 2 slots per node as appropriate
(depending on memory) and the MPI tasks are using 4 or 2 cores, but to be
sure, I also tried running directly with a hostfile with slots=4 or slots=2.
The same behavior occurs.
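
(The hostfiles for the direct runs were along these lines; host names are
examples:)

blade1  slots=4
blade2  slots=4
blade3  slots=2
# ...one line per node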

This behavior is a function of the size of the job, i.e. as I scale from 200
to 800 tasks, the run/sleep cycling increases, so that system time grows from
maybe half the user time to maybe 5 times the user time.

This is for TCP/gigE.

Todd


On 3/22/07 12:19 PM, "Ralph Castain"  wrote:

> Just for clarification: ompi_info only shows the *default* value of the MCA
> parameter. In this case, mpi_yield_when_idle defaults to aggressive, but
> that value is reset internally if the system sees an "oversubscribed"
> condition.
> 
> The issue here isn't how many cores are on the node, but rather how many
> were specifically allocated to this job. If the allocation wasn't at least 2
> (in your example), then we would automatically reset mpi_yield_when_idle to
> be non-aggressive, regardless of how many cores are actually on the node.
> 
> Ralph
> 
> 
> On 3/22/07 7:14 AM, "Heywood, Todd"  wrote:
> 
>> Yes, I'm using SGE. I also just noticed that when 2 tasks/slots run on a
>> 4-core node, the 2 tasks are still cycling between run and sleep, with
>> higher system time than user time.
>> 
>> Ompi_info shows the MCA parameter mpi_yield_when_idle to be 0 (aggressive),
>> so that suggests the tasks aren't swapping out on bloccking calls.
>> 
>> Still puzzled.
>> 
>> Thanks,
>> Todd
>> 
>> 
>> On 3/22/07 7:36 AM, "Jeff Squyres"  wrote:
>> 
>>> Are you using a scheduler on your system?
>>> 
>>> More specifically, does Open MPI know that you have for process slots
>>> on each node?  If you are using a hostfile and didn't specify
>>> "slots=4" for each host, Open MPI will think that it's
>>> oversubscribing and will therefore call sched_yield() in the depths
>>> of its progress engine.
>>> 
>>> 
>>> On Mar 21, 2007, at 5:08 PM, Heywood, Todd wrote:
>>> 
>>>> P.s. I should have said this this is a pretty course-grained
>>>> application,
>>>> and netstat doesn't show much communication going on (except in
>>>> stages).
>>>> 
>>>> 
>>>> On 3/21/07 4:21 PM, "Heywood, Todd"  wrote:
>>>> 
>>>>> I noticed that my OpenMPI processes are using larger amounts of
>>>>> system time
>>>>> than user time (via vmstat, top). I'm running on dual-core, dual-CPU
>>>>> Opterons, with 4 slots per node, where the program has the nodes to
>>>>> themselves. A closer look showed that they are constantly
>>>>> switching between
>>>>> run and sleep states with 4-8 page faults per second.
>>>>> 
>>>>> Why would this be? It doesn't happen with 4 sequential jobs
>>>>> running on a
>>>>> node, where I get 99% user time, maybe 1% system time.
>>>>> 
>>>>> The processes have plenty of memory. This behavior occurs whether
>>>>> I use
>>>>> processor/memory affinity or not (there is no oversubscription).
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Todd
>>>>> 
>>>>> ___
>>>>> users mailing list
>>>>> us...@open-mpi.org
>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> 
>>>> ___
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] MPI processes swapping out

2007-03-22 Thread Heywood, Todd
Hi,

It is v1.2, default configuration. If it matters: OS is RHEL
(2.6.9-42.0.3.ELsmp) on x86_64.

I have noticed this for 2 apps so far, mpiBLAST and HPL, which are both
coarse-grained.

Thanks,

Todd


On 3/22/07 2:38 PM, "Ralph Castain"  wrote:

> 
> 
> 
> On 3/22/07 11:30 AM, "Heywood, Todd"  wrote:
> 
>> Ralph,
>> 
>> Well, according to the FAQ, aggressive mode can be "forced" so I did try
>> setting OMPI_MCA_mpi_yield_when_idle=0 before running. I also tried turning
>> processor/memory affinity on. Efffects were minor. The MPI tasks still cycle
>> bewteen run and sleep states, driving up system time well over user time.
> 
> Yes, that's true - and we do (should) respect any such directive.
> 
>> 
>> Mpstat shows SGE is indeed giving 4 or 2 slots per node as approporiate
>> (depending on memory) and the MPI tasks are using 4 or 2 cores, but to be
>> sure, I also tried running directly with a hostfile with slots=4 or slots=2.
>> The same behavior occurs.
> 
> Okay - thanks for trying that!
> 
>> 
>> This behavior is a function of the size of the job. I.e. As I scale from 200
>> to 800 tasks the run/sleep cycling increases, so that system time grows from
>> maybe half the user time to maybe 5 times user time.
>> 
>> This is for TCP/gigE.
> 
> What version of OpenMPI are you using? This sounds like something we need to
> investigate.
> 
> Thanks for the help!
> Ralph
> 
>> 
>> Todd
>> 
>> 
>> On 3/22/07 12:19 PM, "Ralph Castain"  wrote:
>> 
>>> Just for clarification: ompi_info only shows the *default* value of the MCA
>>> parameter. In this case, mpi_yield_when_idle defaults to aggressive, but
>>> that value is reset internally if the system sees an "oversubscribed"
>>> condition.
>>> 
>>> The issue here isn't how many cores are on the node, but rather how many
>>> were specifically allocated to this job. If the allocation wasn't at least 2
>>> (in your example), then we would automatically reset mpi_yield_when_idle to
>>> be non-aggressive, regardless of how many cores are actually on the node.
>>> 
>>> Ralph
>>> 
>>> 
>>> On 3/22/07 7:14 AM, "Heywood, Todd"  wrote:
>>> 
>>>> Yes, I'm using SGE. I also just noticed that when 2 tasks/slots run on a
>>>> 4-core node, the 2 tasks are still cycling between run and sleep, with
>>>> higher system time than user time.
>>>> 
>>>> Ompi_info shows the MCA parameter mpi_yield_when_idle to be 0 (aggressive),
>>>> so that suggests the tasks aren't swapping out on bloccking calls.
>>>> 
>>>> Still puzzled.
>>>> 
>>>> Thanks,
>>>> Todd
>>>> 
>>>> 
>>>> On 3/22/07 7:36 AM, "Jeff Squyres"  wrote:
>>>> 
>>>>> Are you using a scheduler on your system?
>>>>> 
>>>>> More specifically, does Open MPI know that you have for process slots
>>>>> on each node?  If you are using a hostfile and didn't specify
>>>>> "slots=4" for each host, Open MPI will think that it's
>>>>> oversubscribing and will therefore call sched_yield() in the depths
>>>>> of its progress engine.
>>>>> 
>>>>> 
>>>>> On Mar 21, 2007, at 5:08 PM, Heywood, Todd wrote:
>>>>> 
>>>>>> P.s. I should have said this this is a pretty course-grained
>>>>>> application,
>>>>>> and netstat doesn't show much communication going on (except in
>>>>>> stages).
>>>>>> 
>>>>>> 
>>>>>> On 3/21/07 4:21 PM, "Heywood, Todd"  wrote:
>>>>>> 
>>>>>>> I noticed that my OpenMPI processes are using larger amounts of
>>>>>>> system time
>>>>>>> than user time (via vmstat, top). I'm running on dual-core, dual-CPU
>>>>>>> Opterons, with 4 slots per node, where the program has the nodes to
>>>>>>> themselves. A closer look showed that they are constantly
>>>>>>> switching between
>>>>>>> run and sleep states with 4-8 page faults per second.
>>>>>>> 
>>>>>>> Why would this be? It doesn't happen with 4 sequential jobs
>>>>>>> running on a
>>>>>>> node, where I get 99% user

Re: [OMPI users] MPI processes swapping out

2007-03-23 Thread Heywood, Todd
Rolf,

> Is it possible that everything is working just as it should?

That's what I'm afraid of :-). But I did not expect to see such
communication overhead due to blocking from mpiBLAST, which is very
coarse-grained. I then tried HPL, which is computation-heavy, and found the
same thing. Also, the system time seemed to correspond to the MPI processes
cycling between run and sleep (as seen via top), and I thought that setting
the mpi_yield_when_idle parameter to 0 would keep the processes from
entering sleep state when blocking. But it doesn't.

Todd



On 3/23/07 2:06 PM, "Rolf Vandevaart"  wrote:

> 
> Todd:
> 
> I assume the system time is being consumed by
> the calls to send and receive data over the TCP sockets.
> As the number of processes in the job increases, then more
> time is spent waiting for data from one of the other processes.
> 
> I did a little experiment on a single node to see the difference
> in system time consumed when running over TCP vs when
> running over shared memory.   When running on a single
> node and using the sm btl, I see almost 100% user time.
> I assume this is because the sm btl handles sending and
> receiving its data within a shared memory segment.
> However, when I switch over to TCP, I see my system time
> go up.  Note that this is on Solaris.
> 
> RUNNING OVER SELF,SM
>> mpirun -np 8 -mca btl self,sm hpcc.amd64
> 
>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
>   3505 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.0   0  75 182   0 hpcc.amd64/1
>   3503 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.2   0  69 116   0 hpcc.amd64/1
>   3499 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 0.5   0 106 236   0 hpcc.amd64/1
>   3497 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 1.0   0 169 200   0 hpcc.amd64/1
>   3501 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 1.9   0 127 158   0 hpcc.amd64/1
>   3507 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 244 200   0 hpcc.amd64/1
>   3509 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 282 212   0 hpcc.amd64/1
>   3495 rolfv 97 0.0 0.0 0.0 0.0 0.0 0.0 3.2   0 237  98   0 hpcc.amd64/1
> 
> RUNNING OVER SELF,TCP
>> mpirun -np 8 -mca btl self,tcp hpcc.amd64
> 
>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
>   4316 rolfv 93 6.9 0.0 0.0 0.0 0.0 0.0 0.2   5 346 .6M   0 hpcc.amd64/1
>   4328 rolfv 91 8.4 0.0 0.0 0.0 0.0 0.0 0.4   3  59 .15   0 hpcc.amd64/1
>   4324 rolfv 98 1.1 0.0 0.0 0.0 0.0 0.0 0.7   2 270 .1M   0 hpcc.amd64/1
>   4320 rolfv 88  12 0.0 0.0 0.0 0.0 0.0 0.8   4 244 .15   0 hpcc.amd64/1
>   4322 rolfv 94 5.1 0.0 0.0 0.0 0.0 0.0 1.3   2 150 .2M   0 hpcc.amd64/1
>   4318 rolfv 92 6.7 0.0 0.0 0.0 0.0 0.0 1.4   5 236 .9M   0 hpcc.amd64/1
>   4326 rolfv 93 5.3 0.0 0.0 0.0 0.0 0.0 1.7   7 117 .2M   0 hpcc.amd64/1
>   4314 rolfv 91 6.6 0.0 0.0 0.0 0.0 1.3 0.9  19 150 .10   0 hpcc.amd64/1
> 
> I also ran HPL over a larger cluster of 6 nodes, and noticed even higher
> system times. 
> 
> And lastly, I ran a simple MPI test over a cluster of 64 nodes, 2 procs
> per node
> using Sun HPC ClusterTools 6, and saw about a 50/50 split between user
> and system time.
> 
>   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
>  11525 rolfv 55  44 0.1 0.0 0.0 0.0 0.1 0.4  76 960 .3M   0
> maxtrunc_ct6/1
>  11526 rolfv 54  45 0.0 0.0 0.0 0.0 0.0 1.0   0 362 .4M   0
> maxtrunc_ct6/1
> 
> Is it possible that everything is working just as it should?
> 
> Rolf
> 
> Heywood, Todd wrote On 03/22/07 13:30,:
> 
>> Ralph,
>> 
>> Well, according to the FAQ, aggressive mode can be "forced" so I did try
>> setting OMPI_MCA_mpi_yield_when_idle=0 before running. I also tried turning
>> processor/memory affinity on. Efffects were minor. The MPI tasks still cycle
>> bewteen run and sleep states, driving up system time well over user time.
>> 
>> Mpstat shows SGE is indeed giving 4 or 2 slots per node as approporiate
>> (depending on memory) and the MPI tasks are using 4 or 2 cores, but to be
>> sure, I also tried running directly with a hostfile with slots=4 or slots=2.
>> The same behavior occurs.
>> 
>> This behavior is a function of the size of the job. I.e. As I scale from 200
>> to 800 tasks the run/sleep cycling increases, so that system time grows from
>> maybe half the user time to maybe 5 times user time.
>> 
>> This is for TCP/gigE.
>> 
>> Todd
>> 
>> 
>> On 3/22/07 12:19 PM, "Ralph Castain"  wrote:
>> 
>>  
>> 
>>> Just for clarification: ompi_info only shows the *default* value of the MCA
>>> parameter. In this case, mpi_yield_when_idle defaults to a

Re: [OMPI users] MPI processes swapping out

2007-03-25 Thread Heywood, Todd
Thanks, George. I will try the trunk version (1.3a1r14138) tomorrow. However, I am
keeping the number of processes per node (i.e. 4, one per core) constant. The
system time, and the number of sleep states (eyeballed via top), grow
significantly as the number of nodes scales up.

I was wondering if this might be the OS jitter/noise problem.

Todd


-Original Message-
From: users-boun...@open-mpi.org on behalf of George Bosilca
Sent: Fri 3/23/2007 7:15 PM
To: Open MPI Users
Subject: Re: [OMPI users] MPI processes  swapping out
 
So far the described behavior seems as normal as expected. As Open  
MPI never goes in blocking mode, the processes will always spin  
between active and sleep mode. More processes on the same node leads  
to more time in the system mode (because of the empty polls). There  
is a trick in the trunk version of Open MPI which will trigger the  
blocking mode if and only if TCP is the only used device. Please try  
add "--mca btl tcp,self" to your mpirun command line, and check the  
output of vmstat.

   Thanks,
 george.

On Mar 23, 2007, at 3:32 PM, Heywood, Todd wrote:

> Rolf,
>
>> Is it possible that everything is working just as it should?
>
> That's what I'm afraid of :-). But I did not expect to see such
> communication overhead due to blocking from mpiBLAST, which is very
> course-grained. I then tried HPL, which is computation-heavy, and  
> found the
> same thing. Also, the system time seemed to correspond to the MPI  
> processes
> cycling between run and sleep (as seen via top), and I thought that  
> setting
> the mpi_yield_when_idle parameter to 0 would keep the processes from
> entering sleep state when blocking. But it doesn't.
>
> Todd
>
>
>
> On 3/23/07 2:06 PM, "Rolf Vandevaart"  wrote:
>
>>
>> Todd:
>>
>> I assume the system time is being consumed by
>> the calls to send and receive data over the TCP sockets.
>> As the number of processes in the job increases, then more
>> time is spent waiting for data from one of the other processes.
>>
>> I did a little experiment on a single node to see the difference
>> in system time consumed when running over TCP vs when
>> running over shared memory.   When running on a single
>> node and using the sm btl, I see almost 100% user time.
>> I assume this is because the sm btl handles sending and
>> receiving its data within a shared memory segment.
>> However, when I switch over to TCP, I see my system time
>> go up.  Note that this is on Solaris.
>>
>> RUNNING OVER SELF,SM
>>> mpirun -np 8 -mca btl self,sm hpcc.amd64
>>
>>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG  
>> PROCESS/NLWP
>>   3505 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.0   0  75 182   0  
>> hpcc.amd64/1
>>   3503 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.2   0  69 116   0  
>> hpcc.amd64/1
>>   3499 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 0.5   0 106 236   0  
>> hpcc.amd64/1
>>   3497 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 1.0   0 169 200   0  
>> hpcc.amd64/1
>>   3501 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 1.9   0 127 158   0  
>> hpcc.amd64/1
>>   3507 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 244 200   0  
>> hpcc.amd64/1
>>   3509 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 282 212   0  
>> hpcc.amd64/1
>>   3495 rolfv 97 0.0 0.0 0.0 0.0 0.0 0.0 3.2   0 237  98   0  
>> hpcc.amd64/1
>>
>> RUNNING OVER SELF,TCP
>>> mpirun -np 8 -mca btl self,tcp hpcc.amd64
>>
>>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG  
>> PROCESS/NLWP
>>   4316 rolfv 93 6.9 0.0 0.0 0.0 0.0 0.0 0.2   5 346 .6M   0  
>> hpcc.amd64/1
>>   4328 rolfv 91 8.4 0.0 0.0 0.0 0.0 0.0 0.4   3  59 .15   0  
>> hpcc.amd64/1
>>   4324 rolfv 98 1.1 0.0 0.0 0.0 0.0 0.0 0.7   2 270 .1M   0  
>> hpcc.amd64/1
>>   4320 rolfv 88  12 0.0 0.0 0.0 0.0 0.0 0.8   4 244 .15   0  
>> hpcc.amd64/1
>>   4322 rolfv 94 5.1 0.0 0.0 0.0 0.0 0.0 1.3   2 150 .2M   0  
>> hpcc.amd64/1
>>   4318 rolfv 92 6.7 0.0 0.0 0.0 0.0 0.0 1.4   5 236 .9M   0  
>> hpcc.amd64/1
>>   4326 rolfv 93 5.3 0.0 0.0 0.0 0.0 0.0 1.7   7 117 .2M   0  
>> hpcc.amd64/1
>>   4314 rolfv 91 6.6 0.0 0.0 0.0 0.0 1.3 0.9  19 150 .10   0  
>> hpcc.amd64/1
>>
>> I also ran HPL over a larger cluster of 6 nodes, and noticed even  
>> higher
>> system times.
>>
>> And lastly, I ran a simple MPI test over a cluster of 64 nodes, 2  
>> procs
>> per node
>> using Sun HPC ClusterTools 6, and saw about a 50/50 split between  
>> user
>> and system time

Re: [OMPI users] MPI processes swapping out

2007-03-27 Thread Heywood, Todd
I tried the trunk version with "--mca btl tcp,self". Essentially system time
changes to idle time, since empty polling is being replaced by blocking
(right?). Page faults go to 0 though.
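
(The runs here used a command line of roughly this shape; process count,
hostfile, and binary are placeholders:)

mpirun --mca btl tcp,self -np 800 -hostfile myhosts ./xhpl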

It is interesting since you can see what is going on now, with distinct
phases of user time and idle time (sleep mode, en masse). Before, vmstat
showed processes going into sleep mode rather randomly, and distinct phases
of mostly user time or mostly system time were not visible.

I also tried mpi_yield_when_idle=0 with the trunk version. No effect on
behavior.

Todd


On 3/23/07 7:15 PM, "George Bosilca"  wrote:

> So far the described behavior seems as normal as expected. As Open
> MPI never goes in blocking mode, the processes will always spin
> between active and sleep mode. More processes on the same node leads
> to more time in the system mode (because of the empty polls). There
> is a trick in the trunk version of Open MPI which will trigger the
> blocking mode if and only if TCP is the only used device. Please try
> add "--mca btl tcp,self" to your mpirun command line, and check the
> output of vmstat.
> 
>    Thanks,
>  george.
> 
> On Mar 23, 2007, at 3:32 PM, Heywood, Todd wrote:
> 
>> Rolf,
>> 
>>> Is it possible that everything is working just as it should?
>> 
>> That's what I'm afraid of :-). But I did not expect to see such
>> communication overhead due to blocking from mpiBLAST, which is very
>> course-grained. I then tried HPL, which is computation-heavy, and
>> found the
>> same thing. Also, the system time seemed to correspond to the MPI
>> processes
>> cycling between run and sleep (as seen via top), and I thought that
>> setting
>> the mpi_yield_when_idle parameter to 0 would keep the processes from
>> entering sleep state when blocking. But it doesn't.
>> 
>> Todd
>> 
>> 
>> 
>> On 3/23/07 2:06 PM, "Rolf Vandevaart"  wrote:
>> 
>>> 
>>> Todd:
>>> 
>>> I assume the system time is being consumed by
>>> the calls to send and receive data over the TCP sockets.
>>> As the number of processes in the job increases, then more
>>> time is spent waiting for data from one of the other processes.
>>> 
>>> I did a little experiment on a single node to see the difference
>>> in system time consumed when running over TCP vs when
>>> running over shared memory.   When running on a single
>>> node and using the sm btl, I see almost 100% user time.
>>> I assume this is because the sm btl handles sending and
>>> receiving its data within a shared memory segment.
>>> However, when I switch over to TCP, I see my system time
>>> go up.  Note that this is on Solaris.
>>> 
>>> RUNNING OVER SELF,SM
>>>> mpirun -np 8 -mca btl self,sm hpcc.amd64
>>> 
>>>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG
>>> PROCESS/NLWP
>>>   3505 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.0   0  75 182   0
>>> hpcc.amd64/1
>>>   3503 rolfv100 0.0 0.0 0.0 0.0 0.0 0.0 0.2   0  69 116   0
>>> hpcc.amd64/1
>>>   3499 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 0.5   0 106 236   0
>>> hpcc.amd64/1
>>>   3497 rolfv 99 0.0 0.0 0.0 0.0 0.0 0.0 1.0   0 169 200   0
>>> hpcc.amd64/1
>>>   3501 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 1.9   0 127 158   0
>>> hpcc.amd64/1
>>>   3507 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 244 200   0
>>> hpcc.amd64/1
>>>   3509 rolfv 98 0.0 0.0 0.0 0.0 0.0 0.0 2.0   0 282 212   0
>>> hpcc.amd64/1
>>>   3495 rolfv 97 0.0 0.0 0.0 0.0 0.0 0.0 3.2   0 237  98   0
>>> hpcc.amd64/1
>>> 
>>> RUNNING OVER SELF,TCP
>>>> mpirun -np 8 -mca btl self,tcp hpcc.amd64
>>> 
>>>PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG
>>> PROCESS/NLWP
>>>   4316 rolfv 93 6.9 0.0 0.0 0.0 0.0 0.0 0.2   5 346 .6M   0
>>> hpcc.amd64/1
>>>   4328 rolfv 91 8.4 0.0 0.0 0.0 0.0 0.0 0.4   3  59 .15   0
>>> hpcc.amd64/1
>>>   4324 rolfv 98 1.1 0.0 0.0 0.0 0.0 0.0 0.7   2 270 .1M   0
>>> hpcc.amd64/1
>>>   4320 rolfv 88  12 0.0 0.0 0.0 0.0 0.0 0.8   4 244 .15   0
>>> hpcc.amd64/1
>>>   4322 rolfv 94 5.1 0.0 0.0 0.0 0.0 0.0 1.3   2 150 .2M   0
>>> hpcc.amd64/1
>>>   4318 rolfv 92 6.7 0.0 0.0 0.0 0.0 0.0 1.4   5 236 .9M   0
>>> hpcc.amd64/1
>>>   4326 rolfv 93 5.3 0.0 0.0 0.0 0.0 0.0 1.7   7 117 .2M   0
>>> hpcc.amd64/1
>>>   4314 rolfv 

Re: [OMPI users] Measuring MPI message size used by application

2007-03-29 Thread Heywood, Todd
George,

Any other simple, small, text-based (!) suggestions? mpiP seg faults on
x86_64, and indeed its web page doesn't list x86_64 Linux as a supported
platform.

Todd


On 3/28/07 10:39 AM, "George Bosilca"  wrote:

> Stephen,
> 
> There are a huge number of MPI profiling tools out there. My
> preference will be something small, fast and where the output is in
> human readable text format (and not fancy graphics). The tools I'm
> talking about is called mpiP (http://mpip.sourceforge.net/). It's not
> Open MPI specific, but it's really simple to use.
> 
>george.
> 
> On Mar 28, 2007, at 10:10 AM, stephen mulcahy wrote:
> 
>> Hi,
>> 
>> What is the best way of getting statistics on the size of MPI messages
>> being sent/received by my OpenMPI-using application? I'm guessing
>> MPE is
>> one route but is there anything built into OpenMPI that will give me
>> this specific statistic?
>> 
>> Thanks,
>> 
>> -stephen
>> 
>> -- 
>> Stephen Mulcahy, Applepie Solutions Ltd, Innovation in Business
>> Center,
>> GMIT, Dublin Rd, Galway, Ireland.  http://www.aplpi.com
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] Measuring MPI message size used by application

2007-03-30 Thread Heywood, Todd
George,

It turns out I didn't have libunwind either, but didn't notice since mpiP
compiled/linked without it (OK, so I should have checked the config log).
However, once I got it, it wouldn't compile on my RHEL system.

So, following this thread:

http://www.mail-archive.com/libunwind-devel@nongnu.org/msg00067.html

I had to download an alpha version of libunwind:

http://download.savannah.nongnu.org/releases/libunwind/libunwind-snap-070224.tar.gz

... And build it with:

CFLAGS=-fPIC ./configure
make CFLAGS=-fPIC LDFLAGS=-fPIC shared
make CFLAGS=-fPIC LDFLAGS=-fPIC install

After that, everything went as you described. The "strange readings" in the
output did list the Parent_Funct's though:

---
@--- Callsites: 5 -
---
 ID Lev File/AddressLine Parent_Funct MPI_Call
  1   0 0x0041341d   RecvData Recv
  2   0 0x004133c7   SendData Send
  3   0 0x004134b9   SendRepeat   Send
  4   0 0x00413315   Sync Barrier
  5   0 0x004134ef   RecvRepeat   Recv


Thanks for the help!

Todd


On 3/29/07 5:48 PM, "George Bosilca"  wrote:

> I used it on a IA64 platform, so I supposed x86_64 is supported, but
> I never use it on an AMD 64. On the mpiP webpage they claim they
> support the Cray XT3, which as far as I know are based on AMD Opteron
> 64 bits. So, there is at least a spark of hope in the dark ...
> 
> I decide to give it a try on my x86_64 AMD box (Debian based system).
> First problem, my box didn't have the libunwind. Not a big deal, it's
> freely available on HP website (http://www.hpl.hp.com/research/linux/
> libunwind/download.php4). Few minutes later, the libunwind was
> installed in /lib64. Now, time to focus on mpiP ... For some obscure
> reason the configure script was unable to detect my g77 compiler
> (whatever!!!) nor the installation of libunwind. Moreover, it keep
> trying to use the clock_gettime call. Fortunately (which make me
> think I'm not the only one having trouble with this), mpiP provide
> configure options for all these. The final configure line was: ./
> configure --prefix=/opt/ --without-f77 --with-wtime --with-include=-I/
> include --with-lib=-L/lib64. Then a quick "make shared" followed by
> "make install", complete the work. So, at least mpiP can compile on a
> x86_64 box.
> 
> Now, I modify the makefile of NetPIPE, and add the "-lmpiP -lunwind",
> compile NetPIPE and run it. The mpiP headers showed up, the
> application run to completion and my human readable output was there.
> 
> @ mpiP
> @ Command : ./NPmpi
> @ Version  : 3.1.0
> @ MPIP Build date  : Mar 29 2007, 13:35:47
> @ Start time   : 2007 03 29 13:43:40
> @ Stop time: 2007 03 29 13:44:42
> @ Timer Used   : PMPI_Wtime
> @ MPIP env var : [null]
> @ Collector Rank   : 0
> @ Collector PID: 22838
> @ Final Output Dir : .
> @ Report generation: Single collector task
> @ MPI Task Assignment  : 0 dancer
> @ MPI Task Assignment  : 1 dancer
> 
> However, I got some strange reading inside the output.
> 
> ---
> @--- Callsites: 5
> -
> 
> ---
> ID Lev File/AddressLine Parent_Funct MPI_Call
>1   0 0x00402ffb   [unknown]Barrier
>2   0 0x00403103   [unknown]Recv
>3   0 0x004030ad   [unknown]Send
>4   0 0x0040319f   [unknown]Send
>5   0 0x004031d5   [unknown]        Recv
> 
> I didn't dig further to see why. But, this prove that for at least a
> basic usage (general statistics gathering) mpiP works on x86_64
> platforms.
> 
>Have fun,
>  george.
> 
> On Mar 29, 2007, at 11:32 AM, Heywood, Todd wrote:
> 
>> George,
>> 
>> Any other simple, small, text-based (!) suggestions? mpiP seg
>> faults on
>> x86_64, and indeed its web page doesn't list x86_64 Linux as a
>> supported
>> platform.
>> 
>> Todd
>> 
>> 
>> On 3/28/07 10:39 AM, "George Bosilca"  wrote:
>> 
>>> Stephen,
>>> 
>>> There are a huge number of MPI

Re: [OMPI users] Measuring MPI message size used by application

2007-03-30 Thread Heywood, Todd
P.s. I just found out you have to recompile/relink the MPI code with -g in
order for the File/Address field to show non-garbage.


On 3/30/07 2:43 PM, "Heywood, Todd"  wrote:

> George,
> 
> It turns out I didn't have libunwind either, but didn't notice since mpiP
> compiled/linked without it (OK, so I should have checked the config log).
> However, once I got it it wouldn't compile on my RHEL system.
> 
> So, following this thread:
> 
> http://www.mail-archive.com/libunwind-devel@nongnu.org/msg00067.html
> 
> I had to download an alpha version of libunwind:
> 
> http://download.savannah.nongnu.org/releases/libunwind/libunwind-snap-070224
> .tar.gz
> 
> ... And build it with:
> 
> CFLAGS=-fPIC ./configure
> make CFLAGS=-fPIC LDFLAGS=-fPIC shared
> make CFLAGS=-fPIC LDFLAGS=-fPIC install
> 
> After that, everything went as you described. The "strange readings" in the
> output did list the Parent_Funct names, though:
> 
> ---------------------------------------------------------------------------
> @--- Callsites: 5 ---------------------------------------------------------
> ---------------------------------------------------------------------------
>  ID Lev File/Address        Line Parent_Funct MPI_Call
>   1   0 0x0041341d               RecvData     Recv
>   2   0 0x004133c7               SendData     Send
>   3   0 0x004134b9               SendRepeat   Send
>   4   0 0x00413315               Sync         Barrier
>   5   0 0x004134ef               RecvRepeat   Recv
> 
> 
> Thanks for the help!
> 
> Todd

[OMPI users] btl_tcp_endpoint errors

2007-04-02 Thread Heywood, Todd
I'm testing a couple of applications with OpenMPI v1.2b on over 1000
processors and am getting TCP errors. The same apps ran fine at smaller
processor counts.

The errors can be different for different runs. Here's one:

[blade90][0,1,223][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:572:mc
a_btl_tcp_endpoint_complete_connect] connect() failed with errno=113
[blade82][0,1,203][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:572:mc
a_btl_tcp_endpoint_complete_connect] connect() failed with errno=113

And I've appended the output from a second type of error, on another trial
run.

I only have a single interface, and I understand that I'm pushing the
capacity of that single GigE link. But I'd like to know what these errors
signify.

Thanks,

Todd

-



[blade6][0,1,10][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_
btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade309:12625] mca_btl_tcp_frag_send: writev failed with errno=104
[blade309:12625] mca_btl_tcp_frag_send: writev failed with errno=104
[blade5][0,1,9][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_b
tl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade134:12179] mca_btl_tcp_frag_send: writev failed with errno=104
[blade3][0,1,4][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_b
tl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade484][0,1,1060][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:
mca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade146][0,1,400][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking]
[blade157][0,1,444][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking]
[blade212][0,1,532][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade134:12182] mca_btl_tcp_frag_send: writev failed with errno=104
recv() failed with errno=104
recv() failed with errno=104
[blade146][0,1,402][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade157][0,1,446][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade4][0,1,6][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_b
tl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade485][0,1,1062][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:
mca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade214][0,1,534][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade146][0,1,403][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking]
[blade4][0,1,7][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_b
tl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade486][0,1,1063][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:
mca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade157][0,1,447][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking]
[blade215][0,1,535][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
recv() failed with errno=104
recv() failed with errno=104
[blade146][0,1,401][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade157][0,1,445][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade3][0,1,5][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mca_b
tl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade485][0,1,1061][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:
mca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade213][0,1,533][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade62][0,1,124][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mc
a_btl_tcp_endpoint_recv_blocking] [blade71:12423] mca_btl_tcp_frag_send:
writev failed with errno=104
[blade132][0,1,344][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking]
[blade389][0,1,872][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
recv() failed with errno=104
[blade132][0,1,347][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
[blade390][0,1,873][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:m
ca_btl_tcp_endpoint_recv_blocking] recv() failed with errno=104
recv() failed with errno=104
[blade62][0,1,125][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:415:mc
a_btl_tcp_endpoint_recv

Re: [OMPI users] btl_tcp_endpoint errors

2007-04-03 Thread Heywood, Todd
Hi Adrian,

Thanks for that info. The OS is Linux. I was able to get rid of the
"connection reset" (104) errors by increasing btl_tcp_endpoint_cache. That
leaves the "no route to host" (113) problem.
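
For the archives, the parameter can be raised either on the mpirun command
line or in the per-user MCA parameter file. The value and application name
below are only placeholders, not a recommendation:

mpirun --mca btl_tcp_endpoint_cache 65536 -np 1024 ./myapp
echo "btl_tcp_endpoint_cache = 65536" >> ~/.openmpi/mca-params.conf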

Interestingly, I sometimes (but not always) get the same error on daemon
startup over ssh when experimenting with very large jobs:

ssh: connect to host blade45 port 22: No route to host
[blade1:05832] ERROR: A daemon on node blade45 failed to start as expected.
[blade1:05832] ERROR: There may be more information available from
[blade1:05832] ERROR: the remote shell (see above).
[blade1:05832] ERROR: The daemon exited unexpectedly with status 1.
[blade1:05832] [0,0,0] ORTE_ERROR_LOG: Timeout in file
../../../../orte/mca/pls/base/pls_base_orted_cmds.c at line 188
[blade1:05832] [0,0,0] ORTE_ERROR_LOG: Timeout in file
../../../../../orte/mca/pls/rsh/pls_rsh_module.c at line 1187

I can understand this arising from an ssh bottleneck, with a timeout. So, a
question to the OMPI folks: could the "no route to host" (113) error in
btl_tcp_endpoint.c:572 also result from a timeout?

Thanks,

Todd




On 4/3/07 5:44 AM, "Adrian Knoth"  wrote:

> On Mon, Apr 02, 2007 at 07:15:41PM -0400, Heywood, Todd wrote:
> 
> Hi,
> 
>> [blade90][0,1,223][../../../../../ompi/mca/btl/tcp/btl_tcp_endpoint.c:572:mc
>> a_btl_tcp_endpoint_complete_connect] connect() failed with errno=113
> 
> errno is OS specific, so it's important to know which OS you're using.
> 
> You can always convert these error numbers to normal strings with perl:
> 
> adi@drcomp:~$ perl -e 'die$!=113'
> No route to host at -e line 1.
> 
> (read: 113 is "No route to host" under Linux. If you're not using Linux,
>  your 113 probably means something else)
> 
> If it's really "No route to host", check your routing setup.
> 
> 
> adi@drcomp:~$ perl -e 'die$!=104'
> Connection reset by peer at -e line 1.
> 
> 
> This usually happens when a remote process dies, perhaps due to
> segfaults.
> 
> 
> HTH
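
(For future readers chasing the errno=104 "connection reset" side of this:
one quick, admittedly crude sanity check is to look for segfault or
OOM-killer messages on the affected nodes. The exact log locations vary by
distribution; these are just the usual places to look.)

dmesg | grep -i -e segfault -e "killed process"
grep -i -e segfault -e "out of memory" /var/log/messages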