[OMPI users] specifying hosts in mpi_spawn()

2008-05-29 Thread Bruno Coutinho
How does MPI handle the host string passed in the info argument to
MPI_Comm_spawn()?

if I set host to:
"host1,host2,host3,host2,host2,host1"

will ranks 0 and 5 run on host1, ranks 1, 3, and 4 on host2, and rank 2 on
host3?
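
For reference, a minimal sketch of how such a host string can be passed via
the standard "host" info key ("./worker" and the process count are
placeholders; how the resulting ranks map onto the listed hosts is exactly
the question above):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    MPI_Info info;
    int errcodes[6];

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    MPI_Info_set(info, "host", "host1,host2,host3,host2,host2,host1");

    /* Spawn 6 copies of a placeholder worker executable. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 6, info, 0,
                   MPI_COMM_SELF, &intercomm, errcodes);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}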


Re: [OMPI users] ulimit question from video open-fabrics-concepts...

2008-05-29 Thread Jeff Squyres

On May 29, 2008, at 3:41 PM, twu...@goodyear.com wrote:

I am watching one of your MPI instructional videos and have a question.
You said to make sure the registered memory ulimit is set to unlimited.
I typed the command "ulimit -a" and don't see a registered memory entry.
Is this maybe the same as "max locked memory"?


Yes - that's the same thing as max locked memory.
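
For reference (assuming a bash-like shell), something along these lines
shows and raises that limit for the current shell; making the change
permanent usually also involves /etc/security/limits.conf:

 shell$ ulimit -l   # shows "max locked memory", in kbytes
 shell$ ulimit -l unlimited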


ps: the videos are very helpful


Excellent!

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Robert Latham
On Thu, May 29, 2008 at 04:48:49PM -0300, Davi Vercillo C. Garcia wrote:
> > Oh, I see you want to use ordered i/o in your application.  PVFS
> > doesn't support that mode.  However, since you know how much data each
> > process wants to write, a combination of MPI_Scan (to compute each
> > process's offset) and MPI_File_write_at_all (to carry out the
> > collective i/o) will give you the same result with likely better
> > performance (and has the nice side effect of working with pvfs).
> 
> I don't understand this very well... what do I need to change in my code?

MPI_File_write_ordered has an interesting property (which you probably
know since you use it, but I'll spell it out anyway): writes end up
in the file in rank order, but are not necessarily carried out in
rank order.

Once each process knows the offsets and lengths of the writes the
other processes will do, that process can write its data.  Observe that
rank 0 can write immediately.  Rank 1 only needs to know how much data
rank 0 will write, and so on.

Rank N can compute its offset by knowing how much data the preceding
N-1 processes want to write.  The most efficient way to collect this is
to use MPI_Scan and compute a running sum of the data sizes:

http://www.mpi-forum.org/docs/mpi-11-html/node84.html#Node84

Once you've computed these offsets, MPI_File_write_at_all has enough
information to carry out a collective write of the data.
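
For illustration, here is a hedged sketch of that pattern (assuming the file
handle was opened collectively and uses the default byte-offset file view;
"mybuf" and "mycount" are placeholder names for whatever each rank actually
writes):

#include <mpi.h>

/* Write "mycount" bytes from "mybuf" on each rank so that the data lands in
 * the file in rank order, without using MPI_File_write_ordered. */
void ordered_write(MPI_File fh, const char *mybuf, int mycount)
{
    long long mine = mycount, end = 0;
    MPI_Offset my_offset;

    /* Inclusive prefix sum: rank i receives count[0] + ... + count[i]. */
    MPI_Scan(&mine, &end, 1, MPI_LONG_LONG, MPI_SUM, MPI_COMM_WORLD);

    /* Subtract my own contribution to get the sum over the preceding ranks. */
    my_offset = (MPI_Offset)(end - mine);

    /* Collective write at each rank's own offset. */
    MPI_File_write_at_all(fh, my_offset, (void *)mybuf, mycount,
                          MPI_BYTE, MPI_STATUS_IGNORE);
}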

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B


Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Brock Palen
That I don't know; we use Lustre for this stuff now, and our users
don't use parallel I/O (though I hope to change that).
Sorry I can't help more.  I would really just use PVFS2 for your I/O.
The other reply pointed out that you can have both and not use NFS at all
for your I/O, but leave it mounted if that's what users are expecting.


Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985



On May 29, 2008, at 3:46 PM, Davi Vercillo C. Garcia wrote:

Hi,

I'm already using the "noac" option in my /etc/fstab but this error is
still happening. Do I need to put this in another file?

On Thu, May 29, 2008 at 4:33 PM, Brock Palen  wrote:
Well, don't run like this.  Have PVFS, have NFS, but don't mix them like
that; you're asking for pain, my 2 cents.
I get this error all the time also.  You have to disable a large portion of
the caching that NFS does to make sure that all MPI-IO clients get true
data on the file they are all trying to access.  To make this work, check
your /etc/fstab and see if you have the 'noac' option.  This is attribute
caching; it must be disabled.

On that note, PVFS2 is made for doing MPI-IO to multiple hosts (thus no
need for NFS).  Because it was made with MPI-IO in mind, it should work out
of the box.
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On May 29, 2008, at 3:24 PM, Davi Vercillo C. Garcia wrote:

Hi,
I'm trying to run my program in my environment and some problems are
happening. My environment is based on PVFS2 over NFS (PVFS is mounted
over an NFS partition), OpenMPI and Ubuntu. My program uses the MPI-IO and
BZ2 development libraries. When I try to run it, this message appears:
File locking failed in ADIOI_Set_lock. If the file system is NFS, you
need to use NFS version 3, ensure that the lockd daemon is running on
all the machines, and mount the directory with the 'noac' option (no
attribute caching).
[campogrande05.dcc.ufrj.br:05005] MPI_ABORT invoked on rank 0 in
communicator MPI_COMM_WORLD with errorcode 1
mpiexec noticed that job rank 1 with PID 5008 on node campogrande04
exited on signal 15 (Terminated).
Why ?!
--
Davi Vercillo Carneiro Garcia
Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br
"Good things come to those who... wait." - Debian Project
"A computer is like air conditioning: it becomes useless when you  
open

windows." - Linus Torvalds
"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert
Einstein___






--
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein








Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Davi Vercillo C. Garcia
HI,

> Oh, I see you want to use ordered i/o in your application.  PVFS
> doesn't support that mode.  However, since you know how much data each
> process wants to write, a combination of MPI_Scan (to compute each
> process's offset) and MPI_File_write_at_all (to carry out the
> collective i/o) will give you the same result with likely better
> performance (and has the nice side effect of working with pvfs).

I don't understand this very well... what do I need to change in my code?

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein



Re: [OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Robert Latham
On Thu, May 29, 2008 at 04:24:18PM -0300, Davi Vercillo C. Garcia wrote:
> Hi,
> 
> I'm trying to run my program in my environment and some problems are
> happening. My environment is based on PVFS2 over NFS (PVFS is mounted
> over an NFS partition), OpenMPI and Ubuntu. My program uses the MPI-IO and
> BZ2 development libraries. When I try to run it, this message appears:
> 
> File locking failed in ADIOI_Set_lock. If the file system is NFS, you
> need to use NFS version 3, ensure that the lockd daemon is running on
> all the machines, and mount the directory with the 'noac' option (no
> attribute caching).
> [campogrande05.dcc.ufrj.br:05005] MPI_ABORT invoked on rank 0 in
> communicator MPI_COMM_WORLD with errorcode 1
> mpiexec noticed that job rank 1 with PID 5008 on node campogrande04
> exited on signal 15 (Terminated).

Hi.

NFS has some pretty sloppy consistency semantics.  If you want
parallel I/O to NFS you have to turn off some caches (the 'noac'
option in your error message) and work pretty hard to flush
client-side caches (which ROMIO does for you using fcntl locks).  If
you do this, note that your performance will be really bad, but you'll
get correct results.
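
For reference, here is a hedged example of what such an NFS mount entry
might look like in /etc/fstab (the server name, export path, and mount
point are placeholders, not values taken from this thread):

 nfsserver:/export/pvfs-data  /mnt/pvfs-data  nfs  rw,nfsvers=3,noac  0 0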

Your nfs-exported PVFS volumes will give you pretty decent serial i/o
performance since NFS caching only helps in that case.

I'd suggest, though, that you try using straight PVFS for your MPI-IO
application, as long as the parallel clients have access to all of
the pvfs servers (if tools like pvfs2-ping and pvfs2-ls work, then you
do).  You'll get better performance for a variety of reasons and can
continue to keep your NFS-exported PVFS volumes up at the same time. 

Oh, I see you want to use ordered i/o in your application.  PVFS
doesn't support that mode.  However, since you know how much data each
process wants to write, a combination of MPI_Scan (to compute each
process's offset) and MPI_File_write_at_all (to carry out the
collective i/o) will give you the same result with likely better
performance (and has the nice side effect of working with pvfs).

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B


[OMPI users] Problem with NFS + PVFS2 + OpenMPI

2008-05-29 Thread Davi Vercillo C. Garcia
Hi,

I'm trying to run my program in my environment and some problems are
happening. My environment is based on PVFS2 over NFS (PVFS is mounted
over an NFS partition), OpenMPI and Ubuntu. My program uses the MPI-IO and
BZ2 development libraries. When I try to run it, this message appears:

File locking failed in ADIOI_Set_lock. If the file system is NFS, you
need to use NFS version 3, ensure that the lockd daemon is running on
all the machines, and mount the directory with the 'noac' option (no
attribute caching).
[campogrande05.dcc.ufrj.br:05005] MPI_ABORT invoked on rank 0 in
communicator MPI_COMM_WORLD with errorcode 1
mpiexec noticed that job rank 1 with PID 5008 on node campogrande04
exited on signal 15 (Terminated).

Why ?!

-- 
Davi Vercillo Carneiro Garcia

Universidade Federal do Rio de Janeiro
Departamento de Ciência da Computação
DCC-IM/UFRJ - http://www.dcc.ufrj.br

"Good things come to those who... wait." - Debian Project

"A computer is like air conditioning: it becomes useless when you open
windows." - Linus Torvalds

"Há duas coisas infinitas, o universo e a burrice humana. E eu estou
em dúvida quanto o primeiro." - Albert Einstein
/**
 * - Remember when running with MPI that the users MUST have the same ID.
 */
/* The original header names were stripped by the mail archive; the headers
   below are the ones the visible code needs. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>
#include "bzlib.h"
#include "mpi.h"

#define FILE_NAME_LEN 1034
#define BENCH   1

typedef unsigned char uchar;
typedef char Char;
typedef unsigned char Bool;
typedef unsigned char UChar;
typedef int Int32;
typedef unsigned int UInt32;
typedef short Int16;
typedef unsigned short UInt16;

#define True  ((Bool)1)
#define False ((Bool)0)

/**
 * Verbose mode flag
 */
int VERBOSE = 1;

/*--
 IntNative is your platform's `native' int size.
 Only here to avoid probs with 64-bit platforms.
 --*/
typedef int IntNative;

Int32 blockSize100k = 9;
Int32 verbosity = 0;
Int32 workFactor= 30;

/**
 * Maximum queue size
 */
long TAM_FILA = 10;
/**
 * Block size read by each thread
 */
long M_BLOCK  = 900*1000;
#define M_BLOCK_OUT (M_BLOCK + M_BLOCK)


/**
 * MPI Variables
 */

int  nProcs= 0;
int  rank  = 0;
int  nfiles= 0;
int  nBlocosPorProc= 0;
int  nBlocosResto  = 0;
long nBlocos   = 0;
long long filesize = 0;
long long tamComprimidoPorProc = 0;

typedef struct SBloco{
   UChar* dado;
   long int id;
} Bloco;


typedef struct s_OutputBuffer{
   long size;
   uchar *zbuf;
} OutputBuffer;


/**
 * TODO: implementation in progress
 */
static void comprime( MPI_File stream, MPI_File zStream )
{
   // 1 reader thread, 1 writer thread, the rest are compressor threads
   // NOTE: at least 3 threads must exist
   #define NUM_THREADS 4
   MPI_Status status;
   //MPI_Offset offset; [DAVI]

   uchar *zbuf;
   long r, count;
   unsigned int nZ;
   long nIdBlock;
   UChar *ibuf[TAM_FILA]; // input buffer
   OutputBuffer **obuf; // output buffer
   Int32 nIbuf[TAM_FILA];
   Int32 block_in_use[TAM_FILA];

   long nLeituraAtual;
   long nProcAtual;
   long nGravacaoAtual;
   Int32 erro;
   Int32 endRead;
   long long nTamOBuf = ( filesize / M_BLOCK ) + 1;

   // initialize the output buffer
   obuf = (OutputBuffer**)malloc( sizeof(OutputBuffer*)*nTamOBuf );

   for( count = 0; count < nTamOBuf; count++ )
   {
   if( count < TAM_FILA )
   ibuf[count] = (UChar*)malloc( sizeof(UChar) * M_BLOCK );
   obuf[count] = (OutputBuffer*)malloc( sizeof(OutputBuffer) );
   obuf[count]->size = -1;
   obuf[count]->zbuf = NULL;
   }

   // configure the number of threads
   omp_set_num_threads( NUM_THREADS );

   erro   = 0;
   nLeituraAtual  = 0;
   nProcAtual = 0;
   nGravacaoAtual = 0;
   endRead= 0;
   nIdBlock   = -1;
//  char str[10];
   //int nPrinted = 0;
   int tsleep   = 0;

   for (count = 0; count < TAM_FILA; ++count) {
   block_in_use[count] = 0;
   }

   MPI_File_set_view( stream,  0, MPI_BYTE, MPI_BYTE, "native", MPI_INFO_NULL );
   MPI_File_set_view( zStream, 0, MPI_BYTE, MPI_BYTE, "native", MPI_INFO_NULL );

// Start of the parallel region
#pragma omp parallel default(shared) private(zbuf, nZ, r, nIdBlock )
{
   zbuf = (uchar*)malloc( (M_BLOCK + 600 + (M_BLOCK / 100)) * sizeof(uchar) );

   while ( !erro && omp_get_thread_num() != 1 )
   {
   //printf( "PROCESSO %d\n", rank );
   if( omp_get_thread_num() == 0 ) // reader thread
   {
   if( VERBOSE )printf( "Processo %d Thread Leitora\n", rank );

   if ( (rank + nLeituraAtual*nProcs) >= nBlocos &&
nLeituraAtual > 0

Re: [OMPI users] [torqueusers] Job dies randomly, but only through torque

2008-05-29 Thread Jeff Squyres
I don't know much about Maui, but these lines from the log seem  
relevant:


-
maui.log:05/29 09:27:21 INFO: job 2120 exceeds requested proc limit (3.72 > 1.00)
maui.log:05/29 09:27:21 MSysRegEvent(JOBRESVIOLATION:  job '2120' in state 'Running' has exceeded PROC resource limit (372 > 100) (action CANCEL will be taken)  job start time: Thu May 29 09:26:19
-

I'm not sure what resource limits it's talking about.


On May 29, 2008, at 2:25 PM, Jim Kusznir wrote:


I have verified that maui is killing the job.  I actually ran into
this with another user all of a sudden.  I don't know why it's only
affecting a few currently.  Here's the maui log extract for a current
run of this user's program:

---
[root@aeolus log]# grep 2120 *
maui.log:05/29 09:01:45 INFO: job '2118' loaded:   1   patton
patton   1800   Idle   0 1212076905   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212076905
maui.log:05/29 09:23:40 INFO: job '2119' loaded:   1   patton
patton   1800   Idle   0 1212078218   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212078220
maui.log:05/29 09:26:19  
MPBSJobLoad(2120,2120.aeolus.eecs.wsu.edu,J,TaskList,0)

maui.log:05/29 09:26:19 MReqCreate(2120,SrcRQ,DstRQ,DoCreate)
maui.log:05/29 09:26:19 MJobSetCreds(2120,patton,patton,)
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: job '2120' loaded:   1   patton
patton   1800   Idle   0 1212078378   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212078379
maui.log:05/29 09:26:19 INFO: job '2120' Priority:1
maui.log:05/29 09:26:19 INFO: job '2120' Priority:1
maui.log:05/29 09:26:19 INFO: 8 feasible tasks found for job
2120:0 in partition DEFAULT (1 Needed)
maui.log:05/29 09:26:19 INFO: 1 requested hostlist tasks allocated
for job 2120 (0 remain)
maui.log:05/29 09:26:19 MJobStart(2120)
maui.log:05/29 09:26:19  
MJobDistributeTasks(2120,base,NodeList,TaskMap)

maui.log:05/29 09:26:19 MAMAllocJReserve(2120,RIndex,ErrMsg)
maui.log:05/29 09:26:19 MRMJobStart(2120,Msg,SC)
maui.log:05/29 09:26:19 MPBSJobStart(2120,base,Msg,SC)
maui.log:05/29 09:26:19
MPBSJobModify(2120,Resource_List,Resource,compute-0-0.local)
maui.log:05/29 09:26:19 MPBSJobModify(2120,Resource_List,Resource,1)
maui.log:05/29 09:26:19 INFO: job '2120' successfully started
maui.log:05/29 09:26:19 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:26:19 MResJCreate(2120,MNodeList, 
00:00:00,ActiveJob,Res)

maui.log:05/29 09:26:19 INFO: starting job '2120'
maui.log:05/29 09:26:50 INFO: node compute-0-0.local has joblist
'0/2120.aeolus.eecs.wsu.edu'
maui.log:05/29 09:26:50 INFO: job 2120 adds 1 processors per task
to node compute-0-0.local (1)
maui.log:05/29 09:26:50  
MPBSJobUpdate(2120,2120.aeolus.eecs.wsu.edu,TaskList,0)

maui.log:05/29 09:26:50 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:26:50 MResDestroy(2120)
maui.log:05/29 09:26:50 MResChargeAllocation(2120,2)
maui.log:05/29 09:26:50  
MResJCreate(2120,MNodeList,-00:00:31,ActiveJob,Res)

maui.log:05/29 09:26:50 INFO: job '2120' Priority:1
maui.log:05/29 09:26:50 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: node compute-0-0.local has joblist
'0/2120.aeolus.eecs.wsu.edu'
maui.log:05/29 09:27:21 INFO: job 2120 adds 1 processors per task
to node compute-0-0.local (1)
maui.log:05/29 09:27:21  
MPBSJobUpdate(2120,2120.aeolus.eecs.wsu.edu,TaskList,0)

maui.log:05/29 09:27:21 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:27:21 MResDestroy(2120)
maui.log:05/29 09:27:21 MResChargeAllocation(2120,2)
maui.log:05/29 09:27:21  
MResJCreate(2120,MNodeList,-00:01:02,ActiveJob,Res)

maui.log:05/29 09:27:21 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: job 2120 exceeds requested proc
limit (3.72 > 1.00)
maui.log:05/29 09:27:21 MSysRegEvent(JOBRESVIOLATION:  job '2120' in
state 'Running' has exceeded PROC resource limit (372 > 100) (action
CANCEL will be taken)  job start time: Thu May 29 09:26:19
maui.log:05/29 09:27:21 MRMJobCancel(2120,job violates resource
utilization policies,SC)
maui.log:05/29 09:27:21 MPBSJobCancel(2120,base,CMsg,Msg,job violates
resource utilization policies)
maui.log:05/29 09:27:21 INFO: job '2120' successfully cancelled
maui.log:05/29 09:27:27 INFO: active PBS job 2120 has been removed
from the queue.  assuming successful completion
maui.log:05/29 09:27:27 MJobProcessCompleted(2120)
maui.log:05/29 09:27:27 MAMAllocJDebit(A,2120,SC,ErrMsg)
maui.log:05/29 09:27:27 INFO: job '  2120' completed.
QueueTime:  1  RunTime: 62  

Re: [OMPI users] [torqueusers] Job dies randomly, but only through torque

2008-05-29 Thread Jim Kusznir
I have verified that maui is killing the job.  I actually ran into
this with another user all of a sudden.  I don't know why it's only
affecting a few currently.  Here's the maui log extract for a current
run of this user's program:

---
[root@aeolus log]# grep 2120 *
maui.log:05/29 09:01:45 INFO: job '2118' loaded:   1   patton
patton   1800   Idle   0 1212076905   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212076905
maui.log:05/29 09:23:40 INFO: job '2119' loaded:   1   patton
patton   1800   Idle   0 1212078218   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212078220
maui.log:05/29 09:26:19 MPBSJobLoad(2120,2120.aeolus.eecs.wsu.edu,J,TaskList,0)
maui.log:05/29 09:26:19 MReqCreate(2120,SrcRQ,DstRQ,DoCreate)
maui.log:05/29 09:26:19 MJobSetCreds(2120,patton,patton,)
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: default QOS for job 2120 set to
DEFAULT(0) (P:DEFAULT,U:[NONE],G:[NONE],A:[NONE],C:[NONE])
maui.log:05/29 09:26:19 INFO: job '2120' loaded:   1   patton
patton   1800   Idle   0 1212078378   [NONE] [NONE] [NONE] >=
0 >=  0 [NONE] 1212078379
maui.log:05/29 09:26:19 INFO: job '2120' Priority:1
maui.log:05/29 09:26:19 INFO: job '2120' Priority:1
maui.log:05/29 09:26:19 INFO: 8 feasible tasks found for job
2120:0 in partition DEFAULT (1 Needed)
maui.log:05/29 09:26:19 INFO: 1 requested hostlist tasks allocated
for job 2120 (0 remain)
maui.log:05/29 09:26:19 MJobStart(2120)
maui.log:05/29 09:26:19 MJobDistributeTasks(2120,base,NodeList,TaskMap)
maui.log:05/29 09:26:19 MAMAllocJReserve(2120,RIndex,ErrMsg)
maui.log:05/29 09:26:19 MRMJobStart(2120,Msg,SC)
maui.log:05/29 09:26:19 MPBSJobStart(2120,base,Msg,SC)
maui.log:05/29 09:26:19
MPBSJobModify(2120,Resource_List,Resource,compute-0-0.local)
maui.log:05/29 09:26:19 MPBSJobModify(2120,Resource_List,Resource,1)
maui.log:05/29 09:26:19 INFO: job '2120' successfully started
maui.log:05/29 09:26:19 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:26:19 MResJCreate(2120,MNodeList,00:00:00,ActiveJob,Res)
maui.log:05/29 09:26:19 INFO: starting job '2120'
maui.log:05/29 09:26:50 INFO: node compute-0-0.local has joblist
'0/2120.aeolus.eecs.wsu.edu'
maui.log:05/29 09:26:50 INFO: job 2120 adds 1 processors per task
to node compute-0-0.local (1)
maui.log:05/29 09:26:50 MPBSJobUpdate(2120,2120.aeolus.eecs.wsu.edu,TaskList,0)
maui.log:05/29 09:26:50 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:26:50 MResDestroy(2120)
maui.log:05/29 09:26:50 MResChargeAllocation(2120,2)
maui.log:05/29 09:26:50 MResJCreate(2120,MNodeList,-00:00:31,ActiveJob,Res)
maui.log:05/29 09:26:50 INFO: job '2120' Priority:1
maui.log:05/29 09:26:50 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: node compute-0-0.local has joblist
'0/2120.aeolus.eecs.wsu.edu'
maui.log:05/29 09:27:21 INFO: job 2120 adds 1 processors per task
to node compute-0-0.local (1)
maui.log:05/29 09:27:21 MPBSJobUpdate(2120,2120.aeolus.eecs.wsu.edu,TaskList,0)
maui.log:05/29 09:27:21 MStatUpdateActiveJobUsage(2120)
maui.log:05/29 09:27:21 MResDestroy(2120)
maui.log:05/29 09:27:21 MResChargeAllocation(2120,2)
maui.log:05/29 09:27:21 MResJCreate(2120,MNodeList,-00:01:02,ActiveJob,Res)
maui.log:05/29 09:27:21 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: job '2120' Priority:1
maui.log:05/29 09:27:21 INFO: job 2120 exceeds requested proc
limit (3.72 > 1.00)
maui.log:05/29 09:27:21 MSysRegEvent(JOBRESVIOLATION:  job '2120' in
state 'Running' has exceeded PROC resource limit (372 > 100) (action
CANCEL will be taken)  job start time: Thu May 29 09:26:19
maui.log:05/29 09:27:21 MRMJobCancel(2120,job violates resource
utilization policies,SC)
maui.log:05/29 09:27:21 MPBSJobCancel(2120,base,CMsg,Msg,job violates
resource utilization policies)
maui.log:05/29 09:27:21 INFO: job '2120' successfully cancelled
maui.log:05/29 09:27:27 INFO: active PBS job 2120 has been removed
from the queue.  assuming successful completion
maui.log:05/29 09:27:27 MJobProcessCompleted(2120)
maui.log:05/29 09:27:27 MAMAllocJDebit(A,2120,SC,ErrMsg)
maui.log:05/29 09:27:27 INFO: job '  2120' completed.
QueueTime:  1  RunTime: 62  Accuracy:  3.44  XFactor:  0.04
maui.log:05/29 09:27:27 INFO: job '2120' completed  X: 0.035000
T: 62  PS: 62  A: 0.03
maui.log:05/29 09:27:27 MJobSendFB(2120)
maui.log:05/29 09:27:27 INFO: job usage sent for job '2120'
maui.log:05/29 09:27:27 MJobRemove(2120)
maui.log:05/29 09:27:27 MResDestroy(2120)
maui.log:05/29 09:27:27 MResChargeAllocation(2120,2)
maui.log:05/29 09:27:27 MJobDestroy(2120)
maui.log:05/29 09:42:54 INFO: job '2121' loaded:   1 sledburg
sledburg   1800   Idle   0 

Re: [OMPI users] Help: Program Terminated

2008-05-29 Thread Andreas Schäfer
Hi Amy,

On 16:10 Thu 29 May , Lee Amy wrote:
> The MicroTar parallel version was terminated after 463 minutes with the
> following error messages:
> 
> [gnode5:31982] [ 0] /lib64/tls/libpthread.so.0 [0x345460c430]
> [gnode5:31982] [ 1] microtar(LocateNuclei+0x137) [0x403037]
> [gnode5:31982] [ 2] microtar(main+0x4ac) [0x40431c]
> [gnode5:31982] [ 3] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
> [0x3453b1c3fb]
> [gnode5:31982] [ 4] microtar [0x402e6a]
> [gnode5:31982] *** End of error message ***
> mpirun noticed that job rank 0 with PID 18710 on node gnode1 exited on
> signal 15 (Terminated).
> 19 additional processes aborted (not shown)
> 

If I'm not mistaken, signal 15 is SIGTERM, which is sent to processes
to terminate them. To me this sounds like your application is being
terminated by something external, maybe because your job exceeded
the wall clock time limit of your scheduling system. Does the job
repeatedly fail after the same amount of run time? Do shorter jobs finish
successfully?
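
If the job runs under a PBS/Torque-style scheduler (an assumption; the post
doesn't say which scheduler is in use), one quick test would be to request a
longer wall clock limit in the job script, for example:

 #PBS -l walltime=24:00:00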

Just my 0.02 Euros (-8

Cheers
-Andreas


-- 

Andreas Schäfer
Cluster and Metacomputing Working Group
Friedrich-Schiller-Universität Jena, Germany
PGP/GPG key via keyserver
I'm a bright... http://www.the-brights.net


(\___/)
(+'.'+)
(")_(")
This is Bunny. Copy and paste Bunny into your 
signature to help him gain world domination!




Re: [OMPI users] Process size

2008-05-29 Thread Josh Hursey

Leonardo,

You are exactly correct. The CRCP module/component will grow the
application size, probably for every message that you send or receive.
This is because the CRCP component tracks the signature {data_size,
tag, communicator, peer} (*not* the contents of the message) of every
message sent/received.
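
As a rough illustration only (this is a sketch of the idea, not Open MPI's
actual internal data structure), the per-message bookkeeping is a small
fixed-size record rather than the payload itself:

struct msg_signature {
    long data_size;   /* size of the message in bytes      */
    int  tag;         /* MPI tag                           */
    int  comm_id;     /* identifier for the communicator   */
    int  peer;        /* rank of the peer process          */
};

Even a record this small adds up when tokens are exchanged in a tight loop
with no delay, which is consistent with the growth you see in the
0-second-delay case.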


I have some fixes in development for the CRCP component to make it
behave a bit better for large numbers of messages; as a result they
will also help control the number of memory allocations needed by this
component. Unfortunately they are not 100% ready for public use at the
moment, but hopefully soon.


As an aside: to clearly see the effect of turning the CRCP component  
on/off at runtime try the two commands below:

Without CRCP:
 shell$ mpirun -np 2 -am ft-enable-cr -mca crcp none simple-ping 20 1
With CRCP:
 shell$ mpirun -np 2 -am ft-enable-cr simple-ping 20 1

-- Josh

On May 29, 2008, at 7:54 AM, Leonardo Fialho wrote:


Hi All,

I ran some tests with a dummy "ping" application and some memory
problems occurred. In these tests I obtained the following results:


1) OpenMPI (without FT):
  - delaying 1 second to send token to other node: orted and  
application size stable;
  - delaying 0 seconds to send token to other node: orted and  
application size stable.


2) OpenMPI (with CRCP FT):
   - delaying 1 second to send token to other node: orted stable and
application size grows in the first seconds and then stabilizes;
  - delaying 0 seconds to send token to other node: orted stable and  
application size growing all the time.


I think that it is something in the CRCP module/component...

Thanks,

--
Leonardo Fialho
Computer Architecture and Operating Systems Department - CAOS
Universidad Autonoma de Barcelona - UAB
ETSE, Edifcio Q, QC/3088
http://www.caos.uab.es
Phone: +34-93-581-2888
Fax: +34-93-581-2478

/* Header names were stripped by the archive; restored to what the code needs. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
   double time_end, time_start;
   int count, rank, fim, x;
   char buffer[5] = "test!";
   MPI_Status status;

   if (3 > argc) {
 printf("\nInsufficient arguments (%d)\n\nping <iterations> <delay_seconds>\n\n", argc);
 exit(1);
   }

   if (MPI_Init(&argc, &argv) == MPI_SUCCESS) {
   time_start = MPI_Wtime();
   MPI_Comm_size (MPI_COMM_WORLD, &count);
   MPI_Comm_rank (MPI_COMM_WORLD, &rank);
   for (fim = 1; fim <= atoi(argv[1]); fim++) {
   if (rank == 0) {
   printf("(%d) sent token to (%d)\n", rank, rank+1);
   fflush(stdout);
   sleep(atoi(argv[2]));
   MPI_Send(buffer, 5, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
   MPI_Recv(buffer, 5, MPI_CHAR, count-1, 1, MPI_COMM_WORLD, &status);

   } else {
   MPI_Recv(buffer, 5, MPI_CHAR, rank-1, 1, MPI_COMM_WORLD, &status);
   printf("(%d) sent token to (%d)\n", rank,  
(rank==(count-1) ? 0 : rank+1));

   fflush(stdout);
   sleep(atoi(argv[2]));
   MPI_Send(buffer, 5, MPI_CHAR, (rank==(count-1) ? 0 :  
rank+1), 1, MPI_COMM_WORLD);

   }
   }
   }

   time_end = MPI_Wtime();
   MPI_Finalize();

   if (rank == 0) {
   printf("%f\n", time_end - time_start);
   }

   return 0;
}




[OMPI users] Process size

2008-05-29 Thread Leonardo Fialho

Hi All,

I ran some tests with a dummy "ping" application and some memory problems
occurred. In these tests I obtained the following results:


1) OpenMPI (without FT):
   - delaying 1 second to send token to other node: orted and 
application size stable;
   - delaying 0 seconds to send token to other node: orted and 
application size stable.


2) OpenMPI (with CRCP FT):
    - delaying 1 second to send token to other node: orted stable and
application size grows in the first seconds and then stabilizes;
   - delaying 0 seconds to send token to other node: orted stable and 
application size growing all the time.


I think that it is something in the CRCP module/component...

Thanks,

--
Leonardo Fialho
Computer Architecture and Operating Systems Department - CAOS
Universidad Autonoma de Barcelona - UAB
ETSE, Edifcio Q, QC/3088
http://www.caos.uab.es
Phone: +34-93-581-2888
Fax: +34-93-581-2478

/* Header names were stripped by the archive; restored to what the code needs. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

int main (int argc, char *argv[]) {
double time_end, time_start;
int count, rank, fim, x;
char buffer[5] = "test!";
MPI_Status status;

if (3 > argc) {
  printf("\nInsufficient arguments (%d)\n\nping <iterations> <delay_seconds>\n\n", argc);
  exit(1);
}

if (MPI_Init(&argc, &argv) == MPI_SUCCESS) {
time_start = MPI_Wtime();
MPI_Comm_size (MPI_COMM_WORLD, &count);
MPI_Comm_rank (MPI_COMM_WORLD, &rank);
for (fim = 1; fim <= atoi(argv[1]); fim++) {
if (rank == 0) {
printf("(%d) sent token to (%d)\n", rank, rank+1);
fflush(stdout);
sleep(atoi(argv[2]));
MPI_Send(buffer, 5, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
MPI_Recv(buffer, 5, MPI_CHAR, count-1, 1, MPI_COMM_WORLD, &status);
} else {
MPI_Recv(buffer, 5, MPI_CHAR, rank-1, 1, MPI_COMM_WORLD, &status);
printf("(%d) sent token to (%d)\n", rank, (rank==(count-1) ? 0 
: rank+1));
fflush(stdout);
sleep(atoi(argv[2]));
MPI_Send(buffer, 5, MPI_CHAR, (rank==(count-1) ? 0 : rank+1), 
1, MPI_COMM_WORLD);
}
}
}

time_end = MPI_Wtime();
MPI_Finalize();

if (rank == 0) {
printf("%f\n", time_end - time_start);
}

return 0;
}


[OMPI users] Help: Program Terminated

2008-05-29 Thread Lee Amy
Hello,

I use a piece of bioinformatics software called MicroTar to do some work,
but it seems that it doesn't finish well.

The MicroTar parallel version was terminated after 463 minutes with the
following error messages:

[gnode5:31982] [ 0] /lib64/tls/libpthread.so.0 [0x345460c430]
[gnode5:31982] [ 1] microtar(LocateNuclei+0x137) [0x403037]
[gnode5:31982] [ 2] microtar(main+0x4ac) [0x40431c]
[gnode5:31982] [ 3] /lib64/tls/libc.so.6(__libc_start_main+0xdb)
[0x3453b1c3fb]
[gnode5:31982] [ 4] microtar [0x402e6a]
[gnode5:31982] *** End of error message ***
mpirun noticed that job rank 0 with PID 18710 on node gnode1 exited on
signal 15 (Terminated).
19 additional processes aborted (not shown)

gnode5 is a slave node and gnode1 is the master node. I run MicroTar on
gnode1 using Open MPI 1.2.6's mpirun.

The OS is CentOS 4. The processor is an AMD Opteron 270HE.

So what can we tell from that? Is it an MPI problem, or did I not compile
MicroTar properly?

Thank you very much~

Best Regards,

Amy Lee


[OMPI users] OpenIB problem: error polling HP CQ...

2008-05-29 Thread Matt Hughes
I have a program which uses MPI::Comm::Spawn to start processes on
compute nodes (c0-0, c0-1, etc.).  The communication between the
compute nodes consists of ISend and IRecv pairs, while communication
between the head node and the compute nodes consists of gather and bcast
operations.  After executing ~80 successful loops (gather/bcast pairs),
I get this error message from the head node process during a gather call:

[0,1,0][btl_openib_component.c:1332:btl_openib_component_progress]
from headnode.local to: c0-0 error polling HP CQ with status WORK
REQUEST FLUSHED ERROR status number 5 for wr_id 18504944 opcode 1

The relevant environment variables:
OMPI_MCA_btl_openib_rd_num=128
OMPI_MCA_btl_openib_verbose=1
OMPI_MCA_btl_base_verbose=1
OMPI_MCA_btl_openib_rd_low=75
OMPI_MCA_btl_base_debug=1
OMPI_MCA_btl_openib_warn_no_hca_params_found=0
OMPI_MCA_btl_openib_warn_default_gid_prefix=0
OMPI_MCA_btl=self,openib

If rd_low and rd_num are left at their default values, the program
simply hangs in the gather call after about 20 iterations (a gather
and a bcast).

Can anyone shed any light on what this error message means or what
might be done about it?

Thanks,
mch