[OMPI users] Test OpenMPI on a cluster

2010-01-30 Thread Tim
Hi,  

I am learning MPI on a cluster. Here is one simple example. I expected the output 
to show responses from different nodes, but they all come from the same node, 
node062. I wonder why, and how I can get reports from different nodes to show 
that MPI actually distributes processes across the nodes? 
Thanks and regards!

ex1.c  

/* test of MPI */
#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
  char idstr[2232]; char buff[22128];
  char processor_name[MPI_MAX_PROCESSOR_NAME];
  int numprocs; int myid; int i; int namelen;
  MPI_Status stat;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myid);
  MPI_Get_processor_name(processor_name, &namelen);

  if(myid == 0)
  {
    printf("We have %d processors\n", numprocs);
    for(i = 1; i < numprocs; i++)
    {
      sprintf(buff, "Hello %d!", i);
      MPI_Send(buff, sizeof(buff), MPI_CHAR, i, 0, MPI_COMM_WORLD);
    }
    for(i = 1; i < numprocs; i++)
    {
      MPI_Recv(buff, sizeof(buff), MPI_CHAR, i, 0, MPI_COMM_WORLD, &stat);
      printf("%s\n", buff);
    }
  }
  else
  {
    /* worker: receive the greeting, append this node's name, send it back */
    MPI_Recv(buff, sizeof(buff), MPI_CHAR, 0, 0, MPI_COMM_WORLD, &stat);
    sprintf(idstr, " processor %d on node %s", myid, processor_name);
    strcat(buff, idstr);
    MPI_Send(buff, sizeof(buff), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
  }
  MPI_Finalize();
  return 0;
}
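
A stripped-down check along the same lines (a sketch, not part of the program 
above) is to have every rank print its own processor name directly; if mpirun 
really spreads the ranks over the hosts given for the job (e.g. via a hostfile), 
the printed names will differ:

/* placement check: every rank reports the node it runs on */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
  int rank, namelen;
  char name[MPI_MAX_PROCESSOR_NAME];

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Get_processor_name(name, &namelen);
  printf("rank %d runs on %s\n", rank, name);
  MPI_Finalize();
  return 0;
}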

Re: [OMPI users] ABI stabilization/versioning

2010-01-30 Thread Jeff Squyres
On Jan 26, 2010, at 6:49 AM, Jed Brown wrote:

> And inspecting a binary built in Sep 2008 (must have been 1.2.7), ldd
> resolves to my 1.4.1 copy without complaints.  However, the loader is
> intelligent and at least offers a warning when I try to run this ancient
> binary
> 
>   ./a.out: Symbol `ompi_mpi_comm_null' has different size in shared object, 
> consider re-linking

Yes, this was definitely a problem.  We introduced ABI forward compatibility in 
1.3.2, which fixes this issue.

Background: even though Open MPI's MPI handles are just pointers, there's a 
subtlety we didn't anticipate: the back-end size of the actual structs matters 
for global handles like MPI_COMM_WORLD.  In 1.3.2, we therefore padded the 
back-end MPI objects for pre-defined handles (MPI_COMM_WORLD, MPI_INT, etc.) 
to help isolate us from future struct size changes.
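
To make the padding idea concrete, here is a rough sketch; the names and the 
512-byte figure below are made up for illustration and are not Open MPI's 
actual declarations:

/* Sketch only: the object behind a predefined handle lives inside a padded
   container, so the exported symbol keeps a fixed size even if the real
   struct grows in a later release. */
struct example_communicator {         /* the "real" struct; may gain fields over time */
  int c_index;
  int c_flags;
  /* ... */
};

union example_predefined_communicator {
  struct example_communicator comm;   /* current layout */
  char padding[512];                  /* fixed footprint reserved up front (assumed size) */
};

/* Predefined handles refer to storage of the padded type, so an old binary
   and a newer library agree on the symbol's size. */
union example_predefined_communicator example_comm_world_storage;

Because the exported object has the size of the padded container rather than 
of the inner struct, binaries built after the padding was introduced keep 
seeing a symbol of the same size even as the inner struct changes.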

-- 
Jeff Squyres
jsquy...@cisco.com