Re: [OMPI users] [Open MPI] #2681: ompi-server publish name broken in 1.5.x

2011-01-11 Thread Bernard Secher - SFME/LGLS

Hello,

This feature is very important for my project, which manages the coupling 
of parallel codes.

Please fix this bug as soon as possible.

Best
Bernard

Open MPI wrote:

#2681: ompi-server publish name broken in 1.5.x
------------------------------+---------------------------------
 Reporter:  jsquyres          |      Owner:
     Type:  defect            |     Status:  new
 Priority:  major             |  Milestone:  Open MPI 1.5.3
  Version:  Open MPI 1.5.1    |   Keywords:
------------------------------+---------------------------------


Comment(by rhc):

 Better not to, as I may never get to it.

  






Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS

The accept and connect tests are OK with openmpi version 1.4.1.

I think there is a bug in version 1.5.1.

Best
Bernard

Bernard Secher - SFME/LGLS wrote:
I get the same deadlock with the openmpi tests pubsub, accept, and 
connect with version 1.5.1.


Bernard Secher - SFME/LGLS wrote:

Jeff,

The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but 
earlier, in MPI_Publish_name and MPI_Lookup_name.

So the broadcast of srv is not involved in the deadlock.

Best
Bernard

Bernard Secher - SFME/LGLS wrote:

Jeff,

Only the processes of the program whose process 0 succeeded in 
publishing the name have srv=1 and then call MPI_Comm_accept.
The processes of the program whose process 0 failed to publish the name 
have srv=0 and then call MPI_Comm_connect.

It worked like this with openmpi 1.4.1.

Is it different with openmpi 1.5.1?

Best
Bernard


Jeff Squyres wrote:

On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:

  

MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);



You're broadcasting srv here -- won't everyone now have *srv==1, such that
they all call MPI_COMM_ACCEPT, below?


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}








Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS
I get the same deadlock with the openmpi tests pubsub, accept, and connect 
with version 1.5.1.


Bernard Secher - SFME/LGLS wrote:

Jeff,

The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but 
earlier, in MPI_Publish_name and MPI_Lookup_name.

So the broadcast of srv is not involved in the deadlock.

Best
Bernard

Bernard Secher - SFME/LGLS wrote:

Jeff,

Only the processes of the program whose process 0 succeeded in 
publishing the name have srv=1 and then call MPI_Comm_accept.
The processes of the program whose process 0 failed to publish the name 
have srv=0 and then call MPI_Comm_connect.

It worked like this with openmpi 1.4.1.

Is it different with openmpi 1.5.1?

Best
Bernard


Jeff Squyres wrote:

On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:

  

MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);



You're broadcasting srv here -- won't everyone now have *srv==1, such that
they all call MPI_COMM_ACCEPT, below?


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}








Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS

Jeff,

The deadlock is not in MPI_Comm_accept and MPI_Comm_connect, but earlier, 
in MPI_Publish_name and MPI_Lookup_name.

So the broadcast of srv is not involved in the deadlock.

Best
Bernard

Bernard Secher - SFME/LGLS wrote:

Jeff,

Only the processes of the program whose process 0 succeeded in publishing 
the name have srv=1 and then call MPI_Comm_accept.
The processes of the program whose process 0 failed to publish the name 
have srv=0 and then call MPI_Comm_connect.

It worked like this with openmpi 1.4.1.

Is it different with openmpi 1.5.1?

Best
Bernard


Jeff Squyres wrote:

On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:

  

MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);



You're broadcasting srv here -- won't everyone now have *srv==1, such that
they all call MPI_COMM_ACCEPT, below?


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}











Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-07 Thread Bernard Secher - SFME/LGLS

Jeff,

Only the processes of the program whose process 0 succeeded in publishing 
the name have srv=1 and then call MPI_Comm_accept.
The processes of the program whose process 0 failed to publish the name 
have srv=0 and then call MPI_Comm_connect.

It worked like this with openmpi 1.4.1.

Is it different with openmpi 1.5.1?

Best
Bernard


Jeff Squyres wrote:

On Jan 5, 2011, at 10:36 AM, Bernard Secher - SFME/LGLS wrote:

  

MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);



You're broadcasting srv here -- won't everyone now have *srv==1, such that
they all call MPI_COMM_ACCEPT, below?


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}







Re: [OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-06 Thread Bernard Secher - SFME/LGLS

Is it a bug in openmpi V1.5.1?

Bernard

Bernard Secher - SFME/LGLS wrote:

Hello,

What changed between openMPI 1.4.1 and 1.5.1 in the MPI2 name 
publishing service?
I have 2 programs which connect to each other via the MPI_Publish_name and 
MPI_Lookup_name subroutines and ompi-server.
That's OK with the 1.4.1 version, but I get a deadlock with the 1.5.1 
version inside the MPI_Publish_name and MPI_Lookup_name subroutines.

Best
Bernard

This is my connection subroutine:


MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}



  






[OMPI users] change between openmpi 1.4.1 and 1.5.1 about MPI2 publish name

2011-01-05 Thread Bernard Secher - SFME/LGLS

Hello,

What changed between openMPI 1.4.1 and 1.5.1 in the MPI2 name publishing 
service?
I have 2 programs which connect to each other via the MPI_Publish_name and 
MPI_Lookup_name subroutines and ompi-server.
That's OK with the 1.4.1 version, but I get a deadlock with the 1.5.1 version 
inside the MPI_Publish_name and MPI_Lookup_name subroutines.

Best
Bernard

This is my connection subroutine:


MPI_Comm remoteConnect(int myrank, int *srv, char *port_name, char* service)
{
  int clt=0;
  MPI_Request request; /* request for non-blocking communication */
  MPI_Comm gcom;
  MPI_Status status;
  char port_name_clt[MPI_MAX_PORT_NAME];

  if( service == NULL ) service = defaultService;

  /* only the process of rank 0 can publish the name */
  MPI_Barrier(MPI_COMM_WORLD);

  /* A lookup for an unpublished service generates an error */
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
  if( myrank == 0 ){
    /* Try to be a server. If the service is already published, try to be
       a client */
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("[%d] Publish name\n",myrank);

    if ( MPI_Publish_name(service, MPI_INFO_NULL, port_name) == MPI_SUCCESS ) {
      *srv = 1;
      printf("[%d] service %s available at %s\n",myrank,service,port_name);
    }
    else if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt)
              == MPI_SUCCESS ){
      MPI_Close_port( port_name );
      clt = 1;
    }
    else
      /* Throw exception */
      printf("[%d] Error\n",myrank);
  }
  else{
    /* Wait for rank 0 to publish the name */
    sleep(1);
    printf("[%d] Lookup name\n",myrank);
    if ( MPI_Lookup_name(service, MPI_INFO_NULL, port_name_clt) == MPI_SUCCESS ){
      clt = 1;
    }
    else
      /* Throw exception */
      ;
  }
  MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);

  MPI_Bcast(srv,1,MPI_INT,0,MPI_COMM_WORLD);


  if ( *srv )
    /* I am the Master */
    MPI_Comm_accept( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &gcom );
  else{
    /* Connect to service SERVER, get the inter-communicator server */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    if ( MPI_Comm_connect(port_name_clt, MPI_INFO_NULL, 0, MPI_COMM_WORLD,
                          &gcom) == MPI_SUCCESS )
      printf("[%d] I get the connection with %s at %s !\n",myrank, service,
             port_name_clt);
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_ARE_FATAL);
  }

  if(myrank != 0) *srv = 0;

  return gcom;

}
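
For reference, a typical way to run two such programs against a standalone 
Open MPI name server is sketched below; the URI file path and the program 
names codeA/codeB are placeholders, not taken from this thread:

  # start the name server once and write its contact URI to a file
  ompi-server --report-uri /tmp/ompi-server.uri

  # launch the two codes as separate jobs pointing at the same server
  mpirun -np 2 --ompi-server file:/tmp/ompi-server.uri ./codeA
  mpirun -np 2 --ompi-server file:/tmp/ompi-server.uri ./codeB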





Re: [OMPI users] problem when compiling openmpi V1.5.1

2010-12-16 Thread Bernard Secher - SFME/LGLS

Thanks Jody,

Is it possible to install openmpi without OpenMP? Is there any option 
in configure for that?


Bernard
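
For what it's worth, a sketch of one way to build Open MPI without the 
VampirTrace/OTF tools that need omp.h is to exclude the VT contrib package 
at configure time; the option name below should be checked against the 
README of your release, as it is an assumption here:

  ./configure --enable-contrib-no-build=vt ...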

jody wrote:

Hi

If I remember correctly, "omp.h" is a header file for OpenMP, which is
not the same as Open MPI.
So it looks like you have to install OpenMP;
then you can compile it with the compiler option -fopenmp (in gcc)

Jody


On Thu, Dec 16, 2010 at 11:56 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
  

I get the following error message when I compile openmpi V1.5.1:

  CXXotfprofile-otfprofile.o
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:11:18:
erreur: omp.h : Aucun fichier ou dossier de ce type
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:
In function ‘int main(int, const char**)’:
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:325:
erreur: ‘omp_set_num_threads’ was not declared in this scope
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:460:
erreur: ‘omp_get_thread_num’ was not declared in this scope
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:471:
erreur: ‘omp_get_num_threads’ was not declared in this scope

The compiler doesn't find the omp.h file.
What happens ?

Best
Bernard




[OMPI users] problem when compiling openmpi V1.5.1

2010-12-16 Thread Bernard Secher - SFME/LGLS

I get the following error message when I compile openmpi V1.5.1:

 CXXotfprofile-otfprofile.o
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:11:18: 
erreur: omp.h : Aucun fichier ou dossier de ce type
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp: 
In function 'int main(int, const char**)':
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:325: 
erreur: 'omp_set_num_threads' was not declared in this scope
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:460: 
erreur: 'omp_get_thread_num' was not declared in this scope
../../../../../../../../../openmpi-1.5.1-src/ompi/contrib/vt/vt/extlib/otf/tools/otfprofile/otfprofile.cpp:471: 
erreur: 'omp_get_num_threads' was not declared in this scope


The compiler doesn't find the omp.h file.
What is happening?

Best
Bernard



Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS

Hi Jody,

I think it is not a problem of MPI_Sends that don't match corresponding 
MPI_Recvs, because all processes reach MPI_Finalize(). If it were, at least 
one process would be blocked before reaching MPI_Finalize.


Bernard



jody wrote:

Hi Bernard

The structure looks OK as far as I can see.
Did it run OK on Open-MPI 1.2.X?
So are you sure all processes reach the MPI_Finalize command?
Usually MPI_Finalize only completes when all processes reach it.
I think you should also make sure that all MPI_Sends are matched by
corresponding MPI_Recvs.

Jody

On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
  

Thanks Jody for your answer.

I launch 2 instances of my program, with 2 processes each, on the same
machine.
I use MPI_Publish_name and MPI_Lookup_name to create a global communicator on
the 4 processes.
Then the 4 processes exchange data.

The main program is a CORBA server. I send you this program.

Bernard

jody wrote:

For instance:
- how many processes on how many machines,
- what kind of computation
- perhaps minimal code which reproduces this failing
- configuration settings, etc.
See: http://www.open-mpi.org/community/help/

Without any information except for "it doesn't work",
nobody can give you any help whatsoever.

Jody

On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:


Hello Jeff,

I don't understand what you mean by "A _detailed_ description of what is
failing".
The problem is a deadlock in the MPI_Finalize() function. All processes are
blocked in this MPI_Finalize() function.

Bernard

Jeff Squyres wrote:


Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that you
provide, the better. E-mails saying "My application doesn't work!" will
inevitably be answered with requests for more information about exactly what
doesn't work; so please include as much information detailed in your initial
e-mail as possible."

Additionally:

"The best way to get help is to provide a "recipie" for reproducing the
problem."

Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:



Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:


Can you send all the information listed here:

  http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:



Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.

Can somebody help me please?

Bernard




Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS

Hello George,

Thanks for your messages. Yes, I disconnect my different worlds before 
calling MPI_Finalize().


Bernard

George Bosilca wrote:
I was somehow confused when I wrote my last email and I mixed up the 
MPI versions (thanks to Dick Treumann for gently pointing me to the 
truth). Before MPI 2.1, the MPI Standard was unclear about how 
MPI_Finalize should behave in the context of spawned or joined worlds, 
which made disconnect+finalize the only safe and portable way to 
correctly finalize all connected processes. However, MPI 2.1 has 
clarified this point, and now MPI_Finalize is collective over all 
connected processes (for a definition of connected processes please 
see MPI 2.1, section 10.5, page 318).


However, if you really want to write a portable MPI application, I 
suggest using disconnect+finalize, at least until all MPI 
libraries available are 2.1 compliant.


Open MPI 1.3 version was supposed to be 2.1 compliant, so I guess I'll 
have to create a new bug report for this.


  Thanks,
george.
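
A minimal sketch of the shutdown order described above, assuming gcom is the 
inter-communicator obtained from MPI_Comm_accept or MPI_Comm_connect (as in 
the remoteConnect routine quoted elsewhere in this thread):

  /* Disconnect the joined world first: this completes any pending
     communication on gcom and detaches the two jobs from each other. */
  MPI_Comm_disconnect(&gcom);

  /* MPI_Finalize then only has to synchronize within each job. */
  MPI_Finalize();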

On Jan 23, 2009, at 10:02 , George Bosilca wrote:


I don't know what your program is doing but I kind of guess what the
problem is. If you use MPI 2 dynamics to spawn or connect two
MPI_COMM_WORLDs you have to disconnect them before calling
MPI_Finalize. The reason is that an MPI_Finalize does the opposite of an
MPI_Init, so it is MPI_COMM_WORLD based. Making sure your different
worlds are disconnected before doing the MPI_Finalize should solve the
problem.

  george.

On Jan 23, 2009, at 06:00 , Bernard Secher - SFME/LGLS wrote:

No, I didn't run this program with Open-MPI 1.2.X because I was told
there were many changes between the 1.2.X and 1.3 versions about
MPI_publish_name and MPI_Lookup_name (new ompi-server, ...), and it was
better to use the 1.3 version.

Yes, I am sure all processes reach the MPI_Finalize() function because I
write a message just before (it is the END_OF macro in my program),
and I am sure all processes are locked in the MPI_Finalize() function
because I write a message just after (it is the MESSAGE macro).

Maybe some MPI_Sends are not matched by corresponding MPI_Recvs...
It can be a possibility.

Thanks
Bernard


jody wrote:

Hi Bernard

The structure looks OK as far as I can see.
Did it run OK on Open-MPI 1.2.X?
So are you sure all processes reach the MPI_Finalize command?
Usually MPI_Finalize only completes when all processes reach it.
I think you should also make sure that all MPI_Sends are matched by
corresponding MPI_Recvs.

Jody

On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:

Thanks Jody for your answer.

I launch 2 instances of my program, with 2 processes each, on the same
machine.
I use MPI_Publish_name and MPI_Lookup_name to create a global
communicator on the 4 processes.
Then the 4 processes exchange data.

The main program is a CORBA server. I send you this program.

Bernard

jody wrote:

For instance:
- how many processes on how many machines,
- what kind of computation
- perhaps minimal code which reproduces this failing
- configuration settings, etc.
See: http://www.open-mpi.org/community/help/

Without any information except for "it doesn't work",
nobody can give you any help whatsoever.

Jody

On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:

Hello Jeff,

I don't understand what you mean by "A _detailed_ description of
what is failing".
The problem is a deadlock in the MPI_Finalize() function. All
processes are blocked in this MPI_Finalize() function.

Bernard

Jeff Squyres wrote:

Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that you
provide, the better. E-mails saying "My application doesn't work!" will
inevitably be answered with requests for more information about exactly what
doesn't work; so please include as much information detailed in your initial
e-mail as possible."

Additionally:

"The best way to get help is to provide a "recipie" for reproducing the
problem."

Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:

Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:

Can you send all the information listed here:

http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:

Hello,

I have a case where I have a deadlock in the MPI_Finalize() function
with openMPI v1.3.

Can somebody help me please?

Bernard

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS
No, I didn't run this program with Open-MPI 1.2.X because I was told 
there were many changes between the 1.2.X and 1.3 versions about 
MPI_publish_name and MPI_Lookup_name (new ompi-server, ...), and it was 
better to use the 1.3 version.


Yes, I am sure all processes reach the MPI_Finalize() function because I 
write a message just before (it is the END_OF macro in my program), and I 
am sure all processes are locked in the MPI_Finalize() function because I 
write a message just after (it is the MESSAGE macro).


Maybe some MPI_Sends are not matched by corresponding MPI_Recvs... It 
can be a possibility.


Thanks
Bernard



jody wrote:

Hi Bernard

The structure looks OK as far as I can see.
Did it run OK on Open-MPI 1.2.X?
So are you sure all processes reach the MPI_Finalize command?
Usually MPI_Finalize only completes when all processes reach it.
I think you should also make sure that all MPI_Sends are matched by
corresponding MPI_Recvs.

Jody

On Fri, Jan 23, 2009 at 11:08 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
  

Thanks Jody for your answer.

I launch 2 instances of my program, with 2 processes each, on the same
machine.
I use MPI_Publish_name, MPI_Lookup_name  to create a global communicator on
the 4 processes.
Then the 4 processes exchange data.

The main program is a CORBA server. I send you this program.

Bernard

jody wrote:

For instance:
- how many processes on how many machines,
- what kind of computation
- perhaps minimal code which reproduces this failing
- configuration settings, etc.
See: http://www.open-mpi.org/community/help/

Without any information except for "it doesn't work",
nobody can give you any help whatsoever.

Jody

On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:


Hello Jeff,

I don't understand what you mean by "A _detailed_ description of what is
failing".
The problem is a deadlock in the MPI_Finalize() function. All processes are
blocked in this MPI_Finalize() function.

Bernard

Jeff Squyres wrote:


Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that you
provide, the better. E-mails saying "My application doesn't work!" will
inevitably be answered with requests for more information about exactly what
doesn't work; so please include as much information detailed in your initial
e-mail as possible."

Additionally:

"The best way to get help is to provide a "recipie" for reproducing the
problem."

Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:



Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:


Can you send all the information listed here:

  http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:



Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.

Can somebody help me please?

Bernard




Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS

Thanks Jody for your answer.

I launch 2 instances of my program, with 2 processes each, on the 
same machine.
I use MPI_Publish_name and MPI_Lookup_name to create a global communicator 
on the 4 processes.

Then the 4 processes exchange data.

The main program is a CORBA server. I send you this program.

Bernard
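
A minimal sketch of how such a global communicator can be built from the 
inter-communicator returned by a routine like remoteConnect; the use of 
MPI_Intercomm_merge below is an assumption for illustration, not something 
stated in this thread (myrank and service are set by the caller as usual):

  MPI_Comm inter, global;
  int srv = 0;
  char port_name[MPI_MAX_PORT_NAME];

  inter = remoteConnect(myrank, &srv, port_name, service);

  /* Merge the two jobs' process groups into one intra-communicator.
     The publishing side uses high=0, the connecting side high=1,
     so the 4 processes get consistent ranks in 'global'. */
  MPI_Intercomm_merge(inter, srv ? 0 : 1, &global);

  /* ... the 4 processes can now exchange data over 'global' ... */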

jody wrote:

For instance:
- how many processes on how many machines,
- what kind of computation
- perhaps minimal code which reproduces this failing
- configuration settings, etc.
See: http://www.open-mpi.org/community/help/

Without any information except for "it doesn't work",
nobody can give you any help whatsoever.

Jody

On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
  

Hello Jeff,

I don't understand what you mean by "A _detailed_ description of what is
failing".
The problem is a deadlock in the MPI_Finalize() function. All processes are
blocked in this MPI_Finalize() function.

Bernard

Jeff Squyres wrote:


Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that you
provide, the better. E-mails saying "My application doesn't work!" will
inevitably be answered with requests for more information about exactly what
doesn't work; so please include as much information detailed in your initial
e-mail as possible."

Additionally:

"The best way to get help is to provide a "recipie" for reproducing the
problem."

Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:

  

Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:


Can you send all the information listed here:

  http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:

  

Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.

Can somebody help me please?

Bernard




Re: [OMPI users] dead lock in MPI_Finalize

2009-01-23 Thread Bernard Secher - SFME/LGLS

Hello Jeff,

I don't understand what you mean by "A _detailed_ description of what is 
failing".
The problem is a deadlock in the MPI_Finalize() function. All processes are 
blocked in this MPI_Finalize() function.


Bernard

Jeff Squyres wrote:

Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that 
you provide, the better. E-mails saying "My application doesn't work!" 
will inevitably be answered with requests for more information about 
exactly what doesn't work; so please include as much information 
detailed in your initial e-mail as possible."


Additionally:

"The best way to get help is to provide a "recipie" for reproducing 
the problem."


Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:


Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:


Can you send all the information listed here:

   http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:


Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.

Can somebody help me please?

Bernard










Re: [OMPI users] dead lock in MPI_Finalize

2009-01-22 Thread Bernard Secher - SFME/LGLS

Hello Tim,

I send you the information in the attached files.

Bernard

Tim Mattox wrote:

Can you send all the information listed here:

   http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
  

Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with
openMPI v1.3.

Can somebody help me please?

Bernard






config.log.tgz
Description: application/compressed-tar


ifconfig.log.tgz
Description: application/compressed-tar


ompi_info.log.tgz
Description: application/compressed-tar


[OMPI users] dead lock in MPI_Finalize

2009-01-21 Thread Bernard Secher - SFME/LGLS

Hello,

I have a case where I have a deadlock in the MPI_Finalize() function with 
openMPI v1.3.


Can somebody help me please?

Bernard





[OMPI users] ORTE_ERROR_LOG

2009-01-16 Thread Bernard Secher - SFME/LGLS

Hello,

I have the following error at the beginning of my mpi code:

[is124684:07869] [[38040,0],0] ORTE_ERROR_LOG: Data unpack would read 
past end of buffer in file orted/orted_comm.c at line 448


Can anybody help me to solve this problem?

Bernard


Re: [OMPI users] default hostfile with 1.3 version

2009-01-06 Thread Bernard Secher - SFME/LGLS

How can I do this in my etc/openmpi-mca-params.conf file?

Bernard
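
A minimal sketch of what such an entry can look like, assuming the relevant 
MCA parameter is orte_default_hostfile (the path below is a placeholder):

  # in <prefix>/etc/openmpi-mca-params.conf or ~/.openmpi/mca-params.conf
  orte_default_hostfile = /path/to/my_hostfile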

Ralph Castain wrote:
I'm afraid that the changes in how we handle hostfiles forced us to 
remove the default hostfile name. Beginning with 1.3, you will need to 
specify it.


Note that you can do this in your etc/openmpi-mca-params.conf file, if 
you want.


Ralph

On Jan 6, 2009, at 4:36 AM, Bernard Secher - SFME/LGLS wrote:


Hello,

I take 1.3 version from svn base.

The default hostfile in etc/openmpi-default-hostfile is not picked up. I 
must give mpirun the -hostfile option for this file to be used. Is there 
any change in the 1.3 version?


Regards
Bernard




[OMPI users] default hostfile with 1.3 version

2009-01-06 Thread Bernard Secher - SFME/LGLS

Hello,

I took the 1.3 version from the svn repository.

The default hostfile in etc/openmpi-default-hostfile is not picked up. I 
must give mpirun the -hostfile option for this file to be used. Is there any 
change in the 1.3 version?


Regards
Bernard





Re: [OMPI users] using of MPI_Publish_name with openmpi

2008-12-11 Thread Bernard Secher - SFME/LGLS

I have the same problem with the 1.2.9rc1 version.
I don't see any orte-clean utility in this version.
But the best is that I use the 1.3 version. Please give me more details 
about ompi-server in the 1.3 version.


Regards
Bernard

Bernard Secher - SFME/LGLS wrote:

I first used the 1.2.5 version, then the 1.2.8 version.

When will the 1.3 version be available?

Before that I will try the svn version. Please give me more details.

Best regards
Bernard

Aurélien Bouteiller wrote:

Bernard,

Could you give us more details on the version of Open MPI you are 
using? I suppose from your message this is one of the 1.2 series, 
but more details would be greatly helpful.


A utility called orte-clean can be used to try to remove all the 
garbage left, should something go wrong.


We have fixed a number of bugs in the MPI-2 dynamics recently. The 
forthcoming 1.3 should be more robust in that respect. You can have a 
preview of it by using the svn version of Open MPI. The concepts have 
changed a little since then, as --persistent --seed have been replaced 
by an external name server called ompi-server. I can give you more 
details if you want to try the svn version.


Regards,
Aurelien

--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321


On 10 Dec 08, at 10:28, Bernard Secher - SFME/LGLS wrote:


Hi everybody

I want to use the MPI_Publish_name function to do communication between 
two independent codes.


I saw on the web that I must use the orted daemon with the following 
command:


orted --persistent --seed --scope public --universe foo

The communication succeeds, but when I close the communication I 
have a deadlock at the following function: MPI_Comm_disconnect();


I have a second question:
How can I stop the orted daemon?

Bernard



Re: [OMPI users] using of MPI_Publish_name with openmpi

2008-12-11 Thread Bernard Secher - SFME/LGLS

I first used the 1.2.5 version, then the 1.2.8 version.

When will the 1.3 version be available?

Before that I will try the svn version. Please give me more details.

Best regards
Bernard

Aurélien Bouteiller wrote:

Bernard,

Could you give us more details on the version of Open MPI you are 
using? I suppose from your message this is one of the 1.2 series, but 
more details would be greatly helpful.


A utility called orte-clean can be used to try to remove all the 
garbage left, should something go wrong.


We have fixed a number of bugs in the MPI-2 dynamics recently. The 
forthcoming 1.3 should be more robust in that respect. You can have a 
preview of it by using the svn version of Open MPI. The concepts have 
changed a little since then, as --persistent --seed have been replaced 
by an external name server called ompi-server. I can give you more 
details if you want to try the svn version.


Regards,
Aurelien

--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321


On 10 Dec 08, at 10:28, Bernard Secher - SFME/LGLS wrote:


Hi everybody

I want to use the MPI_Publish_name function to do communication between 
two independent codes.


I saw on the web that I must use the orted daemon with the following command:


orted --persistent --seed --scope public --universe foo

The communication succeeds, but when I close the communication I have 
a deadlock at the following function: MPI_Comm_disconnect();



I have a second question:
How can I stop the orted daemon?

Bernard



[OMPI users] using of MPI_Publish_name with openmpi

2008-12-10 Thread Bernard Secher - SFME/LGLS

Hi everybody

I want to use the MPI_Publish_name function to do communication between two 
independent codes.


I saw on the web that I must use the orted daemon with the following command:


orted --persistent --seed --scope public --universe foo

The communication succeeds, but when I close the communication I have a 
deadlock at the following function: MPI_Comm_disconnect();



I have a second question:
How can I stop the orted daemon?

Bernard