Thanks, Jody, for your answer.

I launch two instances of my program, with two processes per instance, on the same machine. I use MPI_Publish_name and MPI_Lookup_name to create a global communicator over the four processes.
Then the four processes exchange data.
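Roughly, the connection logic looks like this (a minimal sketch, not the actual program; the service name "my_service" is illustrative, and error checking is omitted):

```cpp
#include <mpi.h>

// One instance acts as server (publishes a port), the other as client
// (looks the port up).  Merging the resulting intercommunicator yields
// a single intracommunicator spanning all four processes.
MPI_Comm join_instances(int is_server)
{
  MPI_Comm inter, global;
  char port[MPI_MAX_PORT_NAME];

  if (is_server) {
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my_service", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
  } else {
    MPI_Lookup_name("my_service", MPI_INFO_NULL, port);
    MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
  }

  // Merge the intercommunicator into one intracommunicator; "high"
  // orders the client ranks after the server ranks.
  MPI_Intercomm_merge(inter, /*high=*/!is_server, &global);
  return global;
}
```

Running this requires launching both instances under the same Open MPI universe (e.g. with ompi-server) so that the published name is visible to the other instance.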

The main program is a CORBA server; I am sending it to you below.

Bernard

jody wrote:
For instance:
- how many processes on how many machines,
- what kind of computation
- perhaps minimal code which reproduces this failing
- configuration settings, etc.
See: http://www.open-mpi.org/community/help/

Without any information except for "it doesn't work",
nobody can give you any help whatsoever.

Jody

On Fri, Jan 23, 2009 at 9:33 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:
Hello Jeff,

I don't understand what you mean by "A _detailed_ description of what is
failing".
The problem is a deadlock in the MPI_Finalize() function. All processes are
blocked in MPI_Finalize().
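A note on the semantics involved: MPI_Finalize is collective over all *connected* processes, so processes joined via MPI_Comm_connect/MPI_Comm_accept must sever that connection before finalizing, or MPI_Finalize can block waiting on the other instance. A sketch of a teardown order that avoids this (the names `global`, `inter`, `service`, and `port` are illustrative, assuming they were obtained as in the usual publish/lookup/connect/accept/merge flow):

```cpp
#include <mpi.h>

// Teardown sketch: disconnect the dynamically created communicators
// before MPI_Finalize(), so finalize is only collective over each
// instance's own MPI_COMM_WORLD.
void shutdown_mpi(MPI_Comm *global, MPI_Comm *inter,
                  const char *service, char *port, int is_server)
{
  MPI_Comm_free(global);        // release the merged intracommunicator
  MPI_Comm_disconnect(inter);   // collective; waits for pending traffic

  if (is_server) {
    MPI_Unpublish_name(service, MPI_INFO_NULL, port);
    MPI_Close_port(port);
  }

  MPI_Finalize();               // no inter-instance connection remains
}
```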

Bernard

Jeff Squyres wrote:
Per this note on the "getting help" page, we still need the following:

"A _detailed_ description of what is failing. The more details that you
provide, the better. E-mails saying "My application doesn't work!" will
inevitably be answered with requests for more information about exactly what
doesn't work; so please include as much detailed information as possible in
your initial e-mail."

Additionally:

"The best way to get help is to provide a "recipe" for reproducing the
problem."

Thanks!


On Jan 22, 2009, at 8:53 AM, Bernard Secher - SFME/LGLS wrote:

Hello Tim,

I am sending you the information in the attached files.

Bernard

Tim Mattox wrote:
Can you send all the information listed here:

  http://www.open-mpi.org/community/help/

On Wed, Jan 21, 2009 at 8:58 AM, Bernard Secher - SFME/LGLS
<bernard.sec...@cea.fr> wrote:

Hello,

I have a case where I get a deadlock in the MPI_Finalize() function with
Open MPI v1.3.

Can somebody help me, please?

Bernard



_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
      _\\|//_
     (' 0 0 ')
____ooO  (_) Ooo______________________________________________________
 Bernard Sécher  DEN/DM2S/SFME/LGLS    mailto : bsec...@cea.fr
 CEA Saclay, Bât 454, Pièce 114        Phone  : 33 (0)1 69 08 73 78
 91191 Gif-sur-Yvette Cedex, France    Fax    : 33 (0)1 69 08 10 87
------------Oooo---------------------------------------------------
      oooO (   )
      (   ) ) /
       \ ( (_/
        \_)



This e-mail and any files transmitted with it are confidential and
intended solely for the use of the individual to whom they are addressed.
If you have received this e-mail in error please inform the sender
immediately, without keeping any copy thereof.


<config.log.tgz><ifconfig.log.tgz><ompi_info.log.tgz>





//  Copyright (C) 2007-2008  CEA/DEN, EDF R&D, OPEN CASCADE
//
//  Copyright (C) 2003-2007  OPEN CASCADE, EADS/CCR, LIP6, CEA/DEN,
//  CEDRAT, EDF R&D, LEG, PRINCIPIA R&D, BUREAU VERITAS
//
//  This library is free software; you can redistribute it and/or
//  modify it under the terms of the GNU Lesser General Public
//  License as published by the Free Software Foundation; either
//  version 2.1 of the License.
//
//  This library is distributed in the hope that it will be useful,
//  but WITHOUT ANY WARRANTY; without even the implied warranty of
//  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
//  Lesser General Public License for more details.
//
//  You should have received a copy of the GNU Lesser General Public
//  License along with this library; if not, write to the Free Software
//  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
//
//  See http://www.salome-platform.org/ or email : webmaster.sal...@opencascade.com
//
#include <mpi.h>
#include <iostream>
#include "MPIContainer_i.hxx"
#include "Utils_ORB_INIT.hxx"
#include "Utils_SINGLETON.hxx"
#include "utilities.h"
using namespace std;

int main(int argc, char* argv[])
{
  int nbproc, numproc;
  Engines_MPIContainer_i * myContainer=NULL;

  MPI_Init(&argc,&argv);
  MPI_Comm_size(MPI_COMM_WORLD,&nbproc);
  MPI_Comm_rank(MPI_COMM_WORLD,&numproc);

  // Initialise the ORB.
  ORB_INIT &init = *SINGLETON_<ORB_INIT>::Instance() ;
  CORBA::ORB_var &orb = init( argc , argv ) ;
  //  SALOMETraceCollector *myThreadTrace = SALOMETraceCollector::instance(orb);

  BEGIN_OF("[" << numproc << "] " << argv[0])
  try {

    // Obtain a reference to the root POA.
    CORBA::Object_var obj = orb->resolve_initial_references("RootPOA");
    PortableServer::POA_var root_poa = PortableServer::POA::_narrow(obj);

    // obtain the root poa manager
    PortableServer::POAManager_var pman = root_poa->the_POAManager();

    // define policy objects     
    PortableServer::ImplicitActivationPolicy_var implicitActivation =
      root_poa->create_implicit_activation_policy(PortableServer::NO_IMPLICIT_ACTIVATION) ;

      // default = NO_IMPLICIT_ACTIVATION
    PortableServer::ThreadPolicy_var threadPolicy =
      root_poa->create_thread_policy(PortableServer::ORB_CTRL_MODEL) ;
      // default = ORB_CTRL_MODEL, other choice SINGLE_THREAD_MODEL

    // create policy list
    CORBA::PolicyList policyList;
    policyList.length(2);
    policyList[0] = PortableServer::ImplicitActivationPolicy::_duplicate(implicitActivation) ;
    policyList[1] = PortableServer::ThreadPolicy::_duplicate(threadPolicy) ;

    // create the child POA
    PortableServer::POAManager_var nil_mgr = PortableServer::POAManager::_nil() ;
    PortableServer::POA_var factory_poa =
      root_poa->create_POA("factory_poa", pman, policyList) ;
      //with nil_mgr instead of pman, a new POA manager is created with the new POA

    // destroy policy objects
    implicitActivation->destroy() ;
    threadPolicy->destroy() ;

    char *containerName = "";
    if (argc > 1)
    {
      containerName = argv[1];
    }

    MESSAGE("[" << numproc << "] MPIContainer: load MPIContainer servant");
    myContainer = new Engines_MPIContainer_i(nbproc,numproc,orb,factory_poa, containerName,argc,argv);

    pman->activate();

    orb->run();

  }
  catch(CORBA::SystemException&){
    INFOS("Caught CORBA::SystemException.");
  }
  catch(PortableServer::POA::WrongPolicy&){
    INFOS("Caught CORBA::WrongPolicyException.");
  }
  catch(PortableServer::POA::ServantAlreadyActive&){
    INFOS("Caught CORBA::ServantAlreadyActiveException");
  }
  catch(CORBA::Exception&){
    INFOS("Caught CORBA::Exception.");
  }
  catch(...){
    INFOS("Caught unknown exception.");
  }

  if(myContainer)
    delete myContainer;

  END_OF("[" << numproc << "] " << argv[0]);
  //  delete myThreadTrace;

  MPI_Finalize();

  MESSAGE("[" << numproc << "] MPIContainer: after MPI_Finalize");

  return 0;

}
