How are you running the job without mpirun? Is this under SLURM or some other resource manager?
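
By "without mpirun" I'm assuming a singleton launch, i.e., the MPI executable started directly from the shell. Just so we're looking at the same thing, here is a minimal sketch of that case (file name and output are only illustrative); compile it with mpicc and run the binary by hand, with no mpirun in front of it:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* In a singleton launch this MPI_Init call is where the
           "Error obtaining unique transport key from ORTE" message
           appears on the affected cluster. */
        MPI_Init(&argc, &argv);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("singleton rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

If that standalone case is what fails, it would also help to know whether the binary is started inside a resource manager allocation or from a plain login shell.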


On Oct 31, 2011, at 9:46 AM, Weston, Stephen wrote:

> Hello,
> 
> I'm seeing an error on one of our clusters when executing the
> MPI_Init function in a program that is _not_ invoked using the
> mpirun command.  The error is:
> 
>    Error obtaining unique transport key from ORTE
>    (orte_precondition_transports not present in the environment).
> 
> followed by "It looks like MPI_INIT failed for some reason; your
> parallel process is likely to abort.", etc.  Since mpirun sets
> this environment variable, it's not surprising that it isn't
> set, but in our other Open MPI installations it doesn't seem
> necessary for this environment variable to be set.
> 
> I can work around the problem by setting the
> "OMPI_MCA_orte_precondition_transports" environment variable
> before running the program using the command:
> 
>  % eval "export `mpirun env | grep OMPI_MCA_orte_precondition_transports`"
> 
> But I'm very curious what is causing this error, since it only
> happens on one of our clusters.  Could this indicate a problem
> with the way we configured Open MPI when we installed it?
> 
> Any pointers on how to further investigate this issue would be
> appreciated.
> 
> - Steve Weston
> 
> P.S.  I'm using Open MPI 1.4.3 on a Linux cluster running CentOS
> release 5.5.  The error occurs in any MPI program that I execute
> without mpirun.
> 

