Even though you think you do, you actually don't have the path set correctly on 
the Mac side. Remember: when a command is executed over ssh, the .cshrc (or 
whatever startup file your shell uses) is run non-interactively and can behave 
differently. So even though an interactive login gets the right path, the 
non-interactive execution doesn't.

See http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path for 
details.
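
As a quick sanity check (the hostname below is just a placeholder), you can see 
which orted and which PATH a non-interactive ssh session on the Mac will pick up:

    ssh your-mac-host which orted
    ssh your-mac-host 'echo $PATH'

And a minimal sketch of the fix, assuming the Mac account uses csh/tcsh and that 
Open MPI 1.6 is installed under /usr/local/openmpi as you describe below; these 
lines need to go near the top of ~/.cshrc, before any test that bails out for 
non-interactive shells:

    # prepend the Open MPI 1.6 install (assumed location) to the search paths
    setenv PATH /usr/local/openmpi/bin:${PATH}
    if ($?LD_LIBRARY_PATH) then
        setenv LD_LIBRARY_PATH /usr/local/openmpi/lib:${LD_LIBRARY_PATH}
    else
        setenv LD_LIBRARY_PATH /usr/local/openmpi/lib
    endif

(If you use bash instead, the equivalent export lines go in ~/.bashrc for the 
same reason.)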


On Jul 19, 2012, at 9:13 PM, christophe petit wrote:

> Hello,
> 
> I am trying to launch an executable on 2 computers (Debian 6.0 and Mac OS X 
> Snow Leopard).
> 
> I have successfully installed Open MPI 1.6 on both with the 
> "--enable-heterogeneous" option and set up a passwordless ssh connection 
> between the 2 computers.
> 
> My issue is that the distributed computing works fine, but only when I launch 
> the "mpirun -np 16 -hostfile hosts.txt program_exec" command from the MacOS 
> computer, not from the Debian PC.
> 
> However, I have disabled the firewall on MacOS with: 
> 
> sysctl -w net.inet.ip.fw.enable=0
> 
> When I launch from the Debian PC, I get:
> 
> ~/mpirun -np 16 -hostfile hosts.txt program_exec 
> 
> [maco:01498] Error: unknown option "--daemonize"
> Usage: orted [OPTION]...
> Start an Open RTE Daemon
> 
>    --bootproxy <arg0>    Run as boot proxy for <job-id>
> -d|--debug               Debug the OpenRTE
> -d|--spin                Have the orted spin until we can connect a debugger
>                          to it
>    --debug-daemons       Enable debugging of OpenRTE daemons
>    --debug-daemons-file  Enable debugging of OpenRTE daemons, storing output
>                          in files
>    --gprreplica <arg0>   Registry contact information.
> -h|--help                This help message
>    --mpi-call-yield <arg0>  
>                          Have MPI (or similar) applications call yield when
>                          idle
>    --name <arg0>         Set the orte process name
>    --no-daemonize        Don't daemonize into the background
>    --nodename <arg0>     Node name as specified by host/resource
>                          description.
>    --ns-nds <arg0>       set sds/nds component to use for daemon (normally
>                          not needed)
>    --nsreplica <arg0>    Name service contact information.
>    --num_procs <arg0>    Set the number of process in this job
>    --persistent          Remain alive after the application process
>                          completes
>    --report-uri <arg0>   Report this process' uri on indicated pipe
>    --scope <arg0>        Set restrictions on who can connect to this
>                          universe
>    --seed                Host replicas for the core universe services
>    --set-sid             Direct the orted to separate from the current
>                          session
>    --tmpdir <arg0>       Set the root for the session directory tree
>    --universe <arg0>     Set the universe name as
>                          username@hostname:universe_name for this
>                          application
>    --vpid_start <arg0>   Set the starting vpid for this job
> --------------------------------------------------------------------------
> A daemon (pid 24370) died unexpectedly with status 251 while attempting
> to launch so we are aborting.
> 
> There may be more information reported by the environment (see above).
> 
> This may be because the daemon was unable to find all the needed shared
> libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
> location of the shared libraries on the remote nodes and this will
> automatically be forwarded to the remote nodes.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun noticed that the job aborted, but has no info as to the process
> that caused that situation.
> --------------------------------------------------------------------------
> 
> LD_LIBRARY_PATH and PATH are correctly set on both machines (Open MPI is 
> installed in "/usr/local/openmpi/").
> 
> Apparently, the problem comes from the Debian PC ...
> 
> Does anyone see what's wrong?
> 
> Thanks.
> 
