[OMPI users] Does Open MPI support manual launcher?

2016-06-02 Thread Du, Fan
Hi folks, Starting from Open MPI, I can launch an MPI application a.out as follows on host1: mpirun --allow-run-as-root --host host1,host2 -np 4 /tmp/a.out. On host2, I saw that a proxy, say orted, is spawned: orted --hnp-topo-sig 4N:2S:4L3:20L2:20L1:20C:40H:x86_64 -mca ess env -mca orte_ess_jobi

Re: [OMPI users] Does Open MPI support manual launcher?

2016-06-02 Thread Gilles Gouaillardet
Hi, may I ask why you need/want to launch orted manually? Unless you are running under a batch manager, Open MPI uses the rsh plm to remotely start orted; basically, it does ssh host orted. The best I can suggest is that you do mpirun --mca orte_rsh_agent myrshagent.sh --mca orte_launch_agent mylau
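Gilles' suggestion can be sketched as follows; myrshagent.sh is his placeholder name, and the logging inside the wrapper is an illustrative assumption, not part of his mail:

```shell
# Hypothetical wrapper: record what mpirun would have run over ssh,
# then execute it, so you can observe how orted is launched remotely.
cat > myrshagent.sh <<'EOF'
#!/bin/sh
echo "launch agent invoked with: $@" >> /tmp/rsh-agent.log
exec ssh "$@"
EOF
chmod +x myrshagent.sh

# Tell mpirun to use the wrapper instead of plain ssh
mpirun --mca orte_rsh_agent ./myrshagent.sh --host host1,host2 -np 4 /tmp/a.out
```

Inspecting /tmp/rsh-agent.log afterwards shows the exact orted command line mpirun would otherwise hand to ssh.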

Re: [OMPI users] Regression in MPI_File_close?!

2016-06-02 Thread Edgar Gabriel
Gilles, I think the semantics of MPI_File_close do not necessarily mandate that there has to be an MPI_Barrier, based on that text snippet. However, I think what the Barrier does in this scenario is 'hide' a consequence of an implementation aspect. So the MPI standard might not mandate a Bar

Re: [OMPI users] Firewall settings for MPI communication

2016-06-02 Thread Ping Wang
Hi, I've installed Open MPI v1.10.2. Every VM on the cloud has two IPs (an internal IP and a public IP). When I run mpirun --host hostname, the output is the hostname of the VM. But when I run mpirun --host hostname, the output is: bash: orted: command not found

Re: [OMPI users] Firewall settings for MPI communication

2016-06-02 Thread Ralph Castain
Possibly - did you configure --enable-orterun-prefix-by-default as the error message suggests? > On Jun 2, 2016, at 7:44 AM, Ping Wang wrote: > > Hi, > > I've installed Open MPI v1.10.2. Every VM on the cloud has two IPs (internal > IP, public IP). > When I run: mpirun --host hostname, the o

Re: [OMPI users] Firewall settings for MPI communication

2016-06-02 Thread Gilles Gouaillardet
Are you saying both IPs are the ones of the VM on which mpirun is running? orted is only launched on all the machines *except* the one running mpirun. Can you double/triple check that the IPs are OK and unique? For example: mpirun --host /sbin/ifconfig -a. Can you also make sure Open MPI is installed
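A minimal sketch of the checks Gilles asks for, with vm1 and vm2 as placeholder host names (the actual hosts were elided from the archived message):

```shell
# Report every interface/IP as seen from each remote node
mpirun --host vm1,vm2 /sbin/ifconfig -a

# Confirm which machine each rank actually lands on
mpirun --host vm1,vm2 hostname

# Independently check that name resolution is consistent and unique
getent hosts vm1 vm2
```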

Re: [OMPI users] Firewall settings for MPI communication

2016-06-02 Thread Ping Wang
Hi, thank you Gilles for your suggestion. I tried mpirun --prefix --host hostname, and then it works. I'm sure both IPs are the ones of the VM on which mpirun is running, and they are unique. I also configured Open MPI with --enable-mpirun-prefix-by-default, but I still need to add --pre
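The workaround Ping describes can be sketched like this; /opt/openmpi-1.10.2 is an assumed install prefix and vm1/vm2 are placeholder hosts:

```shell
# Explicitly tell the remote side where Open MPI lives, so the ssh-spawned
# shell can find orted
mpirun --prefix /opt/openmpi-1.10.2 --host vm1,vm2 hostname

# Alternative with the same effect: put the install dirs on the remote
# non-interactive shell's search paths (e.g. in ~/.bashrc on every VM)
export PATH=/opt/openmpi-1.10.2/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi-1.10.2/lib:$LD_LIBRARY_PATH
```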

Re: [OMPI users] Firewall settings for MPI communication

2016-06-02 Thread Gilles Gouaillardet
The syntax is configure --enable-mpirun-prefix-by-default --prefix= ... All hosts must be able to ssh to each other passwordless. That means you need to generate a user ssh key pair on all hosts, add your public keys to the list of authorized keys, and ssh to all hosts in order to populate your known
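The key setup Gilles outlines might look like this, assuming three hosts named vm1..vm3 (the names are illustrative):

```shell
# Generate a user key pair without a passphrase (run on each host)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append your public key to authorized_keys on every host
for h in vm1 vm2 vm3; do
    ssh-copy-id "$h"
done

# Connect once to each host so its key lands in ~/.ssh/known_hosts
for h in vm1 vm2 vm3; do
    ssh "$h" true
done
```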

[OMPI users] PSM vs PSM2

2016-06-02 Thread dpchoudh .
Hello all, What is the difference between PSM and PSM2? Any pointer to more information is appreciated. Also, the PSM2 MTL does not seem to have an owner.txt file (on master, at least). Why is that? Thanks, Durga We learn from history that we never learn from history.

[OMPI users] Docker Cluster Queue Manager

2016-06-02 Thread Rob Nagler
We would like to use MPI on Docker with arbitrarily configured clusters (e.g. created with StarCluster or on bare metal). What I'm curious about is whether there is a queue manager that understands Docker, file systems, MPI, and OpenAuth. JupyterHub does a lot of this, but it doesn't interface with MPI. Id

Re: [OMPI users] PSM vs PSM2

2016-06-02 Thread Cabral, Matias A
Hi Durga, Here is a short summary: PSM is intended for the Intel TrueScale InfiniBand product series; it is also known as PSM gen 1 and uses libpsm_infinipath.so. PSM2 is intended for Intel's next-generation fabric, called OmniPath; PSM gen 2 uses libpsm2.so. I didn't know about the missing owner.txt.
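To see which of the two layers a given build selects, one can force the MTL explicitly; the component names below (pml cm, mtl psm/psm2) exist in Open MPI, but which one actually works depends on the fabric present on the machine:

```shell
# TrueScale (PSM gen 1, libpsm_infinipath.so)
mpirun --mca pml cm --mca mtl psm -np 4 ./a.out

# OmniPath (PSM gen 2, libpsm2.so)
mpirun --mca pml cm --mca mtl psm2 -np 4 ./a.out

# List the PSM-related components compiled into this installation
ompi_info | grep -i psm
```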

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-02 Thread Ralph Castain
I’m afraid I’m not familiar with JupyterHub at all, or Salt. All you really need is: * a scheduler that understands the need to start all the procs at the same time - i.e., as a block * wireup support for the MPI procs themselves If JupyterHub can do the first, then you could just have it laun

Re: [OMPI users] Docker Cluster Queue Manager

2016-06-02 Thread Rob Nagler
Thanks, Ralph. I'm not sure I explained the problem clearly. Salt and JupyterHub are distractions, sorry. I have code which "wires up" a cluster for MPI. What I need is a scheduler that allows users to: * Select which Docker image they'd like to wire up * Request a number of nodes/cores * Understan