$ grep -n stdin_target orte/runtime/orte_globals.c
635:    job->stdin_target = 0;


Recompiled with --enable-debug.


Same test case: 2 nodes, 10 cores per node. Rank 0 and the mpirun command are on 
the same node.

$ mpirun -display-devel-map --mca iof_base_verbose 100 ./a.out < test.in &> debug_info2.txt


The new debug_info2.txt file is attached.


Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Thursday, August 25, 2016 8:59:23 AM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

??? Weird - can you send me an updated output of that last test we ran?

On Aug 25, 2016, at 7:51 AM, Jingchao Zhang <zh...@unl.edu> wrote:

Hi Ralph,

I saw the pull request and did a test with v2.0.1rc1, but the problem persists. 
Any ideas?

Thanks,

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Wednesday, August 24, 2016 1:27:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Bingo - found it, fix submitted and hope to get it into 2.0.1

Thanks for the assist!
Ralph


On Aug 24, 2016, at 12:15 PM, Jingchao Zhang <zh...@unl.edu> wrote:

I configured v2.0.1rc1 with --enable-debug and ran the test with --mca 
iof_base_verbose 100. I also added -display-devel-map in case it provides some 
useful information.

The test job has 2 nodes, 10 cores per node. Rank 0 and the mpirun command are on 
the same node.
$ mpirun -display-devel-map --mca iof_base_verbose 100 ./a.out < test.in &> debug_info.txt

The debug_info.txt is attached.

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Wednesday, August 24, 2016 12:14:26 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Afraid I can’t replicate a problem at all, whether rank=0 is local or not. I’m 
also using bash, but on CentOS-7, so I suspect the OS is the difference.

Can you configure OMPI with --enable-debug, and then run the test again with 
--mca iof_base_verbose 100? It will hopefully tell us something about why the 
IO subsystem is stuck.
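
A stack trace of the hung procs would also help - e.g., attach with "gdb -p <pid>" 
to rank 0 and to one of the stuck ranks, and run "bt" in each.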


On Aug 24, 2016, at 8:46 AM, Jingchao Zhang <zh...@unl.edu> wrote:

Hi Ralph,

For our tests, rank 0 is always on the same node as mpirun. I just tested 
mpirun with -nolocal and it still hangs.

Information on the shell and OS:

$ echo $0
-bash

$ lsb_release -a
LSB Version:    
:base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: Scientific
Description:    Scientific Linux release 6.8 (Carbon)
Release:        6.8
Codename:       Carbon


$ uname -a
Linux login.crane.hcc.unl.edu 2.6.32-642.3.1.el6.x86_64 #1 SMP Tue Jul 12 11:25:51 CDT 2016 x86_64 x86_64 x86_64 GNU/Linux


Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Tuesday, August 23, 2016 8:14:48 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Hmmm...that’s a good point. Rank 0 and mpirun are always on the same node on my 
cluster. I’ll give it a try.

Jingchao: is rank 0 on the node with mpirun, or on a remote node?


On Aug 23, 2016, at 5:58 PM, Gilles Gouaillardet <gil...@rist.or.jp> wrote:

Ralph,

Did you run task 0 and mpirun on different nodes?

I observed some random hangs, though I cannot blame Open MPI 100% yet.

Cheers,

Gilles

On 8/24/2016 9:41 AM, r...@open-mpi.org wrote:
Very strange. I cannot reproduce it, as I’m able to run any number of nodes and 
procs, pushing over 100MBytes through without any problem.

Which leads me to suspect that the issue here is with the tty interface. Can 
you tell me what shell and OS you are running?


On Aug 23, 2016, at 3:25 PM, Jingchao Zhang <zh...@unl.edu> wrote:

Everything is stuck right after MPI_Init. For a test job with 2 nodes and 10 cores 
per node, I got the following:

$ mpirun ./a.out < test.in
Rank 2 has cleared MPI_Init
Rank 4 has cleared MPI_Init
Rank 7 has cleared MPI_Init
Rank 8 has cleared MPI_Init
Rank 0 has cleared MPI_Init
Rank 5 has cleared MPI_Init
Rank 6 has cleared MPI_Init
Rank 9 has cleared MPI_Init
Rank 1 has cleared MPI_Init
Rank 16 has cleared MPI_Init
Rank 19 has cleared MPI_Init
Rank 10 has cleared MPI_Init
Rank 11 has cleared MPI_Init
Rank 12 has cleared MPI_Init
Rank 13 has cleared MPI_Init
Rank 14 has cleared MPI_Init
Rank 15 has cleared MPI_Init
Rank 17 has cleared MPI_Init
Rank 18 has cleared MPI_Init
Rank 3 has cleared MPI_Init

then it just hung.

--Jingchao

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Tuesday, August 23, 2016 4:03:07 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

The IO forwarding messages all flow over the Ethernet, so the type of fabric is 
irrelevant. The number of procs involved would definitely have an impact, but 
that might not be due to the IO forwarding subsystem. We know we have flow 
control issues with collectives like Bcast that don’t have built-in 
synchronization points. How many reads were you able to do before it hung?

I was running it on my little test setup (2 nodes, using only a few procs), but 
I’ll try scaling up and see what happens. I’ll also try introducing some forced 
“syncs” on the Bcast and see if that solves the issue.
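
For illustration, such a sync could be as simple as a periodic barrier in the 
broadcast loops of the test program quoted below. Note that SYNC_INTERVAL is just 
an arbitrary value of mine, not an OMPI parameter, and the same barrier has to run 
on both the root and the receivers or the job will deadlock:

#define SYNC_INTERVAL 32   /* arbitrary: rendezvous every 32 blobs */

    /* after each MPI_Bcast, in both the sender and receiver loops: */
    ++pos;
    if (0 == (pos % SYNC_INTERVAL)) {
        /* throttle rank 0 until every receiver has drained its queue */
        MPI_Barrier(MPI_COMM_WORLD);
    }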

Ralph

On Aug 23, 2016, at 2:30 PM, Jingchao Zhang <zh...@unl.edu> wrote:

Hi Ralph,

I tested v2.0.1rc1 with your code but hit the same issue. I also installed 
v2.0.1rc1 on a different cluster, which has Mellanox QDR InfiniBand, and got the 
same result. For the tests you have done, how many cores and nodes did you use? 
I can trigger the problem by using multiple nodes with more than 10 cores per node.

Thank you for looking into this.

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Monday, August 22, 2016 10:23:42 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

FWIW: I just tested forwarding up to 100MBytes via stdin using the simple test 
shown below with OMPI v2.0.1rc1, and it worked fine. So I’d suggest upgrading 
when the official release comes out, or going ahead and at least testing 
2.0.1rc1 on your machine. Or you can test this program with some input file and 
let me know if it works for you.

Ralph

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <stdbool.h>
#include <unistd.h>
#include <mpi.h>

#define ORTE_IOF_BASE_MSG_MAX   2048

int main(int argc, char *argv[])
{
    int i, rank, size, next, prev, tag = 201;
    int pos, msgsize, nbytes;
    bool done;
    char *msg;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    fprintf(stderr, "Rank %d has cleared MPI_Init\n", rank);

    next = (rank + 1) % size;
    prev = (rank + size - 1) % size;
    msg = malloc(ORTE_IOF_BASE_MSG_MAX);
    pos = 0;
    nbytes = 0;

    if (0 == rank) {
        while (0 != (msgsize = read(0, msg, ORTE_IOF_BASE_MSG_MAX))) {
            fprintf(stderr, "Rank %d: sending blob %d\n", rank, pos);
            if (msgsize > 0) {
                MPI_Bcast(msg, ORTE_IOF_BASE_MSG_MAX, MPI_BYTE, 0, MPI_COMM_WORLD);
            }
            ++pos;
            nbytes += msgsize;
        }
        fprintf(stderr, "Rank %d: sending termination blob %d\n", rank, pos);
        memset(msg, 0, ORTE_IOF_BASE_MSG_MAX);
        MPI_Bcast(msg, ORTE_IOF_BASE_MSG_MAX, MPI_BYTE, 0, MPI_COMM_WORLD);
        MPI_Barrier(MPI_COMM_WORLD);
    } else {
        while (1) {
            MPI_Bcast(msg, ORTE_IOF_BASE_MSG_MAX, MPI_BYTE, 0, MPI_COMM_WORLD);
            fprintf(stderr, "Rank %d: recvd blob %d\n", rank, pos);
            ++pos;
            done = true;
            for (i=0; i < ORTE_IOF_BASE_MSG_MAX; i++) {
                if (0 != msg[i]) {
                    done = false;
                    break;
                }
            }
            if (done) {
                break;
            }
        }
        fprintf(stderr, "Rank %d: recv done\n", rank);
        MPI_Barrier(MPI_COMM_WORLD);
    }

    fprintf(stderr, "Rank %d has completed bcast\n", rank);
    MPI_Finalize();
    return 0;
}
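
Building and running it would look something like this (the source file name here 
is arbitrary):

$ mpicc stdin_test.c -o stdin_test
$ mpirun -np 20 ./stdin_test < test.in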



On Aug 22, 2016, at 3:40 PM, Jingchao Zhang <zh...@unl.edu> wrote:

This might be a thin argument, but we have had many users running mpirun this way 
for years with no problem until this recent upgrade, and some home-brewed MPI codes 
do not even have a standard way to read input files. Last time I checked, the 
Open MPI manual still claims stdin is supported 
(https://www.open-mpi.org/doc/v2.0/man1/mpirun.1.php#sect14). Maybe I missed it, 
but the v2.0 release notes did not mention any changes to the behavior of stdin 
either.

We can tell our users to run mpirun in the suggested way, but I do hope someone 
can look into the issue and fix it.

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Monday, August 22, 2016 3:04:50 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Well, I can try to find time to take a look. However, I will reiterate what 
Jeff H said - it is very unwise to rely on IO forwarding. Much better to just 
directly read the file unless that file is simply unavailable on the node where 
rank=0 is running.
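
As a minimal sketch of that pattern (file name taken from argv; buffer size and 
error handling simplified), rank 0 reads the file directly and broadcasts each 
line, so no stdin forwarding is involved:

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, len = 0;
    char line[1024];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        /* usage: mpirun ./a.out input.txt -- rank 0 opens the file itself */
        FILE *fp = (argc > 1) ? fopen(argv[1], "r") : NULL;
        while (fp && fgets(line, sizeof(line), fp)) {
            len = (int)strlen(line) + 1;
            MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
            MPI_Bcast(line, len, MPI_CHAR, 0, MPI_COMM_WORLD);
        }
        if (fp) fclose(fp);
        len = 0;   /* zero length tells the other ranks we are done */
        MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
    } else {
        while (1) {
            MPI_Bcast(&len, 1, MPI_INT, 0, MPI_COMM_WORLD);
            if (0 == len) {
                break;
            }
            MPI_Bcast(line, len, MPI_CHAR, 0, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}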

On Aug 22, 2016, at 1:55 PM, Jingchao Zhang <zh...@unl.edu> wrote:

Here you can find the source code for the LAMMPS input reader:
https://github.com/lammps/lammps/blob/r13864/src/input.cpp

Based on the gdb output, rank 0 is stuck at line 167,
if (fgets(&line[m],maxline-m,infile) == NULL)
and the rest of the ranks are stuck at line 203,
MPI_Bcast(&n,1,MPI_INT,0,world);

So rank 0 presumably hangs in the fgets() call.
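
Simplified, the code path in question boils down to this pattern (paraphrased from 
input.cpp, not the literal source):

if (me == 0) {
    /* line 167: rank 0 reads the next input line from stdin */
    if (fgets(&line[m], maxline-m, infile) == NULL) n = 0;
    else n = strlen(line) + 1;
}
/* line 203: every rank blocks here until rank 0 supplies the length */
MPI_Bcast(&n, 1, MPI_INT, 0, world);

If fgets() never returns on rank 0, every other rank sits in MPI_Bcast forever, 
which is exactly what the backtraces below show.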

Here is the full backtrace information:
$ cat master.backtrace worker.backtrace
#0  0x0000003c37cdb68d in read () from /lib64/libc.so.6
#1  0x0000003c37c71ca8 in _IO_new_file_underflow () from /lib64/libc.so.6
#2  0x0000003c37c737ae in _IO_default_uflow_internal () from /lib64/libc.so.6
#3  0x0000003c37c67e8a in _IO_getline_info_internal () from /lib64/libc.so.6
#4  0x0000003c37c66ce9 in fgets () from /lib64/libc.so.6
#5  0x00000000005c5a43 in LAMMPS_NS::Input::file() () at ../input.cpp:167
#6  0x00000000005d4236 in main () at ../main.cpp:31
#0  0x00002b1635d2ace2 in poll_dispatch () from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libopen-pal.so.20
#1  0x00002b1635d1fa71 in opal_libevent2022_event_base_loop ()
   from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libopen-pal.so.20
#2  0x00002b1635ce4634 in opal_progress () from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libopen-pal.so.20
#3  0x00002b16351b8fad in ompi_request_default_wait () from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20
#4  0x00002b16351fcb40 in ompi_coll_base_bcast_intra_generic ()
   from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20
#5  0x00002b16351fd0c2 in ompi_coll_base_bcast_intra_binomial ()
   from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20
#6  0x00002b1644fa6d9b in ompi_coll_tuned_bcast_intra_dec_fixed ()
   from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/openmpi/mca_coll_tuned.so
#7  0x00002b16351cb4fb in PMPI_Bcast () from /util/opt/openmpi/2.0.0/gcc/6.1.0/lib/libmpi.so.20
#8  0x00000000005c5b5d in LAMMPS_NS::Input::file() () at ../input.cpp:203
#9  0x00000000005d4236 in main () at ../main.cpp:31

Thanks,

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of r...@open-mpi.org 
<r...@open-mpi.org>
Sent: Monday, August 22, 2016 2:17:10 PM
To: Open MPI Users
Subject: Re: [OMPI users] stdin issue with openmpi/2.0.0

Hmmm...perhaps we can break this out a bit? The stdin will be going to your 
rank=0 proc. It sounds like you have some subsequent step that calls MPI_Bcast?

Can you first verify that the input is being correctly delivered to rank=0? This 
will help us isolate whether the problem is in the IO forwarding or in the 
subsequent Bcast.
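
If it helps, a stripped-down check could look something like this (my sketch, not 
an official test). Rank 0 only drains stdin and counts bytes, with no collectives 
at all:

#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank;
    char buf[2048];
    ssize_t n;
    long total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (0 == rank) {
        /* drain stdin and count bytes -- no Bcast involved */
        while ((n = read(0, buf, sizeof(buf))) > 0) {
            total += n;
        }
        fprintf(stderr, "rank 0 got %ld bytes on stdin\n", total);
    }
    MPI_Finalize();
    return 0;
}

If that reports the full size of the input file, the forwarding is fine and the 
hang is in the Bcast path; if it blocks in read(), the IOF itself is stuck.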

On Aug 22, 2016, at 1:11 PM, Jingchao Zhang <zh...@unl.edu> wrote:

Hi all,

We compiled openmpi/2.0.0 with gcc/6.1.0 and intel/13.1.3. Both builds behave 
oddly when trying to read from standard input.

For example, if we start the application LAMMPS across 4 nodes, 16 cores per node, 
connected by Intel QDR InfiniBand, mpirun works fine the first time but always 
gets stuck within a few seconds on later runs.
Command:
mpirun ./lmp_ompi_g++ < in.snr
in.snr is the LAMMPS input file; the compiler is gcc/6.1.

Instead, if we use
mpirun ./lmp_ompi_g++ -in in.snr
it works 100% of the time.

Some odd behaviors we have gathered so far:
1. For single-node jobs, stdin always works.
2. For multiple nodes, stdin is unreliable when the number of cores per node is 
relatively small. For example, with 2/3/4 nodes and 8 cores per node, mpirun works 
most of the time. But with more than 8 cores per node, mpirun works the first time 
and then always gets stuck. There seems to be a magic number at which it stops 
working.
3. We tested Quantum ESPRESSO compiled with intel/13 and had the same issue.

We used gdb to debug and found that when mpirun was stuck, the rest of the 
processes were all waiting on an MPI broadcast from the master rank. The LAMMPS 
binary, input file, and gdb core files (example.tar.bz2) can be downloaded from 
this link: https://drive.google.com/open?id=0B3Yj4QkZpI-dVWZtWmJ3ZXNVRGc

Extra information:
1. The job scheduler is Slurm.
2. Configure setup:
./configure     --prefix=$PREFIX \
                --with-hwloc=internal \
                --enable-mpirun-prefix-by-default \
                --with-slurm \
                --with-verbs \
                --with-psm \
                --disable-openib-connectx-xrc \
                --with-knem=/opt/knem-1.1.2.90mlnx1 \
                --with-cma
3. openmpi-mca-params.conf file
orte_hetero_nodes=1
hwloc_base_binding_policy=core
rmaps_base_mapping_policy=core
opal_cuda_support=0
btl_openib_use_eager_rdma=0
btl_openib_max_eager_rdma=0
btl_openib_flags=1

Thanks,
Jingchao

Dr. Jingchao Zhang
Holland Computing Center
University of Nebraska-Lincoln
402-472-6400





<debug_info.txt>


[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: registering 
framework iof components
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: found loaded 
component hnp
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: component hnp 
has no register or open function
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: found loaded 
component tool
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: component tool 
has no register or open function
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: found loaded 
component mr_orted
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: component 
mr_orted has no register or open function
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: found loaded 
component mr_hnp
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: component 
mr_hnp has no register or open function
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: found loaded 
component orted
[c1725.crane.hcc.unl.edu:222353] mca: base: components_register: component 
orted has no register or open function
[c1725.crane.hcc.unl.edu:222353] defining endpt: file base/iof_base_frame.c 
line 271 fd 1
[c1725.crane.hcc.unl.edu:222353] defining endpt: file base/iof_base_frame.c 
line 274 fd 2
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: opening iof 
components
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: found loaded 
component hnp
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: component hnp open 
function successful
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: found loaded 
component tool
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: component tool 
open function successful
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: found loaded 
component mr_orted
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: component mr_orted 
open function successful
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: found loaded 
component mr_hnp
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: component mr_hnp 
open function successful
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: found loaded 
component orted
[c1725.crane.hcc.unl.edu:222353] mca: base: components_open: component orted 
open function successful
[c1725.crane.hcc.unl.edu:222353] mca:base:select: Auto-selecting iof components
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Querying component 
[hnp]
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Query of component 
[hnp] set priority to 100
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Querying component 
[tool]
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Querying component 
[mr_orted]
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Querying component 
[mr_hnp]
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Querying component 
[orted]
[c1725.crane.hcc.unl.edu:222353] mca:base:select:(  iof) Selected component 
[hnp]
[c1725.crane.hcc.unl.edu:222353] mca: base: close: component tool closed
[c1725.crane.hcc.unl.edu:222353] mca: base: close: unloading component tool
[c1725.crane.hcc.unl.edu:222353] mca: base: close: component mr_orted closed
[c1725.crane.hcc.unl.edu:222353] mca: base: close: unloading component mr_orted
[c1725.crane.hcc.unl.edu:222353] mca: base: close: component mr_hnp closed
[c1725.crane.hcc.unl.edu:222353] mca: base: close: unloading component mr_hnp
[c1725.crane.hcc.unl.edu:222353] mca: base: close: component orted closed
[c1725.crane.hcc.unl.edu:222353] mca: base: close: unloading component orted
 Data for JOB [21980,1] offset 0

 Mapper requested: NULL  Last mapper: round_robin  Mapping policy: 
BYCORE:NOOVERSUBSCRIBE  Ranking policy: CORE
 Binding policy: CORE  Cpu set: NULL  PPR: NULL  Cpus-per-rank: 1
        Num new daemons: 0      New daemon starting vpid INVALID
        Num nodes: 2

 Data for node: c1725   State: 3
        Daemon: [[21980,0],0]   Daemon launched: True
        Num slots: 10   Slots in use: 10        Oversubscribed: FALSE
        Num slots allocated: 10 Max slots: 0
        Num procs: 10   Next node_rank: 10
        Data for proc: [[21980,1],0]
                Pid: 0  Local rank: 0   Node rank: 0    App rank: 0
                State: INITIALIZED      App_context: 0
                Locale:  [B/././././././.][./././././././.]
                Binding: [B/././././././.][./././././././.]
        Data for proc: [[21980,1],1]
                Pid: 0  Local rank: 1   Node rank: 1    App rank: 1
                State: INITIALIZED      App_context: 0
                Locale:  [./B/./././././.][./././././././.]
                Binding: [./B/./././././.][./././././././.]
        Data for proc: [[21980,1],2]
                Pid: 0  Local rank: 2   Node rank: 2    App rank: 2
                State: INITIALIZED      App_context: 0
                Locale:  [././B/././././.][./././././././.]
                Binding: [././B/././././.][./././././././.]
        Data for proc: [[21980,1],3]
                Pid: 0  Local rank: 3   Node rank: 3    App rank: 3
                State: INITIALIZED      App_context: 0
                Locale:  [./././B/./././.][./././././././.]
                Binding: [./././B/./././.][./././././././.]
        Data for proc: [[21980,1],4]
                Pid: 0  Local rank: 4   Node rank: 4    App rank: 4
                State: INITIALIZED      App_context: 0
                Locale:  [././././B/././.][./././././././.]
                Binding: [././././B/././.][./././././././.]
        Data for proc: [[21980,1],5]
                Pid: 0  Local rank: 5   Node rank: 5    App rank: 5
                State: INITIALIZED      App_context: 0
                Locale:  [./././././B/./.][./././././././.]
                Binding: [./././././B/./.][./././././././.]
        Data for proc: [[21980,1],6]
                Pid: 0  Local rank: 6   Node rank: 6    App rank: 6
                State: INITIALIZED      App_context: 0
                Locale:  [././././././B/.][./././././././.]
                Binding: [././././././B/.][./././././././.]
        Data for proc: [[21980,1],7]
                Pid: 0  Local rank: 7   Node rank: 7    App rank: 7
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././B][./././././././.]
                Binding: [./././././././B][./././././././.]
        Data for proc: [[21980,1],8]
                Pid: 0  Local rank: 8   Node rank: 8    App rank: 8
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././.][B/././././././.]
                Binding: [./././././././.][B/././././././.]
        Data for proc: [[21980,1],9]
                Pid: 0  Local rank: 9   Node rank: 9    App rank: 9
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././.][./B/./././././.]
                Binding: [./././././././.][./B/./././././.]

 Data for node: c1726   State: 3
        Daemon: [[21980,0],1]   Daemon launched: True
        Num slots: 10   Slots in use: 10        Oversubscribed: FALSE
        Num slots allocated: 10 Max slots: 0
        Num procs: 10   Next node_rank: 10
        Data for proc: [[21980,1],10]
                Pid: 0  Local rank: 0   Node rank: 0    App rank: 10
                State: INITIALIZED      App_context: 0
                Locale:  [B/././././././.][./././././././.]
                Binding: [B/././././././.][./././././././.]
        Data for proc: [[21980,1],11]
                Pid: 0  Local rank: 1   Node rank: 1    App rank: 11
                State: INITIALIZED      App_context: 0
                Locale:  [./B/./././././.][./././././././.]
                Binding: [./B/./././././.][./././././././.]
        Data for proc: [[21980,1],12]
                Pid: 0  Local rank: 2   Node rank: 2    App rank: 12
                State: INITIALIZED      App_context: 0
                Locale:  [././B/././././.][./././././././.]
                Binding: [././B/././././.][./././././././.]
        Data for proc: [[21980,1],13]
                Pid: 0  Local rank: 3   Node rank: 3    App rank: 13
                State: INITIALIZED      App_context: 0
                Locale:  [./././B/./././.][./././././././.]
                Binding: [./././B/./././.][./././././././.]
        Data for proc: [[21980,1],14]
                Pid: 0  Local rank: 4   Node rank: 4    App rank: 14
                State: INITIALIZED      App_context: 0
                Locale:  [././././B/././.][./././././././.]
                Binding: [././././B/././.][./././././././.]
        Data for proc: [[21980,1],15]
                Pid: 0  Local rank: 5   Node rank: 5    App rank: 15
                State: INITIALIZED      App_context: 0
                Locale:  [./././././B/./.][./././././././.]
                Binding: [./././././B/./.][./././././././.]
        Data for proc: [[21980,1],16]
                Pid: 0  Local rank: 6   Node rank: 6    App rank: 16
                State: INITIALIZED      App_context: 0
                Locale:  [././././././B/.][./././././././.]
                Binding: [././././././B/.][./././././././.]
        Data for proc: [[21980,1],17]
                Pid: 0  Local rank: 7   Node rank: 7    App rank: 17
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././B][./././././././.]
                Binding: [./././././././B][./././././././.]
        Data for proc: [[21980,1],18]
                Pid: 0  Local rank: 8   Node rank: 8    App rank: 18
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././.][B/././././././.]
                Binding: [./././././././.][B/././././././.]
        Data for proc: [[21980,1],19]
                Pid: 0  Local rank: 9   Node rank: 9    App rank: 19
                State: INITIALIZED      App_context: 0
                Locale:  [./././././././.][./B/./././././.]
                Binding: [./././././././.][./B/./././././.]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pulling fd 39 for 
process [[21980,1],0]
[c1725.crane.hcc.unl.edu:222353] defining endpt: file iof_hnp.c line 371 fd 39
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 35 for 
process [[21980,1],0]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],0]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 41 for 
process [[21980,1],0]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],0]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 43 for 
process [[21980,1],0]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],0]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 37 for 
process [[21980,1],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],1]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 46 for 
process [[21980,1],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],1]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 49 for 
process [[21980,1],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],1]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 38 for 
process [[21980,1],2]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],2]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 50 for 
process [[21980,1],2]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],2]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 52 for 
process [[21980,1],2]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],2]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 42 for 
process [[21980,1],3]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],3]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 53 for 
process [[21980,1],3]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],3]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 55 for 
process [[21980,1],3]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],3]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 45 for 
process [[21980,1],4]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],4]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 56 for 
process [[21980,1],4]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],4]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 58 for 
process [[21980,1],4]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],4]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 47 for 
process [[21980,1],5]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],5]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 59 for 
process [[21980,1],5]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],5]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 61 for 
process [[21980,1],5]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],5]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 54 for 
process [[21980,1],6]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],6]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 63 for 
process [[21980,1],6]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],6]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 65 for 
process [[21980,1],6]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],6]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 60 for 
process [[21980,1],7]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],7]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 67 for 
process [[21980,1],7]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],7]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 69 for 
process [[21980,1],7]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],7]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 64 for 
process [[21980,1],8]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],8]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 71 for 
process [[21980,1],8]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],8]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 73 for 
process [[21980,1],8]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],8]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 68 for 
process [[21980,1],9]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],9]: iof_hnp.c 219
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 75 for 
process [[21980,1],9]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],9]: iof_hnp.c 222
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] iof:hnp pushing fd 77 for 
process [[21980,1],9]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] defining read event for 
[[21980,1],9]: iof_hnp.c 225
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],1] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],2]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],2] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],3]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],3] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],5]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],5] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],6]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],6] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],7]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],7] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],9]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],9] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],0]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],0] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],4]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],4] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] read 28 bytes from stderr of 
[[21980,1],8]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
28 bytes to stderr for [[21980,1],8] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 1 has cleared MPI_Init
Rank 2 has cleared MPI_Init
Rank 3 has cleared MPI_Init
Rank 5 has cleared MPI_Init
Rank 6 has cleared MPI_Init
Rank 7 has cleared MPI_Init
Rank 9 has cleared MPI_Init
Rank 0 has cleared MPI_Init
Rank 4 has cleared MPI_Init
Rank 8 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],11]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],11]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],11] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 11 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],12]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],12]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],12] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 12 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],13]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],13]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],13] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 13 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],14]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],14]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],14] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 14 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],15]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],15]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],15] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 15 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],16]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],16]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],16] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 16 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],17]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],17]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],17] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 17 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],18]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],18]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],18] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 18 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],19]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],19]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],19] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 19 has cleared MPI_Init
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF from proc 
[[21980,0],1]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] received IOF cmd from sender 
[[61934,56064],32765] for source [[21980,1],10]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] unpacked 29 bytes from remote 
proc [[21980,1],10]
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output setting up to write 
29 bytes to stderr for [[21980,1],10] on fd 2
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:output adding write event
[c1725.crane.hcc.unl.edu:222353] [[21980,0],0] write:handler writing data to 2
Rank 10 has cleared MPI_Init
