Re: [OMPI devel] One-sided tests in MTT

2019-02-07 Thread Jeff Squyres (jsquyres) via devel
I'm re-sending this to the devel list because I just realized that Nathan 
sent it from an address that was not subscribed, so it was rejected and never 
went through to the full list (I got it only because I was CC'ed).


> On Jan 30, 2019, at 1:55 PM, Nathan Hjelm  wrote:
> 
> 
> For rma-mt:
> 
> Build:
> autoreconf -vif
> ./configure
> 
> Run:
> cd src
> mpirun -n 2 -N 1 (if multinode) ./rmamt_bw -o put (or get) -s flush (pscw, 
> fence, lock, etc.) -t <num_threads> -x (binds threads to cores)
> 
> If that exits successfully, that is a pretty good smoke test that 
> multi-threaded RMA is working.
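> 
> For example, a complete run with illustrative values (two nodes, one rank 
> per node, 4 threads per rank, put with flush synchronization) might look 
> like:
> 
> cd src
> # 2 ranks, 1 per node, 4 threads each, threads bound to cores
> mpirun -n 2 -N 1 ./rmamt_bw -o put -s flush -t 4 -x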
> 
> ARMCI:
> 
> Build:
> ./autogen.pl
> ./configure
> make
> 
> Run:
> make check
> 
> ARMCI was broken with osc/rdma for a couple of releases and we didn't know. 
> It is worth running the checks with OMPI_MCA_osc=sm,rdma, 
> OMPI_MCA_osc=sm,pt2pt, and OMPI_MCA_osc=sm,ucx to test each possible 
> configuration.
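> 
> For example, assuming a POSIX shell, the three configurations could be 
> exercised in turn with:
> 
> # run the ARMCI test suite once per one-sided component stack
> OMPI_MCA_osc=sm,rdma  make check
> OMPI_MCA_osc=sm,pt2pt make check
> OMPI_MCA_osc=sm,ucx   make check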
> 
> -Nathan
> 
> On Jan 30, 2019, at 11:26 AM, "Jeff Squyres (jsquyres) via devel" 
>  wrote:
> 
>> Yo Nathan --
>> 
>> I see you just added 2 suites of one-sided tests to the MTT repo. Huzzah!
>> 
>> Can you provide some simple recipes -- frankly, for someone who doesn't 
>> want/care to know how the tests work :-) -- on how to:
>> 
>> 1. Build the test suites
>> 2. Run in MTT
>> 
>> Thanks!
>> 
>> -- 
>> Jeff Squyres
>> jsquy...@cisco.com
>> 


-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI devel] Open MPI face-to-face

2019-02-07 Thread Jeff Squyres (jsquyres) via devel
It has been settled: Tue Apr 23 - Thu Apr 25, 2019, in San Jose, CA (probably 
at Cisco).

Please add your names and agenda items:

https://github.com/open-mpi/ompi/wiki/Meeting-2019-04

-- 
Jeff Squyres
jsquy...@cisco.com



[OMPI devel] Fwd: System Runtime Interfaces: What’s the Best Way Forward?

2019-02-07 Thread Josh Hursey
The Open MPI developer community might be interested in this discussion
(below is the message I sent to the PMIx list a few weeks ago).

If you are interested in participating, please register so we can properly
scope resource needs. Information is at the link below:
https://groups.google.com/d/msg/hpc-runtime-wg/LGaHyZ0jRvE/n2t9MeSkDgAJ

Thanks,
Josh


-- Forwarded message -
From: Josh Hursey 
Date: Fri, Jan 18, 2019 at 6:55 PM
Subject: System Runtime Interfaces: What’s the Best Way Forward?
To: 


I'd like to share this meeting announcement with the PMIx community.

I am co-facilitating this meeting to help champion the exceptional effort
of the PMIx community towards innovation, adoption, and standardization in
this domain. I intend to put forward the PMIx standardization effort as one
such path towards the goals described in this announcement.

I hope for a meaningful discussion, both on the group's mailing list and in
the face-to-face meeting. If you are interested and able to lend your voice
to the conversation, it would be appreciated.

-- Josh




The entire HPC community (and beyond!) benefits from having a standardized
API specification between applications/tools and system runtime
environments.  Such a standard interface should focus on supporting HPC
application launch and wire-up; tools that wish to inspect, steer, and/or
debug parallel applications; interfaces to manage dynamic workload
applications; interfaces to support fault tolerance and cross-library
coordination; and communication across container boundaries.

Beyond the proprietary interfaces, the HPC community has seen the evolution
from PMI-1 to PMI-2 to the current PMIx interfaces. We would like to
discuss how to move the current state of practice forward towards greater
stability and wider adoption. What is the best way to achieve this goal
without hindering current progress in the broader HPC community? There is a
wide range of directions to go towards this goal, but which is the best
path? That is the question we seek to discuss in this group.

This effort seeks a broad community to participate in this discussion. The
community should represent folks working on parallel libraries (e.g., MPI,
UPC, OpenSHMEM), runtime environments (e.g., SLURM, LSF, Torque), container
runtimes and orchestration environments (e.g., Singularity, CharlieCloud,
Docker, Kubernetes), tools (e.g., TotalView, DDT, STAT), and the broader
research community.

We will have a face-to-face meeting on March 4, 2019, from 9:00 am - 2:00
pm in Chattanooga, TN, to discuss these questions.  The meeting will be
co-located with (but separate from) the MPI Forum meeting (See this page
for logistics information: https://www.mpi-forum.org/meetings/). The
meeting is co-located with the MPI Forum to facilitate organization and
because of the overlap in the communities.

A mailing list has been created at the link below to facilitate
conversation on this topic:
https://groups.google.com/forum/#!forum/hpc-runtime-wg

Feel free to forward this information to others in your community that
might be interested in this discussion.

Kathryn Mohror (LLNL) and Josh Hursey (IBM)



-- 
Josh Hursey
IBM Spectrum MPI Developer



[OMPI devel] Queued up Open MPI mails

2019-02-07 Thread Jeff Squyres (jsquyres) via devel
As you can probably tell from the flood of backlogged Open MPI mails that 
just landed in your inbox, there was some kind of issue at our mailing list 
provider (it only affected some of our lists). They have just released all 
the backlogged emails.

Enjoy the onslaught!

-- 
Jeff Squyres
jsquy...@cisco.com
