Re: [OMPI devel] RFC: RML change to multi-select

2016-03-16 Thread Howard
Hi Ralph I don't know if it's relevant, but I'm working on an ofi BTL so we can use the OSC rdma. Howard Sent from my iPhone > On 15.03.2016 at 17:21, Ralph Castain wrote: > > Hi folks > > We are working on integrating the RML with libfabric so we have acc

Re: [OMPI devel] RFC: RML change to multi-select

2016-03-17 Thread Howard
I think that's a better approach. Not clear you'd want to use same EP type as BTL. I'm going for RDM type for now for BTL. Howard Sent from my iPhone > On 16.03.2016 at 09:35, Ralph Castain wrote: > > Interesting! Yeah, we debated about BTL or go direct to O

[OMPI devel] opal components subject to removal for 1.9 release

2014-10-03 Thread Howard
of these components need to be maintained going forward, please speak up. ORTE and OMPI components will be discussed at next week's devel meeting. Thanks, Howard

Re: [OMPI devel] RFC: calloc instead of malloc in opal_obj_new()

2014-10-03 Thread Howard
Hi Jeff, I'd be okay with this as long as there would be a config option to revert to using malloc rather than calloc. Howard On 10/3/14 2:54 PM, Jeff Squyres (jsquyres) wrote: WHAT: change the malloc() to calloc() in opal_obj_new() (perhaps only in debug builds?) WHY: Drasti
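
For context, a minimal sketch of the kind of change being discussed, with a build-time switch so a configuration could revert to malloc(). The macro OPAL_OBJ_USE_CALLOC and the helper obj_allocate are hypothetical names, not the actual RFC patch:

    /* Hedged sketch: zero-filling object allocation guarded by a
     * build-time option so builds can fall back to plain malloc(). */
    #include <stdlib.h>

    static inline void *obj_allocate(size_t size)
    {
    #if defined(OPAL_OBJ_USE_CALLOC)
        /* calloc() zeroes the memory, making use-before-initialization
         * bugs deterministic and easier for tools to flag. */
        return calloc(1, size);
    #else
        /* Original behavior: uninitialized memory from malloc(). */
        return malloc(size);
    #endif
    }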

Re: [OMPI devel] ompi_win_create hangs on a non uniform cluster

2015-11-14 Thread Howard
Hi Gilles Could you check whether you also see this problem with v2.x? Thanks, Howard Sent from my iPhone > On 10.11.2015 at 19:57, Gilles Gouaillardet wrote: > > Nathan, > > a simple MPI_Win_create test hangs on my non uniform cluster > (ibm/onesided/c_create)
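
For readers unfamiliar with the test being referenced, the core of such a one-sided test looks roughly like the sketch below. This is illustrative only, not the actual ibm/onesided/c_create test:

    /* Minimal MPI_Win_create exercise (illustrative only). */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Win win;
        int *buf;

        MPI_Init(&argc, &argv);
        buf = malloc(4096);
        /* MPI_Win_create is collective; the reported hang occurs in a
         * call of this form on a non-uniform cluster. */
        MPI_Win_create(buf, 4096, sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);
        MPI_Win_free(&win);
        free(buf);
        MPI_Finalize();
        return 0;
    }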

Re: [OMPI devel] RFC: RML change to multi-select

2016-03-17 Thread Howard Pritchard
fabric had a > decent shared memory provider... > > > On Mar 17, 2016, at 7:10 AM, Howard wrote: > > I think that's a better approach. Not clear you'd want to use same EP type > as BTL. I'm going for RDM type for now for BTL. > > Howard > > Von me

[OMPI devel] psm2 and psm2_ep_open problems

2016-04-14 Thread Howard Pritchard
ve ofi mtl, but perhaps there's another way to get psm2 mtl to work for single node jobs? I'd prefer to not ask users to disable psm2 mtl explicitly for their single node jobs. Thanks for suggestions. Howard

Re: [OMPI devel] psm2 and psm2_ep_open problems

2016-04-14 Thread Howard Pritchard
the PSM2 MTL to handle this feature of PSM2. Howard On Thursday, April 14, 2016, Cabral, Matias A wrote: > Hi Howard, > > > > I suspect this is the known issue that when using SLURM with OMPI and PSM > that is discussed here: > > https://www.open-mpi.org/community/lists

[OMPI devel] Fwd: psm2 and psm2_ep_open problems

2016-04-15 Thread Howard Pritchard
I didn't copy dev on this. -- Forwarded message -- From: *Howard Pritchard* Date: Thursday, April 14, 2016 Subject: psm2 and psm2_ep_open problems To: Open MPI Developers Hi Matias Actually I triaged this further. The Open MPI PMI subsystem is actually doing t

Re: [OMPI devel] psm2 and psm2_ep_open problems

2016-04-18 Thread Howard Pritchard
please point me to the patch. -- sent from my smart phone so no good typing. Howard On Apr 15, 2016 1:04 PM, "Ralph Castain" wrote: > I have a patch that I think will resolve this problem - would you please > take a look? > > Ralph > > > > On Apr 1

[OMPI devel] PSM2 Intel folks question

2016-04-19 Thread Howard Pritchard
6.x86_64 infinipath-*psm*-3.3-0.g6f42cdb1bb8.2.el7.x86_64 should we get newer rpms installed? Is there a way to disable the AMSHM path? I'm wondering if that would help since multi-node jobs seem to run fine. Thanks for any help, Howard

Re: [OMPI devel] PSM2 Intel folks question

2016-04-19 Thread Howard Pritchard
it's not a SLURM-specific problem. 2016-04-19 12:25 GMT-06:00 Cabral, Matias A : > Hi Howard, > > > > A couple more questions to understand the context a little better: > > - What type of job is running? > > - Is this also under srun? > > > > Fo

Re: [OMPI devel] PSM2 Intel folks question

2016-04-20 Thread Howard Pritchard
ixes to get the PSM2 MTL working on our omnipath clusters. I don't think this problem has anything to do with SLURM except for the jobid manipulation to generate the unique key. Howard 2016-04-19 17:18 GMT-06:00 Cabral, Matias A : > Howard, > > > > PSM2_DEVICES, I went ba

Re: [OMPI devel] Common symbol warnings in tarballs (was: make install warns about 'common symbols')

2016-04-20 Thread Howard Pritchard
I also think this symbol checker should not be in the tarball. Howard 2016-04-20 13:08 GMT-06:00 Jeff Squyres (jsquyres) : > On Apr 20, 2016, at 2:08 PM, dpchoudh . wrote: > > > > Just to clarify, I was doing a build (after adding code to support a new > transport) from co

Re: [OMPI devel] Common symbol warnings in tarballs (was: make install warns about 'common symbols')

2016-04-21 Thread Howard Pritchard
On Wednesday, April 20, 2016, Paul Hargrove wrote: > Not sure if Howard wants the check to be OFF by default in tarballs, or > absent completely. > > I meant the former. > I test almost exclusively from RC tarballs, and have access to many > uncommon platforms. > So, if y

Re: [OMPI devel] PSM2 Intel folks question

2016-04-21 Thread Howard Pritchard
Hi Matias, I updated issue 1559 with the info requested. It might be simpler to just switch over to using the issue for tracking this conversation? I don't want to be posting emails with big attachments on this list. Thanks, Howard 2016-04-20 19:21 GMT-06:00 Cabral, Matias A : > H

Re: [OMPI devel] 2.0.0 is coming: what do we need to communicate to users?

2016-04-29 Thread Howard Pritchard
Hi Jeff, checkpoint/restart is not supported in this release. Does this release work with TotalView? I recall we had some problems, and do not remember if they were resolved. We may also want to clarify whether any PML/MTLs are experimental in this release, and the state of MPI_THREAD_MULTIPLE support. Howard

Re: [OMPI devel] Open MPI v2.0.0rc2

2016-04-30 Thread Howard Pritchard
Hi Jeff, Let's just update the MPI_THREAD_MULTIPLE comment to say that --enable-mpi-thread-multiple is still required at configure time. Howard 2016-04-29 22:20 GMT-06:00 Orion Poplawski : > On 04/28/2016 05:01 PM, Jeff Squyres (jsquyres) wrote: > >> At long last, here's t
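
As a reminder of what this means for applications: when the library is built without --enable-mpi-thread-multiple, a request for MPI_THREAD_MULTIPLE is typically downgraded, which a program can detect at runtime. A small hedged sketch:

    /* Check at runtime whether MPI_THREAD_MULTIPLE was actually granted. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE) {
            /* Likely an Open MPI build configured without
             * --enable-mpi-thread-multiple. */
            fprintf(stderr, "requested MPI_THREAD_MULTIPLE, got level %d\n",
                    provided);
        }
        MPI_Finalize();
        return 0;
    }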

[OMPI devel] updating the users migration guide request

2016-05-16 Thread Howard Pritchard
migration guide. The wiki page format of the guide is at https://github.com/open-mpi/ompi/wiki/User-Migration-Guide%3A-1.8.x-and-v1.10.x-to-v2.0.0 We'll discuss this at the devel telecon tomorrow (5/17). Thanks, Howard

[OMPI devel] NERSC down today so lanl-bot getting time off

2016-05-24 Thread Howard Pritchard
orrow when lanl-bot should be back in business wrt cori and edison. Howard

Re: [OMPI devel] Jenkins testing - what purpose are we striving to achieve?

2016-06-07 Thread Howard Pritchard
he underlying branch which the PR targets. Howard 2016-06-07 13:33 GMT-06:00 Ralph Castain : > Hi folks > > I’m trying to get a handle on our use of Jenkins testing for PRs prior to > committing them. When we first discussed this, it was my impression that > our objective wa

[OMPI devel] Open MPI v2.0.0rc3 now available

2016-06-15 Thread Howard Pritchard
priority callbacks are made during Open MPI's main progress loop. - Disable backtrace support by default in the PSM/PSM2 libraries to prevent unintentional conflicting behavior. Thanks, Howard -- Howard Pritchard HPC-DES Los Alamos National Laboratory

Re: [OMPI devel] Issue with 2.0.0rc3, singleton init

2016-06-16 Thread Howard Pritchard
Hi Lisandro, Thanks for giving the rc3 a try. Could you post the output of ompi_info from your install to the list? Thanks, Howard 2016-06-16 7:55 GMT-06:00 Lisandro Dalcin : > ./configure --prefix=/home/devel/mpi/openmpi/2.0.0rc3 --enable-debug > --enable-mem-debug >

Re: [OMPI devel] [2.0.0rc4] non-critical faulres report

2016-07-12 Thread Howard Pritchard
Paul, Could you narrow down the versions of PGCC where you get the ICE when using the -m32 option? Thanks, Howard 2016-07-06 15:29 GMT-06:00 Paul Hargrove : > The following are previously reported issues that I am *not* expecting to > be resolved in 2.0.0. > However, I am lis

Re: [OMPI devel] 2.0.0rc4 Crash in MPI_File_write_all_end

2016-07-13 Thread Howard Pritchard
issue to a 2.0.1 bug fix release. Howard 2016-07-12 13:51 GMT-06:00 Eric Chamberland < eric.chamberl...@giref.ulaval.ca>: > Hi Edgard, > > I just saw that your patch got into ompi/master... any chances it goes > into ompi-release/v2.x before rc5? > > thanks, > >
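
For context, the split-collective I/O pattern exercised by that report looks roughly like the sketch below. This is not Eric's reproducer; "out.dat" is a placeholder filename:

    /* Illustrative split-collective write; the reported crash was in the
     * _end half of the pair. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Status status;
        int data[4] = {0, 1, 2, 3};

        MPI_Init(&argc, &argv);
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        /* A real code would normally set a per-rank file view first. */
        MPI_File_write_all_begin(fh, data, 4, MPI_INT);
        /* ... computation can overlap the I/O here ... */
        MPI_File_write_all_end(fh, data, &status);
        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }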

[OMPI devel] tcp btl rendezvous performance question

2016-07-18 Thread Howard Pritchard
e socket performance obtained with iperf for large messages (~16 Gb/sec). We tried adjusting the tcp_btl_rendezvous threshold but that doesn't appear to actually be adjustable from the mpirun command line. Thanks for any suggestions, Howard

[OMPI devel] heads up about OMPI/master

2016-12-01 Thread Howard Pritchard
what has happened. Howard

Re: [OMPI devel] heads up about OMPI/master

2016-12-01 Thread Howard Pritchard
Hi Gilles I didn't see a merge commit for all these commits, hence my concern that it was a mistake. In general it's better to pull in commits via the PR process. Howard On Thursday, December 1, 2016, Gilles Gouaillardet wrote: > fwiw, the major change is in https://github.com/

Re: [OMPI devel] heads up about OMPI/master

2016-12-01 Thread Howard Pritchard
Ralph, I don't know how it happened but if you do git log --oneline --topo-order you don't see a Merge pull request #2488 in the history for master. Howard 2016-12-01 16:59 GMT-07:00 r...@open-mpi.org : > Ummm...guys, it was done via PR. I saw it go by, and it was all done t

Re: [OMPI devel] Open MPI v2.0.2rc1 is up

2016-12-16 Thread Howard Pritchard
Hi Paul, Thanks for checking the rc out. And for noting the grammar mistake. Howard 2016-12-16 1:00 GMT-07:00 Paul Hargrove : > My testing is complete. > > The only problems not already known are related to PGI's recent "Community > Edition" compilers and

Re: [OMPI devel] Open MPI v2.0.2rc1 is up

2016-12-19 Thread Howard Pritchard
Hi Paul, Would you mind resending the "runtime error w/ PGI usempif08 on OpenPOWER" email without the config.log attached? Thanks, Howard 2016-12-16 12:17 GMT-07:00 Howard Pritchard : > Hi Paul, > > Thanks for checking the rc out. And for noting the grammar

Re: [OMPI devel] Open MPI v2.0.2rc1 is up

2016-12-20 Thread Howard Pritchard
Hi Orion, Thanks for trying out the rc. Which compiler/version of compiler are you using? Howard 2016-12-20 10:50 GMT-07:00 Orion Poplawski : > On 12/14/2016 07:58 PM, Jeff Squyres (jsquyres) wrote: > > Please test! > > > > https://www.open-mpi.org/software/ompi/v2

Re: [OMPI devel] Open MPI v2.0.2rc1 is up

2016-12-20 Thread Howard Pritchard
Hi Orion, Opened issue 2610 <https://github.com/open-mpi/ompi/issues/2610>. Thanks, Howard 2016-12-20 11:27 GMT-07:00 Howard Pritchard : > Hi Orion, > > Thanks for trying out the rc. Which compiler/version of compiler are you > using? > > Howard > > >

[OMPI devel] Open MPI 2.0.2rc2 is up

2016-12-23 Thread Howard Pritchard
. Thanks, Howard -- Howard Pritchard HPC-DES Los Alamos National Laboratory

Re: [OMPI devel] [2.0.2rc2] FreeBSD-11 run failure

2017-01-05 Thread Howard Pritchard
Hi Paul, I opened https://github.com/open-mpi/ompi/issues/2665 to track this. Thanks for reporting it. Howard 2017-01-04 14:43 GMT-07:00 Paul Hargrove : > With the 2.0.2rc2 tarball on FreeBSD-11 (i386 or amd64) I am configuring > with: > --prefix=... CC=clang CXX=clang++

Re: [OMPI devel] rdmacm and udcm for 2.0.1 and RoCE

2017-01-05 Thread Howard Pritchard
,128,64,32,32,32:S,2048,1024,128,32:S, 12288,1024,128,32:S,65536,1024,128,32 (all the rest of the command line args) and see if it then works? Howard 2017-01-04 16:37 GMT-07:00 Dave Turner : > -- > No OpenFabrics conn

Re: [OMPI devel] [2.0.2rc2] opal_fifo hang w/ --enable-osx-builtin-atomics

2017-01-05 Thread Howard Pritchard
Hi Paul, I opened issue 2666 <https://github.com/open-mpi/ompi/issues/2666> to track this. Howard 2017-01-05 0:23 GMT-07:00 Paul Hargrove : > On Macs running Yosemite (OS X 10.10 w/ Xcode 7.1) and El Capitan (OS X > 10.11 w/ Xcode 8.1) I have configured with > CC=cc CXX

Re: [OMPI devel] [2.0.2rc3] build failure ppc64/-m32 and bultin-atomics

2017-01-06 Thread Howard Pritchard
Hi Paul, Thanks for checking this. This problem was previously reported and there's an issue: https://github.com/open-mpi/ompi/issues/2610 tracking it. Howard 2017-01-05 21:19 GMT-07:00 Paul Hargrove : > I have a standard Linux/ppc64 system with gcc-4.8.3 > I have configured t

Re: [OMPI devel] [2.0.2rc3] build failure ppc64/-m32 and bultin-atomics

2017-01-06 Thread Howard Pritchard
Hi Paul, Sorry for the confusion. This is a different problem. I'll open an issue for this one too. Howard 2017-01-06 9:18 GMT-07:00 Howard Pritchard : > Hi Paul, > > Thanks for checking this. > > This problem was previously reported and there's an issue:

Re: [OMPI devel] [2.0.2rc3] build failure ppc64/-m32 and bultin-atomics

2017-01-06 Thread Howard Pritchard
Hi Paul, https://github.com/open-mpi/ompi/issues/2677 It seems we have a bunch of problems with PPC64 atomics and I'd like to see if we can get at least some of these issues resolved for 2.0.2, so I've set this as a blocker along with 2610. Howard 2017-01-06 9:48 GMT-07:00 Howard

Re: [OMPI devel] Fwd: Re: [OMPI users] still segmentation fault with openmpi-2.0.2rc3 on Linux

2017-01-12 Thread Howard Pritchard
Siegmar, Could you confirm that if you use one of the mpirun arg lists that works for Gilles, your test case passes? Something simple like mpirun -np 1 ./spawn_master ? Howard 2017-01-11 18:27 GMT-07:00 Gilles Gouaillardet : > Ralph, > > > so it seems the root cause
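
For anyone following along, spawn_master is Siegmar's own test, but the heart of such a dynamic-process test is roughly the sketch below; "./spawn_worker" is a placeholder executable name, not the real test's:

    /* Illustrative MPI_Comm_spawn test: a master spawns worker processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;
        int errcodes[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_spawn("./spawn_worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, errcodes);
        printf("spawned 2 workers\n");
        MPI_Comm_disconnect(&intercomm);
        MPI_Finalize();
        return 0;
    }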

Re: [OMPI devel] [OMPI users] still segmentation fault with openmpi-2.0.2rc3 on Linux

2017-01-13 Thread Howard Pritchard
Thanks Siegmar. I just wanted to confirm you weren't having some other issue besides the host and slot-list problems. Howard 2017-01-12 23:50 GMT-07:00 Siegmar Gross < siegmar.gr...@informatik.hs-fulda.de>: > Hi Howard and Gilles, > > thank you very much for your help. Al

Re: [OMPI devel] [2.0.2rc4] "make install" failure on NetBSD/i386 (libtool?)

2017-01-28 Thread Howard Pritchard
Hi Paul, This might be a result of building the tarball on a new system. Would you mind trying the rc3 tarball and seeing if that builds on the system? Howard 2017-01-27 15:12 GMT-07:00 Paul Hargrove : > I had no problem with 2.0.2rc3 on NetBSD, but with 2.0.2rc4 I am seeing a > "m

Re: [OMPI devel] No Preset Parameters found

2017-02-20 Thread Howard Pritchard
Hello Amit, which version of Open MPI are you using? Howard -- sent from my smart phone so no good typing. Howard On Feb 20, 2017 12:09 PM, "Kumar, Amit" wrote: > Dear OpenMPI, > > > > Wondering what preset parameters this warning is indicating?

Re: [OMPI devel] [2.1.0rc2] stupid run failure on Mac OS X Sierra

2017-03-07 Thread Howard Pritchard
Hi Paul There is an entry (number 8) under the OS-X FAQ which describes this problem. Adding the max allowable length is a good idea. Howard Paul Hargrove wrote on Tue., March 7, 2017 at 08:04: > The following is fairly annoying (though I understand the problem is real): > > $ [full-path-to]/mpirun -m

Re: [OMPI devel] Segfault during a free in reduce_scatter using basic component

2017-03-28 Thread Howard Pritchard
Hello Emmanuel, Which version of Open MPI are you using? Howard 2017-03-28 3:38 GMT-06:00 BRELLE, EMMANUEL : > Hi, > > We are working on a portals4 component and we have found a bug (causing > a segmentation fault) which must be related to the coll/basic component. > D

Re: [OMPI devel] Pull request: LANL-XXX tests failing

2017-03-30 Thread Howard Pritchard
Actually it looks like we're running out of disk space at AWS. 2017-03-30 9:28 GMT-06:00 r...@open-mpi.org : > You didn’t do anything wrong - the Jenkins test server at LANL is having a > problem. > > On Mar 30, 2017, at 8:22 AM, DERBEY, NADIA wrote: > > Hi, > > I just created a pull request an

Re: [OMPI devel] Pull request: LANL-XXX tests failing

2017-03-30 Thread Howard Pritchard
Well, I'm not sure what's going on. There was an upgrade of Jenkins and a bunch of functionality seems to have gotten lost. 2017-03-30 9:37 GMT-06:00 Howard Pritchard : > Actually it looks like we're running out of disk space at AWS. > > > 2017-03-30 9:28 GMT-06:00 r...@open-mpi

[OMPI devel] OS-X specific jenkins/PR retest

2017-04-07 Thread Howard Pritchard
Hi Folks, I added an OS-X-specific bot retest command for Jenkins CI: bot:osx:retest Also added a blurb to the related wiki page: https://github.com/open-mpi/ompi/wiki/PRJenkins Hope this helps folks who encounter OS-X-specific problems with their PRs. Howard

[OMPI devel] Open MPI v2.1.1 release reminder - public service announcement

2017-04-19 Thread Howard Pritchard
HI Folks, Reminder that we are planning to do a v2.1.1 bug release next Tuesday (4/25/17) as discussed in yesterday's con-call. If you have bug fixes you'd like to get in to v2.1.1 please open PRs this week so there will be time for review and testing in MTT. Thanks, Howar

[OMPI devel] Open MPI 2.1.1rc1 is up

2017-04-27 Thread Howard Pritchard
location. Thanks to Kevin Buckley for reporting and supplying a fix. - Fix a problem with conflicting PMI symbols when linking statically. Thanks to Kilian Cavalotti for reporting. Please try it out if you have time. Thanks, Howard and Jeff

[OMPI devel] Open MPI v2.0.3rc1 available for testing

2017-05-26 Thread Howard Pritchard
mangling of custom CFLAGS when configuring Open MPI. Thanks to Phil Tooley for reporting. - Fix some minor memory leaks and remove some unused variables. Thanks to Joshua Gerrard for reporting. - Fix MPI_ALLGATHERV bug with MPI_IN_PLACE. Thanks, Howard
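
For reference, the MPI_IN_PLACE usage that this class of bug affects looks like the generic sketch below (not the reporter's code):

    /* With MPI_IN_PLACE each rank's contribution already sits at its own
     * displacement in recvbuf; sendcount/sendtype are ignored. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *counts  = malloc(size * sizeof(int));
        int *displs  = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (i = 0; i < size; i++) { counts[i] = 1; displs[i] = i; }
        recvbuf[rank] = rank;   /* this rank's contribution, in place */

        MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                       recvbuf, counts, displs, MPI_INT, MPI_COMM_WORLD);

        free(recvbuf); free(displs); free(counts);
        MPI_Finalize();
        return 0;
    }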

Re: [OMPI devel] Time to remove Travis?

2017-06-01 Thread Howard Pritchard
I vote for removal too. Howard r...@open-mpi.org wrote on Thu., June 1, 2017 at 08:10: > I’d vote to remove it - it’s too unreliable anyway > > > On Jun 1, 2017, at 6:30 AM, Jeff Squyres (jsquyres) > wrote: > > > > Is it time to remove Travis? > > > > I b

Re: [OMPI devel] SLURM 17.02 support

2017-06-16 Thread Howard Pritchard
Hi Ralph I think a helpful error message would suffice. Howard r...@open-mpi.org wrote on Tue., June 13, 2017 at 11:15: > Hey folks > > Brian brought this up today on the call, so I spent a little time > investigating. After installing SLURM 17.02 (with just --prefix as confi

Re: [OMPI devel] SLURM 17.02 support

2017-06-19 Thread Howard Pritchard
Hi Ralph I think the alternative you mention below should suffice. Howard r...@open-mpi.org wrote on Mon., June 19, 2017 at 07:24: > So what you guys want is for me to detect that no opal/pmix framework > components could run, detect that we are in a slurm job, and so print out >

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-21 Thread Howard Pritchard
Hi Chris, Thanks very much for the patch! Howard 2017-06-21 9:43 GMT-06:00 Christoph Niethammer : > Hello Ralph, > > Thanks for the update on this issue. > > I used the latest master (c38866eb3929339147259a3a46c6fc815720afdb). > > The behaviour is still the

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-21 Thread Howard Pritchard
Hi Chris, Sorry for being a bit picky, but could you add a sign-off to the commit message? I'm not supposed to manually add it for you. Thanks, Howard 2017-06-21 9:45 GMT-06:00 Howard Pritchard : > Hi Chris, > > Thanks very much for the patch! > > Howard > >

Re: [OMPI devel] orte-clean not cleaning left over temporary I/O files in /tmp

2017-06-22 Thread Howard Pritchard
Hi Chris Please go ahead and open a PR for master and I'll open corresponding ones for the release branches. Howard Christoph Niethammer wrote on Thu., June 22, 2017 at 01:10: > Hi Howard, > > Sorry, missed the new license policy. I added a Sign-off now. > Shall I op

[OMPI devel] libtool guru help needed (Fortran problem)

2017-06-22 Thread Howard Pritchard
roblem myself. F08 rocks! Howard

Re: [OMPI devel] Open MPI 3.0.0 first release candidate posted

2017-06-29 Thread Howard Pritchard
'll do some more investigating, but probably not till next week. Howard 2016-06-28 11:50 GMT-06:00 Barrett, Brian via devel < devel@lists.open-mpi.org>: > The first release candidate of Open MPI 3.0.0 is now available ( > https://www.open-mpi.org/software/ompi/v3.0/). We ex

Re: [OMPI devel] Open MPI 3.0.0 first release candidate posted

2017-06-29 Thread Howard Pritchard
Brian, Things look much better with this patch. We need it for the 3.0.0 release. The patch from 3794 applied cleanly from master. Howard 2017-06-29 16:51 GMT-06:00 r...@open-mpi.org : > I tracked down a possible source of the oob/tcp error - this should > address it, I think: https://gith

[OMPI devel] hwloc 2 thing

2017-07-20 Thread Howard Pritchard
n -df but that did not help. Is anyone else seeing this? Just curious, Howard

[OMPI devel] Open MPI v2.1.2rc1 available

2017-08-10 Thread Howard Pritchard
Hi Folks, Open MPI v2.1.2rc1 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ There is an outstanding issue which will be fixed before the final release: https://github.com/open-mpi/ompi/issues/4069 but we wanted to get an rc1 out to see

[OMPI devel] Open MPI 2.1.2rc2 available

2017-08-17 Thread Howard Pritchard
reporting. Thanks, Howard and Jeff

[OMPI devel] Open MPI 2.1.2rc3 available for testing

2017-08-30 Thread Howard Pritchard
error. Thanks to Neil Carlson for reporting. Also, removed support for big endian PPC and XL compilers older than 13.1. Thanks, Jeff and Howard

[OMPI devel] KNL/hwloc funny message question

2017-09-01 Thread Howard Pritchard
y, 142) Hello, world, I am 3 of 4, (Open MPI v2.1.1rc1, package: Open MPI dshrader@tt-fey1 Distribution, ident: 2.1.1rc1, repo rev: v2.1.1-4-g5ded3a2d, Unreleased developer copy, 142) Anyone know what might be causing hwloc to report this invalid knl_memoryside_cache

[OMPI devel] Open MPI 2.1.2rc4 available for testing

2017-09-13 Thread Howard Pritchard
node via the slurmd daemon rather than mpirun. - Fix a problem with one of Open MPI's opal_path_nfs make check tests. Thanks, Howard and Jeff

[OMPI devel] MTT database

2017-10-12 Thread Howard Pritchard
Is anyone seeing issues with MTT today? When I go to the website and click on summary I get this back in my browser window: MTTDatabase abort: Could not connect to the ompidb database; submit this run later. Howard

[OMPI devel] Open MPI 2.0.4rc1 available for testing

2017-10-29 Thread Howard Pritchard
Hi Folks, Open MPI 2.0.4rc1 is available for download and testing at https://www.open-mpi.org/software/ompi/v2.0/ Fixes in this release include: 2.0.4 -- October, 2017 -- Bug fixes/minor improvements: - Add configure check to prevent trying to build this release of Open

[OMPI devel] Open MPI 2.0.4rc2 available for testing

2017-11-01 Thread Howard Pritchard
Hi Folks, We decided to roll an rc2 to pick up a PMIx fix: - Fix an issue with visibility of functions defined in the built-in PMIx. Thanks to Siegmar Gross for reporting this issue. Tarballs can be found at the usual place https://www.open-mpi.org/software/ompi/v2.0/ Thanks, Your Open MPI

[OMPI devel] 2.0.4rc3 is available for testing

2017-11-07 Thread Howard Pritchard
Hi Folks, We fixed one more thing for the 2.0.4 release, so there's another rc, now rc3. The fixed item was a problem with neighbor collectives. Thanks to Lisandro Dalcin for reporting. Tarballs are at the usual place, https://www.open-mpi.org/software/ompi/v2.0/ Thanks, Open MPI release team
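
For context, a neighbor collective of the kind affected by that fix looks roughly like the generic sketch below (a periodic 1-D Cartesian topology; this is not the reported test):

    /* Each rank exchanges one int with its two neighbors on a 1-D ring. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm cart;
        int size, rank, dims[1] = {0}, periods[1] = {1};
        int sendval, recvvals[2];

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        dims[0] = size;
        MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &cart);
        MPI_Comm_rank(cart, &rank);
        sendval = rank;

        /* Gathers one value from each of the two ring neighbors. */
        MPI_Neighbor_allgather(&sendval, 1, MPI_INT,
                               recvvals, 1, MPI_INT, cart);
        printf("rank %d received %d and %d\n", rank, recvvals[0], recvvals[1]);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }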

[OMPI devel] hwloc2 and cuda and non-default cudatoolkit install location

2017-12-20 Thread Howard Pritchard
h a configury argument change be traumatic for the hwloc community? I think it would be weird to have both an --enable-cuda and a --with-cuda configury argument for hwloc. Third option, wait for the next major release of UCX with built-in cuda support

Re: [OMPI devel] hwloc2 and cuda and non-default cudatoolkit install location

2017-12-22 Thread Howard Pritchard
Okay. I'll wait till we've had the discussion about removing embedded versions. I appreciate the use of pkg-config, but it doesn't look like cudatoolkit 8.0 installed on our systems includes *.pc files. Howard 2017-12-20 14:55 GMT-07:00 r...@open-mpi.org : > FWIW: what we do

[OMPI devel] Open MPI 2.1.3 rc1 available for testing

2018-02-15 Thread Howard Pritchard
Hello Folks, Open MPI 2.1.3rc1 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ This is a bug fix release for the Open MPI 2.1.x release stream. Items fixed in this release include the following: - Update internal PMIx version to 1.2.5. - Fix a

[OMPI devel] Open MPI 2.1.3 rc2 available for testing

2018-02-22 Thread Howard Pritchard
Hello Folks, We discovered a bug in the osc/rdma component that we wanted to fix in this release, hence an rc2. Open MPI 2.1.3rc2 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ This is a bug fix release for the Open MPI 2.1.x release stream. I

[OMPI devel] Open MPI 2.1.3rc3 available for testing

2018-03-14 Thread Howard Pritchard
Hi Folks, A few MPI I/O (both in OMPI I/O and ROMIO glue layer) bugs were found in the rc2 so we're doing an rc3. Open MPI 2.1.3rc3 tarballs are available for testing at the usual place: https://www.open-mpi.org/software/ompi/v2.1/ This is a bug fix release for the Open MPI 2.1.x release strea

[OMPI devel] testing if NMC mail server working again

2018-08-28 Thread Howard Pritchard

[OMPI devel] Open MPI website borked up?

2018-09-01 Thread Howard Pritchard
Hi Folks, Something seems to be borked up about the OMPI website. Go to the website and you'll get some odd parsing error appearing. Howard

[OMPI devel] testing again (EOM)

2018-09-01 Thread Howard Pritchard

Re: [OMPI devel] Entry in mca-btl-openib-device-params.ini

2018-10-15 Thread Howard Pritchard
Hello Sindhu, Open a GitHub PR with your changes. See https://github.com/open-mpi/ompi/wiki/SubmittingPullRequests Howard On Mon., Oct. 15, 2018 at 13:26, Devale, Sindhu < sindhu.dev...@intel.com> wrote: > Hi, > > > > I need to add an entry to the *mca-btl-ope

[OMPI devel] Open MPI 4.0.1rc1 available for testing

2019-03-01 Thread Howard Pritchard
The first release candidate for the Open MPI v4.0.1 release is posted at https://www.open-mpi.org/software/ompi/v4.0/ Major changes include: - Update embedded PMIx to 3.1.2. - Fix an issue when using --enable-visibility configure option and older versions of hwloc. Thanks to Ben Menadue fo

[OMPI devel] Open MPI 4.0.1rc2 available for testing

2019-03-19 Thread Howard Pritchard
A second release candidate for the Open MPI v4.0.1 release is posted at https://www.open-mpi.org/software/ompi/v4.0/ Fixes since 4.0.1rc1 include - Fix an issue with Vader (shared-memory) transport on OS-X. Thanks to Daniel Vollmer for reporting. - Fix a problem with the usNIC BTL Makefile. Th

[OMPI devel] Open MPI 4.0.1rc3 available for testing

2019-03-22 Thread Howard Pritchard
A third release candidate for the Open MPI v4.0.1 release is posted at https://www.open-mpi.org/software/ompi/v4.0/ Fixes since 4.0.1rc2 include - Add acquire semantics to an Open MPI internal lock acquire function. Our goal is to release 4.0.1 by the end of March, so any testing is appreciated.

Re: [OMPI devel] sm BTL performace of the openmpi-2.0.0

2016-08-05 Thread Howard Pritchard
Hello Christoph The rdmacm messages, while annoying, are not causing the problem. If you specify the tcp BTL, does the BW drop disappear? Also, could you post your configure options to the mail list? Thanks Howard On Friday, August 5, 2016, Christoph Niethammer wrote: > Hello, > > W

Re: [OMPI devel] [OMPI commits] Git: open-mpi/ompi branch master updated. dev-40-g93eba3a

2014-10-08 Thread Howard Pritchard
ecause the projects I was working on had very few concurrent commits going in. Thanks for pointing this out though, Howard 2014-10-08 7:29 GMT-06:00 Dave Goodell (dgoodell) : > On Oct 3, 2014, at 5:10 PM, git...@crest.iu.edu wrote

Re: [OMPI devel] Pull requests to release branch

2014-10-09 Thread Howard Pritchard
Hi Ralph, Just so it's clear to everyone, what is the definition of "mark" in this context? Howard 2014-10-09 16:28 GMT-06:00 Ralph Castain : > Hi folks > > I would appreciate it if people marked their pull requests for the 1.8 > series with the commit hash from t

[OMPI devel] fixing a bug in 1.8 that's not in master

2014-10-27 Thread Howard Pritchard
'm doing the right thing here. Howard

[OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
e. Maybe these were accidentally copied from the configure.m4 for the cray pmi? Howard

Re: [OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
Hi Ralph, 2014-10-28 12:26 GMT-06:00 Ralph Castain : > > > On Oct 28, 2014, at 11:16 AM, Howard Pritchard > wrote: > > > > Hi Folks, > > > > I'm trying to figure out what broke for pmi configure since now the > pmix/cray component > > does

Re: [OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
. The pc files for the various Cray software packages are supposed to include all dependencies on header files, libs, etc. from other Cray packages. Howard 2014-10-28 13:20 GMT-06:00 Ralph Castain : > > On Oct 28, 2014, at 12:17 PM, Paul Hargrove wrote: > > Ralph, > > The Cr

Re: [OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
Hi Ralph, I think I found the problem. Thanks. Howard 2014-10-28 12:58 GMT-06:00 Ralph Castain : > > On Oct 28, 2014, at 11:53 AM, Howard Pritchard > wrote: > > Hi Ralph, > > > 2014-10-28 12:26 GMT-06:00 Ralph Castain : > >> >> > On Oct 28, 201

Re: [OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
Hi Ralph, Oh, on the Cray you don't need to specify --with-pmi, except to say you either want a particular directory (for instance if you wanted to try your luck with s2 on a Cray running nativized SLURM), or you want to say --with-pmi=no. Howard 2014-10-28 14:14 GMT-06:00 Ralph Castain :

Re: [OMPI devel] configure.m4 for pmix/s1 and pmix/s2 question

2014-10-28 Thread Howard Pritchard
Hi Paul, Yes, that is the minor problem I was referring to. It does in fact reflect the age of CLE 4. Cray PMI 5 and higher is newer software which probably should never have been installed on CLE 4, since the ALPS packaging changed completely between CLE 4 and 5. Howard 2014-10-28

Re: [OMPI devel] ROMIO+Lustre problems in OpenMPI 1.8.3

2014-10-29 Thread Howard Pritchard
Hi Paul, Thanks for the forward. I've opened issue #255 <https://github.com/open-mpi/ompi/issues/255> to track the ROMIO config regression. Just to make sure: older releases of the 1.8 branch still configure and build properly with your current Lustre setup? Thanks, Howard 2

Re: [OMPI devel] enable-smp-locks affects PSM performance

2014-10-30 Thread Howard Pritchard
. Thanks, Howard 2014-10-30 8:10 GMT-06:00 Friedley, Andrew : > Hi, > > I'm reporting a performance (message rate 16%, latency 3%) regression when > using PSM that occurred between OMPI v1.6.5 and v1.8.1. I would guess it > affects other networks too, but I haven't t

[OMPI devel] OpenMPI Developers Face to Face Q1 2015 poll

2014-11-04 Thread Howard Pritchard
ch week to meet. This poll is just to decide on the time, not the location. https://doodle.com/48mew6i9uqm2nyf2 Thanks, Howard

[OMPI devel] Open MPI Developers Face to Face Q1 2015 (updated doodle poll link)

2014-11-04 Thread Howard Pritchard
/wiki/Meeting-2015-02 Sorry for the confusion. Howard

[OMPI devel] Open MPI Developers F2F Q1 2015 (poll closes on Friday, 7th of November)

2014-11-05 Thread Howard Pritchard
of November. Thanks, Howard

Re: [OMPI devel] Open MPI Developers F2F Q1 2015 (poll closes on Friday, 7th of November)

2014-11-05 Thread Howard Pritchard
Hi Folks, I think Dallas (either Love or DFW) is cheaper to fly into than Atlanta. Howard 2014-11-05 11:46 GMT-07:00 Jeff Squyres (jsquyres) : > Isn't Dallas 1 flight away from Knoxville? Dallas is a bit more central > (i.e., shorter flights for those coming from the west) >

Re: [OMPI devel] Prepping for 1.8.4 release

2014-11-06 Thread Howard Pritchard
Hello Ralph, +- Add new PML to improve MXM performance > Do you mean yalla? I thought that was only going in to master.
