Hi Ralph
I don't know if it's relevant, but I'm working on an ofi BTL so we can use the
OSC rdma.
Howard
Sent from my iPhone
> On Mar 15, 2016, at 17:21, Ralph Castain wrote:
>
> Hi folks
>
> We are working on integrating the RML with libfabric so we have acc
I think that's a better approach. It's not clear you'd want to use the same
EP type as the BTL. I'm going with the RDM type for now for the BTL.
Howard
Sent from my iPhone
> On Mar 16, 2016, at 09:35, Ralph Castain wrote:
>
> Interesting! Yeah, we debated about BTL or go direct to O
of these components need to be maintained
going forward, please speak up.
ORTE and OMPI components will be discussed at next week's
devel meeting.
Thanks,
Howard
Hi Jeff,
I'd be okay with this as long as there is a config option to revert
to using malloc rather than calloc.
Howard
On 10/3/14 2:54 PM, Jeff Squyres (jsquyres) wrote:
WHAT: change the malloc() to calloc() in opal_obj_new() (perhaps only in debug
builds?)
WHY: Drasti
Hi Gilles
Could you check whether you also see this problem with v2.x?
Thanks,
Howard
Sent from my iPhone
> On Nov 10, 2015, at 19:57, Gilles Gouaillardet wrote:
>
> Nathan,
>
> a simple MPI_Win_create test hangs on my non-uniform cluster
> (ibm/onesided/c_create)
fabric had a
> decent shared memory provider...
>
>
> On Mar 17, 2016, at 7:10 AM, Howard wrote:
>
> I think that's a better approach. Not clear you'd want to use same EP type
> as BTL. I'm going for RDM type for now for BTL.
>
> Howard
>
> Sent from my
ve ofi mtl, but perhaps there's
another way to get the psm2 MTL to work for single-node jobs? I'd prefer
not to ask users to disable the psm2 MTL explicitly for their single-node jobs.
Thanks for suggestions.
Howard
the PSM2MTL to handle this feature of PSM2.
Howard
On Thursday, April 14, 2016, Cabral, Matias A wrote:
> Hi Howard,
>
>
>
> I suspect this is the known issue with using SLURM with OMPI and PSM
> that is discussed here:
>
> https://www.open-mpi.org/community/lists
I didn't copy dev on this.
-- Forwarded message --
From: *Howard Pritchard*
Date: Thursday, April 14, 2016
Subject: psm2 and psm2_ep_open problems
To: Open MPI Developers
Hi Matias
Actually, I triaged this further. Open MPI's PMI subsystem is actually doing
t
please point me to the patch.
--
sent from my smartphone so no good typing.
Howard
On Apr 15, 2016 1:04 PM, "Ralph Castain" wrote:
> I have a patch that I think will resolve this problem - would you please
> take a look?
>
> Ralph
>
>
>
> On Apr 1
6.x86_64
infinipath-*psm*-3.3-0.g6f42cdb1bb8.2.el7.x86_64
Should we get newer RPMs installed?
Is there a way to disable the AMSHM path? I'm wondering if that
would help, since multi-node jobs seem to run fine.
Thanks for any help,
Howard
it's not a SLURM-specific problem.
2016-04-19 12:25 GMT-06:00 Cabral, Matias A :
> Hi Howard,
>
>
>
> Couple more questions to understand a little better the context:
>
> - What type of job is running?
>
> - Is this also under srun?
>
>
>
> Fo
ixes to get the PSM2 MTL working on our
Omni-Path clusters.
I don't think this problem has anything to do with SLURM except for the
jobid
manipulation to generate the unique key.
Howard
2016-04-19 17:18 GMT-06:00 Cabral, Matias A :
> Howard,
>
>
>
> PSM2_DEVICES, I went ba
I also think this symbol checker should not be in the tarball.
Howard
2016-04-20 13:08 GMT-06:00 Jeff Squyres (jsquyres) :
> On Apr 20, 2016, at 2:08 PM, dpchoudh . wrote:
> >
> > Just to clarify, I was doing a build (after adding code to support a new
> transport) from co
On Wednesday, April 20, 2016, Paul Hargrove wrote:
> Not sure if Howard wants the check to be OFF by default in tarballs, or
> absent completely.
>
>
I meant the former.
> I test almost exclusively from RC tarballs, and have access to many
> uncommon platforms.
> So, if y
Hi Matias,
I updated issue 1559 with the info requested.
It might be simpler to just switch over to using the issue
for tracking this conversation.
I don't want to be posting emails with big attachments on this
list.
Thanks,
Howard
2016-04-20 19:21 GMT-06:00 Cabral, Matias A :
> H
Hi Jeff,
checkpoint/restart is not supported in this release.
Does this release work with TotalView? I recall we had some problems,
and I do not remember if they were resolved.
We may also want to clarify if any PML/MTLs are experimental in this
release.
MPI_THREAD_MULTIPLE support.
Howard
Hi Jeff,
Let's just update the MPI_THREAD_MULTIPLE comment to say that
--enable-mpi-thread-multiple is still required at configure time.
Howard
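For example (the install prefix here is just illustrative):
./configure --enable-mpi-thread-multiple --prefix=/opt/openmpi-2.0.0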
2016-04-29 22:20 GMT-06:00 Orion Poplawski :
> On 04/28/2016 05:01 PM, Jeff Squyres (jsquyres) wrote:
>
>> At long last, here's t
migration guide.
The wiki page format of the guide is at
https://github.com/open-mpi/ompi/wiki/User-Migration-Guide%3A-1.8.x-and-v1.10.x-to-v2.0.0
We'll discuss this at the devel telecon tomorrow (5/17).
Thanks,
Howard
orrow
when lanl-bot should be back in business wrt cori and edison.
Howard
he underlying branch which the PR targets.
Howard
2016-06-07 13:33 GMT-06:00 Ralph Castain :
> Hi folks
>
> I’m trying to get a handle on our use of Jenkins testing for PRs prior to
> committing them. When we first discussed this, it was my impression that
> our objective wa
priority
callbacks are made during Open MPI's main progress loop.
- Disable backtrace support by default in the PSM/PSM2 libraries to
prevent unintentional conflicting behavior.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
Hi Lisandro,
Thanks for giving the rc3 a try. Could you post the output of ompi_info
from your
install to the list?
Thanks,
Howard
2016-06-16 7:55 GMT-06:00 Lisandro Dalcin :
> ./configure --prefix=/home/devel/mpi/openmpi/2.0.0rc3 --enable-debug
> --enable-mem-debug
>
Paul,
Could you narrow down the versions of PGCC where you get the ICE when
using the -m32 option?
Thanks,
Howard
2016-07-06 15:29 GMT-06:00 Paul Hargrove :
> The following are previously reported issues that I am *not* expecting to
> be resolved in 2.0.0.
> However, I am lis
issue to a 2.0.1 bug fix release.
Howard
2016-07-12 13:51 GMT-06:00 Eric Chamberland <
eric.chamberl...@giref.ulaval.ca>:
> Hi Edgard,
>
> I just saw that your patch got into ompi/master... any chance it goes
> into ompi-release/v2.x before rc5?
>
> thanks,
>
>
e socket
performance obtained with iperf for large messages (~16 Gb/sec).
We tried adjusting the tcp_btl_rendezvous threshold but that doesn't
appear to actually be adjustable from the mpirun command line.
Thanks for any suggestions,
Howard
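(For reference, the standard way to adjust a BTL tunable would be an MCA
flag on the mpirun command line, e.g.
mpirun --mca btl tcp,self --mca btl_tcp_rndv_eager_limit 65536 -np 2 ./a.out
assuming btl_tcp_rndv_eager_limit is the parameter governing the rendezvous
threshold; the report above is that this did not appear to take effect.)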
what has happened.
Howard
Hi Gilles
I didn't see a merge commit for all these commits,
hence my concern that it was a mistake.
In general, it's better to pull in commits via the PR process.
Howard
On Thursday, December 1, 2016, Gilles Gouaillardet wrote:
> fwiw, the major change is in https://github.com/
Ralph,
I don't know how it happened but if you do
git log --oneline --topo-order
you don't see a "Merge pull request #2488" commit in the history for master.
Howard
2016-12-01 16:59 GMT-07:00 r...@open-mpi.org :
> Ummm...guys, it was done via PR. I saw it go by, and it was all done t
Hi Paul,
Thanks for checking the rc out. And for noting the grammar
mistake.
Howard
2016-12-16 1:00 GMT-07:00 Paul Hargrove :
> My testing is complete.
>
> The only problems not already known are related to PGI's recent "Community
> Edition" compilers and
Hi Paul,
Would you mind resending the "runtime error w/ PGI usempif08 on OpenPOWER"
email without the config.log attached?
Thanks,
Howard
2016-12-16 12:17 GMT-07:00 Howard Pritchard :
> Hi Paul,
>
> Thanks for checking the rc out. And for noting the grammar
>
Hi Orion,
Thanks for trying out the rc. Which compiler/version of compiler are you
using?
Howard
2016-12-20 10:50 GMT-07:00 Orion Poplawski :
> On 12/14/2016 07:58 PM, Jeff Squyres (jsquyres) wrote:
> > Please test!
> >
> > https://www.open-mpi.org/software/ompi/v2
Hi Orion,
Opened issue 2610 <https://github.com/open-mpi/ompi/issues/2610>.
Thanks,
Howard
2016-12-20 11:27 GMT-07:00 Howard Pritchard :
> Hi Orion,
>
> Thanks for trying out the rc. Which compiler/version of compiler are you
> using?
>
> Howard
>
>
>
.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
Hi Paul,
I opened
https://github.com/open-mpi/ompi/issues/2665
to track this.
Thanks for reporting this.
Howard
2017-01-04 14:43 GMT-07:00 Paul Hargrove :
> With the 2.0.2rc2 tarball on FreeBSD-11 (i386 or amd64) I am configuring
> with:
> --prefix=... CC=clang CXX=clang++
,128,64,32,32,32:S,2048,1024,128,32:S,12288,1024,128,32:S,65536,1024,128,32
(all the rest of the command-line args)
and see if it then works?
Howard
2017-01-04 16:37 GMT-07:00 Dave Turner :
> --
> No OpenFabrics conn
Hi Paul,
I opened issue 2666 <https://github.com/open-mpi/ompi/issues/2666> to track
this.
Howard
2017-01-05 0:23 GMT-07:00 Paul Hargrove :
> On Macs running Yosemite (OS X 10.10 w/ Xcode 7.1) and El Capitan (OS X
> 10.11 w/ Xcode 8.1) I have configured with
> CC=cc CXX
Hi Paul,
Thanks for checking this.
This problem was previously reported and there's an issue:
https://github.com/open-mpi/ompi/issues/2610
tracking it.
Howard
2017-01-05 21:19 GMT-07:00 Paul Hargrove :
> I have a standard Linux/ppc64 system with gcc-4.8.3
> I have configured t
Hi Paul,
Sorry for the confusion. This is a different problem.
I'll open an issue for this one too.
Howard
2017-01-06 9:18 GMT-07:00 Howard Pritchard :
> Hi Paul,
>
> Thanks for checking this.
>
> This problem was previously reported and there's an issue:
>
>
Hi Paul,
https://github.com/open-mpi/ompi/issues/2677
It seems we have a bunch of problems with PPC64 atomics and I'd like to
see if we can get at least some of these issues resolved for 2.0.2, so
I've set this as a blocker along with 2610.
Howard
2017-01-06 9:48 GMT-07:00 Howard
Siegmar,
Could you confirm that your test case passes if you use one of the mpirun
arg lists that works for Gilles? Something simple like
mpirun -np 1 ./spawn_master
?
Howard
2017-01-11 18:27 GMT-07:00 Gilles Gouaillardet :
> Ralph,
>
>
> so it seems the root cause
Thanks, Siegmar. I just wanted to confirm you weren't having some other
issue besides the host and slot-list problems.
Howard
2017-01-12 23:50 GMT-07:00 Siegmar Gross <
siegmar.gr...@informatik.hs-fulda.de>:
> Hi Howard and Gilles,
>
> thank you very much for your help. Al
Hi Paul,
This might be a result of building the tarball on a new system.
Would you mind trying the rc3 tarball to see if that builds on the
system?
Howard
2017-01-27 15:12 GMT-07:00 Paul Hargrove :
> I had no problem with 2.0.2rc3 on NetBSD, but with 2.0.2rc4 I am seeing a
> "m
Hello Amit
Which version of Open MPI are you using?
Howard
--
sent from my smartphone so no good typing.
Howard
On Feb 20, 2017 12:09 PM, "Kumar, Amit" wrote:
> Dear OpenMPI,
>
>
>
> Wondering what preset parameters this warning is indicating?
>
Hi Paul
There is an entry (#8) under the OS X FAQ that describes this problem.
Adding the max allowable length is a good idea.
Howard
On Tue, Mar 7, 2017 at 08:04, Paul Hargrove wrote:
> The following is fairly annoying (though I understand the problem is real):
>
> $ [full-path-to]/mpirun -m
Hello Emmanuel,
Which version of Open MPI are you using?
Howard
2017-03-28 3:38 GMT-06:00 BRELLE, EMMANUEL :
> Hi,
>
> We are working on a portals4 component and we have found a bug (causing
> a segmentation fault) which must be related to the coll/basic component.
> D
Actually it looks like we're running out of disk space at AWS.
2017-03-30 9:28 GMT-06:00 r...@open-mpi.org :
> You didn’t do anything wrong - the Jenkins test server at LANL is having a
> problem.
>
> On Mar 30, 2017, at 8:22 AM, DERBEY, NADIA wrote:
>
> Hi,
>
> I just created a pull request an
Well, not sure what's going on. There was an upgrade of Jenkins and a bunch
of functionality seems to have gotten lost.
2017-03-30 9:37 GMT-06:00 Howard Pritchard :
> Actually it looks like we're running out of disk space at AWS.
>
>
> 2017-03-30 9:28 GMT-06:00 r...@open-mpi
Hi Folks,
I added an OS X-specific bot retest command for the Jenkins CI:
bot:osx:retest
Also added a blurb to the related wiki page:
https://github.com/open-mpi/ompi/wiki/PRJenkins
Hope this helps folks who encounter OS X-specific problems with
their PRs.
Howard
Hi Folks,
Reminder that we are planning to do a v2.1.1 bug-fix release next Tuesday
(4/25/17)
as discussed in yesterday's con-call.
If you have bug fixes you'd like to get into v2.1.1, please open PRs this
week
so there will be time for review and testing in MTT.
Thanks,
Howar
location. Thanks to Kevin
Buckley for reporting and supplying a fix.
- Fix a problem with conflicting PMI symbols when linking statically.
Thanks to Kilian Cavalotti for reporting.
Please try it out if you have time.
Thanks,
Howard and Jeff
mangling of custom CFLAGS when configuring Open MPI.
Thanks to Phil Tooley for reporting.
- Fix some minor memory leaks and remove some unused variables.
Thanks to Joshua Gerrard for reporting.
- Fix MPI_ALLGATHERV bug with MPI_IN_PLACE.
Thanks,
Howard
I vote for removal too.
Howard
On Thu, Jun 1, 2017 at 08:10, r...@open-mpi.org wrote:
> I’d vote to remove it - it’s too unreliable anyway
>
> > On Jun 1, 2017, at 6:30 AM, Jeff Squyres (jsquyres)
> wrote:
> >
> > Is it time to remove Travis?
> >
> > I b
Hi Ralph
I think a helpful error message would suffice.
Howard
On Tue, Jun 13, 2017 at 11:15, r...@open-mpi.org wrote:
> Hey folks
>
> Brian brought this up today on the call, so I spent a little time
> investigating. After installing SLURM 17.02 (with just --prefix as confi
Hi Ralph
I think the alternative you mention below should suffice.
Howard
On Mon, Jun 19, 2017 at 07:24, r...@open-mpi.org wrote:
> So what you guys want is for me to detect that no opal/pmix framework
> components could run, detect that we are in a slurm job, and so print out
>
Hi Chris,
Thanks very much for the patch!
Howard
2017-06-21 9:43 GMT-06:00 Christoph Niethammer :
> Hello Ralph,
>
> Thanks for the update on this issue.
>
> I used the latest master (c38866eb3929339147259a3a46c6fc815720afdb).
>
> The behaviour is still the
Hi Chris,
Sorry for being a bit picky, but could you add a sign-off to the commit
message?
I'm not supposed to add it manually for you.
Thanks,
Howard
2017-06-21 9:45 GMT-06:00 Howard Pritchard :
> Hi Chris,
>
> Thanks very much for the patch!
>
> Howard
>
>
>
Hi Chris
Please go ahead and open a PR for master and I'll open corresponding ones
for the release branches.
Howard
On Thu, Jun 22, 2017 at 01:10, Christoph Niethammer wrote:
> Hi Howard,
>
> Sorry, missed the new license policy. I added a Sign-off now.
> Shall I op
roblem myself.
F08 rocks!
Howard
I'll do some more investigating, but probably not till next week.
Howard
2017-06-28 11:50 GMT-06:00 Barrett, Brian via devel <
devel@lists.open-mpi.org>:
> The first release candidate of Open MPI 3.0.0 is now available (
> https://www.open-mpi.org/software/ompi/v3.0/). We ex
Brian,
Things look much better with this patch. We need it for the 3.0.0 release.
The patch from 3794 applied cleanly from master.
Howard
2017-06-29 16:51 GMT-06:00 r...@open-mpi.org :
> I tracked down a possible source of the oob/tcp error - this should
> address it, I think: https://gith
n -df
but that did not help.
Is anyone else seeing this?
Just curious,
Howard
Hi Folks,
Open MPI v2.1.2rc1 tarballs are available for testing at the usual
place:
https://www.open-mpi.org/software/ompi/v2.1/
There is an outstanding issue which will be fixed before the final release:
https://github.com/open-mpi/ompi/issues/4069
but we wanted to get an rc1 out to see
reporting.
Thanks,
Howard and Jeff
error. Thanks to Neil Carlson for reporting.
Also, removed support for big-endian PPC and XL compilers older than 13.1.
Thanks,
Jeff and Howard
y, 142)
Hello, world, I am 3 of 4, (Open MPI v2.1.1rc1, package: Open MPI
dshrader@tt-fey1 Distribution, ident: 2.1.1rc1, repo rev:
v2.1.1-4-g5ded3a2d, Unreleased developer copy, 142)
Anyone know what might be causing hwloc to report this invalid
knl_memoryside_cache
node via the slurmd daemon rather than mpirun.
- Fix a problem with one of Open MPI's opal_path_nfs make check tests.
Thanks,
Howard and Jeff
Is anyone seeing issues with MTT today?
When I go to the website and click on summary I get this back in my browser
window:
MTTDatabase abort: Could not connect to the ompidb database; submit this
run later.
Howard
Hi Folks,
Open MPI 2.0.4rc1 is available for download and testing at
https://www.open-mpi.org/software/ompi/v2.0/
Fixes in this release include:
2.0.4 -- October, 2017
--
Bug fixes/minor improvements:
- Add configure check to prevent trying to build this release of
Open
Hi Folks,
We decided to roll an rc2 to pick up a PMIx fix:
- Fix an issue with visibility of functions defined in the built-in PMIx.
Thanks to Siegmar Gross for reporting this issue.
Tarballs can be found at the usual place
https://www.open-mpi.org/software/ompi/v2.0/
Thanks,
Your Open MPI
Hi Folks,
We fixed one more thing for the 2.0.4 release, so there's another rc, now
rc3.
The fixed item was a problem with neighbor collectives. Thanks to Lisandro
Dalcin for reporting.
Tarballs are at the usual place,
https://www.open-mpi.org/software/ompi/v2.0/
Thanks,
Open MPI release team
h a configury argument change be traumatic for the hwloc community?
I think it would be weird to have both an --enable-cuda and a --with-cuda
configury argument for hwloc.
Third option: wait for the next major release of UCX with built-in CUDA
support.
Okay.
I'll wait till we've had the discussion about removing embedded versions.
I appreciate the use of pkg-config, but it doesn't look like the CUDA Toolkit 8.0
installed on our systems includes *.pc files.
Howard
2017-12-20 14:55 GMT-07:00 r...@open-mpi.org :
> FWIW: what we do
Hello Folks,
Open MPI 2.1.3rc1 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release stream.
Items fixed in this release include the following:
- Update internal PMIx version to 1.2.5.
- Fix a
Hello Folks,
We discovered a bug in the osc/rdma component that we wanted to fix in this
release, hence an rc2.
Open MPI 2.1.3rc2 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release stream.
I
Hi Folks,
A few MPI I/O bugs (both in OMPI I/O and the ROMIO glue layer) were found
in rc2, so we're doing an rc3.
Open MPI 2.1.3rc3 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release strea
Hi Folks,
Something seems to be borked on the OMPI website. Go to the website and
you'll get an odd parsing error.
Howard
Hello Sindhu,
Please open a GitHub PR with your changes. See
https://github.com/open-mpi/ompi/wiki/SubmittingPullRequests
Howard
On Mon, Oct 15, 2018 at 13:26, Devale, Sindhu <
sindhu.dev...@intel.com>:
> Hi,
>
>
>
> I need to add an entry to the *mca-btl-ope
The first release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Major changes include:
- Update embedded PMIx to 3.1.2.
- Fix an issue when using the --enable-visibility configure option
and older versions of hwloc. Thanks to Ben Menadue fo
A second release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Fixes since 4.0.1rc1 include
- Fix an issue with the Vader (shared-memory) transport on OS X. Thanks
to Daniel Vollmer for reporting.
- Fix a problem with the usNIC BTL Makefile. Th
A third release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Fixes since 4.0.1rc2 include
- Add acquire semantics to an Open MPI internal lock acquire function.
Our goal is to release 4.0.1 by the end of March, so any testing is
appreciated.
Hello Christoph,
The rdmacm messages, while annoying, are not causing the problem.
If you specify the TCP BTL, does the BW drop disappear?
Also, could you post your configure options to the mailing list?
Thanks
Howard
On Friday, August 5, 2016, Christoph Niethammer wrote:
> Hello,
>
> W
ecause the projects I was working on had very
few concurrent commits going in.
Thanks for pointing this out, though.
Howard
2014-10-08 7:29 GMT-06:00 Dave Goodell (dgoodell) :
> On Oct 3, 2014, at 5:10 PM, git...@crest.iu.edu wrote
Hi Ralph,
Just so it's clear to everyone, what is the definition of "mark" in this
context?
Howard
2014-10-09 16:28 GMT-06:00 Ralph Castain :
> Hi folks
>
> I would appreciate it if people marked their pull requests for the 1.8
> series with the commit hash from t
I'm doing the right thing here.
Howard
e. Maybe these were
accidentally copied from the configure.m4 for the Cray PMI?
Howard
Hi Ralph,
2014-10-28 12:26 GMT-06:00 Ralph Castain :
>
> > On Oct 28, 2014, at 11:16 AM, Howard Pritchard
> wrote:
> >
> > Hi Folks,
> >
> > I'm trying to figure out what broke for pmi configure since now the
> pmix/cray component
> > does
.
The .pc files for the various Cray software packages are supposed to include
all dependencies on header files, libs, etc. from other Cray packages.
Howard
2014-10-28 13:20 GMT-06:00 Ralph Castain :
>
> On Oct 28, 2014, at 12:17 PM, Paul Hargrove wrote:
>
> Ralph,
>
> The Cr
Hi Ralph,
I think I found the problem. Thanks.
Howard
2014-10-28 12:58 GMT-06:00 Ralph Castain :
>
> On Oct 28, 2014, at 11:53 AM, Howard Pritchard
> wrote:
>
> Hi Ralph,
>
>
> 2014-10-28 12:26 GMT-06:00 Ralph Castain :
>
>>
>> > On Oct 28, 201
Hi Ralph,
Oh, on the Cray you don't need to specify --with-pmi, except to say you
either want a particular directory (for instance, if you wanted to try your
luck with s2 on a Cray-nativized SLURM) or you want to say --with-pmi=no.
Howard
2014-10-28 14:14 GMT-06:00 Ralph Castain :
>
Hi Paul,
Yes, that is the minor problem I was referring to. It does in fact
reflect the age of CLE 4. Cray PMI 5 and higher is newer
software that probably should never have been installed on
CLE 4, since the ALPS packaging changed completely between
CLE 4 and 5.
Howard
2014-10-28
Hi Paul,
Thanks for the forward. I've opened issue #255
<https://github.com/open-mpi/ompi/issues/255> to track the ROMIO config
regression.
Just to make sure: do older releases of the 1.8 branch still configure and
build properly with your current Lustre setup?
Thanks,
Howard
2
.
Thanks,
Howard
2014-10-30 8:10 GMT-06:00 Friedley, Andrew :
> Hi,
>
> I'm reporting a performance (message rate 16%, latency 3%) regression when
> using PSM that occurred between OMPI v1.6.5 and v1.8.1. I would guess it
> affects other networks too, but I haven't t
ch week to meet. This poll is just
to decide on the time, not the location.
https://doodle.com/48mew6i9uqm2nyf2
Thanks,
Howard
/wiki/Meeting-2015-02
Sorry for the confusion.
Howard
of November.
Thanks,
Howard
Hi Folks,
I think Dallas (either Love or DFW) is cheaper to fly into than Atlanta.
Howard
2014-11-05 11:46 GMT-07:00 Jeff Squyres (jsquyres) :
> Isn't Dallas 1 flight away from Knoxville? Dallas is a bit more central
> (i.e., shorter flights for those coming from the west)
>
Hello Ralph,
+- Add new PML to improve MXM performance
>
Do you mean yalla? I thought that was only going into master.
>