example NCCL/RCCL. It is also easier to optimize the communication
> protocols for BXI hardware. We can also benefit from shared memory under
> some conditions, which seems to be harder using the portals4 mtl.
>
Ah that makes sense, esp. the shared memory part.
> Regards,
consider. Actually I thought BXI used to talk portals4. Is
there a reason you can't use the existing portals4 mtl and osc components?
George's points on testing, ideally CI, are quite important.
Howard
On Wed., Jan. 22, 2025 at 08:35, 'George Bosilca' via Open MP
LANL would be interested in supporting this feature as well.
Howard
On Mon, Aug 28, 2023 at 9:58 AM Jeff Squyres (jsquyres) via devel <
devel@lists.open-mpi.org> wrote:
> We got a presentation from the ABI WG (proxied via Quincey from AWS) a few
> months ago.
>
> The proposal
Hi All,
Open MPI v4.0.6rc4 (we messed up and had to skip rc3) is now available at
https://www.open-mpi.org/software/ompi/v4.0/
Changes since the 4.0.5 release include:
- Update embedded PMIx to 3.2.3. This update addresses several
MPI_COMM_SPAWN problems.
- Fix an issue with MPI_FILE_GET_B
Open MPI v4.0.4rc3 has been posted to
https://www.open-mpi.org/software/ompi/v4.0/
This rc includes a fix for a problem discovered with the memory patcher
code.
As described in the README:
- Open MPI v4.0.4 fixed an issue with the memory patcher's ability to
intercept shmat and shmdt that coul
Open MPI v4.0.4rc1 has been posted to
https://www.open-mpi.org/software/ompi/v4.0/
4.0.4 -- May, 2020
---
- Fix an ABI compatibility issue with the Fortran 2008 bindings.
Thanks to Alastair McKinstry for reporting.
- Fix an issue with rpath of /usr/lib64 when building OMP
A third release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Fixes since 4.0.1rc2 include
- Add acquire semantics to an Open MPI internal lock acquire function.
Our goal is to release 4.0.1 by the end of March, so any testing is
appreciated.
A second release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Fixes since 4.0.1rc1 include
- Fix an issue with Vader (shared-memory) transport on OS-X. Thanks
to Daniel Vollmer for reporting.
- Fix a problem with the usNIC BTL Makefile. Th
The first release candidate for the Open MPI v4.0.1 release is posted at
https://www.open-mpi.org/software/ompi/v4.0/
Major changes include:
- Update embedded PMIx to 3.1.2.
- Fix an issue when using --enable-visibility configure option
and older versions of hwloc. Thanks to Ben Menadue fo
Hello Sindhu,
Open a github PR with your changes. See
https://github.com/open-mpi/ompi/wiki/SubmittingPullRequests
Howard
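For anyone new to the workflow, the steps look roughly like this (a sketch only;
the fork URL and branch name below are just examples, and the wiki page above is
the authoritative reference):

  git clone https://github.com/<your-github-id>/ompi.git   # your fork of open-mpi/ompi
  cd ompi
  git checkout -b mca-btl-param-update                      # example topic branch name
  # ... edit, build, test ...
  git commit -a -s -m "btl: add entry to default params file"   # -s adds the Signed-off-by line
  git push origin mca-btl-param-update
  # then open the pull request against open-mpi/ompi on GitHub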
On Mon., Oct. 15, 2018 at 13:26, Devale, Sindhu <
sindhu.dev...@intel.com> wrote:
> Hi,
>
>
>
> I need to add an entry to the *mca-btl-ope
Hi Folks,
Something seems to be borked up about the OMPI website. Go to the website and
you'll get some odd parsing error.
Howard
Hi Folks,
A few MPI I/O (both in OMPI I/O and ROMIO glue layer) bugs were found in
the rc2
so we're doing an rc3.
Open MPI 2.1.3rc3 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release strea
Hello Folks,
We discovered a bug in the osc/rdma component that we wanted to fix in this
release, hence an rc2.
Open MPI 2.1.3rc2 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release stream.
I
Hello Folks,
Open MPI 2.1.3rc1 tarballs are available for testing at the usual place:
https://www.open-mpi.org/software/ompi/v2.1/
This is a bug fix release for the Open MPI 2.1.x release stream.
Items fixed in this release include the following:
- Update internal PMIx version to 1.2.5.
- Fix a
Okay.
I'll wait till we've had the discussion about removing embedded versions.
I appreciate the use of pkg-config, but it doesn't look like cudatoolkit 8.0
installed on our systems includes *.pc files.
Howard
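For what it's worth, two quick ways to check whether a CUDA install ships
pkg-config metadata (the path below is just an example of a typical toolkit
location):

  pkg-config --list-all | grep -i cuda
  find /usr/local/cuda-8.0 -name '*.pc' 2>/dev/null   # adjust to wherever cudatoolkit lives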
2017-12-20 14:55 GMT-07:00 r...@open-mpi.org :
> FWIW: what we do
h a configury argument change be traumatic for the hwloc community?
I think it would be weird to have both an --enable-cuda and a --with-cuda
configury argument for hwloc.
Third option: wait for the next major release of UCX with built-in CUDA
support.
Hi Folks,
We fixed one more thing for the 2.0.4 release, so there's another rc, now
rc3.
The fixed item was a problem with neighbor collectives. Thanks to Lisandro
Dalcin for reporting.
Tarballs are at the usual place,
https://www.open-mpi.org/software/ompi/v2.0/
Thanks,
Open MPI release team
Hi Folks,
We decided to roll an rc2 to pick up a PMIx fix:
- Fix an issue with visibility of functions defined in the built-in PMIx.
Thanks to Siegmar Gross for reporting this issue.
Tarballs can be found at the usual place
https://www.open-mpi.org/software/ompi/v2.0/
Thanks,
Your Open MPI
Hi Folks,
Open MPI 2.0.4rc1 is available for download and testing at
https://www.open-mpi.org/software/ompi/v2.0/
Fixes in this release include:
2.0.4 -- October, 2017
--
Bug fixes/minor improvements:
- Add configure check to prevent trying to build this release of
Open
Is anyone seeing issues with MTT today?
When I go to the website and click on summary I get this back in my browser
window:
MTTDatabase abort: Could not connect to the ompidb database; submit this
run later.
Howard
node via the slurmd daemon rather than mpirun.
- Fix a problem with one of Open MPI's opal_path_nfs make check tests.
Thanks,
Howard and Jeff
y, 142)
Hello, world, I am 3 of 4, (Open MPI v2.1.1rc1, package: Open MPI
dshrader@tt-fey1 Distribution, ident: 2.1.1rc1, repo rev:
v2.1.1-4-g5ded3a2d, Unreleased developer copy, 142)
Anyone know what might be causing hwloc to report this invalid
knl_memoryside_cache
error. Thanks to
Neil Carlson for
reporting.
Also, removed support for big endian PPC and XL compilers older than 13.1.
Thanks,
Jeff and Howard
reporting.
Thanks,
Howard and Jeff
Hi Folks,
Open MPI v2.1.2rc1 tarballs are available for testing at the usual
place:
https://www.open-mpi.org/software/ompi/v2.1/
There is an outstanding issue which will be fixed before the final release:
https://github.com/open-mpi/ompi/issues/4069
but we wanted to get an rc1 out to see
n -df
but that did not help.
Is anyone else seeing this?
Just curious,
Howard
Brian,
Things look much better with this patch. We need it for the 3.0.0 release.
The patch from 3794 applied cleanly from master.
Howard
2017-06-29 16:51 GMT-06:00 r...@open-mpi.org :
> I tracked down a possible source of the oob/tcp error - this should
> address it, I think: https://gith
I'll do some more investigating, but probably not till next week.
Howard
2017-06-28 11:50 GMT-06:00 Barrett, Brian via devel <
devel@lists.open-mpi.org>:
> The first release candidate of Open MPI 3.0.0 is now available (
> https://www.open-mpi.org/software/ompi/v3.0/). We ex
problem myself.
F08 rocks!
Howard
Hi Chris
Please go ahead and open a PR for master and I'll open corresponding ones
for the release branches.
Howard
Christoph Niethammer wrote on Thu., June 22, 2017 at
01:10:
> Hi Howard,
>
> Sorry, missed the new license policy. I added a Sign-off now.
> Shall I op
Hi Chris,
Sorry for being a bit picky, but could you add a sign-off to the commit
message?
I'm not supposed to manually add it for you.
Thanks,
Howard
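In case it helps, a sign-off can be added to the most recent commit without
retyping the message; a sketch, assuming the commit is only on your own topic
branch (the branch name is a placeholder):

  git commit --amend -s --no-edit        # appends Signed-off-by: using your git user.name/user.email
  git push -f origin <your-topic-branch> # update the PR with the amended commit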
2017-06-21 9:45 GMT-06:00 Howard Pritchard :
> Hi Chris,
>
> Thanks very much for the patch!
>
> Howard
>
>
Hi Chris,
Thanks very much for the patch!
Howard
2017-06-21 9:43 GMT-06:00 Christoph Niethammer :
> Hello Ralph,
>
> Thanks for the update on this issue.
>
> I used the latest master (c38866eb3929339147259a3a46c6fc815720afdb).
>
> The behaviour is still the
Hi Ralph
I think the alternative you mention below should suffice.
Howard
r...@open-mpi.org wrote on Mon., June 19, 2017 at 07:24:
> So what you guys want is for me to detect that no opal/pmix framework
> components could run, detect that we are in a slurm job, and so print out
>
Hi Ralph
I think a helpful error message would suffice.
Howard
r...@open-mpi.org wrote on Tue., June 13, 2017 at 11:15:
> Hey folks
>
> Brian brought this up today on the call, so I spent a little time
> investigating. After installing SLURM 17.02 (with just --prefix as confi
I vote for removal too.
Howard
r...@open-mpi.org wrote on Thu., June 1, 2017 at 08:10:
> I’d vote to remove it - it’s too unreliable anyway
>
> > On Jun 1, 2017, at 6:30 AM, Jeff Squyres (jsquyres)
> wrote:
> >
> > Is it time to remove Travis?
> >
> > I b
mangling of custom CFLAGS when configuring Open MPI.
Thanks to Phil Tooley for reporting.
- Fix some minor memory leaks and remove some unused variables.
Thanks to Joshua Gerrard for reporting.
- Fix MPI_ALLGATHERV bug with MPI_IN_PLACE.
Thanks,
Howard
location. Thanks to Kevin
Buckley for reporting and supplying a fix.
- Fix a problem with conflicting PMI symbols when linking statically.
Thanks to Kilian Cavalotti for reporting.
Please try it out if you have time.
Thanks,
Howard and Jeff
Hi Folks,
Reminder that we are planning to do a v2.1.1 bug fix release next Tuesday
(4/25/17)
as discussed in yesterday's con-call.
If you have bug fixes you'd like to get in to v2.1.1 please open PRs this
week
so there will be time for review and testing in MTT.
Thanks,
Howar
Hi Folks,
I added an OS-X specific bot retest command for jenkins CI:
bot:osx:retest
Also added a blurb to the related wiki page:
https://github.com/open-mpi/ompi/wiki/PRJenkins
Hope this helps folks who encounter os-x specific problems with
their PRs.
Howard
Well, not sure what's going on. There was an upgrade of Jenkins and a bunch of
functionality seems to have gotten lost.
2017-03-30 9:37 GMT-06:00 Howard Pritchard :
> Actually it looks like we're running out of disk space at AWS.
>
>
> 2017-03-30 9:28 GMT-06:00 r...@open-mpi
Actually it looks like we're running out of disk space at AWS.
2017-03-30 9:28 GMT-06:00 r...@open-mpi.org :
> You didn’t do anything wrong - the Jenkins test server at LANL is having a
> problem.
>
> On Mar 30, 2017, at 8:22 AM, DERBEY, NADIA wrote:
>
> Hi,
>
> I just created a pull request an
Hello Emmanuel,
Which version of Open MPI are you using?
Howard
2017-03-28 3:38 GMT-06:00 BRELLE, EMMANUEL :
> Hi,
>
> We are working on a portals4 component and we have found a bug (causing
> a segmentation fault) which must be related to the coll/basic component.
> D
Hi Paul,
There is an entry 8 under the OS X FAQ which describes this problem.
Adding the max allowable length is a good idea.
Howard
Paul Hargrove wrote on Tue., March 7, 2017 at 08:04:
> The following is fairly annoying (though I understand the problem is real):
>
> $ [full-path-to]/mpirun -m
Hello Amit,
Which version of Open MPI are you using?
Howard
--
sent from my smart phone, so no good typing.
Howard
On Feb 20, 2017 12:09 PM, "Kumar, Amit" wrote:
> Dear OpenMPI,
>
>
>
> Wondering what preset parameters this warning is indicating?
Hi Paul,
This might be a result of building the tarball on a new system.
Would you mind trying the rc3 tarball and see if that builds on the
system?
Howard
2017-01-27 15:12 GMT-07:00 Paul Hargrove :
> I had no problem with 2.0.2rc3 on NetBSD, but with 2.0.2rc4 I am seeing a
> "m
Thanks Siegmar. I just wanted to confirm you weren't having some other
issue besides the host and slot-list problems.
Howard
2017-01-12 23:50 GMT-07:00 Siegmar Gross <
siegmar.gr...@informatik.hs-fulda.de>:
> Hi Howard and Gilles,
>
> thank you very much for your help. Al
Siegmar,
Could you confirm that if you use one of the mpirun arg lists that works
for Gilles that
your test case passes. Something simple like
mpirun -np 1 ./spawn_master
?
Howard
2017-01-11 18:27 GMT-07:00 Gilles Gouaillardet :
> Ralph,
>
>
> so it seems the root cause
Hi Paul,
https://github.com/open-mpi/ompi/issues/2677
It seems we have a bunch of problems with PPC64 atomics and I'd like to
see if we can get at least some of these issues resolved for 2.0.2, so
I've set this as a blocker along with 2610.
Howard
2017-01-06 9:48 GMT-07:00 Howard
Hi Paul,
Sorry for the confusion. This is a different problem.
I'll open an issue for this one too.
Howard
2017-01-06 9:18 GMT-07:00 Howard Pritchard :
> Hi Paul,
>
> Thanks for checking this.
>
> This problem was previously reported and there's an issue:
>
Hi Paul,
Thanks for checking this.
This problem was previously reported and there's an issue:
https://github.com/open-mpi/ompi/issues/2610
tracking it.
Howard
2017-01-05 21:19 GMT-07:00 Paul Hargrove :
> I have a standard Linux/ppc64 system with gcc-4.8.3
> I have configured t
Hi Paul,
I opened issue 2666 <https://github.com/open-mpi/ompi/issues/2666> to track
this.
Howard
2017-01-05 0:23 GMT-07:00 Paul Hargrove :
> On Macs running Yosemite (OS X 10.10 w/ Xcode 7.1) and El Capitan (OS X
> 10.11 w/ Xcode 8.1) I have configured with
> CC=cc CXX
,128,64,32,32,32:S,2048,1024,128,32:S,
12288,1024,128,32:S,65536,1024,128,32 (all the rest of the command line
args)
and see if it then works?
Howard
2017-01-04 16:37 GMT-07:00 Dave Turner :
> --
> No OpenFabrics conn
Hi Paul,
I opened
https://github.com/open-mpi/ompi/issues/2665
to track this.
Thanks for reporting this.
Howard
2017-01-04 14:43 GMT-07:00 Paul Hargrove :
> With the 2.0.2rc2 tarball on FreeBSD-11 (i386 or amd64) I am configuring
> with:
> --prefix=... CC=clang CXX=clang++
.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
Hi Orion,
Opened issue 2610 <https://github.com/open-mpi/ompi/issues/2610>.
Thanks,
Howard
2016-12-20 11:27 GMT-07:00 Howard Pritchard :
> Hi Orion,
>
> Thanks for trying out the rc. Which compiler/version of compiler are you
> using?
>
> Howard
>
>
>
Hi Orion,
Thanks for trying out the rc. Which compiler/version of compiler are you
using?
Howard
2016-12-20 10:50 GMT-07:00 Orion Poplawski :
> On 12/14/2016 07:58 PM, Jeff Squyres (jsquyres) wrote:
> > Please test!
> >
> > https://www.open-mpi.org/software/ompi/v2
Hi Paul,
Would you mind resending the "runtime error w/ PGI usempif08 on OpenPOWER"
email without the config.log attached?
Thanks,
Howard
2016-12-16 12:17 GMT-07:00 Howard Pritchard :
> HI Paul,
>
> Thanks for checking the rc out, and for noting the grammar
Hi Paul,
Thanks for checking the rc out, and for noting the grammar
mistake.
Howard
2016-12-16 1:00 GMT-07:00 Paul Hargrove :
> My testing is complete.
>
> The only problems not already known are related to PGI's recent "Community
> Edition" compilers and
Ralph,
I don't know how it happened but if you do
git log --oneline --topo-order
you don't see a "Merge pull request #2488" commit in the history for master.
Howard
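For reference, a couple of ways to look for it (assuming GitHub's default
"Merge pull request #NNNN" commit message, and with the SHA as a placeholder):

  git log --oneline --merges origin/master | grep 'pull request #2488'
  git branch -r --contains <commit-sha>   # which remote branches already contain a given commit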
2016-12-01 16:59 GMT-07:00 r...@open-mpi.org :
> Ummm...guys, it was done via PR. I saw it go by, and it was all done t
Hi Gilles
I didn't see a merge commit for all these commits,
hence my concern that it was a mistake.
In general it's better to pull in commits via the PR process.
Howard
On Thursday, December 1, 2016, Gilles Gouaillardet wrote:
> fwiw, the major change is in https://github.com/
what has happened.
Howard
Hi Brian,
Could you check what’s going on with the nightly tarball builds?
Nothing new has been built since 11/21 even though a number of PRs
have been merged in since then.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
whether to go with a v2.2.x release next year
or to go from v2.1.x to v3.x in late 2017 or early 2018 at the link below:
https://www.open-mpi.org/sc16/
Thanks very much,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
.
- what version of Open MPI you are using
- if possible, the configure options used to build/install Open MPI
- the client/server test app (if it's concise)
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
From: devel <devel-boun...@lists.open-mpi.org> on
beh
Hi Gianmario,
You may want to check process limits on your system, both
as root and as a user. You may also want to check the IB-related
gotchas here:
https://www.open-mpi.org/faq/?category=openfabrics
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
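One of the usual gotchas from that FAQ is the locked-memory limit; a quick
sketch of how to check it (the config file paths are the typical defaults,
your distro may differ):

  ulimit -l        # max locked memory for the current shell; IB setups generally want "unlimited"
  ulimit -a        # all per-process limits
  grep memlock /etc/security/limits.conf /etc/security/limits.d/*.conf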
From: devel <devel
Hi Gianmario,
Probably something went wrong at the spml layer.
Could you also add --mca spml_base_verbose 10
to the job launch line?
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
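For example, something along these lines (a sketch; the launcher and the test
program below are placeholders for whatever you are already running):

  oshrun -np 2 --mca spml_base_verbose 10 ./your_shmem_test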
From: devel <devel-boun...@lists.open-mpi.org> on
behalf of Gianmario
problem you found.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
From: devel <devel-boun...@lists.open-mpi.org> on
behalf of "Ibanez, Daniel Alejandro"
<daib...@sandia.gov>
Reply-To: Open MPI Developers
<devel@lis
mon’s URI is.
Is there a way to avoid this problem when using direct launch? I would do a
git bisect
but I’ve no time for such activities at the moment.
Thanks for any suggestions,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Labor
Hi Jeff,
I’m not using it.
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 10/19/16, 9:21 AM, "devel on behalf of Jeff Squyres (jsquyres)"
wrote:
>Looking through the OpenGrok requirements, I have to admit that I'm not
>excited about runn
Hi Gilles,
At what point in the job launch do you need to determine whether
or not the job was direct launched?
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 9/15/16, 7:38 AM, "devel on behalf of Gilles Gouaillardet"
wrote:
>Ralph,
>
>th
Ralph,
I know with older versions of git you may have problems since you can’t use
https. I think with newer versions it will prompt not just for the password but
also
2-factor.
That’s one problem I hit anyway when first enabling 2-factor.
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National
,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 9/13/16, 9:38 AM, "devel on behalf of Eric Chamberland"
wrote:
>Other relevant info: I never saw this problem with OpenMPI 1.6.5, 1.8.4
>and 1.10.[3,4], which run the same test suite...
>
>thanks,
Hi Ralph,
If the java bindings are of use, I could see if my student who did a lot
of the recent work in the Open MPI java bindings would be interested.
He doesn't have a lot of extra cycles at the moment though.
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 8/7/16
Hello Christoph
The rdmacm messages, while annoying, are not causing the problem.
If you specify the tcp BTL, does the BW drop disappear?
Also, could you post your configure options to the mailing list?
Thanks
Howard
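i.e. something like the following (a sketch; the host names and osu_bw are
placeholders for whatever nodes and benchmark you were already using):

  mpirun -np 2 --host node1,node2 --mca btl tcp,self ./osu_bw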
On Friday, August 5, 2016, Christoph Niethammer wrote:
> Hello,
>
> W
Hi Sreenidhi
Only partial resolution. By pushing out the eager path to 4 MB we were able to
get around 2 GB/sec per socket connection with the osu_bw test.
The kernel is quite old though - 2.6.x - and this being a summer student project
with a focus on IB vs routable RoCE, we've moved on.
H
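One way to push the TCP eager path out to 4 MB from the command line looks
roughly like this (a sketch; host names and the benchmark are placeholders, and
this may not be exactly the knob that was used above):

  mpirun -np 2 --host n1,n2 --mca btl tcp,self \
         --mca btl_tcp_eager_limit 4194304 \
         --mca btl_tcp_rndv_eager_limit 4194304 ./osu_bw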
Hi Folks,
The LANL/(soon to not be iu) jenkins should now work with
bot:lanl:retest
Also, NERSC Cori system went down this morning for maintenance during CI check
of
PR 1896 on master. I didn't see any others impacted by the Cori maintenance.
Howard
--
Howard Pritchard
HPC-DES
Los A
e socket
performance obtained with iperf for large messages (~16 Gb/sec).
We tried adjusting the tcp_btl_rendezvous threshold but that doesn't
appear to actually be adjustable from the mpirun command line.
Thanks for any suggestions,
Howard
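One way to see which TCP BTL thresholds actually are exposed as MCA parameters
on a given release (a sketch; the parameter names vary somewhat between
versions):

  ompi_info --param btl tcp --level 9 | grep -i -e eager -e rndv -e send_size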
Jeff,
I think this was fixed in PR 1227 on v2.x
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 7/13/16, 1:47 PM, "devel on behalf of Jeff Squyres (jsquyres)"
wrote:
>I literally just noticed that this morning (that singleton was broken on
>mas
issue to a 2.0.1 bug fix release.
Howard
2016-07-12 13:51 GMT-06:00 Eric Chamberland <
eric.chamberl...@giref.ulaval.ca>:
> Hi Edgard,
>
> I just saw that your patch got into ompi/master... any chances it goes
> into ompi-release/v2.x before rc5?
>
> thanks,
>
>
Paul,
Could you narrow down the versions of PGCC where you get the ICE when
using the -m32 option?
Thanks,
Howard
2016-07-06 15:29 GMT-06:00 Paul Hargrove :
> The following are previously reported issues that I am *not* expecting to
> be resolved in 2.0.0.
> However, I am lis
Hi Ralph,
Thanks! Does this impact particular systems or is it a general problem?
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
From: devel <devel-boun...@open-mpi.org> on
behalf of Ralph Castain <r...@open-mpi.org>
Reply-To: Open MPI Develope
Hi Lisandro,
Thanks for giving the rc3 a try. Could you post the output of ompi_info
from your
install to the list?
Thanks,
Howard
2016-06-16 7:55 GMT-06:00 Lisandro Dalcin :
> ./configure --prefix=/home/devel/mpi/openmpi/2.0.0rc3 --enable-debug
> --enable-mem-debug
>
priority
callbacks are made during Open MPI's main progress loop.
- Disable backtrace support by default in the PSM/PSM2 libraries to
prevent unintentional conflicting behavior.
Thanks,
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
he underlying branch which the PR targets.
Howard
2016-06-07 13:33 GMT-06:00 Ralph Castain :
> Hi folks
>
> I’m trying to get a handle on our use of Jenkins testing for PRs prior to
> committing them. When we first discussed this, it was my impression that
> our objective wa
orrow
when lanl-bot should be back in business wrt cori and edison.
Howard
migration guide.
The wiki page format of the guide is at
https://
github.com/open-mpi/ompi/wiki/User-Migration-Guide%3A-1.8.x-and-v1.10.x-to-v2.0.0
We'll discuss this at the devel telecon tomorrow (5/17).
Thanks,
Howard
Hi Jeff,
Let's just update the MPI_THREAD_MULTIPLE comment to say that
--enable-mpi-thread-multiple is still required as part of configure.
Howard
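i.e. something like the following at configure time (a sketch; the prefix and
any other options are just placeholders):

  ./configure --prefix=/opt/openmpi-2.0.0 --enable-mpi-thread-multiple ...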
2016-04-29 22:20 GMT-06:00 Orion Poplawski :
> On 04/28/2016 05:01 PM, Jeff Squyres (jsquyres) wrote:
>
>> At long last, here's t
Hi Jeff,
checkpoint/restart is not supported in this release.
Does this release work with TotalView? I recall we had some problems,
and do not remember if they were resolved.
We may also want to clarify if any PML/MTLs are experimental in this
release.
MPI_THREAD_MULTIPLE support.
Howard
Hi Matias,
I updated issue 1559 with the requested info.
It might be simpler to just switch over to using the issue
for tracking this conversation?
I don't want to be posting emails with big attachments on this
list.
Thanks,
Howard
2016-04-20 19:21 GMT-06:00 Cabral, Matias A :
> H
On Wednesday, April 20, 2016, Paul Hargrove wrote:
> Not sure if Howard wants the check to be OFF by default in tarballs, or
> absent completely.
>
>
I meant the former.
> I test almost exclusively from RC tarballs, and have access to many
> uncommon platforms.
> So, if y
I also think this symbol checker should not be in the tarball.
Howard
2016-04-20 13:08 GMT-06:00 Jeff Squyres (jsquyres) :
> On Apr 20, 2016, at 2:08 PM, dpchoudh . wrote:
> >
> > Just to clarify, I was doing a build (after adding code to support a new
> transport) from co
ixes to get the PSM2 MTL working on our
omnipath clusters.
I don't think this problem has anything to do with SLURM except for the
jobid
manipulation to generate the unique key.
Howard
2016-04-19 17:18 GMT-06:00 Cabral, Matias A :
> Howard,
>
>
>
> PSM2_DEVICES, I went ba
it's not a SLURM-specific
problem.
2016-04-19 12:25 GMT-06:00 Cabral, Matias A :
> Hi Howard,
>
>
>
> Couple more questions to understand a little better the context:
>
> - What type of job running?
>
> - Is this also under srun?
>
>
>
> Fo
6.x86_64
infinipath-*psm*-3.3-0.g6f42cdb1bb8.2.el7.x86_64
Should we get newer rpms installed?
Is there a way to disable the AMSHM path? I'm wondering if that
would help since multi-node jobs seem to run fine.
Thanks for any help,
Howard
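One thing that might steer PSM2 off its shared-memory path is the PSM2_DEVICES
environment variable mentioned elsewhere in these threads; an untested sketch
(the device keywords are from memory and the test program is a placeholder):

  export PSM2_DEVICES="self,hfi"   # omit "shm" so PSM2 avoids its shared-memory device
  mpirun -np 16 ./single_node_test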
Please point me to the patch.
--
sent from my smart phone, so no good typing.
Howard
On Apr 15, 2016 1:04 PM, "Ralph Castain" wrote:
> I have a patch that I think will resolve this problem - would you please
> take a look?
>
> Ralph
>
>
>
> On Apr 1
I didn't copy dev on this.
-- Forwarded message --
From: *Howard Pritchard*
Date: Thursday, April 14, 2016
Subject: psm2 and psm2_ep_open problems
To: Open MPI Developers
Hi Matias
Actually I triaged this further. The Open MPI PMI subsystem is actually doing
t
the PSM2 MTL to handle this feature of PSM2.
Howard
On Thursday, April 14, 2016, Cabral, Matias A wrote:
> Hi Howard,
>
>
>
> I suspect this is the known issue when using SLURM with OMPI and PSM
> that is discussed here:
>
> https://www.open-mpi.org/community/lists
ve ofi mtl, but perhaps there's
another way to get psm2 mtl to work for single node jobs? I'd prefer
to not ask users to disable psm2 mtl explicitly for their single node jobs.
Thanks for suggestions.
Howard