On 20 April 2017 at 12:58, r...@open-mpi.org wrote:
> Fully expected - if ORTE can’t start one or more daemons, then the MPI job
> itself will never be executed.
>
> There was an SGE integration issue in the 2.0 series - I fixed it, but IIRC
> it didn’t quite make the 2.0.2 release. In fact, I jus
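For anyone hitting the same daemon-launch failure: under a tight SGE integration, ORTE starts its daemons on the remote nodes via qrsh, which only works if the parallel environment allows it. A minimal sketch of the usual settings (the PE name "orte" here is an illustrative assumption, not something taken from this thread):

    # inspect the parallel environment the job requests (PE name assumed)
    qconf -sp orte
    ...
    start_proc_args    NONE
    stop_proc_args     NONE
    control_slaves     TRUE    # lets ORTE launch its daemons via qrsh
    job_is_first_task  FALSE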
On 19 April 2017 at 18:35, Kevin Buckley
wrote:
> If I compile against 2.0.2 the same command works at the command line
> but not in the "SGE" job submission, where I see a complaint about
>
> =
> Hos
I have source code for MrBayes.
If I compile against OpenMPI 1.8.3, then an
mpirun -np 4 mb < somefile.txt
works at both the command line and in an "SGE" job submission where
I'm targeting 4 cores on the same node.
If I compile against 2.0.2 the same command works at the command line
but not in
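For context, the kind of submission script involved looks roughly like this (a sketch only; the parallel environment name is an assumption, and "mb" is the MrBayes binary mentioned above):

    #!/bin/sh
    #$ -S /bin/sh
    #$ -cwd
    #$ -pe orte 4            # PE name assumed; requests 4 slots
    mpirun -np 4 mb < somefile.txt
    # under a tight integration mpirun can also pick the slot count
    # up from the PE, so the -np 4 is arguably redundant here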
On 5 April 2017 at 13:01, Kevin Buckley
wrote:
> I also note that as things stand, the Relocation is used for all
> files except the Environment Module file, resulting from the
> rpmbuild being done as follows
>
> --define 'install_shell_scripts 1' \
> --
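For reference, the overall shape of the rpmbuild invocation being discussed, reconstructed from the fragments quoted above (the defines shown are the ones mentioned on this page; the spec-file name is illustrative):

    rpmbuild -ba \
        --define 'install_shell_scripts 1' \
        --define 'install_in_opt 1' \
        openmpi-<version>.spec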
Just in case anyone is interested in following this, I'll
try and document what I'm doing here.
I have a forked repo and added a branch here:
https://github.com/vuw-ecs-kevin/ompi/tree/make-specfile-scl-capable
and have applied a series of small changes that allow for the building
of an RPM that a
On 31 March 2017 at 23:35, Jeff Squyres (jsquyres) wrote:
and Gilles, who said,
>> you should only use the tarballs from www.open-mpi.org
> The GitHub tarballs are simple tars of the git repo at a given hash (e.g.,
> the v2.0.2 tag in git). ...
Yep, I'm aware of the way that GitHub tarballs ca
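(The practical difference being that a raw git/GitHub tarball ships none of the generated configure machinery, so it needs a bootstrap step that an official release tarball does not. A sketch, assuming a suitably recent autotools toolchain is installed:)

    # from a GitHub tarball or git clone of ompi:
    ./autogen.pl
    ./configure --prefix=/some/prefix    # prefix is illustrative
    make all install

    # from a release tarball off www.open-mpi.org, autogen.pl is not needed:
    ./configure --prefix=/some/prefix
    make all install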
On 29 March 2017 at 13:49, Jeff Squyres (jsquyres) wrote:
> I have no objections to this.
>
> Unfortunately, I don't have the time to work on it, but we'd be glad to look
> at pull requests to introduce this functionality. :-)
Yes, yes, alright.
I am, though, slightly confused, following the mov
Another thing that occurred to me whilst looking around this
was whether the OpenMPI SRPM might benefit from
being given proper "Software Collections" package
capability, as opposed to having the "install in opt"
option.
I don't claim to have enough insight to say either way
here, however the Software C
On 23 March 2017 at 23:41, Jeff Squyres (jsquyres) wrote:
> Yoinks. Looks like this was an oversight. :-(
>
> Yes, I agree that install_in_opt should put the modulefile in /opt as well.
Actually, I have since read the SPEC file from top to bottom and seen a
Changelog entry (from you, Jeff, from
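A quick way to check where the modulefile actually lands in any given build (a sketch; the RPM filename is a placeholder):

    # list the payload of the freshly built binary RPM and pick out the modulefile
    rpm -qlp openmpi-<version>.<arch>.rpm | grep -i -e modulefile -e modules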
Just came to rehash some old attempts to build previous OpenMPIs
for an RPM-based system and noticed that, despite specifying
--define 'install_in_opt 1' \
as part of this full "config" rpmbuild stage
(Note: SPEC-file Release tag is altered so as not to have the RPM clash with
any system MP
Watcha,
we recently updated the OpenMPI installation on our School's ArchLinux
machines, where OpenMPI is built as a PkgSrc package, to 1.10.0
In running through the build, we were told that PkgSrc wasn't too keen on
the use of == within a single "if test" construct and so I needed to apply
the
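(For anyone curious, the portability nit is that "==" inside a test/[ expression is a bashism; POSIX sh only guarantees "=". A minimal illustration, with a made-up variable name:)

    # accepted by bash, but not by every POSIX /bin/sh:
    if test "$host_os" == "netbsd"; then echo match; fi

    # portable spelling:
    if test "$host_os" = "netbsd"; then echo match; fi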
On 9 December 2014 at 03:29, Howard Pritchard wrote:
> Hello Kevin,
>
> Could you try testing with Open MPI 1.8.3? There was a bug in 1.8.1
> that you are likely hitting in your testing.
>
> Thanks,
>
> Howard
Bingo!
Seems to have got rid of those messages.
Thanks.
Apologies for the lack of a subject line: cut and pasted the body
before the subject!
Should have been
Removing "registering part of your physical memory" warning message
Dunno if anyone can fix that in the mailing list?
Watcha,
have recently come to install the PISM package on top of PETSc, which,
in turn, is
built against OpenMPI 1.8.1 on our Science Faculty HPC Facility, which has SGI
C2112 compute nodes with 64GB RAM running on top of CentOS 6.
In testing the PETSc deployment out and when running PISM itself,
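(In case it helps anyone searching later: the usual first thing to check for the "registering part of your physical memory" warning is the locked-memory limit on the compute nodes, although, as the exchange elsewhere on this page shows, in this particular case the messages went away after moving to Open MPI 1.8.3. A sketch:)

    # run on a compute node, under the same environment the MPI job sees
    ulimit -l     # ideally "unlimited", or at least a very large value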
Hello again OpenMPI folk, been a while.
Have just come to build OpenMPI 1.8.1 within a PkgSrc environment for
our ArchLinux machines (yes, we used to be NetBSD, yes).
Latest PkgSrc build was for 1.6.4.
The 1.6.4 PkgSrc build required 4 patches, 3 of which were PkgSrc-specific
and just defined a
failure to
build separated RPMs
with a vanilla spec-file as well.
Obviously no show stopper, as I can build the "all_in_one_rpm" but
thought to feed
the experience back.
Kevin Buckley
ECS, VUW, NZ
> > 5. ompi_mca.m4 has been cleaned up a bit, allowing autogen.pl to be a
> > little dumber than autogen.sh
>
> So you are dumbing down in search of improvements?
>
Apologies. That was only meant to go to Jeff Squyres.
Kevin
> 5. ompi_mca.m4 has been cleaned up a bit, allowing autogen.pl to be a
> little dumber than autogen.sh
So you are dumbing down in search of improvements?
> > OK, I humbly withdraw (a) above but now, equally humbly, suggest
> > that instead of using a list, those things be turned into standard,
> > single-target configure options, viz:
> >
> > --with-=
> >
> > --enable-=
>
> True, this would be better. I believe that Brian didn't initially
> That contribution needs to be
>
> a) brought under the control of --enable-contrib-no-build=
>
> b) possibly renamed (it would seem to be an MPI specific thing)
> so maybe, libmpitrace ?
I'd like to qualify that, in the light of some more digging,
though (b) is still an issue.
It seems tha
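For the record, the existing list-style control being referred to is the configure-time switch that skips building named contrib packages, along these lines (a sketch; whether the directory ends up being called libtrace or libmpitrace is exactly the open question above):

    # skip building the named contrib package(s) at configure time
    ./configure --enable-contrib-no-build=libtrace ...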
Something I have just noticed on the NetBSD platform build
that I think goes further than just that platform.
There is a NetBSD packaging clash between the
libtrace.la
from
ompi/contrib/libtrace/
and that from an already existing package
libtrace-3.0.6
(Homepage: http://research.wand.net.nz
> 4) The other thing that comes to mind are the mountain of WARNINGs
> because of the "redefinition" of
>
> #define CACHE_LINE_SIZE 128
>
> in
>
> opal/include/opal/sys/cache.h
>
> although it's a bit "chicken and egg" because NetBSD's definition,
> in:
>
> /usr/include/sys/param.h
>
> obviously al
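(For anyone wanting to see the clash for themselves, the two colliding definitions can be pulled up directly, using the paths given above; the values may differ by release:)

    grep -n 'CACHE_LINE_SIZE' /usr/include/sys/param.h opal/include/opal/sys/cache.h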
> Can we get a thumbs up / down from each organization about where you
> think we are with v1.5? Cisco and HLRS obviously give a thumbs up.
>
I can't claim to speak for NetBSD but, for info, I have just managed
to COMPILE 1.5rc3 on a NetBSD platform.
Notes:
==
1) The patch NetBSD was applyi
> I added several FAQ items -- how do they look?
>
> http://www.open-mpi.org/faq/?category=troubleshooting#erroneous-file-not-found-message
> http://www.open-mpi.org/faq/?category=troubleshooting#missing-symbols
> http://www.open-mpi.org/faq/?category=building#install-overwrite
>
"This is due to
Cc'd Aleksej as I'm not sure he's on the "devel" list, and Mark
Davies, as he is certainly not.
I'll also post this back onto the R HPC SIG list which is
where I came in.
Jeff Squyres wrote:
> Now, all this being said, IIRC (and I very well may not!), the real
> underlying issue here is that R i
Jeff,
> So the error message is at least *somewhat* better than a totally
> misleading "file not found" message -- but it still only speculates
> on the real reason that libltdl failed to load the DSO.
>
> 2. https://svn.open-mpi.org/trac/ompi/changeset/22806 put in an
> OMPI-specific change to li
> Which libltdl version is that NetBSD ltdl.h from? Which version is
> in opal/libltdl? Have you tried not doing the above change?
>
> libltdl 2.2.x has incompatible changes over 1.5.x, both in the library
> as well as in the header, as well as (I think) in preloaded modules.
Hey Ralf,
The lib
Hi there,
this is an issue that I started a while ago on the R HPC SIG mailing
list and which then moved into an off-list conversation with Jeff
Squyres but on which no progress has been made.
I believe that the issue is less with Rmpi than with something
that Rmpi is exposing in OpenMPI specific
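(If anyone lands on this from a search: one workaround that tends to get suggested for plugin-loading trouble when libmpi is itself dragged in as a shared object by another runtime, as Rmpi does via R, is to build Open MPI without its dlopen-based component loading. That is offered here only as a commonly mentioned knob, not as the resolution of this thread:)

    ./configure --disable-dlopen --prefix=/some/prefix    # prefix illustrative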