Bug#833425: Aw: Re: Bug#833425: mpi-defaults: switch to openmpi on hppa architecture
Hello Mattia,

On 04.08.2016 12:49, Mattia Rizzolo wrote:
> On Thu, Aug 04, 2016 at 12:02:51PM +0200, Helge Deller wrote:
>> Currently I've stopped all hppa buildds and plan to upgrade them to gcc6
>> before starting them again. And, I've started a test build of boost1.61
>> to check if the mpi-defaults change will help. I expect a result during
>> the next few hours. I'll let you know of the outcome.
>
> I've committed the change to git, as I assume you know your things as a
> hppa porter.

Ok.

> If you don't stop me I'll upload in the next hours.

I see you pushed/uploaded the new version. It built successfully, and even
my test build with boost1.61 worked.

THANKS!!
Helge

--
debian-science-maintainers mailing list
debian-science-maintainers@lists.alioth.debian.org
http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/debian-science-maintainers
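[Editor's note: for context, the committed change is the kind of architecture-qualified dependency that mpi-defaults carries in debian/control. The stanza below is an illustrative sketch of the post-change state, not the actual upload; dpkg-gencontrol reduces the bracketed architecture restrictions at build time, so each architecture ends up depending on exactly one MPI flavour, and hppa no longer appears in the mpich set:]

```
Package: mpi-default-dev
Architecture: any
Depends: ${misc:Depends},
         libopenmpi-dev [!m68k !sh4],
         libmpich-dev [m68k sh4]
```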
Bug#833425: Aw: Re: Bug#833425: mpi-defaults: switch to openmpi on hppa architecture
Hi all,

On 04/08/2016 11:02, Helge Deller wrote:
> Hi Mattia,
>
>> On Thu, Aug 04, 2016 at 09:34:35AM +0200, Helge Deller wrote:
>>> mpi-defaults depends on libmpich-dev for the hppa architecture (like m68k
>>> and sh4). All other architectures use libopenmpi-dev.
>>> Is there a reason for that?
>>
>> reason is that at that time openmpi was not available on those
>> architectures.
>
> Ok. I assumed that.

OpenMPI recently moved to using gcc atomics where available, which means
we no longer need to ship patches for each architecture. So I expect
OpenMPI to work on all archs before stretch.

>> Besides, do we know whether openmpi works correctly on those
>> architectures? Since recently we have mpi-testsuite, but as you can see
>> the situation is not nice:
>> https://buildd.debian.org/status/package.php?p=mpi-testsuite
>
> Oops, at least hppa is not more broken than others :-)

Note, I'm currently working on OpenMPI 2.0.0 and hoping to get it into
stretch. This will involve a transition:
https://release.debian.org/transitions/html/auto-openmpi.html

Thanks for pointing to the mpi-testsuite results. I wasn't aware they are
so bad. I'll investigate. OpenMPI 1.10.3 is "mostly ok" according to:
https://buildd.debian.org/status/package.php?p=openmpi&suite=unstable

I have openmpi2 in experimental:
https://buildd.debian.org/status/package.php?p=openmpi&suite=experimental

I'm working with upstream and hoping to add symbol versioning, as the
regular soname changes are quite problematic.

> Currently I've stopped all hppa buildds and plan to upgrade them to gcc6
> before starting them again. And, I've started a test build of boost1.61
> to check if the mpi-defaults change will help. I expect a result during
> the next few hours. I'll let you know of the outcome.
>
> Helge

Alastair, as OpenMPI maintainer.

--
Alastair McKinstry, https://diaspora.sceal.ie/u/amckinstry
Misentropy: doubting that the Universe is becoming more disordered.
Bug#833425: Aw: Re: Bug#833425: mpi-defaults: switch to openmpi on hppa architecture
On Thu, Aug 04, 2016 at 12:02:51PM +0200, Helge Deller wrote:
> > > The openmpi package builds successfully on hppa, so I'd suggest to switch
> > > to openmpi for hppa (and maybe m68k and sh4?) too.
> >
> > notice that switching default means rebuilding all the rdeps in the
> > correct order (ben is able to provide the correct order). I've been
> > able to do it correctly for s390x (#813691) thanks to the release team
> > tracking the transition, but we don't have tools for ports, so this is
> > really up to you. Otherwise what you get is FTBFS of packages down in
> > the chain, and runtime errors due to different ABIs of the library (I
> > noticed some programs are clever enough to say "libfoo has been linked
> > against mpich but I'm now building against openmpi, I can't do that,
> > please rebuild libfoo first", but most don't and just throw an error
> > (IIRC a linking error)).
>
> I'd be fine with rebuilding all required packages, and I'd appreciate
> info from you or Ben which order is required.

be aware that ben is this: https://tracker.debian.org/pkg/ben
which is what powers https://release.debian.org/transitions/index.html :)

I'm really not able to provide such support, though maybe you can easily
follow it by just seeing what fails to build. I suppose there are not that
many users of MPI software on hppa anyway to notice a small breakage.

> Currently I've stopped all hppa buildds and plan to upgrade them to gcc6
> before starting them again. And, I've started a test build of boost1.61
> to check if the mpi-defaults change will help. I expect a result during
> the next few hours. I'll let you know of the outcome.

I've committed the change to git, as I assume you know your things as a
hppa porter.

If you don't stop me I'll upload in the next hours.

--
regards,
                        Mattia Rizzolo

GPG Key: 66AE 2B4A FCCF 3F52 DA18 4D18 4B04 3FCD B944 4540
.''`.  more about me:  https://mapreri.org
: :' : Launchpad user: https://launchpad.net/~mapreri
`. `'` Debian QA page: https://qa.debian.org/developer.php?login=mattia
`-
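[Editor's note: the "correct order" ben computes is essentially a topological sort of the dependency graph among the packages to rebuild: each MPI-linked library must be rebuilt before anything that links against it, otherwise you hit exactly the mixed-ABI failures described above. A minimal sketch of the idea with made-up package names (this illustrates the ordering problem, it is not ben itself):]

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency edges: each package maps to the set of
# packages that must be rebuilt *before* it (the MPI-linked libraries
# it links against). Names are invented for illustration.
rebuild_before = {
    "libfoo": set(),                  # links MPI directly; rebuild first
    "libbar": {"libfoo"},             # links against libfoo
    "someapp": {"libfoo", "libbar"},  # links against both
}

# static_order() yields each package only after all its prerequisites,
# i.e. a valid rebuild order for the transition.
order = list(TopologicalSorter(rebuild_before).static_order())
print(order)  # ['libfoo', 'libbar', 'someapp']
```

Rebuilding in any other order reproduces the failure mode quoted above: a package picks up the new default MPI while one of its libraries still carries the old ABI.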
Bug#833425: Aw: Re: Bug#833425: mpi-defaults: switch to openmpi on hppa architecture
Hi Mattia,

> On Thu, Aug 04, 2016 at 09:34:35AM +0200, Helge Deller wrote:
> > mpi-defaults depends on libmpich-dev for the hppa architecture (like m68k
> > and sh4). All other architectures use libopenmpi-dev.
> > Is there a reason for that?
>
> reason is that at that time openmpi was not available on those
> architectures.

Ok. I assumed that.

> > The openmpi package builds successfully on hppa, so I'd suggest to switch
> > to openmpi for hppa (and maybe m68k and sh4?) too.
>
> notice that switching default means rebuilding all the rdeps in the
> correct order (ben is able to provide the correct order). I've been
> able to do it correctly for s390x (#813691) thanks to the release team
> tracking the transition, but we don't have tools for ports, so this is
> really up to you. Otherwise what you get is FTBFS of packages down in
> the chain, and runtime errors due to different ABIs of the library (I
> noticed some programs are clever enough to say "libfoo has been linked
> against mpich but I'm now building against openmpi, I can't do that,
> please rebuild libfoo first", but most don't and just throw an error
> (IIRC a linking error)).

I'd be fine with rebuilding all required packages, and I'd appreciate
info from you or ben on which order is required. Furthermore, since the
gcc-6 transition is happening right now, it's a good point to rebuild
packages anyway.

Just from history I know that, as long as we are using a non-standard
(meaning: not like most other arches) library, we face issues which
sometimes only happen due to the non-standard lib. And such issues don't
get fixed in general packages, because the standard packages build just
fine. So the burden of rebuilding packages pays off later.

> Besides, do we know whether openmpi works correctly on those
> architectures? Since recently we have mpi-testsuite, but as you can see
> the situation is not nice:
> https://buildd.debian.org/status/package.php?p=mpi-testsuite

Oops, at least hppa is not more broken than others :-)

> PS: did you CC me on your email?

Yes. Will not do again.

Currently I've stopped all hppa buildds and plan to upgrade them to gcc6
before starting them again. And, I've started a test build of boost1.61
to check if the mpi-defaults change will help. I expect a result during
the next few hours. I'll let you know of the outcome.

Helge