Dear all,

I forgot to confirm that all that was needed for Intel oneAPI (current) plus 
Open MPI (current) was

shell$ lt_cv_ld_force_load=no ./configure …

As far as I can tell, this works without problems and, in particular, allows 
one to build a working ScaLAPACK installation based on MKL + Open MPI.

So if there were a way to embed this into the configure script somehow (i.e., 
detect an Intel Mac + macOS + oneAPI and set the appropriate environment 
variable), my guess is that this would help users of this particular 
hardware / software stack.
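
For illustration, a minimal sketch of what such a guard could look like in
configure-style shell. This is hypothetical, not current Open MPI code; I am
assuming the usual configure variables $host_os and $FC, and the ifx pattern
is a speculative addition for the newer Intel compiler:

# Hypothetical guard, not actual Open MPI configure code.
case "$host_os" in
  darwin*)
    case "`basename "$FC"`" in
      ifort|ifx)
        # Default the libtool cache variable unless the user already set it.
        : "${lt_cv_ld_force_load=no}"
        ;;
    esac
    ;;
esac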

Best wishes
Volker

> On Sep 16, 2022, at 3:53 PM, Volker Blum via users <users@lists.open-mpi.org> 
> wrote:
> 
> Thank you! A patch would be great.
> 
> I seem to recall that the patch in that ticket did not solve the issue for me 
> about a year ago (it was part of another discussion on OMPI Users). I did not 
> try it this time around (time constraints, sorry; the configure step just 
> takes very long).
> 
> At this point, I found that the recommendation from the README:
> 
> shell$ lt_cv_ld_force_load=no ./configure …
> 
> does allow me to build an Open MPI mpif90 that compiles a simple test program, 
> without applying a patch, i.e., with the configure file left unmodified.
> 
> That's as far as I got for now. Setting the environment variable 
> "lt_cv_ld_force_load" correctly might already do the trick, as far as the 
> original error is concerned.
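> 
> (The "simple test program" is nothing special; a sketch along these lines,
> assuming any trivial mpi_f08 program, is enough to exercise the f08
> bindings behind the library named in the original error:)
> 
> shell$ cat > hello.f90 <<'EOF'
> program hello
>   use mpi_f08   ! the module behind libmpi_usempif08.dylib
>   implicit none
>   integer :: rank
>   call MPI_Init()
>   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
>   print *, 'Hello from rank', rank
>   call MPI_Finalize()
> end program hello
> EOF
> shell$ mpif90 hello.f90 -o hello && mpirun -np 2 ./hello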
> 
> Best wishes
> Volker
> 
> 
> From: users <users-boun...@lists.open-mpi.org> on behalf of Gilles 
> Gouaillardet via users <users@lists.open-mpi.org>
> Sent: Friday, September 16, 2022 1:41 AM
> To: Open MPI Users <users@lists.open-mpi.org>
> Cc: Gilles Gouaillardet <gilles.gouaillar...@gmail.com>
> Subject: Re: [OMPI users] ifort and openmpi
>  
> Volker,
> 
> https://ntq1982.github.io/files/20200621.html (mentioned in the ticket) 
> suggests that patching the generated configure file can do the trick.
> 
> We already patch the generated configure file in autogen.pl (in the 
> patch_autotools_output subroutine), so I guess that could be enhanced
> to support Intel Fortran on macOS.
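> 
> As a rough illustration only (an untested sketch; the real
> patch_autotools_output subroutine rewrites configure with Perl regexes),
> the effect of such a patch could be approximated by forcing the libtool
> probe result in the generated configure script:
> 
> shell$ sed -i.bak 's/lt_cv_ld_force_load=yes/lt_cv_ld_force_load=no/' configure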
> 
> I am confident that a Pull Request fixing this issue will be considered for 
> inclusion in future Open MPI releases.
> 
> 
> Cheers,
> 
> Gilles
> 
> On Fri, Sep 16, 2022 at 11:20 AM Volker Blum via users 
> <users@lists.open-mpi.org> wrote:
> Hi all,
> 
> This issue here:
> 
> https://github.com/open-mpi/ompi/issues/7615
> 
> is, unfortunately, still current. 
> 
> I understand that within Open MPI there is a sense that this is Intel's 
> problem, but I'm not sure it is. Is it possible to address this in some form 
> in the configure script shipped with the actual Open MPI distribution?
> 
> There are more issues with Open MPI + Intel + ScaLAPACK, but this is the first 
> one that strikes. Ultimately, the problem renders a MacBook unusable as 
> a computing tool, since the only setup that seems to run uses libraries from 
> Homebrew (this works), but that appears to pull in unoptimized, very slow 
> BLAS libraries. It is the only working MPI setup that I could 
> construct, though.
> 
> I know that one can take the view that Intel Fortran on Mac is just broken 
> for the default configure process, but it seems like a strange standoff to 
> me. It would be much better to see this worked out in some way. 
> 
> Does anyone have a solution for this issue that could be merged into the 
> actual configure script distributed with Open MPI, rather than having to track 
> down a fairly arcane addition(*) and apply it by hand?
> 
> Sorry … I know this isn't the best way of raising the issue, but it is 
> also tiring to spend hours on an already large build process only to find that 
> the issue is still there. If there were some way to sort this out so that it 
> at least does not affect Open MPI, I suspect that would help a lot of users. 
> Would anyone be willing to revisit the 2020 decision?
> 
> Thank you!
> 
> Best wishes
> Volker
> 
> (*) I know about the patch in the README:
> 
> - Users have reported (see
>   https://github.com/open-mpi/ompi/issues/7615) that the Intel Fortran
>   compiler will fail to link Fortran-based MPI applications on macOS
>   with linker errors similar to this:
> 
>       Undefined symbols for architecture x86_64:
>         "_ompi_buffer_detach_f08", referenced from:
>             import-atom in libmpi_usempif08.dylib
>       ld: symbol(s) not found for architecture x86_64
> 
>   It appears that setting the environment variable
>   lt_cv_ld_force_load=no before invoking Open MPI's configure script
>   works around the issue.  For example:
> 
>       shell$ lt_cv_ld_force_load=no ./configure …
> 
> This is nice, but it does not prevent the issue from striking unless one 
> first reads a very long file in detail. Isn't this something that the 
> configure script itself should be able to catch when it detects ifort?
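> 
> For what it's worth, one can at least verify after the fact that the
> override was honored, since configure records its cache variables in
> config.log:
> 
> shell$ grep lt_cv_ld_force_load config.log
> 
> After a run configured with the workaround, this should report
> lt_cv_ld_force_load=no.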
> 
> 
