Hi all,

This issue here:

https://github.com/open-mpi/ompi/issues/7615

is, unfortunately, still current. 

I understand that within Open MPI there is a sense that this is Intel's problem, 
but I’m not sure it is. Could this be addressed in some form in the configure 
script shipped with the actual Open MPI distribution?

There are more issues with Open MPI + Intel + ScaLAPACK, but this is the first 
one that strikes. In effect, the problem renders a MacBook unusable as a 
computing tool, since the only setup that seems to run uses libraries from 
Homebrew (this works), but that appears to introduce unoptimized BLAS libraries, 
which are very slow. Still, it is the only working MPI setup I could construct.

I know that one can take the view that Intel Fortran on Mac is just broken for 
the default configure process, but it seems like a strange standoff to me. It 
would be much better to see this worked out in some way. 

Does anyone have a solution for this issue that could be merged into the actual 
configure script distributed with Open MPI, rather than having to track down a 
fairly arcane addition(*) and apply it by hand?

Sorry … I know this isn’t the best way of raising the issue, but it is also 
tiring to spend hours on an already large build process only to find that the 
issue is still there. If there were some way to sort this out so that it at 
least no longer affects Open MPI, I suspect that would help a lot of users. 
Would anyone be willing to revisit the 2020 decision?

Thank you!

Best wishes
Volker

(*) I know about the patch in the README:

- Users have reported (see
  https://github.com/open-mpi/ompi/issues/7615) that the Intel Fortran
  compiler will fail to link Fortran-based MPI applications on macOS
  with linker errors similar to this:

      Undefined symbols for architecture x86_64:
        "_ompi_buffer_detach_f08", referenced from:
            import-atom in libmpi_usempif08.dylib
      ld: symbol(s) not found for architecture x86_64

  It appears that setting the environment variable
  lt_cv_ld_force_load=no before invoking Open MPI's configure script
  works around the issue.  For example:

      shell$ lt_cv_ld_force_load=no ./configure …

This is helpful, but it does not prevent the issue from striking unless one 
first reads a very long file in detail. Isn’t this something that the configure 
script itself could catch when it detects ifort?
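For illustration only, something along these lines inside configure might be 
enough. This is not actual Open MPI configure code, just a rough sketch of the 
idea; it assumes that $host_os and $FC have already been set by the usual 
Autoconf checks, and that presetting the libtool cache variable 
lt_cv_ld_force_load before the libtool tests run has the same effect as setting 
it in the environment, as the README workaround suggests:

    # Hypothetical sketch, not taken from the Open MPI configure script.
    case "$host_os" in
        darwin*)
            # Intel ifort identifies itself in its --version output.
            if "$FC" --version 2>/dev/null | grep -qi 'ifort'; then
                # Preset libtool's cache variable (only if not already set by
                # the user) so libtool avoids -force_load, which appears to
                # trigger the link failure quoted above.
                : "${lt_cv_ld_force_load:=no}"
            fi
            ;;
    esac

Even just emitting a warning at configure time when ifort is detected on macOS 
would already save users a lot of wasted build time.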

