Thank you, Nick and Gilles,

I hope the administrators of the cluster will be so kind as to update
OpenMPI for me (and others) soon.

Greetings
Michael

From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet
Sent: Thursday, 19 November 2015 12:59
To: Open MPI Users
Subject: Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with
Intel-Ftn-compiler

Thanks, Nick, for the pointer!

Michael,

The good news is you do not have to upgrade ifort, but you do have to update to 1.10.1
(Intel 16 changed the way gcc pragmas are handled, and ompi was made aware of this
in 1.10.1).
1.10.1 also fixes many bugs from 1.10.0, so I strongly encourage everyone to use 1.10.1.

Cheers,

Gilles

On Thursday, November 19, 2015, Nick Papior <nickpap...@gmail.com> wrote:
Maybe I can chip in,

We use OpenMPI 1.10.1 with Intel 2016.1.0.423501 without problems.

I could not get 1.10.0 to work; one reason is:
http://www.open-mpi.org/community/lists/users/2015/09/27655.php

On a side note, if you require ScaLAPACK you may need to follow this approach:
https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/590302

2015-11-19 11:24 GMT+01:00 <michael.rach...@dlr.de>:
Sorry, Gilles,

I cannot update to more recent versions, because what I used is the newest
combination of OpenMPI and Intel Fortran available on that cluster.

When looking at the list of improvements on the OpenMPI website for OpenMPI
1.10.1 compared to 1.10.0, I do not remember seeing this item among the fixes.

Greetings
Michael Rachner


From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gilles Gouaillardet
Sent: Thursday, 19 November 2015 10:21
To: Open MPI Users
Subject: Re: [OMPI users] Bug in Fortran-module MPI of OpenMPI 1.10.0 with
Intel-Ftn-compiler

Michael,

I remember seeing similar reports.

Could you give the latest v1.10.1 a try?
And if that still does not work, can you upgrade the icc suite and give it
another try?

I cannot remember whether this is an ifort bug or a problem with the way ompi uses Fortran...

Btw, any reason why you do not use mpi_f08?
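
(For reference, a minimal sketch of what the failing broadcast could look like
with the mpi_f08 module; the program and variable names here are illustrative,
not from Michael's code. The mpi_f08 bindings declare buffers as assumed-type,
assumed-rank dummy arguments, so scalar buffers are accepted:)

      program bcast_f08
        use mpi_f08                  ! Fortran 2008 MPI bindings
        implicit none
        integer :: ivar, ierr
        call MPI_Init( ierr )
        ivar = 123
        ! a scalar buffer is fine here: the binding uses type(*), dimension(..)
        call MPI_Bcast( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
        call MPI_Finalize( ierr )
      end program bcast_f08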

HTH

Gilles

michael.rach...@dlr.de wrote:
Dear developers of OpenMPI,

I am trying to run our parallelized Ftn-95 code on a Linux cluster with
OpenMPI-1.10.0 and the Intel-16.0.0 Fortran compiler.
In the code I use the module MPI (“use MPI” statements).

However, I am not able to compile the code because of compiler error messages
like this one:

/src_SPRAY/mpi_wrapper.f90(2065): error #6285: There is no matching specific
subroutine for this generic subroutine call.   [MPI_REDUCE]


The problem seems to me to be this:

The interfaces in the module MPI for the MPI routines do not accept a send or
receive buffer that is actually a scalar variable, an array element, or a
constant (like MPI_IN_PLACE).

Example 1:
     This does not work (it gives the compiler error message:  error #6285:
There is no matching specific subroutine for this generic subroutine call):

       ivar = 123   ! <-- ivar is an integer variable, not an array
       call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   ! <--- this should work, but is not accepted by the compiler

     Only this cumbersome workaround works:

       ivar = 123
       allocate( iarr(1) )
       iarr(1) = ivar
       call MPI_BCAST( iarr, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )   ! <--- this workaround works
       ivar = iarr(1)
       deallocate( iarr )
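
     (As an aside, a slightly lighter variant of the same workaround, sketched
here for illustration, avoids the allocation by using a fixed-size local array;
the name ibuf is illustrative:)

       integer :: ibuf(1)   ! fixed-size one-element buffer, no allocation needed
       ibuf(1) = ivar       ! copy the scalar into the array
       call MPI_BCAST( ibuf, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr_mpi )
       ivar = ibuf(1)       ! copy the broadcast value back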

Example 2:
     Any call of an MPI routine with MPI_IN_PLACE does not work either, as in
this coding:

       if(lmaster) then
         call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &   ! <--- this should work, but is not accepted by the compiler
                         ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
       else  ! slaves
         call MPI_REDUCE( rbuffarr, rdummyarr, nelem, MPI_REAL8, MPI_MAX &
                         ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
       endif

     This results in this compiler error message:

       /src_SPRAY/mpi_wrapper.f90(2122): error #6285: There is no matching specific subroutine for this generic subroutine call.   [MPI_REDUCE]
             call MPI_REDUCE( MPI_IN_PLACE, rbuffarr, nelem, MPI_REAL8, MPI_MAX &
       -------------^
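
     (A possible workaround sketch, for illustration only: avoid MPI_IN_PLACE
altogether by reducing into a temporary array on every rank and copying the
result back on the master; the name rtmp is illustrative:)

       real(8), allocatable :: rtmp(:)

       allocate( rtmp(nelem) )
       ! every rank, including the root, sends rbuffarr; the result lands in rtmp on the root
       call MPI_REDUCE( rbuffarr, rtmp, nelem, MPI_REAL8, MPI_MAX &
                       ,0_INT4, MPI_COMM_WORLD, ierr_mpi )
       if(lmaster) rbuffarr(1:nelem) = rtmp(1:nelem)   ! copy the result back on the master
       deallocate( rtmp )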


In our code I observed the bug with MPI_BCAST, MPI_REDUCE, and MPI_ALLREDUCE,
but there may be other MPI routines affected in the same way. A minimal
reproducer is sketched below.
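
(For anyone who wants to check their own installation, here is a minimal
reproducer sketch; the program and variable names are illustrative. It should
compile only if the "use mpi" interfaces accept scalar buffers and MPI_IN_PLACE:)

       program check_usempi
         use mpi
         implicit none
         integer :: ivar, irank, ierr
         real(8) :: rarr(4), rdummy(4)
         call MPI_INIT( ierr )
         call MPI_COMM_RANK( MPI_COMM_WORLD, irank, ierr )
         ivar = 123
         call MPI_BCAST( ivar, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )   ! scalar buffer
         rarr = real( irank, 8 )
         if( irank == 0 ) then   ! MPI_IN_PLACE is only valid on the root
           call MPI_REDUCE( MPI_IN_PLACE, rarr, 4, MPI_REAL8, MPI_MAX, 0, MPI_COMM_WORLD, ierr )
         else
           call MPI_REDUCE( rarr, rdummy, 4, MPI_REAL8, MPI_MAX, 0, MPI_COMM_WORLD, ierr )
         endif
         call MPI_FINALIZE( ierr )
       end program check_usempi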

This bug occurred for:                   OpenMPI-1.10.0 with Intel-16.0.0
In contrast, this bug did NOT occur for: OpenMPI-1.8.8  with Intel-16.0.0
                                         OpenMPI-1.8.8  with Intel-15.0.3
                                         OpenMPI-1.10.0 with gfortran-5.2.0

Greetings
Michael Rachner

_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/11/28052.php



--
Kind regards Nick
