Hi,

thank you for your help. Unfortunately I don't have access to the source of
the calling program. Maybe there is a subtle problem with some MPI commands.
But I have solved the problem in another way.

There is a module in the basic library that uses PRIVATE variables to call
predefined procedures for several calculation cases. That is, the private
variables are changed so that a general routine can be adapted to specific
calculations.

So I removed the private variables and passed them to the procedures as
arguments instead. Now there is no problem with the MPI call any more.
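
Roughly, the change looks like this (only a minimal sketch with made-up
names, not the actual library code):

    ! Before: a general routine adapted through private module variables
    module calc_mod
       implicit none
       private
       public :: set_case, general_routine
       integer :: calc_case    = 0      ! private module state
       real    :: scale_factor = 1.0    ! private module state
    contains
       subroutine set_case(icase, factor)
          integer, intent(in) :: icase
          real,    intent(in) :: factor
          calc_case    = icase
          scale_factor = factor
       end subroutine set_case

       subroutine general_routine(x, y)
          real, intent(in)  :: x(:)
          real, intent(out) :: y(:)
          if (calc_case == 1) then
             y = scale_factor * x
          else
             y = x
          end if
       end subroutine general_routine
    end module calc_mod

    ! After: the former private variables are passed in as arguments,
    ! so the routine no longer depends on any module state
    subroutine general_routine(x, y, calc_case, scale_factor)
       real,    intent(in)  :: x(:)
       real,    intent(out) :: y(:)
       integer, intent(in)  :: calc_case
       real,    intent(in)  :: scale_factor
       if (calc_case == 1) then
          y = scale_factor * x
       else
          y = x
       end if
    end subroutine general_routine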

Maybe you have an idea why it didn't work with those private variables? But,
well, if not, there is no problem any more (although I don't know why). ;)

Best regards

Michael



______________________________________________________________
Dipl.-Ing. Michael Mauersberger
michael.mauersber...@tu-dresden.de
Tel +49 351 463-38099 | Fax +49 351 463-37263
Technische Universität Dresden
Institut für Luft- und Raumfahrttechnik / Institute of Aerospace Engineering
Professur für Luftfahrzeugtechnik / Chair of Aircraft Engineering
Prof. Dr. K. Wolf | 01062 Dresden | tu-dresden.de/ilr/lft

-----Original Message-----
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Reuti
Sent: Tuesday, 24 October 2017 13:09
To: Open MPI Users <users@lists.open-mpi.org>
Subject: Re: [OMPI users] Vague error message while executing MPI-Fortran program

Hi,

> On 24.10.2017 at 09:33, Michael Mauersberger
> <michael.mauersber...@tu-dresden.de> wrote:
> 
>  
>  
> Dear all,
>  
> When compiling and running a Fortran program on Linux (OpenSUSE Leap 42.3)
> I get a puzzling error message stating that some "Boundary Run-Time Check
> Failure" occurred for the variable "ARGBLOCK_0.0.2". But I don't know or
> use this variable anywhere in my code, and the compiler traces it back to
> the line of a "CONTAINS" statement in a module.

A `strings * | grep ARGBLOCK` in
/opt/intel/compilers_and_libraries_2017.4.196/linux/bin/intel64 reveals:

ARGBLOCK_%d
ARGBLOCK_REC_%d

So it looks like the output is generated on the fly and doesn't point to any
existing variable. But to which argument of which routine is still unclear.
Does the Intel compiler have a feature to output a cross-reference of all
used variables? Maybe it's listed there.

-- Reuti


> I am using the Intel Fortran Compiler from Intel Composer XE 2013 with the
> following options:
> ifort -fPIC -g -traceback -O2 -check all,noarg_temp_created -warn all
>  
> Furthermore, the program uses Intel MKL with the functions DGETRF,
> DGETRS, DSYGV, DGEMM, DGGEV, and the C library NLopt.
>  
> The complete error message looks like:
>  
> Boundary Run-Time Check Failure for variable 'ARGBLOCK_0.0.2'
>  
> forrtl: error (76): Abort trap signal
> Image              PC                Routine            Line        Source
> libc.so.6          00007F2BF06CC8D7  Unknown               Unknown  Unknown
> libc.so.6          00007F2BF06CDCAA  Unknown               Unknown  Unknown
> geops              00000000006A863F  Unknown               Unknown  Unknown
> libmodell.so       00007F2BF119E54D  strukturtest_mod_         223  strukturtest_mod.f90
> libmodell.so       00007F2BF1184056  modell_start_             169  modell_start.f90
> geops              000000000045D1A3  Unknown               Unknown  Unknown
> geops              000000000042C2C6  Unknown               Unknown  Unknown
> geops              000000000040A14C  Unknown               Unknown  Unknown
> libc.so.6          00007F2BF06B86E5  Unknown               Unknown  Unknown
> geops              000000000040A049  Unknown               Unknown  Unknown
> ===================================================================================
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   EXIT CODE: 134
> =   CLEANING UP REMAINING PROCESSES
> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
> YOUR APPLICATION TERMINATED WITH THE EXIT STRING: Aborted (signal 6)
> This typically refers to a problem with your application.
> Please see the FAQ page for debugging suggestions
>  
>  
> The program has the following structure:
> - basic functions linked into a static library (*.a), containing only
>   modules --> using MKL routines
> - main program linked into a dynamic library, containing one bare
>   subroutine, otherwise only modules
> - calling program (executed with mpiexec), which calls the mentioned
>   subroutine in the main program (see the sketch below)
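> 
> Roughly, the structure looks like this (a simplified sketch; apart from
> modell_start and libmodell.so, which also appear in the traceback, the
> names are only placeholders, not the actual code):
> 
>     ! --- static library with the basic functions (modules only) ---
>     module basis_mod
>        implicit none
>     contains
>        subroutine general_routine(x, y)
>           real, intent(in)  :: x(:)
>           real, intent(out) :: y(:)
>           y = x   ! placeholder for the actual calculation (MKL calls etc.)
>        end subroutine general_routine
>     end module basis_mod
> 
>     ! --- dynamic library (libmodell.so) with the one bare subroutine ---
>     subroutine modell_start()
>        use basis_mod
>        real :: a(3), b(3)
>        a = 1.0
>        call general_routine(a, b)
>     end subroutine modell_start
> 
>     ! The calling program (run with mpiexec) just calls modell_start().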
>  
> Without the calling program (in Open MPI) the subroutine runs without
> problems. But when invoking it with the MPI program I get the error
> message above.
>  
> So maybe some of you have encountered a similar problem and can help me.
> I would be really grateful.
>  
> Thanks,
>  
> Michael
>  
> _______________________________________________________
>  
> Dipl.-Ing. Michael Mauersberger
> 
> Tel. +49 351 463 38099 | Fax +49 351 463 37263
> Marschnerstraße 30, 01307 Dresden
> Professur für Luftfahrzeugtechnik | Prof. Dr. Klaus Wolf
> 
> Institut für Luft- und Raumfahrttechnik | Fakultät Maschinenwesen 
> Technische Universität Dresden
>  
> _______________________________________________
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
