Hi,

Not sure if this is related to your current issues, but I once had problems
with this particular function with the IBM XLF compilers and never got to a
solution because...

a) VecGetArrayF90 also worked for me
b) and the IBM machine got switched off

Anyway, I have not had any issues since then... neither with the Intel nor
with the gfortran compilers.

The helpdesk back then created a PETSc bug report, but I think the issue was
never resolved...

For the sake of completeness, I am attaching the messages we exchanged back
then; maybe those debugging efforts are related to your problems?

Good Luck,

Fabian


On 04/05/2018 05:03 PM, Randall Mackie wrote:
> Dear PETSc users,
>
> I’m curious if anyone else experiences problems using
DMDAVecGetArrayF90 in conjunction with Intel compilers?
> We have had many problems (typically signal 11, SEGV segmentation
violations) when PETSc is compiled in optimized mode (with various
combinations of options).
> These same codes run valgrind-clean with gfortran, so I assume this is an
Intel bug, but before we submit a bug report I wanted to see if anyone else
has had similar experiences.
> We have basically gone back and replaced our calls to DMDAVecGetArrayF90
with calls to VecGetArrayF90 and pass those pointers into a “local”
subroutine, which works fine (see the sketch after this quoted message).
>
> In case anyone is curious, the attached test code shows this behavior
when PETSc is compiled with the following options:
>
> ./configure \
>   --with-clean=1 \
>   --with-debugging=0 \
>   --with-fortran=1 \
>   --with-64-bit-indices \
>   --download-mpich=../mpich-3.3a2.tar.gz \
>   --with-blas-lapack-dir=/opt/intel/mkl \
>   --with-cc=icc \
>   --with-fc=ifort \
>   --with-cxx=icc \
>   --FOPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>   --COPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>   --CXXOPTFLAGS='-O2 -xSSSE3 -axcore-avx2' \
>
>
>
> Thanks, Randy M.
>
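
For illustration, here is a rough sketch of the VecGetArrayF90-plus-"local"-subroutine
workaround Randy describes above. The subroutine and variable names are made up, it
assumes a 3-D DMDA with one degree of freedom and the Fortran modules of recent PETSc
releases (>= 3.8, preprocessed .F90 source), and it is only an illustration of the
pattern, not code taken from either of our models:

subroutine fill_vec(da, gvec, ierr)
#include <petsc/finclude/petscdmda.h>
  use petscdmda
  use petscvec
  implicit none
  DM                   :: da
  Vec                  :: gvec
  PetscErrorCode       :: ierr
  PetscScalar, pointer :: xx(:)
  PetscInt             :: xs, ys, zs, xm, ym, zm

  call DMDAGetCorners(da, xs, ys, zs, xm, ym, zm, ierr)
  call VecGetArrayF90(gvec, xx, ierr)           ! flat view of the local part
  call fill_local(xx, xs, ys, zs, xm, ym, zm)   ! reshape happens at the call
  call VecRestoreArrayF90(gvec, xx, ierr)
end subroutine fill_vec

! The dummy argument carries the global corner bounds, so the loop body can be
! written exactly as it would be with DMDAVecGetArrayF90.
subroutine fill_local(a, xs, ys, zs, xm, ym, zm)
#include <petsc/finclude/petscsys.h>
  use petscsys
  implicit none
  PetscInt    :: xs, ys, zs, xm, ym, zm
  PetscScalar :: a(xs:xs+xm-1, ys:ys+ym-1, zs:zs+zm-1)
  PetscInt    :: i, j, k

  do k = zs, zs+zm-1
    do j = ys, ys+ym-1
      do i = xs, xs+xm-1
        a(i,j,k) = 1.0
      end do
    end do
  end do
end subroutine fill_local
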
--- Begin Message ---
Hello,
My userid is b380246.

I would like to run my Fortran model, which depends on a recent version
of PETSC, http://www.mcs.anl.gov/petsc/.
While it compiles fine, it crashes at a certain point.
My knowledge of the code is however very limited and I am at the same
time not familiar with the Blizzard machine.

I hoped that, before I send this as a bug report to the PETSc people, you
might be able to have a look at the compile flags to see whether I am doing
something silly, or maybe even reproduce the error.

In the following, I try to elaborate on the steps I used to compile PETSc:

# load modules... while I tried also the default ones, I currently have:

Currently Loaded Modulefiles:

  IBM/xlf14.1.0.8
  IBM/xlC12.1.0.9
  IBM/xlc12.1.0.9
  GCC/gcc-4.5.1
  NAG/5.1.340
  NCAR/ncarg5.1.0
  IGES/grads2.0.a6
  UNITE/1.0
  ./default
  NETCDF/4.2.1.1


# Get the current PETSc version... I also tried the stable release 3.5.2 as
well as the git branches master and next; here, however, I check out the
maint branch:

/sw/aix61/git-1.7.4.1/bin/git clone
ssh://g...@bitbucket.org/petsc/petsc.git petsc -b maint


#export PETSC_DIR to point to the just created git directory

export PETSC_DIR=$(pwd)/petsc




# The following script configures PETSc and compiles it on Blizzard:

export PETSC_DIR=$(pwd)
export PETSC_ARCH=debug

CONFIGURE_SCRIPT="conftest-$PETSC_ARCH"
RECONFIGURE_SCRIPT="reconfigure-$PETSC_ARCH.py"
rm $RECONFIGURE_SCRIPT $CONFIGURE_SCRIPT

make allclean

./configure \
  --with-make-np=8 \
  --with-batch=1 \
  --with-mpi-shared=0 \
  --with-cc=mpcc_r \
  --with-cxx=mpCC_r \
  --with-fc="mpxlf_r" \
  --with-debugging=1 \
  --known-mpi-shared-libraries=0 \
  --with-shared-libraries=0 \
  --with-fortran \
  --with-fortran-interfaces \
  --download-sowing-configure-options='CFLAGS=-maix64 CXXFLAGS=-maix64 LDFLAGS=-maix64 CPPFLAGS=-maix64' \
  --with-c2html=0 \
  --with-cmake=$(which cmake) \
  --with-blas-lapack-dir=/sw/aix53/lapack-3.2.0/ \
  COPTFLAGS='-g -O0 ' \
  FOPTFLAGS='-qextname -qwarn64  -O0 -qfullpath -C -g -qextchk -qnosave  ' \
  --LIBS="-L/sw/aix53/lapack-3.2.0/lib -llapack -lessl -lblas"

  if [ ! -e $CONFIGURE_SCRIPT ] ; then
    echo 'Configure failed in creating conftest script....'
    exit
  fi


cat > petsc_conf-$PETSC_ARCH.ll << EOF
#!/client/bin/ksh
# 1 node, ST mode (Single Threading)
# 1 MPI Processes
#---------------------------------------
#
# @ shell = /client/bin/ksh
# @ job_type = parallel
# @ node_usage= shared
# @ rset = rset_mcm_affinity
# @ mcm_affinity_options = mcm_accumulate
# @ node = 1
# @ tasks_per_node = 1
# @ resources = ConsumableMemory(750mb)
# @ wall_clock_limit = 00:10:00
# @ job_name = petsc_conf-$PETSC_ARCH.job
# @ output = \$(job_name).o\$(jobid)
# @ error = \$(job_name).e\$(jobid)
# @ notification = error
# @ queue
export MEMORY_AFFINITY=MCM
#export MP_PRINTENV=YES
#export MP_LABELIO=YES
#export MP_INFOLEVEL=2
export MP_EAGER_LIMIT=64k
export MP_BUFFER_MEM=64M,256M
export MP_USE_BULK_XFER=NO
export MP_BULK_MIN_MSG_SIZE=128k
export MP_RFIFO_SIZE=4M
export MP_SHM_ATTACH_THRESH=500000
export LAPI_DEBUG_STRIPE_SEND_FLIP=8
#
#  run the program
#
cd $(pwd)
poe $(pwd)/conftest-$PETSC_ARCH
exit
EOF

llsubmit petsc_conf-$PETSC_ARCH.ll


while true
do
  if [ -x "$RECONFIGURE_SCRIPT" ]
  then
    sleep 2
    ./$RECONFIGURE_SCRIPT && make all
    rm $RECONFIGURE_SCRIPT $CONFIGURE_SCRIPT
    rm petsc_conf-$PETSC_ARCH.ll
    rm petsc_conf-$PETSC_ARCH.job.*
    break
  else
    sleep 5; echo "waiting for reconfigure script... $RECONFIGURE_SCRIPT"
  fi
done

exit



# This compiles fine for me, and make check builds the binaries correctly;
it is of course not able to run them, though, since they have to be
submitted through LoadLeveler (ll).
# Here is the output of configure:

PETSc:

  File creation : Generated Fortran stubs

  Build         : Set default architecture to fast in
lib/petsc-conf/petscvariables

  File creation : Created fast/lib/petsc-conf/reconfigure-fast.py for
automatic reconfiguration

Framework:

  File creation : Created makefile configure header
fast/lib/petsc-conf/petscvariables

  File creation : Created makefile configure header
fast/lib/petsc-conf/petscrules

  File creation : Created configure header fast/include/petscconf.h

  File creation : Created C specific configure header
fast/include/petscfix.h

    Pushing language C

    Popping language C

    Pushing language Cxx

    Popping language Cxx

    Pushing language FC

    Popping language FC

Compilers:

  C Compiler:         mpcc_r  -g -O0

  C++ Compiler:       mpCC_r  -qrtti=dyna -g -+

  Fortran Compiler:   mpxlf_r  -qextname -qwarn64  -O0 -qfullpath -C -g
-qextchk -qnosave

Linkers:

  Static linker:   /usr/bin/ar cr

make:

BLAS/LAPACK: -L/sw/aix53/lapack-3.2.0 -L/sw/aix53/lapack-3.2.0 -llapack
-L/sw/aix53/lapack-3.2.0 -L/sw/aix53/lapack-3.2.0 -lblas

cmake:

MPI:

  Arch:

X:

  Library:  -lX11

pthread:

  Library:  -lpthread

sowing:

ssl:

  Library:  -lssl -lcrypto

PETSc:

  PETSC_ARCH: fast

  PETSC_DIR: /pf/b/b380246/libs/petsc

  Clanguage: C


  Memory alignment: 16

  Scalar type: real

  Precision: double

  shared libraries: disabled

xxx=========================================================================xxx

 Configure stage complete. Now build PETSc libraries with (gnumake build):

   make PETSC_DIR=/pf/b/b380246/libs/petsc PETSC_ARCH=fast all

xxx=========================================================================xxx

================================================================================

Finishing Configure Run at Fri Jan 23 11:32:10 2015

================================================================================


# The usual C examples work fine. I do, however, have a problem with the
functions DMDAVecGetArrayF90 and DMDAVecRestoreArrayF90, which crash my
model.
# To test this, I can easily reproduce the issue with the example
<petsc/src/dm/examples/tutorials/ex11f90.F>:

make -C $PETSC_DIR/src/dm/examples/tutorials/  ex11F90

#!/client/bin/ksh
#
#---------------------------------------
# 1 node, ST mode (Single Threading)
# 2 MPI Processes
#---------------------------------------
#
# @ shell = /client/bin/ksh
# @ job_type = parallel
# @ node_usage= shared
# @ rset = rset_mcm_affinity
# @ mcm_affinity_options = mcm_accumulate
# @ node = 1
# @ tasks_per_node = 2
# @ resources = ConsumableMemory(750mb)
# @ task_affinity = core(1)
# @ wall_clock_limit = 00:01:00
# @ job_name = ex11f90.job
# @ output = $(job_name).o$(jobid)
# @ error  = $(job_name).e$(jobid)
# @ queue
export MEMORY_AFFINITY=MCM
#export MP_PRINTENV=YES
#export MP_LABELIO=YES
#export MP_INFOLEVEL=2
export MP_EAGER_LIMIT=64k
export MP_BUFFER_MEM=64M,256M
export MP_USE_BULK_XFER=NO
export MP_BULK_MIN_MSG_SIZE=128k
export MP_RFIFO_SIZE=4M
export MP_SHM_ATTACH_THRESH=500000
export LAPI_DEBUG_STRIPE_SEND_FLIP=8
#
#  run the program
#
export MALLOCTYPE=debug
export MALLOCDEBUG=validate_ptrs,report_allocations

prtconf

cd /pf/b/b380246/libs/petsc/src/dm/examples/tutorials/
poe /pf/b/b380246/libs/petsc/src/dm/examples/tutorials//ex11f90

exit


# It compiles fine and then crashes when DMDAVecRestoreArrayF90 is called.
# And here my inability to debug the code shows.
# I have put print statements around the calls to see where it hangs; this
looks as follows:

      ....

      call DMDAGetCorners(ada,xs,PETSC_NULL_INTEGER,PETSC_NULL_INTEGER, ...)

      print *,'DEBUG: before VecGetArrayF90 :: g',loc(g),'x',loc(x1)
      call DMDAVecGetArrayF90(ada,g,x1,ierr)
      print *,'DEBUG: after  VecGetArrayF90 :: g',loc(g),'x',loc(x1)

      do i=xs,xs+xl-1
        ...
      enddo

      print *,'DEBUG: before VecRestoreArrayF90 :: g',loc(g),'x',loc(x1)
      call DMDAVecRestoreArrayF90(ada,g,x1,ierr)
      print *,'DEBUG: after  VecRestoreArrayF90 :: g',loc(g),'x',loc(x1)

      call VecView(g,PETSC_VIEWER_STDOUT_WORLD,ierr)
      ...


# and produces the output:

 DEBUG: before VecGetArrayF90 :: g 1152921504606842264 x 648535941247935320
 DEBUG: before VecGetArrayF90 :: g 1152921504606842264 x 648535941247935320

 DEBUG: after  VecGetArrayF90 :: g 1152921504606842264 x 4723646000
 DEBUG: after  VecGetArrayF90 :: g 1152921504606842264 x 4723621744

 DEBUG: before VecRestoreArrayF90 :: g 1152921504606842264 x 4723646000
 DEBUG: before VecRestoreArrayF90 :: g 1152921504606842264 x 4723621744

# As one can see, the print statement after the DMDAVecRestoreArrayF90 call
is never reached.
# I do not, however, get a stack trace; the error messages are:

[0]PETSC ERROR:
------------------------------------------------------------------------

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range

[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger

[0]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind

[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS
X to find memory corruption errors

[0]PETSC ERROR: likely location of problem given in stack below

[0]PETSC ERROR: ---------------------  Stack Frames
------------------------------------

[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,

[0]PETSC ERROR:       INSTEAD the line number of the start of the function

[0]PETSC ERROR:       is given.

[0]PETSC ERROR: --------------------- Error Message
--------------------------------------------------------------

[0]PETSC ERROR: Signal received

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
for trouble shooting.

[0]PETSC ERROR: Petsc Development GIT revision: v3.5.2-1438-g4ed42c4 
GIT Date: 2015-01-21 17:53:54 -0600

[0]PETSC ERROR:
/pf/b/b380246/libs/petsc/src/dm/examples/tutorials//ex11f90 on a fast
named p174 by b380246 Fri Jan 23 11:27:53 2015

[0]PETSC ERROR: Configure options --known-level1-dcache-size=32768
--known-level1-dcache-linesize=32 --known-level1-dcache-assoc=2
--known-memcmp-ok=1 --known-endian=big --known-sizeof-char=1
--known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4
--known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4
--known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8
--known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4
--known-mpi-long-double=1 --known-mpi-c-double-complex=1
--known-sdot-returns-double=0 --known-snrm2-returns-double=0
--with-make-np=8 --with-batch=1 --with-mpi-shared=0 --with-cc=mpcc_r
--with-cxx=mpCC_r --with-fc=mpxlf_r --with-debugging=1
--known-mpi-shared-libraries=0 --with-shared-libraries=0 --with-fortran
--with-fortran-interfaces
--download-sowing-configure-options="CFLAGS=-maix64 CXXFLAGS=-maix64
LDFLAGS=-maix64 CPPFLAGS=-maix64" --with-c2html=0
--with-cmake=/scratch/b/b380246//libs/cmake/bin//cmake
--with-blas-lapack-dir=/sw/aix53/lapack-3.2.0/ COPTFLAGS="-g "
FOPTFLAGS="-qextname -qwarn64  -O0 -qfullpath -C -g -qextchk -qnosave  "
--LIBS="-L/sw/aix53/lapack-3.2.0/lib -llapack -lessl -lblas"

[0]PETSC ERROR: #1 User provided function() line 0 in  unknown file


# This is hardly a useful bug report for the PETSc people...
# How should I go about finding out what happens?
# I tried to run it with the memory checker -- I am, however, not sure which
options would be beneficial in my case:

export MALLOCTYPE=debug
export MALLOCDEBUG=validate_ptrs,report_allocations

# This does not give any hints either....




I would very much like to run it in a debugger -- how would I go about
that? Is there something similar to gdb?
In this context, is it possible to use the interactive node p249? Should I
apply for that access in a separate mail?

Thank you very much for reading this far :)

Greatly appreciating your help,

Yours,

Fabian Jakub

P.S. My environment:

MODULE_VERSION_STACK=3.2.7b

LDFLAGS=-L/sw/aix61/netcdf-4.2.1.1/lib -lnetcdff -lnetcdf
-L/sw/aix61/netcdf-4.2.1.1/lib -lnetcdf -lhdf5_hl -lhdf5 -lm -lz
-L/sw/aix61/hdf5-1.8.8/lib -L/sw/aix61/zlib-1.2.6/lib
-L/sw/aix53/szip-2.1/lib -lsz

MANPATH=/sw/aix61/netcdf-4.2.1.1/share/man:/sw/aix53/ncl-ncarg-5.1.0/man:/sw/aix53/NAG/NAGWare-5.1.340/man:/sw/aix61/gcc-4.5.1/man:/sw/ibm/xlc/12.1.0.9/usr/vac/man/en_US:/sw/ibm/xlC/12.1.0.9/usr/vacpp/man/en_US:/sw/ibm/xlf/14.1.0.8/usr/lpp/xlf/man/en_US:/client/man:/usr/lpp/LoadL/full/man

AUTHSTATE=compat

TERM=xterm

SHELL=/bin/bash

PROFILEREAD= /client/etc/profile.blizzard

PETSC_ARCH=fast

SSH_CLIENT=138.246.2.1 58541 22

OBJECT_MODE=64

MORE=-W notite

OLDPWD=/pf/b/b380246

SSH_TTY=/dev/pts/110

LOCPATH=/usr/lib/nls/loc

XAPPLRESDIR=/sw/aix53/ncview-1.93g/lib/ncview

GMTHOME=/sw/aix53/gmt-4.4.0

USER=b380246

LS_COLORS=rs=0:di=01;34:ln=01;36:hl=44;37:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:

ODMDIR=/etc/objrepos

NAG=/sw/aix53/NAG/NAGWare-5.1.340

SSH_AUTH_SOCK=/tmp/ssh-0wYpLKQpOG/agent.16187702

GASCRP=/sw/aix53/grads-2.0.a6/share/scripts

WRKSHR=/scratch/b/b380246

MODULE_VERSION=3.2.7

MAIL=/usr/spool/mail/b380246

PATH=/pf/b/b380246/.local/bin:/scratch/b/b380246//libs/cmake/bin/:/sw/aix61/netcdf-4.2.1.1/bin:/sw/aix53/grads-2.0.a6/bin:/sw/aix53/ncl-ncarg-5.1.0/bin:/sw/aix53/NAG/NAGWare-5.1.340/bin:/sw/aix61/gcc-4.5.1/bin:/sw/ibm/xlc/12.1.0.9/usr/vac/bin:/sw/ibm/xlC/12.1.0.9/usr/vacpp/bin:/sw/ibm/xlf/14.1.0.8/usr/bin:/client/bin:/usr/bin:/etc:/usr/sbin:/usr/ucb:/usr/bin/X11:/sbin:/usr/java14/jre/bin:/usr/java14/bin:/usr/lpp/LoadL/full/bin

XLFRTEOPTS=err_recovery=no

LOGIN=b380246

PWD=/pf/b/b380246/libs/petsc

_LMFILES_=/sw/aix61/Modules/IBM/xlf14.1.0.8:/sw/aix61/Modules/IBM/xlC12.1.0.9:/sw/aix61/Modules/IBM/xlc12.1.0.9:/sw/aix61/Modules/GCC/gcc-4.5.1:/sw/aix61/Modules/NAG/5.1.340:/sw/aix61/Modules/NCAR/ncarg5.1.0:/sw/aix61/Modules/IGES/grads2.0.a6:/sw/aix61/Modules/UNITE/1.0:/sw/aix61/Modules/./default:/sw/aix61/Modules/NETCDF/4.2.1.1

NCARG_ROOT=/sw/aix53/ncl-ncarg-5.1.0

LANG=en_US

MODULEPATH=/sw/aix61/modules-3.2.7b/Modules/versions:/sw/aix61/modules-3.2.7b/Modules/$MODULE_VERSION/modulefiles:/sw/aix61/Modules:/sw/aix61/unite/modulefiles/tools

LOADEDMODULES=IBM/xlf14.1.0.8:IBM/xlC12.1.0.9:IBM/xlc12.1.0.9:GCC/gcc-4.5.1:NAG/5.1.340:NCAR/ncarg5.1.0:IGES/grads2.0.a6:UNITE/1.0:./default:NETCDF/4.2.1.1

TZ=CET-1CEST-02:00:00,M3.5.0/02:00:00,M10.5.0/03:00:00

UNITE_ROOT=/sw/aix61/unite

HISTCONTROL=ignoredups

SHLVL=1

HOME=/pf/b/b380246

LC__FASTMSG=true

MAILMSG=[YOU HAVE NEW MAIL]

NETCDF=/sw/aix61/netcdf-4.2.1.1

LOGNAME=b380246

GADDIR=/sw/aix53/grads-2.0.a6/share/data

SSH_CONNECTION=138.246.2.1 58541 136.172.40.16 22

MODULESHOME=/sw/aix61/modules-3.2.7b/Modules/3.2.7

DISPLAY=localhost:75.0

PETSC_DIR=/pf/b/b380246/libs/petsc

SCRATCH=/scratch/b/b380246/

G_BROKEN_FILENAMES=1

FFLAGS=-q64 -qxlf90=autodealloc -O3 -qhot -qstrict -qMAXMEM=-1 -Q
-qarch=pwr6 -qtune=pwr6 -qextname -I/sw/aix61/netcdf-4.2.1.1/include

NAG_KUSARI_FILE=license.zmaw.de:

_=/client/bin/env

NLSPATH=/usr/lib/nls/msg/%L/%N:/usr/lib/nls/msg/%L/%N.cat

LIBPATH=/client/lib

LD_LIBRARY_PATH=




--- End Message ---
--- Begin Message ---
Hello Fabian,

I managed to reproduce the error and started debugging using the DDT debugger. Since the options you used for configuring PETSc do look reasonable, it might indeed be a problem in PETSc itself. Whether I find a solution or not, I will reply to you again today.

Hendryk

On 23/01/15 12:00, Fabian.Jakub wrote:
Hello,
My userid is b380246.

I would like to run my Fortran model, which depends on a recent version
of PETSC, http://www.mcs.anl.gov/petsc/.
While it compiles fine, it crashes at a certain point.
My knowledge of the code is however very limited and I am at the same
time not familiar with the Blizzard machine.

I hoped that, before I send this as a bug request to the Petsc People,
you might be able to have a look at compile flags if I am doing
something silly or maybe if you could even reproduce the error.


--- End Message ---
--- Begin Message ---
Hello Fabian,

I finally located the problem in the F90Array1dDestroyScalar subroutine in
f90_fwrap.F, where nullification of the Fortran pointer causes the segfault.
This problem might not be directly related to PETSc, hence I would not
contact PETSc support.
We had a similar problem with the xlf compiler some time ago - I will check
whether we got a bugfix from IBM or not.
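
A stripped-down, PETSc-free approximation of that pattern may be useful to
test the compiler in isolation. The little program below is only a sketch: it
associates the Fortran pointer with C-owned memory via c_f_pointer rather than
via PETSc's own array-descriptor code, so it merely mimics the suspect
nullify step:

program nullify_test
  ! Associate a Fortran pointer with memory allocated by C, use it,
  ! then nullify it - roughly what F90Array1dDestroyScalar does on
  ! the Fortran side before the C code frees the underlying array.
  use iso_c_binding
  implicit none
  interface
    type(c_ptr) function c_malloc(nbytes) bind(c, name='malloc')
      import :: c_ptr, c_size_t
      integer(c_size_t), value :: nbytes
    end function c_malloc
    subroutine c_free(ptr) bind(c, name='free')
      import :: c_ptr
      type(c_ptr), value :: ptr
    end subroutine c_free
  end interface
  integer, parameter :: n = 100
  type(c_ptr) :: buf
  real(c_double), pointer :: a(:)

  buf = c_malloc(int(n, c_size_t) * 8_c_size_t)  ! 8 bytes per c_double
  call c_f_pointer(buf, a, (/ n /))
  a = 1.0_c_double
  print *, 'sum before nullify:', sum(a)
  nullify(a)                                     ! the step reported to segfault
  call c_free(buf)
  print *, 'nullify and free ok'
end program nullify_test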

Kind regards,
Hendryk

--- End Message ---
--- Begin Message ---
Okay, I sent the bug report to petsc-maint, and hopefully we get a quick response. It may be a problem that the combination of POWER6 system and IBM XL compiler is no longer common in the HPC community (we are the last dinosaur), but we will see.

Hendryk

On 05/02/15 11:11, Fabian wrote:
Hi,

my apologies, it seems my reply got lost somewhere....
here again:

On 28.01.2015 16:15:
You are probably more qualified to answer any upcoming questions relating to the machine/architecture. If you find the time, please, you are very welcome to open the bug report.

Many many thanks for your efforts so far!
Greatly appreciating your help.

Sincerely,  Fabian Jakub





On 05.02.2015 08:11, Hendryk Bockelmann wrote:
Hello Fabian,

as I wrote you in the last mail, the problem is related to the
data management (DM) module itself and (probably) not to
the xl-compiler or AIX system.
My question was: do you want us to open a bug report at PETSc?
Or do you want to report the problem yourself?

Regards,
Hendryk



--- End Message ---
--- Begin Message ---

Hi,
I couldn't find an archive of [petsc-maint] -- I thought reports sent to
maint would get forwarded to the mailing lists petsc-user or petsc-dev...
Is there an archive available somewhere? Did anyone respond to your request?

Anyway, for the time being I changed my code to use VecGetArray instead
of DMDAVecGetArray.
I can confirm that no problems occur with the non-DMDA functions.
I am therefore happy to say that my code works.
The following is not related to PETSc any more; you may, however, be the
right person to ask.
My model is meant to run inside the LES code UCLALES, maintained at ZMAW.
It seems that the up-to-date git version crashes for as yet unknown reasons,
and debugging has proved difficult with the queuing system.
The DKRZ web pages state that I may ask for access to the interactive
node p249.
Is it possible for me to gain access to an interactive node?

Thanks again so much for your efforts,

Yours,

Fabian Jakub


Am 05.02.2015 um 14:06 schrieb Hendryk Bockelmann:
> Okay, I sent the bug report to PETSc maint and hopefully we get a quick 
> response.
> Maybe it will be a problem that the combination of power6 system and
IBM xl compiler is no longer given in the HPC community (we are the last
dinosaur), but we will see.
>
> Hendryk
>
> On 05/02/15 11:11, Fabian wrote:
>> Hi,
>>
>> my apologies, it seems my reply got lost somewhere....
>> here again:
>>
>> On 28.01.2015 16:15:
>>> You are probably  more qualified to answer any upcoming questions
relating the machine/architecture.
>>> If you find the time.  Please, you are very welcome to open the bug
report.
>>>
>>> Many many thanks for your efforts so far!
>>> Greatly appreciating your help.
>>>
>>> Sincerely,  Fabian Jakub
>>
>>
>>
>>
>>
>> On 05.02.2015 08:11, Hendryk Bockelmann wrote:
>>> Hello Fabian,
>>>
>>> as I wrote you in the last mail, the problem is related to the
>>> data management (DM) module itself and (probably) not to
>>> the xl-compiler or AIX system.
>>> My question was: do you want us to open a bug report at PETSc?
>>> Or do you want to report the problem yourself?
>>>
>>> Regards,
>>> Hendryk
>>
>




--- End Message ---
--- Begin Message ---
Hello Fabian,

I sent the mail to petsc-maint on 05.02. There has been no reply so far,
and it seems that there is no archive for this list. I might send the mail
again, although getting a bug fix for the AIX system in the last 6 months
of Blizzard's lifetime is not promising.
Luckily, the VecGetArray function works for you - so just use it.

I will arrange access for your account b380246 on p249 and contact you
again when it is possible to log in.

Hendryk

On 13/02/15 17:14, Fabian.Jakub wrote:

Hi,
I couldn't find an archive of [petsc-maint]
- -- i thought reports, sent to maint would get forwarded to the mailing
lists petsc-user or petsc-dev....
Is there an archive availabe somewhere? Did anyone respond to your request?

Anyway, for the time being I changed my code to use VecGetArray instead
of DMDAVecGetArray.
I can confirm that no problems occur with the non-DMDA functions.
I am therefore happy to say, that my code works.

The following is not related to Petsc any more, you may however be the
right person to ask.
My model is meant to be run in the LES code UCLALES, maintained at ZMAW.
It seems that the up to date git version crashes due to not yet known
reasons and debugging proved to be difficult with the queuing system.
At the dkrz webpages, it states that I may ask for access to the
interactive node p249.

Is it possible for me to gain access to an interactive node?

Thanks again so much for your efforts,

Yours,

Fabian Jakub





--- End Message ---
