Thanks for persevering with this. I'm far from sure that the
information I am providing is of much use, largely because I'm pretty
confused about what's going on. Anyway...
Brian Barrett wrote:
> Can you rebuild Open MPI with debugging symbols (just setting CFLAGS
> to -g during configure should do it)?
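(For my own reference, I take that to mean a rebuild roughly along the
following lines; the prefix is just a placeholder, and I have added the
C++ and Fortran flags on the assumption that those wrappers should carry
debugging symbols too:

  ./configure --prefix=$HOME/openmpi-debug \
      CFLAGS=-g CXXFLAGS=-g FFLAGS=-g FCFLAGS=-g
  make all install

Please correct me if that is not what was meant.)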
On Jan 26, 2006, at 9:38 PM, Glenn Morris wrote:
Thanks for your suggestions.
Jeff Squyres wrote:
> From the stack trace, it looks like you're in the middle of a
> complex deallocation of some C++ objects, so I really can't tell
> (i.e., not in an MPI function at all).
Well, not intentionally! I'm just calling "deallocate" in a purely
Fortran program.
This looks like a problem with the memory allocator. It could be a
genuine problem with Open MPI, or it could be a memory fault in your
application (that happens to dead-end in one of our libraries because
we intercept memory allocation functions). From the stack trace, it
looks like you're in the middle of a complex deallocation of some C++
objects, so I really can't tell (i.e., not in an MPI function at all).
I tried the nightly snapshot 1.1a1r8803, and it said the following. I'm
willing to try to debug this further, but would need some guidance. I
have access to TotalView.
Signal:11 info.si_errno:0(Success) si_code:2(SEGV_ACCERR)
Failing at addr:0x97421004
[0] func:/afs/slac.stanford.edu/g/ki/users/gmo
Don't know if this will be of help, but on further investigation the
problem seems to be some code that essentially does the following:
!$OMP PARALLEL DO
do i=1,n
   do j=1,m
      call sub(arg1,...)
   end do
end do
!$OMP END PARALLEL DO
where subroutine sub allocates (and later deallocates) a temporary
array.
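Schematically, sub is of the following form. The names, arguments and
the "work" loop are placeholders rather than the real code; the only
part that matters here is the allocate/deallocate of a local temporary
inside the threaded region:

subroutine sub(arg1, n)
   implicit none
   integer, intent(in) :: n
   real, intent(inout) :: arg1(n)
   real, allocatable :: tmp(:)   ! local scratch array, one per call
   integer :: k

   allocate(tmp(n))              ! every thread allocates its own copy
   do k = 1, n
      tmp(k) = 2.0*arg1(k)       ! stand-in for the real work
   end do
   arg1 = tmp
   deallocate(tmp)               ! the crash seems to happen around a
                                 ! deallocate like this one
end subroutine sub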
Brian Barrett wrote:
[debugging advice]
Thanks, I will look into this some more and try to provide a proper
report (if it is not a program bug), as I should have done in the
first place. I think we may have TotalView around somewhere...
On Jan 13, 2006, at 10:41 PM, Glenn Morris wrote:
The combination OpenMP + Open MPI works fine if I restrict the
application to only 1 OpenMP thread per MPI process (in other words,
the code at least compiles and runs fine with both options on, in this
limited sense). If I try to use my desired value of OpenMP threads per
MPI process, however, the code crashes with a segmentation fault.
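In case it helps to see the shape of the setup, here is a stripped-down
hybrid skeleton. This is not the real application, just the minimal
MPI + OpenMP combination with made-up names; it would be launched with
OMP_NUM_THREADS set in the environment and a plain "mpirun -np N":

program hybrid_check
   use omp_lib
   implicit none
   include 'mpif.h'
   integer :: ierr, rank, nprocs, nthreads

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

   nthreads = omp_get_max_threads()   ! as set by OMP_NUM_THREADS
   write(*,*) 'rank', rank, 'of', nprocs, 'will use', nthreads, &
        'OpenMP threads'

   call MPI_Finalize(ierr)
end program hybrid_check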