I meant "another way to find a backtrace with MPI_ALLOC_MEM/MPI_FREE_MEM being 
ancestors of a write() system call on /dev/infiniband/verbs, i.e., doing RDMA 
over IB."
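(For reference, one way to confirm that ALLOC/FREE_MEM is an ancestor of the verbs write() path is a gdb syscall catchpoint on a running rank. This is only a sketch; the PID is a placeholder and the exact frames will depend on the build:)

```
gdb -p <pid>
(gdb) catch syscall write        # stop on every write() system call
(gdb) commands
> backtrace                      # print the call stack at each stop
> continue
> end
(gdb) continue
```

If MPI_Alloc_mem/MPI_Free_mem show up in those backtraces above the write() into /dev/infiniband/..., that would confirm the path.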


I read the issue you opened (#3183) and I think we are on the right track. Yay~


Cheers,


Jingchao

________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Jeff Squyres 
(jsquyres) <jsquy...@cisco.com>
Sent: Thursday, March 16, 2017 8:46:30 AM
To: Open MPI User's List
Subject: Re: [OMPI users] openib/mpi_alloc_mem pathology

On Mar 16, 2017, at 10:37 AM, Jingchao Zhang <zh...@unl.edu> wrote:
>
> One of my earlier replies includes the backtraces of cp2k.popt process and 
> the problem points to MPI_ALLOC_MEM/MPI_FREE_MEM.
> https://mail-archive.com/users@lists.open-mpi.org/msg30587.html

Yep -- saw it.  That -- paired with the profiling indicating that a LOT of time 
is being spent in these functions -- is why I want to disable what is likely 
the expensive / slow part of ALLOC/FREE_MEM and see if that fixes the 
performance issue.  This is a useful data point to figure out what we should do 
next.

> If that part of the code is commented out, is there another way for openmpi 
> to find that backtrace?

I'm not quite sure what you're asking here...?

The application will still be calling ALLOC/FREE_MEM, so you can still get 
stack traces from there, if you wish.

--
Jeff Squyres
jsquy...@cisco.com

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users