Hi Jeff,

Yes, the memory issue caused by Isend/Irecv without calling MPI_Wait is
probably the reason. Attached are my test results showing that calling
MPI_Isend without ever calling MPI_Wait leads to a weird wall time
(MPI_Wtime) for my program. The wall time should grow linearly with the
iteration count, but jumps show up after several iterations.

Yes, I'm using MPI_Waitall or MPI_Testall in my usual approach. For one
iteration, that approach looks like this:
-----------------------------------------------------------------------------------------------
  //
  // pack data
  //
  ....

  //
  // post the non-blocking sends/receives
  //
  n_proc_exchange_ = 0;
  for (int z = 0; z < _n_proc; ++z) {
    int nif = n_if_to_proc[z];

    // exchange data with rank z only if there are shared interfaces
    if (nif > 0) {
      MPI_Isend(&snd_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
                MPI_COMM_WORLD, &s_sol_req_[n_proc_exchange_]);
      MPI_Irecv(&rev_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
                MPI_COMM_WORLD, &r_sol_req_[n_proc_exchange_]);
      n_proc_exchange_++;
    }
  }

  //
  // unpack and do some local work here (overlaps with the communication)
  //
  ....

  //
  // wait for all sends/receives to finish
  //
  MPI_Waitall(n_proc_exchange_, s_sol_req_, MPI_STATUSES_IGNORE);
  MPI_Waitall(n_proc_exchange_, r_sol_req_, MPI_STATUSES_IGNORE);

  // do the work that depends on the exchanged data (rev_buf_)
  .....
-----------------------------------------------------------------------------------------------


But I want to avoid calling MPI_Waitall, since in my case I don't care
whether the data is the latest correct values or some previous/initial
values. When I comment out the MPI_Waitall, something weird happens. I
think, as you said, the requests are leaking memory, and the system may run
out of memory after enough iterations.
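
Here is a minimal sketch of how I read your MPI_Testall suggestion applied
to my loop, reusing the request arrays from the code above (just my
interpretation, not tested code):
-----------------------------------------------------------------------------------------------
  // Non-blocking check at the end of the iteration instead of MPI_Waitall.
  // MPI_Testall returns immediately; when it sets the flag, all requests
  // have completed and been freed, so nothing is leaked.
  int sends_done = 0, recvs_done = 0;
  MPI_Testall(n_proc_exchange_, s_sol_req_, &sends_done, MPI_STATUSES_IGNORE);
  MPI_Testall(n_proc_exchange_, r_sol_req_, &recvs_done, MPI_STATUSES_IGNORE);

  if (sends_done && recvs_done) {
    // all requests completed: safe to reuse snd_buf_/rev_buf_ and to post
    // a fresh round of MPI_Isend/MPI_Irecv next iteration
    n_proc_exchange_ = 0;
  } else {
    // still pending: keep working with the previous contents of rev_buf_
    // and skip posting new messages this iteration
  }
-----------------------------------------------------------------------------------------------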

I tried MPI_Ibsend, so that I provide my own buffer to the MPI library, but
it still slows the program down (as shown in the attached test figure).

Can I pair MPI_Ibsend with MPI_Irecv, or do I need to pair MPI_Ibsend with
the blocking MPI_Recv?
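
For what it's worth, my understanding is that MPI does not require the send
and receive modes to match, so an MPI_Ibsend can be received by a plain
MPI_Irecv (or MPI_Recv). A rough sketch of the buffered-send setup I am
trying (max_msg_doubles and n_neighbors are placeholders, not variables
from my actual code):
-----------------------------------------------------------------------------------------------
  // Attach a user buffer once; it must cover every outstanding bsend plus
  // MPI_BSEND_OVERHEAD per message.
  int pack_size, buf_size;
  MPI_Pack_size(max_msg_doubles, MPI_DOUBLE, MPI_COMM_WORLD, &pack_size);
  buf_size = n_neighbors * (pack_size + MPI_BSEND_OVERHEAD);
  char *bsend_buf = (char *)malloc(buf_size);
  MPI_Buffer_attach(bsend_buf, buf_size);

  // Buffered non-blocking send; an ordinary MPI_Irecv matches it like any
  // other send mode.
  MPI_Ibsend(&snd_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
             MPI_COMM_WORLD, &s_sol_req_[n_proc_exchange_]);
  MPI_Irecv(&rev_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
            MPI_COMM_WORLD, &r_sol_req_[n_proc_exchange_]);

  // ...at shutdown: detach (blocks until buffered messages have been sent)
  // and release the buffer.
  MPI_Buffer_detach(&bsend_buf, &buf_size);
  free(bsend_buf);
-----------------------------------------------------------------------------------------------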

P.S. Pavan suggested using MPI_Request_free; I will give it a try.
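
In case it helps the archive, this is roughly how I understand the
MPI_Request_free idea (only a sketch; the obvious caveat is that after
freeing the request there is no way to know when snd_buf_ is safe to
overwrite):
-----------------------------------------------------------------------------------------------
  // Post the send, then free the request handle immediately. MPI still
  // completes the send internally, so no request is leaked, but there is
  // no completion notification, so snd_buf_ must not be rewritten until it
  // is known (by some other means) that the message has gone out.
  MPI_Request sreq;
  MPI_Isend(&snd_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
            MPI_COMM_WORLD, &sreq);
  MPI_Request_free(&sreq);

  // The receive side is still tested/waited, so rev_buf_ is known valid.
  MPI_Irecv(&rev_buf_[z][0], n_buf_[z], MPI_DOUBLE, z, tag,
            MPI_COMM_WORLD, &r_sol_req_[n_proc_exchange_]);
-----------------------------------------------------------------------------------------------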

Thanks for your reply and suggestions.

Best,

Lei Shi



On Fri, Apr 3, 2015 at 5:34 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:

> In the general case, MPI defines that you *have* to call one of the
> MPI_Test or MPI_Wait functions to finish the communication request.  If you
> don't do so, you're basically leaking resources (e.g., memory).
>
> In a single-threaded MPI implementation, the call to
> MPI_Test/MPI_Wait/etc. may be where the actual message passing progress
> occurs, too.
>
> If you don't want to block waiting for the communication, you can keep an
> array of outstanding MPI_Requests and call MPI_Testall() on them to allow
> MPI to make progress on them, but not block your application until all (or
> any) of them complete.
>
>
>
> > On Apr 3, 2015, at 3:43 AM, Matthieu Brucher <matthieu.bruc...@gmail.com>
> > wrote:
> >
> > Hi,
> >
> > I think you have to call either Wait or Test to make the communications
> > move forward in the general case. Some hardware may have a hardware thread
> > that makes the communication progress, but usually you have to make it
> > "advance" yourself by calling either Wait or Test.
> >
> > Cheers,
> >
> > Matthieu
> >
> > 2015-04-03 5:48 GMT+01:00 Lei Shi <lei...@ku.edu>:
> > I want to use non-blocking send/recv (MPI_Isend/MPI_Irecv) to do
> > communication. But in my case, I don't really care whether the data I get
> > is ready to use or not, so I don't want to waste time on synchronization
> > by calling MPI_Wait or similar APIs.
> >
> > But when I avoid calling MPI_Wait, my program freezes for several seconds
> > after running some iterations (after multiple MPI_Isend/MPI_Irecv calls),
> > then continues. It takes even more time than the case with MPI_Wait. So my
> > question is how to do "true" non-blocking communication without waiting
> > for the data to be ready. Thanks.
> >
> > Sincerely Yours,
> >
> > Lei Shi
> > ---------
> >
> >
> >
> >
> >
> > --
> > Information System Engineer, Ph.D.
> > Blog: http://matt.eifelle.com
> > LinkedIn: http://www.linkedin.com/in/matthieubrucher
> > Music band: http://liliejay.com/
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
>
