George Bosilca wrote:

On Nov 6, 2007, at 8:38 AM, Terry Dontje wrote:

George Bosilca wrote:
If I understand correctly your question, then we don't need any
extension. Each request has a unique ID (from PERUSE perspective).
However, if I remember well this is only half implemented in our
PERUSE layer (i.e. it works only for expected requests).
Looking at the PERUSE macros, it looks to me like the unique ID is the
base_req address, which I imagine rarely matches between processes.

That's a completely different topic. If what you need is a unique ID for each request between processes (in other words, a unique ID for each message), then here is the way to go. Use the same information as the MPI matching logic, i.e. (comm_id, remote, tag), to create an identifier for each message. It will not be unique, as multiple messages can generate the same ID, but you can generate a unique ID per message with simple tricks.

I understand that one could try to rely on the order of messages being sent and received; however, this only works if you ultimately capture every message, which is something I would like to avoid. My hope was to use something already embedded in the library rather than adding more crap on top of it. This seems like something that would be useful to any tracing utility (like Vampir). However, I imagine the argument against such a thing is that not all MPI libraries would support such an ID, making this a one-off.
The PERUSE standard requires that the ID be unique for each process, and for the lifetime of the request. It does not require that the ID be unique across processes. And this is why we're using the base_req as an ID.

I understand that the PERUSE spec did not define the ID to be unique across processes, which is why I was surprised by your answer. Score one for miscommunication. It would have been nice if the PERUSE committee had provided an option for an implementation to expose message IDs.

--td
  george.



This should be quite easy to fix if someone invests a few hours into it.

For the context ID, a user can always use the c2f function (MPI_Comm_c2f) to get the
Fortran handle (which for Open MPI is the communicator ID).
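For example, a minimal sketch (assuming an MPI installation and compilation with mpicc; MPI_Comm_c2f is a standard MPI handle-conversion function, and the claim that Open MPI's Fortran handle equals the communicator ID comes from the message above):

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* MPI_Comm_c2f converts a C communicator handle into its Fortran
     * integer handle; per this thread, in Open MPI that integer is the
     * communicator/context ID, so a tool can log it with its traces. */
    MPI_Fint comm_id = MPI_Comm_c2f(MPI_COMM_WORLD);
    printf("context id of MPI_COMM_WORLD: %d\n", (int)comm_id);
    MPI_Finalize();
    return 0;
}
```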

Cool, I didn't realize that.

thanks,

--td
 Thanks,
   george.

On Nov 5, 2007, at 8:01 AM, Terry Dontje wrote:

Currently, in order to do message tracing, one either has to rely on some error-prone postprocessing of data or replicate some MPI internals up
in the PMPI layer.  It would help Sun's tools group (and I believe U
Dresden also) if Open MPI would create a couple of APIs that exposed the
following:

1. PML message IDs used for a request
2. Context ID for a specific communicator

I could see a couple of ways of providing this information: either by
extending the PERUSE probes or by creating actual functions to which one
would pass a request handle or communicator handle to get the appropriate
data back.

This is just a thought right now, which is why this email is not in an RFC
format. I wanted to get a feel from the community for the interest in such APIs and whether anyone has specific issues with us providing such interfaces. If the responses seem positive, I will follow this message
up with an RFC.

thanks,

--td
_______________________________________________
devel mailing list
de...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel




