Tim

MPI is a library providing support for passing messages among several
distinct processes.  It offers datatype constructors that let an
application describe complex layouts of data in the local memory of a
process, so a message can be sent from or received into such a layout.
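
For example, a strided column of a matrix can be described once and then
sent directly, without first copying it into a contiguous buffer.  A
minimal sketch (the 4x5 array, the destination rank, and the tag are made
up, and MPI_Init is assumed to have been called already):

  #include <mpi.h>

  /* send column `col` of a 4x5 row-major array to rank `dest` */
  void send_column(double a[4][5], int col, int dest) {
      MPI_Datatype column;
      MPI_Type_vector(4,          /* 4 blocks, one per row               */
                      1,          /* 1 element per block                 */
                      5,          /* stride of 5 elements between blocks */
                      MPI_DOUBLE, &column);
      MPI_Type_commit(&column);
      MPI_Send(&a[0][col], 1, column, dest, 0, MPI_COMM_WORLD);
      MPI_Type_free(&column);
  }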

MPI does not have access to decisions made by the C++ compiler or the C++
runtime, so the MPI library cannot deduce the layout for you.  To use MPI
you must either organize the data in some way that is easy to describe with
MPI datatypes or do rather complex datatype constructions for every
message sent or received.
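
The second option looks something like the sketch below for a simple
made-up struct; a class with pointer members, STL containers, or data
members of another class type takes considerably more of this kind of
work:

  #include <mpi.h>
  #include <cstddef>              /* offsetof */

  struct Particle {               /* made-up example type */
      int    id;
      double pos[3];
  };

  /* build a datatype matching Particle and send one instance to `dest` */
  void send_particle(Particle &p, int dest) {
      int          lens[2]  = { 1, 3 };
      MPI_Aint     disps[2] = { (MPI_Aint) offsetof(Particle, id),
                                (MPI_Aint) offsetof(Particle, pos) };
      MPI_Datatype types[2] = { MPI_INT, MPI_DOUBLE };
      MPI_Datatype ptype;
      MPI_Type_create_struct(2, lens, disps, types, &ptype);
      MPI_Type_commit(&ptype);
      /* (to send arrays of Particle, the type would also need to be
         resized to sizeof(Particle) to account for trailing padding) */
      MPI_Send(&p, 1, ptype, dest, 0, MPI_COMM_WORLD);
      MPI_Type_free(&ptype);
  }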

Any support for automatic serialization of C++ objects would need to be in
some sophisticated utility that is not part of MPI.  There may be such
utilities, but I do not think anyone who has been involved in the discussion
knows of one you can use.  I certainly do not.

             Dick

Dick Treumann  -  MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363



                                                                                
                                                              
From:    Tim <timlee...@yahoo.com>
To:      Open MPI Users <us...@open-mpi.org>
Date:    01/29/2010 11:11 AM
Subject: Re: [OMPI users] speed up this problem by MPI
Sent by: users-boun...@open-mpi.org

By serialization, I mean serialization in the context of data storage and
transmission; see http://en.wikipedia.org/wiki/Serialization

For example, if a structure or class contains a pointer to some memory
outside the structure or class, one has to send the contents of that memory
in addition to the structure or class itself, right?
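
Something like this is what I have in mind (just a sketch, with made-up
names):

  #include <mpi.h>

  struct Mesh {
      int     n;          /* number of values                    */
      double *values;     /* points at memory outside the struct */
  };

  /* the pointer value itself is meaningless on the receiving rank, so
     the pointed-to data has to travel as its own message */
  void send_mesh(Mesh &m, int dest, MPI_Comm comm) {
      MPI_Send(&m.n, 1, MPI_INT, dest, 0, comm);
      MPI_Send(m.values, m.n, MPI_DOUBLE, dest, 1, comm);
  }

  void recv_mesh(Mesh &m, int src, MPI_Comm comm) {
      MPI_Recv(&m.n, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
      m.values = new double[m.n];
      MPI_Recv(m.values, m.n, MPI_DOUBLE, src, 1, comm, MPI_STATUS_IGNORE);
  }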

--- On Fri, 1/29/10, Eugene Loh <eugene....@sun.com> wrote:

> From: Eugene Loh <eugene....@sun.com>
> Subject: Re: [OMPI users] speed up this problem by MPI
> To: "Open MPI Users" <us...@open-mpi.org>
> Date: Friday, January 29, 2010, 11:06 AM
> Tim wrote:
>
> > Sorry, my typo. I meant to say OpenMPI documentation.
> >
> Okay.  "Open (space) MPI" is simply an implementation
> of the MPI standard -- e.g.,
http://www.mpi-forum.org/docs/mpi21-report.pdf .
> I imagine an on-line search will turn up a variety of
> tutorials and explanations of that standard.  But the
> standard, itself, is somewhat readable.
>
> > How to send/receive and broadcast objects of a
> > self-defined class and of std::vector?  If using
> > MPI_Type_struct, the setup becomes complicated if the class
> > has various types of data members, and a data member of
> > another class.
> >
> I don't really know any C++, but I guess you're looking at
> it the right way.  That is, use derived MPI data types
> and "it's complicated".
>
> > How to deal with serialization problems?
> >
> Which serialization problems?  You seem to have a
> split/join problem.  The master starts, at some point
> there is parallel computation, then the master does more
> work at the end.
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



