Brian -- is this enough information to complete the blocker defect 
https://svn.open-mpi.org/trac/ompi/ticket/2656?


On Dec 21, 2010, at 2:54 PM, George Bosilca wrote:

> Anyway, back to your question. The MPI and OPAL datatypes use the same 
> indexes for all the OPAL predefined types. Several MPI types map to the same 
> underlying OPAL type: for example, MPI_INT, MPI_INTEGER, and MPI_INT32_T. All 
> MPI types not supported at the OPAL level get their indexes assigned 
> contiguously after the OPAL_DATATYPE_MAX_PREDEFINED upper bound (up to 
> OMPI_DATATYPE_MPI_MAX_PREDEFINED). Moreover, the OPAL layer has been modified 
> to support up to OPAL_DATATYPE_MAX_SUPPORTED datatypes, and this value should 
> be adjusted based on the upper level's requirements (today it is set to 46, as 
> this is the total number of supported MPI datatypes, including the Fortran 
> ones). bdt_used is currently defined as a uint32_t, so obviously there is 
> not enough room to hold a bit for every possible MPI datatype.
> 
> Solution 1: Change bdt_used to a uint64_t. This requires some work, and I 
> would prefer to have some time to see exactly all the implications.
> 
> Solution 2: Quick and dirty, but not the fastest. Instead of walking the 
> bdt_used bitmap, walk the btypes array. If the count at an index is 
> non-zero, then the MPI datatype corresponding to that index is used.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

