Not that I'm aware of; that would be great.

Unlike George, however, I'm not concerned about converting to linear operations 
for attributes.

Attributes are not used often, but when they are:

a) there aren't many of them (so a linear penalty is trivial)
b) they're not expected to be performance-critical

So if it makes the code simpler, I certainly don't mind linear operations.



On Jan 17, 2013, at 9:32 AM, KAWASHIMA Takahiro <rivis.kawash...@nifty.com> wrote:

> George,
> 
> Your idea makes sense.
> Is anyone working on it? If not, I'll try.
> 
> Regards,
> KAWASHIMA Takahiro
> 
>> Takahiro,
>> 
>> Thanks for the patch. I deplore the loss of the hash table in the attribute 
>> management, as the prospect of turning every attribute operation into a 
>> linear-complexity one is not very appealing.
>> 
>> Since you already took decision C, the hash table is no longer relevant at 
>> the communicator destruction stage. Thus, I would have converted the hash 
>> table to an ordered list (ordered by a creation index, a global counter 
>> atomically updated every time an attribute is created) and proceeded to 
>> destroy the attributes in the desired order. That way, instead of a linear 
>> operation for every attribute operation, we only pay a single linear 
>> operation per communicator (and only during the destruction stage).
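>> 
>> To illustrate, a minimal sketch of that idea (hypothetical names and plain C, 
>> not the actual Open MPI structures): each attribute records a creation index 
>> taken from a global counter that would be bumped atomically in the real code, 
>> and the destruction path dumps the entries into an array, sorts them by that 
>> index, and runs the delete callbacks in the required order.
>> 
>>   #include <stdlib.h>
>>   #include <stdint.h>
>> 
>>   typedef struct {
>>       int      keyval;
>>       void    *value;
>>       uint64_t creation_index;   /* assigned when the attribute is set */
>>   } attr_entry_t;
>> 
>>   /* Bumped for every new attribute; an atomic increment in real code,
>>    * a plain one here to keep the sketch short. */
>>   static uint64_t attr_creation_counter = 0;
>> 
>>   static attr_entry_t make_attr(int keyval, void *value)
>>   {
>>       attr_entry_t e = { keyval, value, ++attr_creation_counter };
>>       return e;
>>   }
>> 
>>   static int cmp_creation_index(const void *a, const void *b)
>>   {
>>       const attr_entry_t *x = a;
>>       const attr_entry_t *y = b;
>>       return (x->creation_index > y->creation_index) -
>>              (x->creation_index < y->creation_index);
>>   }
>> 
>>   /* Called once per communicator at destruction time.  This sort is the
>>    * only super-constant cost; get/set/delete keep using the hash table
>>    * during normal operation. */
>>   static void destroy_attrs_in_order(attr_entry_t *entries, size_t n)
>>   {
>>       qsort(entries, n, sizeof(*entries), cmp_creation_index);
>>       for (size_t i = n; i > 0; --i) {
>>           /* run the user delete callback on entries[i - 1]; walking
>>            * backwards gives reverse-creation order, as the MPI_COMM_SELF
>>            * ordering rule asks for */
>>       }
>>   }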
>> 
>>  George.
>> 
>> On Jan 16, 2013, at 16:37, KAWASHIMA Takahiro <rivis.kawash...@nifty.com> wrote:
>> 
>>> Hi,
>>> 
>>> I've implemented ticket #3123 "MPI-2.2: Ordering of attribute deletion
>>> callbacks on MPI_COMM_SELF".
>>> 
>>> https://svn.open-mpi.org/trac/ompi/ticket/3123
>>> 
>>> As this ticket says, attributes had been stored in an unordered hash table.
>>> So I've replaced opal_hash_table_t with opal_list_t and made the necessary
>>> modifications. I've also fixed some issues with multi-threaded concurrent
>>> (get|set|delete)_attr calls.
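>>> 
>>> Roughly, the lookup is now a linear scan along the list, something like this
>>> (a simplified sketch, not the actual find_value code):
>>> 
>>>   typedef struct attr_item {
>>>       struct attr_item *next;
>>>       int               keyval;
>>>       void             *value;
>>>   } attr_item_t;
>>> 
>>>   /* O(n) in the number of attributes attached to the object,
>>>    * instead of the former O(1) hash lookup. */
>>>   static attr_item_t *find_value(attr_item_t *head, int keyval)
>>>   {
>>>       for (attr_item_t *p = head; NULL != p; p = p->next) {
>>>           if (p->keyval == keyval) {
>>>               return p;
>>>           }
>>>       }
>>>       return NULL;   /* caller reports flag = false */
>>>   }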
>>> 
>>> These modifications introduce the following behavior changes:
>>> 
>>> (A) The MPI_(Comm|Type|Win)_(get|set|delete)_attr functions may be slower
>>>     for MPI objects that have many attributes attached.
>>> (B) When the user-defined delete callback function is called, the
>>>     attribute has already been removed from the list. In other words,
>>>     if MPI_(Comm|Type|Win)_get_attr is called from the user-defined
>>>     delete callback function for the same attribute key, it returns
>>>     flag = false.
>>> (C) Even if the user-defined delete callback function returns a non-
>>>     MPI_SUCCESS value, the attribute is not restored to the list.
>>> 
>>> (A) is due to a sequential list search instead of a hash lookup; see the
>>> find_value function for its implementation.
>>> (B) and (C) are due to the atomic deletion of the attribute, which allows
>>> multi-threaded concurrent (get|set|delete)_attr calls under
>>> MPI_THREAD_MULTIPLE; see the ompi_attr_delete function for its
>>> implementation. I think this doesn't matter, because the MPI standard
>>> doesn't specify the behavior in these cases.
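>>> 
>>> The delete path is roughly the following (a simplified, self-contained
>>> sketch; the lock, list type, and names are illustrative, not the actual
>>> ompi_attr_delete internals):
>>> 
>>>   #include <pthread.h>
>>>   #include <stdlib.h>
>>> 
>>>   typedef int (*delete_fn_t)(int keyval, void *value);
>>> 
>>>   typedef struct attr_node {
>>>       struct attr_node *next;
>>>       int               keyval;
>>>       void             *value;
>>>       delete_fn_t       delete_fn;
>>>   } attr_node_t;
>>> 
>>>   typedef struct {
>>>       pthread_mutex_t   lock;
>>>       attr_node_t      *head;
>>>   } attr_list_t;
>>> 
>>>   static int attr_delete(attr_list_t *list, int keyval)
>>>   {
>>>       attr_node_t **pp, *node = NULL;
>>> 
>>>       /* Unlink the entry while holding the lock, so a concurrent
>>>        * get_attr already sees flag = false before the callback runs. */
>>>       pthread_mutex_lock(&list->lock);
>>>       for (pp = &list->head; NULL != *pp; pp = &(*pp)->next) {
>>>           if ((*pp)->keyval == keyval) {
>>>               node = *pp;
>>>               *pp = node->next;
>>>               break;
>>>           }
>>>       }
>>>       pthread_mutex_unlock(&list->lock);
>>> 
>>>       if (NULL == node) {
>>>           return 1;   /* stand-in for MPI_ERR_KEYVAL */
>>>       }
>>> 
>>>       /* The user callback runs on the detached entry; even if it fails,
>>>        * the entry is not put back on the list (behavior C). */
>>>       int err = node->delete_fn(keyval, node->value);
>>>       free(node);
>>>       return err;
>>>   }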
>>> 
>>> The patch against Open MPI trunk is attached. If you like it, please take
>>> it in.
>>> 
>>> Though I'm an employee of a company, this is my independent and private
>>> work done at home; no intellectual property from my company is involved.
>>> If needed, I'll sign the Individual Contributor License Agreement.


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

