If you're going that route, we're probably better off using your original 
"hash"-based solution so people can just assign a character string to "point" 
to their block of data. Otherwise, we get into the problem of slot indexes 
potentially colliding when people are developing on branches.
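
A minimal sketch of what that could look like (these function names are 
hypothetical, just to illustrate the string-keyed approach):

/* hypothetical string-keyed userdata API -- names are illustrative only */
int opal_hwloc_base_set_userdata(hwloc_obj_t obj, const char *key,
                                 void *data);
void *opal_hwloc_base_get_userdata(hwloc_obj_t obj, const char *key);

/* usage: no slot reservation needed, and keys chosen on different
   branches cannot collide as long as the strings differ */
opal_hwloc_base_set_userdata(obj, "btl_openib", my_data);
void *d = opal_hwloc_base_get_userdata(obj, "btl_openib");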


On Oct 3, 2012, at 7:29 AM, Jeff Squyres <jsquy...@cisco.com> wrote:

> On Oct 3, 2012, at 10:22 AM, George Bosilca wrote:
> 
>> In case such functionality becomes necessary, I would suggest we use a 
>> mechanism similar to the attributes in MPI (but without the multi-language 
>> mess). That would allow whoever wants to attach data to an hwloc node to do 
>> so without having to deal with reserving a slot. It might require a little 
>> more memory, but so far the number of nodes in the hwloc data is limited.
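>> 
>> In code, that could look roughly like the MPI keyval pattern (a sketch; 
>> none of these function names exist in the hwloc base today):
>> 
>> /* hypothetical keyval-style API modeled on MPI attributes */
>> int opal_hwloc_base_create_keyval(int *keyval);  /* hands out a unique key */
>> int opal_hwloc_base_set_attr(hwloc_obj_t obj, int keyval, void *attr);
>> int opal_hwloc_base_get_attr(hwloc_obj_t obj, int keyval, void **attr,
>>                              int *found);
>> 
>> /* a consumer asks for a key once at init time... */
>> int my_keyval;
>> opal_hwloc_base_create_keyval(&my_keyval);
>> /* ...and can then attach whatever it wants to any hwloc object */
>> opal_hwloc_base_set_attr(obj, my_keyval, my_data);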
> 
> You mean something like putting this in opal/mca/hwloc/base/base.h:
> 
> typedef enum {
>     OPAL_HWLOC_BASE_RMAPS_BASE,
>     OPAL_HWLOC_BASE_BTL_OPENIB,
>     OPAL_HWLOC_BASE_GEORGE_STUFF,
>     OPAL_HWLOC_BASE_JEFF_STUFF,
>     /* ... */
>     OPAL_HWLOC_BASE_MAX
> } opal_hwloc_base_userdata_consumers_t;
> 
> And then:
> 
> 0. if any new upper-level consumer wants to hang stuff off hwloc userdata, 
> it just adds another enum value
> 1. the hwloc base hangs a (void *opal[OPAL_HWLOC_BASE_MAX]) off each hwloc obj
> 2. each upper-level consumer uses its enum value to set/get its stuff
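> 
> Concretely, that might look like this (a sketch; the struct and field 
> names are just illustrative):
> 
> /* step 1: hang a fixed-size array of (void*) off each obj's userdata */
> typedef struct {
>     void *opal[OPAL_HWLOC_BASE_MAX];
> } opal_hwloc_base_userdata_t;
> 
> /* step 2: each consumer indexes with its own enum value */
> opal_hwloc_base_userdata_t *u =
>     (opal_hwloc_base_userdata_t*) obj->userdata;
> u->opal[OPAL_HWLOC_BASE_BTL_OPENIB] = my_data;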
> 
> Is that what you're thinking?
> 
> 
>> george.
>> 
>> On Oct 3, 2012, at 16:13 , Jeff Squyres <jsquy...@cisco.com> wrote:
>> 
>>> WHAT: allowing multiple entities in the OMPI code base to hang data off 
>>> hwloc_obj->userdata
>>> 
>>> WHY: anticipating that more parts of the OMPI code base will be using the 
>>> hwloc data
>>> 
>>> WHERE: hwloc base
>>> 
>>> WHEN: no real hurry; Ralph and I just identified the potential for this 
>>> issue this morning.  We're not aware of it being an actual problem (yet).
>>> 
>>> MORE DETAIL:
>>> 
>>> The rmaps base (in mpirun) is currently hanging its own data off various 
>>> objects in the hwloc topology tree.  However, it should be noted that the 
>>> hwloc topology tree is a global data structure in each MPI process; 
>>> multiple upper-level entities in the ORTE and OMPI layers may want to hang 
>>> their own userdata off hwloc objects.
>>> 
>>> Ralph and I figured that some functionality could be added to the hwloc 
>>> base to hang an opal_pointer_array off each hwloc object; each array value 
>>> would be a (void*).  Then upper-level entities could reserve a slot in all 
>>> the pointer arrays and store whatever they want in their (void*) slot.
>>> 
>>> For example, if the openib BTL wants to use the hwloc data and hang its own 
>>> userdata off hwloc objects, it can call the hwloc base and reserve a slot.  
>>> The hwloc base will say "Ok, you can have slot 7".  Then the openib BTL can 
>>> always use slot 7 in the opal_pointer_array off any hwloc object.
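>>> 
>>> In rough code (a sketch: opal_pointer_array and its set/get calls are 
>>> real OPAL, but the reserve-a-slot call is hypothetical):
>>> 
>>> /* hypothetical hwloc base call -- done once at component init */
>>> int slot = opal_hwloc_base_reserve_userdata_slot();  /* e.g., slot 7 */
>>> 
>>> /* each hwloc obj's userdata would point to an opal_pointer_array_t */
>>> opal_pointer_array_t *arr = (opal_pointer_array_t*) obj->userdata;
>>> opal_pointer_array_set_item(arr, slot, my_data);
>>> void *d = opal_pointer_array_get_item(arr, slot);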
>>> 
>>> Does this sound reasonable?