WHAT: allowing multiple entities in the OMPI code base to hang data off hwloc_obj->userdata
WHY: anticipating that more parts of the OMPI code base will be using the hwloc data

WHERE: hwloc base

WHEN: no real hurry; Ralph and I just identified the potential for this issue this morning. We're not aware of it being an actual problem (yet).

MORE DETAIL:

The rmaps base (in mpirun) currently hangs its own data off various objects in the hwloc topology tree. However, the hwloc topology tree is a global data structure in each MPI process; multiple upper-level entities in the ORTE and OMPI layers may want to hang their own userdata off hwloc objects.

Ralph and I figured that some functionality could be added to the hwloc base to hang an opal_pointer_array off each hwloc object; each array value would be a (void*). Upper-level entities could then reserve a slot in all the pointer arrays and store whatever they want in their (void*) slot.

For example, if the openib BTL wants to use the hwloc data and hang its own userdata off hwloc objects, it can call the hwloc base and reserve a slot. The hwloc base will say "Ok, you can have slot 7". The openib BTL can then always use slot 7 in the opal_pointer_array off any hwloc object.

Does this sound reasonable? (A rough sketch of the slot idea is below, after my sig.)

-- 
Jeff Squyres
[email protected]
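To make the idea concrete, here is a minimal sketch in plain C. It uses a fixed-size (void*) array as a stand-in for opal_pointer_array_t so it compiles against stock hwloc, and the helper names (userdata_slots_reserve/set/get) are hypothetical, not an existing hwloc base API; they just illustrate the slot-reservation scheme described above.

/*
 * Hedged sketch of the proposed slot-reservation scheme.  A fixed-size
 * void* array stands in for the opal_pointer_array that the hwloc base
 * would hang off each object's userdata.  Cleanup of the per-object
 * arrays is omitted for brevity.
 */
#include <stdlib.h>
#include <stdio.h>
#include <hwloc.h>

#define MAX_USERDATA_SLOTS 16

/* Stand-in for the per-object pointer array hung off obj->userdata */
typedef struct {
    void *slots[MAX_USERDATA_SLOTS];
} userdata_slots_t;

static int next_free_slot = 0;

/* An upper-level entity (e.g., rmaps or the openib BTL) calls this once
 * to reserve its private slot index across all hwloc objects. */
static int userdata_slots_reserve(void)
{
    if (next_free_slot >= MAX_USERDATA_SLOTS) return -1;
    return next_free_slot++;
}

/* Lazily attach the slot array to an hwloc object and store a value. */
static void userdata_slots_set(hwloc_obj_t obj, int slot, void *value)
{
    userdata_slots_t *ud = (userdata_slots_t *) obj->userdata;
    if (NULL == ud) {
        ud = calloc(1, sizeof(*ud));
        obj->userdata = ud;
    }
    ud->slots[slot] = value;
}

static void *userdata_slots_get(hwloc_obj_t obj, int slot)
{
    userdata_slots_t *ud = (userdata_slots_t *) obj->userdata;
    return (NULL == ud) ? NULL : ud->slots[slot];
}

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    /* Two independent consumers each reserve their own slot. */
    int rmaps_slot  = userdata_slots_reserve();
    int openib_slot = userdata_slots_reserve();

    /* Each hangs its data off the same core object without clobbering
     * the other's userdata. */
    hwloc_obj_t core = hwloc_get_obj_by_type(topo, HWLOC_OBJ_CORE, 0);
    if (NULL != core) {
        userdata_slots_set(core, rmaps_slot,  "rmaps data");
        userdata_slots_set(core, openib_slot, "openib data");
        printf("slot %d: %s, slot %d: %s\n",
               rmaps_slot,  (char *) userdata_slots_get(core, rmaps_slot),
               openib_slot, (char *) userdata_slots_get(core, openib_slot));
    }

    hwloc_topology_destroy(topo);
    return 0;
}

The point of the slot indirection is that no consumer ever touches obj->userdata directly, so the existing rmaps usage and any future ORTE/OMPI consumers can coexist on the same global topology tree.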
