Question for everyone (possibly a topic for tomorrow's call...):

hwloc is evolving into a fairly nice package.  It's not ready for inclusion 
into Open MPI yet, but it's getting there.  I predict it will come in somewhere 
early in the 1.5 series (potentially not 1.5.0, though).  hwloc will provide 
two things:

1. A listing of all processors and memory, including caches (and cache sizes!), 
laid out in a map so you can see which processors share which memory (e.g., 
caches).  Open MPI currently does not have this capability.  Additionally, hwloc 
is growing support for including PCI devices in the map; that may or may not 
make it into hwloc v1.0.  (See the sketch after this list.)

2. Cross-platform / OS support.  hwloc currently supports a nice variety of OSs 
and hardware platforms.
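
To make point 1 concrete, here's a minimal sketch of walking the hwloc map.  The 
calls are from the hwloc C API as I understand it today and may shift a bit 
before v1.0, so treat this as illustrative rather than final:

  #include <stdio.h>
  #include <hwloc.h>

  /* Recursively print the object tree: machine, NUMA nodes, caches,
     cores, PUs -- each object carries its type and a cpuset telling you
     which processors it covers. */
  static void print_obj(hwloc_obj_t obj, int depth)
  {
      unsigned i;
      printf("%*s%s (logical index %u)\n", 2 * depth, "",
             hwloc_obj_type_string(obj->type), obj->logical_index);
      for (i = 0; i < obj->arity; i++) {
          print_obj(obj->children[i], depth + 1);
      }
  }

  int main(void)
  {
      hwloc_topology_t topo;
      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);
      print_obj(hwloc_get_root_obj(topo), 0);
      hwloc_topology_destroy(topo);
      return 0;
  }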

Given that hwloc is already cross-platform, do we really need the carto 
framework?  I.e., do we really need multiple carto plugins?  More specifically: 
should we just use hwloc directly -- with no framework? 

Random points:

- I'm about halfway finished with "embedding" code for hwloc like PLPA has, so 
that, for example, all of hwloc's symbols can be prefixed with opal_ or orte_ or 
whatever (see the sketch after this list).  Hence, embedding hwloc in OMPI would 
be "safe".

- If we keep the carto framework, then we'll have to translate from hwloc's map 
to carto's map; there may be subtleties involved in the translation.  

- I guarantee that [much] more thought has been put into the hwloc map data 
structure design than carto's.  :-)  Indeed, to make all of hwloc's data 
available to OMPI, carto's map data structures may end up evolving to look 
pretty much exactly like hwloc's.  In which case -- what's the point of carto?
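
On the embedding point: the symbol prefixing is the usual preprocessor renaming 
game -- a generated header that both the embedded hwloc sources and OMPI callers 
include.  The macro and header names below are placeholders for illustration, 
not the actual embedding code:

  /* opal_hwloc_rename.h -- hypothetical rename header */
  #ifndef OPAL_HWLOC_RENAME_H
  #define OPAL_HWLOC_RENAME_H

  /* Configurable prefix: opal_, orte_, or whatever. */
  #define HWLOC_SYM_PREFIX opal_

  /* Double expansion so HWLOC_SYM_PREFIX is expanded before pasting. */
  #define HWLOC_MUNGE2(a, b) a ## b
  #define HWLOC_MUNGE(a, b)  HWLOC_MUNGE2(a, b)
  #define HWLOC_NAME(name)   HWLOC_MUNGE(HWLOC_SYM_PREFIX, hwloc_ ## name)

  /* One line per public symbol, e.g.: */
  #define hwloc_topology_init HWLOC_NAME(topology_init)
  #define hwloc_topology_load HWLOC_NAME(topology_load)
  /* ...and so on for the rest of the public API... */

  #endif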

Thoughts?

hwloc also provides processor binding functions, so it might also make the 
paffinity framework moot...
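
For reference, binding with hwloc looks roughly like this (again, names from the 
current hwloc C API, subject to change before v1.0) -- the kind of call that 
paffinity components implement per-OS today:

  #include <hwloc.h>

  /* Bind the calling process to the first PU (hardware thread) found in
     the topology.  Returns 0 on success, -1 otherwise. */
  int bind_to_first_pu(void)
  {
      hwloc_topology_t topo;
      hwloc_obj_t pu;
      int rc = -1;

      hwloc_topology_init(&topo);
      hwloc_topology_load(topo);

      pu = hwloc_get_obj_by_type(topo, HWLOC_OBJ_PU, 0);
      if (NULL != pu) {
          rc = hwloc_set_cpubind(topo, pu->cpuset, HWLOC_CPUBIND_PROCESS);
      }

      hwloc_topology_destroy(topo);
      return rc;
  }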

-- 
Jeff Squyres
jsquy...@cisco.com

