Sorry for the delay in replying to this; been caught up in SC09 prep...



On Nov 5, 2009, at 8:22 AM, Brice Goglin wrote:

* PLPA-like API is prefixed with hwloc_plpa_ and all functions get a new
hwloc_topology_t parameter. The problematic ones are:

+ int hwloc_plpa_sched_getaffinity(pid_t pid, hwloc_cpuset_t cpuset);


Hmm. I'm a little confused. If we don't provide a drop-in PLPA replacement API implementation, what's the point of implementing a PLPA-like API? PLPA users will still need to modify their code -- shouldn't we be pointing them to the more-powerful hwloc API instead?

There are certainly some desirable PLPA API features that could be imported into the hwloc API -- but I would think that if people want to keep using the PLPA API, they can. It just won't [ever] be updated. The existing (and future) hwloc API is the migration path forward -- I'm not convinced that providing a new API that's halfway between PLPA and hwloc is worthwhile...

(I'm really sorry that I didn't reply about this earlier! :-( )

It's just a hwloc_get_cpubind(), but we don't have it since it would not
be supported on all OSes. But I think we should add it anyway.
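
Something like this on the caller side would probably be enough (untested
sketch -- hwloc_get_cpubind() does not exist yet, so its signature below is
just a guess modeled on hwloc_set_cpubind(), and the cpuset helper names are
assumed):

    #include <hwloc.h>
    #include <stdio.h>

    int main(void)
    {
        hwloc_topology_t topology;
        hwloc_cpuset_t set;
        char buf[256];

        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        set = hwloc_cpuset_alloc();
        /* Proposed call: retrieve the current binding, failing cleanly on
         * OSes that cannot report it. */
        if (hwloc_get_cpubind(topology, set, 0) == 0) {
            hwloc_cpuset_snprintf(buf, sizeof(buf), set);
            printf("currently bound to %s\n", buf);
        } else {
            printf("retrieving the binding is not supported here\n");
        }

        hwloc_cpuset_free(set);
        hwloc_topology_destroy(topology);
        return 0;
    }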

+ int hwloc_plpa_get_core_flags(hwloc_topology_t topology, int socket_id, int core_id, int *exists, int *online);

It says whether a core (given by core+socket os_index) exists and is
online. First, we don't have topology information about offline
processors. Secondly, on Nehalem you can disable a single thread within
a hyperthreaded core, so an "offline core" doesn't mean much. I would
just vote for returning whether the core exists and remove the online
return value here (see below for more about offline CPUs).


Good point. PLPA was definitely not thought through well with regards to hardware threads. This is another reason not to expose this function in hwloc at all.
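
(If somebody really does need the "does this core exist" check, it's only a
handful of lines on top of the plain hwloc API anyway -- rough, untested
sketch:)

    #include <hwloc.h>

    /* Rough equivalent of the "exists" half of hwloc_plpa_get_core_flags():
     * return 1 if a core with the given OS indexes is in the topology. */
    static int core_exists(hwloc_topology_t topology,
                           unsigned socket_id, unsigned core_id)
    {
        hwloc_obj_t socket = NULL, core;
        unsigned i;

        /* Find the socket whose OS index matches socket_id. */
        while ((socket = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_SOCKET,
                                                    socket)) != NULL) {
            if (socket->os_index != socket_id)
                continue;
            /* Look for a core with the requested OS index inside it. */
            for (i = 0;
                 (core = hwloc_get_obj_inside_cpuset_by_type(topology,
                             socket->cpuset, HWLOC_OBJ_CORE, i)) != NULL;
                 i++)
                if (core->os_index == core_id)
                    return 1;
        }
        return 0;
    }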

* Then we have all count-spec related API, which lets you look for
information about all processors, or all online ones, or all offline ones.

If people are really interested in offline CPUs, they can look at the get_offline_cpuset below. There is no topology information about offline
CPUs on Linux anyway, so I am not sure it's worth trying to manage
offline and online CPUs in a uniform way. I would rather remove the
count-spec argument and only work on available/online/enabled
processors with:

+ int hwloc_plpa_get_processor_data(hwloc_topology_t topology, int *num_processors, int *max_processor_id);

+ int hwloc_plpa_get_processor_id(int processor_num, int *processor_id);


I think Samuel pointed out that some OSes *do* return info about offline CPUs (Solaris?). That would make exposing offline/unavailable/otherwise-not-usable CPUs useful -- you can tell that they're *there*, even if you can't *use* them. If nothing else, it's an excellent diagnostic tool.
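
(For the online-only case Brice describes, though, those two calls are
already nearly one-liners on top of hwloc -- something like the following
untested sketch, where HWLOC_OBJ_PU stands in for whatever the
processor-object constant is called in the current tree:)

    #include <hwloc.h>

    /* Count the processors the topology actually reports (i.e. the
     * available/online ones) and find the largest OS index. */
    static void get_processor_data(hwloc_topology_t topology,
                                   int *num_processors, int *max_processor_id)
    {
        hwloc_obj_t pu = NULL;
        int count = 0, max_id = -1;

        while ((pu = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_PU,
                                                pu)) != NULL) {
            count++;
            if ((int) pu->os_index > max_id)
                max_id = (int) pu->os_index;
        }
        *num_processors = count;
        *max_processor_id = max_id;
    }

    /* Map a logical processor number (0..num_processors-1) to its OS index. */
    static int get_processor_id(hwloc_topology_t topology,
                                int processor_num, int *processor_id)
    {
        hwloc_obj_t pu = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PU,
                                               (unsigned) processor_num);
        if (pu == NULL)
            return -1;
        *processor_id = (int) pu->os_index;
        return 0;
    }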

* Probing

From what I understand, plpa_have_topology_information() tells whether PLPA knows what's in the hardware, while plpa_api_probe() tells whether
binding is supported. We could add:

+ hwloc_topology_support(hwloc_topology_t topology, unsigned *support)

which fills "support" with a bitmask of things like OS is supported,
binding a thread is possible, binding a processor is possible, getting
the binding of a process is possible, ...

Then we could reimplement

+ int hwloc_plpa_have_topology_information(hwloc_topology_t topology);
+ int hwloc_plpa_api_probe(hwloc_topology_t topology);


I think it would be better to have a capabilities vector like you describe later -- might as well unify all this stuff.
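
To make that concrete, the whole probe could reduce to bit tests on one mask
-- e.g. (all of these names are invented; nothing below exists in hwloc
today):

    #include <hwloc.h>

    /* Hypothetical capability bits -- just one shape the mask could take. */
    #define HWLOC_SUPPORT_OS            0x01u  /* topology discovery works on this OS */
    #define HWLOC_SUPPORT_BIND_THREAD   0x02u  /* binding a thread is possible */
    #define HWLOC_SUPPORT_BIND_PROCESS  0x04u  /* binding a whole process is possible */
    #define HWLOC_SUPPORT_GET_BINDING   0x08u  /* retrieving a binding is possible */

    /* Placeholder for the proposed query: the real version would ask the OS
     * backend what it can do; here it conservatively reports nothing so the
     * sketch stays self-contained. */
    static int topology_support_sketch(hwloc_topology_t topology, unsigned *support)
    {
        (void) topology;
        *support = 0;
        return 0;
    }

    /* The PLPA-style probe then just tests bits in that mask. */
    static int plpa_api_probe_sketch(hwloc_topology_t topology)
    {
        unsigned support;
        if (topology_support_sketch(topology, &support) < 0)
            return -1;
        return (support & (HWLOC_SUPPORT_BIND_THREAD |
                           HWLOC_SUPPORT_BIND_PROCESS)) ? 0 : -1;
    }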

* Finally, I plan to reimplement the PLPA tools, either in tests/plpa/
or as real (installed) tools for a transition period.

+ plpa-info already works in my tree. Are there people who really need
it? "lstopo -v -" basically shows the same information and even more (offline CPUs
are not reported in the trunk but I modified my tree to print the number
of offline CPUs and the corresponding cpuset).


I'm ok with not re-implementing plpa-info. That tool still exists and people can use it if they have scripts that depend on its specific output. We should be pushing people to the hwloc executables for all future work -- plpa-info output should be for legacy stuff only (IMHO).

+ plpa-taskset needs a lot of work for converting its own cpuset stuff
into ours. It has an advanced binding syntax that some people may be
used to. hwloc-bind has an advanced but different syntax. Apart from
that, the features are the same.


I think it *might* be worthwhile to convert some of the command line syntax to be supported by hwloc-bind. I (really, really, really) didn't like some of the syntax that was supported, but I stole a bunch of ideas from taskset(1) -- I was trying to make plpa-taskset be a drop-in replacement for taskset. Hence, I had to support its syntax. I'm ok jettisoning much of that now -- hwloc-bind is just much mo' betta than taskset ever imagined.

I like the core@socket ideas, but we might need to think this through a little more for threads -- thread@core@socket? Hmm. Seems weird.
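
(Whichever way we go, the parsing side is trivial -- a toy sketch of the
syntax idea, not what hwloc-bind actually does:)

    #include <stdio.h>

    /* Toy parser for the syntax idea above: returns 3 for thread@core@socket,
     * 2 for core@socket (thread defaults to 0), -1 on anything else. */
    static int parse_location(const char *str, int *thread, int *core, int *socket)
    {
        if (sscanf(str, "%d@%d@%d", thread, core, socket) == 3)
            return 3;
        *thread = 0;
        if (sscanf(str, "%d@%d", core, socket) == 2)
            return 2;
        return -1;
    }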

By the way, I wonder if we want to add public functions converting
between cpusets (0x0f00ffff) and cpulist strings (0-15,24-27)
(plpa-taskset uses something like this).



I found it helpful when embedding plpa in other things -- having this kind of utility gorp function just meant helping other developers who were *using* plpa (e.g., if they expose their own argv command line options for specifying affinity, why not let them use the same string parsing functions that plpa used?). So: +1 on this idea. :-)
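
FWIW the conversion itself is tiny even with no library support at all --
e.g. this standalone sketch turns 0x0f00ffff into "0-15,24-27":

    #include <stdio.h>

    /* Turn a bit mask into a cpulist string like "0-15,24-27". */
    static void mask_to_cpulist(unsigned long mask, char *buf, size_t len)
    {
        size_t used = 0;
        int bit, nbits = (int) (8 * sizeof(mask));

        buf[0] = '\0';
        for (bit = 0; bit < nbits && used < len; bit++) {
            if (mask & (1UL << bit)) {
                int first = bit;
                /* Extend the run of consecutive set bits. */
                while (bit + 1 < nbits && (mask & (1UL << (bit + 1))))
                    bit++;
                used += snprintf(buf + used, len - used, "%s%d",
                                 used ? "," : "", first);
                if (bit > first && used < len)
                    used += snprintf(buf + used, len - used, "-%d", bit);
            }
        }
    }

    int main(void)
    {
        char buf[128];
        mask_to_cpulist(0x0f00fffful, buf, sizeof(buf));
        printf("0x0f00ffff -> %s\n", buf);   /* prints 0-15,24-27 */
        return 0;
    }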

--
Jeff Squyres
jsquy...@cisco.com
