If you really want the old netloc API now, you could try hwloc 2.x with
the old netloc. But that code is certainly not maintained anymore, and
it only works for InfiniBand, while the new netloc should have OPA and
Cray support soon.

Rather, the plan should be for you to tell us what you need from netloc
so that we can re-enable it with a good API. We hear lots of people
saying they are interested in netloc, but *nobody* has ever told us what
they actually want to do with it. And I am not even sure anybody ever
played with the old API. This software cannot move forward unless we
know where it's going. There are many ways to design the netloc API.

* We had an explicit graph API in the old netloc, but that API implied
expensive graph algorithms in the runtimes using it. It seemed unusable
for making decisions at runtime anyway, but again nobody ever tried.
Also, it was rather strange to expose the full graph when you already
know the fabric is a 3D dragonfly on Cray, etc. (see the sketch just
below).
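
To make that cost concrete, here is a minimal, self-contained C sketch
of the kind of graph work a runtime would inherit from an explicit
graph API. None of these names come from netloc; the adjacency matrix
merely stands in for a fabric graph, and the BFS hop count stands in
for a real routing/placement decision.

/* Illustration only: not the old netloc API. A toy fabric graph as an
 * adjacency matrix, plus the BFS the runtime would have to run itself
 * when handed an explicit graph. */
#include <stdio.h>
#include <string.h>

#define NNODES 6

/* adjacency[i][j] = 1 if nodes i and j share a link */
static const int adjacency[NNODES][NNODES] = {
    {0,1,1,0,0,0},
    {1,0,0,1,0,0},
    {1,0,0,1,0,0},
    {0,1,1,0,1,0},
    {0,0,0,1,0,1},
    {0,0,0,0,1,0},
};

/* Breadth-first search: hops from src to dst, or -1 if unreachable */
static int hops(int src, int dst)
{
    int dist[NNODES], queue[NNODES], head = 0, tail = 0;
    memset(dist, -1, sizeof(dist)); /* all-ones bytes == -1 for ints */
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < NNODES; v++)
            if (adjacency[u][v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    return dist[dst];
}

int main(void)
{
    printf("hops(0,5) = %d\n", hops(0, 5)); /* prints 4 on this graph */
    return 0;
}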

* In the new netloc, we're thinking of having higher-level implicit
topologies for each class of fabric (dragonfly, fat-tree, Clos network,
etc.) that require more work on the netloc side but make things easier
for the runtime using it (sketch below). However, that's less portable
than exposing the full graph. I'm not sure which one is best, or whether
both are needed.
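
As a purely hypothetical sketch, assuming such a descriptor existed
(none of these types or functions are real netloc API), an
implicit-topology API could classify the fabric and hand back a few
parameters, so a runtime decision becomes a lookup instead of a graph
traversal:

/* Hypothetical sketch only: nothing here exists in netloc. */
#include <stdio.h>

enum fabric_class { FABRIC_FAT_TREE, FABRIC_DRAGONFLY, FABRIC_CLOS };

struct fabric_desc {
    enum fabric_class class;
    union {
        struct { int levels; int radix; } fat_tree;
        struct { int groups; int routers_per_group; } dragonfly;
    } u;
};

/* The runtime reads a parameter instead of traversing a graph:
 * e.g., how deep to split communicators. */
static int split_depth(const struct fabric_desc *f)
{
    switch (f->class) {
    case FABRIC_FAT_TREE:  return f->u.fat_tree.levels - 1;
    case FABRIC_DRAGONFLY: return 1; /* split at group boundaries */
    default:               return 0;
    }
}

int main(void)
{
    struct fabric_desc f = { .class = FABRIC_FAT_TREE,
                             .u.fat_tree = { .levels = 3, .radix = 36 } };
    printf("split depth = %d\n", split_depth(&f)); /* prints 2 */
    return 0;
}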

* There are also issues regarding node/link failures, etc. How do we
expose topology changes at runtime? Do we have a daemon running as root
in the background, etc.? One possible shape is sketched below.
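
As a hypothetical sketch only (no such callback API exists in netloc),
one way to surface fabric events: the runtime registers a callback and
a monitoring agent (daemon, thread...) injects link/node events into it.

/* Hypothetical sketch: fabric_watch/fabric_inject are invented names
 * that only illustrate the shape of the question. */
#include <stdio.h>

enum fabric_event { LINK_DOWN, LINK_UP, NODE_LOST };

typedef void (*fabric_event_cb)(enum fabric_event ev, int node_id,
                                void *arg);

static fabric_event_cb registered_cb;
static void *registered_arg;

/* Runtime side: subscribe to topology changes */
static void fabric_watch(fabric_event_cb cb, void *arg)
{
    registered_cb = cb;
    registered_arg = arg;
}

/* Agent side: would be called on a real link/node event */
static void fabric_inject(enum fabric_event ev, int node_id)
{
    if (registered_cb)
        registered_cb(ev, node_id, registered_arg);
}

static void on_event(enum fabric_event ev, int node_id, void *arg)
{
    (void)arg;
    printf("event %d on node %d: time to re-route\n", ev, node_id);
}

int main(void)
{
    fabric_watch(on_event, NULL);
    fabric_inject(LINK_DOWN, 42); /* simulated failure */
    return 0;
}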

Lots of questions need to be discussed before we expose a new API in
the wild. Unfortunately, we lost several years because of the lack of
user feedback. I don't want to invest time rushing out a new API only
for MPICH to never actually use it, as has happened with other people
in the past.

Brice




On 04/04/2018 01:36, Balaji, Pavan wrote:
> Brice,
>
> We want to use both hwloc and netloc in MPICH.  What are our options here?
> Move back to hwloc-1.x?  That’d be a bummer because we already invested a lot
> of effort to migrate to hwloc-2.x.
>
>   — Pavan
>
> Sent from my iPhone
>
>> On Apr 3, 2018, at 6:19 PM, Brice Goglin <brice.gog...@inria.fr> wrote:
>>
>> It's not possible now, but that would certainly be considered once
>> people start using the API and linking against libnetloc.
>>
>> Brice
>>
>>
>>
>>
>>> On 03/04/2018 21:34, Madhu, Kavitha Tiptur wrote:
>>> Hi
>>> A follow-up question: is it possible to build netloc along with hwloc in
>>> embedded mode?
>>>
>>>
>>>> On Mar 30, 2018, at 1:34 PM, Brice Goglin <brice.gog...@inria.fr> wrote:
>>>>
>>>> Hello
>>>>
>>>> In 2.0, netloc is still highly experimental. Hopefully, a large rework
>>>> will be merged in git master next month, to be released in hwloc 2.1.
>>>>
>>>> Most of the API from the old standalone netloc was made private when it
>>>> was integrated into hwloc, because there weren't any actual users. The
>>>> API was quite large (things for traversing the graph of both the fabric
>>>> and the servers' internals). We didn't want to expose such a large API
>>>> before getting actual user feedback.
>>>>
>>>> In short, if you need features, please let us know, so that we can
>>>> discuss what to expose in the public headers and how.
>>>>
>>>> Brice
>>>>
>>>>
>>>>
>>>>
>>>>> On 30/03/2018 20:14, Madhu, Kavitha Tiptur wrote:
>>>>> Hi
>>>>>
>>>>> I need some info on the status of the netloc integration with hwloc. I
>>>>> see the include/netloc.h header is almost empty in hwloc 2.0, and lots
>>>>> of functionality is missing compared to the previous standalone netloc
>>>>> release, even in private/netloc.h. Am I missing something here?
>>>>>
>>>>> Thanks
>>>>> Kavitha
>>>>>

_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
