[hwloc-users] hwloc RPM spec file

2010-04-23 Thread Jirka Hladky
Hello,

I have written an hwloc RPM spec file. It's attached.

Thanks
Jirka
Summary:   Portable Hardware Locality - portable abstraction of hierarchical architectures
Name:  hwloc
Version:   1.0a1r1944
Release:   1.0%{?dist}
License:   BSD
Group: Applications/System
URL:	   http://www.open-mpi.org/projects/hwloc/
Source0:   %{name}-%{version}.tar.gz
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
Provides:  hwloc
Requires:  /bin/sh

%description
The Portable Hardware Locality (hwloc) software package provides 
a portable abstraction (across OS, versions, architectures, ...) 
of the hierarchical topology of modern architectures, including 
NUMA memory nodes, sockets, shared caches, cores and simultaneous
multithreading. It also gathers various system attributes such as
cache and memory information. It primarily aims at helping applications
with gathering information about modern computing hardware so as 
to exploit it accordingly and efficiently.

hwloc may display the topology in multiple convenient formats. 
It also offers a powerful programming interface to gather information 
about the hardware, bind processes, and much more.
%package devel
Summary: hwloc headers and shared development libraries
Group:   Development/Libraries
Requires: hwloc = %{version}-%{release}
Provides: hwloc-devel = %{version}-%{release}

%description devel
Headers and shared object symlinks for hwloc.

%prep
%setup -q

%build
%configure
%{__make} %{?_smp_mflags}

%install
%{__rm} -rf %{buildroot}

%{__make} install DESTDIR=%{buildroot} INSTALL="%{__install} -p"

%clean
%{__rm} -rf %{buildroot}

%files
%defattr(-, root, root, -)
%{_bindir}/%{name}*
%{_bindir}/lstopo
%{_mandir}/man7/%{name}*
%{_mandir}/man1/%{name}*
%{_mandir}/man1/lstopo*
%{_datadir}/%{name}/%{name}.dtd
%docdir %{_defaultdocdir}/%{name}
%doc %{_defaultdocdir}/%{name}/*
%doc AUTHORS COPYING NEWS README VERSION
%files devel
%defattr(-, root, root, -)
%{_libdir}/libhwloc*
%dir %{_libdir}/pkgconfig/
%{_libdir}/pkgconfig/hwloc.pc
%{_mandir}/man3/%{name}*
%{_mandir}/man3/HWLOC*
%dir %{_includedir}/%{name}
%{_includedir}/%{name}/*
%{_includedir}/%{name}.h
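For reference, the Name/Version/Release tags above determine the name-version-release (NVR) strings of the packages the build produces (the main package and the -devel subpackage). A small illustrative sketch of the expansion, with %{?dist} left empty since no dist tag is defined here:

```shell
#!/bin/sh
# Sketch: expand the spec's Name/Version/Release tags into the NVR strings
# rpmbuild will produce for the main package and the -devel subpackage.
# %{?dist} expands to nothing when no dist tag is set, so it is omitted.
NAME=hwloc
VERSION=1.0a1r1944
RELEASE=1.0

echo "${NAME}-${VERSION}-${RELEASE}"
echo "${NAME}-devel-${VERSION}-${RELEASE}"
```

With the spec embedded in the release tarball, a single `rpmbuild -ta hwloc-1.0a1r1944.tar.gz` should then build both packages (assuming rpmbuild and the build dependencies are installed; the tarball name is illustrative).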




Re: [hwloc-users] hwloc RPM spec file

2010-04-26 Thread Jirka Hladky
On Monday 26 April 2010 05:21:35 pm Brice Goglin wrote:
> On 23/04/2010 18:09, Jirka Hladky wrote:
> > Hello,
> > 
> > I have written hwloc RPM spec file. It's attached.
> > 
> > Thanks
> > Jirka
> 
> Thanks Jirka, but don't you need some BuildRequires such as the following?
> 
> libX11-devel
> libxml2-devel
> cairo-devel
> ncurses-devel
> 
> 
> Tony (CCed) also worked on RPMs for Fedora in the past (see
> http://koji.fedoraproject.org/koji/taskinfo?taskID=1815736). I don't
> know which one is better. It would be good to have somebody upload hwloc
> in Redhat and Fedora repos at some point.
> 
> Maybe adding the spec file to the SVN could be good too? IIRC, you can
> build RPM packages with a single command line from the tarball thanks to
> this.
> 
> Brice
> ___
> hwloc-users mailing list
> hwloc-us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/hwloc-users


Hi Brice,

I'm using it on a system without X11 installed.

I created the spec as a starting point for packaging hwloc as an RPM. What is
the opinion of others: should we add a dependency on X11 and sacrifice the
ability to run hwloc on systems without X11 installed?

Thanks
Jirka


[hwloc-users] hwloc on systems with more than 64 cpus?

2010-05-14 Thread Jirka Hladky
Hello,

I have tested hwloc on several systems and I was very impressed with results. 
It's a great tool!

The biggest box I have tested it on had 64 CPUs (32 cores with hyper-threading
enabled).

I wonder if somebody has tested it on a box with more than 64 CPUs. If so,
could you please share your results?

Thanks a lot!
Jirka



Re: [hwloc-users] hwloc on systems with more than 64 cpus?

2010-05-17 Thread Jirka Hladky
On Monday 17 May 2010 05:49:01 pm Jeff Squyres wrote:
> On May 17, 2010, at 11:41 AM, Jirka Hladky wrote:
> > BTW, is there any time-plan for hwloc 1.0 to be released?
> 
> There were some trivial changes since rc6; I have one more trivial change
> to make today and then we're probably good to go.

Cool!

Jirka


Re: [hwloc-users] hwloc on systems with more than 64 cpus?

2010-05-27 Thread Jirka Hladky
On Thursday 27 May 2010 11:47:25 pm Brice Goglin wrote:
> On 27/05/2010 23:28, Jirka Hladky wrote:
> >> hwloc-calc doesn't accept input from stdin, it only reads the
> >> command-line. We have a TODO entry about this, I'll work on it soon.
> >> 
> >> For now, you can do:
> >>  hwloc-distrib ... | xargs -n 1 utils/hwloc-calc
> > 
> > I forgot to use the "-n 1" switch in xargs to send only one cpuset per
> > hwloc-calc command.
> > 
> > This works just fine :-)
> > hwloc-distrib --single 8 | xargs -n1 hwloc-calc --taskset
> > 
> > Perhaps you can add this example to hwloc-distrib man page?
> 
> I've added the stdin support to hwloc-calc so I don't think it matters
> anymore: "hwloc-distrib --single 8 | hwloc-calc --taskset" should do
> what you want. I'll add something like this to the manpage.
> 
> Brice

Great, thanks!
Jirka
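The difference the "-n 1" switch makes can be seen without hwloc at all: by default, xargs packs every input token onto a single command line, so hwloc-calc would have received all cpusets in one invocation. A stand-in sketch, using printf and echo in place of hwloc-distrib and hwloc-calc (the masks are made up):

```shell
#!/bin/sh
# Stand-in for `hwloc-distrib --single 8`: one cpuset mask per line.
distrib_output() {
  printf '0x000000ff\n0x0000ff00\n0x00ff0000\n'
}

# Without -n 1: xargs runs the command once, with all three masks as
# arguments (prints a single line).
distrib_output | xargs echo calc:

# With -n 1: one command per mask (prints three lines), which is what
# hwloc-calc expected before it gained stdin support.
distrib_output | xargs -n 1 echo calc:
```

The first pipeline prints one line carrying all three masks; the second prints three lines, one mask each.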



[hwloc-users] hwloc is now available as package for Fedora

2010-08-02 Thread Jirka Hladky
Hi guys,

hwloc is now available as an RPM package for Fedora 12, 13 and 14.

===
http://download.englab.brq.redhat.com/pub/fedora/linux/updates/12/x86_64/hwloc-1.0.2-1.fc12.i686.rpm
http://download.englab.brq.redhat.com/pub/fedora/linux/updates/12/x86_64/hwloc-1.0.2-1.fc12.x86_64.rpm

http://download.englab.brq.redhat.com/pub/fedora/linux/updates/13/x86_64/hwloc-1.0.2-1.fc13.i686.rpm
http://download.englab.brq.redhat.com/pub/fedora/linux/updates/13/x86_64/hwloc-1.0.2-1.fc13.x86_64.rpm
===

It will also be available for Red Hat Enterprise Linux 5 and 6 as part of the
EPEL repositories once enough karma has been collected. You can vote at

https://admin.fedoraproject.org/updates/hwloc-1.0.2-1.el5
http://admin.fedoraproject.org/updates/hwloc-1.0.2-1.fc12
http://admin.fedoraproject.org/updates/hwloc-1.0.2-1.fc13

Thanks
Jirka


[hwloc-users] Fwd: [Fedora Update] [comment] hwloc-1.0.2-1.el5

2010-08-03 Thread Jirka Hladky
Hi all,

Thanks for voting! hwloc is now available for RHEL 5 in the EPEL
repository:

http://download.fedora.redhat.com/pub/epel/5/x86_64/hwloc-1.0.2-1.el5.i386.rpm
http://download.fedora.redhat.com/pub/epel/5/x86_64/hwloc-1.0.2-1.el5.x86_64.rpm

Thanks
Jirka
--- Begin Message ---
The following comment has been added to the hwloc-1.0.2-1.el5 update:

bodhi - 2010-08-02 14:59:04 (karma: 0)
This update has been pushed to stable

To reply to this comment, please visit the URL at the bottom of this mail


 hwloc-1.0.2-1.el5

  Update ID: FEDORA-EPEL-2010-3118
Release: Fedora EPEL 5
 Status: stable
   Type: newpackage
  Karma: 4
   Bugs: 606498 - Review Request: hwloc - portable abstraction of
   : hierarchical architectures
  Notes: Portable Hardware Locality - portable abstraction of hierarchical
   : architectures. The Portable Hardware Locality
   : (hwloc) software package provides   a portable
   : abstraction (across OS, versions, architectures, ...)
   : of the hierarchical topology of modern architectures,
   : including   NUMA memory nodes,  shared caches,
   : processor sockets, processor cores  and processing
   : units (logical processors or "threads"). It also
   : gathers  various system attributes such as cache and
   : memory information. It primarily  aims at helping
   : applications with gathering information about modern
   : computing hardware so as to exploit it accordingly and
   : efficiently. hwloc may display the topology in
   : multiple convenient formats.   It also offers a
   : powerful programming interface (C API) to gather
   : information   about the hardware, bind processes, and
   : much more.
  Submitter: jhladky
  Submitted: 2010-07-27 15:50:47
   Comments: bodhi - 2010-07-27 15:50:52 (karma 0)
 This update has been submitted for testing by jhladky.

 bodhi - 2010-07-27 15:52:01 (karma 0)
 This update has been submitted for stable by jhladky.

 bodhi - 2010-07-29 23:19:03 (karma 0)
 This update has been submitted for testing by ausil.

 bodhi - 2010-07-30 01:04:08 (karma 0)
 This update has been pushed to testing

 jhla...@redhat.com (unauthenticated) - 2010-07-30 08:50:57 (karma 
1)
 Hi all,I have tested hwloc on ~40 different boxes
 on different archs (x86_64, i386, ppc, ia64). It works
 great.Thanks  Jirka

 jhla...@redhat.com (unauthenticated) - 2010-07-30 08:51:48 (karma 
1)
 Hi all, I have tested hwloc on ~40 different boxes on
 different archs (x86_64, i386, ppc, ia64). It works
 great. Thanks Jirka

 jhladky - 2010-07-30 08:55:25 (karma 1)
 I was wondering why my updates are marked as
 "Anonymous" - I forgot to click on Login. Sorry guys.
 Hi all, I have tested hwloc on ~40 different boxes on
 different archs (x86_64, i386, ppc, ia64). It works
 great. Thanks Jirka

 kkola...@redhat.com (unauthenticated) - 2010-08-02 08:58:17 (karma 
1)
 Excellent and useful tool! I run it on ~20 machines.
 Thanks Jirka

 brice.gog...@inria.fr (unauthenticated) - 2010-08-02 09:09:38 
(karma 1)
 Works for me on el5 on various architectures.

 kkolakow - 2010-08-02 09:10:32 (karma 1)
 Excellent and useful tool! I run it on ~20 machines.
 Thanks Jirka

 aokul...@redhat.com (unauthenticated) - 2010-08-02 09:27:51 (karma 
1)
 Wery useful tool - works well for me. Thanks Jirka

 aokuliar - 2010-08-02 09:28:59 (karma 1)
 Very useful tool - works well for me. Thanks Jirka

 bgoglin - 2010-08-02 09:42:42 (karma 1)
 Works for me on el5 on various architectures.

 bodhi - 2010-08-02 14:59:04 (karma 0)
 This update has been pushed to stable

  http://admin.fedoraproject.org/updates/hwloc-1.0.2-1.el5

--- End Message ---


Re: [hwloc-users] [hwloc-announce] hwloc 2.3.0 released

2020-10-02 Thread Jirka Hladky
Great, thank you!

Jirka

On Fri, Oct 2, 2020 at 9:48 AM Brice Goglin  wrote:

>
> On 02/10/2020 at 01:59, Jirka Hladky wrote:
>
>
>> I'll see if I can make things case-insensitive in the tools (not in the C
>> API).
>
> Yes, it would be a nice improvement.  Currently, there is a mismatch
> between different commands.  hwloc-info supports both bandwidth and
> Bandwidth, but hwloc-annotate requires a capital letter.
>
> I just fixed that, and pushed some manpage updates as discussed earlier.
>
> We have several minor issues (spurious runtime warnings) that may justify
> doing a 2.3.1 in the near future, your changes will be in there.
>
> Thanks
>
> Brice
>



-- 
-Jirka

Re: [hwloc-users] [hwloc-announce] hwloc 2.3.0 released

2020-10-01 Thread Jirka Hladky
Hi Brice,

This new feature sounds very interesting!

Add hwloc/memattrs.h for exposing latency/bandwidth information
> between initiators (CPU sets for now) and target NUMA nodes,
> typically on heterogeneous platforms.


If I get it right, I need to have an ACPI HMAT table on the system to use
the new functionality, right?

I have tried the following on Fedora:
acpidump -o acpidump.bin
acpixtract -a acpidump.bin

but no HMAT table is reported. So it seems I'm out of luck, and I
cannot test the new functionality, right?

Also, where can we find the list of attributes supported by --best-memattr?
  --best-memattr  Only display the best target among the local nodes

By trial and error, I have found out that latency and bandwidth are
supported. Are there any others? Could you please add the list to the
"hwloc-info -h" output?

hwloc-info --best-memattr bandwidth
hwloc-info --best-memattr latency

Thanks a lot!
Jirka


On Thu, Oct 1, 2020 at 12:45 AM Brice Goglin  wrote:

> hwloc (Hardware Locality) 2.3.0 is now available for download.
>
>   https://www.open-mpi.org/software/hwloc/v2.3/ 
> 
>
> v2.3.0 brings quite a lot of changes. The biggest one is the addition
> of the memory attribute API to expose hardware information that vendors
> are (slowly) adding to ACPI tables to describe heterogeneous memory
> platforms (mostly DDR+NVDIMMs right now).
>
> The following is a summary of the changes since v2.2.0.
>
> Version 2.3.0
> -
> * API
>   + Add hwloc/memattrs.h for exposing latency/bandwidth information
> between initiators (CPU sets for now) and target NUMA nodes,
> typically on heterogeneous platforms.
> - When available, bandwidths and latencies are read from the ACPI HMAT
>   table exposed by Linux kernel 5.2+.
> - Attributes may also be customized to expose user-defined performance
>   information.
>   + Add hwloc_get_local_numanode_objs() for listing NUMA nodes that are
> local to some locality.
>   + The new topology flag HWLOC_TOPOLOGY_FLAG_IMPORT_SUPPORT causes
> support arrays to be loaded from XML exported with hwloc 2.3+.
> - hwloc_topology_get_support() now returns an additional "misc"
>   array with feature "imported_support" set when support was imported.
>   + Add hwloc_topology_refresh() to refresh internal caches after modifying
> the topology and before consulting the topology in a multithread context.
> * Backends
>   + Add a ROCm SMI backend and a hwloc/rsmi.h helper file for getting
> the locality of AMD GPUs, now exposed as "rsmi" OS devices.
> Thanks to Mike Li.
>   + Remove POWER device-tree-based topology on Linux,
> (it was disabled by default since 2.1).
> * Tools
>   + Command-line options for specifying flags now understand comma-separated
> lists of flag names (substrings).
>   + hwloc-info and hwloc-calc have new --local-memory --local-memory-flags
> and --best-memattr options for reporting local memory nodes and filtering
> by memory attributes.
>   + hwloc-bind has a new --best-memattr option for filtering by memory 
> attributes
> among the memory binding set.
>   + Tools that have a --restrict option may now receive a nodeset or
> some custom flags for restricting the topology.
>   + lstopo now has a --thickness option for changing line thickness in the
> graphical output.
>   + Fix lstopo drawing when autoresizing on Windows 10.
>   + Pressing the F5 key in lstopo X11 and Windows graphical/interactive 
> outputs
> now refreshes the display according to the current topology and binding.
>   + Add a tikz lstopo graphical backend to generate picture easily included 
> into
> LaTeX documents. Thanks to Clement Foyer.
> * Misc
>   + The default installation path of the Bash completion file has changed to
> ${datadir}/bash-completion/completions/hwloc. Thanks to Tomasz Kłoczko.
>
>
> Changes since 2.3.0rc1 are negligible.
> --
> Brice
>
>
> ___
> hwloc-announce mailing list
> hwloc-annou...@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/hwloc-announce



-- 
-Jirka

Re: [hwloc-users] [hwloc-announce] hwloc 2.3.0 released

2020-10-01 Thread Jirka Hladky
Hi Brice,

Thanks for the quick reply! I was able to get access to a KNL server and
have tested it there. It works: bandwidth is annotated and reported :-)
See [1].

It's also possible to add memory attribute using the C API or with
> hwloc-annotate to modify a XML


This is interesting! ACPI tables are often wrong, so having the option to
annotate more accurate data in hwloc is great.

We have a simple C program to measure the bandwidth between NUMA nodes,
producing a table similar to the output of numactl -H (but with values in
GB/s).

node   0   1   2   3
 0:  10  16  16  16
 1:  16  10  16  16
 2:  16  16  10  16
 3:  16  16  16  10

I was trying to annotate it using hwloc-annotate, but I have not succeeded:

lstopo in.xml
hwloc-annotate in.xml out.xml node:0 memattr bandwidth node:0 18
Failed to find memattr by name bandwidth

Is there some example of how to do this?

Also, are there any plans for having a tool, which would measure the memory
bandwidth and annotate the results to XML for later usage with hwloc
commands?

There are 4 standard attributes defined in hwloc/memattrs.h: capacity,
> locality, latency and bandwidth. They are also visible in lstopo -vv or
> lstopo --memattrs. I'll add something in the doc.


Thanks for the hint! Adding a pointer to "lstopo --memattrs" in the
"hwloc-info -h" output would IMHO be sufficient:

--best-memattr  Only display the best target among the local nodes.
See the output of "lstopo --memattrs" for the list of supported attributes.

Thanks a lot!
Jirka

[root@intel-knightslanding-01 hwloc-2.3.0]# utils/lstopo/lstopo-no-graphics
--memattrs
Memory attribute #0 name `Capacity' flags 1
 NUMANode L#0 = 33417318400
 NUMANode L#1 = 16910688256
Memory attribute #1 name `Locality' flags 2
 NUMANode L#0 = 272
 NUMANode L#1 = 272
Memory attribute #2 name `Bandwidth' flags 5
 NUMANode L#0 = 9 from cpuset
0x,0x,0x,0x,0x,0x,0x,0x,0x
(Machine L#0)
 NUMANode L#1 = 36 from cpuset
0x,0x,0x,0x,0x,0x,0x,0x,0x
(Machine L#0)






On Thu, Oct 1, 2020 at 7:28 PM Brice Goglin  wrote:

>
> On 01/10/2020 at 19:16, Jirka Hladky wrote:
>
> Hi Brice,
>
> this new feature sounds very interesting!
>
> Add hwloc/memattrs.h for exposing latency/bandwidth information
>> between initiators (CPU sets for now) and target NUMA nodes,
>> typically on heterogeneous platforms.
>
>
> If I get it right, I need to have an ACPI HMAT table on the system to use
> the new functionality, right?
>
>
> Hello Jirka
>
> It's also possible to add memory attribute using the C API or with
> hwloc-annotate to modify a XML (you may create attribute, or add values for
> a given attribute).
>
>
> I have tried the following on Fedora:
> acpidump -o acpidump.bin
> acpixtract -a acpidump.bin
>
> but there is no HMAT table reported. So it seems I'm out of luck, and I
> cannot test the new functionality, right?
>
>
> Besides KNL (which is too old to have HMAT, but hwloc now provides
> hardwired bandwidth/latency values), the only platforms with heterogeneous
> memories right now are Intel machines with Optane DCPMM (NVDIMMs). Some
> have a HMAT, some don't. If your machine doesn't, it's possible to provide
> a custom HMAT table in the initrd. That's not easy, so adding attribute
> values with hwloc-annotate might be easier.
>
>
>
> Also, where can we find the list of attributes supported by --best-memattr?
>   --best-memattr  Only display the best target among the local nodes
>
>
> There are 4 standard attributes defined in hwloc/memattrs.h: capacity,
> locality, latency and bandwidth. They are also visible in lstopo -vv or
> lstopo --memattrs. I'll add something in the doc.
>
>
>
> By trial and error, I have found out that latency and bandwidth are
> supported. Are there any other? Could you please add the list to hwloc-info
> -h?
>
>
> I could add the default ones, but I'll need to specify that additional
> user-given attributes may exist.
>
> Thanks for the feedback.
>
> Brice
>
>
>
>
> hwloc-info --best-memattr bandwidth
> hwloc-info --best-memattr latency
>
> Thanks a lot!
> Jirka
>
>
> On Thu, Oct 1, 2020 at 12:45 AM Brice Goglin 
> wrote:
>
>> hwloc (Hardware Locality) 2.3.0 is now available for download.
>>
>>  https://www.open-mpi.org/software/hwloc/v2.3/ 
>>
>> v2.3.0 brings quite a lot of changes. The biggest one is the addition
>> of the memory attribute API to expose hardware information that vendors
>> are (slowly) adding to ACPI tables to describe heterogeneous memory
>> pl

Re: [hwloc-users] [hwloc-announce] hwloc 2.3.0 released

2020-10-01 Thread Jirka Hladky
>
> The ACPI SLIT table (reported by numactl -H) was indeed often dumb or even
> wrong. But SLIT wasn't widely used anyway, so vendors didn't care much
> about putting valid info there, it didn't break anything in most
> applications. Hopefully it won't be the case for HMAT because HMAT will be
> the official way to figure out which target memory is fast or not. If
> vendors don't fill it properly, the OS may use HBM or NVDIMMs by default
> instead of DDR, which will likely cause more problems than a broken SLIT.


Right. Even now, SLIT values have an impact on the Linux scheduler.  See
this: https://www.codeblueprint.co.uk/2019/07/12/what-are-slit-tables.html

"The current magic value used inside Linux kernel is 30 – if the NUMA node
distance between two nodes is more than 30, the Linux kernel scheduler will
try not to migrate tasks between them."
https://github.com/torvalds/linux/blob/master/include/linux/topology.h#L60

There's an example at the end of the manpage of hwloc-annotate. It's very
> similar to your line, but you likely need a capital "B" in "Bandwidth".

Yes, it works as expected when used with the capital "B". See [1].

I'll see if I can make things case-insensitive in the tools (not in the C
> API).

Yes, it would be a nice improvement. Currently, there is a mismatch
between commands: hwloc-info accepts both "bandwidth" and "Bandwidth",
but hwloc-annotate requires the capitalized form.

hwloc-info --best-memattr bandwidth
hwloc-info --best-memattr Bandwidth
hwloc-annotate in.xml out.xml node:0 memattr Bandwidth node:0 18 && mv
out.xml in.xml

Merci beaucoup!
Jirka


[1]
lstopo in.xml
hwloc-annotate in.xml out.xml node:0 memattr Bandwidth node:0 18 && mv
out.xml in.xml
hwloc-annotate in.xml out.xml node:0 memattr Bandwidth node:1 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:0 memattr Bandwidth node:2 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:0 memattr Bandwidth node:3 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:1 memattr Bandwidth node:0 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:1 memattr Bandwidth node:1 18 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:1 memattr Bandwidth node:2 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:1 memattr Bandwidth node:3 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:2 memattr Bandwidth node:0 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:2 memattr Bandwidth node:1 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:2 memattr Bandwidth node:2 18 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:2 memattr Bandwidth node:3 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:3 memattr Bandwidth node:0 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:3 memattr Bandwidth node:1 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:3 memattr Bandwidth node:2 9 && mv -f
out.xml in.xml
hwloc-annotate in.xml out.xml node:3 memattr Bandwidth node:3 18 && mv -f
out.xml in.xml
$ lstopo-no-graphics --input in.xml --memattrs
Memory attribute #0 name `Capacity' flags 1
 NUMANode L#0 = 16469168128
 NUMANode L#1 = 16908922880
 NUMANode L#2 = 16881680384
 NUMANode L#3 = 16908451840
Memory attribute #1 name `Locality' flags 2
 NUMANode L#0 = 8
 NUMANode L#1 = 8
 NUMANode L#2 = 8
 NUMANode L#3 = 8
Memory attribute #2 name `Bandwidth' flags 5
 NUMANode L#0 = 18 (NUMANode L#0)
 NUMANode L#0 = 9 (NUMANode L#1)
 NUMANode L#0 = 9 (NUMANode L#2)
 NUMANode L#0 = 9 (NUMANode L#3)
 NUMANode L#1 = 9 (NUMANode L#0)
 NUMANode L#1 = 18 (NUMANode L#1)
 NUMANode L#1 = 9 (NUMANode L#2)
 NUMANode L#1 = 9 (NUMANode L#3)
 NUMANode L#2 = 9 (NUMANode L#0)
 NUMANode L#2 = 9 (NUMANode L#1)
 NUMANode L#2 = 18 (NUMANode L#2)
 NUMANode L#2 = 9 (NUMANode L#3)
 NUMANode L#3 = 9 (NUMANode L#0)
 NUMANode L#3 = 9 (NUMANode L#1)
 NUMANode L#3 = 9 (NUMANode L#2)
 NUMANode L#3 = 18 (NUMANode L#3)
Memory attribute #3 name `Latency' flags 6
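Since the sixteen hwloc-annotate calls in [1] follow a regular pattern (18 GB/s when initiator and target match, 9 GB/s otherwise, on this particular box), they can also be generated with a small loop rather than typed by hand. A convenience sketch; file names and bandwidth values mirror the log above:

```shell
#!/bin/sh
# Emit one hwloc-annotate command per (initiator, target) NUMA node pair.
# Bandwidth values mirror the measurements above: 18 GB/s local (i == j),
# 9 GB/s remote. Inspect the output first, then pipe it to sh to run.
gen_annotate_cmds() {
  for i in 0 1 2 3; do
    for j in 0 1 2 3; do
      if [ "$i" -eq "$j" ]; then bw=18; else bw=9; fi
      echo "hwloc-annotate in.xml out.xml node:$i memattr Bandwidth node:$j $bw && mv -f out.xml in.xml"
    done
  done
}

gen_annotate_cmds        # review the 16 generated commands
# gen_annotate_cmds | sh # then execute them against in.xml
```

Note the capital "Bandwidth", which hwloc-annotate requires.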


On Fri, Oct 2, 2020 at 12:43 AM Brice Goglin  wrote:

> On 01/10/2020 at 22:17, Jirka Hladky wrote:
>
>
> This is interesting! ACPI tables are often wrong - having the option to
> annotate more accurate data to the hwloc is great.
>
>
> The ACPI SLIT table (reported by numactl -H) was indeed often dumb or even
> wrong. But SLIT wasn't widely used anyway, so vendors didn't care much
> about putting valid info there, it didn't break anything in most
> applications. Hopefully it won't be the case for HMAT because HMAT will be
> the official way to figure out which target memory is fast or not. If
> vendors don't fill it properly, the OS may use HBM or NVDIMMs by default
> instead of DDR, which will likely cause more problems than a broken SLIT.
>
>
> We have a 

[hwloc-users] hwloc on IBM Power LPAR VMs

2021-04-26 Thread Jirka Hladky
Hi Brice,

how are you doing? I hope you are fine. We are all well and safe.

I have been running hwloc on an IBM Power LPAR VM with only 1 CPU core and 8
PUs [1]. There is only one NUMA node. The numbering is, however, quite
strange: the NUMA node number is "2". See [2].

hwloc reports "Topology does not contain any NUMA node, aborting!"

$ lstopo
Topology does not contain any NUMA node, aborting!
hwloc_topology_load() failed (No such file or directory).

Could you please double-check whether this behavior is correct? I believe
hwloc should work on this hardware setup.

FYI, we can get it working with the --disallowed option [3] (but I think it
should work without this option as well).

Thanks a lot!
Jirka


[1] $ lscpu
Architecture:ppc64le
Byte Order:  Little Endian
CPU(s):  8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):   1
NUMA node(s):1

[2] There is ONE NUMA node with the number "2":
$ numactl -H
available: 1 nodes (2)
node 2 cpus: 0 1 2 3 4 5 6 7
node 2 size: 7614 MB
node 2 free: 1098 MB
node distances:
node   2
 2:  10

[3]
$ lstopo --disallowed
Machine (7615MB total)
 Package L#0
   NUMANode L#0 (P#0 7615MB)
   L3 L#0 (4096KB) + L2 L#0 (1024KB) + Core L#0
 L1d L#0 (32KB) + L1i L#0 (48KB)
   Die L#0 + PU L#0 (P#0)
   PU L#1 (P#2)
   PU L#2 (P#4)
   PU L#3 (P#6)
 L1d L#1 (32KB) + L1i L#1 (48KB)
   PU L#4 (P#1)
   PU L#5 (P#3)
   PU L#6 (P#5)
   PU L#7 (P#7)
 Block(Disk) "sda"
 Net "env2"

Re: [hwloc-users] Support for Intel's hybrid architecture - can I restrict hwloc-distrib to P cores only?

2023-11-24 Thread Jirka Hladky
Thank you, Brice!

I'm testing it on a Lenovo P1 laptop with an i7-12800H CPU (6 P-cores +
8 E-cores). The --cpukind option solves the problem for me :-)

hwloc-calc  --cpukind 1 all

BTW, Linux scheduling on Intel's hybrid architectures was significantly
improved this year; Intel's patches were released recently in kernel
version 6.6.
https://www.phoronix.com/news/Intel-Hybrid-Cluster-Sched-v3

Jirka

On Fri, Nov 24, 2023 at 9:19 AM Brice Goglin  wrote:

>
> On 24/11/2023 at 08:51, John Hearns wrote:
> > Good question.  Maybe not an answer referring to hwloc.
> > When managing a large NUMA machine, SGI UV, I ran the OS processes in
> > a boot cpuset which was restricted to (AFAIR) the first 8 Cpus.
> > On Intel architectures with E and P cores, could we think of running the OS
> > on E cores only and having the batch system schedule compute tasks on
> > P cores?
> >
>
> That's certainly possible. Linux has things like isolcpus to force
> isolate some cores away from the OS tasks, should work for these
> platforms too (by the way, it's also for ARM big.LITTLE platforms
> running Linux, including Apple M1, etc).
>
> However, keep in mind that splitting P+E CPUs is not like splitting NUMA
> platforms: isolating NUMA node #0 on SGI left tons of cores available
> for HPC tasks on the other NUMA nodes. Current P+E CPUs from Intel usually have more E
> than P, and several models are even 2P+8E, that would be a lot of
> E-cores for the OS and very few P-cores for real apps. Your idea would
> apply better if we rather had 2E+8P but that's not the trend.
>
> Things might be more interesting with MeteorLake which (according to
>
> https://www.hardwaretimes.com/intel-14th-gen-meteor-lake-cpu-cores-almost-identical-to-13th-gen-its-a-tic/)
>
> has P+E as usual but also 2 "low-power E" on the side. There, you could
> put the OS on those 2 Low-Power E.
>
> By the way, the Linux scheduler is supposed to get enhanced to
> automatically find out which tasks to put on P and E core but they've
> been discussing things for a long time and it's hard to know what's
> actually working well already.
>
> Brice
>
>



-- 
-Jirka

[hwloc-users] Support for Intel's hybrid architecture - can I restrict hwloc-distrib to P cores only?

2023-11-23 Thread Jirka Hladky
Hi Brice,

I have a question about hwloc's support for Intel's hybrid
architectures, like in Alder Lake CPUs:
https://en.wikipedia.org/wiki/Alder_Lake

There are P (performance) and E (efficiency) cores. Is hwloc able to detect
which core is which? Can I, for example, restrict hwloc-distrib to P cores
only?

Pseudocode:
hwloc-distrib --single --restrict <> <>

Merci beaucoup!
Jirka