Hello
Good to know, thanks.
There are two ways to work around the issue:
* run "lstopo foo.xml" on a node that doesn't have the bug, then export
HWLOC_XMLFILE=foo.xml and HWLOC_THISSYSTEM=1 on the buggy nodes (that's
what you call a "map" below). This works even with very old hwloc releases.
* export
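Concretely, the XML workaround looks like this (the /tmp path and the guard are illustrative; HWLOC_XMLFILE and HWLOC_THISSYSTEM are documented hwloc environment variables):

```shell
# Step 1: on a node WITHOUT the bug, dump its (correct) topology to XML.
# The guard keeps the snippet harmless where lstopo isn't installed.
if command -v lstopo >/dev/null 2>&1; then
  lstopo /tmp/good-topology.xml
fi

# Step 2: on the buggy nodes, make hwloc load that XML instead of the
# kernel-reported topology, and treat the XML as describing this machine.
export HWLOC_XMLFILE=/tmp/good-topology.xml
export HWLOC_THISSYSTEM=1
```

Without HWLOC_THISSYSTEM=1, hwloc would treat the XML as describing some remote machine and refuse to bind on it.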
Hello
This is a kernel bug for 12-core AMD Bulldozer/Piledriver (62xx/63xx)
processors. hwloc is just complaining about buggy L3 information. lstopo
should report one L3 above each set of 6 cores below each NUMA node.
Instead you get strange L3s with 2, 4 or 6 cores.
If you're not binding tasks
> Most people don't care about cache when binding with MPI, so you may
> just ignore the issue and hide the message by setting
> HWLOC_HIDE_ERRORS=1 in the environment. It may work fine (assuming
> MPIs don't have trouble with asymmetric topologies where some L3s
> are missing).
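The quoted suggestion is a one-liner; note that it only silences the warning, the asymmetric topology is still what hwloc reports (with MPI launchers you would also need the variable forwarded to remote ranks, which is launcher-specific):

```shell
# Silence hwloc's topology error messages; the buggy L3 information
# is still used, only the warning goes away.
export HWLOC_HIDE_ERRORS=1
```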
We do see
Hi Brice,
> Your kernel looks recent enough; can you try upgrading your BIOS? You
> have version 3.0b and there's a 3.5 version at
> http://www.supermicro.com/aplus/motherboard/opteron6000/sr56x0/h8qg6-f.cfm
Flashing the BIOS is not the easiest option for us, since I'd need to
bring down the whole cluster.
Hello,
Your platform reports buggy L3 cache locality information. This is,
unfortunately, very common on AMD 62xx and 63xx platforms.
You have 8 L3 caches (one per 6-core NUMA node, two per socket), but the
platform reports 11 L3 caches instead:
Sockets 1, 2 and 4 each report one L3 above 2 cores, one above 4 cores,
and one above 6 cores.
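If you want to count the L3 objects yourself, one rough way (assuming lstopo's console output prints each cache on its own line; the grep is a heuristic, not an official interface) is:

```shell
# Count L3 lines in lstopo's textual output: 8 is the expected count
# on this machine, 11 is what the buggy platform reports.
# Guarded so the snippet is a no-op where lstopo isn't installed.
if command -v lstopo >/dev/null 2>&1; then
  l3_count=$(lstopo --of console | grep -c L3 || true)
  echo "$l3_count"
fi
```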