Hello
Make sure you use a very recent Linux kernel. There was a bug regarding L3 
caches on 24-core Epyc processors that was fixed in 4.14 and backported 
to 4.13.x (and possibly to distro kernels too).
However, that alone would likely not cause a huge performance difference 
unless your application depends heavily on the L3 cache.
Brice
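
Brice's advice above can be checked from the command line. The sketch below only verifies the running kernel version and points at the data-collection step the hwloc error message itself asks for; the exact kernel version string and hwloc install location will of course vary per system:

```shell
# Print the running kernel release; the L3/NUMA reporting fix for 24-core
# Epyc landed in 4.14 and was backported to 4.13.x stable releases,
# so anything older than that may still show the bogus topology.
uname -r

# If hwloc still warns after a kernel update, collect the files the
# hwloc developers ask for (hwloc-gather-topology ships with hwloc;
# the output path here is just an example):
# hwloc-gather-topology /tmp/epyc-topo
```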


On 24 December 2017 at 12:46:01 GMT+01:00, Matthew Scutter 
<yellowplant...@gmail.com> wrote:
>I'm getting poor performance on OpenMPI tasks on a new AMD 7401P EPYC
>server. I suspect hwloc reporting a poor topology may have something to do
>with it, as I receive the warning below when creating a job.
>Requested data files are available at http://static.skysight.io/out.tgz
>Cheers,
>Matthew
>
>****************************************************************************
>* hwloc 1.11.8 has encountered what looks like an error from the operating
>* system.
>*
>* L3 (cpuset 0x60000060) intersects with NUMANode (P#0 cpuset 0x3f00003f
>* nodeset 0x00000001) without inclusion!
>* Error occurred in topology.c line 1088
>*
>* The following FAQ entry in the hwloc documentation may help:
>*   What should I do when hwloc reports "operating system" warnings?
>*
>* Otherwise please report this error message to the hwloc user's mailing
>* list, along with the files generated by the hwloc-gather-topology script.
>****************************************************************************
>
>depth 0:        1 Machine (type #1)
> depth 1:       1 Package (type #3)
>  depth 2:      4 NUMANode (type #2)
>   depth 3:     10 L3Cache (type #4)
>    depth 4:    24 L2Cache (type #4)
>     depth 5:   24 L1dCache (type #4)
>      depth 6:  24 L1iCache (type #4)
>       depth 7: 24 Core (type #5)
>        depth 8:        48 PU (type #6)
>Special depth -3:       12 Bridge (type #9)
>Special depth -4:       9 PCI Device (type #10)
>Special depth -5:       4 OS Device (type #11)
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
