Hi

Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3

Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering 
how/if Open-MPI can take advantage of this increased maximum packet size. 

ip link show eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether dc:a6:32:60:7b:cd brd ff:ff:ff:ff:ff:ff
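
For context, this is how I've been inspecting the TCP BTL's tunable parameters 
and passing buffer sizes on the command line (the buffer values below are just 
illustrative, not a recommendation, and xhpl is my HPL binary):

```shell
# List the TCP BTL's tunable MCA parameters (socket buffer sizes, etc.)
ompi_info --param btl tcp --level 9

# Example run with explicit socket buffer sizes (values are illustrative)
mpirun --mca btl tcp,self,vader \
       --mca btl_tcp_sndbuf 262144 \
       --mca btl_tcp_rcvbuf 262144 \
       -np 4 ./xhpl
```

As far as I can tell none of these map directly onto the interface MTU, which 
is part of what prompted the question.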

Having run a Linpack benchmark before and after the MTU change, it appears to 
have had minimal impact on performance. I was, probably naively, expecting some 
improvement in the benchmark. Are there any Open-MPI parameters, or compiler 
options, related to MTU size that can be tweaked?
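
As a sanity check on my expectations, here's a back-of-envelope estimate of the 
best-case goodput gain from jumbo frames (assuming IPv4 + TCP with no options, 
and standard Ethernet framing overhead of preamble + header + FCS + 
inter-frame gap):

```python
ETH_OVERHEAD = 38    # per frame: preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
TCPIP_HEADERS = 40   # per packet: IPv4 header 20 + TCP header 20 (no options)

def payload_efficiency(mtu):
    """Fraction of on-the-wire bytes that carry application payload."""
    return (mtu - TCPIP_HEADERS) / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} goodput efficiency")
# MTU 1500: 94.9% goodput efficiency
# MTU 9000: 99.1% goodput efficiency
```

So even a purely bandwidth-bound transfer would only gain about 4% on the wire, 
and HPL is largely compute-bound, which may partly explain the minimal 
difference I measured.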

Kind regards