Hi Alastair,

On Fri, Mar 12, 2021 at 09:32:03AM +0000, Alastair Basden via lustre-discuss wrote:
Reading is more problematic. A request from a client (say 10.0.0.100) for data on OST2 will come in via card 2 (10.0.0.2). A thread on CPU2 (hopefully) will then read the data from OST2, and send it out to the client, 10.0.0.100. However, here, Linux will route the packet through the first card on this subnet, so it will go over the inter-cpu link, and out of IB card 1. And this will be the case even if the thread is pinned on CPU2.

The question then is whether there is a way to configure Lustre to use IB card 2 when sending out data from OST2.

The routing table entries referenced here:
https://wiki.lustre.org/LNet_Router_Config_Guide#ARP_flux_issue_for_MR_node
should, I believe, do exactly this: they ensure that replies go out over the same interface the request was received on.

I think this is sufficient, but maybe someone more knowledgeable on this can confirm.
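For reference, the kind of configuration that wiki page describes looks roughly like this. This is only a sketch: the interface names ib0/ib1, the addresses (taken from your description), and the routing table numbers are placeholders, so check the guide for the exact recipe:

```shell
# Sketch only: ib0/ib1, addresses and table numbers are placeholders.
# Stop the kernel answering ARP for addresses owned by the other interface:
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# Per-interface routing tables, so replies leave via the interface
# that owns the source address rather than the first card on the subnet:
ip route add 10.0.0.0/24 dev ib0 proto kernel scope link src 10.0.0.1 table 100
ip route add 10.0.0.0/24 dev ib1 proto kernel scope link src 10.0.0.2 table 101
ip rule add from 10.0.0.1 table 100
ip rule add from 10.0.0.2 table 101
```

With the `ip rule` entries in place, a reply sourced from 10.0.0.2 is looked up in table 101 and so leaves via ib1, which is the behaviour you were after.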

Cheers,
Matt

On Wed, 10 Mar 2021, Ms. Megan Larko wrote:

Greetings Alastair,

Bonding is supported on InfiniBand, but I believe only in active/passive mode.
I think what you might be looking for, WRT avoiding data travelling over the
inter-CPU link, is CPU "affinity", AKA CPU "pinning".

Cheers,
megan

WRT = "with regards to"
AKA = "also known as"

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org

--
Matt Rásó-Barnett