Hi Tom,

I came across your draft "Scale-Up Network Header (SUNH)" and found it very
interesting and timely. It resonated with a draft I wrote a while ago
(https://datatracker.ietf.org/doc/html/draft-song-ship-edge-05), although
that one has a broader scope.
Here are some comments and thoughts I'd like to share.


  1.  "Some traffic patterns may have a majority of small packets, like for KV 
cache in AI, where packet sizes may commonly be 256 bytes or less."
As far as I know, the token KV cache is pretty big and the packet size is 
limited by the MTU. The small packets in Scale Up network are usually for 
control plane signaling and synchronization, or small memory-semantic 
transactions. Anyway, I agree the header overhead is a big concern in AIDCN and 
the current SUE solution is flawed.
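
To put a number on the overhead concern, here is a minimal
back-of-the-envelope sketch in Python (the header sizes are the standard
Ethernet/IPv6/UDP values; the payload sizes are illustrative, with 256 B
taken from your draft). With a 256-byte payload, the headers alone are
already about 20% of the bytes on the wire:

    # Rough header-overhead arithmetic for small scale-up packets.
    # Header sizes are standard Ethernet/IPv6/UDP; payload sizes are
    # illustrative, with 256 B taken from the SUNH draft.
    ETH = 14 + 4            # Ethernet header + FCS
    IPV6 = 40               # IPv6 base header
    UDP = 8                 # UDP header

    for payload in (64, 128, 256, 1024, 4096):
        hdrs = ETH + IPV6 + UDP
        overhead = hdrs / (hdrs + payload)
        print(f"payload={payload:5d} B  headers={hdrs} B  "
              f"overhead={overhead:.1%}")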


  2.  There are no dedicated scale-up network NICs available. The scale-up
network interface is usually expected to be integrated into the GPU die.
When an Ethernet interface is used, the situation might change, but a GPU
cannot afford two PCIe interfaces to connect two separate NICs. So most
likely the scale-up and scale-out networks will be converged and share the
same NIC. If that's true, compatibility and the ability to interoperate
with standard IP protocols become a necessity.


  3.  I think a 16-bit SUNH address is too long for now, and probably too
short in the future if the scale-up and scale-out networks are converged,
so it's better to maintain flexibility. The SHIP draft I mentioned earlier
provides a flexible scheme and allows the gateway switches to translate
the header-compressed packets into normal IPv4/v6 packets, so that
inter-DC traffic can be seamlessly supported. With this, a routing header
(i.e., compressed SRv6) can also be supported, making the scheme flexible
enough to support SR (in other research, I found that capability to be
very useful in certain DCN topologies).
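
To illustrate the flexibility point, below is a small sketch of one
possible approach (the field layout here is purely hypothetical, not the
actual SHIP or SUNH encoding): a 2-bit length selector lets the same
header carry 16-bit addresses inside a pod today and longer addresses
later, and lets a gateway expand the compressed address back into a
routable IPv6 address by prepending a configured prefix:

    # Hypothetical variable-length compressed address -- NOT the actual
    # SHIP or SUNH encoding. A 2-bit selector picks the address length;
    # the gateway prepends a configured prefix (assumed behavior) to
    # rebuild a full 16-byte IPv6 address for inter-DC traffic.
    ADDR_BYTES = {0: 2, 1: 4, 2: 8, 3: 16}  # selector -> address bytes

    def expand_to_ipv6(selector, short_addr, prefix):
        n = ADDR_BYTES[selector]
        assert len(short_addr) == n and len(prefix) == 16
        return prefix[:16 - n] + short_addr

    # Example: a 16-bit intra-cluster address behind a ULA-style prefix.
    prefix = bytes.fromhex("fd00") + bytes(14)
    print(expand_to_ipv6(0, b"\x12\x34", prefix).hex())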

I think this is an area and an opportunity where the IETF can contribute
to AI networking, and I'm looking forward to contributing to it.

Best regards,
Haoyu
