Hi Troy, 
In my opinion, yes, it does have implications.
Transport-layer protocols can run over multicast, with the network layer acting
as the carrier.
For IB and the other RDMA transports, corresponding adaptation would also be
required.
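To make "multicast as the carrier" concrete, below is a minimal Python sketch
(my own illustration, not from the draft): the sender issues one UDP datagram
to a multicast group, and the network layer replicates it to every receiver
that has joined the group, instead of the application looping over N unicast
sends. The group address and port are illustrative assumptions.

import socket
import struct

GROUP = "239.1.1.1"   # assumption: example administratively scoped group
PORT = 5004           # assumption: example port

def dispatch(payload: bytes) -> None:
    # One send; replication to all group members happens in the network.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    sock.sendto(payload, (GROUP, PORT))

def receive() -> bytes:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Joining the group is what pulls the replicated copy to this host.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _ = sock.recvfrom(65535)
    return data

An RDMA transport would need the analogous adaptation at its own layer; as far
as I know, InfiniBand supports multicast only on unreliable-datagram queue
pairs, which is the kind of constraint I mean by "adaptation" above.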
Thanks,
Sandy

Original


From: DiegoTroyLopez <[email protected]>
To: Zheng Zhang 00007940
Cc: [email protected] <[email protected]>; [email protected] <[email protected]>
Date: 2025-10-24 21:26
Subject: Re: [rtgwg] Fw: New Version Notification for draft-zhang-rtgwg-llmmoe-multicast-01.txt



Hi,
What are the implications of this multicast proposal for RDMA transports like 
InfiniBand / UE / RoCE?
Thanks
---
Troy


On Tue, Oct 21, 2025 at 6:41 PM <[email protected]> wrote:



Hi, 
The LLM MoE multicast use case draft presented at the last IETF side meeting 
has been updated based on the feedback received. 
More reviews and comments are welcome.
Thank you!
Best regards,
Sandy

Original

From: [email protected] <[email protected]>      
To: 段威10036319;Xiaohu Xu <[email protected]>;张征00007940;      
Date:      2025年10月20日 15:22          
Subject: New Version Notification for draft-zhang-rtgwg-llmmoe-multicast-01.txt 
     


A new version of Internet-Draft draft-zhang-rtgwg-llmmoe-multicast-01.txt has
been successfully submitted by Zheng Zhang and posted to the
IETF repository.
 
Name:     draft-zhang-rtgwg-llmmoe-multicast
Revision: 01
Title:    Multicast usage in LLM MoE
Date:     2025-10-20
Group:    Individual Submission
Pages:    7
URL:      https://www.ietf.org/archive/id/draft-zhang-rtgwg-llmmoe-multicast-01.txt
Status:   https://datatracker.ietf.org/doc/draft-zhang-rtgwg-llmmoe-multicast/
HTML:     https://www.ietf.org/archive/id/draft-zhang-rtgwg-llmmoe-multicast-01.html
HTMLized: https://datatracker.ietf.org/doc/html/draft-zhang-rtgwg-llmmoe-multicast
Diff:     https://author-tools.ietf.org/iddiff?url2=draft-zhang-rtgwg-llmmoe-multicast-01
 
Abstract:
 
   Large Language Models (LLMs) have been widely used in recent years.
   The Mixture of Experts (MoE) architecture is one of the features of
   LLMs that enables efficient inference and cost-effective training.
   With the MoE architecture, there are potential multicast use cases
   such as token dispatching.  This draft attempts to analyze these use
   cases.
 
 
 
The IETF Secretariat
 

_______________________________________________
rtgwg mailing list -- [email protected]
To unsubscribe send an email to [email protected]