Hzfengsy commented on code in PR #95:
URL: https://github.com/apache/tvm-rfcs/pull/95#discussion_r1000601436


##########
rfcs/0095-empowering-new-scoped-module.md:
##########
@@ -0,0 +1,74 @@
+- Feature Name: [Process RFC] Empowering New Scoped Module to the Project
+- Start Date: 2022-10-19
+- RFC PR: [apache/tvm-rfcs#95](https://github.com/apache/tvm-rfcs/pull/95)
+
+# Background
+
+Machine Learning Compilation (MLC) is an emerging, fast-developing field. 
With tremendous help from the whole community, it is exciting to see that 
TVM addresses significant needs of developers and has thus become 
widely popular in both academia and industry.
+
+As a rapidly growing field, new needs inevitably emerge as new 
workloads and demands come in. For example, demand has been evolving from 
static-shape compilation to dynamic-shape compilation, and from scalar code to 
tensor cores. As an early player in the field, we have led in some of the most 
important areas, thanks to our close collaboration and agile iteration on 
innovations.
+
+Success comes from listening to the community's demands. As one of the 
first movers in this field, aiming to build the project toward future 
success, we must keep listening and always keep the following 
two goals in mind.
+
+- G0: Maintain stable solutions for existing use cases.
+- G1: Always be open-minded to new demands, land technical commitments in a 
timely manner, continue to reinvent ourselves, and welcome new members to the 
community.
+
+G0 is important because we want to continue making sure we do 
not create disruptions in existing code. In the meantime, enabling G1 in a 
timely manner helps us stay competitive and keep pushing the state of 
the art.
+
+Definition: We categorize a new module as an S0-module if it satisfies the 
following criteria:
+
+- Clearly isolated in its own namespace.
+- Clearly needed by some users in the community.
+- No disruptive changes to the rest of the codebase.
+- Can be easily deprecated by removing the related namespaces.
+- Can be turned off through a feature toggle, so that the rest of the 
modules do not depend on it.
+
+Common practice in most projects is to introduce improvements in different 
phases:
+
+- S0: As defined in this proposal.
+- S1: Evolving the overall solutions to make use of the new component.
+- S2: Deprecating some existing solutions, or evolving them.
+
+Notably, not all changes have to be scoped as S0-level changes. Many 
features involve S1-level changes, which can also be evaluated as part of 
the RFC process. Nevertheless, a clear phased development process helps us 
advance both goals.
+
+Keeping both goals in mind, it is important to enable a mechanism for the 
community to welcome new scoped modules to the project. Enabling new modules is 
one way to quickly advance G1 while keeping the existing G0 part stable. This is 
a common practice established in Apache and non-Apache projects. For example, 
Apache Spark initially started with an optional module, GraphX, for graph 
processing, and follow-up improvements then came along the line of SparkGraph. 
MLIR enables different improvements as dialects, such as TOSA and Torch-MLIR. 
PyTorch enables a new graph-exporting mechanism named TorchFX while also 
maintaining TorchScript for other existing use cases.
+
+In those past practices, the new components were introduced as optional modules 
with minimal changes to existing ones. Notably, there can be perceived overlap 
with some of the existing components; e.g., Torch-MLIR contains features 
around computational graphs similar to TOSA, but also brings orthogonal 
improvements to the overall system. As a related example, TorchFX certainly has 
features that overlap with TorchScript, but also brings in new capabilities. 
While not all of these are ASF projects, they are successful practices that 
have enabled open source projects to thrive in a field similar to ours.

Review Comment:
   Great point. We made a mistake with Torch-MLIR here. The in-tree 
computational graph dialects we intended to refer to are, for example, TOSA 
and Linalg, both of which are in the MLIR tree and serve as computational 
graph dialects.
   
   Thanks for pointing it out. The text is updated!



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
