Lunderberg commented on code in PR #16584:
URL: https://github.com/apache/tvm/pull/16584#discussion_r1499995629


##########
tests/python/tir-base/test_tir_specialize.py:
##########
@@ -324,5 +326,41 @@ def expected(A_data: T.handle("float32")):
     tvm.ir.assert_structural_equal(expected, after)
 
 
+def test_specialization_removes_struct_info():
+    """Reset struct info in specialization
+
+    While a PrimFunc usually doesn't have a `relax.StructInfo`, the
+    field can be populated in some edge cases.  If that PrimFunc is

Review Comment:
   There are a couple of places in MLC 
([example](https://github.com/mlc-ai/mlc-llm/blob/fb4c52c749c4b32594f1e9382113fecd11847893/mlc_llm/transform/fuse_split_rotary_embedding.py#L100))
 where a dynamic PrimFunc is annotated with the `relax.FuncStructInfo`.  
This allows the `R.call_tir` shape inference to work correctly, even after the 
arguments are mutated.
   
   IIRC, the primary reason not to add `FuncStructInfo` to PrimFuncs was that 
PrimFuncs didn't follow the Relax requirements for functional purity.  Since we 
now have purity expressed within `FuncStructInfo`, I think that means we 
could add `FuncStructInfo` to every PrimFunc on construction, without making 
any incorrect statements about the functions.
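   As a toy illustration (deliberately *not* the TVM API — `FuncInfo`, `declared_add`, and `infer_call_shape` are hypothetical names), the idea is that a call site's shape inference can read metadata declared on the callee instead of re-deriving it from the arguments, so later mutation of the arguments doesn't break inference:

```python
from dataclasses import dataclass

@dataclass
class FuncInfo:
    """Toy stand-in for relax.FuncStructInfo: declared output shape plus purity."""
    out_shape: tuple
    pure: bool

def declared_add(n):
    # Toy "PrimFunc": elementwise add over length-n buffers.
    def add(a, b):
        return [x + y for x, y in zip(a, b)]
    # Annotate the function with its declared signature, analogous to
    # populating FuncStructInfo on a PrimFunc at construction time.
    add.info = FuncInfo(out_shape=(n,), pure=True)
    return add

def infer_call_shape(func):
    # "Shape inference" consults the callee's declared info rather than
    # inspecting the (possibly mutated) call arguments.
    return func.info.out_shape

f = declared_add(16)
print(infer_call_shape(f))  # (16,)
```

   Since the annotation also records purity, the toy `pure` flag plays the role the comment describes: attaching it everywhere makes no incorrect statement about the function.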



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
