gemini-code-assist[bot] commented on code in PR #18403:
URL: https://github.com/apache/tvm/pull/18403#discussion_r2474140433


##########
tests/python/relax/test_frontend_from_exported_program.py:
##########
@@ -1198,16 +1190,27 @@ def forward(self, input):
     class expected_tril:
         @R.function
         def main(
-            input_1: R.Tensor((10, 10), dtype="float32")
+            input: R.Tensor((10, 10), dtype="float32")
         ) -> R.Tuple(R.Tensor((10, 10), dtype="float32")):
             # block 0
             with R.dataflow():
-                lv: R.Tensor((10, 10), dtype="float32") = R.tril(input_1, 1)
-                gv: R.Tuple(R.Tensor((10, 10), dtype="float32")) = (lv,)
+                lv: R.Tensor((10,), dtype="int64") = R.arange(
+                    R.prim_value(0), R.prim_value(10), R.prim_value(1), dtype="int64"
+                )
+                lv1: R.Tensor((1, 10), dtype="int64") = R.expand_dims(lv, axis=[-2])
+                lv2: R.Tensor((10,), dtype="int64") = R.arange(
+                    R.prim_value(0), R.prim_value(10), R.prim_value(1), dtype="int64"
+                )
+                lv3: R.Tensor((10, 1), dtype="int64") = R.expand_dims(lv2, axis=[-1])

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   The `arange` call to create `lv2` is redundant, as `lv` created on line 1197 
is identical. You can reuse `lv` to create `lv3` and remove the definition of 
`lv2` to make the IR more concise and avoid a redundant operation. This same 
issue is present in `expected_triu` as well.
   
   ```python
                lv3: R.Tensor((10, 1), dtype="int64") = R.expand_dims(lv, axis=[-1])
   ```
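
The index/compare decomposition that the expected IR above encodes can be sketched with NumPy. This is an illustrative equivalent only, not the frontend's actual lowering; the function name and the final mask/select step are assumptions (the hunk is truncated before the comparison is applied). For a square input, one index vector can be reused for both axes, which is exactly the deduplication the comment suggests:

```python
import numpy as np

# Hypothetical sketch of the tril decomposition in the expected IR:
# build row/column index vectors, broadcast-compare them, and mask.
def tril_via_broadcast(x: np.ndarray, k: int = 0) -> np.ndarray:
    n = x.shape[0]                  # square input, as in the 10x10 test case
    idx = np.arange(n)              # analogous to lv; reused below in place of lv2
    cols = np.expand_dims(idx, -2)  # shape (1, n), analogous to lv1
    rows = np.expand_dims(idx, -1)  # shape (n, 1), analogous to lv3
    mask = cols <= rows + k         # keep entries on/below the k-th diagonal
    return np.where(mask, x, np.zeros_like(x))

x = np.arange(100, dtype=np.float32).reshape(10, 10)
assert np.allclose(tril_via_broadcast(x, 1), np.tril(x, 1))
```

The triu case is symmetric (flip the comparison to `cols >= rows + k`), so the same single-`arange` reuse applies there as well.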



##########
tests/python/relax/test_frontend_from_exported_program.py:
##########
@@ -1217,16 +1220,27 @@ def forward(self, input):
     class expected_triu:
         @R.function
         def main(
-            input_1: R.Tensor((10, 10), dtype="float32")
+            input: R.Tensor((10, 10), dtype="float32")
         ) -> R.Tuple(R.Tensor((10, 10), dtype="float32")):
             # block 0
             with R.dataflow():
-                lv: R.Tensor((10, 10), dtype="float32") = R.triu(input_1, 1)
-                gv: R.Tuple(R.Tensor((10, 10), dtype="float32")) = (lv,)
+                lv: R.Tensor((10,), dtype="int64") = R.arange(
+                    R.prim_value(0), R.prim_value(10), R.prim_value(1), dtype="int64"
+                )
+                lv1: R.Tensor((1, 10), dtype="int64") = R.expand_dims(lv, axis=[-2])
+                lv2: R.Tensor((10,), dtype="int64") = R.arange(
+                    R.prim_value(0), R.prim_value(10), R.prim_value(1), dtype="int64"
+                )
+                lv3: R.Tensor((10, 1), dtype="int64") = R.expand_dims(lv2, axis=[-1])

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   Similar to `expected_tril`, the `arange` call to create `lv2` is redundant. 
You can reuse `lv` (defined on line 1227) to create `lv3` and remove the 
definition of `lv2`.
   
   ```python
                lv3: R.Tensor((10, 1), dtype="int64") = R.expand_dims(lv, axis=[-1])
   ```



##########
tests/python/relax/test_frontend_from_exported_program.py:
##########
@@ -1159,32 +1159,24 @@ def main(
             input: R.Tensor((1, 3, 10, 10), dtype="float32"),
         ) -> R.Tuple(R.Tensor((1, 3, 10, 10), dtype="float32")):
             with R.dataflow():
-                lv: R.Tensor((1, 3, 10, 10), dtype="float32") = R.subtract(
-                    input, R.const(0.5, "float32")
-                )
-                lv1: R.Tensor((1, 3, 10, 10), dtype="bool") = R.greater(
-                    input, R.const(0.5, "float32")
+                lv: R.Tensor((1, 3, 10, 10), dtype="float32") = R.abs(input)
+                lv1: R.Tensor((1, 3, 10, 10), dtype="bool") = R.greater(lv, R.const(0.5, "float32"))
+                lv2: R.Tensor((1, 3, 10, 10), dtype="float32") = R.sign(input)
+                lv3: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(
+                    lv2, R.const(0.5, "float32")
                 )
-                lv2: R.Tensor((1, 3, 10, 10), dtype="float32") = R.astype(lv1, "float32")
-                lv3: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(lv, lv2)
-
-                lv4: R.Tensor((1, 3, 10, 10), dtype="float32") = R.add(
-                    input, R.const(0.5, "float32")
+                lv4: R.Tensor((1, 3, 10, 10), dtype="float32") = R.subtract(input, lv3)
+                lv5: R.Tensor((1, 3, 10, 10), dtype="float32") = R.multiply(
+                    input, R.const(0.0, "float32")
                 )

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   Using `R.zeros_like(input)` would be a more idiomatic and clearer way to express the intent of creating a zero tensor with the same shape and dtype as `input`.
   
   ```python
                lv5: R.Tensor((1, 3, 10, 10), dtype="float32") = R.zeros_like(input)
   ```
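
The rewritten expected IR in this hunk appears to correspond to a shrink-style activation (it matches the arithmetic of PyTorch's softshrink with lambda = 0.5: zero inside the threshold, shift toward zero outside it). A NumPy sketch of the same computation follows; the final `where` select combining the mask with `lv4` and `lv5` is an assumption, since the hunk is truncated before the mask is applied:

```python
import numpy as np

# Hypothetical sketch of the decomposition in the expected IR above:
#   lv  = abs(input); lv1 = lv > 0.5        -> mask
#   lv2 = sign(input); lv3 = lv2 * 0.5      -> per-element shift
#   lv4 = input - lv3; lv5 = zeros          -> the two select branches
def softshrink_decomposed(x: np.ndarray, lambd: float = 0.5) -> np.ndarray:
    mask = np.abs(x) > lambd           # lv / lv1
    shifted = x - np.sign(x) * lambd   # lv2, lv3, lv4
    zeros = np.zeros_like(x)           # lv5 (zeros_like form, per the comment)
    return np.where(mask, shifted, zeros)

x = np.array([-1.0, -0.3, 0.0, 0.3, 1.0], dtype=np.float32)
assert np.allclose(softshrink_decomposed(x), [-0.5, 0.0, 0.0, 0.0, 0.5])
```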



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
