Mousius commented on code in PR #12082:
URL: https://github.com/apache/tvm/pull/12082#discussion_r923158579


##########
tests/python/contrib/test_hexagon/test_2d_physical_buffers.py:
##########
@@ -84,6 +83,12 @@ def target_host(target):
     return tvm.target.Target(target, host=host)
 
 
+# Disabling redefined-outer-name for the whole file as there isn't any easy
+# solution yet to refactor tvm.testing.fixture fixtures that avoid redefining
+# outer variable names
+# pylint: disable=redefined-outer-name

Review Comment:
   Does it not work if we just wrap the two fixtures?
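
   For illustration, a minimal sketch of what scoping the pragma to just the fixtures could look like. The names below (`input_shape`, `input_shape_fixture`) are hypothetical stand-ins, with plain functions in place of `tvm.testing.fixture`:
   ```python
   # Hypothetical sketch: scope the pylint pragma to the fixtures that shadow an
   # outer name, rather than disabling the check for the whole file. Names are
   # illustrative, not taken from the PR.
   input_shape = (1, 16, 16, 32)  # module-level name a fixture would shadow

   # pylint: disable=redefined-outer-name
   def input_shape_fixture(input_shape=(1, 8, 8, 16)):
       """Stand-in for a tvm.testing.fixture that redefines an outer name."""
       return input_shape
   # pylint: enable=redefined-outer-name

   result = input_shape_fixture()
   ```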



##########
tests/python/contrib/test_hexagon/test_benchmark_elemwise_add.py:
##########
@@ -124,6 +119,7 @@ def _get_irmod_elemwise_add(
         # Also: The VTCM budget is a very rough estimate, based only on experience.
         # Assuming that it's even reasonable to use a hard-coded estimate AT ALL, this number
         # may need tweaking.
+        # pylint: disable=unreachable

Review Comment:
   Can we not just remove this and add it when it's functional? 



##########
tests/python/contrib/test_hexagon/test_2d_physical_buffers.py:
##########
@@ -189,18 +196,19 @@ def schedule_args(
         working_layout,
         working_scope,
     ):
-        InputTensor = te.placeholder(input_shape, dtype, name="Input")
-        OutputTensor = te.compute(
-            shape=InputTensor.shape,
-            fcompute=lambda *indices: (2 * InputTensor[indices]).astype(dtype),
+        """Create and return the schedule and input args after applying layout transform"""
+        input_tensor = te.placeholder(input_shape, dtype, name="Input")
+        output_tensor = te.compute(
+            shape=input_tensor.shape,
+            fcompute=lambda *indices: (2 * input_tensor[indices]).astype(dtype),
             name="Output",
         )
-        schedule = te.create_schedule(OutputTensor.op)
+        schedule = te.create_schedule(output_tensor.op)
 
-        WriteCache = schedule.cache_write(OutputTensor, working_scope)
-        ReadCache = schedule.cache_read(InputTensor, working_scope, [WriteCache])
+        write_cache = schedule.cache_write(output_tensor, working_scope)
+        read_cache = schedule.cache_read(input_tensor, working_scope, [write_cache])
 
-        def apply_transform(tensor, layout):
+        def apply_transform(tensor, layout):  # pylint: disable=inconsistent-return-statements

Review Comment:
   Does this work with:
   ```
               if layout == "nhwc":
                   return
   ```
   or 
   ```
               if layout == "nhwc":
                   return None
   ```
   instead? 
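
   For reference, a rough sketch of the explicit-return variant. The branch bodies here are placeholders (not the actual `transform_layout` calls from this test), only the return structure is the point:
   ```python
   # Hypothetical sketch: give every branch an explicit return so pylint's
   # inconsistent-return-statements check passes without a pragma.
   def apply_transform(tensor, layout):
       if layout == "nhwc":
           # An explicit "return None" keeps return statements consistent.
           return None
       if layout == "nchw-8h8w32c-2d":
           return (tensor, layout)  # placeholder for the layout-transform result
       raise RuntimeError(f"Unexpected layout '{layout}'")
   ```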



##########
tests/python/contrib/test_hexagon/test_benchmark_elemwise_add.py:
##########
@@ -151,6 +151,7 @@ def main(a: T.handle, b: T.handle, c: T.handle):
                 for j in range(dim1_size):
                     C[i, j] = A[i, j] + B[i, j]
 
+    # pylint: enable=no-self-argument,invalid-name,missing-function-docstring

Review Comment:
   ```suggestion
        # pylint: enable=no-self-argument,invalid-name,missing-function-docstring
   ```
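
   The indentation matters because a standalone `# pylint: enable=...` comment takes effect in the scope where it appears. A rough illustration, with hypothetical names standing in for the TVMScript module in this test:
   ```python
   # Hypothetical sketch: placing the enable pragma at the same indentation as
   # the disable keeps both scoped to the class body, rather than re-enabling
   # the checks at module level.
   class BenchmarkModule:  # stand-in for the @tvm.script.ir_module class
       # pylint: disable=no-self-argument,invalid-name,missing-function-docstring
       def main(A, B):  # short names allowed while the checks are disabled
           return A + B
       # pylint: enable=no-self-argument,invalid-name,missing-function-docstring

   result = BenchmarkModule.main(2, 3)
   ```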



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
