gemini-code-assist[bot] commented on code in PR #18726:
URL: https://github.com/apache/tvm/pull/18726#discussion_r2777698525


##########
python/tvm/relax/frontend/torch/dynamo.py:
##########
@@ -129,6 +129,10 @@ def to_tvm_tensor(torch_tensor):
         mod = mod.with_attr("target", target)
         mod = seq(mod)
 
+        if device.type == "cuda":
+            with target:
+                mod = tvm.tir.transform.DefaultGPUSchedule()(mod)

Review Comment:
   ![medium](https://www.gstatic.com/codereviewagent/medium-priority.svg)
   
   This correctly adds the default GPU scheduling pass for CUDA, which is a great step for enabling GPU support.
   
   To make this more scalable for other GPU backends (e.g., ROCm, Metal), it would be beneficial to generalize this check, since `tvm.tir.transform.DefaultGPUSchedule` is not CUDA-specific.
   
   A future improvement could be to check for any GPU device type and apply this pass, as sketched below. This would need to be done in conjunction with updating the device/target creation logic around line 112 to support more GPU types.
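   
   As a concrete illustration, a generalized check might look like the following. This is only a sketch, not the PR's code: the `GPU_DEVICE_TYPES` set is an assumption, and the exact `device.type` strings depend on PyTorch's naming (ROCm builds report their devices as `"cuda"`, and Apple Metal appears as `"mps"`), so the device/target creation logic around line 112 would need to be extended in step with this check.
   
   ```python
   # Sketch only: apply the default GPU schedule for any GPU-like torch device.
   # GPU_DEVICE_TYPES is an illustrative assumption; PyTorch's ROCm builds
   # report devices as "cuda" and Apple Metal appears as "mps".
   GPU_DEVICE_TYPES = {"cuda", "mps"}
   
   if device.type in GPU_DEVICE_TYPES:
       with target:
           # DefaultGPUSchedule binds loops to GPU thread/block indices for
           # unscheduled TIR PrimFuncs; it is not specific to one GPU backend.
           mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
   ```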



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

