t-vi commented on a change in pull request #5834:
URL: https://github.com/apache/incubator-tvm/pull/5834#discussion_r442079677



##########
File path: python/tvm/relay/frontend/pytorch.py
##########
@@ -115,64 +115,70 @@ def inplace_add_to_add(op_name):
     return False
 
 
+
 # operator implementation
 def _elemwise(name):
     def _impl(inputs, input_types):
-        # TODO: Figure out a better way to get typing to work for tensor + scalar
-        type0 = input_types[0]
-        if isinstance(inputs[1], _expr.Expr):
-            type0 = input_types[1]
-
-        type1 = input_types[1]
-        if isinstance(inputs[0], _expr.Expr):
-            type1 = input_types[0]
-
-        data0 = _convert_elemwise_input(inputs[0], type0)
-        data1 = _convert_elemwise_input(inputs[1], type1)
-
+        data0, data1 = _pytorch_promote_types(inputs[:2], input_types[:2])

Review comment:
       So the immediate error was caused by NumPy scalars instead of Python numbers. :confused:
   That is easily fixed. I do bump into something about the shape in a `where` statement not being broadcast. That's probably unrelated to the dtype fixes, but I can throw in broadcasting for `where` (the question being whether we want it in Relay, too, or just in the frontend).
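   To illustrate the two issues mentioned above, here is a minimal, hypothetical sketch (not the PR's actual code): `isinstance` checks written for Python numbers do not match NumPy scalars such as `np.float32`, and NumPy's `where` broadcasts its operands, which a frontend that requires matching shapes would have to do explicitly.

```python
import numpy as np

# Hypothetical check in the spirit of scalar handling in the frontend:
# it recognizes Python numbers but misses NumPy scalars.
def is_python_number(x):
    return isinstance(x, (int, float))

py_scalar = 1.0               # plain Python float
np_scalar = np.float32(1.0)   # NumPy scalar; NOT a subclass of float

print(is_python_number(py_scalar))  # True
print(is_python_number(np_scalar))  # False: np.float32 is not int/float

# Broadcasting in `where`: NumPy broadcasts condition and both branches,
# so a (3,) condition combines with a (2, 3) operand and a scalar.
cond = np.array([True, False, True])
out = np.where(cond, np.zeros((2, 3)), 1.0)
print(out.shape)  # (2, 3)
```

   (Note that `np.float64` does subclass Python `float`, so the mismatch above shows up most clearly with `float32` and integer NumPy scalars.)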




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
