shingjan commented on code in PR #11863:
URL: https://github.com/apache/tvm/pull/11863#discussion_r905701026


##########
python/tvm/relay/frontend/pytorch.py:
##########
@@ -1952,6 +1952,13 @@ def expand_as(self, inputs, input_types):
             target = _op.cast(target, t0)
         return _op.broadcast_to_like(inputs[0], target)
 
+    def broadcast_tensors(self, inputs, input_types):
+        tensor_list = inputs[0]
+        import torch
+
+        res_shape = list(torch.broadcast_shapes(*[self.infer_shape(t) for t in tensor_list]))
+        return [_op.broadcast_to(tensor, res_shape) for tensor in tensor_list]

Review Comment:
   Thanks for the review! This is the definition of `broadcast_shapes` in PyTorch's [doc](https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html?highlight=broadcast_shapes#torch.broadcast_shapes):
   ```
   This is equivalent to torch.broadcast_tensors(*map(torch.empty, shapes))[0].shape but avoids the need to create intermediate tensors. This is useful for broadcasting tensors of common batch shape but different rightmost shape, e.g. to broadcast mean vectors with covariance matrices.
   ```
   This workaround avoids the need to create intermediate tensors on our end. `broadcast_tensors` works as follows:
   ```
   a.shape = [1,1,2]
   b.shape = [1,3,1]
   c.shape = [4,1,1]
   x, y, z = broadcast_tensors(a, b, c)
   x = a.broadcast_to(shape=[4,3,2])
   y = b.broadcast_to(shape=[4,3,2])
   z = c.broadcast_to(shape=[4,3,2])
   ```
   meaning the resulting tensors of `broadcast_tensors` all have the same shape `[4,3,2]`. This is a bit different from what we discussed yesterday but seems to be the right broadcasting semantics.
   
   reference: https://numpy.org/doc/stable/user/basics.broadcasting.html
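   For reference, the broadcast-shape computation that `torch.broadcast_shapes` performs follows the NumPy rules linked above: shapes are aligned on the rightmost axis, and per axis the extents must either match or be 1. A minimal pure-Python sketch of those rules (a hypothetical helper, not the actual PyTorch implementation) could look like:
   ```python
   def broadcast_shapes(*shapes):
       """Compute the common broadcast shape per NumPy rules, without
       materializing any intermediate tensors."""
       ndim = max(len(s) for s in shapes)
       # Left-pad every shape with 1s so they all have the same rank.
       padded = [(1,) * (ndim - len(s)) + tuple(s) for s in shapes]
       result = []
       for dims in zip(*padded):
           non_one = {d for d in dims if d != 1}
           # Along each axis, all non-1 extents must agree.
           if len(non_one) > 1:
               raise ValueError(f"shapes {shapes} are not broadcastable")
           result.append(non_one.pop() if non_one else 1)
       return tuple(result)

   # The example above: [1,1,2], [1,3,1], [4,1,1] broadcast to (4, 3, 2).
   print(broadcast_shapes((1, 1, 2), (1, 3, 1), (4, 1, 1)))
   ```
   Each input tensor is then expanded to this single common shape, which is why every output of `broadcast_tensors` has identical shape.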



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
