padreofthegame opened a new issue, #12400:
URL: https://github.com/apache/tvm/issues/12400

   Hello everyone, 
   
   I am new here, so I am not completely familiar with the way of communication here, but I will try to summarize my observations related to relay.squeeze functionality. This report may be a more comprehensive version of #11697, but it also contains an additional scenario that is not related to that one.
   
   ## relay.squeeze with argument `axis=[]`  
   
   ### Scenario 1.
   
   Only `relay.squeeze(axis=[])`

   ```
   def @main(%x: Tensor[(1, 1, 1, 1, 1, 2, 1, 3, 4, 5), float32]) {
     squeeze(%x, axis=[])
   }
   ```
   
   #### Expected behavior
   
   The program should compile without errors.
   
   #### Actual behavior
   
   The program compiles, and the shape of the result tensor is the same as the input tensor shape.
   
   `Input tensor shape:   (1, 1, 1, 1, 1, 2, 1, 3, 4, 5)`
   `Output tensor shape:  (1, 1, 1, 1, 1, 2, 1, 3, 4, 5)`
   
   It looks like the parameter `axis=[]` does not change the shape of the relay.squeeze input tensor (this is the assumption that causes problems later).
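
   For reference, here is a minimal repro sketch of my own (not part of the original report) that checks the inferred output type for this scenario:

   ```python
   import tvm
   from tvm import relay

   x = relay.var("x", shape=(1, 1, 1, 1, 1, 2, 1, 3, 4, 5), dtype="float32")
   y = relay.squeeze(x, axis=[])
   mod = tvm.IRModule.from_expr(relay.Function([x], y))
   mod = relay.transform.InferType()(mod)

   # At commit f5f5a75ae this prints the unchanged 10-dimensional shape,
   # i.e. according to type inference axis=[] squeezes nothing.
   print(mod["main"].ret_type)
   ```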
   
   ### Scenario 2.
   
   `relay.squeeze(axis=[])` + `relay.squeeze(axis=[...])`

   ```
   def @main(%x: Tensor[(1, 1, 1, 1, 1, 2, 1, 3, 4, 5), float32]) {
     %0 = squeeze(%x, axis=[]);
     squeeze(%0, axis=[0])
   }
   ```
   
   #### Expected behavior
   
   The program should compile without errors as long as we pass appropriate values for `axis` in the second squeeze call (for example, using `axis=[5]` should be an error because a dimension that is not equal to 1 cannot be squeezed).
   
   #### Actual behavior
   
   The program does not compile, and there are two types of unexpected errors.
   The first one is obtained for `axis=[0]`, with the error message:
   
   ```
   TVMError:
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   Check failed: GetConstInt(x->shape[val]) == 1 (2 vs. 1) : Dimension 0 must have size 1
   ```
   
   An error of the same type is obtained when any of 1, 2, or 3 is used instead of 0 for `axis` in the previous example.
   
   The second one is obtained for `axis=[4]`, with the error message:
   
   ```
   TVMError:
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   Check failed: (0 <= i && i < p->size_) is false: IndexError: indexing 4 on an array of size 4
   ```
   
   An error of the same type is also obtained when 6 is used instead of 0 for the axis in the previous example.
   
   For the other possible axis values (5, 7, 8, and 9) we get the expected error, since those dimensions of the input tensor are not equal to 1.
   
   From these error descriptions it looks like using `axis=[]` actually changes the shape of the tensor internally (in C++) in the same way as `axis=None` would, but the Python side for some reason does not see this.
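
   For completeness, here is a minimal sketch of my own (not copied from the report) that triggers the first error above by compiling the chained squeeze calls:

   ```python
   import tvm
   from tvm import relay

   x = relay.var("x", shape=(1, 1, 1, 1, 1, 2, 1, 3, 4, 5), dtype="float32")
   y = relay.squeeze(relay.squeeze(x, axis=[]), axis=[0])
   mod = tvm.IRModule.from_expr(relay.Function([x], y))

   # At commit f5f5a75ae this raises the
   # "Check failed: GetConstInt(x->shape[val]) == 1 (2 vs. 1)" error shown above.
   lib = relay.build(mod, target="llvm")
   ```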
   
   ### Scenario 3.
   
   `relay.squeeze(axis=[])` + `relay.nn.bias_add`

   ```
   def @main(%x: Tensor[(1, 1, 1, 1, 5, 2, 1, 3, 4, 5), float32], %b: Tensor[(5), float32]) {
     %0 = squeeze(%x, axis=[]);
     nn.bias_add(%0, %b, axis=4)
   }
   ```
   
   #### Expected behavior
   
   The program should compile without errors as long as we pass an appropriate value for `axis` in the bias_add call (for example, using `axis=0` should be an error because the corresponding dimension must be equal to the dimension of the tensor `b` that is being added).
   
   #### Actual behavior
   
   The program does not compile, and there are two additional types of unexpected errors.
   The first one is obtained for `axis=4`, with the error message:
   
   ```
   TVMError:
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   Check failed: ret == 0 (-1 vs. 0) : Assert fail: (5 == tir.tvm_struct_get(arg2, 0, 4)), arg2.ndim is expected to equal 5
   ```
   
   The second one is obtained for `axis=9`, with the error message:
   
   ```
   TVMError:
   An error occurred during the execution of TVM.
   For more information, please see: https://tvm.apache.org/docs/errors.html
   Check failed: (num_newaxis >= 0) is false: expand_dims only accepts `num_newaxis >= 0`, but got num_newaxis = -5
   ```
   
   For the other possible axis values (0, 1, 2, 3, 5, 6, 7, and 8) we get the expected error, since those dimensions of the input tensor are not equal to the shape of tensor `b`.
   
   From these error descriptions it looks like using `axis=[]` actually changes the input tensor internally, as described in Scenario 2.
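
   Again, a minimal sketch of my own for Scenario 3 (with `axis=4`); since the reported message is an assert produced by the generated code, the sketch builds and then runs the module with the graph executor:

   ```python
   import numpy as np
   import tvm
   from tvm import relay
   from tvm.contrib import graph_executor

   x = relay.var("x", shape=(1, 1, 1, 1, 5, 2, 1, 3, 4, 5), dtype="float32")
   b = relay.var("b", shape=(5,), dtype="float32")
   y = relay.nn.bias_add(relay.squeeze(x, axis=[]), b, axis=4)
   mod = tvm.IRModule.from_expr(relay.Function([x, b], y))

   lib = relay.build(mod, target="llvm")
   m = graph_executor.GraphModule(lib["default"](tvm.cpu()))
   m.set_input("x", np.zeros((1, 1, 1, 1, 5, 2, 1, 3, 4, 5), dtype="float32"))
   m.set_input("b", np.zeros(5, dtype="float32"))

   # At commit f5f5a75ae this ends with the
   # "arg2.ndim is expected to equal 5" assert failure shown above.
   m.run()
   ```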
   
   ## relay.squeeze with argument `axis=Constant`  
   
   ### Scenario 1.
   
   Using `axis = Constant([])` instead of `axis = []` results in the same errors described above when used in the same scenarios.
   
   ### Scenario 2.
   
   Using `axis = Constant(2)`, where the argument of the `Constant` is a `tvm.nd.array` created from a plain integer (in this example, the integer 2).
   
   #### Expected behavior
   
   The following code should pass without errors:
   
   ```python
   import numpy as np
   import tvm
   from tvm import relay, transform, cpu, IRModule

   x_shape = (1, 1, 1, 1, 1, 2, 1, 3, 4, 5)
   x = relay.var('x', shape=x_shape)

   arr = np.array(0)
   ndarr = tvm.nd.array(arr)
   const = relay.Constant(ndarr)

   y = relay.squeeze(x, axis=const)
   mod = IRModule.from_expr(y)
   ```
   
   #### Actual behavior
   
   The program does not compile, and it fails with the following error:
   
   ```
   Traceback (most recent call last):
     File "/home/padre/TVM/Issues/Relay/Bugs/issue_with_relay_squeeze/relay_squeeze_test_constant.py", line 104, in <module>
       y = relay.squeeze(x, axis=const)

     File "/home/padre/TVM_full_repo/tvm/python/tvm/relay/op/transform.py", line 221, in squeeze
       axis = list(axis.data.numpy())

   TypeError: iteration over a 0-d array
   ```
   
   Since `tvm.nd.array` is built on top of `np.array`, which can have shape `()` and is not iterable in that case, this scenario should be explicitly covered.
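
   To illustrate the point with plain NumPy (my own example, not taken from TVM itself):

   ```python
   import numpy as np

   arr = np.array(0)
   print(arr.shape)   # () -- a 0-d array
   print(arr.item())  # 0, extracting the scalar works fine

   # This mirrors what squeeze() does with axis.data.numpy() for a 0-d Constant:
   list(arr)          # TypeError: iteration over a 0-d array
   ```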
   
   ### Environment
   
   Ubuntu 20.04, TVM commit f5f5a75ae compiled with LLVM support.
   
   ### Steps to reproduce
   
   The code can be the same as in #11697; only the Relay definition and the corresponding GraphModule.run call need to be changed according to the scenario in question.
   
   ## Solution
   
   All of the mentioned errors could be fixed by implementing a few small changes in the `squeeze` function defined in the `python/tvm/relay/op/transform.py` file.
   
   #### Change 1
   
   Instead of:

   ```python
   if isinstance(axis, Constant):
       axis = list(axis.data.numpy())
   ```

   should be:

   ```python
   if isinstance(axis, Constant):
       if axis.data.shape:
           axis = list(axis.data.numpy())
       else:
           axis = [axis.data.numpy().item()]
   ```
   
   The inner if statement could also be replaced with a try/except block, depending on which is faster.
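
   A sketch of that try/except variant, under the same assumptions:

   ```python
   if isinstance(axis, Constant):
       try:
           axis = list(axis.data.numpy())
       except TypeError:
           # a 0-d array is not iterable, so wrap the scalar in a list instead
           axis = [axis.data.numpy().item()]
   ```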
   
   #### Change 2
   
   Depending on the desired functionality, one of the following should be added right before the line `return _make.squeeze(data, axis)`.

   If using `axis=[]` should behave the same as using `axis=None`:

   ```python
   if not axis:
       axis = None
   ```

   Or, if using `axis=[]` should not change the input tensor `data`:

   ```python
   if axis == []:
       return data
   ```
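
   For illustration, a possible combined version of `squeeze` with both changes applied could look like the sketch below. It assumes the surrounding structure of the function (the `Expr`/`_dyn_make.squeeze` branch and the final `_make.squeeze` call) stays as it is in `transform.py` today, and it picks the "same as `axis=None`" option for the second change:

   ```python
   def squeeze(data, axis=None):
       """Squeeze axes in the array (sketch of the proposed fix)."""
       if isinstance(axis, Constant):
           if axis.data.shape:
               axis = list(axis.data.numpy())
           else:
               # a 0-d Constant holds a single integer axis
               axis = [axis.data.numpy().item()]
       if isinstance(axis, Expr):
           return _dyn_make.squeeze(data, axis)
       if not axis:
           # treat axis=[] the same as axis=None: squeeze all unit dimensions
           axis = None
       return _make.squeeze(data, axis)
   ```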
   
   

