huenwei-arch opened a new issue, #18752:
URL: https://github.com/apache/tvm/issues/18752

   ### Expected behavior
   
   According to the **ONNX Tile specification (Opset 13+)**, the `repeats` 
input is a 1D tensor of integers. The specification allows `repeats` to be a 
**Graph Input** (a dynamic tensor provided at runtime). TVM should be able to 
handle this by mapping it to a dynamic Relax operator that accepts a tensor for 
the `repeats` argument.
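   
   For reference, ONNX `Tile` follows `numpy.tile` semantics, so the dynamic case should compute the same values as the static one; only the output shape becomes a runtime quantity. A minimal sketch of the expected semantics (plain NumPy, not TVM code):
   
   ```python
   import numpy as np
   
   x = np.ones([2, 3], dtype=np.float32)
   repeats = np.array([2, 2], dtype=np.int64)  # a runtime value in the dynamic case
   
   # ONNX Tile(X, repeats) matches numpy.tile: each output dim is the
   # input dim multiplied by the corresponding repeat count.
   y = np.tile(x, repeats)
   print(y.shape)  # (4, 6)
   ```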
   
   ### Actual behavior
   
   The TVM ONNX frontend incorrectly mandates that `repeats` must be a 
compile-time constant (`Initializer`). 
   
   - **Static Mode**: When `repeats` is an Initializer, the conversion 
succeeds, and the output matches ONNX Runtime.
   - **Dynamic Mode**: When `repeats` is a Graph Input, the conversion fails with an explicit error: `Dynamic reps for Tile are supported yet.` (sic; the message presumably means "are *not* supported yet"). Additionally, the `BlockBuilder` is destroyed with remaining blocks because the exception raised during conversion is unhandled.
   
   **Reproduction Log:**
   ```text
   >>> Testing STATIC repeats mode...
     [ORT] Model is valid. Output shape: (4, 6)
     [TVM] Conversion Successful!
     [Result] Success: Shapes match (4, 6)
   
   >>> Testing DYNAMIC repeats mode...
     [ORT] Model is valid. Output shape: (4, 6)
     [TVM] Conversion Failed!
     Error Message: Dynamic reps for Tile are supported yet.
   src/relax/ir/block_builder.cc:65: Warning: BlockBuilder destroyed with remaining blocks!
   ```
   
   ### Environment
   
   OS: Ubuntu 20.04.6 LTS
   
   TVM Version: 0.19.0 (Relax)
   
   ONNX Version: 1.18.0
   
   ONNX Runtime Version: 1.24.1
   
   NumPy Version: 2.4.2
   
   
   ### Steps to reproduce
   
   ```python
   import onnx
   import numpy as np
   import onnxruntime as ort
   from onnx import helper, TensorProto
   import tvm
   from tvm import relax
   from tvm.relax.frontend.onnx import from_onnx
   
   def create_tile_model(is_dynamic):
       input_shape = [2, 3]
       repeats_val = [2, 2]
       
       x_info = helper.make_tensor_value_info('X', TensorProto.FLOAT, input_shape)
       inputs = [x_info]
       initializers = []
       
       if is_dynamic:
           r_info = helper.make_tensor_value_info('R', TensorProto.INT64, [2])
           inputs.append(r_info)
           tile_inputs = ['X', 'R']
           model_name = 'DynamicTileModel'
       else:
           r_init = helper.make_tensor('R_const', TensorProto.INT64, [2], repeats_val)
           initializers.append(r_init)
           tile_inputs = ['X', 'R_const']
           model_name = 'StaticTileModel'
   
       node = helper.make_node('Tile', inputs=tile_inputs, outputs=['Y'])
       graph = helper.make_graph(
           [node], model_name, inputs,
           [helper.make_tensor_value_info('Y', TensorProto.FLOAT, [None, None])],
           initializer=initializers
       )
       return helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
   
   def test_tvm_tile(is_dynamic):
       mode = "DYNAMIC" if is_dynamic else "STATIC"
       print(f"\n>>> Testing {mode} repeats mode...")
       
       model = create_tile_model(is_dynamic)
       x_np = np.ones([2, 3]).astype(np.float32)
       r_np = np.array([2, 2], dtype=np.int64)
   
       # 1. Verify with ONNX Runtime
       try:
           sess = ort.InferenceSession(model.SerializeToString())
           feeds = {'X': x_np}
           if is_dynamic: feeds['R'] = r_np
           ort_res = sess.run(None, feeds)[0]
           print(f"  [ORT] Model is valid. Output shape: {ort_res.shape}")
       except Exception as e:
           print(f"  [ORT] Model error: {e}")
           return
   
       # 2. Verify with TVM
       try:
           tvm_mod = from_onnx(model)
           print(f"  [TVM] Conversion Successful!")
           
           target = tvm.target.Target("llvm")
           exe = relax.build(tvm_mod, target)
           vm = relax.VirtualMachine(exe, tvm.cpu())
           
           # Using positional arguments for VM call
           args = [tvm.nd.array(x_np)]
           if is_dynamic: args.append(tvm.nd.array(r_np))
           
           tvm_res = vm["main"](*args).asnumpy()
           np.testing.assert_allclose(ort_res, tvm_res)
           print(f"  [Result] Success: Shapes match {tvm_res.shape}")
   
       except Exception as e:
           print(f"  [TVM] Conversion Failed!")
           error_msg = str(e).splitlines()[-1]
           print(f"  Error Message: {error_msg}")
   
   if __name__ == "__main__":
       test_tvm_tile(is_dynamic=False)
       test_tvm_tile(is_dynamic=True)
   ```
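   
   Until dynamic `repeats` is supported in the frontend, one possible workaround (a sketch, and only applicable when the repeat values are in fact known before import) is to rewrite the graph input into an initializer so the converter takes the static path. `freeze_repeats` below is a hypothetical helper written with the standard `onnx` Python API, not an existing TVM utility:
   
   ```python
   import numpy as np
   import onnx
   from onnx import helper, numpy_helper, TensorProto
   
   def freeze_repeats(model, input_name, values):
       """Hypothetical helper: replace dynamic graph input `input_name`
       with an int64 initializer so the converter sees a constant."""
       # Drop the dynamic input from the graph's input list.
       kept = [i for i in model.graph.input if i.name != input_name]
       del model.graph.input[:]
       model.graph.input.extend(kept)
       # Add an initializer with the same name holding the known values.
       model.graph.initializer.append(
           numpy_helper.from_array(np.asarray(values, dtype=np.int64), name=input_name)
       )
       return model
   ```
   
   Applying this to the dynamic model from the repro before calling `from_onnx` should make it behave like the static model, at the cost of baking the repeat counts into the graph.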
   
   ### Triage
   
   * relax:frontend:onnx
   * needs-triage
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

