Hi all, 

I am trying to run inference for an ONNX model. I have read the tutorial "[Compile ONNX Models](https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html#sphx-glr-tutorials-frontend-from-onnx-py)", but in that tutorial only one input is needed:

`tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()`

If I need two inputs, how should I feed them into the network?
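My guess (an assumption, not something the tutorial states) is that the function returned by `intrp.evaluate()` takes the graph inputs as positional arguments, in the order they appear in the model, followed by the parameters as keyword arguments. A minimal pure-Python stand-in for that calling convention (the `two_input_model` function and the `bias` parameter are hypothetical, only illustrating the pattern):

```python
import numpy as np

# Hypothetical stand-in for the function returned by intrp.evaluate():
# graph inputs are passed positionally, model parameters via **params.
def two_input_model(x0, x1, **params):
    # e.g. an ONNX Add graph with a learned "bias" parameter
    return x0 + x1 + params["bias"]

dtype = "float32"
x0 = np.ones((2, 2), dtype=dtype)
x1 = np.full((2, 2), 2.0, dtype=dtype)

# If this assumption holds, the TVM call would look something like:
# tvm_output = intrp.evaluate()(tvm.nd.array(x0), tvm.nd.array(x1), **params).asnumpy()
tvm_output = two_input_model(x0, x1, bias=np.float32(1.0))
```

Is that the right way to do it, or do I need to use named inputs somehow?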





---
[Visit Topic](https://discuss.tvm.ai/t/how-to-feed-more-than-one-input-to-the-network-when-i-run-onnx-model/6858/1) to respond.
