Since TVM is a compiler infrastructure, even though the convolution is defined using 
a Python API, that Python code only *defines* the computation. Before the operator 
runs, the computation is compiled to a backend, e.g. LLVM, OpenCL, or CUDA. So there 
is no Python overhead at inference time.

To get an intuition, you can see [in this 
example](https://docs.tvm.ai/tutorials/tensor_expr_get_started.html) how vector 
addition is defined using the TVM Python API and then compiled into a fast native 
module.
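
For illustration, here is a minimal sketch along the lines of that tutorial, using the `tvm.te` tensor-expression API (exact module paths and method names may vary across TVM versions):

```python
import numpy as np
import tvm
from tvm import te

# Declare the computation C[i] = A[i] + B[i].
# Nothing runs here; this only builds a symbolic description.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Create a schedule and compile to native code via the LLVM backend.
s = te.create_schedule(C.op)
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled module; Python is no longer on the hot path.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), ctx)
b = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), ctx)
c = tvm.nd.array(np.zeros(1024, dtype="float32"), ctx)
fadd(a, b, c)
np.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
```

The same pattern applies to the convolution: Python constructs the tensor expression once, and the generated machine code is what actually executes.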
