tp-nan commented on issue #338: URL: https://github.com/apache/tvm-ffi/issues/338#issuecomment-3650200384
Thanks for all the explanations!

> If you are primarily interested in opaqueness, TVM-FFI has a mechanism to handle all Python opaque types, where all unknown Python objects are translated to: https://github.com/apache/tvm-ffi/blob/main/src/ffi/object.cc#L464-L478

This might be the solution! One approach is to leverage this to implement new syntactic sugar for `make_any` and `any_cast`.

> Would you like to elaborate on what TVM-FFI is used for in this case? For example, is it used as an ABI convention to simplify linking, to exchange tensors with other frameworks (e.g., PyTorch, cuteDSL), or for something else?

TVM-FFI is primarily used on the C++ side for two purposes:

1. To leverage `tvm::ffi::Any` and the built-in types for convenient data conversion and function calls between C++ and Python.
2. To eliminate the dependency on pybind11. Previously, we implemented our own C++–Python interoperability layer based on pybind11 and a custom registration mechanism. Switching to FFI paves the way for exposing a more convenient C API.

<details>
<summary>Simplified backend model (C++ pseudocode)</summary>

Assume `Dict` is a non-copy-on-write (non-COW) version of something like `Map<String, tvm::ffi::Any>` with shared ownership semantics.

```cpp
class SubComputeBackend {
 public:
  void forward(const Array<Dict>& ios) {
    // Gather batched data from `ios`, e.g., NVCV tensors in CV-CUDA.
    std::vector<nvcv::Tensor> batched_data;
    batched_data.reserve(ios.size());
    for (const Dict& io : ios) {
      batched_data.push_back(any_cast<nvcv::Tensor>(io["data"]));
    }

    // Perform batched computation (e.g., TensorRT inference).
    // Execution typically runs on a dedicated CUDA stream to enable concurrency.
    // ...

    // Write the results back; shared ownership semantics make the
    // updates visible to the callers holding the same `Dict`s.
    for (const Dict& io : ios) {
      io["result"] = /* ... */;
    }
  }

  size_t max() { return 4; }
};
```

</details>

> Will the types you mentioned (e.g., `cv::Mat`, `Event`, `Status`) be opaque to TVM-FFI?

Personally, I prefer to keep `cv::Mat` independent. Wrapping it is redundant in a pure C++ call chain: `cv::Mat → Any → cv::Mat`.
We typically use an additional `Mat2Numpy` backend, or handle the conversion automatically, when returning to Python.
