chenjunweii opened a new issue #13434: Inference with C++ using GPU on TX2 got error
URL: https://github.com/apache/incubator-mxnet/issues/13434

I get the following error when I use the C++ API to run inference on a TX2 with CUDA. The same code works on my x86 PC in both the CPU and GPU builds, but it fails on the TX2 with CUDA. The error is raised when I dump the NDArray with:

```cpp
cout << exe->out[0] << endl;
```

By the way, is there an API like `mx.contrib.tensorrt.tensorrt_bind` in the C++ version?

--- Error ---

```
[01:20:50] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v1.3.1. Attempting to upgrade...
[01:20:50] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
[01:20:54] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:97: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
1
terminate called after throwing an instance of 'dmlc::Error'
  what():  [01:20:54] /usr/include/mxnet-cpp/ndarray.hpp:236: Check failed: MXNDArrayWaitToRead(blob_ptr_->handle_) == 0 (-1 vs. 0)

Stack trace returned 7 entries:
[bt] (0) ./image(dmlc::StackTrace[abi:cxx11]()+0x48) [0x40ce94]
[bt] (1) ./image(dmlc::LogMessageFatal::~LogMessageFatal()+0x4c) [0x40d164]
[bt] (2) ./image() [0x410b30]
[bt] (3) ./image() [0x41143c]
[bt] (4) ./image() [0x40a280]
[bt] (5) ./image() [0x40a898]
[bt] (6) /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0x7f76cc18a0]

[1] 22206 abort (core dumped)  ./image
```
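A note on the crash itself: the abort happens because the `CHECK` inside `mxnet-cpp/ndarray.hpp` sees `MXNDArrayWaitToRead` return -1 and throws `dmlc::Error`. MXNet's C API exposes `MXGetLastError()` to retrieve the text of the last failure, so checking the return code yourself (instead of letting the wrapper's `CHECK` abort) surfaces the real GPU-side cause. Below is a minimal sketch of that diagnostic pattern; the two `extern "C"` stubs simulate the MXNet C API so the snippet runs standalone — in a real build you would include `<mxnet/c_api.h>` and link against libmxnet instead, and the error string would be the actual CUDA/cuDNN failure, not the simulated one here.

```cpp
#include <cstdio>
#include <string>

// Stub stand-ins for the real MXNet C API (normally from <mxnet/c_api.h>).
// They simulate a failing WaitToRead so the pattern below is runnable alone.
static const std::string kSimulatedError = "simulated CUDA failure on TX2";
extern "C" int MXNDArrayWaitToRead(void* handle) { (void)handle; return -1; }
extern "C" const char* MXGetLastError() { return kSimulatedError.c_str(); }

// Diagnostic pattern: check the return code yourself instead of letting the
// CHECK inside ndarray.hpp abort the process, then print the underlying cause.
std::string diagnose_wait_to_read(void* handle) {
    if (MXNDArrayWaitToRead(handle) != 0) {
        std::string msg = std::string("WaitToRead failed: ") + MXGetLastError();
        std::fprintf(stderr, "%s\n", msg.c_str());
        return msg;
    }
    return "ok";
}
```

With the real library, `handle` would be the output NDArray's handle (the `blob_ptr_->handle_` named in the check), and the printed message should show which CUDA call failed on the TX2.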
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services