steven12356789 opened a new issue #20134:
URL: https://github.com/apache/incubator-mxnet/issues/20134


   Hi,
   I am using the ResNeSt model from the following link to train my own dataset:
   https://github.com/zhanghang1989/ResNeSt#transfer-learning-models
   I can now convert the model to ONNX without any errors. The problem arises when I try to use TensorRT to speed up inference and build an INT8 engine from the ONNX model in C++.
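   For reference, my INT8 build path looks roughly like this. This is a minimal sketch, not my full program: the model path, the workspace size, and the omitted calibrator are placeholders, and `kSTRICT_TYPES` is the flag that requests strict per-layer precision.

   ```cpp
   #include <cstdio>
   #include <memory>

   #include <NvInfer.h>
   #include <NvOnnxParser.h>

   using namespace nvinfer1;

   // TensorRT 7 objects are released via destroy(), not delete.
   struct TrtDestroy {
       template <typename T>
       void operator()(T* p) const { if (p) p->destroy(); }
   };
   template <typename T>
   using TrtPtr = std::unique_ptr<T, TrtDestroy>;

   // Minimal logger required by the TensorRT API.
   class Logger : public ILogger {
       void log(Severity severity, const char* msg) noexcept override {
           if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
       }
   };

   int main() {
       Logger logger;
       TrtPtr<IBuilder> builder(createInferBuilder(logger));
       TrtPtr<INetworkDefinition> network(builder->createNetworkV2(
           1U << static_cast<uint32_t>(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
       TrtPtr<nvonnxparser::IParser> parser(
           nvonnxparser::createParser(*network, logger));
       parser->parseFromFile("resnest.onnx",  // placeholder model path
                             static_cast<int>(ILogger::Severity::kWARNING));

       TrtPtr<IBuilderConfig> config(builder->createBuilderConfig());
       config->setMaxWorkspaceSize(1ULL << 30);  // 1 GiB, adjust as needed
       config->setFlag(BuilderFlag::kINT8);
       // kSTRICT_TYPES is what makes TensorRT emit the "obeys the requested
       // constraints in strict mode" warnings: when no INT8 kernel exists for
       // a layer, the requested precision is ignored and the fastest
       // implementation is used instead.
       config->setFlag(BuilderFlag::kSTRICT_TYPES);
       // config->setInt8Calibrator(&calibrator);  // calibrator omitted here

       TrtPtr<ICudaEngine> engine(builder->buildEngineWithConfig(*network, *config));
       return engine ? 0 : 1;
   }
   ```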
   
   My terminal shows the following warnings:
   ```
   WARN TRT: No implementation of layer (Unnamed Layer* 69) [Shuffle] + Transpose_52 obeys the requested constraints in strict mode. No conforming implementation was found i.e. requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_68 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_103 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer (Unnamed Layer* 158) [Shuffle] obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer Softmax_116 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer (Unnamed Layer* 160) [Shuffle] + Transpose_117 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_133 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_166 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer (Unnamed Layer* 247) [Shuffle] obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer Softmax_179 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer (Unnamed Layer* 249) [Shuffle] + Transpose_180 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_196 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   WARN TRT: No implementation of layer ReduceSum_229 obeys the requested constraints in strict mode. No conforming implementation was found i.e requested layer computation precision and output precision types are ignored, using the fastest implementation. trt_utils.cpp:253
   ```
   
   So,
   **Does this mean that I cannot use INT8?**
   
   ## Environment
   - onnx 1.7.0
   - onnxruntime 1.5.2
   - TensorRT 7.2.1.4
   - CUDA 11.1
   

