knjwhn opened a new issue #16749: Ask for advice about using my int8gemm 
URL: https://github.com/apache/incubator-mxnet/issues/16749
 
 
   Hello everyone.
   
   I implemented a GEMM function for the u8s8s32 and s8s8s32 data types, with the same interface as OpenBLAS, and I would like to use it in MXNet. I understand that the int8 GEMM can only be used after quantization, but when I looked into the source code I found that the quantized convolution code does not include a GEMM path; it links to the regular convolution code, which works on float data. The only quantization code that uses a GEMM is in fully_connected, which calls the MKL/MKL-DNN routine cblas_gemm_s8u8s32. However, that path requires MKL or MKL-DNN, and I want to avoid them and use my own GEMM function instead. Can anyone give me some suggestions?
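   
   To make the question concrete, here is a minimal, self-contained sketch of what a u8s8s32 GEMM with a CBLAS-style interface could look like. The exact parameter list of my routine is not shown in this issue, so the signature below (row-major layout, no transposes, no zero-point offsets, hypothetical name `gemm_u8s8s32`) is only an illustration, not the real implementation:
   
   ```cpp
   // Reference u8s8s32 GEMM sketch: C (m x n, int32) =
   //   alpha * A (m x k, uint8) * B (k x n, int8) + beta * C.
   // Row-major, no transposes or zero-point offsets; signature is illustrative only.
   #include <cstdint>
   #include <cstdio>
   #include <vector>
   
   void gemm_u8s8s32(int m, int n, int k, int32_t alpha,
                     const uint8_t* A, int lda,
                     const int8_t* B, int ldb,
                     int32_t beta, int32_t* C, int ldc) {
     for (int i = 0; i < m; ++i) {
       for (int j = 0; j < n; ++j) {
         int32_t acc = 0;  // accumulate int8 products in 32-bit to avoid overflow
         for (int p = 0; p < k; ++p) {
           acc += static_cast<int32_t>(A[i * lda + p]) *
                  static_cast<int32_t>(B[p * ldb + j]);
         }
         C[i * ldc + j] = alpha * acc + beta * C[i * ldc + j];
       }
     }
   }
   
   int main() {
     // 2x3 (uint8) times 3x2 (int8) toy example.
     std::vector<uint8_t> A = {1, 2, 3, 4, 5, 6};
     std::vector<int8_t>  B = {1, -1, 2, -2, 3, -3};
     std::vector<int32_t> C(4, 0);
     gemm_u8s8s32(2, 2, 3, 1, A.data(), 3, B.data(), 2, 0, C.data(), 2);
     std::printf("%d %d\n%d %d\n", C[0], C[1], C[2], C[3]);  // 14 -14 / 32 -32
     return 0;
   }
   ```
   
   The question is essentially where in MXNet's quantized operators a routine like this could be plugged in so that it replaces the cblas_gemm_s8u8s32 call without requiring MKL or MKL-DNN.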
   
   
