aaltonenzhang opened a new issue #7196:
URL: https://github.com/apache/tvm/issues/7196


   I'm checking TensorFlow models from the TensorFlow Model Garden at 
https://github.com/tensorflow/models/tree/master/community.
   For the community models, there are data types and operators which TVM 
doesn't support. Most of them are quantization related, and some are not. I 
really hope these operators can be supported officially so that I can check 
whether these models work well.
   The details are listed below.
   
   
   model name | result
   -- | --
   inceptionv3_int8 | Op type not registered 'QuantizedConcatV2'
   inceptionv4_int8 | Op type not registered 'QuantizedConcatV2'
   mobilenetv1_int8 | The following operators are not implemented: {'QuantizeV2', 'Dequantize', 'QuantizedAvgPool', 'QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasAndRequantize'}
   resnet101_int8 | The following operators are not implemented: {'Dequantize', 'QuantizedConv2DWithBiasAndRequantize', 'QuantizeV2', 'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedMaxPool', 'QuantizedConv2DWithBiasSignedSumAndReluAndRequantize', 'QuantizedConv2DWithBiasSumAndReluAndRequantize'}
   resnet50_int8 | The following operators are not implemented: {'Dequantize', 'QuantizeV2', 'QuantizedConv2DWithBiasSumAndReluAndRequantize', 'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasSignedSumAndReluAndRequantize', 'QuantizedConv2DWithBiasAndRequantize'}
   resnet50_v1_5_bfloat16 | data type 'bfloat16' not understood
   resnet50v1_5_int8 | The following operators are not implemented: {'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasSumAndReluAndRequantize', 'QuantizedMaxPool', 'QuantizedConv2DWithBiasSignedSumAndReluAndRequantize', 'Dequantize', 'QuantizeV2', 'QuantizedConv2DWithBiasAndRequantize'}
   ssdmobilenet_fp32 | The following operators are not implemented: {'CombinedNonMaxSuppression'}
   ssdmobilenet_int8 | The following operators are not implemented: {'QuantizedDepthwiseConv2DWithBiasAndReluAndRequantize', 'CombinedNonMaxSuppression', 'Dequantize', 'QuantizedConv2DWithBiasAndRequantize', 'QuantizeV2', 'QuantizedConv2DWithBiasAndReluAndRequantize'}
   ssd_resnet34_fp32_1200x1200 | The following operators are not implemented: {'CombinedNonMaxSuppression'}
   ssd_resnet34_int8_bs1 | The following operators are not implemented: {'QuantizedConv2DWithBiasSignedSumAndReluAndRequantize', 'QuantizedConv2DWithBiasAndRequantize', 'QuantizedMaxPool', 'QuantizeV2', 'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasSumAndReluAndRequantize', 'Dequantize'}
   ssd_resnet34_int8_1200x1200 | The following operators are not implemented: {'QuantizeV2', 'QuantizedConv2DWithBiasSignedSumAndReluAndRequantize', 'QuantizedConv2DWithBiasSumAndReluAndRequantize', 'CombinedNonMaxSuppression', 'QuantizedConv2DWithBiasAndReluAndRequantize', 'QuantizedConv2DWithBiasAndRequantize', 'Dequantize', 'QuantizedMaxPool'}
   wide_deep_fp32 | RuntimeError: Unsupported dtype: int64
   wide_deep_int8 | The following operators are not implemented: {'QuantizedMatMulWithBias', 'Requantize', 'QuantizedMatMulWithBiasAndReluAndRequantize', 'Dequantize', 'QuantizeV2'}
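   For reference, the errors above can be reproduced with a sketch along these lines (the model filename here is a placeholder, not my exact script; each frozen graph is imported through TVM's Relay TensorFlow frontend):
   
   ```python
   import tensorflow as tf
   import tvm
   from tvm import relay
   
   # Load a frozen inference graph (placeholder filename).
   graph_def = tf.compat.v1.GraphDef()
   with tf.io.gfile.GFile("resnet50_int8.pb", "rb") as f:
       graph_def.ParseFromString(f.read())
   
   # The Relay TensorFlow frontend raises NotImplementedError for
   # unsupported ops, e.g. "The following operators are not implemented:
   # {'QuantizeV2', 'Dequantize', ...}". The bfloat16 and int64 failures
   # surface at this import step as well.
   mod, params = relay.frontend.from_tensorflow(graph_def)
   ```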
   
   
   
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

