[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-30 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r372971382
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1662,6 +1667,112 @@ def convert_transpose_conv(self, op):
 
 return out
 
+    def convert_detection_postprocess(self, op):
+        """Convert TFLite_Detection_PostProcess"""
+        _option_names = [
+            "w_scale",
+            "max_detections",
+            "_output_quantized",
+            "detections_per_class",
+            "x_scale",
+            "nms_score_threshold",
+            "num_classes",
+            "max_classes_per_detection",
+            "use_regular_nms",
+            "y_scale",
+            "h_scale",
+            "_support_output_type_float_in_quantized_op",
+            "nms_iou_threshold"
+        ]
+
+        custom_options = get_custom_options(op, _option_names)
+        if custom_options["use_regular_nms"]:
+            raise tvm.error.OpAttributeUnImplemented(
+                "use_regular_nms=True is not yet supported for operator {}."
+                .format("TFLite_Detection_PostProcess")
+            )
+
+        inputs = self.get_input_tensors(op)
 
 Review comment:
  Does it make sense to add an assert here: `assert len(inputs) == 3`?
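A minimal sketch of the suggested guard. The helper name and the message text are illustrative, not from the PR; `inputs` stands in for whatever `get_input_tensors` returns:

```python
def check_detection_postprocess_inputs(inputs):
    # TFLite_Detection_PostProcess takes exactly three input tensors:
    # box encodings, class predictions, and anchors.
    assert len(inputs) == 3, (
        "TFLite_Detection_PostProcess expects 3 input tensors, got %d"
        % len(inputs)
    )

# Passes silently with three inputs, raises AssertionError otherwise.
check_detection_postprocess_inputs(["boxes", "scores", "anchors"])
```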


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371281677
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
 _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+###
+# Custom Operators
+# ---
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
  I think if we look at the TOCO source code, we may find how to 
construct detection_postprocess. Please refer to our `_test_prelu` comment, 
where I previously wrote out the pattern TFLite produces for prelu. However, 
the current way is acceptable too, in my opinion.
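For reference, PReLU itself is just `max(0, x) + alpha * min(0, x)`; the exact graph pattern TOCO emits for it may differ, but the math can be sanity-checked in plain NumPy:

```python
import numpy as np

def prelu(x, alpha):
    # PReLU: identity for positive inputs, alpha-scaled for negative ones.
    return np.maximum(0, x) + alpha * np.minimum(0, x)

x = np.array([-2.0, -0.5, 0.0, 3.0])
print(prelu(x, 0.25))  # [-0.5   -0.125  0.     3.   ]
```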





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371269023
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
 _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+###
+# Custom Operators
+# ---
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
  Alright, we could remove the ssd mobilenet model because of this limitation, but 
we should still keep the unit test for detection postprocess. After we 
resolve the limitation, we can add the ssd mobilenet test back. Moreover, we 
could then remove the atol=1 from test_qconv2d and similar tests, because we 
would get exactly the same results as TFLite. Does that make sense to you?




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371193247
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
 _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+###
+# Custom Operators
+# ---
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
  I think we should resolve the rounding issue in TVM. Would you mind 
opening an RFC to describe it? We could discuss and resolve it there. This case 
is a good example of why we need to match TFLite's rounding behavior when we 
parse quantized TFLite models.
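The divergence the thread refers to is the tie-breaking rule: as I read it, TFLite's quantized kernels round ties away from zero, while NumPy (and the IEEE default) round ties to even. A quick illustration of the difference (the attribution to TFLite is my reading of the discussion, not verified here):

```python
import numpy as np

def round_away_from_zero(x):
    # Round to nearest, with ties going away from zero.
    return np.sign(x) * np.floor(np.abs(x) + 0.5)

vals = np.array([0.5, 1.5, 2.5, -0.5, -1.5])
print(round_away_from_zero(vals))  # [ 1.  2.  3. -1. -2.]
print(np.round(vals))              # [ 0.  2.  2. -0. -2.]  (ties to even)
```

An off-by-one from this tie-breaking is exactly the kind of error that forces an `atol=1` tolerance in quantized unit tests.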




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-14 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r366438454
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
 _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+###
+# Custom Operators
+# ---
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
  Do you mean the issue of quantized rounding here? 
https://github.com/apache/incubator-tvm/pull/3900#discussion_r334334418




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-10 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r365487118
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1494,6 +1499,112 @@ def convert_transpose_conv(self, op):
 
 return out
 
+    def _convert_detection_postprocess(self, op):
+        """Convert TFLite_Detection_PostProcess"""
+        _option_names = [
+            "w_scale",
+            "max_detections",
+            "_output_quantized",
+            "detections_per_class",
+            "x_scale",
+            "nms_score_threshold",
+            "num_classes",
+            "max_classes_per_detection",
+            "use_regular_nms",
+            "y_scale",
+            "h_scale",
+            "_support_output_type_float_in_quantized_op",
+            "nms_iou_threshold"
+        ]
+
+        custom_options = get_custom_options(op, _option_names)
+        if custom_options["use_regular_nms"]:
+            raise tvm.error.OpAttributeUnImplemented(
+                "use_regular_nms=True is not yet supported for operator {}."
+                .format("TFLite_Detection_PostProcess")
+            )
+
+        inputs = self.get_input_tensors(op)
+        cls_pred = self.get_expr(inputs[1].tensor_idx)
+        loc_prob = self.get_expr(inputs[0].tensor_idx)
+        anchor_values = self.get_tensor_value(inputs[2])
+        anchor_boxes = len(anchor_values)
+        anchor_type = self.get_tensor_type_str(inputs[2].tensor.Type())
+        anchor_expr = self.exp_tab.new_const(anchor_values, dtype=anchor_type)
+
+        if inputs[0].qnn_params:
+            loc_prob = _qnn.op.dequantize(data=loc_prob,
+                                          input_scale=inputs[0].qnn_params['scale'],
+                                          input_zero_point=inputs[0].qnn_params['zero_point'])
+        if inputs[1].qnn_params:
+            cls_pred = _qnn.op.dequantize(data=cls_pred,
+                                          input_scale=inputs[1].qnn_params['scale'],
+                                          input_zero_point=inputs[1].qnn_params['zero_point'])
+        if inputs[2].qnn_params:
+            anchor_expr = _qnn.op.dequantize(data=anchor_expr,
+                                             input_scale=inputs[2].qnn_params['scale'],
+                                             input_zero_point=inputs[2].qnn_params['zero_point'])
+
+        # reshape the cls_pred and loc_prob tensors so
+        # they can be consumed by multibox_transform_loc
+        cls_pred = _op.transpose(cls_pred, [0, 2, 1])
+        # loc_prob coords are in yxhw format
+        # need to convert to xywh
+        loc_coords = _op.split(loc_prob, 4, axis=2)
+        loc_prob = _op.concatenate(
+            [loc_coords[1], loc_coords[0], loc_coords[3], loc_coords[2]], axis=2
+        )
+        loc_prob = _op.reshape(loc_prob, [1, anchor_boxes*4])
+
+        # anchor coords are in yxhw format
+        # need to convert to ltrb
+        anchor_coords = _op.split(anchor_expr, 4, axis=1)
+        anchor_y = anchor_coords[0]
+        anchor_x = anchor_coords[1]
+        anchor_h = anchor_coords[2]
+        anchor_w = anchor_coords[3]
+        plus_half = _expr.const(0.5, dtype='float32')
+        minus_half = _expr.const(-0.5, dtype='float32')
+        anchor_l = _op.add(anchor_x, _op.multiply(anchor_w, minus_half))
+        anchor_r = _op.add(anchor_x, _op.multiply(anchor_w, plus_half))
+        anchor_t = _op.add(anchor_y, _op.multiply(anchor_h, minus_half))
+        anchor_b = _op.add(anchor_y, _op.multiply(anchor_h, plus_half))
+        anchor_expr = _op.concatenate([anchor_l, anchor_t, anchor_r, anchor_b], axis=1)
+        anchor_expr = _op.expand_dims(anchor_expr, 0)
+
+        # attributes for multibox_transform_loc
+        new_attrs0 = {}
 
 Review comment:
  Change to `multibox_transform_loc_attrs`; `new_attrs0` is not a good name.
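The anchor conversion in the quoted diff (center/size `yxhw` to corner `ltrb`) can be sanity-checked with a plain NumPy mirror of the same arithmetic:

```python
import numpy as np

def yxhw_to_ltrb(anchors):
    # anchors: (N, 4) rows of [y_center, x_center, height, width]
    y, x, h, w = np.split(anchors, 4, axis=1)
    left   = x - 0.5 * w
    right  = x + 0.5 * w
    top    = y - 0.5 * h
    bottom = y + 0.5 * h
    # Same output order as the diff: [l, t, r, b]
    return np.concatenate([left, top, right, bottom], axis=1)

print(yxhw_to_ltrb(np.array([[0.5, 0.5, 0.2, 0.4]])))
# [[0.3 0.4 0.7 0.6]]
```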




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-10 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r365487145
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1494,6 +1499,112 @@ def convert_transpose_conv(self, op):
 
 return out
 
+    def _convert_detection_postprocess(self, op):
+        """Convert TFLite_Detection_PostProcess"""
+        _option_names = [
+            "w_scale",
+            "max_detections",
+            "_output_quantized",
+            "detections_per_class",
+            "x_scale",
+            "nms_score_threshold",
+            "num_classes",
+            "max_classes_per_detection",
+            "use_regular_nms",
+            "y_scale",
+            "h_scale",
+            "_support_output_type_float_in_quantized_op",
+            "nms_iou_threshold"
+        ]
+
+        custom_options = get_custom_options(op, _option_names)
+        if custom_options["use_regular_nms"]:
+            raise tvm.error.OpAttributeUnImplemented(
+                "use_regular_nms=True is not yet supported for operator {}."
+                .format("TFLite_Detection_PostProcess")
+            )
+
+        inputs = self.get_input_tensors(op)
+        cls_pred = self.get_expr(inputs[1].tensor_idx)
+        loc_prob = self.get_expr(inputs[0].tensor_idx)
+        anchor_values = self.get_tensor_value(inputs[2])
+        anchor_boxes = len(anchor_values)
+        anchor_type = self.get_tensor_type_str(inputs[2].tensor.Type())
+        anchor_expr = self.exp_tab.new_const(anchor_values, dtype=anchor_type)
+
+        if inputs[0].qnn_params:
+            loc_prob = _qnn.op.dequantize(data=loc_prob,
+                                          input_scale=inputs[0].qnn_params['scale'],
+                                          input_zero_point=inputs[0].qnn_params['zero_point'])
+        if inputs[1].qnn_params:
+            cls_pred = _qnn.op.dequantize(data=cls_pred,
+                                          input_scale=inputs[1].qnn_params['scale'],
+                                          input_zero_point=inputs[1].qnn_params['zero_point'])
+        if inputs[2].qnn_params:
+            anchor_expr = _qnn.op.dequantize(data=anchor_expr,
+                                             input_scale=inputs[2].qnn_params['scale'],
+                                             input_zero_point=inputs[2].qnn_params['zero_point'])
+
+        # reshape the cls_pred and loc_prob tensors so
+        # they can be consumed by multibox_transform_loc
+        cls_pred = _op.transpose(cls_pred, [0, 2, 1])
+        # loc_prob coords are in yxhw format
+        # need to convert to xywh
+        loc_coords = _op.split(loc_prob, 4, axis=2)
+        loc_prob = _op.concatenate(
+            [loc_coords[1], loc_coords[0], loc_coords[3], loc_coords[2]], axis=2
+        )
+        loc_prob = _op.reshape(loc_prob, [1, anchor_boxes*4])
+
+        # anchor coords are in yxhw format
+        # need to convert to ltrb
+        anchor_coords = _op.split(anchor_expr, 4, axis=1)
+        anchor_y = anchor_coords[0]
+        anchor_x = anchor_coords[1]
+        anchor_h = anchor_coords[2]
+        anchor_w = anchor_coords[3]
+        plus_half = _expr.const(0.5, dtype='float32')
+        minus_half = _expr.const(-0.5, dtype='float32')
+        anchor_l = _op.add(anchor_x, _op.multiply(anchor_w, minus_half))
+        anchor_r = _op.add(anchor_x, _op.multiply(anchor_w, plus_half))
+        anchor_t = _op.add(anchor_y, _op.multiply(anchor_h, minus_half))
+        anchor_b = _op.add(anchor_y, _op.multiply(anchor_h, plus_half))
+        anchor_expr = _op.concatenate([anchor_l, anchor_t, anchor_r, anchor_b], axis=1)
+        anchor_expr = _op.expand_dims(anchor_expr, 0)
+
+        # attributes for multibox_transform_loc
+        new_attrs0 = {}
+        new_attrs0["clip"] = False
+        new_attrs0["threshold"] = custom_options["nms_score_threshold"]
+        new_attrs0["variances"] = (
+            1 / custom_options["x_scale"],
+            1 / custom_options["y_scale"],
+            1 / custom_options["w_scale"],
+            1 / custom_options["h_scale"],
+        )
+
+        # attributes for non_max_suppression
+        new_attrs1 = {}
 
 Review comment:
  Change to `non_max_suppression_attrs`.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-10 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r365486547
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -98,6 +98,7 @@ def __init__(self, model, subgraph, exp_tab):
 'SPACE_TO_BATCH_ND': self.convert_space_to_batch_nd,
 'PRELU': self.convert_prelu,
 'TRANSPOSE_CONV': self.convert_transpose_conv,
+'DETECTION_POSTPROCESS': self._convert_detection_postprocess
 
 Review comment:
  Please change to `self._convert_detection_postprocess` to match the other 
convert functions' code style. We should keep the code style consistent.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-10 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r365486808
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -98,6 +98,7 @@ def __init__(self, model, subgraph, exp_tab):
 'SPACE_TO_BATCH_ND': self.convert_space_to_batch_nd,
 'PRELU': self.convert_prelu,
 'TRANSPOSE_CONV': self.convert_transpose_conv,
+'DETECTION_POSTPROCESS': self._convert_detection_postprocess
 
 Review comment:
  Please change to `self._convert_detection_postprocess` to match the other 
convert functions' code style. We should keep the code style consistent.





[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-10 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r365486236
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -98,6 +98,7 @@ def __init__(self, model, subgraph, exp_tab):
 'SPACE_TO_BATCH_ND': self.convert_space_to_batch_nd,
 'PRELU': self.convert_prelu,
 'TRANSPOSE_CONV': self.convert_transpose_conv,
 
 Review comment:
  Please change to `self.convert_detection_postprocess`! We should keep the 
same code style as the other convert functions in the dictionary.



