[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-04 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435020698



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -2027,6 +2081,8 @@ def _impl(inputs, attr, params, mod):
 'Mod'   : _elemwise('mod'),
 'Mul'   : _elemwise('multiply'),
 'Neg'   : AttrCvt('negative'),
+'NonMaxSuppressionV2'   : _nms(),
+'NonMaxSuppressionV3'   : _nms(),

Review comment:
   V3 adds an extra argument: score_threshold. This has already been 
handled.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-03 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435002980



##
File path: include/tvm/relay/attrs/transform.h
##
@@ -210,14 +210,22 @@ struct SplitAttrs : public tvm::AttrsNode<SplitAttrs> {
 
 /*! \brief Attributes for StridedSlice operator */
 struct StridedSliceAttrs : public tvm::AttrsNode<StridedSliceAttrs> {
-  Array<Integer> begin;
-  Array<Integer> end;
-  Array<Integer> strides;
+  Optional<Array<Integer>> begin;
+  Optional<Array<Integer>> end;
+  Optional<Array<Integer>> strides;
+  bool slice_mode;
 
   TVM_DECLARE_ATTRS(StridedSliceAttrs, "relay.attrs.StridedSliceAttrs") {
 TVM_ATTR_FIELD(begin).describe("Indices for begin of slice, begin index is also inclusive");
 TVM_ATTR_FIELD(end).describe("Indices for end of slice, end index is exclusive");
-TVM_ATTR_FIELD(strides).set_default(Array<Integer>({})).describe("Stride values of the slice");
+TVM_ATTR_FIELD(strides).describe("Stride values of the slice");
+TVM_ATTR_FIELD(slice_mode)

Review comment:
   This sounds great to me.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-03 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r434927495



##
File path: include/tvm/relay/attrs/transform.h
##
@@ -210,14 +210,22 @@ struct SplitAttrs : public tvm::AttrsNode<SplitAttrs> {
 
 /*! \brief Attributes for StridedSlice operator */
 struct StridedSliceAttrs : public tvm::AttrsNode<StridedSliceAttrs> {
-  Array<Integer> begin;
-  Array<Integer> end;
-  Array<Integer> strides;
+  Optional<Array<Integer>> begin;
+  Optional<Array<Integer>> end;
+  Optional<Array<Integer>> strides;
+  bool slice_mode;
 
   TVM_DECLARE_ATTRS(StridedSliceAttrs, "relay.attrs.StridedSliceAttrs") {
 TVM_ATTR_FIELD(begin).describe("Indices for begin of slice, begin index is also inclusive");
 TVM_ATTR_FIELD(end).describe("Indices for end of slice, end index is exclusive");
-TVM_ATTR_FIELD(strides).set_default(Array<Integer>({})).describe("Stride values of the slice");
+TVM_ATTR_FIELD(strides).describe("Stride values of the slice");
+TVM_ATTR_FIELD(slice_mode)

Review comment:
   Will ```use_size``` be more clear?









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-02 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r434115162



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -614,6 +614,53 @@ def _impl(inputs, attr, params, mod):
 return out
 return _impl
 
+def _nms():
+def _impl(inputs, attr, params, mod):
+# Get parameter values
+max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])

Review comment:
   Handle symbolic max_output_size:
   ```suggestion
   try:
       max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
   except Exception:
       try:
           max_output_size = _infer_value(inputs[2], params, mod).asnumpy().astype("int64").tolist()[0]
       except Exception:
           max_output_size = -1
   ```
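The suggested fallback chain is a general pattern: try the cheapest static read first, then a heavier inference pass, then a sentinel. A plain-Python sketch of that pattern (the callables below are stand-ins, not the frontend's actual helpers):

```python
def _raise(exc):
    """Helper so a lambda can raise (used only in the demo below)."""
    raise exc

def extract_scalar(get_const, infer_value, default):
    """Resolve a value that may or may not be a compile-time constant:
    try a direct constant read, fall back to value inference, and
    finally fall back to a sentinel for the fully dynamic case."""
    try:
        return get_const()
    except Exception:
        try:
            return infer_value()
        except Exception:
            return default

# Constant available: the direct read wins
assert extract_scalar(lambda: 5, lambda: 7, -1) == 5
# Direct read fails, inference succeeds
assert extract_scalar(lambda: _raise(ValueError("dynamic")), lambda: 7, -1) == 7
# Everything fails: the sentinel is returned
assert extract_scalar(lambda: _raise(ValueError()), lambda: _raise(ValueError()), -1) == -1
```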









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-02 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r434115162



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -614,6 +614,53 @@ def _impl(inputs, attr, params, mod):
 return out
 return _impl
 
+def _nms():
+def _impl(inputs, attr, params, mod):
+# Get parameter values
+max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])

Review comment:
   Handle symbolic max_output_size:
   ```suggestion
   try:
       max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
   except Exception:
       try:
           max_output_size = _infer_value(inputs[2], params, mod).asnumpy().astype("int64").tolist()[0]
       except Exception:
           max_output_size = inputs[2]
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-02 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r434111309



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1119,7 +1166,11 @@ def _impl(inputs, attr, params, mod):
 try:
 begin = _get_list_param(params, inputs[1])
 except (IndexError, KeyError, AttributeError):
-begin = _infer_value(inputs[1], params).asnumpy().tolist()[0]
+# Handle symbolic begin
+try:
+begin = _infer_value(inputs[1], params).asnumpy().tolist()[0]

Review comment:
   ```suggestion
   begin = _infer_value(inputs[1], params).asnumpy().tolist()
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-02 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r434111490



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1128,16 +1179,7 @@ def _impl(inputs, attr, params, mod):
 size = _infer_value(inputs[2], params).asnumpy().tolist()[0]

Review comment:
   ```suggestion
   size = _infer_value(inputs[2], params).asnumpy().tolist()
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-31 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432913904



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1466,8 +1508,11 @@ def _transform_mask(stride_dim, ellipsis_mask):
 fshape_indices = None
 if begin_mask or end_mask or ellipsis_mask or new_axis_mask or shrink_axis_mask:
 begin, end, stride, fshape_indices = _transform_mask(stride_dim, ellipsis_mask)
-out = _op.strided_slice(inputs[0], begin=begin, end=end, strides=stride)
-out_shape = _infer_shape(out, mod)
+out = _op.strided_slice(inputs[0],
+begin=_expr.const(begin),
+end=_expr.const(end),
+strides=_expr.const(stride))

Review comment:
   Don't need _expr.const for begin, end and strides, since we allow normal 
python list to be passed.

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1772,73 +1788,161 @@ Array<Array<Layout>> StridedSliceInferCorrectLayout(const Attrs& attrs,
   }
 
   CHECK(old_in_layouts.defined());
-  CHECK_EQ(old_in_layouts.size(), 1);
+  CHECK_GE(old_in_layouts.size(), 1);
   CHECK(old_in_shapes.defined());
-  CHECK_EQ(old_in_shapes.size(), 1);
+  CHECK_GE(old_in_shapes.size(), 1);
 
   auto layout = old_in_layouts[0];
   if (layout.defined() && new_in_layouts.defined()) {
-CHECK_EQ(new_in_layouts.size(), 1);
+CHECK_GE(new_in_layouts.size(), 1);
 auto new_layout = new_in_layouts[0];
 auto shape = old_in_shapes[0];
 
 // NOTE: Discard "const" qualifier here.
auto* params = const_cast<StridedSliceAttrs*>(attrs.as<StridedSliceAttrs>());
+CHECK(params != nullptr);
+Array<Integer> begin, end, strides;
+if (params->begin && params->end && params->strides) {
+  for (Integer i : params->strides.value()) {
+CHECK(i.defined());
+strides.push_back(params->slice_mode ? 1 : i->value);
+  }
+
+  for (Integer i : params->begin.value()) {
+CHECK(i.defined());
+begin.push_back(i->value);
+  }
+  for (Integer i : params->end.value()) {
+CHECK(i.defined());
+end.push_back(i->value);
+  }
+}
 
 Array<Integer> new_begin, new_end;
 
-for (size_t i = 0; i < params->begin.size(); i++) {
+for (size_t i = 0; i < begin.size(); i++) {
   const LayoutAxis& axis = layout[i];
   if (!axis.IsPrimal()) {
 // original layout that contains splitted axes is not supported
 return {{Layout::Undef()}, {Layout::Undef()}};
   }
   auto factor = new_layout.FactorOf(axis);
   if (factor == -1) {
-new_begin.push_back(params->begin[i]);
-new_end.push_back(params->end[i]);
+new_begin.push_back(begin[i]);
+new_end.push_back(end[i]);
   } else {
-if (params->strides.defined() && i < params->strides.size()) {
-  auto stride = params->strides[i];
+if (strides.defined() && i < strides.size()) {
+  auto stride = strides[i];
   // arbitrary stride is not supported
   if (stride.defined() && stride->value != 1) {
 return {{Layout::Undef()}, {Layout::Undef()}};
   }
 }
-int64_t begin = params->begin[i].defined() ? params->begin[i]->value : 0;
-int64_t end =
-params->end[i].defined() ? params->end[i]->value : shape[i].as<IntImmNode>()->value;
-if (begin % factor || end % factor) {
+int64_t bg = begin[i].defined() ? begin[i]->value : 0;
+int64_t ed;
+if (!end[i].defined()) {
+  ed = shape[i].as<IntImmNode>()->value;
+} else if (params->slice_mode) {
+  if (end[i]->value < 0) {
+ed = shape[i].as<IntImmNode>()->value;
+  } else {
+ed = bg + end[i]->value;
+  }
+} else {
+  ed = end[i]->value;
+}
+
+if (bg % factor || ed % factor) {
   // transform to original layout
   return {{Layout::Undef()}, {Layout::Undef()}};
 }
-new_begin.push_back(tvm::Integer(begin / factor));
-new_end.push_back(tvm::Integer(end / factor));
+new_begin.push_back(tvm::Integer(bg / factor));
+new_end.push_back(tvm::Integer(ed / factor));
   }
 }
+
 layout = new_layout;
 params->begin = new_begin;
 params->end = new_end;
   }
-  return {{layout}, {layout}};
+  return {{layout, Layout("C"), Layout("C"), Layout("C")}, {layout}};
 }
 
-// Positional relay function to create StridedSlice operator used by frontend FFI.
-Expr MakeStridedSlice(Expr data, Array<Integer> begin, Array<Integer> end, Array<Integer> strides) {
-  auto attrs = make_object<StridedSliceAttrs>();
-  attrs->begin = std::move(begin);
-  attrs->end = std::move(end);
-  attrs->strides = std::move(strides);
-  static const Op& op = Op::Get("strided_slice");
-  return Call(op, {data}, Attrs(attrs), {});
+inline te::Tensor DynamicStridedSlice(const te::Tensor& input, const te::Tensor& begin,
+  

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432049322



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -614,6 +614,52 @@ def _impl(inputs, attr, params, mod):
 return out
 return _impl
 
+def _nms():
+def _impl(inputs, attr, params, mod):
+# Get parameter values
+max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
+iou_threshold = np.atleast_1d(inputs[3].data.asnumpy())[0]
+# score_threshold was introduced from V3
+score_threshold = np.atleast_1d(inputs[4].data.asnumpy())[0] if len(inputs) > 4 else 0.0
+
+# Generate data with shape (1, num_anchors, 5)
+scores = AttrCvt(op_name="expand_dims",
+ ignores=['T_threshold'],
+ extras={'axis': -1, 'num_newaxis': 1})([inputs[1]], attr)
+data = get_relay_op('concatenate')([scores, inputs[0]], -1)
+data = get_relay_op('expand_dims')(data, 0, 1)
+
+# get_valid_counts is used here for inference performance
+ct, data, indices = get_relay_op('get_valid_counts')(data,
+ score_threshold=score_threshold,
+ id_index=-1,
+ score_index=0)
+# TensorFlow NMS doesn't have parameter top_k
+top_k = -1
+# TF doesn't have class id for nms input
+score_index = 0
+nms_ret = get_relay_op('non_max_suppression')(data=data,
+  valid_count=ct,
+  indices=indices,
+  max_output_size=max_output_size,
+  iou_threshold=iou_threshold,
+  force_suppress=True,
+  top_k=top_k,
+  coord_start=1,
+  score_index=score_index,
+  id_index=-1,
+  return_indices=True,
+  invalid_to_bottom=False)
+
+# squeeze it, TF NMS is not batched
+end = get_relay_op("squeeze")(nms_ret[1], axis=[1])
+data_slice = get_relay_op("squeeze")(nms_ret[0], axis=[0])
+
+# slice to get the dynamic result
+ret = get_relay_op("strided_slice")(data_slice, _expr.const([0]), end, _expr.const([1]))
+return ret
+return _impl

Review comment:
   We can use slice_mode for tensorflow slice now.
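A plain-Python model of what switching to slice_mode buys here (illustrative only; the real converter would call strided_slice with slice_mode=True): end is read as a slice size on the first axis, and a negative size means slice to the end.

```python
def slice_with_size(rows, begin, size):
    """slice_mode-style slicing on the first axis: `size` is a length,
    not an end index; a negative size means "to the end of the axis"."""
    if size < 0:
        return rows[begin:]
    return rows[begin:begin + size]

boxes = [[0.1], [0.2], [0.3], [0.4], [0.5]]
# NMS kept 3 boxes: keep exactly the first 3 rows
assert slice_with_size(boxes, 0, 3) == [[0.1], [0.2], [0.3]]
# Negative size: keep everything
assert slice_with_size(boxes, 0, -1) == boxes
```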









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432045059



##
File path: python/tvm/relay/op/transform.py
##
@@ -611,31 +611,41 @@ def split(data, indices_or_sections, axis=0):
 return TupleWrapper(_make.split(data, indices_or_sections, axis), ret_size)
 
 
-def strided_slice(data, begin, end, strides=None):
+def strided_slice(data, begin, end, strides=None, slice_mode=False):
 """Strided slice of an array.
 
 Parameters
 --
 data : relay.Expr
 The source array to be sliced.
 
-begin: list of int
+begin: relay.Expr or List[int]
 The indices to begin with in the slicing.
 
-end: list of int
+end: relay.Expr or List[int]
 Indices indicating end of the slice.
 
-strides: list of int, optional
+strides: relay.Expr or List[int], optional
 Specifies the stride values, it can be negative in that case,
 the input tensor will be reversed in that particular axis.
 
+slice_mode: boolean, optional
+Whether to ignore the negative elements in input end,
+will slice to the end of data for the ignored element.

Review comment:
   Provide more details about this attribute, e.g. that end is actually the slice size and strides are ignored.
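To make the requested docstring detail concrete, here is a plain-Python model (not the TVM implementation) of how one axis is bounded in slice_mode: end[i] acts as a slice size, a negative end[i] means slice to the end of the axis, and the stride is effectively treated as 1.

```python
def slice_mode_bounds(dim_size, begin, end):
    """Return the (start, stop) index pair slice_mode implies for one
    axis, treating `end` as a size and forcing a unit stride."""
    stop = dim_size if end < 0 else begin + end
    return begin, stop

# Axis of length 10, begin=2, size=5 -> elements 2..6
assert slice_mode_bounds(10, 2, 5) == (2, 7)
# Negative size slices through to the end of the axis
assert slice_mode_bounds(10, 3, -1) == (3, 10)
```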









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432044019



##
File path: python/tvm/relay/op/_transform.py
##
@@ -99,8 +99,80 @@ def _arange_shape_func(start, stop, step):
 
 @_reg.register_shape_func("arange", True)
 def arange_shape_func(attrs, inputs, _):
+"""
+Shape func for arange
+"""
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func_input_data(data, begin, end, strides,
+ slice_mode):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if strides.shape[0] > i:
+cstride = strides[i]
+if begin.shape[0] > i:
+cbegin = begin[i]
+if end.shape[0] <= i:
+cend = data.shape[i]
+elif slice_mode != 0:
+if end[i] < 0:
+cend = data.shape[i]
+elif cstride < 0:

Review comment:
   Should we always set cstride=1 for slice mode?
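The question is easier to weigh with a plain-Python model of the per-axis extent this hybrid script computes (illustrative; names and defaults here are not the TVM code):

```python
def ceil_div(a, b):
    # Ceiling division that is also correct for negative divisors
    return -(-a // b)

def out_extent(dim_size, begin=0, end=None, stride=1, slice_mode=False):
    """Output length of one sliced axis, mirroring the shape func's
    branches: in slice_mode, `end` is a size and a negative value
    means slice to the end of the axis."""
    if end is None:
        cend = dim_size
    elif slice_mode:
        if end < 0:
            cend = dim_size
        elif stride < 0:
            cend = begin - end
        else:
            cend = begin + end
    else:
        cend = end
    assert stride != 0, "Strides can't be zero."
    return ceil_div(cend - begin, stride)

assert out_extent(10, begin=2, end=8, stride=2) == 3         # indices 2, 4, 6
assert out_extent(10, begin=1, end=4, slice_mode=True) == 4  # a slice of size 4
# With stride pinned to 1 in slice_mode, the extent is simply the size:
assert out_extent(10, begin=5, end=3, stride=1, slice_mode=True) == 3
```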









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432043625



##
File path: python/tvm/relay/op/_transform.py
##
@@ -99,8 +99,80 @@ def _arange_shape_func(start, stop, step):
 
 @_reg.register_shape_func("arange", True)
 def arange_shape_func(attrs, inputs, _):
+"""
+Shape func for arange
+"""
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func_input_data(data, begin, end, strides,
+ slice_mode):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if strides.shape[0] > i:
+cstride = strides[i]
+if begin.shape[0] > i:
+cbegin = begin[i]
+if end.shape[0] <= i:
+cend = data.shape[i]
+elif slice_mode != 0:
+if end[i] < 0:
+cend = data.shape[i]
+elif cstride < 0:
+cend = cbegin - end[i]
+else:
+cend = cbegin + end[i]
+else:
+cend = end[i]
+assert cstride != 0, "Strides can't be zero."
+out[i] = int64(ceil_div((int64(cend) - int64(cbegin)), int64(cstride)))
+return out
+
+@script
+def _strided_slice_shape_func_input_shape(data_shape, begin, end, strides, slice_mode):
+ndim = data_shape.shape[0]
+assert ndim == 2, "not correct"
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = int64(0)
+cend = int64(data_shape[i])
+cstride = int64(1)
+if len(strides) > i:
+cstride = int64(strides[i])
+if len(begin) > i:
+cbegin = int64(begin[i])
+if len(end) <= i:
+cend = int64(data_shape[i])
+elif slice_mode != 0:
+if end[i] < 0:
+cend = int64(data_shape[i])
+elif cstride < 0:

Review comment:
   Should we always set cstride=1 for slice mode?









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r427575525



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1660,93 +1662,161 @@ Array GetIntArray(Array arr) {
 
 // strided_slice
 TVM_REGISTER_NODE_TYPE(StridedSliceAttrs);
+
+int64_t* ToVector(const runtime::NDArray& array) {
+  size_t len = array.Shape().front();
+  int64_t* rel_vec = new int64_t[len];
+  if (array->dtype.code == kDLInt) {
+if (array->dtype.bits == 8) {
+  int8_t* init_array = reinterpret_cast<int8_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 16) {
+  int16_t* init_array = reinterpret_cast<int16_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 32) {
+  int32_t* init_array = reinterpret_cast<int32_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 64) {
+  int64_t* init_array = reinterpret_cast<int64_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+}
+  } else if (array->dtype.code == kDLUInt) {
+if (array->dtype.bits == 8) {
+  uint8_t* init_array = reinterpret_cast<uint8_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 16) {
+  uint16_t* init_array = reinterpret_cast<uint16_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 32) {
+  uint32_t* init_array = reinterpret_cast<uint32_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+} else if (array->dtype.bits == 64) {
+  uint64_t* init_array = reinterpret_cast<uint64_t*>(array->data);
+  for (size_t i = 0; i < len; ++i) {
+rel_vec[i] = int64_t(init_array[i]);
+  }
+  return rel_vec;
+}
+  }
+  LOG(FATAL) << "Unknown data type: " << tvm::runtime::DLDataType2String(array->dtype);
+  return rel_vec;
+}
+
bool StridedSliceRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
 const TypeReporter& reporter) {
-  CHECK_EQ(types.size(), 2);
-  const auto* data = types[0].as<TensorTypeNode>();
-  if (data == nullptr) return false;
-
+  CHECK_EQ(types.size(), 5);
   const StridedSliceAttrs* param = attrs.as<StridedSliceAttrs>();
   CHECK(param != nullptr);
-
+  const auto* data = types[0].as<TensorTypeNode>();
+  CHECK(data != nullptr);
   auto dshape = data->shape;
-  auto num_axis = dshape.size();
-
-  std::vector<int64_t> stride_vec;
-  for (Integer i : param->strides) {
-CHECK(i.defined());
-stride_vec.push_back(i->value);
-  }
-  for (size_t i = stride_vec.size(); i < num_axis; ++i) {
-stride_vec.push_back(1);
-  }
-  const int64_t max_range = std::numeric_limits<int64_t>::max();
-
-  std::vector<int64_t> begin_vec;
-  for (size_t i = 0; i < param->begin.size(); ++i) {
-if (!param->begin[i].defined()) {
-  // value=None
+  int64_t num_axis = dshape.size();
+
+  // calculate output shape
+  std::vector oshape(num_axis);
+  if (param->begin && param->end && param->strides) {
+std::vector<int64_t> stride_vec;
+for (Integer i : param->strides.value()) {
+  CHECK(i.defined());
+  stride_vec.push_back(i->value);
+}
+for (int64_t i = stride_vec.size(); i < num_axis; ++i) {
+  stride_vec.push_back(1);
+}
+const int64_t max_range = std::numeric_limits<int64_t>::max();
+std::vector<int64_t> begin_vec;
+for (size_t i = 0; i < param->begin.value().size(); ++i) {
+  if (!param->begin.value()[i].defined()) {
+begin_vec.push_back(stride_vec[i] > 0 ? 0 : max_range);
+  } else {
+begin_vec.push_back(param->begin.value()[i]->value);
+  }
+}
+for (int64_t i = begin_vec.size(); i < num_axis; ++i) {
   begin_vec.push_back(stride_vec[i] > 0 ? 0 : max_range);
-} else {
-  begin_vec.push_back(param->begin[i]->value);
 }
-  }
-  for (size_t i = begin_vec.size(); i < num_axis; ++i) {
-begin_vec.push_back(stride_vec[i] > 0 ? 0 : max_range);
-  }
 
-  std::vector<int64_t> end_vec;
-  for (size_t i = 0; i < param->end.size(); ++i) {
-// allow end to be None
-if (!param->end[i].defined()) {
+std::vector<int64_t> end_vec;
+for (size_t i = 0; i < param->end.value().size(); ++i) {
+  // allow end to be None
+  if (param->ignore_end || (!param->end.value()[i].defined())) {

Review comment:
   Allow partial ignore.
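"Partial ignore" here means each element of end should be skippable on its own, rather than one ignore_end flag discarding the whole list. A hedged plain-Python sketch of that per-element behavior (names are illustrative, not the TVM code):

```python
INT64_MAX = 2**63 - 1

def resolve_end(end_values, strides):
    """Replace only the ignored (None) entries of `end` with an
    open-ended sentinel, keeping the concrete entries as-is."""
    resolved = []
    for i, e in enumerate(end_values):
        stride = strides[i] if i < len(strides) else 1
        if e is None:
            # this axis alone is ignored: slice to the end
            resolved.append(INT64_MAX if stride > 0 else -INT64_MAX)
        else:
            resolved.append(e)
    return resolved

# Only the middle axis is ignored; the others keep their bounds
assert resolve_end([4, None, 7], [1, 1, 1]) == [4, INT64_MAX, 7]
```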

##
File path: python/tvm/relay/op/_transform.py
##
@@ -101,6 +101,29 @@ def _arange_shape_func(start, stop, step):
 def arange_shape_func(attrs, inputs, _):
 

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-15 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r426057603



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1660,93 +1664,171 @@ Array GetIntArray(Array arr) {
 
 // strided_slice
 TVM_REGISTER_NODE_TYPE(StridedSliceAttrs);
-bool StridedSliceRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
- const TypeReporter& reporter) {
-  CHECK_EQ(types.size(), 2);
-  const auto* data = types[0].as<TensorTypeNode>();
-  if (data == nullptr) return false;
 
+int64_t* ToVector(const runtime::NDArray& array) {

Review comment:
   Refactored in https://github.com/apache/incubator-tvm/pull/5459, and 
this can be removed after that PR is merged.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-04 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r419826528



##
File path: python/tvm/relay/op/_transform.py
##
@@ -101,6 +101,28 @@ def _arange_shape_func(start, stop, step):
 def arange_shape_func(attrs, inputs, _):
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func(data, begin, end, strides):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if len(begin) > i:
+cbegin = begin[i]
+if len(end) > i:
+cend = end[i]
+if len(strides) > i:

Review comment:
   ```suggestion
   if strides.shape[0] > i:
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-04 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r419826341



##
File path: python/tvm/relay/op/_transform.py
##
@@ -101,6 +101,28 @@ def _arange_shape_func(start, stop, step):
 def arange_shape_func(attrs, inputs, _):
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func(data, begin, end, strides):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if len(begin) > i:

Review comment:
   ```suggestion
   if begin.shape[0] > i:
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-04 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r419826426



##
File path: python/tvm/relay/op/_transform.py
##
@@ -101,6 +101,28 @@ def _arange_shape_func(start, stop, step):
 def arange_shape_func(attrs, inputs, _):
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func(data, begin, end, strides):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if len(begin) > i:
+cbegin = begin[i]
+if len(end) > i:

Review comment:
   ```suggestion
   if end.shape[0] > i:
   ```









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-29 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r417499071



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -612,6 +615,51 @@ def _impl(inputs, attr, params, mod):
 out = _op.transpose(out, axes=(0, 2, 3, 4, 1))
 
 return out
+
+def _nms():
+def _impl(inputs, attr, params, mod):
+# Get parameter values
+max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
+iou_threshold = np.atleast_1d(inputs[3].data.asnumpy())[0]
+# score_threshold was introduced from V3
+score_threshold = np.atleast_1d(inputs[4].data.asnumpy())[0] if len(inputs) > 4 else None

Review comment:
   Need to use 0.0 instead of None here?









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r417026419



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1891,81 +1952,163 @@ Array<Array<Layout>> StridedSliceInferCorrectLayout(
   }
 
   CHECK(old_in_layouts.defined());
-  CHECK_EQ(old_in_layouts.size(), 1);
+  CHECK_GE(old_in_layouts.size(), 1);
   CHECK(old_in_shapes.defined());
-  CHECK_EQ(old_in_shapes.size(), 1);
+  CHECK_GE(old_in_shapes.size(), 1);
 
   auto layout = old_in_layouts[0];
   if (layout.defined() && new_in_layouts.defined()) {
-CHECK_EQ(new_in_layouts.size(), 1);
+CHECK_GE(new_in_layouts.size(), 1);
 auto new_layout = new_in_layouts[0];
 auto shape = old_in_shapes[0];
 
 // NOTE: Discard "const" qualifier here.
auto *params = const_cast<StridedSliceAttrs*>(attrs.as<StridedSliceAttrs>());
+CHECK(params != nullptr);
+Array<Integer> begin, end, strides;
+const ConstantNode *cbegin, *cend, *cstrides;
+if ((cbegin = params->begin.as<ConstantNode>()) &&
+(cend = params->end.as<ConstantNode>()) &&
+(cstrides = params->strides.as<ConstantNode>())) {
+  int64_t* strides_val = ToVector(cstrides->data);
+  for (int64_t i = 0; i < cstrides->data.Shape().front(); ++i) {
+strides.push_back(strides_val[i]);
+  }
+  int64_t* begin_val = ToVector(cbegin->data);
+  for (int64_t i = 0; i < cbegin->data.Shape().front(); ++i) {
+begin.push_back(begin_val[i]);
+  }
+  int64_t* end_val = ToVector(cend->data);
+  for (int64_t i = 0; i < cend->data.Shape().front(); ++i) {
+end.push_back(end_val[i]);
+  }
+}

Review comment:
   For else case, I think we should directly return the original layout as 
new layout, since we can't compute for symbolic begin/end/strides.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-28 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r416214792



##
File path: python/tvm/relay/op/transform.py
##
@@ -613,13 +613,13 @@ def strided_slice(data, begin, end, strides=None):
 data : relay.Expr
 The source array to be sliced.
 
-begin: list of int
+begin: relay.Expr

Review comment:
   Should we allow begin, end and strides to be list/tuple as well? And do the
conversion to const before calling the backend API.
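The conversion suggested above can be sketched in plain Python. This is a hypothetical stand-in model, not TVM code: `Expr` and `Const` here mimic `relay.Expr` and `relay.const`, and `normalize_slice_arg` is an illustrative helper name.

```python
# Sketch of accepting either Python sequences or relay expressions for
# begin/end/strides. `Expr` and `Const` are stand-ins for relay classes.
class Expr:
    """Stand-in for relay.Expr."""


class Const(Expr):
    """Stand-in for a relay constant wrapping a list of ints."""

    def __init__(self, value):
        self.value = [int(v) for v in value]


def normalize_slice_arg(arg):
    """Turn a list/tuple into a constant; pass expressions through unchanged."""
    if isinstance(arg, (list, tuple)):
        return Const(arg)
    if isinstance(arg, Expr):
        return arg
    raise TypeError("begin/end/strides must be list, tuple or Expr")
```

With this shape, a frontend call could accept `[0, 1]` and a symbolic expression interchangeably, converting the former to a constant before the backend API sees it.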









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-24 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r414959052



##
File path: src/relay/op/tensor/transform.cc
##
 @@ -1891,81 +1952,163 @@ Array<Array<Layout>> StridedSliceInferCorrectLayout(
   }
 
   CHECK(old_in_layouts.defined());
-  CHECK_EQ(old_in_layouts.size(), 1);
+  CHECK_GE(old_in_layouts.size(), 1);
   CHECK(old_in_shapes.defined());
-  CHECK_EQ(old_in_shapes.size(), 1);
+  CHECK_GE(old_in_shapes.size(), 1);
 
   auto layout = old_in_layouts[0];
   if (layout.defined() && new_in_layouts.defined()) {
-CHECK_EQ(new_in_layouts.size(), 1);
+CHECK_GE(new_in_layouts.size(), 1);
 auto new_layout = new_in_layouts[0];
 auto shape = old_in_shapes[0];
 
 // NOTE: Discard "const" qualifier here.
 auto *params = const_cast<StridedSliceAttrs*>(attrs.as<StridedSliceAttrs>());
+CHECK(params != nullptr);
+Array<Integer> begin, end, strides;
+const ConstantNode *cbegin, *cend, *cstrides;
+if ((cbegin = params->begin.as<ConstantNode>()) &&
+(cend = params->end.as<ConstantNode>()) &&
+(cstrides = params->strides.as<ConstantNode>())) {
+  int64_t* strides_val = ToVector(cstrides->data);
+  for (int64_t i = 0; i < cstrides->data.Shape().front(); ++i) {
+strides.push_back(strides_val[i]);
+  }
+  int64_t* begin_val = ToVector(cbegin->data);
+  for (int64_t i = 0; i < cbegin->data.Shape().front(); ++i) {
+begin.push_back(begin_val[i]);
+  }
+  int64_t* end_val = ToVector(cend->data);
+  for (int64_t i = 0; i < cend->data.Shape().front(); ++i) {
+end.push_back(end_val[i]);
+  }
+}
 
 Array<Integer> new_begin, new_end;
 
-for (size_t i = 0; i < params->begin.size(); i++) {
+for (size_t i = 0; i < begin.size(); i++) {
   const LayoutAxis& axis = layout[i];
   if (!axis.IsPrimal()) {
 // original layout that contains splitted axes is not supported
 return {{Layout::Undef()}, {Layout::Undef()}};
   }
   auto factor = new_layout.FactorOf(axis);
   if (factor == -1) {
-new_begin.push_back(params->begin[i]);
-new_end.push_back(params->end[i]);
+new_begin.push_back(begin[i]);
+new_end.push_back(end[i]);
   } else {
-if (params->strides.defined() && i < params->strides.size()) {
-  auto stride = params->strides[i];
+if (strides.defined() && i < strides.size()) {
+  auto stride = strides[i];
   // arbitrary stride is not supported
   if (stride.defined() && stride->value != 1) {
 return {{Layout::Undef()}, {Layout::Undef()}};
   }
 }
-int64_t begin = params->begin[i].defined() ? params->begin[i]->value : 0;
-int64_t end = params->end[i].defined() ? params->end[i]->value :
+int64_t bg = begin[i].defined() ? begin[i]->value : 0;
+int64_t ed = end[i].defined() ? end[i]->value :
 shape[i].as<IntImmNode>()->value;
-if (begin % factor || end % factor) {
+if (bg % factor || ed % factor) {
   // transform to original layout
   return {{Layout::Undef()}, {Layout::Undef()}};
 }
-new_begin.push_back(tvm::Integer(begin / factor));
-new_end.push_back(tvm::Integer(end / factor));
+new_begin.push_back(tvm::Integer(bg / factor));
+new_end.push_back(tvm::Integer(ed / factor));
   }
 }
-layout = new_layout;
-params->begin = new_begin;
-params->end = new_end;
-  }
-  return {{layout}, {layout}};
-}
 
+layout = new_layout;
 
-// Positional relay function to create StridedSlice operator used by frontend FFI.
-Expr MakeStridedSlice(Expr data,
-  Array<Integer> begin,
-  Array<Integer> end,
-  Array<Integer> strides) {
-  auto attrs = make_object<StridedSliceAttrs>();
-  attrs->begin = std::move(begin);
-  attrs->end = std::move(end);
-  attrs->strides = std::move(strides);
-  static const Op& op = Op::Get("strided_slice");
-  return Call(op, {data}, Attrs(attrs), {});
+DLContext ctx;
+ctx.device_type = kDLCPU;
+ctx.device_id = 0;
+auto begin_ndarray = runtime::NDArray::Empty({int64_t(new_begin.size())},
+ DataType::Int(64), ctx);
+auto end_ndarray = runtime::NDArray::Empty({int64_t(new_begin.size())},
+   DataType::Int(64), ctx);
+auto strides_ndarray = runtime::NDArray::Empty({int64_t(new_begin.size())},
+   DataType::Int(64), ctx);
+int64_t* begin_data = static_cast<int64_t*>(begin_ndarray->data);
+int64_t* end_data = static_cast<int64_t*>(end_ndarray->data);
+for (size_t i = 0; i < new_begin.size(); ++i) {
+  begin_data[i] = new_begin[i];
+  end_data[i] = new_end[i];
+}
+params->begin = Constant(begin_ndarray);
+params->end = Constant(end_ndarray);
+  }
+  return {{layout, Layout("C"), Layout("C"), Layout("C")}, {layout}};
+}
+
+inline te::Tensor DynamicStridedSlice(const 

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411062353



##
File path: topi/python/topi/vision/nms.py
##
@@ -64,7 +67,56 @@ def hybrid_rearrange_out(data, one):
 
 
 @hybrid.script
-def hybrid_get_valid_counts(data, score_threshold, id_index, score_index, one):
+def hybrid_rearrange_indices_out(data, one, batch_size):

Review comment:
   Same.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411062505



##
File path: topi/python/topi/vision/nms.py
##
@@ -139,24 +200,28 @@ def get_valid_counts(data, score_threshold=0, id_index=0, 
score_index=1):
 
 Returns
 ---
+valid_count : tvm.te.Tensor
+1-D tensor for valid number of boxes.
+
 out_tensor : tvm.te.Tensor
 Rearranged data tensor.
 
-valid_count : tvm.te.Tensor
-1-D tensor for valid number of boxes.
+out_indices: tvm.te.Tensor or numpy NDArray
+Related index in input data.
 """
 score_threshold_const = tvm.tir.const(score_threshold, data.dtype)
 id_index_const = tvm.tir.const(id_index, "int32")
 score_index_const = tvm.tir.const(score_index, "int32")
 return hybrid_get_valid_counts(data, score_threshold_const,
id_index_const, score_index_const,
-   tvm.tir.const(1, data.dtype))
+   tvm.tir.const(1, data.dtype),
+   data.shape[0])
 
 
 @hybrid.script
-def hybrid_nms(data, sorted_index, valid_count,
-   max_output_size, iou_threshold, force_suppress,
-   top_k, coord_start, id_index, score_index, zero, one):
+def hybrid_nms(data, sorted_index, valid_count, indices, batch_size, 
max_output_size,

Review comment:
   Same.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411062395



##
File path: topi/python/topi/vision/nms.py
##
@@ -64,7 +67,56 @@ def hybrid_rearrange_out(data, one):
 
 
 @hybrid.script
-def hybrid_get_valid_counts(data, score_threshold, id_index, score_index, one):
+def hybrid_rearrange_indices_out(data, one, batch_size):
+"""Hybrid routine to rearrange nms output to
+move all valid entries to top.
+
+Parameters
+--
+data : tvm.te.Tensor or numpy NDArray
+NMS output. 3-D tensor with shape
+[batch_size, num_anchors, 6] or
+[batch_size, num_anchors, 5], or 2-D
+tensor with shape [batch_size, num_anchors].
+
+one: tvm.tir.const
+Constant one with the same dtype as data.
+
+batch_size: tvm.tir.IntImm or tvm.tir.Var
+Batch size. We need to pass it in since hybrid script doesn't support
+binding variable to symbolic dim.
+
+Returns
+---
+output : tvm.te.Tensor or numpy NDArray
+2-D tensor with shape [batch_size, num_anchors].
+
+valid_box_count : tvm.te.Tensor or numpy NDArray
+Tensor with shape [batch_size, 1], indicates
+the valid number of boxes.
+"""
+num_anchors = data.shape[1]
+valid_box_count = output_tensor((batch_size, 1), "int32")
+output = output_tensor((batch_size, num_anchors), data.dtype)
+
+for i in parallel(batch_size):
+valid_idx = 0
+for j in range(num_anchors):
+if data[i, j] >= 0:
+output[i, valid_idx] = data[i, j]
+valid_idx += 1
+if data[i, j] > num_anchors or data[i, j] < -num_anchors:
+output[i, valid_idx] = 0
+valid_idx += 1
+if j >= valid_idx:
+output[i, j] = -one
+valid_box_count[i, 0] = valid_idx
+
+return output, valid_box_count
+
+
+@hybrid.script
+def hybrid_get_valid_counts(data, score_threshold, id_index, score_index, one, 
batch_size):

Review comment:
   Same.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411062313



##
File path: topi/python/topi/vision/nms.py
##
@@ -23,7 +23,7 @@
 from ..sort import argsort
 
 @hybrid.script
-def hybrid_rearrange_out(data, one):
+def hybrid_rearrange_box_out(data, one, batch_size):

Review comment:
   We might want to do the same thing for num_anchors.









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411021266



##
File path: src/relay/transforms/fuse_ops.cc
##
@@ -247,6 +247,28 @@ class IndexedForwardGraph::Creator : private ExprVisitor {
   this->Update(call->op, node, kOpaque);
 }
 
+if (call->attrs.as<StridedSliceAttrs>()) {
+  bool is_dyn{false};
+  for (auto arg : call->args) {
+    if (!arg.as<ConstantNode>()) {
+      is_dyn = true;
+      break;
+    }
+    auto arg_tt = arg->checked_type().as<TensorTypeNode>();
+    if (arg_tt) {

Review comment:
   Since we have already checked begin, end and strides via as<ConstantNode>, I
think here we just need to check the first arg?
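The dynamism check discussed in this comment can be mimicked with a small Python sketch. This is a hypothetical model, not TVM code: `Constant` stands in for a constant node, `args[0]` is the data tensor, and `args[1:]` are begin/end/strides.

```python
# Model of the fuse_ops dynamism check: strided_slice is treated as
# dynamic when any of begin/end/strides (args[1:]) is not a constant.
# The data tensor itself (args[0]) is never a constant, so it is skipped.
class Constant:
    """Stand-in for a relay ConstantNode."""

    def __init__(self, value):
        self.value = value


def is_dynamic_strided_slice(args):
    """args = [data, begin, end, strides]; check only args[1:]."""
    return any(not isinstance(a, Constant) for a in args[1:])
```

Skipping `args[0]` matches the reviewer's point: checking the data tensor against `ConstantNode` would always flag the op as dynamic.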









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-19 Thread GitBox


kevinthesun commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r411020578



##
File path: src/relay/transforms/fuse_ops.cc
##
@@ -247,6 +247,28 @@ class IndexedForwardGraph::Creator : private ExprVisitor {
   this->Update(call->op, node, kOpaque);
 }
 
+if (call->attrs.as<StridedSliceAttrs>()) {
+  bool is_dyn{false};
+  for (auto arg : call->args) {
+    if (!arg.as<ConstantNode>()) {

Review comment:
   Do we need to check only from the second arg onward, since the first arg is
the input data?









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-04-18 Thread GitBox
kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] 
Dynamic NMS and strided_slice
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r410751616
 
 

 ##
 File path: src/relay/op/tensor/transform.cc
 ##
 @@ -1775,105 +1776,165 @@ Array<Integer> GetIntArray(Array<IndexExpr> arr) {
   return Downcast<Array<Integer>>(arr);
 }
 
-
 // strided_slice
 TVM_REGISTER_NODE_TYPE(StridedSliceAttrs);
+
+int64_t* ToVector(const runtime::NDArray& array) {
+  size_t len = array.Shape().front();
+  int64_t* rel_vec = new int64_t[len];
+  if (array->dtype.code == kDLInt) {
+    if (array->dtype.bits == 8) {
+      int8_t* init_array = reinterpret_cast<int8_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 16) {
+      int16_t* init_array = reinterpret_cast<int16_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 32) {
+      int32_t* init_array = reinterpret_cast<int32_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 64) {
+      int64_t* init_array = reinterpret_cast<int64_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    }
+  } else if (array->dtype.code == kDLUInt) {
+    if (array->dtype.bits == 8) {
+      uint8_t* init_array = reinterpret_cast<uint8_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 16) {
+      uint16_t* init_array = reinterpret_cast<uint16_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 32) {
+      uint32_t* init_array = reinterpret_cast<uint32_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    } else if (array->dtype.bits == 64) {
+      uint64_t* init_array = reinterpret_cast<uint64_t*>(array->data);
+      for (size_t i = 0; i < len; ++i) {
+        rel_vec[i] = int64_t(init_array[i]);
+      }
+      return rel_vec;
+    }
+  }
+  LOG(FATAL) << "Unknown data type: " << tvm::runtime::DLDataType2String(array->dtype);
+  return rel_vec;
+}
+
 bool StridedSliceRel(const Array<Type>& types,
                      int num_inputs,
                      const Attrs& attrs,
                      const TypeReporter& reporter) {
-  CHECK_EQ(types.size(), 2);
-  const auto* data = types[0].as<TensorTypeNode>();
-  if (data == nullptr) return false;
-
-  const StridedSliceAttrs *param = attrs.as<StridedSliceAttrs>();
+  CHECK_EQ(types.size(), 5);
+  const StridedSliceAttrs* param = attrs.as<StridedSliceAttrs>();
   CHECK(param != nullptr);
-
+  const auto* data = types[0].as<TensorTypeNode>();
+  CHECK(data != nullptr);
   auto dshape = data->shape;
-  auto num_axis = dshape.size();
-
-  std::vector<int64_t> stride_vec;
-  for (Integer i : param->strides) {
-    CHECK(i.defined());
-    stride_vec.push_back(i->value);
-  }
-  for (size_t i = stride_vec.size(); i < num_axis; ++i) {
-    stride_vec.push_back(1);
-  }
-  const int64_t max_range = std::numeric_limits<int64_t>::max();
-
-  std::vector<int64_t> begin_vec;
-  for (size_t i = 0; i < param->begin.size(); ++i) {
-if (!param->begin[i].defined()) {
-  // value=None
+  int64_t num_axis = dshape.size();
+
+  // calculate output shape
+  std::vector<IndexExpr> oshape(num_axis);
+  const ConstantNode *cbegin, *cend, *cstrides;
+  if ((cbegin = param->begin.as<ConstantNode>()) &&
+      (cend = param->end.as<ConstantNode>()) &&
+      (cstrides = param->strides.as<ConstantNode>())) {
+    std::vector<int64_t> stride_vec;
+    int64_t* strides_val = ToVector(cstrides->data);
+    for (int64_t i = 0; i < cstrides->data.Shape().front(); ++i) {
+      stride_vec.push_back(strides_val[i]);
+    }
+    for (int64_t i = stride_vec.size(); i < num_axis; ++i) {
+      stride_vec.push_back(1);
+    }
+    const int64_t max_range = std::numeric_limits<int64_t>::max();
+    std::vector<int64_t> begin_vec;
+int64_t* begin_val = ToVector(cbegin->data);
+for (int64_t i = 0; i < cbegin->data.Shape().front(); ++i) {
+  begin_vec.push_back(begin_val[i]);
+}
+for (int64_t i = begin_vec.size(); i < num_axis; ++i) {
   begin_vec.push_back(stride_vec[i] > 0 ? 0 : max_range);
-} else {
-  begin_vec.push_back(param->begin[i]->value);
 }
-  }
-  for (size_t i = begin_vec.size(); i < num_axis; ++i) {
-begin_vec.push_back(stride_vec[i] > 0 ? 0 : max_range);
-  }
-
-  std::vector<int64_t> end_vec;
-  for (size_t i = 0; i < param->end.size(); ++i) {
-    // allow end to be None
-    if (!param->end[i].defined()) {
+    std::vector<int64_t> end_vec;
+int64_t* end_val = ToVector(cend->data);
+for (int64_t i = 0; i < cend->data.Shape().front(); ++i) {
+  end_vec.push_back(end_val[i]);
+}

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2019-11-12 Thread GitBox
kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] 
Dynamic NMS and strided_slice
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r345404746
 
 

 ##
 File path: include/tvm/relay/attrs/vision.h
 ##
 @@ -96,6 +96,7 @@ struct GetValidCountsAttrs : public tvm::AttrsNode<GetValidCountsAttrs> {
 /*! \brief Attributes used in non_maximum_suppression operator */
 struct NonMaximumSuppressionAttrs : public tvm::AttrsNode<NonMaximumSuppressionAttrs> {
   int max_output_size;
+  double score_threshold;
 
 Review comment:
   If we change the dynamic NMS implementation, we might not need this attribute.




With regards,
Apache Git Services

