[tvm] branch main updated: Fix _get_yolo_detections (#8477)

2021-07-16 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 7388715  Fix _get_yolo_detections (#8477)
7388715 is described below

commit 73887156321fcee1700ef8661f052d8d38022a4d
Author: Alexander Pivovarov 
AuthorDate: Fri Jul 16 10:26:25 2021 -0700

Fix _get_yolo_detections (#8477)
---
 python/tvm/relay/testing/yolo_detection.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/python/tvm/relay/testing/yolo_detection.py b/python/tvm/relay/testing/yolo_detection.py
index a387f30..949d024 100644
--- a/python/tvm/relay/testing/yolo_detection.py
+++ b/python/tvm/relay/testing/yolo_detection.py
@@ -103,8 +103,8 @@ def _get_yolo_detections(l, im_shape, net_shape, thresh, relative, dets):
 l["biases"],
 np.asarray(l["mask"])[location[0]],
 location,
-data.shape[2],
 data.shape[3],
+data.shape[2],
 net_shape[0],
 net_shape[1],
 )
@@ -139,10 +139,10 @@ def _get_region_detections(l, im_shape, net_shape, thresh, relative, dets):
 l["biases"],
 n,
 location,
-data.shape[2],
 data.shape[3],
 data.shape[2],
 data.shape[3],
+data.shape[2],
 )
 objectness = scale if scale > thresh else 0
 if objectness:
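
The fix swaps the two shape arguments at both call sites. A minimal sketch of why the order matters (illustrative only; it assumes the layer output is NCHW, so shape[2] is height and shape[3] is width):

    import numpy as np

    # Hypothetical YOLO layer output in NCHW layout: (batch, channels, H, W).
    data = np.zeros((1, 255, 13, 26))
    h, w = data.shape[2], data.shape[3]
    # The callee expects width before height, so the corrected calls pass
    # data.shape[3] (w) ahead of data.shape[2] (h).
    assert (w, h) == (26, 13)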


[tvm] branch main updated (76eb16f -> 9a72ba3)

2021-04-17 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 from 76eb16f  [µTVM] Zephyr: Add STM32F746 disco board as a test platform (#7863)
 add 9a72ba3  [frontend][tflite] float16 quant support (#7736)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  35 +-
 tests/python/frontend/tflite/test_forward.py | 463 +++
 2 files changed, 349 insertions(+), 149 deletions(-)


[tvm] branch main updated: Free TensorRT engine and context (#7702)

2021-03-19 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 570767f  Free TensorRT engine and context (#7702)
570767f is described below

commit 570767f78851fbc0472c230adcb2c98e47bad0e8
Author: Trevor Morris 
AuthorDate: Fri Mar 19 01:09:45 2021 -0700

Free TensorRT engine and context (#7702)
---
 src/runtime/contrib/tensorrt/tensorrt_runtime.cc | 8 
 1 file changed, 8 insertions(+)

diff --git a/src/runtime/contrib/tensorrt/tensorrt_runtime.cc b/src/runtime/contrib/tensorrt/tensorrt_runtime.cc
index 3f87f8d..e28c5a8 100644
--- a/src/runtime/contrib/tensorrt/tensorrt_runtime.cc
+++ b/src/runtime/contrib/tensorrt/tensorrt_runtime.cc
@@ -109,6 +109,14 @@ class TensorRTRuntime : public JSONRuntimeBase {
   }
 
 #ifdef TVM_GRAPH_RUNTIME_TENSORRT
+  /*! \brief Destroy engines and contexts. */
+  ~TensorRTRuntime() {
+    for (auto& it : trt_engine_cache_) {
+      it.second.context->destroy();
+      it.second.engine->destroy();
+    }
+  }
+
   /*! \brief Run inference using built engine. */
   void Run() override {
 BuildEngine();
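
The added destructor releases every cached engine/context pair when the runtime is torn down. A minimal Python sketch of the same cleanup pattern (class and member names are hypothetical, not the TVM API):

    class EngineCache:
        """Caches (engine, context) pairs and frees them on teardown."""

        def __init__(self):
            self._cache = {}  # key -> (engine, context)

        def __del__(self):
            # Mirrors ~TensorRTRuntime(): destroy the execution context
            # before the engine it was created from, for every cached pair.
            for engine, context in self._cache.values():
                context.destroy()
                engine.destroy()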


[tvm] branch main updated: [TFLite] Cast operator adapted for MLIR-based convertor (#7639)

2021-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 2ee860e  [TFLite] Cast operator adapted for MLIR-based convertor (#7639)
2ee860e is described below

commit 2ee860e902e77f45996a5585fc09c5e5c29788e1
Author: Dmitriy Smirnov 
AuthorDate: Fri Mar 19 06:47:45 2021 +

[TFLite] Cast operator adapted for MLIR-based convertor (#7639)

* [TFLite] Cast operator adapted for MLIR-based convertor

Cast operator now can be executed in MLIR-based version.
Unit test updated

Change-Id: I30e5c1c9d69355116b560af8f6d0582b2d593538

* Comment added

Change-Id: I3e2d29ef201283de337168d0b82679b63ca2fcf4
---
 python/tvm/relay/frontend/tflite.py  | 17 -
 tests/python/frontend/tflite/test_forward.py | 19 ++-
 2 files changed, 26 insertions(+), 10 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index d6f7047..a5c9a58 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -2336,11 +2336,18 @@ class OperatorConverter(object):
         input_tensor = input_tensors[0]
         in_expr = self.get_expr(input_tensor.tensor_idx)
 
-        assert op.BuiltinOptionsType() == BuiltinOptions.CastOptions
-        op_options = op.BuiltinOptions()
-        cast_options = CastOptions()
-        cast_options.Init(op_options.Bytes, op_options.Pos)
-        cast_dtype = cast_options.OutDataType()
+        # MLIR-based converter outputs no BuiltinOptions for Cast operator. In this
+        # case the output type can be derived from the Cast operator output tensor.
+        # When TOCO converter is used there will be "normal" BuiltinOptions.CastOptions
+        # with output type.
+        if op.BuiltinOptions() is not None:
+            assert op.BuiltinOptionsType() == BuiltinOptions.CastOptions
+            op_options = op.BuiltinOptions()
+            cast_options = CastOptions()
+            cast_options.Init(op_options.Bytes, op_options.Pos)
+            cast_dtype = cast_options.OutDataType()
+        else:
+            cast_dtype = self.get_output_tensors(op)[0].tensor.Type()
 
         out = _op.cast(in_expr, self.get_tensor_type_str(cast_dtype))
 
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 0d02c15..7c12cd3 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -647,19 +647,28 @@ def test_forward_transpose():
 # 
 
 
-def _test_cast(data, cast_dtype):
+def _test_cast(data, cast_dtype, use_mlir=False):
     """ One iteration of CAST """
     with tf.Graph().as_default():
         in_data = array_ops.placeholder(shape=data.shape, dtype=data.dtype)
         out = math_ops.cast(in_data, cast_dtype)
-        compare_tflite_with_tvm(data, "Placeholder:0", [in_data], [out])
+        compare_tflite_with_tvm(
+            data, "Placeholder:0", [in_data], [out], experimental_new_converter=use_mlir
+        )
 
 
 def test_forward_cast():
     """ CAST """
-    _test_cast(np.arange(6.0, dtype=np.float32).reshape((1, 6)), cast_dtype=tf.int32)
-    _test_cast(np.arange(6.0, dtype=np.float32).reshape((1, 6)), cast_dtype=tf.uint8)
-    _test_cast(np.arange(6.0, dtype=np.int32).reshape((1, 6)), cast_dtype=tf.int64)
+    for use_mlir in [False, True]:
+        _test_cast(
+            np.arange(6.0, dtype=np.float32).reshape((1, 6)), cast_dtype=tf.int32, use_mlir=use_mlir
+        )
+        _test_cast(
+            np.arange(6.0, dtype=np.float32).reshape((1, 6)), cast_dtype=tf.uint8, use_mlir=use_mlir
+        )
+        _test_cast(
+            np.arange(6.0, dtype=np.int32).reshape((1, 6)), cast_dtype=tf.int64, use_mlir=use_mlir
+        )
 
 
 ###
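
For context, the experimental_new_converter flag threaded through the test selects TensorFlow's MLIR-based converter instead of the legacy TOCO path. A hedged sketch of setting that flag when producing a TFLite model (assumes TF 2.x; the helper function is illustrative):

    import tensorflow as tf

    def to_tflite(concrete_func, use_mlir: bool) -> bytes:
        converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
        converter.experimental_new_converter = use_mlir  # False selects the TOCO path
        return converter.convert()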


[tvm] branch main updated (45442ed -> 431a7d6)

2021-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 from 45442ed  [Relay][Training][Pass] Factor out first-order AD to a module pass (#7677)
 add 431a7d6  Default value for graph_runtime Init lookup_linked_param_func (#7676)

No new revisions were added by this update.

Summary of changes:
 src/runtime/graph/graph_runtime.cc | 5 +++--
 src/runtime/graph/graph_runtime.h  | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)


[tvm] branch main updated (c5f608f -> 5d5bbfb)

2021-03-05 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 from c5f608f  [BYOC][TRT]Fix groups cannot divide output channel count error for deconv when groups>1 (#7595)
 add 5d5bbfb  Support negative axis for gather (#7600)

No new revisions were added by this update.

Summary of changes:
 include/tvm/topi/transform.h  |   3 +
 python/tvm/relay/op/transform.py  |   2 +-
 src/relay/op/tensor/transform.cc  |   3 +
 tests/python/frontend/pytorch/test_forward.py |   6 +-
 tests/python/relay/test_op_level3.py  | 223 ++
 5 files changed, 167 insertions(+), 70 deletions(-)



[tvm] branch main updated (c118b08 -> 38c9eb1)

2021-02-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from c118b08  Support negative pad values (#7375)
 add 38c9eb1  Fix Bug in Bilinear Interpolation and Add Deform Conv to PT FrontEnd (#7397)

No new revisions were added by this update.

Summary of changes:
 include/tvm/topi/detail/tensor_utils.h | 95 +-
 python/tvm/relay/frontend/pytorch.py   | 27 ++
 .../tvm/topi/testing/deformable_conv2d_python.py   | 26 --
 python/tvm/topi/testing/roi_align_python.py| 34 
 python/tvm/topi/vision/rcnn/roi_align.py   |  4 +-
 tests/python/frontend/pytorch/test_forward.py  | 88 ++--
 tests/python/relay/test_op_level5.py   | 71 
 7 files changed, 257 insertions(+), 88 deletions(-)



[tvm] branch main updated (f1b9663 -> c118b08)

2021-02-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from f1b9663  [RPC] Replace timestamp with counter (#7389)
 add c118b08  Support negative pad values (#7375)

No new revisions were added by this update.

Summary of changes:
 src/relay/op/nn/pad.cc   |  9 +++
 tests/python/relay/test_op_level2.py | 51 +++-
 2 files changed, 43 insertions(+), 17 deletions(-)



[tvm] branch main updated (2290cc0 -> f8c55db)

2021-01-19 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 2290cc0  [TOPI] Minor perf improvement for GPU scatter (#7233)
 add f8c55db  [TFLite] Added ability to infer shapes for arguments (#7293)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py | 35 +++
 1 file changed, 23 insertions(+), 12 deletions(-)



[tvm] branch main updated (d1399f3 -> 59699a7)

2020-12-29 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from d1399f3  [Torch] Support hard_swish op (#7174)
 add 59699a7  [TFLite] Reshape - support different qnn params for input and output (#7159)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  30 +-
 tests/python/frontend/tflite/test_forward.py | 131 +--
 2 files changed, 130 insertions(+), 31 deletions(-)



[tvm] branch main updated (a7bf979 -> 22a0877)

2020-12-02 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 from a7bf979  [AutoScheduler] Support layout rewrite for whole networks (#6987)
 add 22a0877  Fix trt Test (#7016)

No new revisions were added by this update.

Summary of changes:
 tests/python/contrib/test_tensorrt.py | 18 --
 1 file changed, 8 insertions(+), 10 deletions(-)



[tvm] branch main updated: Use channels from attrs if possible (#7011)

2020-12-01 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 0778afd  Use channels from attrs if possible (#7011)
0778afd is described below

commit 0778afd6d0fb0283fba5d4839f27e2ac548a3284
Author: Trevor Morris 
AuthorDate: Tue Dec 1 22:04:43 2020 -0800

Use channels from attrs if possible (#7011)
---
 src/runtime/contrib/tensorrt/tensorrt_ops.cc | 4 
 tests/python/contrib/test_tensorrt.py| 5 +
 2 files changed, 9 insertions(+)

diff --git a/src/runtime/contrib/tensorrt/tensorrt_ops.cc b/src/runtime/contrib/tensorrt/tensorrt_ops.cc
index 057743c..c3ff1c4 100644
--- a/src/runtime/contrib/tensorrt/tensorrt_ops.cc
+++ b/src/runtime/contrib/tensorrt/tensorrt_ops.cc
@@ -243,6 +243,10 @@ class Conv2DOpConverter : public TensorRTOpConverter {
     auto str_padding = params->node.GetAttr<std::vector<std::string>>("padding");
     int groups = std::stoi(params->node.GetAttr<std::vector<std::string>>("groups")[0]);
     int channels = weight_shape[0];
+    if (params->node.HasAttr("channels") &&
+        !params->node.GetAttr<std::vector<std::string>>("channels")[0].empty()) {
+      channels = std::stoi(params->node.GetAttr<std::vector<std::string>>("channels")[0]);
+    }
     // TRT conv2d op doesn't support asymmetric padding before 5.1, so we
     // workaround by adding a padding layer before the pooling op.
     nvinfer1::DimsHW prepadding, postpadding;
diff --git a/tests/python/contrib/test_tensorrt.py b/tests/python/contrib/test_tensorrt.py
index 10c311a..de98222 100644
--- a/tests/python/contrib/test_tensorrt.py
+++ b/tests/python/contrib/test_tensorrt.py
@@ -352,6 +352,7 @@ def test_conv2d():
         padding=(0, 0),
         strides=(1, 1),
         dilation=(1, 1),
+        channels=None,
     ):
         x = relay.var("x", shape=(x_shape), dtype="float32")
         kernel = relay.var("kernel", shape=(k_shape), dtype="float32")
@@ -363,6 +364,7 @@ def test_conv2d():
             padding=padding,
             strides=strides,
             dilation=dilation,
+            channels=channels,
         )
         f = relay.Function([x, kernel], out)
         return f, {"x": x_shape, "kernel": k_shape}, ["kernel"]
@@ -380,6 +382,9 @@ def test_conv2d():
                 dilation=dilation,
             )
         )
+    run_and_verify_func(
+        get_graph((1, 3, 16, 16), (3, 8, 7, 7), 3, [2, 2, 3, 3], [2, 2], [1, 1], 24)
+    )
 
 
 def test_conv2d_nhwc():
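
The C++ change above prefers the explicit "channels" attribute over the kernel's leading dimension. Python pseudocode of the selection logic (names hypothetical, for illustration only):

    def infer_output_channels(attrs: dict, weight_shape: list) -> int:
        # Prefer the relay "channels" attribute when present and non-empty;
        # otherwise fall back to the kernel's leading dimension.
        ch = attrs.get("channels", "")
        return int(ch) if ch != "" else weight_shape[0]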



[tvm] branch main updated (93758ca -> 7950ea1)

2020-12-01 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 93758ca  [TVMC] use target_host when it is set (#6855)
 add 7950ea1  Dynamic Batch Support for TRT  (#6955)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/tensorrt.py  | 117 --
 src/relay/backend/utils.h|   3 +-
 src/runtime/contrib/tensorrt/tensorrt_runtime.cc |  40 +++--
 tests/python/contrib/test_tensorrt.py| 185 +++
 4 files changed, 318 insertions(+), 27 deletions(-)



[incubator-tvm] branch main updated (a6e2417 -> e009188)

2020-11-14 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from a6e2417  [TF parser] Handle int64 dtype in range (#6918)
 add e009188  [ShapeFunc] Handle weights in shape func (#6912)

No new revisions were added by this update.

Summary of changes:
 src/relay/backend/compile_engine.cc | 22 +-
 tests/python/relay/test_vm.py   | 25 +
 2 files changed, 46 insertions(+), 1 deletion(-)



[incubator-tvm] branch main updated (f9d26fb -> a6e2417)

2020-11-14 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from f9d26fb  Make TVMLogf platform-independent (#6916)
 add a6e2417  [TF parser] Handle int64 dtype in range (#6918)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tensorflow.py  | 10 +-
 tests/python/frontend/tensorflow/test_forward.py |  9 +
 2 files changed, 10 insertions(+), 9 deletions(-)



[incubator-tvm] branch main updated (c7c39a4 -> b4f99e5)

2020-11-13 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c7c39a4  Fix edge cases in const_int_bound and fold_scale_axis (#6911)
 add b4f99e5  [TRT][BYOC] handling dynamism in TensorRT to support OD models (#6905)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/tensorrt.py   | 222 -
 src/relay/backend/contrib/tensorrt/codegen.cc |  17 +-
 src/runtime/contrib/tensorrt/tensorrt_ops.cc  |   2 +-
 tests/python/contrib/test_tensorrt.py | 441 --
 4 files changed, 438 insertions(+), 244 deletions(-)



[incubator-tvm] branch main updated (ad92efd -> 4c4888b)

2020-10-28 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ad92efd  [API] Added remove_global_func to the Python API (#6787)
 add 4c4888b  [ManifestAlloc] Handle TupleType inputs in CheckReshapeOnly 
(#6776)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/transform/memory_alloc.py |  5 +
 tests/python/relay/test_vm.py  | 16 
 2 files changed, 21 insertions(+)



[incubator-tvm] branch main updated: TF argmax - handling int64 datatype (#6674)

2020-10-12 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new d24634a  TF argmax - handling int64 datatype (#6674)
d24634a is described below

commit d24634a7cc3c502176838d238af42cf7c94defbe
Author: Animesh Jain 
AuthorDate: Mon Oct 12 20:36:00 2020 -0700

TF argmax - handling int64 datatype (#6674)

Co-authored-by: Ubuntu 
---
 python/tvm/relay/frontend/tensorflow.py  |  6 +-
 tests/python/frontend/tensorflow/test_forward.py | 12 ++--
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/python/tvm/relay/frontend/tensorflow.py b/python/tvm/relay/frontend/tensorflow.py
index c7e8c00..3df582a 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -146,7 +146,11 @@ def _argx(func, func_name):
             raise TypeError(
                 "Unsupported argument for `{}` : `axis` should be a constant".format(func_name)
             )
-        return func(inputs[0], axis=axis_input_value, keepdims=False)
+        out = func(inputs[0], axis=axis_input_value, keepdims=False)
+        dtype = attr["output_type"].name
+        if dtype != "int32":
+            out = _op.cast(out, dtype=dtype)
+        return out
 
     return _impl
 
diff --git a/tests/python/frontend/tensorflow/test_forward.py b/tests/python/frontend/tensorflow/test_forward.py
index fb4c104..8e347e7 100644
--- a/tests/python/frontend/tensorflow/test_forward.py
+++ b/tests/python/frontend/tensorflow/test_forward.py
@@ -1601,16 +1601,16 @@ def _test_argx(func, data, **kwargs):
 
     with tf.Graph().as_default():
-        inp = array_ops.placeholder(shape=data.shape, dtype=data.dtype, name="c0")
-        func(inp, name="argx0", output_type=tf.int32, **kwargs)
-
+        func(inp, name="argx0", **kwargs)
         compare_tf_with_tvm(data, "c0:0", "argx0:0")
 
 
 def test_forward_argminmax():
-    for axis in [None, 0, 1, 2]:
-        data = np.random.uniform(size=(8, 4, 9)).astype("float32")
-        _test_argx(tf.argmax, data=data, axis=axis)
-        _test_argx(tf.argmin, data=data, axis=axis)
+    for output_type in [tf.int64, tf.int32]:
+        for axis in [None, 0, 1, 2]:
+            data = np.random.uniform(size=(8, 4, 9)).astype("float32")
+            _test_argx(tf.argmax, data=data, axis=axis, output_type=output_type)
+            _test_argx(tf.argmin, data=data, axis=axis, output_type=output_type)
 
 
 ###
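
A minimal repro of the behavior the parser now accounts for (assumes TensorFlow is installed): tf.argmax defaults to int64 output, while the relay op produces int32, hence the added cast when output_type differs.

    import tensorflow as tf

    x = tf.constant([[1.0, 3.0, 2.0]])
    print(tf.argmax(x, axis=1).dtype)                        # int64 (TF default)
    print(tf.argmax(x, axis=1, output_type=tf.int32).dtype)  # int32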



[incubator-tvm] branch master updated (2658ebe -> 21002cd)

2020-10-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2658ebe  Dynamic ONNX Importer (#6351)
 add 21002cd  Fix Strided Slice Infer Layout (#6621)

No new revisions were added by this update.

Summary of changes:
 src/relay/op/tensor/transform.cc  | 111 +++---
 tests/python/relay/test_pass_convert_op_layout.py |  46 +
 2 files changed, 124 insertions(+), 33 deletions(-)



[incubator-tvm] branch master updated (e52e9e9 -> 1a9dcf1)

2020-09-25 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e52e9e9  [TIR] Fix rewrite_simplify tir::builtin::shift_left (#6555)
 add 1a9dcf1  Make missing desired layout non-fatal (#6553)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/convert_layout.cc| 31 +++---
 tests/python/relay/test_pass_convert_op_layout.py | 50 +++
 2 files changed, 66 insertions(+), 15 deletions(-)



[incubator-tvm] branch master updated (0c3efc2 -> 63d203c)

2020-09-24 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 0c3efc2  [Frontend][Onnx] Added broadcasting to prelu alpha. (#6549)
 add 63d203c  [Relay/TOPI] Added dilation_value attribute to dilate operator. (#6550)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/attrs/nn.h |  2 ++
 include/tvm/topi/nn/dilate.h | 10 ++
 python/tvm/relay/op/nn/_nn.py|  2 +-
 python/tvm/relay/op/nn/nn.py | 11 +++
 python/tvm/topi/nn/dilate.py |  9 ++---
 python/tvm/topi/testing/dilate_python.py |  8 ++--
 src/relay/op/nn/nn.cc|  5 +++--
 src/topi/nn.cc   |  2 +-
 tests/python/relay/test_any.py   | 13 ++---
 tests/python/topi/python/test_topi_dilate.py | 13 ++---
 10 files changed, 52 insertions(+), 23 deletions(-)



[incubator-tvm] branch master updated (22b8121 -> 39c4719)

2020-09-24 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from 22b8121  tvmc: solve a linting error on onnx command line driver frontend (#6536)
 add 39c4719  [Relay/TOPI] Added 'offsets' and 'alignment' attributes to MATRIX_SET_DIAG. (#6429)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/attrs/transform.h | 19 +
 include/tvm/topi/transform.h| 41 ---
 python/tvm/relay/op/transform.py| 37 --
 python/tvm/topi/testing/matrix_set_diag.py  | 50 
 python/tvm/topi/transform.py| 39 +--
 src/relay/op/tensor/transform.cc| 52 -
 src/topi/transform.cc   |  6 ++-
 tests/python/relay/test_op_level10.py   | 17 
 tests/python/topi/python/test_topi_transform.py | 17 
 9 files changed, 232 insertions(+), 46 deletions(-)



[incubator-tvm] branch master updated (b4f8b28 -> 8de10e3)

2020-09-21 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from b4f8b28  [CI] Cancel previous build if new commit has been pushed to a PR (#6518)
 add 8de10e3  QnnBinaryLayout bugfix + unit test (#6513)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/infer_layout_util.h  |  4 ++--
 tests/python/relay/test_pass_convert_op_layout.py | 28 +++
 2 files changed, 30 insertions(+), 2 deletions(-)



[incubator-tvm] branch master updated (b81bdee -> aeef16d)

2020-09-10 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b81bdee  [Relay] Add Defunctionalization Pass  (#6400)
 add aeef16d  [QNN][Relay] Fixed bug in quantized conv2d. (#6420)

No new revisions were added by this update.

Summary of changes:
 src/relay/qnn/op/convolution.cc  | 23 +--
 tests/python/relay/test_op_qnn_conv2d.py | 28 
 2 files changed, 49 insertions(+), 2 deletions(-)



[incubator-tvm] branch master updated (34647ed -> 4c9a391)

2020-08-28 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from 34647ed  Add docker/lint.sh, for running dockerized lint scripts locally (#6333)
 add 4c9a391  quanitze operation expanded to take const argument (#6127)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  2 +-
 tests/python/frontend/tflite/test_forward.py | 27 +++
 2 files changed, 28 insertions(+), 1 deletion(-)



[incubator-tvm] branch master updated (158e9be -> 035a438)

2020-08-21 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 158e9be  [RELAY][DYN] Dynamic upsampling relay op (#6273)
 add 035a438  Retrigger build. (#6304)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 17 +
 tests/python/frontend/tflite/test_forward.py | 27 +++
 2 files changed, 44 insertions(+)



[incubator-tvm] branch master updated (f577aa6 -> e52c5ba)

2020-08-20 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from f577aa6  [RELAY][DYN] Implementation of the dynamic pad operator (#6284)
 add e52c5ba  Constant input attr added to fully connected operation in TFLite frontend (#6228)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  5 ++-
 tests/python/frontend/tflite/test_forward.py | 48 
 2 files changed, 29 insertions(+), 24 deletions(-)



[incubator-tvm] branch master updated (75df190 -> 7704232)

2020-08-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 75df190  [Torch] Support index_select (#6295)
 add 7704232  Gather operation with indices as tensor expr in TFLite frontend (#6168)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 54 
 tests/python/frontend/tflite/test_forward.py | 41 +
 2 files changed, 49 insertions(+), 46 deletions(-)



[incubator-tvm] branch master updated (8d91058 -> aa0271e)

2020-08-14 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from 8d91058  Update precision in the ONNX strided_slice, update precision of ToScalar (#6272)
 add aa0271e  Added support for tflite quantized maximum and minimum (#6018)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  42 +-
 tests/python/frontend/tflite/test_forward.py | 118 ++-
 2 files changed, 86 insertions(+), 74 deletions(-)



[incubator-tvm] branch master updated (06d7565 -> 9d34eaa)

2020-07-23 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from 06d7565  [Rust] Clean up conversions between TVM and Rust functions (#6114)
 add 9d34eaa  Improve reduction schedule on arm CPUs (#6110)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/strategy/arm_cpu.py | 6 ++
 1 file changed, 6 insertions(+)



[incubator-tvm] branch master updated (3c12a5e -> ccacb1e)

2020-07-17 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3c12a5e  [Test] Add missing test for fast erf (#6058)
 add ccacb1e  Fixed point multiplication improvements for AArch64 (#5980)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/attrs/transform.h   | 13 
 include/tvm/tir/builtin.h |  8 +
 include/tvm/tir/op.h  | 21 +
 python/tvm/relay/op/_tensor.py|  8 +
 python/tvm/relay/op/tensor.py | 21 +
 python/tvm/tir/__init__.py|  1 +
 python/tvm/tir/op.py  | 28 +
 src/relay/op/tensor/unary.cc  | 23 ++
 src/relay/qnn/op/requantize.cc| 12 ++-
 src/relay/qnn/util.cc | 43 ++---
 src/relay/qnn/util.h  | 32 +++
 src/relay/quantize/realize.cc | 20 ++--
 src/relay/transforms/pattern_util.h   |  8 +
 src/target/intrin_rule.cc | 46 +++
 src/tir/op/builtin.cc |  5 +++
 src/tir/op/op.cc  |  6 
 src/tir/transforms/lower_intrin.cc| 13 ++--
 tests/python/relay/test_op_level3.py  | 17 ++
 topi/python/topi/arm_cpu/conv2d_gemm.py   | 13 +---
 topi/python/topi/arm_cpu/conv2d_int8.py   | 14 -
 topi/python/topi/arm_cpu/injective.py |  3 +-
 topi/python/topi/arm_cpu/tensor_intrin.py | 52 +++
 topi/python/topi/math.py  | 27 
 23 files changed, 382 insertions(+), 52 deletions(-)



[incubator-tvm] branch master updated: Add support for tflite arg_min and arg_max (#5992)

2020-07-13 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 712c82f  Add support for tflite arg_min and arg_max (#5992)
712c82f is described below

commit 712c82fb38ec2beea5a72662fb00899ab9bc0a08
Author: Dmitriy Smirnov 
AuthorDate: Tue Jul 14 00:08:56 2020 +0100

Add support for tflite arg_min and arg_max (#5992)

* [Relay][Frontend][TFLite] Add parser support for arg_min_max

* this implementation supports only the case when the axis is a scalar
* tflite 1.13 removes all dims of size 1, Relay doesn't do this
* WARNING: every newer version of tflite > 1.13 needs keepdims=TRUE

* Migrated to tflite 2.1.0

keepdims set to False and added some checks

Note the unit tests emitted the following warning:
/workspace/src/te/schedule/bound.cc:119: not in feed graph consumer = compute(T_multiply_red_temp, 0x53f5050)

* linter

* Removed quantized argmin

Removed quantized argmin due to inability to provide a proper test case

* added negative ranges

* re-trigger CI

Co-authored-by: Ina_Dobreva 
---
 python/tvm/relay/frontend/tflite.py  | 50 
 tests/python/frontend/tflite/test_forward.py | 34 +++
 2 files changed, 84 insertions(+)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index 36221b7..1ec8237 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -67,6 +67,8 @@ class OperatorConverter(object):
 'ABS': self.convert_abs,
 'ADD': self.convert_add,
 'ADD_N': self.convert_add_n,
+'ARG_MAX': self.convert_arg_max,
+'ARG_MIN': self.convert_arg_min,
 'AVERAGE_POOL_2D': self.convert_average_pool2d,
 'BATCH_TO_SPACE_ND': self.convert_batch_to_space_nd,
 'CAST': self.convert_cast,
@@ -1634,6 +1636,54 @@ class OperatorConverter(object):
     def convert_reduce_any(self, op):
         return self._convert_reduce(_op.reduce.any, op)
 
+    def _convert_arg_min_max(self, relay_op, op):
+        """Generic method converting TFLite arg_min_max"""
+        try:
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.ArgMinOptions import ArgMinOptions
+            from tflite.ArgMaxOptions import ArgMaxOptions
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        input_tensors = self.get_input_tensors(op)
+        assert len(input_tensors) == 2, "two input tensor arguments expected"
+
+        output_tensors = self.get_output_tensors(op)
+        assert len(output_tensors) == 1, "one output tensor expected"
+
+        input_tensor = input_tensors[0]
+        in_expr = self.get_expr(input_tensor.tensor_idx)
+        axis_tensor = input_tensors[1]
+        # In Tensorflow, `axis` argument is a Tensor, not attribute. We
+        # support the case where it inputs from a scalar constant.
+        axis_value = self.get_tensor_value(axis_tensor)
+        assert axis_value.size == 1
+        axis_value = axis_value.item()
+
+        if op.BuiltinOptionsType() == BuiltinOptions.ArgMinOptions:
+            arg_min_max_options = ArgMinOptions()
+        elif op.BuiltinOptionsType() == BuiltinOptions.ArgMaxOptions:
+            arg_min_max_options = ArgMaxOptions()
+        op_options = op.BuiltinOptions()
+        arg_min_max_options.Init(op_options.Bytes, op_options.Pos)
+
+        # set keepdims to True since tflite 1.13 removes all dims of size 1
+        # WARNING: all other versions of tflite > 1.13 need keepdims=False
+        out = relay_op(in_expr, axis=axis_value, keepdims=False, exclude=False)
+
+        return out
+
+    def convert_arg_min(self, op):
+        """Convert TFLite ARG_MIN"""
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized ARG_MIN operator is not supported yet.')
+        return self._convert_arg_min_max(_op.argmin, op)
+
+    def convert_arg_max(self, op):
+        """Convert TFLite ARG_MAX"""
+        return self._convert_arg_min_max(_op.argmax, op)
+
 def convert_fully_connected(self, op):
 """Convert TFLite fully connected"""
 try:
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 52491b2..5118467 100644
--- a/tests/python/frontend/tflite
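
The converter accepts only a scalar constant for the axis input. An illustrative check mirroring the assertion in _convert_arg_min_max (the value below is hypothetical):

    import numpy as np

    axis_value = np.array([1])   # what get_tensor_value might return for the axis tensor
    assert axis_value.size == 1  # reject non-scalar axis inputs
    axis = axis_value.item()     # plain Python int handed to relay argmin/argmax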

[incubator-tvm] branch master updated (2dcfd61 -> 7902d0f)

2020-06-21 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2dcfd61  [Target] Introduce Target Id Registry (#5838)
 add 7902d0f  [QUANTIZE] Add nn.batch_flatten as quantizable. (#5805)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/quantize/_partition.py   | 25 +
 src/relay/quantize/realize.cc |  5 -
 tests/python/relay/test_pass_auto_quantize.py | 23 +++
 3 files changed, 44 insertions(+), 9 deletions(-)



[incubator-tvm] branch master updated (7f37eb4 -> 3e72be5)

2020-06-16 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7f37eb4  [RUNTIME][String] Overload string operators (#5806)
 add 3e72be5  [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator (#5495)

No new revisions were added by this update.

Summary of changes:
 docs/api/python/topi.rst |  2 +
 docs/langref/relay_op.rst|  1 +
 include/tvm/relay/attrs/transform.h  | 14 +
 python/tvm/relay/frontend/mxnet.py   | 16 +
 python/tvm/relay/frontend/tflite.py  | 34 --
 python/tvm/relay/op/_transform.py|  1 +
 python/tvm/relay/op/op_attrs.py  |  4 ++
 python/tvm/relay/op/transform.py | 47 ++
 src/relay/op/tensor/transform.cc | 93 +++-
 tests/python/frontend/mxnet/test_forward.py  | 34 ++
 tests/python/frontend/tflite/test_forward.py | 27 
 tests/python/relay/test_op_level3.py | 67 
 topi/include/topi/transform.h| 78 +++
 topi/python/topi/transform.py| 31 ++
 topi/src/transform.cc|  7 ++-
 topi/tests/python/test_topi_transform.py | 80 
 16 files changed, 504 insertions(+), 32 deletions(-)



[incubator-tvm] branch master updated (9948367 -> 52bf113)

2020-06-16 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9948367  Error msg update (#5818)
 add 52bf113  [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax (#4805)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 134 +++
 tests/python/frontend/tflite/test_forward.py | 109 ++
 2 files changed, 243 insertions(+)



[incubator-tvm] branch master updated (490510d -> c2e248f)

2020-06-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from 490510d  codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx (#5726)
 add c2e248f  [TOPI,RELAY][TFLITE] Sparse to dense operator (#5447)

No new revisions were added by this update.

Summary of changes:
 docs/api/python/topi.rst |  2 +
 docs/langref/relay_op.rst|  1 +
 include/tvm/relay/attrs/transform.h  |  9 
 python/tvm/relay/frontend/tflite.py  | 32 
 python/tvm/relay/op/_transform.py|  1 +
 python/tvm/relay/op/transform.py | 31 
 src/relay/op/tensor/transform.cc | 74 +++
 tests/python/frontend/tflite/test_forward.py | 76 +++-
 tests/python/relay/test_op_level3.py | 55 +++-
 topi/include/topi/transform.h| 48 ++
 topi/python/topi/transform.py| 29 +++
 topi/src/transform.cc|  4 ++
 topi/tests/python/test_topi_transform.py | 63 +++
 13 files changed, 423 insertions(+), 2 deletions(-)



[incubator-tvm] branch master updated (c2e248f -> 34c95a8)

2020-06-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c2e248f  [TOPI,RELAY][TFLITE] Sparse to dense operator (#5447)
 add 34c95a8  [Frontend][TFLite] Add parser support for shape and range (#5329)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  35 ++
 tests/python/frontend/tflite/test_forward.py | 168 +++
 2 files changed, 179 insertions(+), 24 deletions(-)



[incubator-tvm] branch master updated: [Doc] Misc doc fix (#5672)

2020-05-26 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 101dfb5  [Doc] Misc doc fix (#5672)
101dfb5 is described below

commit 101dfb5a5ad2578cd786fc9c9e93e181bfe0f868
Author: Zhao Wu 
AuthorDate: Tue May 26 23:15:44 2020 +0800

[Doc] Misc doc fix (#5672)
---
 docs/dev/convert_layout.rst  | 2 +-
 tutorials/frontend/deploy_prequantized_tflite.py | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/dev/convert_layout.rst b/docs/dev/convert_layout.rst
index ee5350c..07ebc20 100644
--- a/docs/dev/convert_layout.rst
+++ b/docs/dev/convert_layout.rst
@@ -246,7 +246,7 @@ In order to specify the layouts to convert to, we create a mapping of heavily-la
     # RemoveUnunsedFunctions is used to clean up the graph.
     seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
                                     relay.transform.ConvertLayout(desired_layouts)])
-    with relay.transform.PassContext(opt_level=3):
+    with tvm.transform.PassContext(opt_level=3):
         mod = seq(mod)
 
 # Call relay compilation
diff --git a/tutorials/frontend/deploy_prequantized_tflite.py b/tutorials/frontend/deploy_prequantized_tflite.py
index 3cdd423..5fd6837 100644
--- a/tutorials/frontend/deploy_prequantized_tflite.py
+++ b/tutorials/frontend/deploy_prequantized_tflite.py
@@ -18,6 +18,7 @@
 Deploy a Framework-prequantized Model with TVM - Part 3 (TFLite)
 
 **Author**: `Siju Samuel <https://github.com/siju-samuel>`_
+
 Welcome to part 3 of the Deploy Framework-Prequantized Model with TVM tutorial.
In this part, we will start with a Quantized TFLite graph and then compile and execute it via TVM.
 



[incubator-tvm] branch master updated (d0b15fe -> 301f515)

2020-05-13 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from d0b15fe  [RELAY][Convert Layout] Specify additional layouts in convert layout pass (#5422)
 add 301f515  Add a quantized conv2 unit test for the tflite front-end (#5558)

No new revisions were added by this update.

Summary of changes:
 tests/python/frontend/tflite/test_forward.py | 48 +---
 1 file changed, 37 insertions(+), 11 deletions(-)



[incubator-tvm] branch master updated (b1eb97a -> d0b15fe)

2020-05-13 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b1eb97a  Fix the runtime raise error (#5586)
 add d0b15fe  [RELAY][Convert Layout] Specify additional layouts in convert layout pass (#5422)

No new revisions were added by this update.

Summary of changes:
 docs/dev/convert_layout.rst   |  52 ++--
 include/tvm/relay/op_attr_types.h |   6 +-
 include/tvm/relay/transform.h |   6 +-
 python/tvm/relay/op/nn/_nn.py |  55 +---
 python/tvm/relay/qnn/op/layout_conversions.py |  28 ++--
 python/tvm/relay/transform/transform.py   |  11 +-
 src/relay/transforms/convert_layout.cc|  28 ++--
 tests/python/relay/test_pass_convert_op_layout.py | 152 --
 8 files changed, 267 insertions(+), 71 deletions(-)



[incubator-tvm] branch master updated (70a5902 -> 32a094c)

2020-05-05 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 70a5902  [RPC] Call sync in remote cpu to gpu copies (#5512)
 add 32a094c  [QNN] Support CallNode inputs in qnn.concatenate (#5360)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/qnn/op/qnn.py| 13 +++--
 src/relay/qnn/op/concatenate.cc   | 14 +++---
 tests/python/relay/test_op_qnn_concatenate.py | 25 +
 3 files changed, 43 insertions(+), 9 deletions(-)



[incubator-tvm] branch master updated: [TFLITE] Match TFLite shape for SSD custom op (#5473)

2020-04-29 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 95a816c  [TFLITE] Match TFLite shape for SSD custom op (#5473)
95a816c is described below

commit 95a816c9078c5cc7cb08d354a069a15f5d18951c
Author: mbaret <55580676+mba...@users.noreply.github.com>
AuthorDate: Wed Apr 29 17:13:16 2020 +0100

[TFLITE] Match TFLite shape for SSD custom op (#5473)

This patch ensures that the output shape from TVM's
Detection_PostProcess is the same as TFLite's and
expands the unit test to confirm this.

Change-Id: If5db95741533f131241dfebbaa7708dbd528fe70
---
 python/tvm/relay/frontend/tflite.py  | 13 +
 tests/python/frontend/tflite/test_forward.py |  7 +++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index b9a1657..66d0ff3 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -2257,6 +2257,7 @@ class OperatorConverter(object):
 assert len(inputs) == 3, "inputs length should be 3"
 cls_pred = self.get_expr(inputs[1].tensor_idx)
 loc_prob = self.get_expr(inputs[0].tensor_idx)
+batch_size = inputs[1].tensor.Shape(0)
 anchor_values = self.get_tensor_value(inputs[2])
 anchor_boxes = len(anchor_values)
 anchor_type = self.get_tensor_type_str(inputs[2].tensor.Type())
@@ -2284,7 +2285,7 @@ class OperatorConverter(object):
 loc_prob = _op.concatenate(
     [loc_coords[1], loc_coords[0], loc_coords[3], loc_coords[2]], axis=2
 )
-loc_prob = _op.reshape(loc_prob, [1, anchor_boxes*4])
+loc_prob = _op.reshape(loc_prob, [batch_size, anchor_boxes*4])
 
 # anchor coords are in yxhw format
 # need to convert to ltrb
@@ -2327,10 +2328,14 @@ class OperatorConverter(object):
 ret = _op.vision.non_max_suppression(ret[0], ret[1], 
**non_max_suppression_attrs)
 ret = _op.vision.get_valid_counts(ret, 0)
 valid_count = ret[0]
+# keep only the top 'max_detections' rows
+ret = _op.strided_slice(ret[1],
+                        [0, 0, 0],
+                        [batch_size, custom_options["max_detections"], anchor_boxes])
 # the output needs some reshaping to match tflite
-ret = _op.split(ret[1], 6, axis=2)
-cls_ids = ret[0]
-scores = ret[1]
+ret = _op.split(ret, 6, axis=2)
+cls_ids = _op.reshape(ret[0], [batch_size, -1])
+scores = _op.reshape(ret[1], [batch_size, -1])
 boxes = _op.concatenate([ret[3], ret[2], ret[5], ret[4]], axis=2)
 ret = _expr.TupleWrapper(_expr.Tuple([boxes, cls_ids, scores, valid_count]), size=4)
 return ret
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 7ff4c31..bc3f32a 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -1731,7 +1731,14 @@ def test_detection_postprocess():
["raw_outputs/box_encodings", 
"raw_outputs/class_predictions"], num_output=4)
 # check valid count is the same
 assert tvm_output[3] == tflite_output[3]
+# check all the output shapes are the same
+assert tvm_output[0].shape == tflite_output[0].shape
+assert tvm_output[1].shape == tflite_output[1].shape
+assert tvm_output[2].shape == tflite_output[2].shape
 valid_count = tvm_output[3][0]
+# only check the valid detections are the same
+# tvm has a different convention to tflite for invalid detections, it uses all -1s whereas
+# tflite appears to put in nonsense data instead
 tvm_boxes = tvm_output[0][0][:valid_count]
 tvm_classes = tvm_output[1][0][:valid_count]
 tvm_scores = tvm_output[2][0][:valid_count]
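
In short, the patch pins the converter's outputs to TFLite's shape contract. For batch size b and the max_detections value taken from the custom options (illustrative, assuming max_detections=10):

    boxes   : (b, 10, 4)   # enforced by the strided_slice/reshape above
    classes : (b, 10)
    scores  : (b, 10)
    count   : (b,)         # number of valid detections per image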



[incubator-tvm] branch master updated (dbd0114 -> a3b1397)

2020-04-23 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


 from dbd0114  fix [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU (#5388) (#5418)
 add a3b1397  [Frontend] Asymmetric padding of convolution support (#4803)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/coreml.py | 11 ++-
 python/tvm/relay/frontend/keras.py  | 20 ++--
 python/tvm/relay/frontend/tflite.py |  8 +---
 3 files changed, 5 insertions(+), 34 deletions(-)



[incubator-tvm] branch master updated (09eb508 -> 3d18adf)

2020-04-15 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 09eb508  [BYOC] Prevent duplicate outputs in subgraph Tuple (#5320)
 add 3d18adf  [Tutorial, QNN] Add tutorial for loading quantized PyTorch model (#5321)

No new revisions were added by this update.

Summary of changes:
 docs/dev/relay_pass_infra.rst |   2 +-
 tutorials/frontend/deploy_prequantized.py | 237 ++
 tutorials/frontend/from_pytorch.py|   4 +-
 3 files changed, 240 insertions(+), 3 deletions(-)
 create mode 100644 tutorials/frontend/deploy_prequantized.py



[incubator-tvm] branch master updated: Adding support for TFLite QnnSub operator. (#5230)

2020-04-09 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 6ecfaaf  Adding support for TFLite QnnSub operator. (#5230)
6ecfaaf is described below

commit 6ecfaaff1daa40fddaab6d7b17d0563e2b318930
Author: shoubhik 
AuthorDate: Thu Apr 9 22:32:28 2020 -0700

Adding support for TFLite QnnSub operator. (#5230)
---
 python/tvm/relay/frontend/tflite.py  | 7 ---
 tests/python/frontend/tflite/test_forward.py | 7 +--
 2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index caf8f92..d489bd3 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -926,8 +926,7 @@ class OperatorConverter(object):
         """Convert TFLite SUB"""
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
-            raise tvm.error.OpNotImplemented(
-                'TFlite quantized SUB operator is not supported yet.')
+            return self._convert_elemwise(_qnn.op.subtract, op)
         return self._convert_elemwise(_op.subtract, op)
 
     def convert_mul(self, op):
@@ -1355,7 +1354,9 @@
         if is_depthwise_conv:
             params['channels'] = int(in_channels)
             params['groups'] = int(input_c)
-            params['kernel_layout'] = 'HWOI'
+            # If number of input channels is 1, treat as normal
+            # convolution.
+            params['kernel_layout'] = 'HWIO' if input_c == 1 else 'HWOI'
         else:
             params['channels'] = int(output_channels)
             params['kernel_layout'] = 'HWIO'
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 831b021..db4deb1 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -541,6 +541,8 @@ def test_forward_convolution():
     _test_convolution([4, 17, 17, 124], [1, 1, 124, 1], [1, 1], [1, 1], 'SAME', 'NHWC', True)
     _test_convolution([4, 17, 17, 12], [3, 3, 12, 1], [1, 1], [2, 2], 'VALID', 'NHWC', True)
     _test_convolution([4, 17, 17, 12], [3, 3, 12, 2], [1, 1], [2, 2], 'VALID', 'NHWC', True)
+    # depthwise convolution with single input channel
+    _test_convolution([1, 76, 64, 1], [9, 5, 1, 96], [1, 1], [1, 1], 'SAME', 'NHWC', True)
 
 
 ###
@@ -902,9 +904,9 @@ def _test_add(data, fused_activation_function=None, quantized=False, qnn_op=None
 # Subtract
 # 
 
-def _test_sub(data, fused_activation_function=None):
+def _test_sub(data, fused_activation_function=None, quantized=False, qnn_op=None):
     """ One iteration of subtract """
-    return _test_elemwise(math_ops.subtract, data, fused_activation_function)
+    return _test_elemwise(math_ops.subtract, data, fused_activation_function, quantized, qnn_op)
 ###
 # Mul
 # ---
@@ -1036,6 +1038,7 @@ def test_all_elemwise():
     _test_forward_elemwise(partial(_test_add, fused_activation_function="RELU"))
     _test_forward_elemwise(partial(_test_add, fused_activation_function="RELU6"))
     _test_forward_elemwise(_test_sub)
+    _test_forward_elemwise_quantized(_test_sub)
     _test_forward_elemwise(partial(_test_sub, fused_activation_function="RELU"))
     _test_forward_elemwise(partial(_test_sub, fused_activation_function="RELU6"))
     _test_forward_elemwise(_test_mul)
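
The kernel-layout tweak rests on a shape fact the new test exercises (sketch; shapes taken from the added test case): a TFLite depthwise kernel is stored as (kH, kW, in_channels, multiplier), so with a single input channel the trailing axis is simply the output-channel count, i.e. the HWIO layout of an ordinary convolution.

    # Kernel shape from the added test: (kH, kW, in_c, multiplier).
    kernel_shape = [9, 5, 1, 96]
    input_c = kernel_shape[2]
    kernel_layout = 'HWIO' if input_c == 1 else 'HWOI'
    assert kernel_layout == 'HWIO'  # one input channel => treat as normal conv2d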



[incubator-tvm] branch master updated: [Relay][OP] Add fast_erf implementation (#5241)

2020-04-07 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new f5b02fd  [Relay][OP] Add fast_erf implementation (#5241)
f5b02fd is described below

commit f5b02fdb1b5a7b6be79df97035ec1c3b80e3c665
Author: Haichen Shen 
AuthorDate: Tue Apr 7 12:05:33 2020 -0700

[Relay][OP] Add fast_erf implementation (#5241)

* add fast erf

* doc

* lint

* fix

* fix indent
---
 include/tvm/target/generic_func.h   |  2 +-
 python/tvm/relay/op/_tensor.py  |  2 +
 src/relay/op/tensor/unary.cc| 11 +
 src/relay/transforms/fast_math.cc   |  4 ++
 src/relay/transforms/pattern_util.h |  5 +++
 tests/python/relay/test_op_fast_math.py |  3 ++
 topi/include/topi/elemwise.h| 73 -
 topi/python/topi/math.py| 16 
 topi/src/elemwise.cc|  5 +++
 topi/tests/python/test_topi_math.py |  9 ++--
 10 files changed, 124 insertions(+), 6 deletions(-)

diff --git a/include/tvm/target/generic_func.h b/include/tvm/target/generic_func.h
index 89a7f57..f2a361b3 100644
--- a/include/tvm/target/generic_func.h
+++ b/include/tvm/target/generic_func.h
@@ -72,7 +72,7 @@ class GenericFunc : public ObjectRef {
*
* \code
*   // Example code on how to call generic function
-   *   void CallGeneirc(GenericFunc f) {
+   *   void CallGeneric(GenericFunc f) {
* // call like normal functions by pass in arguments
* // return value is automatically converted back
* int rvalue = f(1, 2.0);
diff --git a/python/tvm/relay/op/_tensor.py b/python/tvm/relay/op/_tensor.py
index f24da05..a607a47 100644
--- a/python/tvm/relay/op/_tensor.py
+++ b/python/tvm/relay/op/_tensor.py
@@ -76,6 +76,7 @@ register_injective_schedule("shape_of")
 register_injective_schedule("ndarray_size")
 register_broadcast_schedule("fast_exp")
 register_broadcast_schedule("fast_tanh")
+register_broadcast_schedule("fast_erf")
 
 
 # zeros
@@ -222,3 +223,4 @@ register_shape_func("exp", False, elemwise_shape_func)
 register_shape_func("tan", False, elemwise_shape_func)
 register_shape_func("fast_exp", False, elemwise_shape_func)
 register_shape_func("fast_tanh", False, elemwise_shape_func)
+register_shape_func("fast_erf", False, elemwise_shape_func)
diff --git a/src/relay/op/tensor/unary.cc b/src/relay/op/tensor/unary.cc
index 3da77e9..4cca8b0 100644
--- a/src/relay/op/tensor/unary.cc
+++ b/src/relay/op/tensor/unary.cc
 RELAY_REGISTER_UNARY_OP("erf")
 .set_attr<FTVMCompute>("FTVMCompute", RELAY_UNARY_COMPUTE(topi::erf));
 
 
+RELAY_REGISTER_UNARY_OP("fast_erf")
+.describe(R"code(Returns the error function value for input array, computed element-wise.
+
+.. math::
+   \fast_erf(x)
+
+)code" TVM_ADD_FILELINE)
+.set_support_level(1)
+.set_attr<FTVMCompute>("FTVMCompute", RELAY_UNARY_COMPUTE(topi::fast_erf));
+
+
 RELAY_REGISTER_UNARY_OP("sqrt")
 .describe(R"code(Returns the sqrt input array, computed element-wise.
 
diff --git a/src/relay/transforms/fast_math.cc 
b/src/relay/transforms/fast_math.cc
index 861566f..cf00a89 100644
--- a/src/relay/transforms/fast_math.cc
+++ b/src/relay/transforms/fast_math.cc
@@ -35,11 +35,14 @@ class FastMathMutator : public ExprRewriter {
  public:
   FastMathMutator()
   : exp_op_(Op::Get("exp")),
+erf_op_(Op::Get("erf")),
 tanh_op_(Op::Get("tanh")) {}
 
   Expr Rewrite_(const CallNode* pre, const Expr& post) override {
 if (pre->op == exp_op_) {
   return FastExp(post.as<CallNode>()->args[0]);
+} else if (pre->op == erf_op_) {
+  return FastErf(post.as<CallNode>()->args[0]);
 } else if (pre->op == tanh_op_) {
   return FastTanh(post.as<CallNode>()->args[0]);
 }
@@ -51,6 +54,7 @@ class FastMathMutator : public ExprRewriter {
   // operator equivalence checking so that the registry lookup overhead can be
   // reduced.
   const Op& exp_op_;
+  const Op& erf_op_;
   const Op& tanh_op_;
 };
 
diff --git a/src/relay/transforms/pattern_util.h b/src/relay/transforms/pattern_util.h
index 350d9e1..cd2af9f 100644
--- a/src/relay/transforms/pattern_util.h
+++ b/src/relay/transforms/pattern_util.h
@@ -322,6 +322,11 @@ inline Expr FastExp(Expr e) {
   return Call(op, {e});
 }
 
+inline Expr FastErf(Expr e) {
+  static const Op& op = Op::Get("fast_erf");
+  return Call(op, {e});
+}
+
 inline Expr FastTanh(Expr e) {
   static const Op& op = Op::Get("fast_tanh");
   return Call(op, {e});
diff --git a/tests/python/relay/test_op_fast_math.py b/tests/python/relay/test_op_fast_math.py
index 1d661c3..215b83e 100644
--- a/
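
To see the new op in action, a minimal sketch, assuming the FastMath pass registered in fast_math.cc above is exposed as relay.transform.FastMath():

import tvm
from tvm import relay

# A one-op program: y = erf(x).
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.erf(x))

# FastMath rewrites erf -> fast_erf (likewise exp/tanh -> fast_exp/fast_tanh),
# trading exact libm accuracy for a cheap polynomial approximation.
mod = relay.transform.FastMath()(mod)
print(mod)  # the body should now call fast_erf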

[incubator-tvm] branch master updated (f4286cc -> dada676)

2020-03-27 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from f4286cc  [TOPI][Tensor Core] Conv2d and Dense ops support on Tensor Core (#5099)
 add dada676  Adding support for QNN subtract op (#5153)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/qnn/op/qnn.py |  51 -
 src/relay/qnn/op/add.cc|  85 ---
 src/relay/qnn/op/mul.cc|  47 -
 src/relay/qnn/op/op_common.h   | 163 ++---
 src/relay/qnn/op/subtract.cc   | 103 ++
 src/relay/qnn/util.h   |   2 +-
 tests/python/relay/test_op_qnn_add.py  |  54 +-
 tests/python/relay/test_op_qnn_subtract.py | 136 
 8 files changed, 507 insertions(+), 134 deletions(-)
 create mode 100644 src/relay/qnn/op/subtract.cc
 create mode 100644 tests/python/relay/test_op_qnn_subtract.py
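
A minimal usage sketch, assuming the Python binding mirrors qnn.add, with quantization parameters passed as Relay constants:

from tvm import relay

# Each uint8 operand carries its own (scale, zero_point); the op
# dequantizes both sides, subtracts, and requantizes to the output params.
lhs = relay.var("lhs", shape=(1, 4), dtype="uint8")
rhs = relay.var("rhs", shape=(1, 4), dtype="uint8")
out = relay.qnn.op.subtract(
    lhs, rhs,
    lhs_scale=relay.const(0.125, "float32"),
    lhs_zero_point=relay.const(0, "int32"),
    rhs_scale=relay.const(0.125, "float32"),
    rhs_zero_point=relay.const(0, "int32"),
    output_scale=relay.const(0.125, "float32"),
    output_zero_point=relay.const(0, "int32"),
)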



[incubator-tvm] branch master updated: [Torch] Add initial 3D op support and test on Resnet 3D (#5075)

2020-03-19 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 8607947  [Torch] Add initial 3D op support and test on Resnet 3D (#5075)
8607947 is described below

commit 86079479f0556002adfce2f438ea2a607e318c23
Author: masahi 
AuthorDate: Fri Mar 20 05:39:04 2020 +0900

[Torch] Add initial 3D op support and test on Resnet 3D (#5075)

* fix minor lint issue

* add conv3d and adaptive avg pool3d conversion with test

* fix max pool handling

* add batch norm 3d test

* add resnet 3d test

* add more conv3d test

* clean up batch norm test

* add note on disabling inception v3 test

* add more tests

* add more tests

* fix names
---
 python/tvm/relay/frontend/pytorch.py  | 93 +--
 tests/python/frontend/pytorch/qnn_test.py |  5 +-
 tests/python/frontend/pytorch/test_forward.py | 71 ++--
 3 files changed, 116 insertions(+), 53 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index 0c7465b..83436f2 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -163,7 +163,7 @@ def _relu():
 return _op.nn.relu(data)
 return _impl
 
-def _adaptive_avg_2d():
+def _adaptive_avg_pool_2d():
 def _impl(inputs, input_types):
 data = inputs[0]
 output_size = _infer_shape(inputs[1])
@@ -178,14 +178,32 @@ def _adaptive_avg_2d():
 
 return _impl
 
-def _adaptive_max_2d():
+def _adaptive_max_pool_2d():
 def _impl(inputs, input_types):
 data = inputs[0]
 output_size = _infer_shape(inputs[1])
 
+# returns dummy indices too
 return _op.nn.adaptive_max_pool2d(
 data,
-output_size=output_size)
+output_size=output_size), None
+return _impl
+
+def _adaptive_max_pool_3d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+# returns dummy indices too
+return _op.nn.adaptive_max_pool3d(data, output_size=output_size), None
+
+return _impl
+
+def _adaptive_avg_pool_3d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+return _op.nn.adaptive_avg_pool3d(data, output_size=output_size)
+
 return _impl
 
 def _maxpool_2d():
@@ -249,33 +267,30 @@ def _convolution():
 if isinstance(dilation, _expr.Expr):
 dilation = _infer_shape(dilation)
 
-if use_transpose:
-conv_out = _op.nn.conv2d_transpose(data,
-   weight,
-   strides=strides,
-   padding=padding,
-   dilation=dilation,
-   groups=groups,
-   channels=channels,
-   kernel_size=kernel_size,
-   data_layout="NCHW",
-   kernel_layout="OIHW",
-   out_layout="",
-   out_dtype="")
-else:
-conv_out = _op.nn.conv2d(data,
- weight,
- strides=strides,
- padding=padding,
- dilation=dilation,
- groups=groups,
- channels=channels,
- kernel_size=kernel_size,
- data_layout="NCHW",
- kernel_layout="OIHW",
- out_layout="",
- out_dtype="")
+data_layout = "NCHW"
+kernel_layout = "OIHW"
+conv_op = _op.nn.conv2d
 
+if use_transpose:
+assert len(kernel_size) == 2, "ConvTranspose 3D not supported"
+conv_op = _op.nn.conv2d_transpose
+if len(kernel_size) == 3:
+conv_op = _op.nn.conv3d
+data_layout = "NCDHW"
+kernel_layout = "OIDHW"
+
+conv_out = conv_op(data,
+   weight,
+   strides=strides,
+   padding=padding,
+   dilation=dilation,
+  
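
For reference, a sketch of driving the new 3D conversions end to end; it assumes the from_pytorch signature that takes a list of (input name, shape) pairs, and uses torchvision's r3d_18 as a stand-in for the Resnet 3D the tests exercise:

import torch
import torchvision
from tvm import relay

model = torchvision.models.video.r3d_18(pretrained=True).eval()
ishape = (1, 3, 16, 112, 112)  # N, C, depth, H, W
scripted = torch.jit.trace(model, torch.randn(ishape)).eval()

# conv3d, adaptive avg/max pool3d, and batch_norm inside the model now
# map onto the Relay ops wired up in this commit.
mod, params = relay.frontend.from_pytorch(scripted, [("input0", ishape)])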

[incubator-tvm] branch master updated: [ConvertLayout] Support QNN ops. (#5066)

2020-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 38118be  [ConvertLayout] Support QNN ops. (#5066)
38118be is described below

commit 38118befc0a7e8a3db87d652b30a9369abb60363
Author: Animesh Jain 
AuthorDate: Wed Mar 18 20:03:56 2020 -0700

[ConvertLayout] Support QNN ops. (#5066)

* [ConvertLayout] Support QNN ops.

* Changing layouts to C.

* Fixing dilation.

* Empty commit.

Co-authored-by: Ubuntu 
---
 python/tvm/relay/op/nn/_nn.py |  10 +-
 python/tvm/relay/qnn/op/__init__.py   |   2 +-
 python/tvm/relay/qnn/op/layout_conversions.py |  53 ++
 src/relay/op/nn/bitserial.cc  |   2 +-
 src/relay/op/nn/convolution.cc|  19 +-
 src/relay/op/nn/convolution.h |  15 ++
 src/relay/op/nn/nn.cc |  12 +-
 src/relay/op/nn/pad.cc|   2 +-
 src/relay/op/nn/pooling.cc|   2 +-
 src/relay/op/nn/upsampling.cc |   2 +-
 src/relay/op/tensor/reduce.cc |   7 +-
 src/relay/op/tensor/transform.cc  |  57 +-
 src/relay/op/tensor/transform.h   |  58 ++
 src/relay/qnn/op/add.cc   |  21 ++-
 src/relay/qnn/op/concatenate.cc   |  41 +++-
 src/relay/qnn/op/convolution.cc   |  20 +-
 src/relay/qnn/op/requantize.cc|  77 +++-
 src/relay/transforms/infer_layout_util.h  |  17 +-
 src/relay/transforms/transform_layout.h   |  16 +-
 tests/python/relay/test_pass_convert_op_layout.py | 217 ++
 20 files changed, 544 insertions(+), 106 deletions(-)

diff --git a/python/tvm/relay/op/nn/_nn.py b/python/tvm/relay/op/nn/_nn.py
index c2fe6d0..a9bd900 100644
--- a/python/tvm/relay/op/nn/_nn.py
+++ b/python/tvm/relay/op/nn/_nn.py
@@ -138,8 +138,6 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 """
 # pylint: disable=import-outside-toplevel
 from tvm import relay
-data_layout = attrs['data_layout']
-kernel_layout = attrs['kernel_layout']
 data, weight = inputs
 assert desired_layout == 'NCHW', \
 "Currently only transformation to NCHW layout is supported."
@@ -147,13 +145,7 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 new_attrs = dict(attrs)
 new_attrs['data_layout'] = desired_layout
 new_attrs['kernel_layout'] = 'OIHW'
-
-if data_layout == 'NHWC' and kernel_layout == 'HWIO':
-# Convert (NHWC, HWIO) to (NCHW, OIHW)
-return relay.nn.conv2d(data, weight, **new_attrs)
-if data_layout == 'NHWC' and kernel_layout == 'HWOI':
-# Convert (NHWC, HWOI) to (NCHW, OIHW). Depthwise conv2d.
-return relay.nn.conv2d(data, weight, **new_attrs)
+return relay.nn.conv2d(data, weight, **new_attrs)
 return None
 
 
diff --git a/python/tvm/relay/qnn/op/__init__.py b/python/tvm/relay/qnn/op/__init__.py
index 042dcb9..6d66e12 100644
--- a/python/tvm/relay/qnn/op/__init__.py
+++ b/python/tvm/relay/qnn/op/__init__.py
@@ -19,4 +19,4 @@
 from __future__ import absolute_import as _abs
 from .qnn import *
 from .op import register_qnn_legalize
-from . import legalizations
+from . import legalizations, layout_conversions
diff --git a/python/tvm/relay/qnn/op/layout_conversions.py b/python/tvm/relay/qnn/op/layout_conversions.py
new file mode 100644
index 000..f5850b8
--- /dev/null
+++ b/python/tvm/relay/qnn/op/layout_conversions.py
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument
+"""Convert layout related registration"""
+from __future__ import absolute_import
+
+from tvm.relay.op import op as reg
+
+
+@reg.registe
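
Usage matches the existing fp32 path; a minimal sketch (at this commit ConvertLayout takes a single target layout string, while newer TVM takes a per-op dict such as {"nn.conv2d": ["NCHW", "default"]}):

from tvm import relay

def to_nchw(mod):
    # mod: e.g. an IRModule imported from a pre-quantized NHWC TFLite
    # model; afterwards qnn.conv2d and friends run in NCHW/OIHW.
    return relay.transform.ConvertLayout("NCHW")(mod)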

[incubator-tvm] branch master updated: [Torch, QNN] Add missing upcast to uint8 avg_pool conversion (#5089)

2020-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b64a843  [Torch, QNN] Add missing upcast to uint8 avg_pool conversion (#5089)
b64a843 is described below

commit b64a843acd15ca34d2baf9fce730e81f91b3a580
Author: masahi 
AuthorDate: Thu Mar 19 02:31:06 2020 +0900

[Torch, QNN] Add missing upcast to uint8 avg_pool conversion  (#5089)

* add missing upcast to avgpool

* add avg pool test
---
 python/tvm/relay/frontend/pytorch.py  | 22 +++---
 python/tvm/relay/frontend/qnn_torch.py|  5 ++---
 tests/python/frontend/pytorch/qnn_test.py | 15 +--
 3 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index 6da91c1..0c7465b 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -172,7 +172,7 @@ def _adaptive_avg_2d():
 return _op.nn.adaptive_avg_pool2d(x, output_size=output_size)
 
 if input_types[0] == "quint8":
-return qnn_torch.quantized_adaptive_avg_2d(data, func)
+return qnn_torch.apply_with_upcast(data, func)
 
 return func(data)
 
@@ -484,14 +484,22 @@ def _avg_pool2d():
 ceil_mode = int(inputs[4])
 count_include_pad = int(inputs[5])
 
-return _op.nn.avg_pool2d(data,
- pool_size=pool_size,
- strides=strides,
- padding=padding,
- ceil_mode=ceil_mode,
- count_include_pad=count_include_pad)
+def func(x):
+return _op.nn.avg_pool2d(x,
+ pool_size=pool_size,
+ strides=strides,
+ padding=padding,
+ ceil_mode=ceil_mode,
+ count_include_pad=count_include_pad)
+
+if input_types[0] == "quint8":
+return qnn_torch.apply_with_upcast(data, func)
+
+return func(data)
+
 return _impl
 
+
 def _dropout():
 def _impl(inputs, input_types):
 data = inputs[0]
diff --git a/python/tvm/relay/frontend/qnn_torch.py b/python/tvm/relay/frontend/qnn_torch.py
index 70178be..e6a015f 100644
--- a/python/tvm/relay/frontend/qnn_torch.py
+++ b/python/tvm/relay/frontend/qnn_torch.py
@@ -359,10 +359,9 @@ def add_quant_params(params, quant_params):
 params[qparam.bias_var.name_hint] = tvm.nd.array(qparam.bias)
 
 
-def quantized_adaptive_avg_2d(data, func_fp32):
-# this follows tflite impl
+def apply_with_upcast(data, func):
 inp = _op.cast(data, dtype="int32")
-out = func_fp32(inp)
+out = func(inp)
 return _op.cast(out, "uint8")
 
 
diff --git a/tests/python/frontend/pytorch/qnn_test.py b/tests/python/frontend/pytorch/qnn_test.py
index 23fcb7c..ebc00bf 100644
--- a/tests/python/frontend/pytorch/qnn_test.py
+++ b/tests/python/frontend/pytorch/qnn_test.py
@@ -218,7 +218,6 @@ class MulScalarNegative(nn.Module):
 class UpsamplingBilinear(nn.Module):
 def __init__(self):
 super().__init__()
-self.relu = QuantWrapper(nn.ReLU())
 self.quant = QuantStub()
 self.dequant = DeQuantStub()
 
@@ -233,12 +232,25 @@ class UpsamplingBilinear(nn.Module):
 pass
 
 
+class AvgPool2d(nn.Module):
+def __init__(self):
+super().__init__()
+self.pool = QuantWrapper(nn.AvgPool2d(kernel_size=2))
+
+def forward(self, x):
+return self.pool(x)
+
+def fuse_model(self):
+pass
+
+
 def test_quantized_modules():
 imagenet_ishape = (1, 3, 224, 224)
 
 qmodules = [
("relu", imagenet_ishape, ReLU(), False),
("upsample bilinear", (1, 3, 64, 64), UpsamplingBilinear(), False),
+   ("avgpool", imagenet_ishape, AvgPool2d(), False),
 ]
 
 for per_channel in [False, True]:
@@ -276,7 +288,6 @@ def test_quantized_modules():
 pt_result = script_module(inp.clone()).numpy()
 
 input_name = get_graph_input_names(script_module)[0]
-
 runtime = get_tvm_runtime(script_module, input_name, ishape)
 runtime.set_input(input_name, inp.numpy().copy())
 runtime.run()
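
The fix itself is a small wrapper pattern; a standalone sketch of what apply_with_upcast does around the fp32 pooling call:

from tvm import relay

def avg_pool2d_uint8(data):
    # Accumulating uint8 values in-type can overflow, so widen to int32,
    # pool, then narrow back to uint8, the same trick TFLite uses.
    x = relay.cast(data, "int32")
    x = relay.nn.avg_pool2d(x, pool_size=(2, 2), strides=(2, 2))
    return relay.cast(x, "uint8")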



[incubator-tvm] branch master updated (6ee9c2f -> f346c60)

2020-03-09 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 6ee9c2f  typo (#5008)
 add f346c60  Revert "[Torch, QNN] Add support for quantized models via QNN (#4977)" (#5013)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  |  88 +---
 python/tvm/relay/frontend/qnn_torch.py| 692 --
 tests/python/frontend/pytorch/qnn_test.py | 455 -
 tests/python/frontend/pytorch/test_forward.py |   6 -
 4 files changed, 9 insertions(+), 1232 deletions(-)
 delete mode 100644 python/tvm/relay/frontend/qnn_torch.py
 delete mode 100644 tests/python/frontend/pytorch/qnn_test.py



[incubator-tvm] branch master updated: [Torch, QNN] Add support for quantized models via QNN (#4977)

2020-03-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new fc7f078  [Torch, QNN] Add support for quantized models via QNN (#4977)
fc7f078 is described below

commit fc7f0783940c362bf48cd46817956381196201e2
Author: Animesh Jain 
AuthorDate: Wed Mar 4 11:24:56 2020 -0800

[Torch, QNN] Add support for quantized models via QNN (#4977)

* qnn support initial import

* fix upsampling num input

* imagenet tests added

* add qunatized module tests

* quantized module tests working

* imagenet test working

* fix lint

* remove top level torch import to fix ci error

* disable lint warning on outside toplevel import

* revert parse -> convert change

* add comments to qnn translation

* address comments, add sample outputs

* add more comments

* refactor bias add and requantize step
---
 python/tvm/relay/frontend/pytorch.py  |  88 +++-
 python/tvm/relay/frontend/qnn_torch.py| 692 ++
 tests/python/frontend/pytorch/qnn_test.py | 455 +
 tests/python/frontend/pytorch/test_forward.py |   6 +
 4 files changed, 1232 insertions(+), 9 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py b/python/tvm/relay/frontend/pytorch.py
index 19bccca..1bdcf0a 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -19,6 +19,7 @@
 # pylint: disable=import-outside-toplevel, simplifiable-if-expression, unnecessary-comprehension
 """PT: PyTorch frontend."""
 import itertools
+import logging
 
 import numpy as np
 
@@ -32,6 +33,8 @@ from .common import get_relay_op
 from .common import infer_shape as _infer_shape
 from .common import infer_value as _infer_value
 
+from . import qnn_torch
+
 __all__ = ["from_pytorch"]
 
 # operator implementation
@@ -146,6 +149,10 @@ def _zeros():
 def _relu():
 def _impl(inputs, input_types):
 data = inputs[0]
+if input_types[0] == "quint8":
+assert len(inputs) == 3, "Input quant param not found in op inputs"
+input_zero_point = _expr.const(inputs[2], dtype="int32")
+return qnn_torch.quantized_relu(data, input_zero_point)
 return _op.nn.relu(data)
 return _impl
 
@@ -154,9 +161,14 @@ def _adaptive_avg_2d():
 data = inputs[0]
 output_size = _infer_shape(inputs[1])
 
-return _op.nn.adaptive_avg_pool2d(
-data,
-output_size=output_size)
+def func(x):
+return _op.nn.adaptive_avg_pool2d(x, output_size=output_size)
+
+if input_types[0] == "quint8":
+return qnn_torch.quantized_adaptive_avg_2d(data, func)
+
+return func(data)
+
 return _impl
 
 def _adaptive_max_2d():
@@ -503,7 +515,18 @@ def _mean():
 else:
 exclude = False
 
-return _op.mean(data, axis, keepdims, exclude)
+def func(x):
+return _op.mean(x, axis, keepdims, exclude)
+
+if input_types[0] == "quint8":
+assert len(inputs) == 6, "Input quant param not found in op inputs"
+input_scale = _expr.const(inputs[4])
+input_zero_point = _expr.const(inputs[5])
+return qnn_torch.quantized_mean(data, input_scale,
+input_zero_point, func)
+
+return func(data)
+
 return _impl
 
 def _chunk():
@@ -665,10 +688,40 @@ def _upsample(method):
 else:
 coord_trans = "half_pixel"
 
-return _op.image.resize(data, out_size, "NCHW", method, coord_trans)
+def func(x):
+return _op.image.resize(x, out_size, "NCHW", method, coord_trans)
+
+if input_types[0] == "quint8":
+import torch
+from packaging import version
+
+# Torch version > 1.4 changed upsampling API
+if version.parse(torch.__version__) > version.parse("1.4.0"):
+num_inputs = 7
+else:
+num_inputs = 5
+
+assert len(inputs) == num_inputs, "Input quant param not found in op inputs"
+
+input_scale = _expr.const(inputs[-2])
+input_zero_point = _expr.const(inputs[-1])
+return qnn_torch.quantized_upsample(data, input_scale,
+input_zero_point, func)
+return func(data)
 
 return _impl
 
+
+def _expand_as():
+def _impl(inputs, input_types):
+# TODO: maybe fix this
+# This assumes expand_as can be removed because TVM has broadcast op
+ms
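
End to end, the flow this enables looks roughly like the sketch below, using PyTorch's eager-mode post-training quantization; the model choice and input shape are illustrative:

import torch
from torchvision.models.quantization import resnet18
from tvm import relay

inp = torch.randn(1, 3, 224, 224)
model = resnet18(pretrained=True).eval()
model.fuse_model()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(inp)  # calibration pass over sample data
torch.quantization.convert(model, inplace=True)

script_module = torch.jit.trace(model, inp).eval()
# Quantized ops in the TorchScript graph are translated to QNN ops.
mod, params = relay.frontend.from_pytorch(script_module, [("input", (1, 3, 224, 224))])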

[incubator-tvm] branch master updated (892dc91 -> 0fb4836)

2020-03-01 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 892dc91  [Doc]refine the example description of max/min/sum/tag_scope (#4974)
 add 0fb4836  [Relay][Pass] Add inline pass (#4927)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/expr.h   |   9 +
 include/tvm/relay/transform.h  |   8 +
 python/tvm/relay/transform.py  |  13 +
 src/relay/ir/expr.cc   |   6 +
 src/relay/pass/call_graph.cc   |   9 +-
 src/relay/pass/call_graph.h|  12 +-
 src/relay/pass/inline.cc   | 229 +
 tests/python/relay/test_pass_inline.py | 837 +
 8 files changed, 1118 insertions(+), 5 deletions(-)
 create mode 100644 src/relay/pass/inline.cc
 create mode 100644 tests/python/relay/test_pass_inline.py
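
A minimal sketch of the new pass, assuming the integer "Inline" function attribute used as the marker in the tests:

import tvm
from tvm import relay

# A helper function explicitly marked as inlinable.
x = relay.var("x", shape=(2, 2), dtype="float32")
double = relay.Function([x], relay.add(x, x))
double = double.with_attr("Inline", tvm.tir.IntImm("int32", 1))

mod = tvm.IRModule()
mod["double"] = double
y = relay.var("y", shape=(2, 2), dtype="float32")
mod["main"] = relay.Function([y], mod.get_global_var("double")(y))

# After the pass, main's call to @double is replaced by its body.
mod = relay.transform.Inline()(mod)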



[incubator-tvm] branch master updated: [Frontend][TFLite] Add parser support for l2_normalization (#4966)

2020-02-29 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 2355caa  [Frontend][TFLite] Add parser support for l2_normalization (#4966)
2355caa is described below

commit 2355caa8afdc8e6a3638c9514f57686737cbd724
Author: Ina Dobreva <55383260+ina...@users.noreply.github.com>
AuthorDate: Sat Feb 29 23:30:16 2020 +0200

[Frontend][TFLite] Add parser support for l2_normalization (#4966)

* [Frontend][TFLite] Add parser support for l2_normalization

* TF doesn't provide uint8 support
* TFL does the normalization only if it's over the last axis
* TFL uses only the default value for epsilon

* Change error message
---
 python/tvm/relay/frontend/tflite.py  | 47 
 tests/python/frontend/tflite/test_forward.py | 20 
 2 files changed, 67 insertions(+)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index 3a17083..5d26d98 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -122,6 +122,7 @@ class OperatorConverter(object):
 'LOGICAL_OR': self.convert_logical_or,
 'DETECTION_POSTPROCESS': self.convert_detection_postprocess,
 'SQUARE': self.convert_square,
+'L2_NORMALIZATION': self.convert_l2_normalization,
 }
 
 def check_unsupported_ops(self):
@@ -405,6 +406,52 @@ class OperatorConverter(object):
 """Convert TFLite RESIZE_NEAREST_NEIGHBOR"""
 return self._convert_resize("nearest_neighbor", op)
 
+def convert_l2_normalization(self, op):
+"""Convert TFLite L2_NORMALIZATION """
+try:
+from tflite.Operator import Operator
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.L2NormOptions import L2NormOptions
+from tflite.ActivationFunctionType import ActivationFunctionType
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+assert isinstance(op, Operator)
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 1, "input tensors length should be 1"
+input_tensor = input_tensors[0]
+in_expr = self.get_expr(input_tensor.tensor_idx)
+
+output_tensors = self.get_output_tensors(op)
+assert len(output_tensors) == 1, "output tensors length should be 1"
+output_tensor = output_tensors[0]
+
+assert op.BuiltinOptionsType() == BuiltinOptions.L2NormOptions
+op_options = op.BuiltinOptions()
+l2_norm_options = L2NormOptions()
+l2_norm_options.Init(op_options.Bytes, op_options.Pos)
+fused_activation_fn = l2_norm_options.FusedActivationFunction()
+
+# TFLite supports normalization only over the last dim
+input_tensor_rank = len(input_tensor.tensor.ShapeAsNumpy())
+
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFLite quantized L2_NORMALIZATION operator is not supported yet.')
+# TFL uses only the default epsilon value
+out = _op.nn.l2_normalize(in_expr, eps=1e-12, axis=[input_tensor_rank - 1])
+
+# if we have fused activation fn
+if fused_activation_fn != ActivationFunctionType.NONE:
+if not output_tensor.qnn_params:
+out = self.convert_fused_activation_function(out, fused_activation_fn)
+else:
+raise tvm.error.OpNotImplemented(
+'TFLite quantized L2_NORMALIZATION operator\
+with fused activation function is not supported yet.')
+
+return out
+
 def convert_logistic(self, op):
 """Convert TFLite LOGISTIC"""
 try:
diff --git a/tests/python/frontend/tflite/test_forward.py b/tests/python/frontend/tflite/test_forward.py
index 4a16325..ced2425 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -33,6 +33,7 @@ from tensorflow.python.ops import math_ops
 from tensorflow.python.ops import nn_ops
 from tensorflow.python.ops import array_ops
 from tensorflow.python.ops import gen_array_ops
+from tensorflow.python.ops import nn_impl
 from tensorflow.python.ops import variables
 try:
 from tensorflow import lite as interpreter_wrapper
@@ -1264,6 +1265,24 @@ def test_forward_unpack():
 _test_unpack(np.array(np.random.uniform(0, 5, (2, 3, 4)), dtype=np.int32), axis=-3, num_unpacks=2)
 
 ###
+# L2 n
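
The Relay expression the converter emits boils down to a single call; a sketch for a rank-4 NHWC input:

from tvm import relay

# TFLite normalizes only over the last axis and only with the default
# epsilon, hence the fixed arguments.
x = relay.var("x", shape=(1, 6, 6, 3), dtype="float32")
out = relay.nn.l2_normalize(x, eps=1e-12, axis=[3])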

[incubator-tvm] branch master updated (61bea50 -> eba50ad)

2020-02-26 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 61bea50  [Tutorial] Add a tutorial for PyTorch (#4936)
 add eba50ad  [Relay][pass] call graph for relay (#4922)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/__init__.py  |   4 +
 python/tvm/relay/call_graph.py| 144 ++
 src/relay/pass/call_graph.cc  | 339 ++
 src/relay/pass/call_graph.h   | 509 ++
 tests/python/relay/test_call_graph.py | 150 ++
 5 files changed, 1146 insertions(+)
 create mode 100644 python/tvm/relay/call_graph.py
 create mode 100644 src/relay/pass/call_graph.cc
 create mode 100644 src/relay/pass/call_graph.h
 create mode 100644 tests/python/relay/test_call_graph.py
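
A minimal sketch of querying the new structure from Python; the module path is inferred from the file list above (tvm.relay.call_graph at this commit; later TVM exposes it as relay.analysis.CallGraph):

import tvm
from tvm import relay
from tvm.relay import call_graph

x = relay.var("x", shape=(2, 2), dtype="float32")
mod = tvm.IRModule()
mod["double"] = relay.Function([x], relay.add(x, x))
y = relay.var("y", shape=(2, 2), dtype="float32")
mod["main"] = relay.Function([y], mod.get_global_var("double")(y))

cg = call_graph.CallGraph(mod)
print(cg)  # textual dump of caller/callee edges in the module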