[GitHub] [tvm] domin1985 commented on pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-01-27 Thread GitBox


domin1985 commented on pull request #7347:
URL: https://github.com/apache/tvm/pull/7347#issuecomment-768864231


   > I think the change makes sense, but could you add a few test cases to the parser and I will take another look? Thanks for the fix!
   
   Thanks for the reminder







[GitHub] [tvm] manupa-arm opened a new pull request #7358: [Relay][PatternLang] Bug fix of rewrite func attr

2021-01-27 Thread GitBox


manupa-arm opened a new pull request #7358:
URL: https://github.com/apache/tvm/pull/7358


   When using a pattern with function attributes, such attrs mostly do not exist on op nodes. Therefore, a has-attr check has to be done for op nodes.
   
   @mbrookhart @masahi @comaniac 
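   A minimal sketch of the failure mode (illustrative, not from the PR; the attribute value is made up): a pattern keyed on a function-level attribute, checked against a bare op node that carries no such attribute. With the guard in place this should simply fail to match rather than erroring on the missing attr.

from tvm import relay
from tvm.relay.dataflow_pattern import wildcard

# A pattern keyed on a function-level attribute.
pattern = wildcard().has_attr({"Composite": "my_add"})

add_op = relay.op.get("add")      # a bare op node: no such attr exists on it
assert not pattern.match(add_op)  # expected: no match, not a crash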
   







[GitHub] [tvm] FrozenGene commented on pull request #7267: [Frontend][Tensorflow] Sparse dense matmul adjoint option added

2021-01-27 Thread GitBox


FrozenGene commented on pull request #7267:
URL: https://github.com/apache/tvm/pull/7267#issuecomment-768819796


   Thanks @tkonolige @ANSHUMAN87 







[tvm] branch main updated: [Frontend][Tensorflow] Sparse dense matmul adjoint option added (#7267)

2021-01-27 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new dda8f5d  [Frontend][Tensorflow] Sparse dense matmul adjoint option added (#7267)
dda8f5d is described below

commit dda8f5d944747b9f48b9155e866fd0f746fcd9bb
Author: ANSHUMAN TRIPATHY 
AuthorDate: Thu Jan 28 11:28:13 2021 +0530

[Frontend][Tensorflow] Sparse dense matmul adjoint option added (#7267)

* [Frontend][Tensorflow] Sparse dense matmul adjoint option added

* [1] Review comments handled

* [2] Review comments handled

* [3] Review comments handled
---
 python/tvm/relay/frontend/tensorflow.py  | 69 
 tests/python/frontend/tensorflow/test_forward.py | 12 +++--
 2 files changed, 53 insertions(+), 28 deletions(-)

diff --git a/python/tvm/relay/frontend/tensorflow.py b/python/tvm/relay/frontend/tensorflow.py
index 2c7361a..b34e6c7 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -926,13 +926,6 @@ def _sparse_tensor_dense_matmul():
 
 data = inputs[3]
 
-# By default, in tensorflow the first input ,i.e., data is sparse
-sparse_lhs = True
-
-# If both are true means First input was dense and second was sparse
-if attr.get("adjoint_a") and attr.get("adjoint_b"):
-sparse_lhs = False
-
 rows = [x[0] for x in indices_tensor]
 cols = [x[1] for x in indices_tensor]
 
@@ -941,9 +934,53 @@ def _sparse_tensor_dense_matmul():
 (values_tensor, (rows, cols)), shape=tuple(dense_shape_tensor.tolist())
 )
 
-if sparse_lhs:
+# As per tensorflow implementation, we have 4 possible input combination
+# and the first input(A) is always sparse and second input(B) is always dense.
+# Case 1: A , B , adjoint_a=False, adjoint_b=False  --> A * B
+# Case 2: A , B , adjoint_a=True,  adjoint_b=False  --> A.T * B
+# Case 3: A , B , adjoint_a=False, adjoint_b=True   --> A * B.T
+# Case 4: A , B , adjoint_a=True,  adjoint_b=True   --> A.T * B.T
+#
+# Topi implementation for sparse_dense(matmul) has 2 possible input
+# combination where first input(A) is always dense
+# and second input(B) is always sparse.
+# Case 1: A , B, sparse_lhs = False  --> A * B.T
+# Case 2: A , B, sparse_lhs = True   --> B * A.T
+#
+# The mapping would be as below:
+# TF Case 1: A , B , adjoint_a=False, adjoint_b=False
+#   --> In TF: A * B   --> In Topi: A * B.T.T
+#   --> sparse_dense(transpose(B), A, sparse_lhs=True)
+#
+# TF Case 2: A , B , adjoint_a=True, adjoint_b=False
+#   --> In TF: A.T * B   --> In Topi: A.T * B.T.T
+#   --> sparse_dense(transpose(B), transpose(A), sparse_lhs=True)
+#
+# TF Case 3: A , B , adjoint_a=False, adjoint_b=True
+#   --> In TF: A * B.T   --> In Topi: A * B
+#   --> sparse_dense(B, A, sparse_lhs=True)
+#
+# TF Case 4: A , B , adjoint_a=True, adjoint_b=True
+#   --> In TF: A.T * B.T   --> In Topi: (B * A.T).T
+#   --> transpose(sparse_dense(B, transpose(A), sparse_lhs=False))
+
+# By default, in tensorflow the first input ,i.e., data is sparse
+sparse_lhs = True
+
+# TF Case 1:
+if not attr.get("adjoint_a") and not attr.get("adjoint_b"):
+data = _op.transpose(data)
+# TF Case 2:
+elif attr.get("adjoint_a") and not attr.get("adjoint_b"):
 data = _op.transpose(data)
+weight_sp = csr_matrix(weight_sp.transpose())
+# TF Case 3:
+elif not attr.get("adjoint_a") and attr.get("adjoint_b"):
+pass
+# TF Case 4:
+# attr.get("adjoint_a") and attr.get("adjoint_b"):
 else:
+sparse_lhs = False
 weight_sp = csr_matrix(weight_sp.transpose())
 
 weight_data = _expr.const(weight_sp.data, weight_sp.data.dtype)
@@ -953,23 +990,9 @@ def _sparse_tensor_dense_matmul():
 ret = _op.nn.sparse_dense(data, [weight_data, weight_indices, weight_indptrs], sparse_lhs)
 
 if not sparse_lhs:
+# TF Case 4
 ret = _op.transpose(ret)
 
-# Case 1. If both are true means first input was dense and second was sparse
-# Case 2. If both are false means first input was sparse and second was dense
-# TODO(ANSHUMAN87): Support other adjoint option too
-if not (
-(attr.get("adjoint_a") and attr.get("adjoint_b"))
-or ((not attr.get("adjoint_a")) and (not attr.get("adjoint_b")))
-):
-raise tvm.error.OpAttributeUnImplemented(
-   
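A quick way to sanity-check the four-case mapping in the comment above, using plain numpy/scipy (illustrative only; `sparse_dense` below is a hypothetical stand-in for the topi contract, not the TVM op):

import numpy as np
from scipy.sparse import csr_matrix

def sparse_dense(dense, sparse, sparse_lhs):
    # Stand-in for the topi contract described above:
    # sparse_lhs=False -> dense * sparse.T ; sparse_lhs=True -> sparse * dense.T
    s = sparse.toarray()
    return s @ dense.T if sparse_lhs else dense @ s.T

rng = np.random.default_rng(0)
A = csr_matrix(rng.random((4, 4)) * (rng.random((4, 4)) > 0.5))  # sparse input
B = rng.random((4, 4))                                           # dense input
Ad = A.toarray()

assert np.allclose(Ad @ B, sparse_dense(B.T, A, True))                      # TF Case 1
assert np.allclose(Ad.T @ B, sparse_dense(B.T, csr_matrix(Ad.T), True))     # TF Case 2
assert np.allclose(Ad @ B.T, sparse_dense(B, A, True))                      # TF Case 3
assert np.allclose(Ad.T @ B.T, sparse_dense(B, csr_matrix(Ad.T), False).T)  # TF Case 4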

[GitHub] [tvm] FrozenGene merged pull request #7267: [Frontend][Tensorflow] Sparse dense matmul adjoint option added

2021-01-27 Thread GitBox


FrozenGene merged pull request #7267:
URL: https://github.com/apache/tvm/pull/7267


   







[GitHub] [tvm] ANSHUMAN87 commented on pull request #7267: [Frontend][Tensorflow] Sparse dense matmul adjoint option added

2021-01-27 Thread GitBox


ANSHUMAN87 commented on pull request #7267:
URL: https://github.com/apache/tvm/pull/7267#issuecomment-768816905


   Thanks @tkonolige !
   Gentle ping @FrozenGene!







[GitHub] [tvm] jroesch commented on issue #7356: README for Rust bindings says nightly is required

2021-01-27 Thread GitBox


jroesch commented on issue #7356:
URL: https://github.com/apache/tvm/issues/7356#issuecomment-768808041


   Thanks for opening an issue! The current documentation needs to be rewritten. I've been (slowly) working to refactor the bindings, with the goal of having a stable version in the coming release. The Rust docs are more up to date, but the docs in the repo are stale. The nightly requirement used to be true of the old bindings, but CI uses a recent compiler (1.47 the last time I built the images).







[GitHub] [tvm] jroesch commented on issue #7339: [Bug][Parser] Parser/tokenizer doesn't handle inf float

2021-01-27 Thread GitBox


jroesch commented on issue #7339:
URL: https://github.com/apache/tvm/issues/7339#issuecomment-768807297


   I would be happy to mentor anyone who wants to pick this one up.







[tvm] branch main updated: Update uTVM code to work with the nRF5340DK dev board. (#7331)

2021-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new cbc035f  Update uTVM code to work with the nRF5340DK dev board. (#7331)
cbc035f is described below

commit cbc035f70a0cd2b3b85681fb77f843bb9b74b1ea
Author: Matt Welsh (OctoML) <63477620+mdw-oct...@users.noreply.github.com>
AuthorDate: Wed Jan 27 20:59:26 2021 -0800

Update uTVM code to work with the nRF5340DK dev board. (#7331)

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Add `nrfjprog --recover` for nRF5340DK

* Cleanup.

* Remove debugging code.

* Revert submodule update.

* Remove debugging code.

* Fix comment.

* Remove -keys argument.

* Adding some debugging code

* Fix passing west command to ZephyrFlasher.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Add `nrfjprog --recover` for nRF5340DK

* Cleanup.

* Various fixes to get nRF5340 working. Not yet there.

* nRF5340 test runs locally.

* Remove debugging code.

* Fix comment.

* Remove -keys argument.

* Fix merge.
---
 apps/microtvm/reference-vm/zephyr/pyproject.toml |  3 +++
 python/tvm/micro/contrib/zephyr.py   | 23 ++---
 python/tvm/target/target.py  |  3 +++
 tests/micro/qemu/conftest.py |  9 +++
 tests/micro/qemu/test_zephyr.py  | 33 +---
 5 files changed, 53 insertions(+), 18 deletions(-)

diff --git a/apps/microtvm/reference-vm/zephyr/pyproject.toml b/apps/microtvm/reference-vm/zephyr/pyproject.toml
index f21c272..b4cfc54 100644
--- a/apps/microtvm/reference-vm/zephyr/pyproject.toml
+++ b/apps/microtvm/reference-vm/zephyr/pyproject.toml
@@ -64,6 +64,9 @@ scipy = "^1.4"
 python = "^3.6"
 tornado = "^6"
 typed_ast = "^1.4"
+pyyaml = "^5.4.1"
+pyserial = "^3.5"
+
 
 # AutoTVM
 xgboost = {version = "^1.1", optional = true}
diff --git a/python/tvm/micro/contrib/zephyr.py b/python/tvm/micro/contrib/zephyr.py
index fa032e2..ed1c986 100644
--- a/python/tvm/micro/contrib/zephyr.py
+++ b/python/tvm/micro/contrib/zephyr.py
@@ -191,7 +191,7 @@ class ZephyrCompiler(tvm.micro.Compiler):
 with open(os.path.join(output, "main.c"), "w"):
 pass
 
-# expecetd not to exist after populate_tvm_libs
+# expected not to exist after populate_tvm_libs
 build_dir = os.path.join(output, "__tvm_build")
 os.mkdir(build_dir)
 self._subprocess_env.run(
@@ -241,11 +241,12 @@ class ZephyrCompiler(tvm.micro.Compiler):
 def flasher_factory(self):
 return compiler.FlasherFactory(
 ZephyrFlasher,
-(self._west_cmd,),
+(self._board,),
 dict(
 zephyr_base=self._zephyr_base,
 project_dir=self._project_dir,
 subprocess_env=self._subprocess_env.default_overrides,
+west_cmd=self._west_cmd,
 ),
 )
 
@@ -291,7 +292,7 @@ class ZephyrFlasher(tvm.micro.compiler.Flasher):
 
 def __init__(
 self,
-west_cmd,
+board,
 zephyr_base=None,
 project_dir=None,
 subprocess_env=None,
@@ -300,6 +301,7 @@ class ZephyrFlasher(tvm.micro.compiler.Flasher):
 flash_args=None,
 debug_rpc_session=None,
 serial_timeouts=None,
+west_cmd=None,
 ):
 zephyr_base = zephyr_base or os.environ["ZEPHYR_BASE"]
 sys.path.insert(0, os.path.join(zephyr_base, "scripts", "dts"))
@@ -310,6 +312,7 @@ class ZephyrFlasher(tvm.micro.compiler.Flasher):
 finally:
 sys.path.pop(0)
 
+self._board = board
 self._zephyr_base = zephyr_base
 self._project_dir = project_dir
 self._west_cmd = west_cmd
@@ -414,6 +417,20 @@ class ZephyrFlasher(tvm.micro.compiler.Flasher):
 build_dir = os.path.dirname(
 micro_binary.abspath(micro_binary.labelled_files["cmake_cache"][0])
 )
+
+# The nRF5340DK requires an additional `nrfjprog --recover` before each flash cycle.
+# This is because readback protection is enabled by default when this device is flashed.
+# Otherwise, flashing may fail with an error such as the following:
+#  ERROR: The operation attempted is unavailable due to readback protection in
+#  ERROR: your device. Please use --recover to unlock the device.
+if (
+self._board.startswith("nrf5340dk")
+and self._get_flash_runner(cmake_entries) == "nrfjprog"
+):
+recover
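For reference, the recover step added above amounts to the following standalone call (sketch, assuming the nRF command-line tools are on PATH):

import subprocess

# Clear readback protection on the attached nRF5340DK before flashing.
subprocess.check_call(["nrfjprog", "--recover"])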

[GitHub] [tvm] jroesch merged pull request #7331: Update uTVM code to work with the nRF5340DK dev board.

2021-01-27 Thread GitBox


jroesch merged pull request #7331:
URL: https://github.com/apache/tvm/pull/7331


   







[GitHub] [tvm] jroesch edited a comment on pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-01-27 Thread GitBox


jroesch edited a comment on pull request #7347:
URL: https://github.com/apache/tvm/pull/7347#issuecomment-768798855


   I think the change makes sense, but could you add a few test cases to the parser and I will take another look? Thanks for the fix!







[GitHub] [tvm] jroesch commented on pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-01-27 Thread GitBox


jroesch commented on pull request #7347:
URL: https://github.com/apache/tvm/pull/7347#issuecomment-768798855


   I think the change makes sense, but could you add a few test cases to the parser and I will take another look?







[GitHub] [tvm] masahi commented on pull request #7354: [Relay] Fold If when the Condition is Constant

2021-01-27 Thread GitBox


masahi commented on pull request #7354:
URL: https://github.com/apache/tvm/pull/7354#issuecomment-768792198


   thanks @mbrookhart @jwfromm 







[GitHub] [tvm] masahi merged pull request #7354: [Relay] Fold If when the Condition is Constant

2021-01-27 Thread GitBox


masahi merged pull request #7354:
URL: https://github.com/apache/tvm/pull/7354


   







[tvm] branch main updated: Fold If when the condition is Constant (#7354)

2021-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8b84e33  Fold If when the condition is Constant (#7354)
8b84e33 is described below

commit 8b84e33679585082fd1817821eac8a7eae5830c6
Author: Matthew Brookhart 
AuthorDate: Wed Jan 27 21:31:18 2021 -0700

Fold If when the condition is Constant (#7354)
---
 src/relay/transforms/fold_constant.cc | 12 +
 tests/python/relay/test_pass_fold_constant.py | 39 +++
 2 files changed, 51 insertions(+)

diff --git a/src/relay/transforms/fold_constant.cc b/src/relay/transforms/fold_constant.cc
index 48af31f..66f233b 100644
--- a/src/relay/transforms/fold_constant.cc
+++ b/src/relay/transforms/fold_constant.cc
@@ -120,6 +120,18 @@ class ConstantFolder : public MixedModeMutator {
 }
   }
 
+  Expr VisitExpr_(const IfNode* op) final {
+auto new_cond = ExprMutator::VisitExpr(op->cond);
+if (auto const_cond = new_cond.as<ConstantNode>()) {
+  if (reinterpret_cast<uint8_t*>(const_cond->data->data)[0]) {
+return ExprMutator::VisitExpr(op->true_branch);
+  } else {
+return ExprMutator::VisitExpr(op->false_branch);
+  }
+}
+return ExprMutator::VisitExpr_(op);
+  }
+
   Expr Rewrite_(const CallNode* call, const Expr& post) final {
 if (inside_primitive) {
   return GetRef<Expr>(call);
diff --git a/tests/python/relay/test_pass_fold_constant.py b/tests/python/relay/test_pass_fold_constant.py
index 549596d..76182d2 100644
--- a/tests/python/relay/test_pass_fold_constant.py
+++ b/tests/python/relay/test_pass_fold_constant.py
@@ -147,6 +147,45 @@ def test_fold_concat():
 assert tvm.ir.structural_equal(zz, zexpected)
 
 
+def test_fold_if():
+cond_data = np.array(1).astype("bool")
+x_data = np.array([[1, 2, 3]]).astype("float32")
+
+def before():
+a = relay.const(cond_data)
+x = relay.const(x_data)
+y = relay.const(x_data)
+iff = relay.If(a, x + y, x - y)
+return relay.Function([], iff)
+
+def expected():
+y_data = x_data + x_data
+y = relay.const(y_data)
+return relay.Function([], y)
+
+zz = run_opt_pass(before(), transform.FoldConstant())
+zexpected = run_opt_pass(expected(), transform.InferType())
+assert tvm.ir.structural_equal(zz, zexpected)
+
+cond_data = np.array(0).astype("bool")
+
+def before():
+a = relay.const(cond_data)
+x = relay.const(x_data)
+y = relay.const(x_data)
+iff = relay.If(a, x + y, x - y)
+return relay.Function([], iff)
+
+def expected():
+y_data = x_data - x_data
+y = relay.const(y_data)
+return relay.Function([], y)
+
+zz = run_opt_pass(before(), transform.FoldConstant())
+zexpected = run_opt_pass(expected(), transform.InferType())
+assert tvm.ir.structural_equal(zz, zexpected)
+
+
 def test_fold_shape_of():
 c_shape = (8, 9, 10)
 



[GitHub] [tvm] masahi commented on pull request #7355: [Relay][PatternLang] Fuzzy Function Matching

2021-01-27 Thread GitBox


masahi commented on pull request #7355:
URL: https://github.com/apache/tvm/pull/7355#issuecomment-768792058


   thanks @mbrookhart @comaniac 







[tvm] branch main updated: If an expression has two branches, and the pattern ignores one with a wildcard, allow grouping via dominator analysis (#7355)

2021-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 02fefbc  If an expression has two branches, and the pattern ignores one with a wildcard, allow grouping via dominator analysis (#7355)
02fefbc is described below

commit 02fefbc1df0dec8989105076f48eace34027a31b
Author: Matthew Brookhart 
AuthorDate: Wed Jan 27 21:30:52 2021 -0700

If an expression has two branches, and the pattern ignores one with a wildcard, allow grouping via dominator analysis (#7355)
---
 src/relay/ir/dataflow_matcher.cc|  3 +-
 src/relay/ir/indexed_graph.h| 22 +
 tests/python/relay/test_dataflow_pattern.py | 71 +
 3 files changed, 95 insertions(+), 1 deletion(-)

diff --git a/src/relay/ir/dataflow_matcher.cc b/src/relay/ir/dataflow_matcher.cc
index 0d94813..cfacd41 100644
--- a/src/relay/ir/dataflow_matcher.cc
+++ b/src/relay/ir/dataflow_matcher.cc
@@ -730,7 +730,8 @@ class PatternGrouper {
   auto node = matcher_->expr_graph_.node_map_.at(kv.first);
   for (auto* output : node->outputs_) {
 // and the node is used by nodes outside of the group
-if (memo.count(output->ref_) == 0) {
+if (memo.count(output->ref_) == 0 &&
+!matcher_->expr_graph_.node_map_.at(expr)->Dominates(output)) {
   // Exit because nodes in this pattern's body are used outside the pattern
   // fusing it would be invalid
   return;
diff --git a/src/relay/ir/indexed_graph.h b/src/relay/ir/indexed_graph.h
index 4bbb741..d073bca 100644
--- a/src/relay/ir/indexed_graph.h
+++ b/src/relay/ir/indexed_graph.h
@@ -27,6 +27,7 @@
 #include 
 
 #include 
+#include <stack>
 #include 
 #include 
 #include 
@@ -74,6 +75,27 @@ class IndexedGraph {
 Node* dominator_parent_;
 /*! \brief The nodes this node dominates */
 std::vector<Node*> dominator_children_;
+
+bool Dominates(const Node* other) {
+  std::stack<const Node*> stack;
+  std::unordered_set<const Node*> visited;
+  stack.push(this);
+  while (!stack.empty()) {
+const Node* current = stack.top();
+stack.pop();
+for (auto node : current->dominator_children_) {
+  if (visited.count(node) == 0) {
+if (other == node) {
+  return true;
+} else {
+  stack.push(node);
+}
+visited.insert(node);
+  }
+}
+  }
+  return false;
+}
   };
   /*! \brief Construct the domination tree inside IndexedGraph */
   void PostDom() {
diff --git a/tests/python/relay/test_dataflow_pattern.py b/tests/python/relay/test_dataflow_pattern.py
index e7b367b..15d3ee0 100644
--- a/tests/python/relay/test_dataflow_pattern.py
+++ b/tests/python/relay/test_dataflow_pattern.py
@@ -16,6 +16,7 @@
 # under the License.
 # pylint: disable=unused-wildcard-import
 import numpy as np
+import pytest
 
 import tvm
 from tvm import relay
@@ -1470,6 +1471,76 @@ def test_partition_function():
 assert tvm.ir.structural_equal(pattern.partition(expr), expr2)
 
 
+def test_rewrite_function_with_fuzzy_body():
+"""Allow Rewriting a function with a fuzzy body via dominator analysis"""
+x = relay.var("x")
+w = relay.var("w")
+b = relay.var("b")
+
+x1 = relay.var("x1")
+w1 = relay.var("w1")
+
+wc_x = wildcard()
+wc_w = wildcard()
+wc_b = wildcard()
+wc_x1 = wildcard()
+wc_w1 = wildcard()
+
+func_pattern = FunctionPattern([wc_x1, wc_w1], wildcard())
+pattern = func_pattern(wc_x, wc_w) + wc_b
+
+func = relay.Function([x1, w1], relay.nn.conv2d(x1, w1))
+expr = func(x, w) + b + b
+
+class TestRewrite(DFPatternCallback):
+def __init__(self):
+super(TestRewrite, self).__init__()
+self.pattern = pattern
+
+def callback(self, pre, post, node_map):
+return x + w
+
+out = rewrite(TestRewrite(), expr)
+assert tvm.ir.structural_equal(x + w, x + w)
+
+
+@pytest.mark.skip(
+"""TODO(mbrookhart): The current partitioner can't properly handle 
+   the partitioned inputs on the fuzzy body"""
+)
+def test_partition_function_with_fuzzy_body():
+"""
+Allow Rewriting a function with a fuzzy body via dominator analysis
+"""
+x = relay.var("x")
+w = relay.var("w")
+b = relay.var("b")
+
+x1 = relay.var("x1")
+w1 = relay.var("w1")
+
+wc_x = wildcard()
+wc_w = wildcard()
+wc_b = wildcard()
+wc_x1 = wildcard()
+wc_w1 = wildcard()
+
+func_pattern = FunctionPattern([wc_x1, wc_w1], wildcard())
+pattern = func_pattern(wc_x, wc_w) + wc_b
+
+func = relay.Function([x1, w1], relay.nn.conv2d(x1, w1))
+expr = func(x, w) + b + b
+
+x2 = relay.var("x2")
+w2 = relay.var("w2")
+b2 = relay.var("b2")
+func2 =

[GitHub] [tvm] masahi merged pull request #7355: [Relay][PatternLang] Fuzzy Function Matching

2021-01-27 Thread GitBox


masahi merged pull request #7355:
URL: https://github.com/apache/tvm/pull/7355


   







[tvm] branch main updated: [Relay][Frontend][Onnx] Robustify Loop Importer (#7353)

2021-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 913abe0  [Relay][Frontend][Onnx] Robustify Loop Importer (#7353)
913abe0 is described below

commit 913abe087a3054831662b995c2e4f1f2271afbc6
Author: Josh Fromm 
AuthorDate: Wed Jan 27 20:30:30 2021 -0800

[Relay][Frontend][Onnx] Robustify Loop Importer (#7353)

* Add test for array loop.

* Fixed scalar issue.

* Formatting.

* Fix injective schedule for dynamic shapes.
---
 python/tvm/relay/frontend/onnx.py  | 13 +-
 python/tvm/topi/x86/injective.py   | 27 ++-
 tests/python/frontend/onnx/test_forward.py | 74 ++
 3 files changed, 92 insertions(+), 22 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index 7a3b168..b1b01b8 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -2227,8 +2227,17 @@ class Loop(OnnxOpConverter):
 # Add new scan outputs to tracking
 combined_scan_outputs = []
 for i, scan in enumerate(scan_outputs):
-new_scan = _op.expand_dims(new_scan_outputs[i], axis=0)
-combined_scan = _op.concatenate([scan, new_scan], axis=0)
+rank = len(infer_shape(scan)) - 1
+new_scan = new_scan_outputs[i]
+expand_scan = _op.expand_dims(new_scan, axis=0)
+# For non scalar outputs we need to broadcast the initial value.
+if rank > 0:
+new_scan_shape = _op.shape_of(new_scan, dtype=iter_dtype)
+scan_broadcast = _op.concatenate(
+[_op.reshape(loop_count, [1]), new_scan_shape], axis=0
+)
+scan = _op.broadcast_to(scan, scan_broadcast)
+combined_scan = _op.concatenate([scan, expand_scan], axis=0)
 combined_scan_outputs.append(combined_scan)
 
 # Increment counter.
diff --git a/python/tvm/topi/x86/injective.py b/python/tvm/topi/x86/injective.py
index 29f903f..6492b78 100644
--- a/python/tvm/topi/x86/injective.py
+++ b/python/tvm/topi/x86/injective.py
@@ -17,6 +17,7 @@
 # pylint: disable=invalid-name
 """x86 declaration and schedules."""
 from tvm import te
+from tvm.tir import IntImm
 from ..utils import is_empty_shape
 
 
@@ -100,18 +101,20 @@ def schedule_concatenate(outs):
 def vectorize(sch, tensor, vectorize_limit):
 """Internal vectorization function for concatenate."""
 inner_axis = s[tensor].op.axis[len(s[tensor].op.axis) - 1]
-inner_length = tensor.shape[len(tensor.shape) - 1].value
-if inner_length <= vectorize_limit:
-sch[tensor].vectorize(inner_axis)
-else:
-split_factor = 1
-for i in range(vectorize_limit, 1, -1):
-if inner_length % i == 0:
-split_factor = i
-break
-if split_factor > 1:
-_, inner_i = sch[tensor].split(inner_axis, split_factor)
-sch[tensor].vectorize(inner_i)
+# Check that the tensor shape is static. Otherwise skip vectorization.
+if isinstance(tensor.shape[len(tensor.shape) - 1], IntImm):
+inner_length = tensor.shape[len(tensor.shape) - 1].value
+if inner_length <= vectorize_limit:
+sch[tensor].vectorize(inner_axis)
+else:
+split_factor = 1
+for i in range(vectorize_limit, 1, -1):
+if inner_length % i == 0:
+split_factor = i
+break
+if split_factor > 1:
+_, inner_i = sch[tensor].split(inner_axis, split_factor)
+sch[tensor].vectorize(inner_i)
 
 outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
 x = outs[0]
diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index 20937d2..c04 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -3654,14 +3654,14 @@ def verify_cond_loop():
 
 
 def verify_count_loop():
-y_in = helper.make_tensor_value_info("y_in", TensorProto.FLOAT, [1])
-y_out = helper.make_tensor_value_info("y_out", TensorProto.FLOAT, [1])
-scan_out = helper.make_tensor_value_info("scan_out", TensorProto.FLOAT, [1])
+y_in = helper.make_tensor_value_info("y_in", TensorProto.FLOAT, [])
+y_out = helper.make_tensor_value_info("y_out", TensorProto.FLOAT, [])
+scan_out = helper.make_tensor_value_info("scan_out", TensorProto.FLOAT, [])
 cond_in = helper.make_tensor_value_info("cond_in", TensorProto.BOOL, [])
 cond_out = helper.make_t
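The guard added in injective.py above can be shown in isolation: vectorization is only attempted when the innermost extent is a compile-time constant (tir.IntImm). A small sketch:

from tvm import te
from tvm.tir import IntImm

n = te.var("n")
static = te.placeholder((4, 16), name="static")   # last dim known at compile time
dynamic = te.placeholder((4, n), name="dynamic")  # last dim symbolic

for t in (static, dynamic):
    inner = t.shape[len(t.shape) - 1]
    print(t.op.name, "vectorizable:", isinstance(inner, IntImm))
# static vectorizable: True / dynamic vectorizable: False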

[GitHub] [tvm] masahi commented on pull request #7353: [Relay][Frontend][Onnx] Robustify Loop Importer

2021-01-27 Thread GitBox


masahi commented on pull request #7353:
URL: https://github.com/apache/tvm/pull/7353#issuecomment-768791983


   thanks @jwfromm @mbrookhart 







[GitHub] [tvm] masahi merged pull request #7353: [Relay][Frontend][Onnx] Robustify Loop Importer

2021-01-27 Thread GitBox


masahi merged pull request #7353:
URL: https://github.com/apache/tvm/pull/7353


   







[GitHub] [tvm] altanh opened a new pull request #7357: [Relay][Training] fix grad for zeros and ones

2021-01-27 Thread GitBox


altanh opened a new pull request #7357:
URL: https://github.com/apache/tvm/pull/7357


   The old gradient for these operators must have been written/updated before the `dyn` namespace was decided. I've updated the gradients to be correct now.
   
   cc @mbrookhart @kevinthesun 







[GitHub] [tvm] tqchen merged pull request #7352: [COMMUNITY] @trevor-m -> reviewer

2021-01-27 Thread GitBox


tqchen merged pull request #7352:
URL: https://github.com/apache/tvm/pull/7352


   







[tvm] branch main updated: [COMMUNITY] @trevor-m -> reviewer (#7352)

2021-01-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new d8efe70  [COMMUNITY] @trevor-m -> reviewer (#7352)
d8efe70 is described below

commit d8efe709a7c70c24c7b9cd1b7842677497b342ed
Author: Tianqi Chen 
AuthorDate: Wed Jan 27 21:08:05 2021 -0500

[COMMUNITY] @trevor-m -> reviewer (#7352)
---
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index bf10271..773f94a 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -112,6 +112,7 @@ We do encourage everyone to work anything they are interested in.
 - [Sergey Mironov](https://github.com/grwlf): @grwlf
 - [Thierry Moreau](https://github.com/tmoreau89): @tmoreau89
 - [Kazutaka Morita](https://github.com/kazum): @kazum
+- [Trevor Morris](https://github.com/trevor-m): @trevor-m
 - [Tatsuya Nishiyama](https://github.com/nishi-t): @nishi-t
 - [Wei Pan](https://github.com/wpan11nv): @wpan11nv
 - [Krzysztof Parzyszek](https://github.com/kparzysz-quic): @kparzysz-quic



[GitHub] [tvm] genbattle opened a new issue #7356: README for Rust bindings says nightly is required

2021-01-27 Thread GitBox


genbattle opened a new issue #7356:
URL: https://github.com/apache/tvm/issues/7356


   The [README for the TVM rust bindings](https://github.com/apache/tvm/blob/main/rust/tvm/README.md) says that nightly rust is required, but I just downloaded, built and ran the tests for the bindings on stable Rust 1.49.0.
   
   >>> This crate provides an idiomatic Rust API for TVM runtime frontend. Currently this requires *Nightly Rust* and tested on `rustc 1.32.0-nightly`
   
   Is nightly Rust actually still required? If so, it may be helpful to list the features/dependencies that cause the nightly requirement as part of the README. If not, this requirement should be removed.
   







[GitHub] [tvm] masahi commented on a change in pull request #7354: [Relay] Fold If when the Condition is Constant

2021-01-27 Thread GitBox


masahi commented on a change in pull request #7354:
URL: https://github.com/apache/tvm/pull/7354#discussion_r565769392



##
File path: src/relay/transforms/fold_constant.cc
##
@@ -120,6 +120,18 @@ class ConstantFolder : public MixedModeMutator {
 }
   }
 
+  Expr VisitExpr_(const IfNode* op) final {
+auto new_cond = ExprMutator::VisitExpr(op->cond);
+if (auto const_cond = new_cond.as<ConstantNode>()) {
+  if (reinterpret_cast<uint8_t*>(const_cond->data->data)[0]) {

Review comment:
   there is no representation of `bool*`; things need to be byte-addressable









[GitHub] [tvm] ziyu-guo commented on pull request #7211: Build multi models into one system-lib

2021-01-27 Thread GitBox


ziyu-guo commented on pull request #7211:
URL: https://github.com/apache/tvm/pull/7211#issuecomment-768679644


   Quick question: how does this work with multiple TVM module libs compiled from the same model, but with different batch sizes? Won't the names collide in the combined lib?







[GitHub] [tvm] yzhliu commented on pull request #7321: [Autodiff] Deterministic gradient compute

2021-01-27 Thread GitBox


yzhliu commented on pull request #7321:
URL: https://github.com/apache/tvm/pull/7321#issuecomment-768670519


   Thanks @hzfan @comaniac 







[tvm] branch main updated: [Autodiff] Deterministic gradient compute (#7321)

2021-01-27 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 00257f3  [Autodiff] Deterministic gradient compute (#7321)
00257f3 is described below

commit 00257f347faad0b3ec2e9624413015bef34d451f
Author: Haozheng Fan 
AuthorDate: Thu Jan 28 08:32:04 2021 +0800

[Autodiff] Deterministic gradient compute (#7321)

* fix unstable compute

* fix

* fix

* lint

* sort linear equation

* sort inequalities

* fix

* fix find

* lint

* fix find

* lint
---
 src/arith/solve_linear_equation.cc   |  9 +++---
 src/arith/solve_linear_inequality.cc | 54 ++--
 src/te/autodiff/ad_simplify.cc   | 26 +
 3 files changed, 46 insertions(+), 43 deletions(-)

diff --git a/src/arith/solve_linear_equation.cc b/src/arith/solve_linear_equation.cc
index 22bf736..d66e75d 100644
--- a/src/arith/solve_linear_equation.cc
+++ b/src/arith/solve_linear_equation.cc
@@ -427,11 +427,10 @@ IntConstraintsTransform SolveLinearEquations(const IntConstraints& system_to_sol
 
   // We have to transform ranges of the old variables into relations over new variables because
   // new ranges are not enough usually.
-  for (const auto& p : system_to_solve->ranges) {
-const Var& old_var = p.first;
-const Range& old_range = p.second;
-if (old_to_new_map.count(old_var)) {
-  PrimExpr express_by_new_vars = old_to_new_map[old_var];
+  for (const auto& old_var : system_to_solve->variables) {
+if (system_to_solve->ranges.find(old_var) != system_to_solve->ranges.end()) {
+  const Range& old_range = system_to_solve->ranges.at(old_var);
+  PrimExpr express_by_new_vars = old_to_new_map.at(old_var);
   PrimExpr lower_cond = analyzer_solution.Simplify(old_range->min <= express_by_new_vars);
   PrimExpr upper_cond =
   analyzer_solution.Simplify(express_by_new_vars < old_range->min + old_range->extent);
diff --git a/src/arith/solve_linear_inequality.cc b/src/arith/solve_linear_inequality.cc
index f4de9ff..dd90448 100644
--- a/src/arith/solve_linear_inequality.cc
+++ b/src/arith/solve_linear_inequality.cc
@@ -94,11 +94,10 @@ struct ExprLess {
   }
 };
 
-void DebugPrint(
-const std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>& current_ineq_set,
-const std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>& next_ineq_set,
-const std::vector<PrimExpr>& rest, const std::vector<std::pair<int64_t, PrimExpr>>& coef_pos,
-const std::vector<std::pair<int64_t, PrimExpr>>& coef_neg) {
+void DebugPrint(const std::vector<PrimExpr>& current_ineq_set,
+const std::vector<PrimExpr>& next_ineq_set, const std::vector<PrimExpr>& rest,
+const std::vector<std::pair<int64_t, PrimExpr>>& coef_pos,
+const std::vector<std::pair<int64_t, PrimExpr>>& coef_neg) {
   std::cout << "Current ineq set:\n[";
   for (auto& ineq : current_ineq_set) {
 std::cout << ineq << ", ";
@@ -148,9 +147,12 @@ class NormalizeComparisons : public ExprMutator {
   arith::Analyzer analyzer_;
 };
 
-void AddInequality(std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>* inequality_set,
-   const PrimExpr& new_ineq, Analyzer* analyzer) {
-  if (analyzer->CanProve(new_ineq) || inequality_set->find(new_ineq) != inequality_set->end()) {
+void AddInequality(std::vector<PrimExpr>* inequality_set, const PrimExpr& new_ineq,
+   Analyzer* analyzer) {
+  if (analyzer->CanProve(new_ineq) ||
+  std::find_if(inequality_set->begin(), inequality_set->end(), [&](const PrimExpr& e) {
+return StructuralEqual()(e, new_ineq);
+  }) != inequality_set->end()) {
 // redundant: follows from the vranges
 // or has already been added
 return;
@@ -168,15 +170,13 @@ void AddInequality(std::unordered_set
 }
   }
 
-  inequality_set->insert(new_ineq);
+  inequality_set->push_back(new_ineq);
 }
 
-void ClassifyByPolarity(
-const Var& var,
-const std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>& current_ineq_set,
-std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>* next_ineq_set,
-std::vector<PrimExpr>* rest, std::vector<std::pair<int64_t, PrimExpr>>* coef_pos,
-std::vector<std::pair<int64_t, PrimExpr>>* coef_neg, Analyzer* analyzer) {
+void ClassifyByPolarity(const Var& var, const std::vector<PrimExpr>& current_ineq_set,
+std::vector<PrimExpr>* next_ineq_set, std::vector<PrimExpr>* rest,
+std::vector<std::pair<int64_t, PrimExpr>>* coef_pos,
+std::vector<std::pair<int64_t, PrimExpr>>* coef_neg, Analyzer* analyzer) {
   // Take formulas from current_ineq_set and classify them according to polarity wrt var
   // and store to coef_pos and coef_neg respectively.
   for (const PrimExpr& ineq : current_ineq_set) {
@@ -218,14 +218,14 @@ void ClassifyByPolarity(
   }
 }
 
-void MoveEquality(std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>* upper_bounds,
-  std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>* lower_bounds,
-  std::unordered_set<PrimExpr, StructuralHash, StructuralEqual>* equalities) {
+void MoveEquality(std::vector<PrimExpr>* upper_bounds, std::vector<PrimExpr>* lower_bounds,
+  std::vector<PrimExpr>* equalities) {
   // those exist in both upper & lower bounds will be moved to equalities
   for 
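For context, the user-facing entry point this pass backs is tvm.te.gradient; with this commit the gradient IR it produces should be structurally identical across runs. An illustrative use (not from the patch):

import tvm
from tvm import te

x = te.placeholder((3, 4), name="x")
y = te.compute((3, 4), lambda i, j: x[i, j] * x[i, j], name="y")
[dx] = te.gradient(y, [x])  # reverse-mode autodiff over the TE compute
s = te.create_schedule(dx.op)
print(tvm.lower(s, [x, dx], simple_mode=True))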

[GitHub] [tvm] yzhliu merged pull request #7321: [Autodiff] Deterministic gradient compute

2021-01-27 Thread GitBox


yzhliu merged pull request #7321:
URL: https://github.com/apache/tvm/pull/7321


   







[GitHub] [tvm] zhiics merged pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


zhiics merged pull request #7346:
URL: https://github.com/apache/tvm/pull/7346


   







[tvm] branch main updated: [Torch] More graph rewrites for Faster RCNN / MaskRCNN (#7346)

2021-01-27 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 4006bde  [Torch] More graph rewrites for Faster RCNN / MaskRCNN (#7346)
4006bde is described below

commit 4006bde68e32daeaac5de11d9fc331a28ff55706
Author: masahi 
AuthorDate: Thu Jan 28 08:09:43 2021 +0900

[Torch] More graph rewrites for Faster RCNN / MaskRCNN (#7346)

* add post nms topk to max_out_size rewrite

* add argsort conversion

* scatter pattern first cut

* matching seems to working

* dup matching fixed

* add converter

* conversion seems working

* add reshape, use take

* remove pytorch argsort converter

* update test

* add doc
---
 python/tvm/relay/frontend/pytorch_utils.py | 258 +++--
 .../frontend/pytorch/test_object_detection.py  |  18 +-
 2 files changed, 261 insertions(+), 15 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch_utils.py b/python/tvm/relay/frontend/pytorch_utils.py
index 6fc5a6a..248f535 100644
--- a/python/tvm/relay/frontend/pytorch_utils.py
+++ b/python/tvm/relay/frontend/pytorch_utils.py
@@ -16,13 +16,16 @@
 # under the License.
 # pylint: disable=import-outside-toplevel, unused-argument, invalid-name
 """ Common utilities used by PyTorch frontend """
+from .. import expr
 from .. import op
 from ..dataflow_pattern import (
+wildcard,
 is_constant,
 is_op,
 rewrite,
 is_tuple,
-wildcard,
+is_tuple_get_item,
+is_if,
 DFPatternCallback,
 )
 
@@ -36,6 +39,19 @@ def is_version_greater_than(ver):
 )
 
 
+def dyn_strided_slice_pattern(inp, end):
+"""A pattern to detect dynamic strided slice op."""
+zero = is_constant()
+cast_like = is_op("cast_like")(zero, is_constant())
+less = is_op("less")(is_constant(), cast_like)
+shape_of = is_op("shape_of")(inp)
+cast_like = is_op("cast_like")(shape_of, is_constant())
+add = is_op("add")(is_constant(), cast_like)
+where = is_op("where")(less, add, is_constant())
+
+return is_op("dyn.strided_slice")(inp, where, end, is_constant())
+
+
 def batched_nms_pattern(boxes, scores, idxs, iou_threshold, num_boxes, indices):
 """A pattern to detect batched_nms function in torchvision
 
@@ -73,7 +89,6 @@ def batched_nms_pattern(boxes, scores, idxs, iou_threshold, num_boxes, indices):
 
 """
 one = is_constant()
-zero = is_constant()
 
 # Equivelent PyTorch code from above snippet
 # offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
@@ -84,17 +99,10 @@ def batched_nms_pattern(boxes, scores, idxs, iou_threshold, num_boxes, indices):
 
 # The following doesn't appear in the above Relay snippet. It is required for dynamic
 # stride_slice handling
-cast_like = is_op("cast_like")(zero, is_constant())
-less = is_op("less")(is_constant(), cast_like)
-shape_of = is_op("shape_of")(mul)
-cast_like = is_op("cast_like")(shape_of, is_constant())
-add = is_op("add")(is_constant(), cast_like)
-where = is_op("where")(less, add, is_constant())
 shape_of = is_op("shape_of")(mul)
 cast = is_op("cast")(shape_of)
-
-# This corresponds to offsets[:, None], where offsets is the result of multiplication
-dyn_strided_slice = is_op("dyn.strided_slice")(mul, where, cast, is_constant())
+dyn_strided_slice = dyn_strided_slice_pattern(mul, cast)
 
 # Add offsets to the boxes
 expand_dims = is_op("expand_dims")(dyn_strided_slice)
@@ -112,8 +120,49 @@ def batched_nms_pattern(boxes, scores, idxs, iou_threshold, num_boxes, indices):
 )
 
 
-class NMSRewrite(DFPatternCallback):
-"""A callback to rewrite nms and restore batched nms"""
+def topk_after_batch_nms_pattern(cond, true_branch, data, valid_count, indices, iou_threshold):
+"""
+Detect the following pattern used in torchvision detection models.
+
+def batched_nms(...):
+if boxes.numel() == 0:
+return torch.empty((0,), dtype=torch.int64, device=boxes.device)
+else:
+...
+return nms(boxes_for_nms, scores, iou_threshold)
+
+keep = batched_nms(boxes, scores, lvl, self.nms_thresh)
+keep = keep[:post_nms_top_k] # keep only topk scoring predictions
+
+An equivalent Relay subgraph:
+
+%1184 = if (%1117) {
+  ...
+} else {
+  ...
+  %1172 = vision.non_max_suppression(%1167, %1168, %1171, -1, 0.7f, ...);
+  ...
+  %1183 = dyn.strided_slice(%1174, %1180, %1182, ...);
+  cast(%1183, dtype="int64")
+};
+%1185 = strided_slice(%1184, begin=[0], end=[1000], strides=[1]);
+
+"""
+nms = is_op("vision.non_max_suppression")(
+data, valid_count, indices, is_constant(), iou_threshold
+)
+indices = is_op("squeeze")(is_tuple_get_item(nms, 0))
+   

[GitHub] [tvm] mbrookhart opened a new pull request #7355: [Relay][PatternLang] Fuzzy Function Matching

2021-01-27 Thread GitBox


mbrookhart opened a new pull request #7355:
URL: https://github.com/apache/tvm/pull/7355


   @comaniac @masahi 
   
   I recently ran into a situation where I needed to match based on a function signature, but not necessarily on the function body. To support that, I made some changes to the pattern matcher to allow matching and rewriting functions as long as everything in the match is completely dominated by the pattern.
   
   This works for rewriting, but I haven't been able to get the partitioner to work properly on these tests, so I added a unit test but skipped it with a TODO.
   
   What do you guys think?
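   The pattern shape involved is easier to see in isolation; a sketch mirroring the new unit test (names from tvm.relay.dataflow_pattern):

from tvm import relay
from tvm.relay.dataflow_pattern import FunctionPattern, wildcard

# Match any two-parameter function by signature only: the wildcard body is
# the "fuzzy" part, accepted when dominator analysis allows the grouping.
func_pattern = FunctionPattern([wildcard(), wildcard()], wildcard())
pattern = func_pattern(wildcard(), wildcard()) + wildcard()

x, w, b = relay.var("x"), relay.var("w"), relay.var("b")
x1, w1 = relay.var("x1"), relay.var("w1")
func = relay.Function([x1, w1], relay.nn.conv2d(x1, w1))
print(pattern.match(func(x, w) + b))  # True: matched on signature alone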







[GitHub] [tvm] masahi commented on a change in pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


masahi commented on a change in pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#discussion_r565698955



##
File path: python/tvm/relay/frontend/pytorch_utils.py
##
@@ -169,10 +218,193 @@ def callback(self, pre, post, node_map):
 return self.convert_batched_nms(boxes, scores, idxs, iou_thres, 
num_boxes, indices)
 
 
+class PostNMSTopKRewrite(DFPatternCallback):
+"""A callback to rewrite nms to exploit max_out_size parameter."""
+
+def __init__(self):
+super().__init__()
+self.cond = wildcard()
+self.true_branch = wildcard()
+self.data = wildcard()
+self.valid_count = wildcard()
+self.indices = wildcard()
+self.iou_threshold = wildcard()
+
+self.pattern = topk_after_batch_nms_pattern(
+self.cond,
+self.true_branch,
+self.data,
+self.valid_count,
+self.indices,
+self.iou_threshold,
+)
+
+def rewrite_batch_nms_with_max_out_size(
+self, cond, true_branch, data, valid_count, indices, iou_threshold, 
post_nms_topk
+):
+"""Use the detected post NMS topk parameter in NMS op."""
+nms_ret = op.vision.non_max_suppression(
+data=data,
+valid_count=valid_count,
+indices=indices,
+max_output_size=post_nms_topk,
+iou_threshold=iou_threshold,
+force_suppress=False,
+top_k=-1,
+coord_start=2,
+score_index=1,
+id_index=0,
+return_indices=True,
+invalid_to_bottom=False,
+)
+
+size = op.squeeze(nms_ret[1], axis=[1])
+data_slice = op.squeeze(nms_ret[0], axis=[0])
+
+ret = op.strided_slice(data_slice, begin=expr.const([0]), end=size, 
slice_mode="size")
+
+nms_result = op.cast(ret, "int64")
+
+return expr.If(cond, true_branch, nms_result)
+
+def callback(self, pre, post, node_map):
+post_nms_topk = post.attrs.end[0].value
+return self.rewrite_batch_nms_with_max_out_size(
+node_map[self.cond][0],
+node_map[self.true_branch][0],
+node_map[self.data][0],
+node_map[self.valid_count][0],
+node_map[self.indices][0],
+node_map[self.iou_threshold][0],
+post_nms_topk,
+)
+
+
+def scatter_roi_align_result_pattern(levels, roi_align_results, num_scales):
+"""Detect the Relay subgraph corresponding to the following PyTorch code
+
+first_result = roi_align_results[0]
+dtype, device = first_result.dtype, first_result.device
+res = torch.zeros((levels.size(0), first_result.size(1),
+   first_result.size(2), first_result.size(3)),
+  dtype=dtype, device=device)
+for level in range(len(roi_align_results)):
+index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+index = index.expand(index.size(0),
+ roi_align_results[level].size(1),
+ roi_align_results[level].size(2),
+ roi_align_results[level].size(3))
+res = res.scatter(0, index, roi_align_results[level])
+return res
+"""
+
+def do_where(levels, _):
+idx_in_level = is_op("argwhere")(is_op("equal")(levels, is_constant()))
+idx_in_level = is_op("split")(idx_in_level)
+idx_in_level = is_tuple_get_item(idx_in_level, 0)
+idx_in_level = is_op("squeeze")(idx_in_level)
+idx_in_level = is_tuple_get_item(is_tuple([idx_in_level]), 0)
+return idx_in_level
+
+scatter_res = wildcard()
+
+for i in range(num_scales):
+# index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+scatter_indices = do_where(levels, i)
+scatter_indices = is_op("reshape")(scatter_indices)
+
+# index = index.expand(index.size(0),

Review comment:
   Actually this is the equivalent PyTorch code to explain what the pattern does :) I intentionally keep it









[GitHub] [tvm] masahi commented on a change in pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


masahi commented on a change in pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#discussion_r565698955



##
File path: python/tvm/relay/frontend/pytorch_utils.py
##
@@ -169,10 +218,193 @@ def callback(self, pre, post, node_map):
 return self.convert_batched_nms(boxes, scores, idxs, iou_thres, 
num_boxes, indices)
 
 
+class PostNMSTopKRewrite(DFPatternCallback):
+"""A callback to rewrite nms to exploit max_out_size parameter."""
+
+def __init__(self):
+super().__init__()
+self.cond = wildcard()
+self.true_branch = wildcard()
+self.data = wildcard()
+self.valid_count = wildcard()
+self.indices = wildcard()
+self.iou_threshold = wildcard()
+
+self.pattern = topk_after_batch_nms_pattern(
+self.cond,
+self.true_branch,
+self.data,
+self.valid_count,
+self.indices,
+self.iou_threshold,
+)
+
+def rewrite_batch_nms_with_max_out_size(
+self, cond, true_branch, data, valid_count, indices, iou_threshold, 
post_nms_topk
+):
+"""Use the detected post NMS topk parameter in NMS op."""
+nms_ret = op.vision.non_max_suppression(
+data=data,
+valid_count=valid_count,
+indices=indices,
+max_output_size=post_nms_topk,
+iou_threshold=iou_threshold,
+force_suppress=False,
+top_k=-1,
+coord_start=2,
+score_index=1,
+id_index=0,
+return_indices=True,
+invalid_to_bottom=False,
+)
+
+size = op.squeeze(nms_ret[1], axis=[1])
+data_slice = op.squeeze(nms_ret[0], axis=[0])
+
+ret = op.strided_slice(data_slice, begin=expr.const([0]), end=size, 
slice_mode="size")
+
+nms_result = op.cast(ret, "int64")
+
+return expr.If(cond, true_branch, nms_result)
+
+def callback(self, pre, post, node_map):
+post_nms_topk = post.attrs.end[0].value
+return self.rewrite_batch_nms_with_max_out_size(
+node_map[self.cond][0],
+node_map[self.true_branch][0],
+node_map[self.data][0],
+node_map[self.valid_count][0],
+node_map[self.indices][0],
+node_map[self.iou_threshold][0],
+post_nms_topk,
+)
+
+
+def scatter_roi_align_result_pattern(levels, roi_align_results, num_scales):
+"""Detect the Relay subgraph corresponding to the following PyTorch code
+
+first_result = roi_align_results[0]
+dtype, device = first_result.dtype, first_result.device
+res = torch.zeros((levels.size(0), first_result.size(1),
+   first_result.size(2), first_result.size(3)),
+  dtype=dtype, device=device)
+for level in range(len(roi_align_results)):
+index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+index = index.expand(index.size(0),
+ roi_align_results[level].size(1),
+ roi_align_results[level].size(2),
+ roi_align_results[level].size(3))
+res = res.scatter(0, index, roi_align_results[level])
+return res
+"""
+
+def do_where(levels, _):
+idx_in_level = is_op("argwhere")(is_op("equal")(levels, is_constant()))
+idx_in_level = is_op("split")(idx_in_level)
+idx_in_level = is_tuple_get_item(idx_in_level, 0)
+idx_in_level = is_op("squeeze")(idx_in_level)
+idx_in_level = is_tuple_get_item(is_tuple([idx_in_level]), 0)
+return idx_in_level
+
+scatter_res = wildcard()
+
+for i in range(num_scales):
+# index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+scatter_indices = do_where(levels, i)
+scatter_indices = is_op("reshape")(scatter_indices)
+
+# index = index.expand(index.size(0),

Review comment:
   Actually this is the equivalent PyTorch code to explain what the pattern does :)









[GitHub] [tvm] jwfromm commented on a change in pull request #7354: [Relay] Fold If when the Condition is Constant

2021-01-27 Thread GitBox


jwfromm commented on a change in pull request #7354:
URL: https://github.com/apache/tvm/pull/7354#discussion_r565696211



##
File path: src/relay/transforms/fold_constant.cc
##
@@ -120,6 +120,18 @@ class ConstantFolder : public MixedModeMutator {
 }
   }
 
+  Expr VisitExpr_(const IfNode* op) final {
+auto new_cond = ExprMutator::VisitExpr(op->cond);
+if (auto const_cond = new_cond.as<ConstantNode>()) {
+  if (reinterpret_cast<uint8_t*>(const_cond->data->data)[0]) {

Review comment:
   why not use a bool datatype for the cast?









[GitHub] [tvm] zhiics commented on a change in pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


zhiics commented on a change in pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#discussion_r565693813



##
File path: python/tvm/relay/frontend/pytorch_utils.py
##
@@ -169,10 +218,193 @@ def callback(self, pre, post, node_map):
 return self.convert_batched_nms(boxes, scores, idxs, iou_thres, 
num_boxes, indices)
 
 
+class PostNMSTopKRewrite(DFPatternCallback):
+"""A callback to rewrite nms to exploit max_out_size parameter."""
+
+def __init__(self):
+super().__init__()
+self.cond = wildcard()
+self.true_branch = wildcard()
+self.data = wildcard()
+self.valid_count = wildcard()
+self.indices = wildcard()
+self.iou_threshold = wildcard()
+
+self.pattern = topk_after_batch_nms_pattern(
+self.cond,
+self.true_branch,
+self.data,
+self.valid_count,
+self.indices,
+self.iou_threshold,
+)
+
+def rewrite_batch_nms_with_max_out_size(
+self, cond, true_branch, data, valid_count, indices, iou_threshold, 
post_nms_topk
+):
+"""Use the detected post NMS topk parameter in NMS op."""
+nms_ret = op.vision.non_max_suppression(
+data=data,
+valid_count=valid_count,
+indices=indices,
+max_output_size=post_nms_topk,
+iou_threshold=iou_threshold,
+force_suppress=False,
+top_k=-1,
+coord_start=2,
+score_index=1,
+id_index=0,
+return_indices=True,
+invalid_to_bottom=False,
+)
+
+size = op.squeeze(nms_ret[1], axis=[1])
+data_slice = op.squeeze(nms_ret[0], axis=[0])
+
+ret = op.strided_slice(data_slice, begin=expr.const([0]), end=size, 
slice_mode="size")
+
+nms_result = op.cast(ret, "int64")
+
+return expr.If(cond, true_branch, nms_result)
+
+def callback(self, pre, post, node_map):
+post_nms_topk = post.attrs.end[0].value
+return self.rewrite_batch_nms_with_max_out_size(
+node_map[self.cond][0],
+node_map[self.true_branch][0],
+node_map[self.data][0],
+node_map[self.valid_count][0],
+node_map[self.indices][0],
+node_map[self.iou_threshold][0],
+post_nms_topk,
+)
+
+
+def scatter_roi_align_result_pattern(levels, roi_align_results, num_scales):
+"""Detect the Relay subgraph corresponding to the following PyTorch code
+
+first_result = roi_align_results[0]
+dtype, device = first_result.dtype, first_result.device
+res = torch.zeros((levels.size(0), first_result.size(1),
+   first_result.size(2), first_result.size(3)),
+  dtype=dtype, device=device)
+for level in range(len(roi_align_results)):
+index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+index = index.expand(index.size(0),
+ roi_align_results[level].size(1),
+ roi_align_results[level].size(2),
+ roi_align_results[level].size(3))
+res = res.scatter(0, index, roi_align_results[level])
+return res
+"""
+
+def do_where(levels, _):
+idx_in_level = is_op("argwhere")(is_op("equal")(levels, is_constant()))
+idx_in_level = is_op("split")(idx_in_level)
+idx_in_level = is_tuple_get_item(idx_in_level, 0)
+idx_in_level = is_op("squeeze")(idx_in_level)
+idx_in_level = is_tuple_get_item(is_tuple([idx_in_level]), 0)
+return idx_in_level
+
+scatter_res = wildcard()
+
+for i in range(num_scales):
+# index = torch.where(levels == level)[0].view(-1, 1, 1, 1)
+scatter_indices = do_where(levels, i)
+scatter_indices = is_op("reshape")(scatter_indices)
+
+# index = index.expand(index.size(0),

Review comment:
   remove?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-27 Thread GitBox


comaniac commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-768632418


   Thanks @merrymercy 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [AutoScheduler] Enable schedule sharing in dispatch context (#7344)

2021-01-27 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new fd39122  [AutoScheduler] Enable schedule sharing in dispatch context 
(#7344)
fd39122 is described below

commit fd391223c19bec454f488f8a976a0766fadb0db3
Author: Cody Yu 
AuthorDate: Wed Jan 27 14:54:43 2021 -0800

[AutoScheduler] Enable schedule sharing in dispatch context (#7344)

* [AutoScheduler] Enable schedule sharing in dispatch context

* Update python/tvm/auto_scheduler/dispatcher.py
---
 python/tvm/auto_scheduler/dispatcher.py| 135 -
 python/tvm/auto_scheduler/measure_record.py|  65 +-
 python/tvm/auto_scheduler/utils.py |  65 +-
 .../python/unittest/test_auto_scheduler_measure.py |  18 +--
 4 files changed, 178 insertions(+), 105 deletions(-)

diff --git a/python/tvm/auto_scheduler/dispatcher.py 
b/python/tvm/auto_scheduler/dispatcher.py
index b0b98d8..f2d7536 100644
--- a/python/tvm/auto_scheduler/dispatcher.py
+++ b/python/tvm/auto_scheduler/dispatcher.py
@@ -30,6 +30,7 @@ import numpy as np
 
 from tvm.tir.expr import FloatImm
 from .measure_record import load_records
+from .utils import calc_workload_dis_factor, decode_workload_key
 
 logger = logging.getLogger("auto_scheduler")
 
@@ -126,18 +127,53 @@ class ApplyHistoryBest(DispatchContext):
 If is str, then it should be the filename of a records log file.
 Each row of this file is an encoded record pair. Otherwise, it is an 
iterator.
 n_lines: Optional[int]
-if it is not None, only load the first `n_lines` lines of log
+if it is not None, only load the first `n_lines` lines of log.
+include_compatible: bool
+When set to True, compatible records will also be considered.
 """
 
-def __init__(self, records, n_lines=None):
+def __init__(self, records, n_lines=None, include_compatible=False):
 super(ApplyHistoryBest, self).__init__()
+self.include_compatible = include_compatible
 
+# Dict[str (target key),
+#   Dict[str (workload hash),
+# Dict[tuple (workload args), tuple (State, cost)]]]
 self.best_by_targetkey = {}
 self.best_by_model = {}
 self._best_user_defined = {}
 
 self.load(records, n_lines)
 
+@staticmethod
+def get_workload_entry(best_records, target_key, workload_key):
+"""Get the entry of the target key and workload key hash in the given 
best record map.
+
+Parameters
+--
+best_records: Dict[str, Dict[str, Dict[str, Any]]]
+The best record map.
+target_key: str
+The first key to the best_records.
+workload_key: str
+The workload key that can be decoded to workload hash and args.
+
+Returns
+---
+entry: Dict[str, Any]
+The entry in best_records with target key and workload hash.
+workload_hash: str
+The workload hash decoded from workload_key.
+workload_args: Tuple[Any, ...]
+The hashable tuple of workload args decoded from workload_key.
+"""
+workload_hash, workload_args = decode_workload_key(workload_key)
+if target_key not in best_records:
+best_records[target_key] = {}
+if workload_hash not in best_records[target_key]:
+best_records[target_key][workload_hash] = {}
+return best_records[target_key][workload_hash], workload_hash, 
workload_args
+
 def load(self, records, n_lines=None):
 """Load records to this dispatch context
 
@@ -171,29 +207,32 @@ class ApplyHistoryBest(DispatchContext):
 if res.error_no != 0:
 continue
 
+costs = [x.value for x in res.costs if isinstance(x, FloatImm)]
+cost = np.mean(costs)
+
 # use target keys in tvm target system as key to build best map
 for k in inp.task.target.keys:
-key = (k, inp.task.workload_key)
-if key not in best_by_targetkey:
-best_by_targetkey[key] = (inp, res)
+entry, _, workload_args = self.get_workload_entry(
+best_by_targetkey, k, inp.task.workload_key
+)
+if workload_args not in entry:
+entry[workload_args] = (inp.state, cost)
 else:
-_, other_res = best_by_targetkey[key]
-other_costs = [x.value for x in other_res.costs if 
isinstance(x, FloatImm)]
-costs = [x.value for x in res.costs if isinstance(x, 
FloatImm)]
-if np.mean(other_costs) > np.mean(costs):
-best_by_targetkey[key] = (inp, res)
+_, other_cost =
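
For context, a minimal usage sketch of the flag this commit introduces. The log filename is illustrative; `include_compatible` is the new constructor parameter shown in the diff above.

```python
from tvm import auto_scheduler

# Hedged sketch: apply the best tuned schedules, also accepting records
# whose workloads are merely compatible rather than exact matches.
# "records.json" is an illustrative filename.
with auto_scheduler.ApplyHistoryBest("records.json", include_compatible=True):
    pass  # build the model here, e.g. with tvm.relay.build(...)
```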

[GitHub] [tvm] comaniac merged pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-27 Thread GitBox


comaniac merged pull request #7344:
URL: https://github.com/apache/tvm/pull/7344


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart opened a new pull request #7354: Fold If when the Condition is Constant

2021-01-27 Thread GitBox


mbrookhart opened a new pull request #7354:
URL: https://github.com/apache/tvm/pull/7354


   @jroesch @jwfromm 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jwfromm opened a new pull request #7353: [Relay][Frontend][Onnx] Robustify Loop Importer

2021-01-27 Thread GitBox


jwfromm opened a new pull request #7353:
URL: https://github.com/apache/tvm/pull/7353


   Although the loop importer in ONNX works for simple cases, it had some 
issues for loops that output and accumulate tensors. This PR adds handling for 
both scalar and tensor outputs and corresponding tests.
   
   Note that one issue encountered was that an injective schedule was applied 
to concatenate during compilation. Unfortunately, the concat in our loop has 
dynamic shapes, which caused an invalid access in the schedule. This PR also 
includes a check to make sure that the shapes of concat's inputs are static 
before trying to vectorize.
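   
   For illustration, a hedged sketch of what such a static-shape guard can 
look like; this is not necessarily the exact check added in the PR.
   
   ```python
   from tvm import tir

   # Hedged sketch: a shape is "static" only if no dimension is a
   # tir.Any placeholder (i.e. a dynamic dimension).
   def is_static_shape(shape):
       return all(not isinstance(dim, tir.Any) for dim in shape)

   # Vectorize only when every concat input has a fully static shape.
   def can_vectorize(concat_inputs):
       return all(is_static_shape(t.shape) for t in concat_inputs)
   ```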



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m edited a comment on pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


trevor-m edited a comment on pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#issuecomment-768611717


   > No, the `If` there is for guarding against the case where there is no 
boxes, see
   > 
   > 
https://github.com/pytorch/vision/blob/8ebfd2f5d5f1792ce2cf5a2329320f604530a68e/torchvision/ops/boxes.py#L78-L79
   > 
   > So applying topk to an empty tensor is a nop anyway.
   
   Got it, thanks! I guess the pattern does not guarantee that the true branch 
is for that 0 box case, but since this rewrite is only meant to be used for 
this particular model it is fine.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m commented on pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


trevor-m commented on pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#issuecomment-768611717


   > No, the `If` there is for guarding against the case where there is no 
boxes, see
   > 
   > 
https://github.com/pytorch/vision/blob/8ebfd2f5d5f1792ce2cf5a2329320f604530a68e/torchvision/ops/boxes.py#L78-L79
   > 
   > So applying topk to an empty tensor is a nop anyway.
   
   Got it, thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-27 Thread GitBox


masahi commented on pull request #7346:
URL: https://github.com/apache/tvm/pull/7346#issuecomment-768608594


   No, the `If` there is for guarding against the case where there is no boxes, 
see 
   
   
https://github.com/pytorch/vision/blob/8ebfd2f5d5f1792ce2cf5a2329320f604530a68e/torchvision/ops/boxes.py#L78-L79
   
   So applying topk to an empty tensor is a nop anyway.
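   
   A tiny sketch of the empty-tensor case being discussed (illustrative, 
runnable with PyTorch):
   
   ```python
   import torch

   # topk on zero boxes is a no-op: it returns empty value/index tensors.
   scores = torch.empty(0)
   values, indices = scores.topk(0)
   assert values.numel() == 0 and indices.numel() == 0
   ```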



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m commented on pull request #7352: [COMMUNITY] @trevor-m -> reviewer

2021-01-27 Thread GitBox


trevor-m commented on pull request #7352:
URL: https://github.com/apache/tvm/pull/7352#issuecomment-768602884


   Thank you! 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen opened a new pull request #7352: [COMMUNITY] @trevor-m -> reviewer

2021-01-27 Thread GitBox


tqchen opened a new pull request #7352:
URL: https://github.com/apache/tvm/pull/7352


   Dear community:
   
   Please join us to welcome @trevor-m as a new reviewer. He has made multiple 
contributions to BYOC, in particular the TensorRT support.
   
   - [Commits History](https://github.com/apache/tvm/commits?author=trevor-m)
   - [Code 
Review](https://github.com/apache/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by:trevor-m)
   - [Community Forum 
Summary](https://discuss.tvm.apache.org/u/trevor-m/summary)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m commented on a change in pull request #7162: Fix Segmentation Fault For Tensorrt BYOC when TVM_TENSORRT_CACHE_DIR is Set

2021-01-27 Thread GitBox


trevor-m commented on a change in pull request #7162:
URL: https://github.com/apache/tvm/pull/7162#discussion_r565629078



##
File path: src/runtime/contrib/tensorrt/tensorrt_ops.cc
##
@@ -921,7 +921,7 @@ class ReshapeOpConverter : public TensorRTOpConverter {
 
   void Convert(TensorRTOpConverterParams* params) const {
 auto input = params->inputs.at(0).tensor;
-ICHECK_EQ(std::stoi(params->node.GetAttr<std::vector<std::string>>("reverse")[0]), false);
+//ICHECK_EQ(std::stoi(params->node.GetAttr<std::vector<std::string>>("reverse")[0]), false);

Review comment:
   Please rebase so you don't need to comment out this line.

##
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##
@@ -83,8 +83,8 @@ class TensorRTRuntime : public JSONRuntimeBase {
 ICHECK_EQ(consts.size(), const_idx_.size())
 << "The number of input constants must match the number of required.";
 LoadGlobalAttributes();
-if (GetCachedEnginesFromDisk()) return;
 SetupConstants(consts);
+if (GetCachedEnginesFromDisk()) return;

Review comment:
   Since `GetCachedEnginesFromDisk` is now at the end of the function, we 
dont need the `if` and `return`.

##
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##
@@ -178,7 +178,14 @@ class TensorRTRuntime : public JSONRuntimeBase {
*/
   void BuildEngine() {
 batch_size_ = data_entry_[input_var_eid_[0]]->shape[0];
-if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size_))) 
return;
+if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size_))) {
+  TensorRTEngineAndContext& engine_and_context =
+  trt_engine_cache_.at(std::make_pair(symbol_name_, batch_size_));
+  size_t binding_num = engine_and_context.engine->getNbBindings();
+  if (engine_and_context.device_buffers.size() == binding_num) {

Review comment:
   This could be `!engine_and_context.device_buffers.empty()` instead, it 
maybe communicates the purpose of this check better.

##
File path: src/runtime/contrib/tensorrt/tensorrt_builder.cc
##
@@ -185,6 +185,17 @@ TensorRTEngineAndContext TensorRTBuilder::BuildEngine() {
   return {engine, context, network_input_names_, network_output_names_, 
device_buffers};
 }
 
+void TensorRTBuilder::CreateDeviceBuffers(TensorRTEngineAndContext* 
engine_and_context) {

Review comment:
   The code in this function is a duplicate of the code in `BuildEngine()` 
- can you call this new function from BuildEngine to avoid the duplication?

##
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##
@@ -211,6 +218,16 @@ class TensorRTRuntime : public JSONRuntimeBase {
   builder.AddOutput(outputs_[i], EntryID(outputs_[i]));
 }
 
+// Allocate Device Buffers
+if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size_))) {
+  TensorRTEngineAndContext& engine_and_context =
+  trt_engine_cache_.at(std::make_pair(symbol_name_, batch_size_));
+  if (engine_and_context.device_buffers.size() == 0) {
+builder.CreateDeviceBuffers(&engine_and_context);
+return;

Review comment:
   We also shouldnt have to rebuild the whole nextwork just to allocate the 
buffers.

##
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##
@@ -211,6 +218,16 @@ class TensorRTRuntime : public JSONRuntimeBase {
   builder.AddOutput(outputs_[i], EntryID(outputs_[i]));
 }
 
+// Allocate Device Buffers
+if (trt_engine_cache_.count(std::make_pair(symbol_name_, batch_size_))) {
+  TensorRTEngineAndContext& engine_and_context =
+  trt_engine_cache_.at(std::make_pair(symbol_name_, batch_size_));
+  if (engine_and_context.device_buffers.size() == 0) {
+builder.CreateDeviceBuffers(&engine_and_context);
+return;

Review comment:
   We are building the TRT network in the TensorRTBuilder, but exiting 
before `BuildEngine` is called. This means the resources used by `builder` 
won't ever be freed (`TensorRTBuilder::CleanUp()`) needs to be called.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch ci-docker-staging updated (fa066e6 -> 0c94604)

2021-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from fa066e6  Put standalone_crt in correct Jenkinsfile stash bundle
 add 0c94604  include build prefix

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [tvm] masahi commented on pull request #7348: [Torch] Various updates for PyTorch frontend

2021-01-27 Thread GitBox


masahi commented on pull request #7348:
URL: https://github.com/apache/tvm/pull/7348#issuecomment-768523120


   Thanks @siju-samuel @t-vi 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Torch] Various updates for PyTorch frontend (#7348)

2021-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 59e0a4a  [Torch] Various updates for PyTorch frontend   (#7348)
59e0a4a is described below

commit 59e0a4a46461b1a90bc24660cf25e08cfcfb7a1f
Author: masahi 
AuthorDate: Thu Jan 28 04:30:08 2021 +0900

[Torch] Various updates for PyTorch frontend   (#7348)

* add conversion for detr

* remove explicit broadcast_to before batched matmul

* use take with wrap mode

* add test for transformer and negative indices

* add sort and argsort

* add logical_and

* support masked_select

* add gpu targets to masked_select test

* improve sort conversion
---
 python/tvm/relay/frontend/pytorch.py  |  63 
 tests/python/frontend/pytorch/test_forward.py | 101 +-
 2 files changed, 150 insertions(+), 14 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
index 991e3a8..68e68fd 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -399,10 +399,7 @@ class PyTorchOpConverter:
 begin = [0] * ndim
 dim = int(inputs[1])
 stride = int(inputs[4])
-if isinstance(inputs[2], _expr.Call):
-begin[dim], _ = try_infer_value(inputs[2], lambda ret: 
np.asscalar(ret.astype(np.int)))
-else:
-begin[dim] = int(inputs[2])
+begin[dim], _ = try_infer_value(inputs[2], lambda ret: 
np.asscalar(ret.astype(np.int)))
 
 # Process begin
 if not isinstance(begin[dim], int):
@@ -518,13 +515,13 @@ class PyTorchOpConverter:
 data = inputs[0]
 dim = int(inputs[1])
 index = _wrap_const(inputs[2])
-return _op.transform.take(data, index, axis=dim)
+return _op.transform.take(data, index, axis=dim, mode="wrap")
 
 def take(self, inputs, input_types):
 data = inputs[0]
 indices = _op.cast(inputs[1], "int32")
 
-return _op.transform.take(data, indices=indices)
+return _op.transform.take(data, indices=indices, mode="wrap")
 
 def topk(self, inputs, input_types):
 data = inputs[0]
@@ -551,7 +548,13 @@ class PyTorchOpConverter:
 
 def repeat(self, inputs, input_types):
 data = inputs[0]
-reps = inputs[1]
+reps = []
+for r in inputs[1]:
+if isinstance(r, int):
+reps.append(r)
+else:
+reps.append(int(_infer_value(r, {}).asnumpy()))
+
 return _op.transform.tile(data, reps=reps)
 
 def repeat_interleave(self, inputs, input_types):
@@ -1520,12 +1523,6 @@ class PyTorchOpConverter:
 # Convert a and b into 3 dimensional tensors.
 a = _op.reshape(inputs_0, [-1, a_shape[-2], a_shape[-1]])
 b = _op.reshape(inputs_1, [-1, b_shape[-2], b_shape[-1]])
-# Broadcast b to match batch size of a
-new_b_shape = list(self.infer_shape_with_prelude(b))
-new_a_shape = self.infer_shape_with_prelude(a)
-if new_a_shape[0] > new_b_shape[0]:
-new_b_shape[0] = new_a_shape[0]
-b = _op.broadcast_to(b, new_b_shape)
 # Transpose matrix dimensions of b.
 b = _op.transpose(b, [0, 2, 1])
 # Perform a batch matmul.
@@ -2070,6 +2067,40 @@ class PyTorchOpConverter:
 src = inputs[3]
 return _op.scatter_add(data, index, src, axis=axis)
 
+def cumsum(self, inputs, input_types):
+data = inputs[0]
+dim = inputs[1]
+dtype = inputs[2]
+
+if inputs[2] is not None:
+dtype = _convert_dtype_value(inputs[2])
+
+return _op.cumsum(data, axis=dim, dtype=dtype)
+
+def masked_fill(self, inputs, input_types):
+mask = inputs[1]
+value = _op.cast(_wrap_const(inputs[2]), input_types[0])
+return _op.where(mask, value, inputs[0])
+
+def masked_select(self, inputs, input_types):
+mask = inputs[1]
+indices = self.nonzero([mask], input_types, is_numpy_style=True)
+return _op.adv_index([inputs[0]] + [indices[i] for i in 
range(indices.size)])
+
+def sort(self, inputs, input_types):
+data = inputs[0]
+dim = inputs[1]
+is_descending = inputs[2]
+# pytorch sort returns both sorted indices and values
+indices = _op.argsort(data, dim, not is_descending)
+return _op.gather(data, dim, indices), indices
+
+def argsort(self, inputs, input_types):
+data = inputs[0]
+dim = inputs[1]
+is_descending = inputs[2]
+return _op.argsort(data, dim, not is_descending)
+
 def is_floating_point(self, inputs, input_types):
 assert len(inputs) == 1
 
@@ -2263,
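
For context, a small sketch of why the `take` conversions above use 
`mode="wrap"`: PyTorch-style negative indices wrap around to the end of the 
axis (values below are illustrative).

```python
import numpy as np

# mode="wrap" maps index -1 to n - 1, matching PyTorch negative indexing.
x = np.array([10, 20, 30])
assert np.take(x, -1, mode="wrap") == 30
```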

[GitHub] [tvm] masahi merged pull request #7348: [Torch] Various updates for PyTorch frontend

2021-01-27 Thread GitBox


masahi merged pull request #7348:
URL: https://github.com/apache/tvm/pull/7348


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] vegaluisjose commented on pull request #7351: [BYOC][Verilator] change runtime registry function name

2021-01-27 Thread GitBox


vegaluisjose commented on pull request #7351:
URL: https://github.com/apache/tvm/pull/7351#issuecomment-768496534


   I don't think so, it is just that "name change".



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch ci-docker-staging updated (0fd91fb -> fa066e6)

2021-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 0fd91fb  Merge remote-tracking branch 'origin/main' into 
standalone-crt-build-tree
 add fa066e6  Put standalone_crt in correct Jenkinsfile stash bundle

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [tvm] vegaluisjose opened a new pull request #7351: [BYOC][Verilator] change runtime registry function name

2021-01-27 Thread GitBox


vegaluisjose opened a new pull request #7351:
URL: https://github.com/apache/tvm/pull/7351


   I think snake case is more common throughout the codebase than camel case 
for registering functions
   
   @tmoreau89 @liangfu 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


comaniac commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565520801



##
File path: docs/deploy/vitis_ai.rst
##
@@ -541,20 +543,50 @@ TVM.
import tvm
import tvm.relay as relay
from tvm.contrib.target import vitis_ai
-   from tvm.contrib import util, graph_runtime
+   from tvm.contrib import utils, graph_runtime
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.op.contrib.vitis_ai import annotation
 
 After importing a convolutional neural network model using the usual
 Relay API's, annotate the Relay expression for the given Vitis-AI DPU
 target and partition the graph.
 
+.. note::
+
+We recommend switching DPU convolutions' data layouts to NHWC and CPU 
comvolutions'
+data layouts to NCHW for best DPU and CPU performance. You can use the 
ConvertLayout
+transformation pass two times to achieve this as demonstrated in the code 
block
+underneath.
+
 .. code:: python
 
mod["main"] = bind_params_by_name(mod["main"], params)
+   
+   # For edge DPU we recommend switching the convolutions'data layout

Review comment:
   ```suggestion
  # For edge DPU we recommend converting the convolutions' data layout
   ```

##
File path: docs/deploy/vitis_ai.rst
##
@@ -541,20 +543,50 @@ TVM.
import tvm
import tvm.relay as relay
from tvm.contrib.target import vitis_ai
-   from tvm.contrib import util, graph_runtime
+   from tvm.contrib import utils, graph_runtime
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.op.contrib.vitis_ai import annotation
 
 After importing a convolutional neural network model using the usual
 Relay API's, annotate the Relay expression for the given Vitis-AI DPU
 target and partition the graph.
 
+.. note::
+
+We recommend switching DPU convolutions' data layouts to NHWC and CPU 
comvolutions'

Review comment:
   ```suggestion
   We recommend converting DPU convolutions' data layouts to NHWC and CPU 
convolutions'
   ```
   
   In fact, to get the best performance by getting rid of the layout transform 
overheads, we should suggest using NHWC for an entire model. For the Conv2D 
that remains on CPU, we could use auto_scheduler to tune its performance, and 
it could be even better than the tuned Conv2D with NCHW data layout. It might 
be better to mention this in the doc, and point to the auto_scheduler tutorials.

##
File path: python/tvm/contrib/target/vitis_ai.py
##
@@ -132,12 +132,12 @@ def vitis_ai_compiler(ref):
 layers = xgraph.get_layers()
 
 # Get the output tensor names using XGraph and output Relay ids
-out_tensor_names = []
+out_tensor_names = [1] * len(output_relay_ids)

Review comment:
   It would be better to make the type consistent. For example, you could 
use `["unknown_name" for _ in range(len(output_relay_ids))]`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] ZihengJiang commented on pull request #7287: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

2021-01-27 Thread GitBox


ZihengJiang commented on pull request #7287:
URL: https://github.com/apache/tvm/pull/7287#issuecomment-768470133


   Merged. Thanks @tkonolige @altanh 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping (#7287)

2021-01-27 Thread ziheng
This is an automated email from the ASF dual-hosted git repository.

ziheng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new eae21b0  [PRNG] Add check to PRNG to make sure that unsigned integer 
arithmetic is wrapping (#7287)
eae21b0 is described below

commit eae21b087cbde53b99fe40b862be7c99dedc57d0
Author: Tristan Konolige 
AuthorDate: Wed Jan 27 10:05:27 2021 -0800

[PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is 
wrapping (#7287)

* [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is 
wrapping

* Add threefry_test_wrapping: a manual test for wrapping unsigned 
arithmetic.

* fix test to actually run on the target

* formatting

* lint
---
 python/tvm/topi/random/kernel.py   | 62 +-
 tests/python/topi/python/test_topi_prng.py |  8 
 2 files changed, 68 insertions(+), 2 deletions(-)

diff --git a/python/tvm/topi/random/kernel.py b/python/tvm/topi/random/kernel.py
index 576fd92..b21db37 100644
--- a/python/tvm/topi/random/kernel.py
+++ b/python/tvm/topi/random/kernel.py
@@ -17,6 +17,7 @@
 """Pseudorandom number kernels."""
 import tvm
 import tvm.topi
+import numpy as np
 from ... import tir
 from ...tir import ir_builder
 
@@ -135,7 +136,7 @@ def _threefry(
 assert key_buf.dtype == counter_buf.dtype, "threefry key and counter must 
be the same dtype"
 
 def mix(a, b, rotation):
-x = a + b  # TODO should be wrapping
+x = a + b  # wrapping
 y = x ^ ((b << rotation) | (b >> (iwidth - rotation)))
 return [x, y]
 
@@ -167,7 +168,7 @@ def _threefry(
 with irb.for_range(0, out_shape, name="l") as l:  # pylint: 
disable=invalid-name
 for i in range(nrounds // 4):
 for j in range(nwords):
-out_buf[out_offset + l * nwords + j] += key_schedule(i, j)  # 
TODO wrapping
+out_buf[out_offset + l * nwords + j] += key_schedule(i, j)  # 
wrapping
 for k in range(4):
 for j in range(nwords // 2):
 (
@@ -201,6 +202,13 @@ def threefry_generate(gen, out_shape):
 then a new generator is created by applying Threefry to the current key, 
path, and counter.
 This new generator will have a reset counter.
 
+Warning
+---
+Threefry requires that unsigned integer arithmetic wraps on overflow. 
Currently TVM has no
+guarantee of this, so threefry contains an internal assert to check 
wrapping behavior. This
+assert may or may not run depending on your platform, so it is recommended 
you run
+:py:func:`threefry_test_wrapping` to verify wrapping behavior.
+
 Parameters
 --
 gen : Tensor[10, uint64]
@@ -234,6 +242,18 @@ def threefry_generate(gen, out_shape):
 out_gen = irb.buffer_ptr(out_gen_ptr)
 out_array = irb.buffer_ptr(out_array_ptr)
 
+# Check that unsigned arithmetic wraps, as it is required to implement 
threefry correctly.
+irb.emit(
+tvm.tir.AssertStmt(
+tvm.tir.const(0xFFFFFFFFFFFFFFFF, "uint64") + tvm.tir.const(1, "uint64")
+== tvm.tir.const(0, "uint64"),
+tvm.tir.StringImm(
+"Unsigned integer arithmetic is not wrapping, but threefry 
requires wrapping."
+),
+tvm.tir.Evaluate(0),
+)
+)
+
 # Create a temporary array to hold the generator state we will use to 
create the random
 # numbers. We cannot use gen because we may need to update the key + 
path if there is not
 # enough room in the counter.
@@ -408,3 +428,41 @@ def threefry_split(gen):
 name="threefry_split",
 tag="threefry_split",
 )
+
+
+def threefry_test_wrapping(target, ctx):
+"""Test that unsigned arithmetic wraps on overflow.
+
+Parameters
+--
+target : tvm.target.Target
+Target to run against
+ctx : tvm.runtime.TVMContext
+Context to run the test on
+
+Returns
+---
+is_wrapping : bool
+Whether or not unsigned integer arithmetic is wrapping for this 
target, context pair. True
+indicates that threefry will work on this platform.
+"""
+if isinstance(target, str):
+target = tvm.target.Target(target)
+
+def gen_ir(out_ptr):
+irb = ir_builder.create()
+out = irb.buffer_ptr(out_ptr)
+if "gpu" in target.keys:
+thread_x = tvm.te.thread_axis("threadIdx.x")
+irb.scope_attr(thread_x, "thread_extent", 1)
+out[0] = tvm.tir.const(0xFFFFFFFFFFFFFFFF, "uint64") + tvm.tir.const(1, "uint64")
+return irb.get()
+
+out = tvm.tir.decl_buffer((1,), dtype="uint64")
+f = tvm.te.extern(
+[out.shape], [], lambda ins, outs: gen_ir(outs[0]), dtype="uint64", 
out_buffers=

[GitHub] [tvm] ZihengJiang merged pull request #7287: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

2021-01-27 Thread GitBox


ZihengJiang merged pull request #7287:
URL: https://github.com/apache/tvm/pull/7287


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on pull request #7287: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

2021-01-27 Thread GitBox


altanh commented on pull request #7287:
URL: https://github.com/apache/tvm/pull/7287#issuecomment-768467381


   bump @tqchen @ZihengJiang 
   
   I think this is good to go since we resolved the assert discussion



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-27 Thread GitBox


comaniac commented on a change in pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#discussion_r565517145



##
File path: python/tvm/auto_scheduler/dispatcher.py
##
@@ -126,18 +127,53 @@ class ApplyHistoryBest(DispatchContext):
 If is str, then it should be the filename of a records log file.
 Each row of this file is an encoded record pair. Otherwise, it is an 
iterator.
 n_lines: Optional[int]
-if it is not None, only load the first `n_lines` lines of log
+if it is not None, only load the first `n_lines` lines of log.
+include_compatible: bool
+When set to True, compatible records will also be considered.
 """
 
-def __init__(self, records, n_lines=None):
+def __init__(self, records, n_lines=None, include_compatible=False):
 super(ApplyHistoryBest, self).__init__()
+self.include_compatible = include_compatible
 
+# Dict[str (target key),
+#   Dict[str (workload hash),
+# Dict[tuple (workload args), tuple (State, cost)]]]
 self.best_by_targetkey = {}
 self.best_by_model = {}
 self._best_user_defined = {}
 
 self.load(records, n_lines)
 
+@staticmethod
+def get_workload_entry(best_records, target_key, workload_key):
+"""Get the entry of the target key and workload key hash in the given 
best record map.
+
+Parameters
+--
+best_records: Dict[str, Dict[str, Dict[str, Any]]]
+The best record map.
+target_key: str
+The first key to the best_records.
+workload_key: str
+The workload key that can be decoded to workload hash and args.
+
+Returns
+---
+entry: Dict[str, Any]
+The entry in best_records with target key and workload hash.
+workload_hash: str
+The workload hash.

Review comment:
   ```suggestion
   The workload hash decoded from workload_key
   ```

##
File path: python/tvm/auto_scheduler/dispatcher.py
##
@@ -126,18 +127,53 @@ class ApplyHistoryBest(DispatchContext):
 If is str, then it should be the filename of a records log file.
 Each row of this file is an encoded record pair. Otherwise, it is an 
iterator.
 n_lines: Optional[int]
-if it is not None, only load the first `n_lines` lines of log
+if it is not None, only load the first `n_lines` lines of log.
+include_compatible: bool
+When set to True, compatible records will also be considered.
 """
 
-def __init__(self, records, n_lines=None):
+def __init__(self, records, n_lines=None, include_compatible=False):
 super(ApplyHistoryBest, self).__init__()
+self.include_compatible = include_compatible
 
+# Dict[str (target key),
+#   Dict[str (workload hash),
+# Dict[tuple (workload args), tuple (State, cost)]]]
 self.best_by_targetkey = {}
 self.best_by_model = {}
 self._best_user_defined = {}
 
 self.load(records, n_lines)
 
+@staticmethod
+def get_workload_entry(best_records, target_key, workload_key):
+"""Get the entry of the target key and workload key hash in the given 
best record map.
+
+Parameters
+--
+best_records: Dict[str, Dict[str, Dict[str, Any]]]
+The best record map.
+target_key: str
+The first key to the best_records.
+workload_key: str
+The workload key that can be decoded to workload hash and args.
+
+Returns
+---
+entry: Dict[str, Any]
+The entry in best_records with target key and workload hash.
+workload_hash: str
+The workload hash.

Review comment:
   ```suggestion
   The workload hash decoded from workload_key.
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #7349: [uTVM] fix missing memory runtime lib

2021-01-27 Thread GitBox


areusch commented on pull request #7349:
URL: https://github.com/apache/tvm/pull/7349#issuecomment-768456003


   hi @rafzi,
   
   this was actually intentional, though the API may be a bit confusing right 
now--the memory allocator is inefficient when used with the graph runtime on 
constrained devices, so I split it into a separate directory and you can 
optionally link it with `extra_libs` as you've done in this PR. Are you seeing 
test failures other than the regression for this PR?
   
   thanks!
   -andrew



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


leandron commented on pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#issuecomment-768426656


   cc @tqchen as I think it will require a Docker images rebuild



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


leandron commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565440518



##
File path: docker/install/ubuntu_install_vitis_ai_core.sh
##
@@ -22,8 +22,9 @@ set -o pipefail
 
 # install libraries for building Vitis-AI on ubuntu
 apt-get update && apt-get install -y --no-install-recommends \
-graphviz\
-gnupg2
+graphviz \
+gnupg2 \
+gpg-agent
 
 apt-get update && apt-get install -y gcc-aarch64-linux-gnu

Review comment:
   Thanks for the update - Yeah, I agree there is some refactoring that can 
be done on other scripts as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on a change in pull request #7287: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

2021-01-27 Thread GitBox


tkonolige commented on a change in pull request #7287:
URL: https://github.com/apache/tvm/pull/7287#discussion_r565438818



##
File path: python/tvm/topi/random/kernel.py
##
@@ -234,6 +242,18 @@ def gen_ir(gen_ptr, out_gen_ptr, out_array_ptr):
 out_gen = irb.buffer_ptr(out_gen_ptr)
 out_array = irb.buffer_ptr(out_array_ptr)
 
+# Check that unsigned arithmetic wraps, as it is required to implement 
threefry correctly.
+irb.emit(

Review comment:
   That should be fine then.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on pull request #7152: [RUNTIME] Improve error messages for TypedPackedFunc

2021-01-27 Thread GitBox


tkonolige commented on pull request #7152:
URL: https://github.com/apache/tvm/pull/7152#issuecomment-768395015


   Yeah, a type mismatch macro could be useful.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jtuyls commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


jtuyls commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565427643



##
File path: docker/install/ubuntu_install_vitis_ai_core.sh
##
@@ -22,8 +22,9 @@ set -o pipefail
 
 # install libraries for building Vitis-AI on ubuntu
 apt-get update && apt-get install -y --no-install-recommends \
-graphviz\
-gnupg2
+graphviz \
+gnupg2 \
+gpg-agent
 
 apt-get update && apt-get install -y gcc-aarch64-linux-gnu

Review comment:
   Ok, thanks, I added `&& rm -rf /var/lib/apt/lists/*` and reorganized a 
bit. I didn't add your suggestion commit as I had to remove 
`--no-install-recommends` to avoid an issue with aarch64-linux-gnu-gcc 
returning the following when using it for cross compilation
   ```
   /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: 
cannot find crti.o: No such file or directory
   /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: 
cannot find -lc
   /usr/lib/gcc-cross/aarch64-linux-gnu/7/../../../../aarch64-linux-gnu/bin/ld: 
cannot find crtn.o: No such file or directory
   ```
   Ps  `rm -rf /var/lib/apt/lists/*` could be added to the other script files 
too, for example 
https://github.com/apache/tvm/blob/main/docker/install/ubuntu_install_core.sh





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


leandron commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565364641



##
File path: docker/Dockerfile.demo_vitis_ai
##
@@ -18,7 +18,7 @@
 # CI docker VAI env
 FROM xilinx/vitis-ai:latest
 
-RUN apt-get update --fix-missing
+RUN apt-get update --fix-missing && apt-get install -y gpg-agent

Review comment:
   Ack, see comment below.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


leandron commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565364279



##
File path: docker/install/ubuntu_install_vitis_ai_core.sh
##
@@ -22,8 +22,9 @@ set -o pipefail
 
 # install libraries for building Vitis-AI on ubuntu
 apt-get update && apt-get install -y --no-install-recommends \
-graphviz\
-gnupg2
+graphviz \
+gnupg2 \
+gpg-agent
 
 apt-get update && apt-get install -y gcc-aarch64-linux-gnu

Review comment:
   I understand you removed it from the Dockerfile to make it clear, and 
that was a good move. However, due to the way Docker layers work, you'll still 
have leftover files, which over time will bloat your image.
   
   The code below is what I had in mind. _To be clear, it's a suggestion only._ 
Feel free to keep your script, if that makes more sense for you.
   
   ```suggestion
   graphviz \
   gnupg2 \
   gpg-agent \
   gcc-aarch64-linux-gnu \
   && rm -rf /var/lib/apt/lists/*
   ```
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jtuyls commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


jtuyls commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565358680



##
File path: docker/Dockerfile.demo_vitis_ai
##
@@ -18,7 +18,7 @@
 # CI docker VAI env
 FROM xilinx/vitis-ai:latest
 
-RUN apt-get update --fix-missing
+RUN apt-get update --fix-missing && apt-get install -y gpg-agent

Review comment:
   Thanks for the suggestion @leandron. I moved the gpg-agent package 
installation to the install/ubuntu_install_vitis_ai_core.sh script instead to 
clean up the dockerfile.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


leandron commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r565340201



##
File path: docker/Dockerfile.demo_vitis_ai
##
@@ -18,7 +18,7 @@
 # CI docker VAI env
 FROM xilinx/vitis-ai:latest
 
-RUN apt-get update --fix-missing
+RUN apt-get update --fix-missing && apt-get install -y gpg-agent

Review comment:
   To save some space on your image, Docker recommends a little boilerplate 
when using `apt-get`. I suggest adding the recommended `&& rm -rf 
/var/lib/apt/lists/*` at the end here.
   
   You can read about it here: 
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#run





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jtuyls opened a new pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-01-27 Thread GitBox


jtuyls opened a new pull request #7350:
URL: https://github.com/apache/tvm/pull/7350


   Fix an occasional issue in Vitis AI codegen out tensor names matching.
   Small updates to the Vitis AI docs & demo_vitis_ai docker.
   
   @comaniac @zhiics @anilmartha 
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7287: [PRNG] Add check to PRNG to make sure that unsigned integer arithmetic is wrapping

2021-01-27 Thread GitBox


tqchen commented on a change in pull request #7287:
URL: https://github.com/apache/tvm/pull/7287#discussion_r565321690



##
File path: python/tvm/topi/random/kernel.py
##
@@ -234,6 +242,18 @@ def gen_ir(gen_ptr, out_gen_ptr, out_array_ptr):
 out_gen = irb.buffer_ptr(out_gen_ptr)
 out_array = irb.buffer_ptr(out_array_ptr)
 
+# Check that unsigned arithmetic wraps, as it is required to implement 
threefry correctly.
+irb.emit(

Review comment:
   The behavior can be backend dependent. I think right now the assert will 
be omitted.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #7342: [BUILD] Don't add $TVM_HOME/.. to the include path when compiling code

2021-01-27 Thread GitBox


tqchen commented on pull request #7342:
URL: https://github.com/apache/tvm/pull/7342#issuecomment-768290444


   Thanks @tkonolige 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (eeec538 -> 38fa420)

2021-01-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from eeec538  Add resource_handle to both TVM_DLL_EXPORT_TYPED_FUNC and 
TVM_DLL_EXPORT_PACKED_FUNC macros in packed_func.h. This is a patch PR for 
#7388. (#7343)
 add 38fa420  [FIX] Don't add $TVM_HOME/.. to the include path when 
compiling code. (#7342)

No new revisions were added by this update.

Summary of changes:
 python/tvm/_ffi/libinfo.py | 2 --
 1 file changed, 2 deletions(-)



[tvm] branch main updated (1e0d356 -> eeec538)

2021-01-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 1e0d356  [Relay, TOPI] Add numpy style cumsum op (#7334)
 add eeec538  Add resource_handle to both TVM_DLL_EXPORT_TYPED_FUNC and 
TVM_DLL_EXPORT_PACKED_FUNC macros in packed_func.h. This is a patch PR for 
#7388. (#7343)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/packed_func.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[GitHub] [tvm] tqchen merged pull request #7342: [BUILD] Don't add $TVM_HOME/.. to the include path when compiling code

2021-01-27 Thread GitBox


tqchen merged pull request #7342:
URL: https://github.com/apache/tvm/pull/7342


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen merged pull request #7343: Add resource_handle to both TVM_DLL_EXPORT_TYPED_FUNC and TVM_DLL_EXP…

2021-01-27 Thread GitBox


tqchen merged pull request #7343:
URL: https://github.com/apache/tvm/pull/7343


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7348: [Torch] Various updates for PyTorch frontend

2021-01-27 Thread GitBox


masahi commented on pull request #7348:
URL: https://github.com/apache/tvm/pull/7348#issuecomment-768271843


   @t-vi Thanks, I think I tried `gather` before but for some reason I got 
wrong results, so I gave up on `gather`. I tried again now, and it worked :) I 
don't know what I was doing, but I'm happy now.
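   
   For context, a small sketch of the identity the sort conversion relies on 
(values are illustrative):
   
   ```python
   import torch

   # torch.sort returns (values, indices), and the values can be
   # reconstructed from the indices with gather along the same dim.
   x = torch.tensor([[3.0, 1.0, 2.0]])
   values, indices = x.sort(dim=1)
   assert torch.equal(values, x.gather(1, indices))
   ```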



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] rafzi commented on pull request #7349: [uTVM] fix missing memory runtime lib

2021-01-27 Thread GitBox


rafzi commented on pull request #7349:
URL: https://github.com/apache/tvm/pull/7349#issuecomment-768245763


   The test failures are because of: 
https://github.com/apache/tvm/blob/main/tests/python/unittest/test_crt.py#L64
   
   Is uTVM supposed to be used with this extra_libs arg? The call to 
`build_static_runtime` will just fail without this, because it tries to link to 
the memory manager.
   
   Please let me know which way is intended, then I'll update the test.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] rafzi opened a new pull request #7349: [uTVM] fix missing memory runtime lib

2021-01-27 Thread GitBox


rafzi opened a new pull request #7349:
URL: https://github.com/apache/tvm/pull/7349


   memory.c was split off into a separate directory, but it was not added here.
   
   @areusch @tqchen 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi opened a new pull request #7348: [Torch] Various updates for PyTorch frontend

2021-01-27 Thread GitBox


masahi opened a new pull request #7348:
URL: https://github.com/apache/tvm/pull/7348


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] marxin commented on issue #6832: TVM 0.7.0 - Tests fail with `Check failed: reg != nullptr: AttributeError: Operator reshape is not registered`

2021-01-27 Thread GitBox


marxin commented on issue #6832:
URL: https://github.com/apache/tvm/issues/6832#issuecomment-768182846


   Well, I think the problem is the Static Initialization Order Fiasco:
   https://en.cppreference.com/w/cpp/language/siof
   
   which is only exposed by LTO.
   
   ```
   static const Op& with_funcid_op = Op::Get("annotation.with_funcid"); 
   ```
   
   is called before `OpRegistry::Global()` is initialized, if I see it correctly.
   Note that without LTO, whether you get lucky depends on the order of `.o` 
files provided on the linker command line.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] WeberChen edited a comment on issue #1209: [tutorial] UINT8 quantization inference example

2021-01-27 Thread GitBox


WeberChen edited a comment on issue #1209:
URL: https://github.com/apache/tvm/issues/1209#issuecomment-768147138


   Hi @tqchen 
   
   Because I need to import the pre-quantized model directly from TensorFlow, 
rather than from TF-Lite or PyTorch, I need the FakeQuantWithMinMaxVars 
operator.
   
   Therefore, issue 706 on discuss.tvm.ai may not be suitable for my case:
   https://discuss.tvm.apache.org/t/tensorflow-operator-fakequantwithminmaxvars-not-implemented/706
   
   Would you please point me to more resources about FakeQuantWithMinMaxVars?
   
   Or could you tell me how to build/implement the missing op myself?
   
   Thanks
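   
   As a starting point, a hedged sketch of the FakeQuantWithMinMaxVars math 
(following the TF op's documented semantics but omitting TF's min/max 
nudging; this is not the upstream converter):
   
   ```python
   import numpy as np

   # Hedged sketch: quantize to 2**num_bits - 1 levels between qmin and
   # qmax, then dequantize. TF additionally nudges qmin/qmax so that zero
   # is exactly representable; that step is omitted here.
   def fake_quant(x, qmin, qmax, num_bits=8):
       levels = 2 ** num_bits - 1
       scale = (qmax - qmin) / levels
       q = np.round((np.clip(x, qmin, qmax) - qmin) / scale)
       return q * scale + qmin
   ```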



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] WeberChen edited a comment on issue #1209: [tutorial] UINT8 quantization inference example

2021-01-27 Thread GitBox


WeberChen edited a comment on issue #1209:
URL: https://github.com/apache/tvm/issues/1209#issuecomment-768147138


   Hi @tqchen 
   
   Because I need to import the pre-quantized model directly from TensorFlow, 
rather than from TF-Lite or PyTorch, I need the FakeQuantWithMinMaxVars 
operator for the following steps.
   
   Therefore, issue 706 on discuss.tvm.ai may not be suitable for my case:
   https://discuss.tvm.apache.org/t/tensorflow-operator-fakequantwithminmaxvars-not-implemented/706
   
   Would you please point me to more resources about FakeQuantWithMinMaxVars?
   
   Or could you tell me how to build/implement the missing op myself?
   
   Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] WeberChen commented on issue #1209: [tutorial] UINT8 quantization inference example

2021-01-27 Thread GitBox


WeberChen commented on issue #1209:
URL: https://github.com/apache/tvm/issues/1209#issuecomment-768147138


   Hi @tqchen 
   
   Because I need to import the pre-quantized model directly from TensorFlow, 
not from TF-Lite, I need the FakeQuantWithMinMaxVars operator for the 
following steps.
   
   Therefore, issue 706 on discuss.tvm.ai may not be suitable for my case:
   https://discuss.tvm.apache.org/t/tensorflow-operator-fakequantwithminmaxvars-not-implemented/706
   
   Would you please point me to more resources about FakeQuantWithMinMaxVars?
   
   Or could you tell me how to build/implement the missing op myself?
   
   Thanks



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] domin1985 opened a new pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-01-27 Thread GitBox


domin1985 opened a new pull request #7347:
URL: https://github.com/apache/tvm/pull/7347


   The Relay parser does not support restoring the attrs value when the call 
is a non-OpNode call.
   
   To avoid too much modification to the native code, the Relay printer only 
prints out the attrs type key of a non-Operator Call; the Relay parser then 
reconstructs the attrs object after parsing this attrs_type_key value.
   
   @jroesch please review.
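   
   For context, a hedged round-trip sketch of the behavior this PR targets 
(the module contents are illustrative, not the PR's test case):
   
   ```python
   import tvm
   from tvm import relay

   # Print a module to Relay text and parse it back; with this change,
   # attrs of non-Operator calls should survive the round trip.
   mod = tvm.IRModule.from_expr(relay.Function([], relay.const(0)))
   text = mod.astext()
   reparsed = tvm.parser.parse(text)
   ```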



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org