[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for indexing variables

2020-03-18 Thread GitBox
yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for 
indexing variables
URL: https://github.com/apache/incubator-tvm/pull/5092#discussion_r394803334
 
 

 ##
 File path: src/tir/pass/rewrite_datatype.cc
 ##
 @@ -0,0 +1,322 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rewrite_datatype.cc
+ */
+
+#include 
+#include 
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+using arith::ConstIntBound;
+
+class DataTypeRewriter;
+
+class DataTypeVisitor final : public StmtExprVisitor {
+ public:
+  void VisitExpr(const PrimExpr& e) {
+    if (e.dtype().is_int()) {
+      int bits = 64;
+      if (e.dtype().bits() <= 32 ||
+          analyzer_.CanProve(e <= max_value(DataType::Int(32)) &&
+                             e >= min_value(DataType::Int(32)))) {
+        bits = 32;
+      }
+      int tmp = bits_;
+      bits_ = bits > bits_ ? bits : bits_;
+      StmtExprVisitor::VisitExpr(e);
+      bits_ = tmp;
+    } else {
+      StmtExprVisitor::VisitExpr(e);
+    }
+  }
+
+  void VisitStmt_(const ForNode* op) {
+    analyzer_.Bind(op->loop_var,
+                   Range::make_by_min_extent(op->min, op->extent));
+    vset_.insert(op->loop_var.as<VarNode>());
+    vextent_[op->loop_var.as<VarNode>()] = op->extent.dtype();
+    return StmtExprVisitor::VisitStmt_(op);
+  }
+
+  void VisitStmt_(const AttrStmtNode* op) {
+    if (op->attr_key == attr::thread_extent ||
+        op->attr_key == attr::virtual_thread) {
+      IterVar iv = Downcast<IterVar>(op->node);
+      CHECK_NE(iv->thread_tag.length(), 0U);
+      analyzer_.Bind(iv->var,
+                     Range::make_by_min_extent(0, op->value));
+      vset_.insert(iv->var.as<VarNode>());
+      vextent_[iv->var.as<VarNode>()] = op->value.dtype();
+      StmtExprVisitor::VisitStmt_(op);
+    } else {
+      StmtExprVisitor::VisitStmt_(op);
+    }
+  }
+
+  void VisitExpr_(const ReduceNode* op) {
+    // Setup the domain information before simplification.
+    for (const IterVar& iv : op->axis) {
+      analyzer_.Bind(iv->var, iv->dom);
+      vset_.insert(iv->var.as<VarNode>());
+      vextent_[iv->var.as<VarNode>()] = iv->dom->extent.dtype();
+    }
+    // Recursively call simplification when necessary.
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const VarNode* op) {
+    if (vset_.find(op) != vset_.end()) {
+      int bits = std::min(vextent_[op].bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const IntImmNode* op) {
+    if (op->dtype.is_int()) {
+      int bits = std::min(op->dtype.bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const CastNode* op) {
+    if (op->dtype.is_int()) {
+      int bits = std::min(op->dtype.bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  // the narrowed datatype of Var and IntImm
+  std::unordered_map vmap;
+
+ protected:
+  // internal analyzer
+  arith::Analyzer analyzer_;
+
+ private:
+  // the maximum bits of all containing expressions
+  int bits_;
+  // the vars to be rewritten
+  std::unordered_set vset_;
 
 Review comment:
   is it necessary? can we simply check `vextent_`?




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for indexing variables

2020-03-18 Thread GitBox
yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for 
indexing variables
URL: https://github.com/apache/incubator-tvm/pull/5092#discussion_r394794367
 
 

 ##
 File path: src/tir/pass/rewrite_datatype.cc
 ##
 @@ -0,0 +1,322 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rewrite_datatype.cc
 
 Review comment:
   briefly describe what this file does.




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for indexing variables

2020-03-18 Thread GitBox
yzhliu commented on a change in pull request #5092: [PASS] dtype rewrite for 
indexing variables
URL: https://github.com/apache/incubator-tvm/pull/5092#discussion_r394812364
 
 

 ##
 File path: src/tir/pass/rewrite_datatype.cc
 ##
 @@ -0,0 +1,322 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rewrite_datatype.cc
+ */
+
+#include 
+#include 
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+using arith::ConstIntBound;
+
+class DataTypeRewriter;
+
+class DataTypeVisitor final : public StmtExprVisitor {
+ public:
+  void VisitExpr(const PrimExpr& e) {
+    if (e.dtype().is_int()) {
+      int bits = 64;
+      if (e.dtype().bits() <= 32 ||
+          analyzer_.CanProve(e <= max_value(DataType::Int(32)) &&
+                             e >= min_value(DataType::Int(32)))) {
+        bits = 32;
+      }
+      int tmp = bits_;
+      bits_ = bits > bits_ ? bits : bits_;
+      StmtExprVisitor::VisitExpr(e);
+      bits_ = tmp;
+    } else {
+      StmtExprVisitor::VisitExpr(e);
+    }
+  }
+
+  void VisitStmt_(const ForNode* op) {
+    analyzer_.Bind(op->loop_var,
+                   Range::make_by_min_extent(op->min, op->extent));
+    vset_.insert(op->loop_var.as<VarNode>());
+    vextent_[op->loop_var.as<VarNode>()] = op->extent.dtype();
+    return StmtExprVisitor::VisitStmt_(op);
+  }
+
+  void VisitStmt_(const AttrStmtNode* op) {
+    if (op->attr_key == attr::thread_extent ||
+        op->attr_key == attr::virtual_thread) {
+      IterVar iv = Downcast<IterVar>(op->node);
+      CHECK_NE(iv->thread_tag.length(), 0U);
+      analyzer_.Bind(iv->var,
+                     Range::make_by_min_extent(0, op->value));
+      vset_.insert(iv->var.as<VarNode>());
+      vextent_[iv->var.as<VarNode>()] = op->value.dtype();
+      StmtExprVisitor::VisitStmt_(op);
+    } else {
+      StmtExprVisitor::VisitStmt_(op);
+    }
+  }
+
+  void VisitExpr_(const ReduceNode* op) {
+    // Setup the domain information before simplification.
+    for (const IterVar& iv : op->axis) {
+      analyzer_.Bind(iv->var, iv->dom);
+      vset_.insert(iv->var.as<VarNode>());
+      vextent_[iv->var.as<VarNode>()] = iv->dom->extent.dtype();
+    }
+    // Recursively call simplification when necessary.
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const VarNode* op) {
+    if (vset_.find(op) != vset_.end()) {
+      int bits = std::min(vextent_[op].bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const IntImmNode* op) {
+    if (op->dtype.is_int()) {
+      int bits = std::min(op->dtype.bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  void VisitExpr_(const CastNode* op) {
+    if (op->dtype.is_int()) {
+      int bits = std::min(op->dtype.bits(), bits_);
+      if (vmap.find(op) == vmap.end()) {
+        vmap[op] = op->dtype.with_bits(bits);
+      } else {
+        vmap[op] = op->dtype.with_bits(std::max(vmap[op].bits(), bits));
+      }
+    }
+    StmtExprVisitor::VisitExpr_(op);
+  }
+
+  // the narrowed datatype of Var and IntImm
+  std::unordered_map vmap;
+
+ protected:
+  // internal analyzer
+  arith::Analyzer analyzer_;
+
+ private:
+  // the maximum bits of all containing expressions
 
 Review comment:
   to be more precise, this is the maximum bits of the current expression's 
return dtype, right?




[GitHub] [incubator-tvm] icemelon9 commented on issue #5078: [DOC] Add doc for Relay op strategy

2020-03-18 Thread GitBox
icemelon9 commented on issue #5078: [DOC] Add doc for Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/5078#issuecomment-601002936
 
 
   made an update. @comaniac @tmoreau89 could you take a look again?




[GitHub] [incubator-tvm] zhiics edited a comment on issue #5068: [RELAY] Update the Python FFI Convention

2020-03-18 Thread GitBox
zhiics edited a comment on issue #5068: [RELAY] Update the Python FFI Convention
URL: https://github.com/apache/incubator-tvm/issues/5068#issuecomment-600759034
 
 
   @tqchen for docs, should we keep the current docs under python/relay, e.g. 
expr.rst and base.rst etc.? I think we should remove them. But what should we 
have instead to represent the relay namespace in Python?
   
   Do we want to have relay.rst? If so, I am not sure where we should put it. 
It seems weird to put it either under docs/api/python/relay or 
docs/api/python. Or do we want to have ir.rst under docs/api/python/relay to 
represent relay.rst? That does not look good either.




[incubator-tvm] branch master updated: [ConvertLayout] Support QNN ops. (#5066)

2020-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 38118be  [ConvertLayout] Support QNN ops. (#5066)
38118be is described below

commit 38118befc0a7e8a3db87d652b30a9369abb60363
Author: Animesh Jain 
AuthorDate: Wed Mar 18 20:03:56 2020 -0700

[ConvertLayout] Support QNN ops. (#5066)

* [ConvertLayout] Support QNN ops.

* Changing layouts to C.

* Fixing dilation.

* Empty commit.

Co-authored-by: Ubuntu 
---
 python/tvm/relay/op/nn/_nn.py |  10 +-
 python/tvm/relay/qnn/op/__init__.py   |   2 +-
 python/tvm/relay/qnn/op/layout_conversions.py |  53 ++
 src/relay/op/nn/bitserial.cc  |   2 +-
 src/relay/op/nn/convolution.cc|  19 +-
 src/relay/op/nn/convolution.h |  15 ++
 src/relay/op/nn/nn.cc |  12 +-
 src/relay/op/nn/pad.cc|   2 +-
 src/relay/op/nn/pooling.cc|   2 +-
 src/relay/op/nn/upsampling.cc |   2 +-
 src/relay/op/tensor/reduce.cc |   7 +-
 src/relay/op/tensor/transform.cc  |  57 +-
 src/relay/op/tensor/transform.h   |  58 ++
 src/relay/qnn/op/add.cc   |  21 ++-
 src/relay/qnn/op/concatenate.cc   |  41 +++-
 src/relay/qnn/op/convolution.cc   |  20 +-
 src/relay/qnn/op/requantize.cc|  77 +++-
 src/relay/transforms/infer_layout_util.h  |  17 +-
 src/relay/transforms/transform_layout.h   |  16 +-
 tests/python/relay/test_pass_convert_op_layout.py | 217 ++
 20 files changed, 544 insertions(+), 106 deletions(-)

diff --git a/python/tvm/relay/op/nn/_nn.py b/python/tvm/relay/op/nn/_nn.py
index c2fe6d0..a9bd900 100644
--- a/python/tvm/relay/op/nn/_nn.py
+++ b/python/tvm/relay/op/nn/_nn.py
@@ -138,8 +138,6 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 """
 # pylint: disable=import-outside-toplevel
 from tvm import relay
-data_layout = attrs['data_layout']
-kernel_layout = attrs['kernel_layout']
 data, weight = inputs
 assert desired_layout == 'NCHW', \
 "Currently only transformation to NCHW layout is supported."
@@ -147,13 +145,7 @@ def convert_conv2d(attrs, inputs, tinfos, desired_layout):
 new_attrs = dict(attrs)
 new_attrs['data_layout'] = desired_layout
 new_attrs['kernel_layout'] = 'OIHW'
-
-if data_layout == 'NHWC' and kernel_layout == 'HWIO':
-# Convert (NHWC, HWIO) to (NCHW, OIHW)
-return relay.nn.conv2d(data, weight, **new_attrs)
-if data_layout == 'NHWC' and kernel_layout == 'HWOI':
-# Convert (NHWC, HWOI) to (NCHW, OIHW). Depthwise conv2d.
-return relay.nn.conv2d(data, weight, **new_attrs)
+return relay.nn.conv2d(data, weight, **new_attrs)
 return None
 
 
diff --git a/python/tvm/relay/qnn/op/__init__.py 
b/python/tvm/relay/qnn/op/__init__.py
index 042dcb9..6d66e12 100644
--- a/python/tvm/relay/qnn/op/__init__.py
+++ b/python/tvm/relay/qnn/op/__init__.py
@@ -19,4 +19,4 @@
 from __future__ import absolute_import as _abs
 from .qnn import *
 from .op import register_qnn_legalize
-from . import legalizations
+from . import legalizations, layout_conversions
diff --git a/python/tvm/relay/qnn/op/layout_conversions.py 
b/python/tvm/relay/qnn/op/layout_conversions.py
new file mode 100644
index 000..f5850b8
--- /dev/null
+++ b/python/tvm/relay/qnn/op/layout_conversions.py
@@ -0,0 +1,53 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument
+"""Convert layout related registration"""
+from __future__ import absolute_import
+
+from tvm.relay.op import op as reg
+
+
+@reg.register_convert_op_layout("qnn.conv2d")
+def convert_qnn_conv2d(attrs, inputs, tinfos, desired_layout):
+"""Convert Layout pass registration for QNN conv2d op.
+
+Parameters

[GitHub] [incubator-tvm] anijain2305 commented on issue #5066: [ConvertLayout] Support QNN ops.

2020-03-18 Thread GitBox
anijain2305 commented on issue #5066: [ConvertLayout] Support QNN ops.
URL: https://github.com/apache/incubator-tvm/pull/5066#issuecomment-600962873
 
 
   Thanks @zhiics @yzhliu This is merged




[GitHub] [incubator-tvm] anijain2305 merged pull request #5066: [ConvertLayout] Support QNN ops.

2020-03-18 Thread GitBox
anijain2305 merged pull request #5066: [ConvertLayout] Support QNN ops.
URL: https://github.com/apache/incubator-tvm/pull/5066
 
 
   




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-18 Thread GitBox
mbrookhart commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r394748894
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,135 @@
 #include 
 #include 
 
+#include 
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to non-recursively traverse dataflow regions of a graph
+ *
+ * ExpandDataflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues non-recursively to process the top of the stack. When it 
finds a node that doesn't
match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  // The second state of the stack indicate whether the child has been
+  // expanded in the pre-order.
+  // NOTE: function will be inlined.
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+// if this node was visited through another path
+// after being added to the stack ignore it.
+if (fcheck_visited(expr)) {
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children has already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
+  // No need to expand the children directly run visit.
+  // terminal leaf, directly use visited.
+  fvisit_leaf(node);
+  stack.pop();
+}
+  }
+}
+
+DataflowVisitor::DataflowVisitor(int visit_limit) {
+  CHECK(visit_limit > 0) << "Dataflow visit limit must be greater than 0";
+  CHECK(visit_limit < 10) << "Dataflow visit limit must be less than 10";
+  visit_limit_ = visit_limit;
+}
+
+void DataflowVisitor::VisitLeaf(const Expr& expr) {
+  if (visit_counter_[expr.get()] == 0) {
+ExprFunctor::VisitExpr(expr);
+  }
+  visit_counter_[expr.get()]++;
+}
+
+bool DataflowVisitor::CheckVisited(const Expr& expr) {
+  if (visit_counter_[expr.get()] < visit_limit_) {
+return false;
+  } else {
+visit_counter_[expr.get()]++;
+return true;
+  }
+}
+
+void DataflowVisitor::VisitExpr(const Expr& expr) {
+  auto fcheck_visited = [this](const Expr& expr) { return 
this->CheckVisited(expr); };
+  auto fvisit_leaf = [this](const Expr& expr) { return this->VisitLeaf(expr); 
};
+  if (visit_counter_[expr.get()] < 1) {
 
 Review comment:
   Dead Code Elimination fails if I respect visit_limit here along with 
everywhere else; it has to do with a double count. I'm a little concerned about 
the magic number, and I'm still thinking about possible edge cases with this; I 
might find a cleaner solution.




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-18 Thread GitBox
yzhliu commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r394726606
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,135 @@
 #include 
 #include 
 
+#include 
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to non-recursively traverse dataflow regions of a graph
+ *
+ * ExpandDataflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues non-recursively to process the top of the stack. When it 
finds a node that doesn't
match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  // The second state of the stack indicate whether the child has been
+  // expanded in the pre-order.
+  // NOTE: function will be inlined.
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+// if this node was visited through another path
+// after being added to the stack ignore it.
+if (fcheck_visited(expr)) {
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children has already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
+  // No need to expand the children directly run visit.
+  // terminal leaf, directly use visited.
+  fvisit_leaf(node);
+  stack.pop();
+}
+  }
+}
+
+DataflowVisitor::DataflowVisitor(int visit_limit) {
+  CHECK(visit_limit > 0) << "Dataflow visit limit must be greater than 0";
+  CHECK(visit_limit < 10) << "Dataflow visit limit must be less than 10";
+  visit_limit_ = visit_limit;
+}
+
+void DataflowVisitor::VisitLeaf(const Expr& expr) {
+  if (visit_counter_[expr.get()] == 0) {
+ExprFunctor::VisitExpr(expr);
+  }
+  visit_counter_[expr.get()]++;
+}
+
+bool DataflowVisitor::CheckVisited(const Expr& expr) {
+  if (visit_counter_[expr.get()] < visit_limit_) {
+return false;
+  } else {
+visit_counter_[expr.get()]++;
+return true;
+  }
+}
+
+void DataflowVisitor::VisitExpr(const Expr& expr) {
+  auto fcheck_visited = [this](const Expr& expr) { return 
this->CheckVisited(expr); };
+  auto fvisit_leaf = [this](const Expr& expr) { return this->VisitLeaf(expr); 
};
+  if (visit_counter_[expr.get()] < 1) {
 
 Review comment:
   not sure, do we need to respect `visit_limit_` here?




[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-18 Thread GitBox
yzhliu commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r394726756
 
 

 ##
 File path: src/relay/analysis/util.cc
 ##
 @@ -330,12 +330,15 @@ TVM_REGISTER_GLOBAL("relay.analysis.all_type_vars")
  */
 std::unordered_map
 GetExprRefCount(const Expr& body) {
-  class ExprRefCounter : private ExprVisitor {
+  class ExprRefCounter : private DataflowVisitor {
public:
 std::unordered_map
 Get(const Expr& body) {
   this->VisitExpr(body);
-  return std::move(this->visit_counter_);
+  for (auto kv : visit_counter_) {
+std::cout << kv.first << '\t' << kv.second << std::endl;
+  }
 
 Review comment:
   remember to remove debug code.




[GitHub] [incubator-tvm] zhiics merged pull request #5095: [TUTORIAL] Fix vta tutorial after relay function refactor

2020-03-18 Thread GitBox
zhiics merged pull request #5095: [TUTORIAL] Fix vta tutorial after relay 
function refactor
URL: https://github.com/apache/incubator-tvm/pull/5095
 
 
   




[GitHub] [incubator-tvm] zhiics commented on issue #5095: [TUTORIAL] Fix vta tutorial after relay function refactor

2020-03-18 Thread GitBox
zhiics commented on issue #5095: [TUTORIAL] Fix vta tutorial after relay 
function refactor
URL: https://github.com/apache/incubator-tvm/pull/5095#issuecomment-600910788
 
 
   Thanks @tqchen 




[incubator-tvm] branch master updated: [TUTORIAL] Fix vta tutorial after relay function refactor (#5095)

2020-03-18 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new e54a9a9  [TUTORIAL] Fix vta tutorial after relay function refactor 
(#5095)
e54a9a9 is described below

commit e54a9a979eb7ac12f3912260778989a8edc1e41a
Author: Tianqi Chen 
AuthorDate: Wed Mar 18 16:39:18 2020 -0700

[TUTORIAL] Fix vta tutorial after relay function refactor (#5095)
---
 vta/python/vta/top/graphpack.py | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/vta/python/vta/top/graphpack.py b/vta/python/vta/top/graphpack.py
index 2689fbc..aca00a6 100644
--- a/vta/python/vta/top/graphpack.py
+++ b/vta/python/vta/top/graphpack.py
@@ -368,11 +368,11 @@ def get_subgraph(expr, start_name, stop_name, 
start_name_idx, stop_name_idx, cou
 def _recursion(anf, start_found, stop_found, operator_current_idx):
 """ Helper to obtain the subgraph.
 """
-if isinstance(anf, relay.expr.Function):
-return relay.expr.Function(anf.params,
-   _recursion(anf.body, start_found, 
stop_found,
-  operator_current_idx),
-   anf.ret_type, anf.type_params, 
anf.attrs)
+if isinstance(anf, relay.Function):
+return relay.Function(anf.params,
+  _recursion(anf.body, start_found, stop_found,
+ operator_current_idx),
+  anf.ret_type, anf.type_params, anf.attrs)
 if isinstance(anf, relay.expr.Let):
 value = anf.value
 if isinstance(value, relay.expr.Call):



[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5078: [DOC] Add doc for Relay op strategy

2020-03-18 Thread GitBox
comaniac commented on a change in pull request #5078: [DOC] Add doc for Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/5078#discussion_r393848565
 
 

 ##
 File path: docs/dev/relay_op_strategy.rst
 ##
 @@ -0,0 +1,270 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _relay-op-strategy:
+
+Relay Operator Strategy
+===
+
+In order to lower Relay operators to the implementations defined in TOPI
+library, a compute and schedule function need to be registered to each Relay
+operator.  However, compute and schedule functions are usually specialized for
+each target, and further, even for the same target, we may have multiple
+algorithms and implementations available. To deal with the complexity, we
+introduce operator strategy to allow developers to define a flexible lowering
+strategy for each operator and target.
+
+
+Operator Strategy Design
+
+
+The basic element in operator strategy is an ``OpImplementation``. It includes
+a pair of compute and schedule functions, the name of the implementation,
+and a priority level (the use of priority level is explained in
+`Select Implementation from Op Strategy`_).
+
+The ``OpStrategy`` includes a list of ``OpSpecialization``. Each 
``OpSpecialization``
+contains a list of ``OpImplementation`` associated with a 
``SpecializedCondition``
+(see definition in ``include/tvm/te/schedule.h``).  The 
``SpecializedCondition``
+can be null, indicating the implementations are generally applicable;
+otherwise, the implementations are only considered when the specialized
+condition is satisfied. ``SpecializedCondition`` consists of a list
+of clauses defined in Tensor Expression in conjunctive normal form (CNF) and
+only supports conditions on tensor shapes.
+
+Last, a ``FTVMStrategy`` function is registered to each Relay operator.
+``FTVMStrategy`` is a generic function (see 
``include/tvm/target/generic_func.h``),
+that can be overwritten for each target. The function signature is
+
+.. code:: c
+
+OpStrategy(const Attrs& attrs, const Array& inputs, const Type& 
out_type, const Target& target)
+
+that the function returns an ``OpStrategy`` given the op attributes, input
+tensors, output types, and target to compile to.
+
+
+
+Register Strategy for An Operator
+-
+
+There are three methods to register a strategy function for an operator,
 
 Review comment:
   Some thoughts to this section:
   - It's better to define "strategy function" first. Something like "a 
strategy function is used to determine which pair of compute and schedule 
functions should be used given a workload".
   
   - The third method described at the end of this section (`add_implement`) is 
not a method of registering a strategy function to me. Instead, it is more like 
how to implement a strategy function. It might be better to have a separate 
section called "Implement a Strategy for An Operator", or just change this 
section title to "Implement and Register a Strategy for An Operator".
   
   - My understanding is that the first approach is a simple one because we can 
reuse the compute function for those patterns. In this case, it might be better 
to switch the order of first and second methods. That is, we first introduce a 
method of implementing and registering a strategy function for dense. After 
that, we mention if your operator fits certain patterns you can simplify your 
work.
   
   - In the description of strategy function implementation, I think it might 
be better to mention `topk` only. As a result, at the beginning of the next 
section "Advanced Strategy Function", you can say that some operators like dense 
require a more complicated strategy, so they cannot be implemented like `topk`.
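
   For reference, a minimal sketch of the kind of strategy function being 
discussed, following the ``add_implementation`` API quoted above. The import 
paths, the ``topi.nn.dense`` compute, and the ``topi.generic.schedule_dense`` 
schedule are assumptions about the TVM tree at the time of this PR; this is an 
illustration, not the PR's actual code.

    # Sketch only: a generic strategy function pairing a compute with a schedule.
    # Import paths and TOPI helpers are assumed to exist as in the TVM tree at
    # the time of this PR; per-target strategies would override this function.
    import topi
    from tvm.relay.op import op as _op
    from tvm.target import override_native_generic_func

    @override_native_generic_func("example_dense_strategy")
    def example_dense_strategy(attrs, inputs, out_type, target):
        """Return an OpStrategy that picks a (compute, schedule) pair for dense."""
        strategy = _op.OpStrategy()

        # FTVMCompute: (attrs, inputs, out_type) -> list of output tensors
        def fcompute(attrs, inputs, out_type):
            return [topi.nn.dense(inputs[0], inputs[1])]

        # FTVMSchedule: (attrs, outs, target) -> te.Schedule
        def fschedule(attrs, outs, target):
            with target:
                return topi.generic.schedule_dense(outs)

        strategy.add_implementation(fcompute, fschedule,
                                    name="dense.example", plevel=10)
        return strategy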




[GitHub] [incubator-tvm] icemelon9 commented on issue #5078: [DOC] Add doc for Relay op strategy

2020-03-18 Thread GitBox
icemelon9 commented on issue #5078: [DOC] Add doc for Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/5078#issuecomment-600867796
 
 
   I have thought about @comaniac comments and they make a lot of sense. I plan 
to separate the "Register Strategy for An Operator" into two parts, first 
describe how to define a strategy function, and second how to do the 
registration. I'll work on the update today. So let's keep the PR open for now.




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5095: [TUTORIAL] Fix vta tutorial after relay function refactor

2020-03-18 Thread GitBox
zhiics commented on a change in pull request #5095: [TUTORIAL] Fix vta tutorial 
after relay function refactor
URL: https://github.com/apache/incubator-tvm/pull/5095#discussion_r394641441
 
 

 ##
 File path: python/tvm/relay/__init__.py
 ##
 @@ -96,6 +96,7 @@
 RefCreate = expr.RefCreate
 RefRead = expr.RefRead
 RefWrite = expr.RefWrite
+Function = function.Function
 
 Review comment:
   We have this already.




[GitHub] [incubator-tvm] tqchen commented on issue #5047: [Runtime] Python String container interface

2020-03-18 Thread GitBox
tqchen commented on issue #5047: [Runtime] Python String container interface
URL: https://github.com/apache/incubator-tvm/issues/5047#issuecomment-600857831
 
 
   cc @zhiics @mbaret @comaniac @jroesch we probably need to migrate the use 
of StringImm to String once the string container is introduced




[GitHub] [incubator-tvm] tqchen opened a new pull request #5095: [TUTORIAL] Fix vta tutorial after relay function refactor

2020-03-18 Thread GitBox
tqchen opened a new pull request #5095: [TUTORIAL] Fix vta tutorial after relay 
function refactor
URL: https://github.com/apache/incubator-tvm/pull/5095
 
 
   cc @zhiics 




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #5078: [DOC] Add doc for Relay op strategy

2020-03-18 Thread GitBox
tmoreau89 commented on a change in pull request #5078: [DOC] Add doc for Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/5078#discussion_r394635152
 
 

 ##
 File path: docs/dev/relay_op_strategy.rst
 ##
 @@ -0,0 +1,256 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _relay-op-strategy:
+
+Relay Operator Strategy
+===
+
+In order to lower Relay operators to implementation defined in TOPI library, 
the
+compute and schedule functions need to be registered to Relay operators.
+However, compute and schedule functions are usually specialized for each 
target,
+and further, even for the same target, we may have multiple algorithms and
+implementations available. To deal with the complexity, we introduce operator
+strategy to allow developers to define a flexible lowering strategy for each
+operator and target.
+
+
+Operator Strategy Design
+
+
+The basic element in operator strategy is an ``OpImplementation``. It includes
+a pair of compute and schedule functions, the name of the implementation,
+and a priority level (the usability of priority level will be explained below).
+
+The ``OpStrategy`` includes a list of specializations. Each specialization
+contains a list of ``OpImplementation`` associated with a specialized condition
+(see ``SpecializedCondition`` definition in ``include/tvm/te/schedule.h``).  
The
+specialized condition can be null, indicating the implementations are generally
+applicable; otherwise, the implementations should only be used when the
+specialized condition is satisfied. ``OpStrategy`` provides only one API,
+adding an implementation to the strategy:
+
+.. code:: python
+
+def add_implementation(self, compute, schedule, name="default", plevel=10)
+
+Last, a ``FTVMStrategy`` function is registered to each Relay operator.
+``FTVMStrategy`` is a generic function (see 
``include/tvm/target/generic_func.h``),
+that can be overwritten for each target. The function signature is
+
+.. code:: c
+
+OpStrategy(const Attrs& attrs, const Array& inputs, const Type& 
out_type, const Target& target)
+
+, that the function returns an ``OpStrategy`` given the op attributes, input
+tensors, output types, and target to compile to,
+
+
+
+Register strategy for a new operator
+
+
+There are three methods to register a strategy function for an operator,
+defined in ``python/tvm/relay/op/op.py``.
+
+First, for operators that have injective, broadcast, or reduction pattern, we
+can call ``register_injective_schedule``, ``register_broadcast_schedule``, and
+``register_reduce_schedule``, respectively. The schedule functions for these
+patterns are already registered by each target and can be applied to these
+operators. We assume the compute function is the same across all targets, 
and
+``FTVMCompute`` needs to be registered to the op before invoking register
+schedule.
+
+.. code:: python
+
+register_injective_schedule("my_new_op")
+
+Second, for operators that don't have these common patterns mentioned before,
+but also have the same compute function for all targets, we can use
+``register_schedule`` API. But before that, we need to first define the
+``FTVMSchedule`` function as follows:
+
+.. code:: python
+
+# add to python/tvm/relay/op/strategy/generic.py
+@generic_func
+def schedule_my_new_op(attrs, outs, target):
+...
+
+# add to each target file in python/tvm/relay/op/strategy, e.g., x86.py, 
cuda.py, etc.
+@schedule_my_new_op.register("cpu")
+def schedule_my_new_op_cpu(attrs, outs, target):
+...
+
+Now that we've created the ``FTVMSchedule`` for this new operator, we can
+register the strategy using ``register_schedule``:
+
+.. code:: python
+
+register_schedule("my_new_op", strategy.schedule_my_new_op)
+
+Third, for most comprehensive usage of op strategy, we can allow operator to 
use
+different implementation for both compute and schedule for different targets.
+Similarly, we need to first define the ``FTVMStrategy`` function as follows:
+
+.. code:: python
+
+# add to python/tvm/relay/op/strategy/generic.py
+@override_native_generic_func("

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #5078: [DOC] Add doc for Relay op strategy

2020-03-18 Thread GitBox
tmoreau89 commented on a change in pull request #5078: [DOC] Add doc for Relay 
op strategy
URL: https://github.com/apache/incubator-tvm/pull/5078#discussion_r394634872
 
 

 ##
 File path: docs/dev/relay_op_strategy.rst
 ##
 @@ -0,0 +1,256 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _relay-op-strategy:
+
+Relay Operator Strategy
+===
+
+In order to lower Relay operators to implementation defined in TOPI library, 
the
+compute and schedule functions need to be registered to Relay operators.
+However, compute and schedule functions are usually specialized for each 
target,
+and further, even for the same target, we may have multiple algorithms and
+implementations available. To deal with the complexity, we introduce operator
+strategy to allow developers to define a flexible lowering strategy for each
+operator and target.
+
+
+Operator Strategy Design
+
+
+The basic element in operator strategy is an ``OpImplementation``. It includes
+a pair of compute and schedule functions, the name of the implementation,
+and a priority level (the usability of priority level will be explained below).
+
+The ``OpStrategy`` includes a list of specializations. Each specialization
+contains a list of ``OpImplementation`` associated with a specialized condition
+(see ``SpecializedCondition`` definition in ``include/tvm/te/schedule.h``).  
The
+specialized condition can be null, indicating the implementations are generally
+applicable; otherwise, the implementations should only be used when the
+specialized condition is satisfied. ``OpStrategy`` provides only one API,
 
 Review comment:
   Ack, I found the examples indeed; that was the result of writing feedback in 
real time and not removing the feedback after finishing my pass




[GitHub] [incubator-tvm] maheshambule commented on issue #4981: [Relay, Topi, TF Frontend] Isfinite operator

2020-03-18 Thread GitBox
maheshambule commented on issue #4981: [Relay, Topi, TF Frontend] Isfinite 
operator
URL: https://github.com/apache/incubator-tvm/pull/4981#issuecomment-600835739
 
 
   @jwfromm  could you help in merging this?




[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5082: [Relay, Topi] [TF, MXNet] Unravel Index operator

2020-03-18 Thread GitBox
maheshambule commented on a change in pull request #5082: [Relay, Topi] [TF, 
MXNet] Unravel Index operator
URL: https://github.com/apache/incubator-tvm/pull/5082#discussion_r394577194
 
 

 ##
 File path: topi/include/topi/transform.h
 ##
 @@ -232,6 +232,52 @@ inline Tensor reshape(const Tensor& x,
   }
 }
 
+/*!
+ * \brief Converts a flat index or array of flat indices into a tuple of 
coordinate arrays
+ *
+ * \param x The input tensor having indices.
+ * \param shape The shape tensor
+ * \param name The name of the operation
+ * \param tag The tag to mark the operation
+ *
+ * \return A Tensor of coordinate arrays.
+ */
+
+inline Tensor unravel_index(const Tensor& x, const Tensor& shape, std::string 
name = "T_unravel",
+std::string tag = kInjective) {
+  auto x_shape = x->shape;
+  auto shape_shape = shape->shape;
+
+  Array oshape;
+  oshape.push_back(shape_shape[0]);
+  if (x_shape.size() != 0) {
+oshape.push_back(x_shape[0]);
+  }
+
+  return compute(oshape,
+ [&](const Array& indices) {
+   auto i = indices[0];
+   std::vector indices_divs;
+   PrimExpr ret = 0;
+   PrimExpr cur_val = 0;
+   PrimExpr index_val = 0;
+
+   if (x_shape.size() != 0) {
+ index_val = x[indices[1]];
+   } else {
+ index_val = x();
+   }
+   indices_divs.push_back(index_val);
+   for (int v = GetConstInt(shape_shape[0]) - 1; v >= 0; --v) {
+ ret = tvm::if_then_else(i == v, 
indexmod(indices_divs.back(), shape[v]), ret);
+ cur_val = indexdiv(indices_divs.back(), shape[v]);
+ indices_divs.push_back(cur_val);
 
 Review comment:
   The function in this file returns all the coordinates for a given index. In 
the compute definition we just want the coordinate for the current compute index, 
not all of them. I was facing an issue while extracting the current coordinate 
because the compute index, which is a Var, cannot be directly used to extract an 
Expr from an array of Exprs. I had to use the if_then_else construct for that. 
Please let me know if I am missing something here and if there is an easier way 
to achieve this. I could have modified the existing function to fit my purpose, 
for example by passing in the coordinate index I want to extract and returning 
just that coordinate. Please let me know if I should implement this.
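
   For context, a plain-Python sketch (not the TOPI compute itself) of the 
unravel_index arithmetic being encoded above: each coordinate is peeled off with 
mod/div from the innermost dimension outward, which is why the compute walks 
``shape`` from the last dimension backwards.

    # Plain-Python illustration of the unravel_index math; the TOPI compute
    # above selects only the coordinate for its own output index via
    # if_then_else, but the underlying arithmetic is the same mod/div chain.
    def unravel_index(flat_index, shape):
        coords = []
        for dim in reversed(shape):
            coords.append(flat_index % dim)
            flat_index //= dim
        return list(reversed(coords))

    assert unravel_index(7, [3, 4]) == [1, 3]  # 7 == 1 * 4 + 3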




[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5082: [Relay, Topi] [TF, MXNet] Unravel Index operator

2020-03-18 Thread GitBox
maheshambule commented on a change in pull request #5082: [Relay, Topi] [TF, 
MXNet] Unravel Index operator
URL: https://github.com/apache/incubator-tvm/pull/5082#discussion_r394556662
 
 

 ##
 File path: include/tvm/relay/attrs/transform.h
 ##
 @@ -321,6 +321,12 @@ struct ArgWhereAttrs : public 
tvm::AttrsNode {
   }
 };  // struct ArgWhereAttrs
 
+/*! \brief Attributes used in unravel_index operators */
+struct UnRavelIndexAttrs : public tvm::AttrsNode {
+  TVM_DECLARE_ATTRS(UnRavelIndexAttrs, "relay.attrs.UnRavelIndexAttrs") {
 
 Review comment:
   Ok. Thanks. This is good to know. I have removed the attrs for both 
unravel_index and argwhere.




[GitHub] [incubator-tvm] trevor-m commented on issue #5094: [Relay][BYOCG] Propagate constant to subgraphs

2020-03-18 Thread GitBox
trevor-m commented on issue #5094: [Relay][BYOCG] Propagate constant to 
subgraphs
URL: https://github.com/apache/incubator-tvm/pull/5094#issuecomment-600780462
 
 
   Thank you!
   I have tested this for my external codegen and it fixes a few problems I was 
having.




[GitHub] [incubator-tvm] zhiics opened a new pull request #5094: [Relay][BYOCG] Propagate constant to subgraphs

2020-03-18 Thread GitBox
zhiics opened a new pull request #5094: [Relay][BYOCG] Propagate constant to 
subgraphs
URL: https://github.com/apache/incubator-tvm/pull/5094
 
 
   Currently, constants are not propagated to the functions that will be 
handled by external codegen. This may cause a performance issue if a 
function is used frequently, as you will need to pass the constants to the 
accelerator multiple times. This PR binds the constants to the new functions and 
propagates them. As expected, this requires the 3rd-party codegen to handle the 
constants itself.
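
   For illustration, a minimal sketch of what binding a constant into a 
function means on the Python side; the shapes and names are hypothetical, and it 
assumes ``relay.bind`` behaves as in mainline (bound parameters are substituted 
into the body and dropped from the parameter list).

    # Hypothetical shapes/names; assumes relay.bind substitutes bound params
    # into the function body and removes them from the parameter list, so an
    # external codegen sees the weight as a Constant node instead of receiving
    # it as a call argument on every invocation.
    import numpy as np
    from tvm import relay

    x = relay.var("x", shape=(1, 8))
    w = relay.var("w", shape=(8, 8))
    func = relay.Function([x, w], relay.nn.dense(x, w))

    w_const = relay.const(np.random.rand(8, 8).astype("float32"))
    bound = relay.bind(func, {w: w_const})
    print(bound)  # only "x" remains as a parameter; the weight is embedded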
   
   @comaniac @mbaret @soiferj @trevor-m @tqchen 
   
   




[incubator-tvm] branch master updated: [Torch, QNN] Add missing upcast to uint8 avg_pool conversion (#5089)

2020-03-18 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b64a843  [Torch, QNN] Add missing upcast to uint8 avg_pool conversion  
(#5089)
b64a843 is described below

commit b64a843acd15ca34d2baf9fce730e81f91b3a580
Author: masahi 
AuthorDate: Thu Mar 19 02:31:06 2020 +0900

[Torch, QNN] Add missing upcast to uint8 avg_pool conversion  (#5089)

* add missing upcast to avgpool

* add avg pool test
---
 python/tvm/relay/frontend/pytorch.py  | 22 +++---
 python/tvm/relay/frontend/qnn_torch.py|  5 ++---
 tests/python/frontend/pytorch/qnn_test.py | 15 +--
 3 files changed, 30 insertions(+), 12 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
index 6da91c1..0c7465b 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -172,7 +172,7 @@ def _adaptive_avg_2d():
 return _op.nn.adaptive_avg_pool2d(x, output_size=output_size)
 
 if input_types[0] == "quint8":
-return qnn_torch.quantized_adaptive_avg_2d(data, func)
+return qnn_torch.apply_with_upcast(data, func)
 
 return func(data)
 
@@ -484,14 +484,22 @@ def _avg_pool2d():
 ceil_mode = int(inputs[4])
 count_include_pad = int(inputs[5])
 
-return _op.nn.avg_pool2d(data,
- pool_size=pool_size,
- strides=strides,
- padding=padding,
- ceil_mode=ceil_mode,
- count_include_pad=count_include_pad)
+def func(x):
+return _op.nn.avg_pool2d(x,
+ pool_size=pool_size,
+ strides=strides,
+ padding=padding,
+ ceil_mode=ceil_mode,
+ count_include_pad=count_include_pad)
+
+if input_types[0] == "quint8":
+return qnn_torch.apply_with_upcast(data, func)
+
+return func(data)
+
 return _impl
 
+
 def _dropout():
 def _impl(inputs, input_types):
 data = inputs[0]
diff --git a/python/tvm/relay/frontend/qnn_torch.py 
b/python/tvm/relay/frontend/qnn_torch.py
index 70178be..e6a015f 100644
--- a/python/tvm/relay/frontend/qnn_torch.py
+++ b/python/tvm/relay/frontend/qnn_torch.py
@@ -359,10 +359,9 @@ def add_quant_params(params, quant_params):
 params[qparam.bias_var.name_hint] = tvm.nd.array(qparam.bias)
 
 
-def quantized_adaptive_avg_2d(data, func_fp32):
-# this follows tflite impl
+def apply_with_upcast(data, func):
 inp = _op.cast(data, dtype="int32")
-out = func_fp32(inp)
+out = func(inp)
 return _op.cast(out, "uint8")
 
 
diff --git a/tests/python/frontend/pytorch/qnn_test.py 
b/tests/python/frontend/pytorch/qnn_test.py
index 23fcb7c..ebc00bf 100644
--- a/tests/python/frontend/pytorch/qnn_test.py
+++ b/tests/python/frontend/pytorch/qnn_test.py
@@ -218,7 +218,6 @@ class MulScalarNegative(nn.Module):
 class UpsamplingBilinear(nn.Module):
 def __init__(self):
 super().__init__()
-self.relu = QuantWrapper(nn.ReLU())
 self.quant = QuantStub()
 self.dequant = DeQuantStub()
 
@@ -233,12 +232,25 @@ class UpsamplingBilinear(nn.Module):
 pass
 
 
+class AvgPool2d(nn.Module):
+def __init__(self):
+super().__init__()
+self.pool = QuantWrapper(nn.AvgPool2d(kernel_size=2))
+
+def forward(self, x):
+return self.pool(x)
+
+def fuse_model(self):
+pass
+
+
 def test_quantized_modules():
 imagenet_ishape = (1, 3, 224, 224)
 
 qmodules = [
("relu", imagenet_ishape, ReLU(), False),
("upsample bilinear", (1, 3, 64, 64), UpsamplingBilinear(), False),
+   ("avgpool", imagenet_ishape, AvgPool2d(), False),
 ]
 
 for per_channel in [False, True]:
@@ -276,7 +288,6 @@ def test_quantized_modules():
 pt_result = script_module(inp.clone()).numpy()
 
 input_name = get_graph_input_names(script_module)[0]
-
 runtime = get_tvm_runtime(script_module, input_name, ishape)
 runtime.set_input(input_name, inp.numpy().copy())
 runtime.run()
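

For context on why the upcast is needed: averaging uint8 values while accumulating in
uint8 would overflow as soon as a window sum exceeds 255, so the converter casts the
quantized input to int32, applies the ordinary avg_pool2d, and casts the result back
to uint8. Below is a minimal NumPy sketch of that pattern; the function name and
shapes are illustrative only and are not taken from the patch.

```
import numpy as np

def avg_pool2d_uint8(x, k=2):
    """Average-pool a (H, W) uint8 array with a k x k window and stride k,
    upcasting to int32 before averaging so the window sums cannot overflow."""
    assert x.dtype == np.uint8
    h, w = x.shape
    acc = x.astype(np.int32)                   # upcast, mirroring apply_with_upcast
    acc = acc[:h - h % k, :w - w % k]          # drop ragged edges for simplicity
    out = acc.reshape(h // k, k, w // k, k).mean(axis=(1, 3))
    return out.astype(np.uint8)                # cast back to uint8

x = np.full((4, 4), 200, dtype=np.uint8)       # each 2x2 window sums to 800 > 255
print(avg_pool2d_uint8(x))                     # [[200 200] [200 200]]
```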



[GitHub] [incubator-tvm] anijain2305 merged pull request #5089: [Torch, QNN] Add missing upcast to uint8 avg_pool conversion

2020-03-18 Thread GitBox
anijain2305 merged pull request #5089: [Torch, QNN] Add missing upcast to uint8 
avg_pool conversion 
URL: https://github.com/apache/incubator-tvm/pull/5089
 
 
   




[GitHub] [incubator-tvm] anijain2305 commented on issue #5089: [Torch, QNN] Add missing upcast to uint8 avg_pool conversion

2020-03-18 Thread GitBox
anijain2305 commented on issue #5089: [Torch, QNN] Add missing upcast to uint8 
avg_pool conversion 
URL: https://github.com/apache/incubator-tvm/pull/5089#issuecomment-600765263
 
 
   Thanks @masahi, this is merged.




[GitHub] [incubator-tvm] zhiics commented on issue #5093: [RELAY] Codegen_c.h should include relay.function

2020-03-18 Thread GitBox
zhiics commented on issue #5093: [RELAY] Codegen_c.h should include 
relay.function
URL: https://github.com/apache/incubator-tvm/pull/5093#issuecomment-600760095
 
 
   Thanks @lhutton1 




[GitHub] [incubator-tvm] zhiics merged pull request #5093: [RELAY] Codegen_c.h should include relay.function

2020-03-18 Thread GitBox
zhiics merged pull request #5093: [RELAY] Codegen_c.h should include 
relay.function
URL: https://github.com/apache/incubator-tvm/pull/5093
 
 
   




[incubator-tvm] branch master updated: [RELAY] Codegen_c.h should include relay.function (#5093)

2020-03-18 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new e5c24d7  [RELAY] Codegen_c.h should include relay.function (#5093)
e5c24d7 is described below

commit e5c24d7e79961b47d309ca2fdae3ba143fd1fae6
Author: lhutton1 <35535092+lhutt...@users.noreply.github.com>
AuthorDate: Wed Mar 18 17:21:05 2020 +

[RELAY] Codegen_c.h should include relay.function (#5093)

Change-Id: I015b2c66a50b64d0eb2e9efe336f6c18ea1fdc67
---
 src/relay/backend/contrib/codegen_c/codegen_c.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/relay/backend/contrib/codegen_c/codegen_c.h 
b/src/relay/backend/contrib/codegen_c/codegen_c.h
index 4929819..f003d22 100644
--- a/src/relay/backend/contrib/codegen_c/codegen_c.h
+++ b/src/relay/backend/contrib/codegen_c/codegen_c.h
@@ -26,6 +26,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 



[GitHub] [incubator-tvm] zhiics commented on issue #5068: [RELAY] Update the Python FFI Convention

2020-03-18 Thread GitBox
zhiics commented on issue #5068: [RELAY] Update the Python FFI Convention
URL: https://github.com/apache/incubator-tvm/issues/5068#issuecomment-600759034
 
 
   @tqchen for docs, should we keep the current docs under python/relay 
(expr.rst, base.rst, etc.)? I think we should remove them. But what should we 
have instead to represent the relay namespace in Python?
   
   Do we want a relay.rst? If so, I am not sure where to put it. It seems 
weird to put it either under docs/api/python/relay or docs/api/python. Or do 
we want an ir.rst under docs/api/python/relay to represent relay? Neither 
option looks good either.




[GitHub] [incubator-tvm] FrozenGene merged pull request #5091: Description updated for pooling attributes

2020-03-18 Thread GitBox
FrozenGene merged pull request #5091: Description updated for pooling attributes
URL: https://github.com/apache/incubator-tvm/pull/5091
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #5090: [CODEGEN][OPENCL] Explicitly cast min/max operands

2020-03-18 Thread GitBox
tqchen commented on issue #5090: [CODEGEN][OPENCL] Explicitly cast min/max 
operands
URL: https://github.com/apache/incubator-tvm/pull/5090#issuecomment-600718969
 
 
   Thanks @kazum !




[GitHub] [incubator-tvm] FrozenGene commented on issue #5091: Description updated for pooling attributes

2020-03-18 Thread GitBox
FrozenGene commented on issue #5091: Description updated for pooling attributes
URL: https://github.com/apache/incubator-tvm/pull/5091#issuecomment-600719205
 
 
   Thanks @siju-samuel @zhiics, merged.




[incubator-tvm] branch master updated (7ca3212 -> c3b89b7)

2020-03-18 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7ca3212  create function.py (#5087)
 add c3b89b7  [CODEGEN][OPENCL] Explicitly cast min/max operands (#5090)

No new revisions were added by this update.

Summary of changes:
 src/target/source/codegen_opencl.cc| 48 +-
 src/target/source/codegen_opencl.h |  5 ++-
 .../python/unittest/test_target_codegen_opencl.py  | 29 +
 3 files changed, 60 insertions(+), 22 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5090: [CODEGEN][OPENCL] Explicitly cast min/max operands

2020-03-18 Thread GitBox
tqchen merged pull request #5090: [CODEGEN][OPENCL] Explicitly cast min/max 
operands
URL: https://github.com/apache/incubator-tvm/pull/5090
 
 
   




[incubator-tvm] branch master updated (c3b89b7 -> c3ec96e)

2020-03-18 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c3b89b7  [CODEGEN][OPENCL] Explicitly cast min/max operands (#5090)
 add c3ec96e  Description updated for pooling attributes (#5091)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/attrs/nn.h | 28 ++--
 1 file changed, 14 insertions(+), 14 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #5087: [Refactor][Relay][py] Move expr.Function to function.py

2020-03-18 Thread GitBox
tqchen merged pull request #5087: [Refactor][Relay][py] Move expr.Function to 
function.py
URL: https://github.com/apache/incubator-tvm/pull/5087
 
 
   




[incubator-tvm] branch master updated: create function.py (#5087)

2020-03-18 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 7ca3212  create function.py (#5087)
7ca3212 is described below

commit 7ca3212f06a56eb95420060aa822a56860d114fd
Author: Zhi <5145158+zhi...@users.noreply.github.com>
AuthorDate: Wed Mar 18 08:56:55 2020 -0700

create function.py (#5087)
---
 docs/api/python/relay/expr.rst |  3 -
 docs/langref/relay_expr.rst|  2 +-
 python/tvm/autotvm/graph_tuner/base_graph_tuner.py |  4 +-
 .../autotvm/graph_tuner/utils/traverse_graph.py|  3 +-
 python/tvm/autotvm/task/relay_integration.py   |  6 +-
 python/tvm/relay/__init__.py   |  3 +-
 python/tvm/relay/_parser.py|  7 +-
 python/tvm/relay/analysis/analysis.py  |  2 +-
 python/tvm/relay/backend/compile_engine.py |  4 +-
 python/tvm/relay/backend/interpreter.py|  3 +-
 python/tvm/relay/build_module.py   | 13 ++--
 python/tvm/relay/expr.py   | 66 +
 python/tvm/relay/expr_functor.py   |  3 +-
 python/tvm/relay/frontend/caffe2.py|  7 +-
 python/tvm/relay/frontend/common.py|  5 +-
 python/tvm/relay/frontend/coreml.py|  3 +-
 python/tvm/relay/frontend/darknet.py   |  3 +-
 python/tvm/relay/frontend/keras.py |  3 +-
 python/tvm/relay/frontend/mxnet.py |  5 +-
 python/tvm/relay/frontend/onnx.py  |  7 +-
 python/tvm/relay/frontend/tensorflow.py|  3 +-
 python/tvm/relay/frontend/tflite.py|  3 +-
 python/tvm/relay/function.py   | 86 ++
 python/tvm/relay/loops.py  |  3 +-
 python/tvm/relay/prelude.py|  3 +-
 python/tvm/relay/testing/nat.py|  3 +-
 python/tvm/relay/testing/py_converter.py   |  3 +-
 src/relay/ir/function.cc   |  9 +--
 28 files changed, 152 insertions(+), 113 deletions(-)

diff --git a/docs/api/python/relay/expr.rst b/docs/api/python/relay/expr.rst
index 57a4a25..cfb6df0 100644
--- a/docs/api/python/relay/expr.rst
+++ b/docs/api/python/relay/expr.rst
@@ -35,9 +35,6 @@ tvm.relay.expr
 .. autoclass:: tvm.relay.expr.Tuple
 :members:
 
-.. autoclass:: tvm.relay.expr.Function
-:members:
-
 .. autoclass:: tvm.relay.expr.Call
 :members:
 
diff --git a/docs/langref/relay_expr.rst b/docs/langref/relay_expr.rst
index 66bfe43..3b93360 100644
--- a/docs/langref/relay_expr.rst
+++ b/docs/langref/relay_expr.rst
@@ -120,7 +120,7 @@ Additionally, functions in Relay are higher-order, which 
means that a function c
 function or returned by a function, as function expressions evaluate to 
closures (see the `Closures`_ subsection),
 which are values like tensors and tuples.
 
-See :py:class:`~tvm.relay.expr.Function` for the definition and documentation 
of function nodes.
+See :py:class:`~tvm.relay.function.Function` for the definition and 
documentation of function nodes.
 
 Syntax
 ~~
diff --git a/python/tvm/autotvm/graph_tuner/base_graph_tuner.py 
b/python/tvm/autotvm/graph_tuner/base_graph_tuner.py
index f1a0756..e7b4694 100644
--- a/python/tvm/autotvm/graph_tuner/base_graph_tuner.py
+++ b/python/tvm/autotvm/graph_tuner/base_graph_tuner.py
@@ -69,7 +69,7 @@ class BaseGraphTuner(object):
 target_op in the input graph and layout transformation benchmark need 
to be
 executed before initialization.
 
-graph : tvm.relay.Expr.Function
+graph : tvm.relay.function.Function
 Input graph
 
 input_shapes : dict of str to tuple.
@@ -143,7 +143,7 @@ class BaseGraphTuner(object):
 if isinstance(graph, tvm.IRModule):
 graph = graph["main"]
 
-if isinstance(graph, relay.expr.Function):
+if isinstance(graph, relay.function.Function):
 node_dict = {}
 graph = bind_inputs(graph, input_shapes, dtype)
 expr2graph(graph, self._target_ops, node_dict, self._node_list)
diff --git a/python/tvm/autotvm/graph_tuner/utils/traverse_graph.py 
b/python/tvm/autotvm/graph_tuner/utils/traverse_graph.py
index f1dd404..8470fb6 100644
--- a/python/tvm/autotvm/graph_tuner/utils/traverse_graph.py
+++ b/python/tvm/autotvm/graph_tuner/utils/traverse_graph.py
@@ -21,7 +21,8 @@ import threading
 import tvm
 from tvm import relay, autotvm
 from tvm.relay import transform
-from tvm.relay.expr import Call, Function, TupleGetItem, Var, Constant, Tuple
+from tvm.relay.expr import Call, TupleGetItem, Var, Constant, Tuple
+from tvm.relay.function import Function
 from tvm.relay.ty import TupleType, TensorType
 from tvm.autotvm.task import TaskExtractEnv
 
diff --git a/p
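

As the hunks above show, the user-facing change of this refactor is the import path:
`Function` now lives in `tvm.relay.function` instead of `tvm.relay.expr`. A minimal
sketch of how downstream code adapts is below; the try/except fallback is only an
illustrative compatibility pattern, not part of this commit.

```
try:
    from tvm.relay.function import Function   # new location (this commit)
except ImportError:
    from tvm.relay.expr import Function       # old location, pre-refactor

from tvm import relay

x = relay.var("x", shape=(3,), dtype="float32")
f = Function([x], x + x)                       # a trivial Relay function
print(isinstance(f, Function))                 # True
```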

[GitHub] [incubator-tvm] lhutton1 opened a new pull request #5093: [RELAY] Codegen_c.h should include relay.function

2020-03-18 Thread GitBox
lhutton1 opened a new pull request #5093: [RELAY] Codegen_c.h should include 
relay.function
URL: https://github.com/apache/incubator-tvm/pull/5093
 
 
   This prevents the build from failing when a third-party codegen includes 
codegen_c.h in its header file rather than in a source file. This isn't an 
issue for dnnl, for example, because its codegen class is declared in a source 
file. However, a third-party backend may declare its codegen class in a 
header.
   




[GitHub] [incubator-tvm] hzfan opened a new pull request #5092: [PASS] dtype rewrite for indexing variables

2020-03-18 Thread GitBox
hzfan opened a new pull request #5092: [PASS] dtype rewrite for indexing 
variables
URL: https://github.com/apache/incubator-tvm/pull/5092
 
 
   Changes:
   - enable indexing with i64 vars, so that large tensors with more than 2^32 
elements can be indexed properly.
   - narrow i64 indices that trivially fit into i32 down to i32.
   
   Some background:
   https://discuss.tvm.ai/t/rfc-support-for-large-tensors/5643
   
   Take the following as an example:
   ```
   A = te.placeholder((m, n), name='A')
   B = te.placeholder((m, n), name='B')
   C = te.compute((m, n), lambda *idx: A[idx] + B[idx])
   ```
   
   `m, n = te.var('m', dtype='int64'), te.var('n', dtype='int64')` yields
   ```
   produce compute {
 for (i0.int64, (int64)0, m.int64) {
   for (i1.int64, (int64)0, n.int64) {
 compute[((i0.int64*stride.int64) + (i1.int64*stride.int64))] = 
(A[((i0.int64*stride.int64) + (i1.int64*stride.int64))] + 
B[((i0.int64*stride.int64) + (i1.int64*stride.int64))])
   }
 }
   }
   ```
   
   `m, n = tvm.tir.const(2, dtype="int64"), tvm.tir.const(2, dtype="int64")` 
yields
   
   ```
   produce compute {
 for (i0.int32, 0, 2) {
   for (i1.int32, 0, 2) {
 compute[((i0.int32*2) + i1.int32)] = (A[((i0.int32*2) + i1.int32)] + 
B[((i0.int32*2) + i1.int32)])
   }
 }
   }
   ```
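   
   A self-contained sketch of how the second case can be reproduced and the 
lowered IR inspected (a hedged example using standard te/tir APIs; the 
schedule and tvm.lower call below are ordinary TVM usage, not part of this 
patch):
   
   ```
   import tvm
   from tvm import te
   
   # Extents are int64 constants that provably fit in int32, so after the
   # rewrite the loop variables and index arithmetic should be int32.
   m = tvm.tir.const(2, dtype="int64")
   n = tvm.tir.const(2, dtype="int64")
   
   A = te.placeholder((m, n), name="A")
   B = te.placeholder((m, n), name="B")
   C = te.compute((m, n), lambda i, j: A[i, j] + B[i, j], name="C")
   
   s = te.create_schedule(C.op)
   # Print the lowered IR and check the dtype of the indexing variables.
   print(tvm.lower(s, [A, B, C], simple_mode=True))
   ```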
   
   
   @yzhliu Could you review?




[GitHub] [incubator-tvm] kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

2020-03-18 Thread GitBox
kazum commented on a change in pull request #5073: [Relay][Frontend][ONNX] 
operator support NonZero
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394392382
 
 

 ##
 File path: tests/python/frontend/onnx/test_forward.py
 ##
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, 
output_dtype='float32', opset=None):
-""" Generic function to execute and get tvm output"""
-target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
 if isinstance(input_data, list):
 input_names = {}
 shape_dict = {}
-dtype_dict = {}
 for i, _ in enumerate(input_data):
 input_names[i] = graph_def.graph.input[i].name
 shape_dict[input_names[i]] = input_data[i].shape
-dtype_dict[input_names[i]] = input_data[i].dtype
 else:
 input_names = graph_def.graph.input[0].name
 shape_dict = {input_names: input_data.shape}
+
+return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
+if isinstance(input_data, list):
+input_names = {}
+dtype_dict = {}
+for i, _ in enumerate(input_data):
+input_names[i] = graph_def.graph.input[i].name
+dtype_dict[input_names[i]] = input_data[i].dtype
+else:
+input_names = graph_def.graph.input[0].name
 dtype_dict = {input_names: input_data.dtype}
 
+return input_names, dtype_dict
+
+
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+   opset=None):
+""" Generic function to execute and get tvm output with vm executor"""
+
+_, shape_dict = get_input_data_shape_dict(graph_def, input_data)
+
+mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
+indata = tvm.nd.array(input_data)
+result = ex.evaluate()(indata)
+return result.asnumpy()
+
+
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, 
output_dtype='float32', opset=None):
 
 Review comment:
   Okay, I'm fine with keeping both for now :)




[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

2020-03-18 Thread GitBox
cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] 
operator support NonZero
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394366255
 
 

 ##
 File path: tests/python/frontend/onnx/test_forward.py
 ##
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, 
output_dtype='float32', opset=None):
-""" Generic function to execute and get tvm output"""
-target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
 if isinstance(input_data, list):
 input_names = {}
 shape_dict = {}
-dtype_dict = {}
 for i, _ in enumerate(input_data):
 input_names[i] = graph_def.graph.input[i].name
 shape_dict[input_names[i]] = input_data[i].shape
-dtype_dict[input_names[i]] = input_data[i].dtype
 else:
 input_names = graph_def.graph.input[0].name
 shape_dict = {input_names: input_data.shape}
+
+return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
+if isinstance(input_data, list):
+input_names = {}
+dtype_dict = {}
+for i, _ in enumerate(input_data):
+input_names[i] = graph_def.graph.input[i].name
+dtype_dict[input_names[i]] = input_data[i].dtype
+else:
+input_names = graph_def.graph.input[0].name
 dtype_dict = {input_names: input_data.dtype}
 
+return input_names, dtype_dict
+
+
+def get_tvm_output_with_vm(graph_def, input_data, target, ctx,
+   opset=None):
+""" Generic function to execute and get tvm output with vm executor"""
+
+_, shape_dict = get_input_data_shape_dict(graph_def, input_data)
+
+mod, params = relay.frontend.from_onnx(graph_def, shape_dict, opset=opset)
+
+ex = relay.create_executor('vm', mod=mod, ctx=ctx, target=target)
+indata = tvm.nd.array(input_data)
+result = ex.evaluate()(indata)
+return result.asnumpy()
+
+
+def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, 
output_dtype='float32', opset=None):
 
 Review comment:
   Hi @kazum 
   I'd like to keep both `relay.create_executor` and `relay.build` in this 
PR. 
   
   I cannot replace `relay.build` with `relay.create_executor` directly 
because there are many errors like the one below:
   
   ```
 File "/tvm/tests/python/frontend/onnx/test_forward.py", line 2282, in 

   test_flatten()
   
 File "/tvm/tests/python/frontend/onnx/test_forward.py", line 374, in 
test_flatten
   tvm_out = get_tvm_output(model, x, target, ctx, ref_shape, 'float32')
   
 File "/tvm/tests/python/frontend/onnx/test_forward.py", line 70, in 
get_tvm_output
   result = ex.evaluate()(indata)
   
 File "/tvm/python/tvm/relay/backend/vm.py", line 256, in _vm_wrapper
   return self.vm.run(*args)
   
 File "/tvm/python/tvm/runtime/vm.py", line 366, in run
   return self.invoke("main", *args, **kwargs)
   
 File "/tvm/python/tvm/runtime/vm.py", line 348, in invoke
   return self._invoke(func_name)
   
 File "/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
   raise get_last_ffi_error()
   
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 [bt] (7) 8   ??? 0x7fff54230930 0x0 + 
140734604970288
 [bt] (6) 7   _ctypes.cpython-37m-darwin.so   0x0001104dc2bf 
ffi_call_unix64 + 79
 [bt] (5) 6   libtvm.dylib0x000125071f78 
TVMFuncCall + 72
 [bt] (4) 5   libtvm.dylib0x0001250a4e3f 
std::__1::__function::__func, std::__1::allocator > const&, 
tvm::runtime::ObjectPtr const&)::$_0, 
std::__1::allocator, std::__1::allocator > const&, 
tvm::runtime::ObjectPtr const&)::$_0>, void 
(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, 
tvm::runtime::TVMRetValue*&&) + 735
 [bt] (3) 4   libtvm.dylib0x0001250a199a 
tvm::runtime::vm::VirtualMachine::RunLoop() + 7610
 [bt] (2) 3   libtvm.dylib0x0001250a310f 
tvm::runtime::vm::VirtualMachine::InvokePacked(long long, 
tvm::runtime::PackedFunc const&, long long, long long, 
std::__1::vector > const&) + 1039
 [bt] (1) 2   libtvm.dylib0x00012507b396 
std::__1::__function::__func 
const&)::$_0, std::__1::allocator const&)::$_0>, void 
(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, 
tvm::runtime::TVMRetValue*&&) + 310
 [bt] (0) 1   libtvm.dylib0x000124666af9 
dmlc::LogMessageFatal::~LogMessageFatal() + 57
 File "/tvm/src/runtime/library_module.cc", line 89
   TVMError: Check failed: ret == 0 (-1 vs. 0) : Assert fail: 
(((tvm_struct_get(arg0, 0, 5) == (uint8)2) && (tvm_struct_get(arg0, 0, 6) == 
(uint8)32)) && (tvm_struct_get(arg0, 0, 7) == (

[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] operator support NonZero

2020-03-18 Thread GitBox
cchung100m commented on a change in pull request #5073: [Relay][Frontend][ONNX] 
operator support NonZero
URL: https://github.com/apache/incubator-tvm/pull/5073#discussion_r394358064
 
 

 ##
 File path: tests/python/frontend/onnx/test_forward.py
 ##
 @@ -30,22 +30,54 @@
 import scipy
 
 
-def get_tvm_output(graph_def, input_data, target, ctx, output_shape=None, 
output_dtype='float32', opset=None):
-""" Generic function to execute and get tvm output"""
-target = 'llvm'
+def get_input_data_shape_dict(graph_def, input_data):
 if isinstance(input_data, list):
 input_names = {}
 shape_dict = {}
-dtype_dict = {}
 for i, _ in enumerate(input_data):
 input_names[i] = graph_def.graph.input[i].name
 shape_dict[input_names[i]] = input_data[i].shape
-dtype_dict[input_names[i]] = input_data[i].dtype
 else:
 input_names = graph_def.graph.input[0].name
 shape_dict = {input_names: input_data.shape}
+
+return input_names, shape_dict
+
+
+def get_input_data_dtype_dict(graph_def, input_data):
 
 Review comment:
   Hi @kazum 
   Thanks for the review and I can remove it here.




[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5091: Description updated for pooling attributes

2020-03-18 Thread GitBox
siju-samuel opened a new pull request #5091: Description updated for pooling 
attributes
URL: https://github.com/apache/incubator-tvm/pull/5091
 
 
   @yzhliu @FrozenGene please help to review and merge this.
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   




[GitHub] [incubator-tvm] kazum opened a new pull request #5090: [CODEGEN][OPENCL] Explicitly cast min/max operands

2020-03-18 Thread GitBox
kazum opened a new pull request #5090: [CODEGEN][OPENCL] Explicitly cast 
min/max operands
URL: https://github.com/apache/incubator-tvm/pull/5090
 
 
   OpenCL codegen needs explicit casts on min/max operands to avoid ambiguous 
call errors.
   This will fix the reported bug in 
https://discuss.tvm.ai/t/relay-opencl-quantization-error/5963
   
   #2821 looks no longer necessary, so this PR also reverts it.
   
   @lixiaoquan @tqchen please help to review.

