[GitHub] [incubator-tvm] FrozenGene commented on issue #5200: Fix intel conv2d auto tune

2020-03-31 Thread GitBox
FrozenGene commented on issue #5200: Fix intel conv2d auto tune
URL: https://github.com/apache/incubator-tvm/pull/5200#issuecomment-607062531
 
 
   I think this issue exists in all AutoTVM topi templates.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] eric-haibin-lin closed pull request #5201: Add mxnet converter for BERT

2020-03-31 Thread GitBox
eric-haibin-lin closed pull request #5201: Add mxnet converter for BERT
URL: https://github.com/apache/incubator-tvm/pull/5201
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] sxjscience commented on a change in pull request #5201: Add mxnet converter for BERT

2020-03-31 Thread GitBox
sxjscience commented on a change in pull request #5201: Add mxnet converter for 
BERT
URL: https://github.com/apache/incubator-tvm/pull/5201#discussion_r401382720
 
 

 ##
 File path: topi/tests/python/test_topi_math.py
 ##
 @@ -237,12 +238,12 @@ def check_device(device):
 check_device('llvm -device=arm-cpu')
 
 
-test_apply(topi.fast_exp, "fast_exp", np.exp,
-   low=-88, high=88,
-   step = 0.01)
+test_apply(topi.fast_expØ, "fast_exp", np.exp,
 
 Review comment:
   Found a "Ø".


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
yzhliu commented on a change in pull request #5171: [Arith] linear system and 
equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r401382116
 
 

 ##
 File path: tests/python/unittest/test_arith_solve_linear_system.py
 ##
 @@ -0,0 +1,91 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm
+from tvm import te, arith
+from tvm.tir import ir_pass
+
+
+def test_unique_solution():
+    x, y = te.var("x"), te.var("y")
+    ranges = {}
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 20),
+        tvm.tir.EQ(x - y, 10),
+    ], [x, y], ranges)
+    assert list(solution.dst.variables) == []
+    assert ir_pass.Equal(solution.src_to_dst[x], 15)
+    assert ir_pass.Equal(solution.src_to_dst[y], 5)
+
+
+def test_low_rank():
+    x, y, z = te.var("x"), te.var("y"), te.var("z")
+    ranges = {}
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y + z, 15),
+        tvm.tir.EQ(x + y, 10),
+    ], [x, y, z], ranges)
+    [n0] = solution.dst.variables
+    assert ir_pass.Equal(solution.src_to_dst[x], n0 + 10)
+    assert ir_pass.Equal(solution.src_to_dst[y], -n0)
+    assert ir_pass.Equal(solution.src_to_dst[z], 5)
+
+
+def test_infer_range():
+    x, y = te.var("x"), te.var("y")
+    ranges = {
+        x: tvm.ir.Range.make_by_min_extent(-5, 10),
+        y: tvm.ir.Range.make_by_min_extent(0, 10),
+    }
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 0),
+    ], [x, y], ranges)
+    [n0] = solution.dst.variables
+    assert ir_pass.Equal(solution.src_to_dst[x], n0)
+    assert ir_pass.Equal(solution.src_to_dst[y], -n0)
+    # inferred from y's range
+    assert ir_pass.Equal(solution.dst.ranges[n0].min, -9)
+    assert ir_pass.Equal(solution.dst.ranges[n0].extent, 10)
+    # additional inequality is added into the system for x
+    [ineq] = solution.dst.relations
+    assert isinstance(ineq, tvm.tir.LE)
+    assert ir_pass.Equal(ineq.a, -5)
+    assert ir_pass.Equal(ineq.b, n0)
+
+
+def test_ill_formed():
+    x, y = te.var("x"), te.var("y")
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 0),
+        tvm.tir.EQ(x - y, 0),
+        tvm.tir.EQ(x, 5),
+    ], [x, y], {})
+    assert list(solution.dst.variables) == []
+    [rel] = solution.dst.relations
+    assert ir_pass.Equal(rel, False)
+    assert len(solution.src_to_dst) == 0
+    assert len(solution.dst_to_src) == 0
+
+
+if __name__ == "__main__":
+    test_unique_solution()
+    test_low_rank()
+    test_infer_range()
+    test_ill_formed()
 
 Review comment:
   How did you generate random cases? Random coefficients or something smarter?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu commented on issue #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
yzhliu commented on issue #5171: [Arith] linear system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#issuecomment-607060730
 
 
   @sergei-grechanik do you have an example where we have to end with rewrite?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] eric-haibin-lin opened a new pull request #5201: Add mxnet converter for BERT

2020-03-31 Thread GitBox
eric-haibin-lin opened a new pull request #5201: Add mxnet converter for BERT
URL: https://github.com/apache/incubator-tvm/pull/5201
 
 
   Add the converter for the BERT model in gluonnlp 0.9/mxnet 1.6. Work done 
together with @icemelon9 
   
   @yzhliu FYI 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5192: [FRONTEND][MXNET] Use leaky by default for LeakyReLU

2020-03-31 Thread GitBox
maheshambule commented on a change in pull request #5192: [FRONTEND][MXNET] Use 
leaky by default for LeakyReLU
URL: https://github.com/apache/incubator-tvm/pull/5192#discussion_r401376433
 
 

 ##
 File path: tests/python/frontend/mxnet/test_forward.py
 ##
 @@ -107,6 +107,14 @@ def test_forward_resnet():
 mx_sym = model_zoo.mx_resnet(18)
 verify_mxnet_frontend_impl(mx_sym)
 
+def test_forward_leaky_relu():
+    data = mx.sym.var('data')
+    data = mx.sym.concat(data, -data, dim=1)  # negative part explicitly
+    mx_sym = mx.sym.LeakyReLU(data)
+    verify_mxnet_frontend_impl(mx_sym, (1, 3, 100, 100), (1, 6, 100, 100))
+    mx_sym = mx.sym.LeakyReLU(data, act_type='leaky')
 
 Review comment:
   ok fine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] heliqi removed a comment on issue #4969: [RFC] Enhance TensorFlow Frontend Control Flow Support

2020-03-31 Thread GitBox
heliqi removed a comment on issue #4969: [RFC] Enhance TensorFlow Frontend 
Control Flow Support
URL: https://github.com/apache/incubator-tvm/issues/4969#issuecomment-607026858
 
 
   Have you tried an NLP model? I am using the latest code, and dynamic shapes do not work.
   Recursively finding the input to the control flow nodes has some problems with fixing a
   dynamic input.
   
   For example, I set the shape of the 'Placeholder' op (whose original shape is the dynamic
   (-1, -1)) to (1, 30) in the from_tensorflow interface. First, for each control flow node,
   we backtrack through all its ancestor nodes until we reach the input nodes. But the nodes
   we backtrack through first do not necessarily contain the 'Placeholder' node, so their
   shapes stay dynamic until some control flow node finally reaches the 'Placeholder' node.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kevinthesun opened a new pull request #5200: Fix intel conv2d auto tune

2020-03-31 Thread GitBox
kevinthesun opened a new pull request #5200: Fix intel conv2d auto tune
URL: https://github.com/apache/incubator-tvm/pull/5200
 
 
   debug_skip_region will cause execution time to be inaccurate on x86. This PR 
fixes x86 conv2d and depthwise conv2d.
   
   @icemelon9 @anijain2305 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401352114
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,158 @@
 #include 
 #include 
 
+#include <stack>
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to iteratively traverse dataflow regions of a graph
+ *
+ * ExpandDataflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
+ * need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues iteratively to process the top of the stack. When it finds a 
node that doesn't
+ * match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+// The second state of the stack indicate whether the child has been
+// expanded in the pre-order.
+// NOTE: function will be inlined.
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+if (fcheck_visited(expr)) {
+  // if this node was visited through another path
+  // after being added to the stack ignore it.
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children have already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
+  // No need to expand the children directly run visit.
+  fvisit_leaf(node);
+  stack.pop();
+}
+  }
+}
+
+DataflowVisitor::DataflowVisitor(int visit_limit) {
+  CHECK(visit_limit > 0) << "Dataflow visit limit must be greater than 0";
+  CHECK(visit_limit < 10) << "Dataflow visit limit must be less than 10";
 
 Review comment:
   I see. Let's then add some comments before `visit_limit_` where it is defined in the
   class, and a comment here before the `CHECK`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] heliqi commented on issue #4969: [RFC] Enhance TensorFlow Frontend Control Flow Support

2020-03-31 Thread GitBox
heliqi commented on issue #4969: [RFC] Enhance TensorFlow Frontend Control Flow 
Support
URL: https://github.com/apache/incubator-tvm/issues/4969#issuecomment-607026858
 
 
   Have you tried an NLP model? I am using the latest code, and dynamic shapes do not work.
   Recursively finding the input to the control flow nodes has some problems with fixing a
   dynamic input.
   
   For example, I set the shape of the 'Placeholder' op (whose original shape is the dynamic
   (-1, -1)) to (1, 30) in the from_tensorflow interface. First, for each control flow node,
   we backtrack through all its ancestor nodes until we reach the input nodes. But the nodes
   we backtrack through first do not necessarily contain the 'Placeholder' node, so their
   shapes stay dynamic until some control flow node finally reaches the 'Placeholder' node.
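   For reference, a minimal sketch of the shape-pinning step described above. The
   placeholder name "input_1", the (1, 30) shape, and the layout are illustrative
   assumptions, not values from this issue:
   
       import tvm
       from tvm import relay
       
       def import_with_fixed_shape(graph_def):
           # Map the dynamic (-1, -1) placeholder to a concrete (1, 30) shape so the
           # importer's shape inference sees static dimensions for that input.
           shape_dict = {"input_1": (1, 30)}  # hypothetical placeholder name
           mod, params = relay.frontend.from_tensorflow(graph_def,
                                                        layout="NCHW",
                                                        shape=shape_dict)
           return mod, params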


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5186: [Relay][Topi][AutoTVM] Winograd support for Conv3D

2020-03-31 Thread GitBox
FrozenGene commented on a change in pull request #5186: [Relay][Topi][AutoTVM] 
Winograd support for Conv3D
URL: https://github.com/apache/incubator-tvm/pull/5186#discussion_r401330192
 
 

 ##
 File path: topi/python/topi/cuda/conv3d_winograd.py
 ##
 @@ -0,0 +1,348 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name,unused-variable,unused-argument
+"""Winograd template for cuda backend"""
+
+import logging
+import tvm
+from tvm import te
+from tvm import autotvm
+
+from .. import nn
+from ..util import get_const_int, get_const_tuple, traverse_inline
+from ..nn.winograd_util import winograd_transform_matrices
+
+logger = logging.getLogger('conv3d_winograd')
+
+
+def _infer_tile_size(data, kernel):
+    N, CI, D, H, W = get_const_tuple(data.shape)
+
+    if D % 8 == 0:
+        return 4
+    return 2
+
+
+def winograd_cuda(cfg, data, kernel, strides, padding, dilation, out_dtype, pre_computed):
+    """Compute declaration for winograd"""
+    tile_size = _infer_tile_size(data, kernel)
+
+    N, CI, D, H, W = get_const_tuple(data.shape)
+
+    if isinstance(dilation, int):
+        dilation_d = dilation_h = dilation_w = dilation
+    else:
+        dilation_d, dilation_h, dilation_w = dilation
+    DSTR, HSTR, WSTR = (strides, strides, strides) if isinstance(strides, int) else strides
+
+    if not pre_computed:  # kernel tensor is raw tensor, do strict check
+        if dilation_d != 1 or dilation_h != 1 or dilation_w != 1:
+            kernel = nn.dilate(kernel, (1, 1, dilation_d, dilation_h, dilation_w))
+        CO, CI, KD, KH, KW = get_const_tuple(kernel.shape)
+        alpha = KW + tile_size - 1
+        assert DSTR == 1 and HSTR == 1 and WSTR == 1 and KD == KH and KH == KW
+    else:
+        # kernel tensor is pre-transformed. this op is created by alter op layout.
+        # dilation is not supported
+        alpha, _, _, CI, CO = get_const_tuple(kernel.shape)
+        KD = KH = KW = alpha + 1 - tile_size
+        assert DSTR == 1 and HSTR == 1 and WSTR == 1 and \
+            dilation_d == 1 and dilation_h == 1 and dilation_w == 1
+
+    pf, pt, pl, pb, pd, pr = nn.get_pad_tuple3d(padding, (KD, KH, KW))
+    data_pad = nn.pad(data, (0, 0, pf, pt, pl), (0, 0, pb, pd, pr), name="data_pad")
+
+    r = KW
+    m = tile_size
+    A, B, G = winograd_transform_matrices(m, r, out_dtype)
+
+    D = (D + pf + pb - KD) // DSTR + 1
+    H = (H + pt + pd - KH) // HSTR + 1
+    W = (W + pl + pr - KW) // WSTR + 1
+    nD, nH, nW = (D + m - 1) // m, (H + m - 1) // m, (W + m - 1) // m
+    P = N * nD * nH * nW
+
+    # transform kernel
+    if not pre_computed:
+        r_kd = te.reduce_axis((0, KD), name='r_kd')
+        r_kh = te.reduce_axis((0, KH), name='r_kh')
+        r_kw = te.reduce_axis((0, KW), name='r_kw')
+        kernel_pack = te.compute(
+            (alpha, alpha, alpha, CI, CO),
+            lambda omg, eps, nu, ci, co: te.sum(
+                kernel[co][ci][r_kd][r_kh][r_kw] * G[omg][r_kd] * G[eps][r_kh] * G[nu][r_kw],
+                axis=[r_kd, r_kh, r_kw]),
+            name='kernel_pack')
+    else:
+        kernel_pack = kernel
+
+    idxdiv = tvm.tir.indexdiv
+    idxmod = tvm.tir.indexmod
+    # pack input tile
+    input_tile = te.compute((CI, P, alpha, alpha, alpha),
+                            lambda c, p, omg, eps, nu: data_pad[idxdiv(p, (nD * nH * nW))]
+                            [c]
+                            [idxmod(idxdiv(p, nH * nW), nD) * m + omg]
+                            [idxmod(idxdiv(p, nW), nH) * m + eps]
+                            [idxmod(p, nW) * m + nu],
+                            name='d')
+
+    # transform data
+    r_a = te.reduce_axis((0, alpha), 'r_a')
+    r_b = te.reduce_axis((0, alpha), 'r_b')
+    r_c = te.reduce_axis((0, alpha), 'r_c')
+    data_pack = te.compute(
+        (alpha, alpha, alpha, CI, P),
+        lambda omg, eps, nu, ci, p: te.sum(
+            input_tile[ci][p][r_a][r_b][r_c] * B[r_a][omg] * B[r_b][eps] * B[r_c][nu],
+            axis=[r_a, r_b, r_c]),
+        name='data_pack')
+
+    # do batch gemm
+    ci = te.reduce_axis((0, CI), name='ci')
+    bgemm = te.compute(
+        (alpha, alpha, alpha, CO, P),
+        lambda omg, eps, nu, 

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5194: [PYTORCH]Activations for pytorch

2020-03-31 Thread GitBox
masahi commented on a change in pull request #5194: [PYTORCH]Activations for 
pytorch
URL: https://github.com/apache/incubator-tvm/pull/5194#discussion_r401326602
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -335,6 +335,53 @@ def forward(self, *args):
 input_data = torch.rand(input_shape).float()
 verify_model(ReLU1().float().eval(), input_data=input_data)
 
+def test_forward_prelu():
+    torch.set_grad_enabled(False)
+    input_shape = [1, 3, 10, 10]
+
+    class PReLU1(Module):
+        def __init__(self):
+            super(PReLU1, self).__init__()
+            self.prelu = torch.nn.PReLU(num_parameters=3)
+        def forward(self, *args):
+            return self.prelu(args[0])
+
+    input_data = torch.rand(input_shape).float()
+    verify_model(PReLU1().float().eval(), input_data=input_data)
+
+def test_forward_leakyrelu():
+    torch.set_grad_enabled(False)
+    input_shape = [10, 10]
+
+    class LeakyReLU1(Module):
+        def forward(self, *args):
+            return torch.nn.LeakyReLU(negative_slope=0.05)(args[0])
+
+    input_data = torch.rand(input_shape).float()
+    verify_model(LeakyReLU1().float().eval(), input_data=input_data)
+
+def test_forward_elu():
+    torch.set_grad_enabled(False)
+    input_shape = [10, 10]
+
+    class ELU1(Module):
+        def forward(self, *args):
+            return torch.nn.ELU(alpha=1.3)(args[0])
+
+    input_data = torch.rand(input_shape).float()
+    verify_model(ELU1().float().eval(), input_data=input_data)
+
+def test_forward_log_sigmoid():
+    torch.set_grad_enabled(False)
+    input_shape = [10, 10]
+
+    class LogSigmoid1(Module):
+        def forward(self, *args):
+            return torch.nn.LogSigmoid()(args[0])
+
+    input_data = torch.rand(input_shape).float()
+    verify_model(LogSigmoid1().float().eval(), input_data=input_data)
+
 
 Review comment:
   New tests shouldn't add these wrapper classes. Use the torch modules 
directly. See
   
   
https://github.com/apache/incubator-tvm/blob/430cb89995bff298cca0adf6ef1087d071875d1a/tests/python/frontend/pytorch/test_forward.py#L771-L781
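   
   For instance, a sketch of the suggested style, adapted from the diff above (assuming
   `verify_model` accepts any torch module, as in the referenced tests; not the final code):
   
       input_data = torch.rand([10, 10]).float()
       # no wrapper Module subclasses: pass the torch modules directly
       verify_model(torch.nn.LeakyReLU(negative_slope=0.05).eval(), input_data=input_data)
       verify_model(torch.nn.ELU(alpha=1.3).eval(), input_data=input_data)
       verify_model(torch.nn.LogSigmoid().eval(), input_data=input_data)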


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (03ff0cd -> 2b6d69c)

2020-03-31 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 03ff0cd  [Topi x86] Missing vectorize for depthwise conv2d. (#5196)
 add 2b6d69c  [REFACTOR][TIR] Migrate Low-level Passes to Pass Manager 
(#5198)

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/module.h|   3 +
 include/tvm/tir/transform.h|  26 +++-
 python/tvm/tir/transform/transform.py  |  37 +
 src/ir/module.cc   |   4 +-
 src/ir/transform.cc|   1 +
 src/target/codegen.cc  |   2 -
 src/tir/pass/storage_access.cc | 111 --
 src/tir/transforms/combine_context_call.cc |   2 +-
 .../transforms/lower_device_storage_access_info.cc | 168 +
 src/tir/{pass => transforms}/lower_intrin.cc   |  44 --
 src/tir/{pass => transforms}/lower_warp_memory.cc  |  26 +++-
 .../test_tir_transform_combine_context_call.py |   2 +-
 ...ntrin.py => test_tir_transform_lower_intrin.py} |  25 +--
 py => test_tir_transform_lower_warp_memory.py} |  16 +-
 14 files changed, 319 insertions(+), 148 deletions(-)
 create mode 100644 src/tir/transforms/lower_device_storage_access_info.cc
 rename src/tir/{pass => transforms}/lower_intrin.cc (90%)
 rename src/tir/{pass => transforms}/lower_warp_memory.cc (94%)
 rename tests/python/unittest/{test_tir_pass_lower_intrin.py => 
test_tir_transform_lower_intrin.py} (77%)
 rename tests/python/unittest/{test_tir_pass_lower_warp_memory.py => 
test_tir_transform_lower_warp_memory.py} (72%)



[GitHub] [incubator-tvm] tqchen merged pull request #5198: [REFACTOR][TIR] Migrate Low-level Passes to Pass Manager

2020-03-31 Thread GitBox
tqchen merged pull request #5198: [REFACTOR][TIR] Migrate Low-level Passes to 
Pass Manager
URL: https://github.com/apache/incubator-tvm/pull/5198
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
mbrookhart commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401313355
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,158 @@
 #include 
 #include 
 
+#include <stack>
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to iteratively traverse dataflow regions of a graph
+ *
+ * ExpandDataflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
+ * need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues iteratively to process the top of the stack. When it finds a 
node that doesn't
+ * match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+// The second state of the stack indicate whether the child has been
+// expanded in the pre-order.
+// NOTE: function will be inlined.
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+if (fcheck_visited(expr)) {
+  // if this node was visited through another path
+  // after being added to the stack ignore it.
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children have already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
+  // No need to expand the children directly run visit.
+  fvisit_leaf(node);
+  stack.pop();
+}
+  }
+}
+
+DataflowVisitor::DataflowVisitor(int visit_limit) {
+  CHECK(visit_limit > 0) << "Dataflow visit limit must be greater than 0";
+  CHECK(visit_limit < 10) << "Dataflow visit limit must be less than 10";
 
 Review comment:
   This is primarily in here to support Dead Code Elimination, which visits every node
   twice. The limit of 10 is what I considered an absurdly high number with no realistic
   use case; it is there to prevent things like overflow errors or misuse. I expect almost
   everyone to use the default of 1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] sergei-grechanik commented on issue #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
sergei-grechanik commented on issue #5171: [Arith] linear system and equation 
solver
URL: https://github.com/apache/incubator-tvm/pull/5171#issuecomment-606979220
 
 
   About `SuperSimplify`: it was a combination of the canonical simplifier and 
the rewriting simplifier that worked best for autodiff (I actually ran it with 
different combinations and rewrite->canonical->rewrite turned out to be the 
best, although canonical->rewrite was good too). I think currently the default 
Simplify function is rewrite->canonical, not sure if this order has a good 
justification.
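   
   As a rough illustration of that ordering, a sketch using tvm.arith.Analyzer as a
   stand-in for the two simplifiers discussed (the expression is arbitrary and this is
   not code from the PR):
   
       import tvm
       from tvm import te
       
       ana = tvm.arith.Analyzer()
       x = te.var("x")
       
       def super_simplify(expr):
           # rewrite -> canonical -> rewrite, the combination described above
           expr = ana.rewrite_simplify(expr)
           expr = ana.canonical_simplify(expr)
           return ana.rewrite_simplify(expr)
       
       print(super_simplify((x + 1) * 4 - 4 * x))  # should reduce to the constant 4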


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] sergei-grechanik commented on a change in pull request #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
sergei-grechanik commented on a change in pull request #5171: [Arith] linear 
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r401303278
 
 

 ##
 File path: include/tvm/arith/linear_system.h
 ##
 @@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/arith/linear_system.h
+ * \brief Linear system data structures and solvers
+ */
+#ifndef TVM_ARITH_LINEAR_SYSTEM_H_
+#define TVM_ARITH_LINEAR_SYSTEM_H_
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace arith {
+
+using tir::Var;
+using tir::VarNode;
+using tir::IterVar;
+
+/*!
+ * \brief Represent a linear system including variables, their ranges and
+ *the linear relations between them (either equations or inequalities)
+ * \sa LinearSystem
+ */
+class LinearSystemNode : public Object {
+ public:
+  // e.g., \alpha, \beta
+  Array<Var> variables;
+  // e.g., 1 <= \alpha <= N, etc.
+  Map<Var, Range> ranges;
+  // linear equalities or inequalities
+  // e.g., A \alpha = \beta or A \alpha <= \beta
+  Array<PrimExpr> relations;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("variables", &variables);
+v->Visit("ranges", &ranges);
+v->Visit("relations", &relations);
+  }
+
+  static constexpr const char* _type_key = "arith.LinearSystem";
+  TVM_DECLARE_FINAL_OBJECT_INFO(LinearSystemNode, Object);
+};
+
+/*!
+ * \brief Managed reference to LinearSystemNode.
+ * \sa LinearSystemNode
+ */
+class LinearSystem : public ObjectRef {
+ public:
+  /*!
+   * \brief Constructor by fields
+   * \param variables The variables in the system.
+   * \param ranges    The ranges of the variables.
+   * \param relations The linear relations between the variables
+   *  (either equations or inequalities)
+   */
+  TVM_DLL LinearSystem(Array<Var> variables,
+                       Map<Var, Range> ranges,
+                       Array<PrimExpr> relations);
+
+  TVM_DEFINE_OBJECT_REF_METHODS(LinearSystem, ObjectRef, LinearSystemNode);
+};
+
+/*!
+ * \brief We can have different set of variables to represent the same linear 
system.
+ *For example, the following two systems are equivalent,
+ *{a + b = 0 | a >= 0, b >= 0} and
+ *{m - n = 0 | m >= 0, n <= 0}
+ *This data structure represents the transformation
+ *between two equivalent linear systems.
+ *In the above example,
+ *src: {a + b = 0 | a >= 0, b >= 0}
+ *dst: {m - n = 0 | m >= 0, n <= 0}
+ *src_to_dst : {a -> m, b -> -n}
+ *dst_to_src : {m -> a, n -> -b}
+ * \sa LinearSystemTransform
+ */
+class LinearSystemTransformNode : public Object {
+ public:
+  LinearSystem src;
+  LinearSystem dst;
+  Map<Var, PrimExpr> src_to_dst;
+  Map<Var, PrimExpr> dst_to_src;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("src", &src);
+v->Visit("dst", &dst);
+v->Visit("src_to_dst", &src_to_dst);
+v->Visit("dst_to_src", &dst_to_src);
+  }
+
+  static constexpr const char* _type_key = "arith.LinearSystemTransform";
+  TVM_DECLARE_FINAL_OBJECT_INFO(LinearSystemTransformNode, Object);
+};
+
+/*!
+ * \brief Managed reference to LinearSystemTransformNode.
+ * \sa LinearSystemTransformNode
+ */
+class LinearSystemTransform : public ObjectRef {
+ public:
+  /*!
+   * \brief Constructor by fields
+   * \param src source linear system, e.g., {a + b = 0 | a >= 0, b >= 0}
+   * \param dst linear system equivalent to the source, e.g., {m - n = 0 | m >= 0, n <= 0}
+   * \param src_to_dst mapping from variables in the \p src to the variables 
in the \p dst,
+   *   e.g., {a -> m, b -> -n}
+   * \param dst_to_src mapping from variables in the \p dst to the variables 
in the \p src,
+   *   e.g., {m -> a, n -> -b}
+   */
+  TVM_DLL LinearSystemTransform(LinearSystem src,
+                                LinearSystem dst,
+                                Map<Var, PrimExpr> src_to_dst,
+                                Map<Var, PrimExpr> dst_to_src);
+
+  TVM_DEFINE_OBJECT_REF_METHODS(LinearSystemTransform, ObjectRef, 
LinearSystemTransformNode);
+};
+
+/*!
+ * \brief Obtain Smith Normal Form of linear equation A x = y.
+ *Smith Normal Form of matrix A_{mxn} is S_{mxn} = U_{mxm} A_{mxn

[GitHub] [incubator-tvm] sergei-grechanik commented on a change in pull request #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
sergei-grechanik commented on a change in pull request #5171: [Arith] linear 
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r401302108
 
 

 ##
 File path: include/tvm/arith/linear_system.h
 ##
 @@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file tvm/arith/linear_system.h
+ * \brief Linear system data structures and solvers
+ */
+#ifndef TVM_ARITH_LINEAR_SYSTEM_H_
+#define TVM_ARITH_LINEAR_SYSTEM_H_
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace arith {
+
+using tir::Var;
+using tir::VarNode;
+using tir::IterVar;
+
+/*!
+ * \brief Represent a linear system including variables, their ranges and
+ *the linear relations between them (either equations or inequalities)
 
 Review comment:
   I think it's worth mentioning that the system is integer.
   
   Also I'm not sure if LinearSystem is a good name, because it may be useful 
to have non-linear (in)equalities in `relations` (which doesn't break the 
algorithms that work on the linear part).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] sergei-grechanik commented on a change in pull request #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
sergei-grechanik commented on a change in pull request #5171: [Arith] linear 
system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#discussion_r401307502
 
 

 ##
 File path: tests/python/unittest/test_arith_solve_linear_system.py
 ##
 @@ -0,0 +1,91 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+import tvm
+from tvm import te, arith
+from tvm.tir import ir_pass
+
+
+def test_unique_solution():
+    x, y = te.var("x"), te.var("y")
+    ranges = {}
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 20),
+        tvm.tir.EQ(x - y, 10),
+    ], [x, y], ranges)
+    assert list(solution.dst.variables) == []
+    assert ir_pass.Equal(solution.src_to_dst[x], 15)
+    assert ir_pass.Equal(solution.src_to_dst[y], 5)
+
+
+def test_low_rank():
+    x, y, z = te.var("x"), te.var("y"), te.var("z")
+    ranges = {}
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y + z, 15),
+        tvm.tir.EQ(x + y, 10),
+    ], [x, y, z], ranges)
+    [n0] = solution.dst.variables
+    assert ir_pass.Equal(solution.src_to_dst[x], n0 + 10)
+    assert ir_pass.Equal(solution.src_to_dst[y], -n0)
+    assert ir_pass.Equal(solution.src_to_dst[z], 5)
+
+
+def test_infer_range():
+    x, y = te.var("x"), te.var("y")
+    ranges = {
+        x: tvm.ir.Range.make_by_min_extent(-5, 10),
+        y: tvm.ir.Range.make_by_min_extent(0, 10),
+    }
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 0),
+    ], [x, y], ranges)
+    [n0] = solution.dst.variables
+    assert ir_pass.Equal(solution.src_to_dst[x], n0)
+    assert ir_pass.Equal(solution.src_to_dst[y], -n0)
+    # inferred from y's range
+    assert ir_pass.Equal(solution.dst.ranges[n0].min, -9)
+    assert ir_pass.Equal(solution.dst.ranges[n0].extent, 10)
+    # additional inequality is added into the system for x
+    [ineq] = solution.dst.relations
+    assert isinstance(ineq, tvm.tir.LE)
+    assert ir_pass.Equal(ineq.a, -5)
+    assert ir_pass.Equal(ineq.b, n0)
+
+
+def test_ill_formed():
+    x, y = te.var("x"), te.var("y")
+
+    solution = arith.solve_equations([
+        tvm.tir.EQ(x + y, 0),
+        tvm.tir.EQ(x - y, 0),
+        tvm.tir.EQ(x, 5),
+    ], [x, y], {})
+    assert list(solution.dst.variables) == []
+    [rel] = solution.dst.relations
+    assert ir_pass.Equal(rel, False)
+    assert len(solution.src_to_dst) == 0
+    assert len(solution.dst_to_src) == 0
+
+
+if __name__ == "__main__":
+    test_unique_solution()
+    test_low_rank()
+    test_infer_range()
+    test_ill_formed()
 
 Review comment:
   I would also recommend bringing in testing via random system generation; it helped me
   a lot in discovering subtle bugs.
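   
   One possible sketch of such a generator (assumptions: only full-rank draws are checked,
   and the arith.solve_equations API is as shown in the test file above; this is not the
   PR's code):
   
       import random
       import tvm
       from tvm import te, arith
       
       def check_random_system(num_vars=2, num_eqs=2):
           # Pick a known integer solution, then emit equations with random
           # coefficients that this solution satisfies.
           xs = [te.var("x%d" % i) for i in range(num_vars)]
           expected = [random.randint(-10, 10) for _ in range(num_vars)]
           eqs = []
           for _ in range(num_eqs):
               coefs = [random.randint(-5, 5) for _ in range(num_vars)]
               lhs = sum(c * x for c, x in zip(coefs, xs))
               rhs = sum(c * v for c, v in zip(coefs, expected))
               eqs.append(tvm.tir.EQ(lhs, rhs))
           solution = arith.solve_equations(eqs, xs, {})
           # Only the fully determined case is checked; low-rank draws keep free vars.
           if len(solution.dst.variables) == 0:
               for x, v in zip(xs, expected):
                   res = solution.src_to_dst[x]
                   assert isinstance(res, tvm.tir.IntImm) and res.value == v
       
       for _ in range(100):
           check_random_system()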


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5092: [TIR][PASS] dtype rewrite for indexing variables

2020-03-31 Thread GitBox
tqchen commented on issue #5092: [TIR][PASS] dtype rewrite for indexing 
variables
URL: https://github.com/apache/incubator-tvm/pull/5092#issuecomment-606966333
 
 
   @hzfan Good work. We are in the process of migrating to the new transform pass manager
   API. Can you also add a variant of the pass for IRModule and change the test cases to
   the new style? We can still keep using the old API until we have migrated everything
   into the new pass style.
   
   Reference: https://github.com/apache/incubator-tvm/pull/5198


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5092: [TIR][PASS] dtype rewrite for indexing variables

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #5092: [TIR][PASS] dtype rewrite 
for indexing variables
URL: https://github.com/apache/incubator-tvm/pull/5092#discussion_r401299065
 
 

 ##
 File path: src/tir/pass/narrow_datatype.cc
 ##
 @@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file narrow_datatype.cc
+ * \brief narrow the datatype of indexing vars
+ */
+
+#include 
+#include 
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+// This pass narrows indexing expressions (like StoreNode::Index)
+// that trivially fit into i32 to i32. Considering that i32 indices
+// may be more efficient on some backends (while i64 may be more
+// efficient on others, like llvm), we may want this pass when i32
+// indices are more efficient.
+//
+// For Var v, we determine its dtype by examining all the PrimExpr
+// that contains v, denoted by E = {e_0 = v, e_1, e_2, ..., e_k}.
+// If all expressions in E fit into i32, then we think v can be narrowed
+// to i32.
+//
+// To make an indexing expression i32, we must make sure that every
+// component of that expression is of dtype i32. So besides Var, we
+// rewrite the following inside an indexing expression
+// - Var
+// - IntImm
+// - Cast
+//
+// Algorithm:
+// - Use DataTypeVisitor to determine whether a Var can be narrowed or not.
+// - Use DataTypeRewritter to rewrite the components of an indexing expression.
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+using arith::ConstIntBound;
+
+class DataTypeVisitor final : public StmtExprVisitor {
+ public:
+  explicit DataTypeVisitor(int target_bits)
+: bits_(target_bits), target_bits_(target_bits) {}
+
+  void VisitExpr(const PrimExpr& e) {
+if (e.dtype().is_int()) {
+  int bits = max_bits_;
+  ConstIntBound bound = analyzer_.const_int_bound(e);
 
 Review comment:
   Memoization can have unintended consequences if the vars can be bound to different
   context-dependent info (e.g. in `if (x<10) { x+1; } else x;`, the constraint `x<10` is
   only effective in the then branch).
   
   I would say perhaps we could have another API that passes in an unordered map, and
   ask the analyzer to record every intermediate step into the map.
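   
   A small sketch of the hazard (illustrative only; it uses the Python Analyzer's
   constraint_scope, which I am assuming mirrors the C++ EnterConstraint behavior):
   
       import tvm
       from tvm import te
       
       ana = tvm.arith.Analyzer()
       x = te.var("x")
       
       with ana.constraint_scope(x < 10):
           # inside the "then" branch: x < 10 is in effect, so the bound on x + 1
           # can be tightened
           bound_then = ana.const_int_bound(x + 1)
       
       # outside the scope the same expression has no such upper bound, so a cache
       # keyed only on the expression would return a stale, context-dependent result
       bound_else = ana.const_int_bound(x + 1)
       print(bound_then, bound_else)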


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5183: [DOCS] Use https link

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #5183: [DOCS] Use https link
URL: https://github.com/apache/incubator-tvm/pull/5183#discussion_r401298381
 
 

 ##
 File path: docs/contribute/document.rst
 ##
 @@ -20,7 +20,7 @@
 Write Document and Tutorials
 
 
-We use the `Sphinx `_ for the main documentation.
+We use the `Sphinx `_ for the main documentation.
 
 Review comment:
   good catch, just reverted this one to http


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606963207
 
 
   Hi @mbaret, I prepared this fix to include TupleNode in the annotation when it is
   surrounded by supported nodes, in a similar way to what you have done for
   TupleGetItemNode.
   
   Here is the code; you are welcome to include it in this PR if you would like.
   
   https://github.com/trevor-m/tvm/commit/2b7c3b1d040abf1f88fca3857c65f13c4f012b2e


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606963207
 
 
   Hi @mbaret, I prepared this small enhancement to include TupleNode in the annotation
   when it is surrounded by supported nodes, in a similar way to what you have done for
   TupleGetItemNode.
   
   Here is the code; you are welcome to include it in this PR if you would like.
   
   https://github.com/trevor-m/tvm/commit/2b7c3b1d040abf1f88fca3857c65f13c4f012b2e


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5092: [TIR][PASS] dtype rewrite for indexing variables

2020-03-31 Thread GitBox
yzhliu commented on a change in pull request #5092: [TIR][PASS] dtype rewrite 
for indexing variables
URL: https://github.com/apache/incubator-tvm/pull/5092#discussion_r401289944
 
 

 ##
 File path: src/tir/pass/narrow_datatype.cc
 ##
 @@ -0,0 +1,373 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file narrow_datatype.cc
+ * \brief narrow the datatype of indexing vars
+ */
+
+#include 
+#include 
+#include "../../arith/ir_mutator_with_analyzer.h"
+#include "../../arith/ir_visitor_with_analyzer.h"
+
+namespace tvm {
+namespace tir {
+
+// This pass narrows indexing expressions (like StoreNode::Index)
+// that trivially fit into i32 to i32. Considering that i32 indices
+// may be more efficient on some backends (while i64 may be more
+// efficient on others, like llvm), we may want this pass when i32
+// indices are more efficient.
+//
+// For Var v, we determine its dtype by examining all the PrimExpr
+// that contains v, denoted by E = {e_0 = v, e_1, e_2, ..., e_k}.
+// If all expressions in E fit into i32, then we think v can be narrowed
+// to i32.
+//
+// To make an indexing expression i32, we must make sure that every
+// component of that expression is of dtype i32. So besides Var, we
+// rewrite the following inside an indexing expression
+// - Var
+// - IntImm
+// - Cast
+//
+// Algorithm:
+// - Use DataTypeVisitor to determine whether a Var can be narrowed or not.
+// - Use DataTypeRewritter to rewrite the components of an indexing expression.
+
+using arith::Analyzer;
+using arith::IRMutatorWithAnalyzer;
+using arith::ConstIntBound;
+
+class DataTypeVisitor final : public StmtExprVisitor {
+ public:
+  explicit DataTypeVisitor(int target_bits)
+: bits_(target_bits), target_bits_(target_bits) {}
+
+  void VisitExpr(const PrimExpr& e) {
+if (e.dtype().is_int()) {
+  int bits = max_bits_;
+  ConstIntBound bound = analyzer_.const_int_bound(e);
 
 Review comment:
   @hzfan I think it is good, but why not always do memoization? @tqchen 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] roastduck commented on issue #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
roastduck commented on issue #5193: [TE] Support mixing normal and cross-thread 
reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#issuecomment-606956774
 
 
   All resolved. @wpan11nv 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kognat-docs commented on issue #5199: Tensorflow Tutorial Fails with Metal as the context

2020-03-31 Thread GitBox
kognat-docs commented on issue #5199: Tensorflow Tutorial Fails with Metal as 
the context
URL: https://github.com/apache/incubator-tvm/issues/5199#issuecomment-606955843
 
 
   See the attached Python script.
   
   
[metal_tf_demo.zip](https://github.com/apache/incubator-tvm/files/4412095/metal_tf_demo.zip)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kognat-docs opened a new issue #5199: Tensorflow Tutorial Fails with Metal as the context

2020-03-31 Thread GitBox
kognat-docs opened a new issue #5199: Tensorflow Tutorial Fails with Metal as 
the context
URL: https://github.com/apache/incubator-tvm/issues/5199
 
 
   See:
   
   # tvm, relay
   import tvm
   from tvm import te
   from tvm import relay
   
   # os and numpy
   import numpy as np
   import os.path
   
   # Tensorflow imports
   import tensorflow as tf
   tf_compat_v1 = tf
   
   # Tensorflow utility functions
   import tvm.relay.testing.tf as tf_testing
   
   # Base location for model related files.
   repo_base = 
'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
   
   # Test image
   img_name = 'elephant-299.jpg'
   image_url = os.path.join(repo_base, img_name)
   
   model_name = 'classify_image_graph_def-with_shapes.pb'
   model_url = os.path.join(repo_base, model_name)
   
   # Image label map
   map_proto = 'imagenet_2012_challenge_label_map_proto.pbtxt'
   map_proto_url = os.path.join(repo_base, map_proto)
   
   # Human readable text for labels
   label_map = 'imagenet_synset_to_human_label_map.txt'
   label_map_url = os.path.join(repo_base, label_map)
   
   # Target settings
   # Use these commented settings to build for cuda.
   #target = 'cuda'
   #target_host = 'llvm'
   #layout = "NCHW"
   #ctx = tvm.gpu(0)
   target = 'metal'
   target_host = 'llvm'
   layout = "NCHW"
   ctx = tvm.metal(0)
   from tvm.contrib.download import download_testdata
   
   img_path = download_testdata(image_url, img_name, module='data')
   model_path = download_testdata(model_url, model_name, module=['tf', 
'InceptionV1'])
   map_proto_path = download_testdata(map_proto_url, map_proto, module='data')
   label_path = download_testdata(label_map_url, label_map, module='data')
   
   with tf_compat_v1.gfile.GFile(model_path, 'rb') as f:
       graph_def = tf_compat_v1.GraphDef()
       graph_def.ParseFromString(f.read())
       graph = tf.import_graph_def(graph_def, name='')
       # Call the utility to import the graph definition into default graph.
       graph_def = tf_testing.ProcessGraphDefParam(graph_def)
       # Add shapes to the graph.
       with tf_compat_v1.Session() as sess:
           graph_def = tf_testing.AddShapesToGraphDef(sess, 'softmax')
   
   from PIL import Image
   image = Image.open(img_path).resize((299, 299))
   
   x = np.array(image)
   
   shape_dict = {'DecodeJpeg/contents': x.shape}
   dtype_dict = {'DecodeJpeg/contents': 'uint8'}
   mod, params = relay.frontend.from_tensorflow(graph_def,
                                                layout=layout,
                                                shape=shape_dict)
   
   print("Tensorflow protobuf imported to relay frontend.")
   with relay.build_config(opt_level=3):
       graph, lib, params = relay.build(mod,
                                        target=target,
                                        target_host=target_host,
                                        params=params)
   
   
   
   from tvm.contrib import graph_runtime
   dtype = 'uint8'
   m = graph_runtime.create(graph, lib, ctx)
   # set inputs
   m.set_input('DecodeJpeg/contents', tvm.nd.array(x.astype(dtype)))
   m.set_input(**params)
   # execute
   m.run()
   # get outputs
   tvm_output = m.get_output(0, tvm.nd.empty(((1, 1008)), 'float32'))
   
   predictions = tvm_output.asnumpy()
   predictions = np.squeeze(predictions)
   
   # Creates node ID --> English string lookup.
   node_lookup = tf_testing.NodeLookup(label_lookup_path=map_proto_path,
   uid_lookup_path=label_path)
   
   # Print top 5 predictions from TVM output.
   top_k = predictions.argsort()[-5:][::-1]
   for node_id in top_k:
       human_string = node_lookup.id_to_string(node_id)
       score = predictions[node_id]
       print('%s (score = %.5f)' % (human_string, score))
   
   
   def create_graph():
       """Creates a graph from saved GraphDef file and returns a saver."""
       # Creates graph from saved graph_def.pb.
       with tf_compat_v1.gfile.GFile(model_path, 'rb') as f:
           graph_def = tf_compat_v1.GraphDef()
           graph_def.ParseFromString(f.read())
           graph = tf.import_graph_def(graph_def, name='')
           # Call the utility to import the graph definition into default graph.
           graph_def = tf_testing.ProcessGraphDefParam(graph_def)
   
   def run_inference_on_image(image):
       """Runs inference on an image.

       Parameters
       ----------
       image: String
           Image file name.

       Returns
       -------
           Nothing
       """
       if not tf_compat_v1.gfile.Exists(image):
           tf.logging.fatal('File does not exist %s', image)
       image_data = tf_compat_v1.gfile.GFile(image, 'rb').read()

       # Creates graph from saved GraphDef.
       create_graph()

       with tf_compat_v1.Session() as sess:
           softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
           predictions = sess.run(softmax_tensor,

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401284934
 
 

 ##
 File path: src/relay/transforms/merge_compiler_regions.cc
 ##
 @@ -86,126 +86,145 @@ class AnnotateRestDefault : public ExprMutator {
   }
 
   /*! \brief This function adds compiler ends to nodes that
-   * have a region AND they should not be arguments of the
-   * original function
+   * don't belong to a region already (default).
* \param expr The expression to add a compiler end to.
* \return expr The expression with or without a compiler end added.
*/
-  Expr AddCompilerEnd(const Expr& expr) {
-auto region = regions_->GetRegion(expr);
-auto visited_expr = VisitExpr(expr);
-
-// The compiler ends are added to nodes that does have a region
-// AND they should not be arguments of the original function
-if (!region.defined() &&
-   std::find(func_->params.begin(),
- func_->params.end(), visited_expr)
-   == func_->params.end()) {
-  return AddCompilerEnd_(visited_expr);
+  Expr InsertEnd(const Expr& expr) {
+if (annotated_nodes_.find(expr) == annotated_nodes_.end() &&
+!expr->IsInstance() && !expr->IsInstance()) {
+  const auto *end_op =
+runtime::Registry::Get("relay.op.annotation._make.compiler_end");
+  CHECK(end_op);
+  Expr end = (*end_op)(expr, target_);
+  return end;
 }
-return visited_expr;
+return expr;
   }
 
-  Expr AddCompilerEnd_(const Expr& expr) {
-const auto* end_op =
-  runtime::Registry::Get("relay.op.annotation._make.compiler_end");
-CHECK(end_op);
-Expr end = (*end_op)(expr, target_);
-return end;
+  /*! \brief This function adds compiler begins to nodes that
+ * don't belong to a region already (default).
+ * \param expr The expression to add a compiler begin to.
+ * \return expr The expression with or without a compiler begin added.
+ */
+  Expr InsertBegin(const Expr& expr) {
+const auto *begin_op =
 
 Review comment:
   const auto*


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401284888
 
 

 ##
 File path: src/relay/transforms/merge_compiler_regions.cc
 ##
 @@ -86,126 +86,145 @@ class AnnotateRestDefault : public ExprMutator {
   }
 
   /*! \brief This function adds compiler ends to nodes that
-   * have a region AND they should not be arguments of the
-   * original function
+   * don't belong to a region already (default).
* \param expr The expression to add a compiler end to.
* \return expr The expression with or without a compiler end added.
*/
-  Expr AddCompilerEnd(const Expr& expr) {
-auto region = regions_->GetRegion(expr);
-auto visited_expr = VisitExpr(expr);
-
-// The compiler ends are added to nodes that does have a region
-// AND they should not be arguments of the original function
-if (!region.defined() &&
-   std::find(func_->params.begin(),
- func_->params.end(), visited_expr)
-   == func_->params.end()) {
-  return AddCompilerEnd_(visited_expr);
+  Expr InsertEnd(const Expr& expr) {
+if (annotated_nodes_.find(expr) == annotated_nodes_.end() &&
+!expr->IsInstance() && !expr->IsInstance()) {
+  const auto *end_op =
 
 Review comment:
   const auto*


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401284830
 
 

 ##
 File path: src/relay/transforms/merge_compiler_regions.cc
 ##
 @@ -86,126 +86,145 @@ class AnnotateRestDefault : public ExprMutator {
   }
 
   /*! \brief This function adds compiler ends to nodes that
-   * have a region AND they should not be arguments of the
-   * original function
+   * don't belong to a region already (default).
* \param expr The expression to add a compiler end to.
* \return expr The expression with or without a compiler end added.
*/
-  Expr AddCompilerEnd(const Expr& expr) {
-auto region = regions_->GetRegion(expr);
-auto visited_expr = VisitExpr(expr);
-
-// The compiler ends are added to nodes that does have a region
-// AND they should not be arguments of the original function
-if (!region.defined() &&
-   std::find(func_->params.begin(),
- func_->params.end(), visited_expr)
-   == func_->params.end()) {
-  return AddCompilerEnd_(visited_expr);
+  Expr InsertEnd(const Expr& expr) {
+if (annotated_nodes_.find(expr) == annotated_nodes_.end() &&
+!expr->IsInstance() && !expr->IsInstance()) {
+  const auto *end_op =
+runtime::Registry::Get("relay.op.annotation._make.compiler_end");
+  CHECK(end_op);
+  Expr end = (*end_op)(expr, target_);
+  return end;
 }
-return visited_expr;
+return expr;
   }
 
-  Expr AddCompilerEnd_(const Expr& expr) {
-const auto* end_op =
-  runtime::Registry::Get("relay.op.annotation._make.compiler_end");
-CHECK(end_op);
-Expr end = (*end_op)(expr, target_);
-return end;
+  /*! \brief This function adds compiler begins to nodes that
 
 Review comment:
   Alignment


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401116877
 
 

 ##
 File path: src/relay/analysis/annotated_region_set.cc
 ##
 @@ -73,7 +73,7 @@ void AnnotatedRegionSetNode::MergeRegions(AnnotatedRegion 
src,
 void AnnotatedRegionSetNode::AddToRegion(AnnotatedRegion region, const Expr& 
expr) {
   auto region2 = GetRegion(expr);
   if (region2.defined()) {
-MergeRegions(region, region2);
+MergeRegions(region2, region);
 
 Review comment:
   I would suggest we use `src` and `dst` to reduce the confusion.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu merged pull request #5196: [Topi x86] Missing vectorize for depthwise conv2d.

2020-03-31 Thread GitBox
yzhliu merged pull request #5196: [Topi x86] Missing vectorize for depthwise 
conv2d.
URL: https://github.com/apache/incubator-tvm/pull/5196
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu commented on issue #5196: [Topi x86] Missing vectorize for depthwise conv2d.

2020-03-31 Thread GitBox
yzhliu commented on issue #5196: [Topi x86] Missing vectorize for depthwise 
conv2d.
URL: https://github.com/apache/incubator-tvm/pull/5196#issuecomment-606948771
 
 
   Thanks @anijain2305 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (14ae3a6 -> 03ff0cd)

2020-03-31 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 14ae3a6  [RELAY] Re-wrote the Graph Partitioner to support multiple 
outputs (#5143)
 add 03ff0cd  [Topi x86] Missing vectorize for depthwise conv2d. (#5196)

No new revisions were added by this update.

Summary of changes:
 topi/python/topi/x86/depthwise_conv2d.py | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-tvm] yzhliu commented on a change in pull request #5183: [DOCS] Use https link

2020-03-31 Thread GitBox
yzhliu commented on a change in pull request #5183: [DOCS] Use https link
URL: https://github.com/apache/incubator-tvm/pull/5183#discussion_r401278896
 
 

 ##
 File path: docs/contribute/document.rst
 ##
 @@ -20,7 +20,7 @@
 Write Document and Tutorials
 
 
-We use the `Sphinx `_ for the main documentation.
+We use the `Sphinx `_ for the main documentation.
 
 Review comment:
   this one seems to be not working.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5198: [REFACTOR][TIR] Migrate Low-level Passes to Pass Manager

2020-03-31 Thread GitBox
tqchen commented on issue #5198: [REFACTOR][TIR] Migrate Low-level Passes to 
Pass Manager
URL: https://github.com/apache/incubator-tvm/pull/5198#issuecomment-606944607
 
 
   cc @yzhliu @ZihengJiang @vinx13 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen opened a new pull request #5198: [REFACTOR][TIR] Migrate Low-level Passes to Pass Manager

2020-03-31 Thread GitBox
tqchen opened a new pull request #5198: [REFACTOR][TIR] Migrate Low-level 
Passes to Pass Manager
URL: https://github.com/apache/incubator-tvm/pull/5198
 
 
   - LowerIntrin
   - LowerDeviceStorageAccessInfo
   - LowerWarpMemory


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] yzhliu commented on issue #5171: [Arith] linear system and equation solver

2020-03-31 Thread GitBox
yzhliu commented on issue #5171: [Arith] linear system and equation solver
URL: https://github.com/apache/incubator-tvm/pull/5171#issuecomment-606936203
 
 
   @tqchen changed to use arith::Analyzer


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
wpan11nv commented on a change in pull request #5193: [TE] Support mixing 
normal and cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#discussion_r401264115
 
 

 ##
 File path: tests/python/unittest/test_target_codegen_cuda.py
 ##
 @@ -321,6 +321,33 @@ def check_cuda(dtype, m=32, n=32):
 check_cuda("float32")
 check_cuda("float16")
 
+def test_cuda_mix_threaded_and_normal_reduction():
+def check_cuda(dtype, m=32, n=32):
+if not tvm.gpu(0).exist or not tvm.runtime.enabled("cuda"):
+print("skip because cuda is not enabled..")
+return
+if dtype == "float16" and not have_fp16(tvm.gpu(0).compute_version):
+print("Skip because gpu does not have fp16 support")
+return
+
+a = tvm.te.placeholder((m, n), name="a", dtype=dtype)
+b = topi.sum(a)
+with tvm.target.cuda():
+sb = tvm.te.create_schedule(b.op)
+i, j = b.op.reduce_axis
 
 Review comment:
   i, j ==> i, _
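
   (Aside, not part of the review: a minimal sketch of what mixing a cross-thread and a normal reduction looks like in schedule terms. The shapes and split factor are assumed for illustration, this is not the PR's actual test, and it exercises exactly the case this PR adds support for.)

```python
import tvm
from tvm import te

n, m = 128, 32
a = te.placeholder((n, m), name="a")
k = te.reduce_axis((0, m), name="k")
b = te.compute((n,), lambda i: te.sum(a[i, k], axis=k), name="b")

s = te.create_schedule(b.op)
ko, ki = s[b].split(k, factor=8)
s[b].bind(b.op.axis[0], te.thread_axis("blockIdx.x"))
s[b].bind(ki, te.thread_axis("threadIdx.x"))  # cross-thread part of the reduction
# ko is left unbound, so it stays a normal serial reduction inside each thread.
print(tvm.lower(s, [a, b], simple_mode=True))
```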
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
wpan11nv commented on a change in pull request #5193: [TE] Support mixing 
normal and cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#discussion_r401264162
 
 

 ##
 File path: tests/python/unittest/test_target_codegen_cuda.py
 ##
 @@ -481,4 +508,4 @@ def run_test(dtype):
 test_cuda_floordiv_with_vectorization()
 test_vectorized_intrin1()
 test_vectorized_intrin2()
-test_vectorized_popcount()
\ No newline at end of file
+test_vectorized_popcount()
 
 Review comment:
   invoke your test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
wpan11nv commented on a change in pull request #5193: [TE] Support mixing 
normal and cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#discussion_r401263693
 
 

 ##
 File path: src/te/operation/cross_thread_reduction.cc
 ##
 @@ -57,10 +57,63 @@ Stmt MakeCrossThreadReduction(
   for (PrimExpr v : conds) {
 cond = cond && v;
   }
+
+  std::vector<std::vector<Stmt> > common, normal_red;
+  for (size_t i = 0, n = stage->leaf_iter_vars.size(); i < n; ++i) {
+IterVar iv = stage->leaf_iter_vars[i];
+IterVarAttr attr;
+auto it = stage->iter_var_attrs.find(iv);
+if (it != stage->iter_var_attrs.end()) {
+  attr = (*it).second;
+}
+if (iv->iter_type == kCommReduce) {
+  if (attr.defined() && attr->bind_thread.defined()) {
+common.emplace_back(nest[i + 1]);
+  } else {
+normal_red.emplace_back(nest[i + 1]);
+  }
+} else {
+  common.emplace_back(nest[i + 1]);
+}
+  }
+
+  // If we load from and then store into the same res_handles in the 
thread_allreduce intrinsic,
+  // somethings goes wrong, so we use an extra variable here for normal 
reduction.
 
 Review comment:
   %s/somethings/something/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
wpan11nv commented on a change in pull request #5193: [TE] Support mixing 
normal and cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#discussion_r401263693
 
 

 ##
 File path: src/te/operation/cross_thread_reduction.cc
 ##
 @@ -57,10 +57,63 @@ Stmt MakeCrossThreadReduction(
   for (PrimExpr v : conds) {
 cond = cond && v;
   }
+
+  std::vector<std::vector<Stmt> > common, normal_red;
+  for (size_t i = 0, n = stage->leaf_iter_vars.size(); i < n; ++i) {
+IterVar iv = stage->leaf_iter_vars[i];
+IterVarAttr attr;
+auto it = stage->iter_var_attrs.find(iv);
+if (it != stage->iter_var_attrs.end()) {
+  attr = (*it).second;
+}
+if (iv->iter_type == kCommReduce) {
+  if (attr.defined() && attr->bind_thread.defined()) {
+common.emplace_back(nest[i + 1]);
+  } else {
+normal_red.emplace_back(nest[i + 1]);
+  }
+} else {
+  common.emplace_back(nest[i + 1]);
+}
+  }
+
+  // If we load from and then store into the same res_handles in the 
thread_allreduce intrinsic,
+  // somethings goes wrong, so we use an extra variable here for normal 
reduction.
 
 Review comment:
   %/res_handles/res_handle
   %s/somethings/something/


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
wpan11nv commented on a change in pull request #5193: [TE] Support mixing 
normal and cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#discussion_r401263649
 
 

 ##
 File path: src/te/operation/cross_thread_reduction.cc
 ##
 @@ -57,10 +57,63 @@ Stmt MakeCrossThreadReduction(
   for (PrimExpr v : conds) {
 cond = cond && v;
   }
+
+  std::vector<std::vector<Stmt> > common, normal_red;
 
 Review comment:
   no space before > now (> C++11).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm removed a comment on issue #5186: [Relay][Topi][AutoTVM] Winograd support for Conv3D

2020-03-31 Thread GitBox
jwfromm removed a comment on issue #5186: [Relay][Topi][AutoTVM] Winograd 
support for Conv3D
URL: https://github.com/apache/incubator-tvm/pull/5186#issuecomment-606923274
 
 
   @merrymercy, there's one little bit of this PR that isn't quite working yet. 
After autotuning, the alter_op_layout pass successfully converts 
`conv3d_winograd` to `contrib_conv3d_winograd_without_weight_transform` but 
then the autotvm dispatcher complains about not being able to find the new op. 
What should happen is that the same schedule used for `conv3d_winograd` is applied. 
As far as I can tell, everything is completely analogous to the conv2d case 
which works fine. Is there some special casing somewhere to make this work for 
conv2d?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm commented on issue #5186: [Relay][Topi][AutoTVM] Winograd support for Conv3D

2020-03-31 Thread GitBox
jwfromm commented on issue #5186: [Relay][Topi][AutoTVM] Winograd support for 
Conv3D
URL: https://github.com/apache/incubator-tvm/pull/5186#issuecomment-606923274
 
 
   @merrymercy, there's one little bit of this PR that isn't quite working yet. 
After autotuning, the alter_op_layout pass successfully converts 
`conv3d_winograd` to `contrib_conv3d_winograd_without_weight_transform` but 
then the autotvm dispatcher complains about not being able to find the new op. 
What should happen is that the same schedule used for `conv3d_winograd` is applied. 
As far as I can tell, everything is completely analogous to the conv2d case 
which works fine. Is there some special casing somewhere to make this work for 
conv2d?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401246595
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,158 @@
 #include 
 #include 
 
+#include 
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to iteratively traverse dataflow regions of a graph
+ *
+ * ExpandDatflow manually manages a stack and performs DFS to determine the 
processing
 
 Review comment:
   ExpandDataflow


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401249373
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,158 @@
 #include 
 #include 
 
+#include 
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to iteratively traverse dataflow regions of a graph
+ *
+ * ExpandDatflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
+ * need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues iteratively to process the top of the stack. When it finds a 
node that doesn't
+ * match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf 
fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+// The second state of the stack indicate whether the child has been
+// expanded in the pre-order.
+// NOTE: function will be inlined.
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+if (fcheck_visited(expr)) {
+  // if this node was visited through another path
+  // after being added to the stack ignore it.
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children have already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
+  // No need to expand the children directly run visit.
+  fvisit_leaf(node);
+  stack.pop();
+}
+  }
+}
+
+DataflowVisitor::DataflowVisitor(int visit_limit) {
+  CHECK(visit_limit > 0) << "Dataflow visit limit must be greater than 0";
+  CHECK(visit_limit < 10) << "Dataflow visit limit must be less than 10";
 
 Review comment:
   Could you explain why it must be less than 10 here? I don't fully understand the functionality of `visit_limit`. Will we just stop visiting once it reaches the limit?
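
   (Aside: for readers of the `ExpandDataflow` doc comment quoted above, a self-contained Python sketch of the same iterative post-order traversal on a toy graph. The `(name, children)` node representation and the `visited` set are made up for illustration; the real code works on relay `Expr` nodes.)

```python
def expand_dataflow(root, visited, visit_leaf):
    # Manual stack of [node, children_pushed] pairs replaces recursion,
    # so deeply nested dataflow graphs cannot overflow the call stack.
    stack = [[root, False]]
    while stack:
        node, expanded = stack[-1]
        name, children = node
        if name in visited:
            stack.pop()                # already handled via another path
        elif expanded:
            visit_leaf(node)           # post-order: children are done
            visited.add(name)
            stack.pop()
        else:
            stack[-1][1] = True        # mark children as pushed
            for child in reversed(children):
                if child[0] not in visited:
                    stack.append([child, False])

leaf_a = ("a", [])
leaf_b = ("b", [])
call = ("call", [leaf_a, leaf_b])
order = []
expand_dataflow(call, set(), lambda n: order.append(n[0]))
print(order)  # ['a', 'b', 'call'] -- post-DFS order, no recursion
```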


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
zhiics commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401248550
 
 

 ##
 File path: src/relay/ir/expr_functor.cc
 ##
 @@ -29,8 +29,158 @@
 #include 
 #include 
 
+#include 
+
 namespace tvm {
 namespace relay {
+/*!
+ * \brief A function to iteratively traverse dataflow regions of a graph
+ *
+ * ExpandDatflow manually manages a stack and performs DFS to determine the 
processing
+ * order of nodes in an input graph.
+ *
+ * If it finds a dataflow node (Call, Tuple, TupleGetItem), it checks if the 
arguments to that node
+ * need to be processed via fcheck_visited. If so, the function pushes those 
arguments to the stack
+ * and continues iteratively to process the top of the stack. When it finds a 
node that doesn't
+ * match the dataflow types, or a node whose inputs have all been processed, 
it visits the current
+ * leaf via fvisit_leaf.
+ *
+ * This function should be used internally to other classes to implement 
mixed-mode traversals. The
+ * expectation is that fvisit_leaf will perform recursive analysis within 
mixed-mode traversal if it
+ * hits a non-dataflow node.
+ *
+ * fcheck_visited and fvisit_leaf are templated to encourage compiler inlining.
+ */
+template <typename FCheckVisited, typename FVisitLeaf>
+void ExpandDataflow(Expr expr, FCheckVisited fcheck_visited, FVisitLeaf 
fvisit_leaf) {
+  std::stack<std::pair<Expr, bool>> stack;
+  auto fpush_to_stack = [&fcheck_visited, &stack](const Expr& expr) {
+// The second state of the stack indicate whether the child has been
+// expanded in the pre-order.
+// NOTE: function will be inlined.
+if (!fcheck_visited(expr)) {
+  stack.push({expr, false});
+}
+  };
+  fpush_to_stack(expr);
+  while (stack.size() > 0) {
+auto node = stack.top().first;
+if (fcheck_visited(expr)) {
+  // if this node was visited through another path
+  // after being added to the stack ignore it.
+  stack.pop();
+} else if (stack.top().second) {
+  // all the children have already been expanded.
+  // we can just run post order visit on it.
+  fvisit_leaf(node);
+  stack.pop();
+} else if (const CallNode* op = node.as<CallNode>()) {
+  // mark expanded = true
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->args.rbegin(); it != op->args.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+  fpush_to_stack(op->op);
+} else if (const TupleNode* op = node.as<TupleNode>()) {
+  stack.top().second = true;
+  // push the children to the stack in reverse order
+  // to match recursive processing order
+  for (auto it = op->fields.rbegin(); it != op->fields.rend(); ++it) {
+fpush_to_stack(*it);
+  }
+} else if (const TupleGetItemNode* op = node.as<TupleGetItemNode>()) {
+  stack.top().second = true;
+  fpush_to_stack(op->tuple);
+} else {
 
 Review comment:
   also need to handle `MatchNode` and ref nodes?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jroesch commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
jroesch commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401236242
 
 

 ##
 File path: include/tvm/relay/expr_functor.h
 ##
 @@ -232,6 +232,181 @@ class ExprMutator
   std::unordered_map memo_;
 };
 
+/*!
+ * \brief A wrapper around ExprVisitor which traverses the Dataflow Normal AST.
+ *
+ * DataflowVisitor treats Expr as dataflow graph, and visits in post-DFS order
+ *
+ * DataflowVisitor provides the same recursive API as ExprVisitor, and uses
+ * recursion to traverse most forms of the IR, but under the hood it expands 
nested dataflow regions
+ * of the graph and processes them iteratatively to prevent stack overflows
+ */
+class DataflowVisitor : public ::tvm::relay::ExprVisitor {
 
 Review comment:
   I don't think ScopeVisitor is a very good name, i.e., what is the property that the visitor is maintaining/respecting? It is not just scoping.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen closed issue #5189: Python unit test test_tuple_type crashes

2020-03-31 Thread GitBox
tqchen closed issue #5189: Python unit test test_tuple_type crashes 
URL: https://github.com/apache/incubator-tvm/issues/5189
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5189: Python unit test test_tuple_type crashes

2020-03-31 Thread GitBox
tqchen commented on issue #5189: Python unit test test_tuple_type crashes 
URL: https://github.com/apache/incubator-tvm/issues/5189#issuecomment-606878745
 
 
   Thanks for bringing up the issue. It would be great if you could dig a bit further. Also, the community mostly uses https://discuss.tvm.ai/ for troubleshooting-related discussions, so it would be great if we could open a new thread there with more details (e.g. stack trace, valgrind).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
tqchen commented on issue #5193: [TE] Support mixing normal and cross-thread 
reduction
URL: https://github.com/apache/incubator-tvm/pull/5193#issuecomment-606877894
 
 
   Thanks @roastduck  for contributing.  cc @vinx13 @merrymercy @Hzfengsy 
@wpan11nv  please also help to take a look:)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401215336
 
 

 ##
 File path: include/tvm/relay/expr_functor.h
 ##
 @@ -232,6 +232,181 @@ class ExprMutator
   std::unordered_map memo_;
 };
 
+/*!
+ * \brief A wrapper around ExprVisitor which traverses the Dataflow Normal AST.
+ *
+ * DataflowVisitor treats Expr as dataflow graph, and visits in post-DFS order
+ *
+ * DataflowVisitor provides the same recursive API as ExprVisitor, and uses
+ * recursion to traverse most forms of the IR, but under the hood it expands 
nested dataflow regions
+ * of the graph and processes them iteratively to prevent stack overflows
+ */
+class DataflowVisitor : public ::tvm::relay::ExprVisitor {
+ public:
+  DataflowVisitor(int visit_limit = 1);
+
+  /*!
+   * \brief VisitExpr is finalized to preserve call expansion of dataflow 
regions
+   */
+  void VisitExpr(const Expr& expr) final;
+  void VisitExpr_(const CallNode* op) override;
+  void VisitExpr_(const TupleNode* op) override;
+  void VisitExpr_(const TupleGetItemNode* op) override;
+
+
+ protected:
+  /*!
+   * \brief A function to apply when reaching a leaf of the graph 
non-recursively
+   */
+  virtual void VisitLeaf(const Expr& expr);
+  /*!
+   * \brief A function to determine if an expression has already been visited 
or needs to be
+   * re-visited
+   */
+  virtual bool CheckVisited(const Expr& expr);
+  /*!
+   * \brief The max number of times to visit a node
+   */
+  size_t visit_limit_;
+};
+
+/*! \brief Non-recursive DFS Graph Traversal for Custom Rewriting Passes
+ *
+ * ScopeMutator treats Expr as dataflow graph, and only Rewrites each Expr 
once.
+ * The mutated results are memoized in a map and reused so that
+ * local transformation on the dataflow preserves the graph structure.
+ *
+ * ScopeMutator provides the same recursive API as ExprMutator, and uses
+ * recursion to traverse most forms of the IR, but under the hood it expands 
nested dataflow regions
+ * of the graph and processes them iteratively to prevent stack overflows
+ *
+ * Uses Rewrite_ API of ExprRewriter for a cleaner split between recursive and 
non-recursive behavior.
+ */
+class ScopeMutator : public ::tvm::relay::ExprMutator {
+ public:
+  Expr Mutate(const Expr& expr) final;
+  Expr VisitExpr_(const TupleNode* op) final { return Rewrite(op); };
+  Expr VisitExpr_(const CallNode* call_node) final { return 
Rewrite(call_node); };
+  Expr VisitExpr_(const TupleGetItemNode* op) final { return Rewrite(op); };
+  /*!
+   *  Users should override Rewrite_ methods to implement their pass. Rewrite_ 
functions will be
+   * able to rewrite the op only with data about the original node `pre` and 
the same node with
 
 Review comment:
   document all the arguments


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5197: [TOPI x86] Adding unroll_kw config option for depthwise conv2d.

2020-03-31 Thread GitBox
anijain2305 opened a new pull request #5197: [TOPI x86] Adding unroll_kw config 
option for depthwise conv2d.
URL: https://github.com/apache/incubator-tvm/pull/5197
 
 
   @yzhliu 
   
   unroll_kw is used for normal conv2d schedules as well. In the case of depthwise conv2d, the input pixels are actually vectorized (not broadcast as in the normal conv2d case). This can potentially create opportunities for reusing the data vector across two output pixels, so this PR adds an unroll_kw config option.
   
   In any case, this does not bring any perf degradation; it only makes the search space larger.
   
   Concern - This might require a minor change in Tophub configuration. I can 
make that change once this PR is merged (or just before).
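
   (Aside: a rough, hedged sketch of the knob pattern being added, on a toy reduction rather than the actual depthwise conv2d template; the function name, shapes, and defaults are made up for illustration.)

```python
import tvm
from tvm import te, autotvm

def toy_reduction_schedule(n=64, kw=3):
    """Toy stand-in for a depthwise conv2d template, showing the knob pattern."""
    data = te.placeholder((n, kw), name="data")
    k = te.reduce_axis((0, kw), name="k")
    out = te.compute((n,), lambda i: te.sum(data[i, k], axis=k), name="out")
    s = te.create_schedule(out.op)

    # Inside a real @autotvm.template this config comes from the tuner;
    # outside of tuning, get_config() returns a fallback configuration.
    cfg = autotvm.get_config()
    cfg.define_knob("unroll_kw", [True, False])  # the new tuning option
    if cfg["unroll_kw"].val:
        # Unroll the kernel-width reduction so the vectorized input pixels
        # can be reused across output pixels.
        s[out].unroll(k)
    return s, [data, out]

s, args = toy_reduction_schedule()
print(tvm.lower(s, args, simple_mode=True))
```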
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401213764
 
 

 ##
 File path: include/tvm/relay/expr_functor.h
 ##
 @@ -232,6 +232,181 @@ class ExprMutator
   std::unordered_map memo_;
 };
 
+/*!
+ * \brief A wrapper around ExprVisitor which traverses the Dataflow Normal AST.
+ *
+ * DataflowVisitor treats Expr as dataflow graph, and visits in post-DFS order
+ *
+ * DataflowVisitor provides the same recursive API as ExprVisitor, and uses
+ * recursion to traverse most forms of the IR, but under the hood it expands 
nested dataflow regions
+ * of the graph and processes them iteratively to prevent stack overflows
+ */
+class DataflowVisitor : public ::tvm::relay::ExprVisitor {
+ public:
+  DataflowVisitor(int visit_limit = 1);
+
+  /*!
+   * \brief VisitExpr is finalized to preserve call expansion of dataflow 
regions
+   */
+  void VisitExpr(const Expr& expr) final;
+  void VisitExpr_(const CallNode* op) override;
+  void VisitExpr_(const TupleNode* op) override;
+  void VisitExpr_(const TupleGetItemNode* op) override;
+
+
 
 Review comment:
   One empty line


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401213597
 
 

 ##
 File path: include/tvm/relay/expr_functor.h
 ##
 @@ -196,7 +196,7 @@ class ExprMutator
* \brief Mutate is alias for VisitExpr
* \return expr.
*/
-  Expr Mutate(const Expr& expr) {
+  virtual Expr Mutate(const Expr& expr) {
 
 Review comment:
   I still think it is better not to subclass Mutate; instead, override VisitExpr in the ScopeMutator, which calls into DispatchVisitExpr to do the dispatching.
   
   This way we do not have to change a lot of calls of VisitExpr into Mutate in the subclasses, which can be confusing (the user has to think about which one to call).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
tqchen commented on a change in pull request #4886: [WIP][POC]First pass a 
defining at non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#discussion_r401214283
 
 

 ##
 File path: include/tvm/relay/expr_functor.h
 ##
 @@ -232,6 +232,181 @@ class ExprMutator
   std::unordered_map memo_;
 };
 
+/*!
+ * \brief A wrapper around ExprVisitor which traverses the Dataflow Normal AST.
+ *
+ * DataflowVisitor treats Expr as dataflow graph, and visits in post-DFS order
+ *
+ * DataflowVisitor provides the same recursive API as ExprVisitor, and uses
+ * recursion to traverse most forms of the IR, but under the hood it expands 
nested dataflow regions
+ * of the graph and processes them iteratively to prevent stack overflows
+ */
+class DataflowVisitor : public ::tvm::relay::ExprVisitor {
 
 Review comment:
   Shall we unify the naming convention of DataflowVisitor and ScopeMutator? 
Perhaps ScopeVisitor?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #4886: [WIP][POC]First pass a defining at non-recursive Graph Vistor and Rewriter

2020-03-31 Thread GitBox
tqchen commented on issue #4886: [WIP][POC]First pass a defining at 
non-recursive Graph Vistor and Rewriter
URL: https://github.com/apache/incubator-tvm/pull/4886#issuecomment-606873195
 
 
   cc @icemelon9 @yzhliu @anijain2305 @zhiics please help to review this PR; let us aim to bring it in this week.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 opened a new pull request #5196: [Topi x86] Missing vectorize for depthwise conv2d.

2020-03-31 Thread GitBox
anijain2305 opened a new pull request #5196: [Topi x86] Missing vectorize for 
depthwise conv2d.
URL: https://github.com/apache/incubator-tvm/pull/5196
 
 
   @yzhliu @kevinthesun 
   
   Missing vectorize. I checked the TVM IR: this vectorizes the stores of partial output registers to memory. I did not see any perf benefit, likely because it is negligible, but we should do it explicitly. We already do this for conv2d_avx_common.
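
   (For illustration only: a minimal sketch of the kind of vectorize call being added, on a toy compute rather than the actual topi/x86 depthwise conv2d schedule; shapes and names are made up.)

```python
import tvm
from tvm import te

# The innermost axis plays the role of the output-channel block (the `vc`
# axis) whose stores get vectorized.
n, c, w, vc = 1, 8, 32, 16
data = te.placeholder((n, c, w, vc), name="data")
out = te.compute((n, c, w, vc), lambda b, i, j, k: data[b, i, j, k] * 2.0, name="out")

s = te.create_schedule(out.op)
b, i, j, k = out.op.axis
s[out].vectorize(k)  # vectorize stores along the innermost block axis
print(tvm.lower(s, [data, out], simple_mode=True))
```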
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401175060
 
 

 ##
 File path: src/relay/transforms/annotate_target.cc
 ##
 @@ -52,6 +55,13 @@ class AnnotateTargetWrapper : public ExprMutator {
 return fannotate[op](call->attrs, call->args);
   }
 }
+if (expr->IsInstance()) {
 
 Review comment:
   I don't think that many nodes will need to be supported here. 
TupleGetItemNode is needed for batchnorm. I have a PR ready to add support for 
TupleNode which is needed for concatenate. I haven't encountered any other 
nodes yet that would need to be added here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401175060
 
 

 ##
 File path: src/relay/transforms/annotate_target.cc
 ##
 @@ -52,6 +55,13 @@ class AnnotateTargetWrapper : public ExprMutator {
 return fannotate[op](call->attrs, call->args);
   }
 }
+if (expr->IsInstance()) {
 
 Review comment:
   I have a PR ready to add support for TupleNode.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kazum commented on a change in pull request #5192: [FRONTEND][MXNET] Use leaky by default for LeakyReLU

2020-03-31 Thread GitBox
kazum commented on a change in pull request #5192: [FRONTEND][MXNET] Use leaky 
by default for LeakyReLU
URL: https://github.com/apache/incubator-tvm/pull/5192#discussion_r401172125
 
 

 ##
 File path: tests/python/frontend/mxnet/test_forward.py
 ##
 @@ -107,6 +107,14 @@ def test_forward_resnet():
 mx_sym = model_zoo.mx_resnet(18)
 verify_mxnet_frontend_impl(mx_sym)
 
+def test_forward_leaky_relu():
+data = mx.sym.var('data')
+data = mx.sym.concat(data, -data, dim=1)  # negative part explicitly
+mx_sym = mx.sym.LeakyReLU(data)
+verify_mxnet_frontend_impl(mx_sym, (1, 3, 100, 100), (1, 6, 100, 100))
+mx_sym = mx.sym.LeakyReLU(data, act_type='leaky')
 
 Review comment:
   Only the leaky relu has two patterns to be applied. Having two tests for it looks good to me.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606786052
 
 
   Thanks @mbaret ! This fixes the segfault issues with resnet and other 
networks.
   
   However now it is impossible to execute a model that is not fully offloaded 
to an external codegen. Since `AnnotateRestDefault` labels the ops meant to be 
run in TVM as "relay.ext.default" it results in a failure during codegen since 
there is obviously no "default" codegen: `TVMError: Check failed: pf: Failed to 
find the codegen tool for relay.ext.default`. Am I supposed to do something 
extra to remove the default label after partitioning?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
comaniac commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401126984
 
 

 ##
 File path: src/relay/transforms/annotate_target.cc
 ##
 @@ -52,6 +55,13 @@ class AnnotateTargetWrapper : public ExprMutator {
 return fannotate[op](call->attrs, call->args);
   }
 }
+if (expr->IsInstance()) {
 
 Review comment:
   I have a concern about this. It seems like you have to list all possible 
nodes here and it's easy to miss something. For now I would suggest adding an 
exception at least to indicate that we need to add more here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
comaniac commented on a change in pull request #5195: [RELAY] Fixes to 
MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#discussion_r401134416
 
 

 ##
 File path: src/relay/transforms/merge_compiler_regions.cc
 ##
 @@ -239,10 +258,14 @@ class RegionMerger : public ExprVisitor {
   void VisitExpr_(const CallNode* call) final {
 if (call->op == compiler_end_op) {
   auto region = regions_->GetRegion(GetRef(call));
+  auto node = (*region->GetOutputs().begin()).as();
+  std::string name = "";
 
 Review comment:
   What is this `name` for?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on issue #5143: [RELAY] Re-wrote the Graph Partitioner to support multiple outputs

2020-03-31 Thread GitBox
zhiics commented on issue #5143: [RELAY] Re-wrote the Graph Partitioner to 
support multiple outputs
URL: https://github.com/apache/incubator-tvm/pull/5143#issuecomment-606795653
 
 
   Thanks @manupa-arm @comaniac 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (9cb9a51 -> 14ae3a6)

2020-03-31 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9cb9a51  rocm: fix dense_rocblas in strategy, topi (#5191)
 add 14ae3a6  [RELAY] Re-wrote the Graph Partitioner to support multiple 
outputs (#5143)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/partition_graph.cc | 449 ++--
 tests/python/relay/test_pass_partition_graph.py | 183 ++
 2 files changed, 451 insertions(+), 181 deletions(-)



[GitHub] [incubator-tvm] zhiics merged pull request #5143: [RELAY] Re-wrote the Graph Partitioner to support multiple outputs

2020-03-31 Thread GitBox
zhiics merged pull request #5143: [RELAY] Re-wrote the Graph Partitioner to 
support multiple outputs
URL: https://github.com/apache/incubator-tvm/pull/5143
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m edited a comment on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606786052
 
 
   Thanks @mbaret ! This fixes the issues with resnet and other networks.
   
   However now it is impossible to execute a model that is not fully offloaded 
to an external codegen. Since `AnnotateRestDefault` labels the ops meant to be 
run in TVM as "relay.ext.default" it results in a failure during codegen since 
there is obviously no "default" codegen: `TVMError: Check failed: pf: Failed to 
find the codegen tool for relay.ext.default`. Am I supposed to do something 
extra to remove the default label after partitioning?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5192: [FRONTEND][MXNET] Use leaky by default for LeakyReLU

2020-03-31 Thread GitBox
maheshambule commented on a change in pull request #5192: [FRONTEND][MXNET] Use 
leaky by default for LeakyReLU
URL: https://github.com/apache/incubator-tvm/pull/5192#discussion_r401115279
 
 

 ##
 File path: tests/python/frontend/mxnet/test_forward.py
 ##
 @@ -107,6 +107,14 @@ def test_forward_resnet():
 mx_sym = model_zoo.mx_resnet(18)
 verify_mxnet_frontend_impl(mx_sym)
 
+def test_forward_leaky_relu():
+data = mx.sym.var('data')
+data = mx.sym.concat(data, -data, dim=1)  # negative part explicitly
+mx_sym = mx.sym.LeakyReLU(data)
+verify_mxnet_frontend_impl(mx_sym, (1, 3, 100, 100), (1, 6, 100, 100))
+mx_sym = mx.sym.LeakyReLU(data, act_type='leaky')
 
 Review comment:
   All the other existing ReLU test cases seem to follow the same pattern. Can we 
refactor them into a single function, as sketched below?
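   A minimal sketch of what such a shared helper could look like (the helper name 
and its default shapes are hypothetical, not part of this PR; 
`verify_mxnet_frontend_impl` is the existing test utility):
   
   ```python
   import mxnet as mx
   
   # Hypothetical helper consolidating the LeakyReLU-style activation tests.
   def verify_act(act_type=None, ishape=(1, 3, 100, 100), oshape=(1, 6, 100, 100)):
       data = mx.sym.var('data')
       data = mx.sym.concat(data, -data, dim=1)  # cover the negative part explicitly
       if act_type is None:
           mx_sym = mx.sym.LeakyReLU(data)                     # default act_type
       else:
           mx_sym = mx.sym.LeakyReLU(data, act_type=act_type)  # explicit act_type
       verify_mxnet_frontend_impl(mx_sym, ishape, oshape)
   
   def test_forward_leaky_relu():
       verify_act()           # default
       verify_act('leaky')    # explicit 'leaky'
   ```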


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
zhiics commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606787991
 
 
   For "default", I commented in the other PR: 
https://github.com/apache/incubator-tvm/pull/5028
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] trevor-m commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
trevor-m commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606786052
 
 
   Thanks @mbaret ! This fixes the issues with resnet and other networks.
   
   I did notice, however, that `AnnotateRestDefault` labels the ops meant to be 
run in TVM as "relay.ext.default", which results in a failure during codegen 
since there is obviously no "default" codegen: `TVMError: Check failed: pf: 
Failed to find the codegen tool for relay.ext.default`. Am I supposed to do 
something extra to remove the default label after partitioning?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 commented on issue #5167: [RELAY][QNN]Add support for QNN Div operator

2020-03-31 Thread GitBox
anijain2305 commented on issue #5167: [RELAY][QNN]Add support for QNN Div 
operator
URL: https://github.com/apache/incubator-tvm/pull/5167#issuecomment-606777410
 
 
   Thanks @siju-samuel, I will read it and review the PR today. Thanks for 
increasing the coverage :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
kevinthesun commented on a change in pull request #5103: [Relay][ADT]Static 
Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r401099670
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
+  tensor_constructor(op.take(tvar,
+ op.arange(lower, upper, 
dtype='int32'),
+ axis=0)))
+self.prelude.mod[take_var] = \
+Function([t, lower, upper],
+ Match(t, [case], False), tensor_type_var(), [])
+
+def define_tensor_concatenate(self):
+"""Defines a function to concatenate two tensor_t on axis 0.
+tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t

[GitHub] [incubator-tvm] mbaret commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
mbaret commented on issue #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195#issuecomment-606764975
 
 
   cc @zhiics @trevor-m @comaniac 
   Can you see if you still observe failures with this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] mbaret opened a new pull request #5195: [RELAY] Fixes to MergeCompilerRegions

2020-03-31 Thread GitBox
mbaret opened a new pull request #5195: [RELAY] Fixes to MergeCompilerRegions
URL: https://github.com/apache/incubator-tvm/pull/5195
 
 
   There were a few outstanding issues with the previous PR to implement the 
MergeCompilerRegions pass. This PR addresses those issues and fixes the 
AnnotateTarget test.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen merged pull request #5191: rocm: fix dense_rocblas in strategy, topi

2020-03-31 Thread GitBox
tqchen merged pull request #5191: rocm: fix dense_rocblas in strategy, topi
URL: https://github.com/apache/incubator-tvm/pull/5191
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated: rocm: fix dense_rocblas in strategy, topi (#5191)

2020-03-31 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 9cb9a51  rocm: fix dense_rocblas in strategy, topi (#5191)
9cb9a51 is described below

commit 9cb9a51f37eaa9c7692f15f8c5ae52fa70394209
Author: Thomas Viehmann 
AuthorDate: Tue Mar 31 18:37:51 2020 +0200

rocm: fix dense_rocblas in strategy, topi (#5191)
---
 python/tvm/relay/op/strategy/rocm.py | 2 +-
 topi/python/topi/rocm/dense.py   | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/python/tvm/relay/op/strategy/rocm.py 
b/python/tvm/relay/op/strategy/rocm.py
index 0486f71..6cda346 100644
--- a/python/tvm/relay/op/strategy/rocm.py
+++ b/python/tvm/relay/op/strategy/rocm.py
@@ -129,7 +129,7 @@ def dense_strategy_rocm(attrs, inputs, out_type, target):
 assert out_type.dtype == inputs[0].dtype, "Mixed precision not 
supported."
 strategy.add_implementation(
 wrap_compute_dense(topi.rocm.dense_rocblas),
-wrap_topi_schedule(topi.rocm.dense_rocblas),
+wrap_topi_schedule(topi.rocm.schedule_dense_rocblas),
 name="dense_rocblas.rocm",
 plevel=15)
 return strategy
diff --git a/topi/python/topi/rocm/dense.py b/topi/python/topi/rocm/dense.py
index 097120d..989cc2a 100644
--- a/topi/python/topi/rocm/dense.py
+++ b/topi/python/topi/rocm/dense.py
@@ -123,6 +123,8 @@ def dense_rocblas(cfg, data, weight, bias=None, 
out_dtype=None):
 output : tvm.te.Tensor
 2-D with shape [batch, out_dim]
 """
+if out_dtype is None:
+out_dtype = data.dtype
 assert out_dtype == data.dtype, "Mixed precision not supported."
 matmul = rocblas.matmul(data, weight, False, True)
 batch, in_dim = data.shape



[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5194: [PYTORCH]Activations for pytorch

2020-03-31 Thread GitBox
siju-samuel opened a new pull request #5194: [PYTORCH]Activations for pytorch
URL: https://github.com/apache/incubator-tvm/pull/5194
 
 
   Some activation functions and their test cases for Torch.
   @masahi Please help review this PR, TIA.
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] roastduck opened a new pull request #5193: [TE] Support mixing normal and cross-thread reduction

2020-03-31 Thread GitBox
roastduck opened a new pull request #5193: [TE] Support mixing normal and 
cross-thread reduction
URL: https://github.com/apache/incubator-tvm/pull/5193
 
 
   Currently TVM only supports pure normal (i.e. sequential) reduction or pure 
cross-thread reduction. Since TVM does not support nested reduction yet, one 
cannot even schedule a mixed reduction manually. I modified the function that 
lowers cross-thread reduction to support mixed reduction as well.
   
   The approach is straightforward: first perform the normal reduction into a 
local variable in each thread, and then invoke the original cross-thread 
reduction intrinsic. It works like this (pseudo-code):
   
   ```c++
   // Divide the loop nest into two parts
   normal_red = sequential loop nest
   common = other loop nest
   
   // If normal_red is empty, fall back to the original code
   
   normal_init = generate init for the temp var
   normal_update = generate sequential reduction on the temp var
   body = generate cross-thread reduction // original code
   
   // Merge loop nests and add some checks
   body = SeqStmt(normal_init, MergeNest(normal_red, normal_update), body)
   body = MergeNest(common, body)
   return body
   
   ```
   
   A test case is added as a Python unit test.
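   For reference, a schedule of the kind this change enables would look roughly 
like the following (a minimal sketch with illustrative sizes and split factors, 
not the actual unit test):
   
   ```python
   import tvm
   from tvm import te
   
   # Reduce 128 elements per row: 32 sequentially inside each thread, then
   # combine the 4 per-thread partials with a cross-thread reduction.
   n, m = 4, 128
   A = te.placeholder((n, m), name="A")
   k = te.reduce_axis((0, m), name="k")
   B = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="B")
   
   s = te.create_schedule(B.op)
   ko, ki = s[B].split(B.op.reduce_axis[0], factor=32)
   s[B].bind(B.op.axis[0], te.thread_axis("blockIdx.x"))
   s[B].bind(ko, te.thread_axis("threadIdx.x"))  # cross-thread part
   # ki (32 iterations) stays a normal sequential reduction within each thread
   print(tvm.lower(s, [A, B], simple_mode=True))
   ```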
   
   This is my first PR to TVM, and I am not sure whom to invite as a reviewer. 
Since this is related to the compilation passes, @tqchen could you review my code?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] siju-samuel commented on issue #5167: [RELAY][QNN]Add support for QNN Div operator

2020-03-31 Thread GitBox
siju-samuel commented on issue #5167: [RELAY][QNN]Add support for QNN Div 
operator
URL: https://github.com/apache/incubator-tvm/pull/5167#issuecomment-606714956
 
 
   Hi @anijain2305, I was trying to run a quantized MobileNetV3; it has a 
hard_swish node, which needs a div op.
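   For context, hard_swish(x) = x * relu6(x + 3) / 6, so the (float) Relay 
pattern that ends up needing a division looks roughly like the sketch below 
(illustrative only, not the QNN lowering added in this PR):
   
   ```python
   from tvm import relay
   
   # hard_swish(x) = x * clip(x + 3, 0, 6) / 6 -- the trailing "/ 6" is the
   # division a quantized MobileNetV3 graph needs qnn support for.
   def hard_swish(x):
       relu6 = relay.clip(relay.add(x, relay.const(3.0)), a_min=0.0, a_max=6.0)
       return relay.divide(relay.multiply(x, relu6), relay.const(6.0))
   
   x = relay.var("x", shape=(1, 16, 112, 112), dtype="float32")
   func = relay.Function([x], hard_swish(x))
   print(func)
   ```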
   
   
![image](https://user-images.githubusercontent.com/15828974/78047089-911ade80-7395-11ea-9b82-8397b7b4dd9b.png)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400886793
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
 
 Review comment:
   ```suggestion
   """Defines the static tensor ADT, which is the container for tensors
   with fixed shape."""
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400890313
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
 
 Review comment:
   ```suggestion
   case = Clause(PatternConstructor(tensor_constructor, 
[PatternVar(tvar)]),
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400880265
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
 
 Review comment:
   Maybe we should improve this a bit. Can we use `'_'.join(self.shape)`?
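   A small sketch of what that could look like. Note that `'_'.join` needs string 
elements, so the shape dims (ints, or `Any` for dynamic dims) would have to be 
converted first (illustrative only, not part of this PR):
   
   ```python
   # Illustrative helper: shape elements are ints (or Any), so str() them first.
   def _shape_str(shape):
       if len(shape) == 0:
           return "scalar"
       return '_'.join(str(dim) for dim in shape)
   
   assert _shape_str([]) == "scalar"
   assert _shape_str([1, 3, 224, 224]) == "1_3_224_224"
   ```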


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400890849
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
+  tensor_constructor(op.take(tvar,
+ op.arange(lower, upper, 
dtype='int32'),
+ axis=0)))
+self.prelude.mod[take_var] = \
+Function([t, lower, upper],
+ Match(t, [case], False), tensor_type_var(), [])
+
+def define_tensor_concatenate(self):
+"""Defines a function to concatenate two tensor_t on axis 0.
+tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t

[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400895709
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
+  tensor_constructor(op.take(tvar,
+ op.arange(lower, upper, 
dtype='int32'),
+ axis=0)))
+self.prelude.mod[take_var] = \
+Function([t, lower, upper],
+ Match(t, [case], False), tensor_type_var(), [])
+
+def define_tensor_concatenate(self):
+"""Defines a function to concatenate two tensor_t on axis 0.
+tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t

[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400893534
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
+  tensor_constructor(op.take(tvar,
+ op.arange(lower, upper, 
dtype='int32'),
+ axis=0)))
+self.prelude.mod[take_var] = \
+Function([t, lower, upper],
+ Match(t, [case], False), tensor_type_var(), [])
+
+def define_tensor_concatenate(self):
+"""Defines a function to concatenate two tensor_t on axis 0.
+tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t

[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400888717
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
 
 Review comment:
   Add a comment that if the static rank is 0, we don't generate the `take` 
operator.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on a change in pull request #5103: [Relay][ADT]Static Tensor 
Array
URL: https://github.com/apache/incubator-tvm/pull/5103#discussion_r400892086
 
 

 ##
 File path: python/tvm/relay/prelude.py
 ##
 @@ -27,6 +27,538 @@
 from . import op
 
 
+class StaticTensorArrayOps(object):
+"""Contains tensor array related ops for fixed rank tensor array"""
+
+def __init__(self, prelude, dtype, shape):
+"""Create tensor array ops registry"""
+self.prelude = prelude
+self.dtype = dtype
+self.shape = shape
+
+def get_name(self, canonical):
+"""Get name corresponding to the canonical name"""
+shape_str = str(self.shape).replace('[', '').replace(']', '')\
+.replace('(', '').replace(')', '').replace(', ', '_')\
+.replace(',', '')
+if len(shape_str) == 0:
+shape_str = "scalar"
+if canonical == 'tensor_t':
+return 'static_tensor_{}_{}_t'.format(self.dtype, shape_str)
+return "{}_{}_{}".format(canonical, self.dtype, shape_str)
+
+def get_var(self, canonical):
+"""Get var corresponding to the canonical name"""
+name = self.get_name(canonical)
+return getattr(self.prelude, name)
+
+def define_tensor_adt(self):
+"""Defines the dynamic tensor ADT, which is the container for tensors
+with variable shapes."""
+tensor_type_name = self.get_name('tensor_t')
+# Skip register if tensor type is already registered.
+global_type_names = set()
+for g_ty_var in self.prelude.mod.get_global_type_vars():
+global_type_names.add(g_ty_var.name_hint)
+if tensor_type_name in global_type_names:
+return
+
+tensor_type_var = GlobalTypeVar(tensor_type_name)
+setattr(self.prelude, tensor_type_name, tensor_type_var)
+tensor_type = TensorType(self.shape, self.dtype)
+tensor_constructor_name = self.get_name('tensor_constructor')
+
+tensor_nil_name = self.get_name('tensor_nil')
+tensor_nil_case = Constructor(tensor_nil_name, [], tensor_type_var)
+tensor_case = Constructor(tensor_constructor_name, [tensor_type], 
tensor_type_var)
+
+setattr(self.prelude, tensor_nil_name, tensor_nil_case)
+setattr(self.prelude, tensor_constructor_name, tensor_case)
+self.prelude.mod[tensor_type_var] = TypeData(tensor_type_var,
+ [],
+ [tensor_nil_case, 
tensor_case])
+
+def define_tensor_array(self):
+"""Defines a function to create a tensor array with size n.
+tensor_array(n) : Tensor[(), int32] -> list[tensor_t]
+"""
+tensor_array_constructor_name = self.get_name("tensor_array")
+tensor_array_constructor_var = 
self._create_global_var(tensor_array_constructor_name)
+setattr(self.prelude, tensor_array_constructor_name, 
tensor_array_constructor_var)
+tensor_nil_var = self.get_var('tensor_nil')
+tensor_type_var = self.get_var('tensor_t')
+n = Var("x", scalar_type('int32'))
+body = If(equal(n, const(0)),
+  self.prelude.nil(),
+  self.prelude.cons(tensor_nil_var(),
+tensor_array_constructor_var(subtract(n, 
const(1)
+self.prelude.mod[tensor_array_constructor_var] = \
+Function([n], body, self.prelude.l(tensor_type_var()), [])
+
+def define_tensor_take(self):
+"""Defines a function to return a range of tensor_t on axis 0.
+tensor_take(t, lower, upper) :
+tensor_t -> Tensor[(), int32] -> Tensor[(), int32] -> tensor_t
+"""
+ndim = len(self.shape)
+if ndim == 0:
+return
+
+take_name = self.get_name("tensor_take")
+take_var = self._create_global_var(take_name)
+setattr(self.prelude, take_name, take_var)
+
+output_shape = [Any(),] + list(self.shape[1:])
+tensor_type_var, tensor_constructor = \
+self._get_adt_by_shape(output_shape)
+
+t = Var('tensor', self.get_var('tensor_t')())
+lower = Var('lower', scalar_type('int32'))
+upper = Var('upper', scalar_type('int32'))
+tvar = Var('t')
+case = Clause(PatternConstructor(self.get_var('tensor_constructor'), 
[PatternVar(tvar)]),
+  tensor_constructor(op.take(tvar,
+ op.arange(lower, upper, 
dtype='int32'),
+ axis=0)))
+self.prelude.mod[take_var] = \
+Function([t, lower, upper],
+ Match(t, [case], False), tensor_type_var(), [])
+
+def define_tensor_concatenate(self):
+"""Defines a function to concatenate two tensor_t on axis 0.
+tensor_concatenate(t) : tensor_t -> tensor_t -> tensor_t

[GitHub] [incubator-tvm] wweic commented on issue #5103: [Relay][ADT]Static Tensor Array

2020-03-31 Thread GitBox
wweic commented on issue #5103: [Relay][ADT]Static Tensor Array
URL: https://github.com/apache/incubator-tvm/pull/5103#issuecomment-606600886
 
 
   @MarisaKirisame The reason we cannot use the Relay parser is that we want to 
dynamically generate the operators for specific shapes while we convert the TF 
model to Relay IR. The current Relay text parser cannot easily do that.
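   As a rough sketch of that intended use (hedged: the exact entry points depend 
on this PR; `register()` is assumed to be a helper that calls the `define_*` 
methods shown in the diff):
   
   ```python
   import tvm
   from tvm.relay.prelude import Prelude, StaticTensorArrayOps
   
   # While converting a TF graph, generate the ops for each concrete
   # (dtype, shape) pair the converter encounters.
   mod = tvm.IRModule()
   p = Prelude(mod)
   ops = StaticTensorArrayOps(p, "float32", [2, 3])
   ops.register()  # assumed helper invoking the define_* methods
   ta_ctor = p.mod.get_global_var(ops.get_name("tensor_array"))
   ```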


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kazum opened a new pull request #5192: [FRONTEND][MXNET] Use leaky by default for LeakyReLU

2020-03-31 Thread GitBox
kazum opened a new pull request #5192: [FRONTEND][MXNET] Use leaky by default 
for LeakyReLU
URL: https://github.com/apache/incubator-tvm/pull/5192
 
 
   c.f. 
https://mxnet.apache.org/api/python/docs/api/symbol/symbol.html#mxnet.symbol.LeakyReLU
   
   @maheshambule @alexgl-github @kevinthesun Could you review?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated: [Torch] Add support for split (#5174)

2020-03-31 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 430cb89  [Torch] Add support for split (#5174)
430cb89 is described below

commit 430cb89995bff298cca0adf6ef1087d071875d1a
Author: Wang Yucheng 
AuthorDate: Tue Mar 31 19:01:10 2020 +0800

[Torch] Add support for split (#5174)

* [Torch] Add support for split

* fix

* fix test class
---
 python/tvm/relay/frontend/pytorch.py  | 36 +++
 tests/python/frontend/pytorch/test_forward.py | 24 ++
 2 files changed, 60 insertions(+)

diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
index 6a26711..7dee58e 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -105,6 +105,36 @@ def _slice():
 return _op.transform.strided_slice(data, begin, end, strides)
 return _impl
 
+def _split():
+def _impl(inputs, input_types):
+data = inputs[0]
+split_size = int(inputs[1])
+dim = int(inputs[2])
+
+split_index = split_size
+indices = []
+while split_index < _infer_shape(data)[dim]:
+indices.append(split_index)
+split_index += split_size
+
+return _op.split(data, indices, dim)
+return _impl
+
+def _split_with_sizes():
+def _impl(inputs, inputs_types):
+data = inputs[0]
+dim = int(inputs[2])
+
+split_index = 0
+indices = []
+sections = _infer_shape(inputs[1])
+for i in range(len(sections) - 1):
+split_index += sections[i]
+indices.append(split_index)
+
+return _op.split(data, indices, dim)
+return _impl
+
 def _select():
 def _impl(inputs, input_types):
 data = inputs[0]
@@ -886,6 +916,8 @@ _convert_map = {
 "aten::unsqueeze"   : _unsqueeze(),
 "aten::cat" : _concatenate(),
 "aten::slice"   : _slice(),
+"aten::split"   : _split(),
+"aten::split_with_sizes": _split_with_sizes(),
 "aten::select"  : _select(),
 "aten::relu": _relu(),
 "aten::relu_"   : _relu(),
@@ -1415,6 +1447,10 @@ def from_pytorch(script_module, input_shapes, 
custom_convert_map=None):
 
 ret = convert_operators(_get_operator_nodes(graph.nodes()), outputs,
 output_index_map, ret_name)
+
+if isinstance(ret[0], list):
+ret[0] = _expr.Tuple(ret[0])
+
 func = tvm.relay.Function(_analysis.free_vars(ret[0]), ret[0])
 
 return _module.IRModule.from_expr(func), tvm_params
diff --git a/tests/python/frontend/pytorch/test_forward.py 
b/tests/python/frontend/pytorch/test_forward.py
index 1878266..6070d88 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -379,6 +379,29 @@ def test_forward_maxpool1d():
 stride=2).eval(),
 input_data)
 
+def test_forward_split():
+torch.set_grad_enabled(False)
+input_shape = [4, 10]
+
+class Split(Module):
+def __init__(self, split_size_or_sections, dim):
+super(Split, self).__init__()
+self.split_size_or_sections = split_size_or_sections
+self.dim = dim
+
+def forward(self, *args):
+return torch.split(args[0], self.split_size_or_sections, self.dim)
+
+input_data = torch.rand(input_shape).float()
+verify_model(Split(2, 0).float().eval(),
+input_data=input_data)
+verify_model(Split(3, 1).float().eval(),
+input_data=input_data)
+verify_model(Split(4, 1).float().eval(),
+input_data=input_data)
+verify_model(Split([2, 3, 5], 1).float().eval(),
+input_data=input_data)
+
 def test_forward_avgpool():
 torch.set_grad_enabled(False)
 input_shape = [1, 3, 10, 10]
@@ -1077,6 +1100,7 @@ if __name__ == "__main__":
 test_forward_expand()
 test_forward_pow()
 test_forward_chunk()
+test_forward_split()
 test_upsample()
 test_to()
 test_adaptive_pool3d()



[GitHub] [incubator-tvm] masahi commented on issue #5174: [Torch] Add support for split

2020-03-31 Thread GitBox
masahi commented on issue #5174: [Torch] Add support for split
URL: https://github.com/apache/incubator-tvm/pull/5174#issuecomment-606555766
 
 
   Thanks @wyc-ruiker 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi merged pull request #5174: [Torch] Add support for split

2020-03-31 Thread GitBox
masahi merged pull request #5174: [Torch] Add support for split
URL: https://github.com/apache/incubator-tvm/pull/5174
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] t-vi commented on issue #5191: rocm: fix dense_rocblas in strategy, topi

2020-03-31 Thread GitBox
t-vi commented on issue #5191: rocm: fix dense_rocblas in strategy, topi
URL: https://github.com/apache/incubator-tvm/pull/5191#issuecomment-606501367
 
 
   @masahi / @tqchen. Sorry for sending this piecemeal. In debugging, this bug 
eluded me for quite a while before it became obvious.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] t-vi opened a new pull request #5191: rocm: fix dense_rocblas in strategy, topi

2020-03-31 Thread GitBox
t-vi opened a new pull request #5191: rocm: fix dense_rocblas in strategy, topi
URL: https://github.com/apache/incubator-tvm/pull/5191
 
 
   Fixes a typo in the strategy for rocm dense using rocblas (it took a long time 
to find while debugging).
   Also fixes the dtype check in dense_rocblas.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #5174: [Torch] Add support for split

2020-03-31 Thread GitBox
wyc-ruiker commented on a change in pull request #5174: [Torch] Add support for 
split
URL: https://github.com/apache/incubator-tvm/pull/5174#discussion_r400745718
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -379,6 +379,32 @@ def test_forward_maxpool1d():
 stride=2).eval(),
 input_data)
 
+def test_forward_split():
+torch.set_grad_enabled(False)
+input_shape = [4, 10]
+
+class Split1(Module):
+def forward(self, *args):
+return torch.split(args[0], 2, 0)
+
+class Split2(Module):
+def forward(self, *args):
+return torch.split(args[0], [2, 3, 5], 1)
+
+class Split3(Module):
+def forward(self, *args):
+return torch.split(args[0], 3, 1)
+
+class Split4(Module):
+def forward(self, *args):
+return torch.split(args[0], 4, 1)
+
 
 Review comment:
   done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

