[GitHub] [incubator-tvm] hypercubestart edited a comment on pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart edited a comment on pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#issuecomment-679820639


   @wweic @jroesch could you review please?







[GitHub] [incubator-tvm] jcf94 commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


jcf94 commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679822767


   +1 (non-binding)







[GitHub] [incubator-tvm] hypercubestart commented on pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#issuecomment-679820639


   @wweic @jroesch could you review?







[GitHub] [incubator-tvm] tnachen commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


tnachen commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679737195


   +1 binding







[GitHub] [incubator-tvm] bgchun commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


bgchun commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679709282


   +1 (binding) 







[GitHub] [incubator-tvm] areusch opened a new pull request #6334: µTVM RPC server and Part 1 of AutoTVM compilation infrastructure

2020-08-24 Thread GitBox


areusch opened a new pull request #6334:
URL: https://github.com/apache/incubator-tvm/pull/6334


   RFCs forthcoming when this is promoted from draft; sending to CI







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r476142148



##
File path: src/arith/rewrite_simplify.cc
##
@@ -460,7 +460,8 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const DivNode* op) {
 
   // x / 2.0 = x * 0.5
   if (const FloatImmNode* ptr = op->b.as<FloatImmNode>()) {
-    CHECK(op->dtype.is_float());
+    // TODO(@gussmith23) is this ok?
+    // CHECK(op->dtype.is_float());

Review comment:
   Also, it seems FloatImm(custom_datatype, value) represents a
custom_datatype with that float value; see
https://github.com/gussmith23/tvm/blob/a45f7bb1975933118db7647261a6ddefb214595a/include/tvm/tir/op.h#L829,
where we freely mutate the FloatImm as a double.
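
(For reference, a minimal sketch of constructing a FloatImm with a custom datatype from Python; it assumes the "posites2" registration shown elsewhere in this thread and is purely illustrative, not part of the patch.)

```
import tvm

# Custom types must be registered with a type code above 128 before
# their dtype string can be parsed.
tvm.target.datatype.register("posites2", 131)

# The value is stored as a double inside the FloatImm node, even though
# the dtype is the 8-bit custom type.
imm = tvm.tir.FloatImm("custom[posites2]8", 1.5)
print(imm.dtype, imm.value)  # custom[posites2]8 1.5
```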









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6310: [Ansor][AutoTVM v2.0] Phase 2: Evolutionary Search

2020-08-24 Thread GitBox


jcf94 commented on a change in pull request #6310:
URL: https://github.com/apache/incubator-tvm/pull/6310#discussion_r476106376



##
File path: tests/python/unittest/test_auto_scheduler_evolutionary_search.py
##
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+""" Test evolutionary search. """
+
+import tvm
+from tvm import te, auto_scheduler
+
+from test_auto_scheduler_common import conv2d_nchw_bn_relu_auto_scheduler_test
+
+def test_evo_search():
+    workload_key = auto_scheduler.make_workload_key(conv2d_nchw_bn_relu_auto_scheduler_test,
+                                                    (1, 56, 56, 512, 512, 3, 1, 1))
+    dag = auto_scheduler.ComputeDAG(workload_key)
+    task = auto_scheduler.SearchTask(dag, workload_key, tvm.target.create('llvm'))
+    policy = auto_scheduler.SketchPolicy(task, verbose=0)
+    states = policy.sample_initial_population(50)
+    policy.evolutionary_search(states, 10)
+
+
+if __name__ == "__main__":
+    test_evo_search()

Review comment:
   Add a new line here.









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6310: [Ansor][AutoTVM v2.0] Phase 2: Evolutionary Search

2020-08-24 Thread GitBox


jcf94 commented on a change in pull request #6310:
URL: https://github.com/apache/incubator-tvm/pull/6310#discussion_r476106376



##
File path: tests/python/unittest/test_auto_scheduler_evolutionary_search.py
##
@@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+""" Test evolutionary search. """
+
+import tvm
+from tvm import te, auto_scheduler
+
+from test_auto_scheduler_common import conv2d_nchw_bn_relu_auto_scheduler_test
+
+def test_evo_search():
+    workload_key = auto_scheduler.make_workload_key(conv2d_nchw_bn_relu_auto_scheduler_test,
+                                                    (1, 56, 56, 512, 512, 3, 1, 1))
+    dag = auto_scheduler.ComputeDAG(workload_key)
+    task = auto_scheduler.SearchTask(dag, workload_key, tvm.target.create('llvm'))
+    policy = auto_scheduler.SketchPolicy(task, verbose=0)
+    states = policy.sample_initial_population(50)
+    policy.evolutionary_search(states, 10)
+
+
+if __name__ == "__main__":
+    test_evo_search()

Review comment:
   Add new lines here.

##
File path: src/auto_scheduler/search_policy/sketch_policy.cc
##
@@ -363,8 +376,150 @@ Array<State> SketchPolicyNode::EvolutionarySearch(const Array<State>& init_population,
   Array<State> best_states;
   auto tic_begin = std::chrono::high_resolution_clock::now();
 
-  // TODO(comaniac, merrymercy, jcf94): Since we haven't finished porting the cost model part
-  // yet, currently delete the implementation of EvolutionarySearch. To be added later.
+  size_t population = init_population.size();
+  int num_iters =
+      static_cast<int>(GetIntParam(params, SketchParamKey::EvolutionarySearch::num_iters));
+  double mutation_prob = static_cast<double>(
+      GetDoubleParam(params, SketchParamKey::EvolutionarySearch::mutation_prob));
+
+  // Two ping pong buffers to avoid copy.
+  Array<State> states_buf1{init_population}, states_buf2;
+  states_buf1.reserve(population);
+  states_buf2.reserve(population);
+  Array<State>* pnow = &states_buf1;
+  Array<State>* pnext = &states_buf2;
+
+  // The set of explored states, used to avoid redundant states.
+  std::unordered_set<std::string> explored_set;
+
+  // The heap to maintain the so-far best states.
+  using StateHeapItem = std::pair<State, float>;
+  auto cmp = [](const StateHeapItem& left, const StateHeapItem& right) {
+    return left.second > right.second;
+  };
+  using StateHeap = std::priority_queue<StateHeapItem, std::vector<StateHeapItem>, decltype(cmp)>;
+  StateHeap heap(cmp);
+  auto update_heap = [&heap, &explored_set](const Array<State>& states,
+                                            const std::vector<float>& scores,
+                                            const int out_size) {
+    float max_score = 0.0;
+    for (size_t i = 0; i < states.size(); ++i) {
+      const State& state = states[i];
+      std::string state_str = state.ToStr();
+
+      // Skip redundant states.
+      if (explored_set.count(state_str) > 0) {
+        continue;
+      }
+      explored_set.insert(state_str);
+
+      if (static_cast<int>(heap.size()) < out_size) {
+        // Directly push item if the heap is not full yet.
+        heap.push({state, scores[i]});
+      } else if (scores[i] > heap.top().second) {
+        // Replace the worst state in the heap with the new state.
+        heap.pop();
+        heap.push({state, scores[i]});
+      }
+      max_score = (scores[i] > max_score) ? scores[i] : max_score;
+    }
+    return max_score;
+  };
+
+  // Cost model predicted scores.
+  std::vector<float> scores;
+  scores.reserve(population);
+
+  // The function to generate prefix sum probabilities based on the given scores.
+  auto assign_prob = [](const std::vector<float>& scores, std::vector<double>* prefix_sum_probs) {
+    // Compute selection probabilities.
+    double sum = 0.0;
+    prefix_sum_probs->resize(scores.size());
+    for (size_t i = 0; i < scores.size(); ++i) {
+      sum += std::max(scores[i], 0.0f);
+      (*prefix_sum_probs)[i] = sum;
+    }
+    for (size_t i = 0; i < scores.size(); ++i) {
+      (*prefix_sum_probs)[i] /= sum;
+    }
+  };
+
+  // State selection probabilities.
+  std::uniform_real_distribution<> uniform_dist(0.0, 1.0);
+  std::vector<double> state_select_probs;
+  state_select_probs.reserve(population);
+
+  // Mutation rule selection probabilities.
+  std::vector<double> rule_select_probs;
+  rule_select_probs.reserve(mutation_rules.size());
+  std::
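
(The diff above is cut off in the archive. To make its selection scheme concrete: assign_prob builds a prefix-sum probability table so that states are sampled in proportion to their predicted scores. Below is a minimal NumPy sketch of that roulette-wheel sampling; the names are illustrative only, not the TVM implementation.)

```
import numpy as np

def prefix_sum_probs(scores):
    # Clamp negative scores to zero, accumulate, and normalize so the
    # final entry is 1.0, mirroring the assign_prob lambda above.
    s = np.maximum(np.asarray(scores, dtype=np.float64), 0.0)
    cum = np.cumsum(s)
    return cum / cum[-1]

def roulette_select(probs, rng):
    # Draw u ~ U(0, 1) and return the first index whose prefix sum
    # exceeds u; higher-scored states occupy wider intervals.
    return int(np.searchsorted(probs, rng.uniform()))

rng = np.random.default_rng(42)
probs = prefix_sum_probs([0.1, 0.5, 0.4])  # -> [0.1, 0.6, 1.0]
picks = [roulette_select(probs, rng) for _ in range(8)]
```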

[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r476113952



##
File path: src/arith/rewrite_simplify.cc
##
@@ -460,7 +460,8 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const DivNode* op) {
 
   // x / 2.0 = x * 0.5
   if (const FloatImmNode* ptr = op->b.as<FloatImmNode>()) {
-    CHECK(op->dtype.is_float());
+    // TODO(@gussmith23) is this ok?
+    // CHECK(op->dtype.is_float());

Review comment:
   I think a better idea may be to do
   `CHECK(op->dtype.is_float() || datatype::Registry::Global()->GetTypeRegistered(op->dtype.code()));`









[GitHub] [incubator-tvm] siju-samuel commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


siju-samuel commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679476915


   +1 (non-binding)







[GitHub] [incubator-tvm] cchung100m commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


cchung100m commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679468703


   +1 (non-binding)







[GitHub] [incubator-tvm] roastduck commented on a change in pull request #6238: [TIR][Transform]Block scope hoisting added

2020-08-24 Thread GitBox


roastduck commented on a change in pull request #6238:
URL: https://github.com/apache/incubator-tvm/pull/6238#discussion_r476063346



##
File path: tests/python/unittest/test_tir_transform_hoist_if.py
##
@@ -255,6 +259,488 @@ def test_multi_if():
                        ('tir.For', 'i'): (('tir.IfThenElse', ('i',)),)}
     verify_structure(new_stmt, expected_struct)
 
+def test_no_hoisting_1():
+    ib = tvm.tir.ir_builder.create()
+    data = ib.pointer("float32", name="data")
+    n = te.var("n")
+
+    with ib.for_range(0, 10, "i") as i:
+        with ib.for_range(0, 10, "j") as j:
+            with ib.for_range(0, 10, "k") as k:
+                with ib.if_scope(k >= 3):
+                    data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 0.5
+
+    stmt = ib.get()
+    mod = tvm.IRModule.from_expr(tvm.tir.PrimFunc([], stmt))
+    new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+    tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+    with tvm.transform.PassContext(config={
+            "tir.HoistIfThenElse": {"support_block_scope_hosting": True}
+    }):
+        new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+    tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+def test_no_hoisting_2():
+    ib = tvm.tir.ir_builder.create()
+    data = ib.pointer("float32", name="data")
+    n = te.var("n")
+    x = te.var("x")
+
+    with ib.for_range(0, 10, "i") as i:
+        with ib.for_range(0, 10, "j") as j:
+            with ib.for_range(0, 10, "k") as k:
+                with ib.if_scope(i >= 3):
+                    data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 0.3
+                data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 0.5
+
+    stmt = ib.get()
+    mod = tvm.IRModule.from_expr(tvm.tir.PrimFunc([], stmt))
+    new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+    tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+    with tvm.transform.PassContext(config={
+            "tir.HoistIfThenElse": {"support_block_scope_hosting": True}
+    }):
+        new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+    tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+def test_no_hoisting_3():
+    ib = tvm.tir.ir_builder.create()
+    dshape = (32, 64)
+    dshape_inner = (33, 63)
+    data = ib.pointer("float32", name="data")
+    l = te.var('l')
+    m = te.var('m')
+    n = te.var('n')
+
+    tx = te.thread_axis("threadIdx.x")
+    bx = te.thread_axis("blockIdx.x")
+    ib.scope_attr(tx, "thread_extent", dshape[0])
+    ib.scope_attr(bx, "thread_extent", dshape[1])
+    with ib.for_range(0, l, "i") as i:
+        with ib.for_range(0, m, "j") as j:
+            with ib.for_range(0, n, "k") as k:
+                ib.scope_attr(tx, "thread_extent", dshape_inner[0])
+                ib.scope_attr(bx, "thread_extent", dshape_inner[1])

Review comment:
   Yes, I understand that we are testing what should not be hoisted here. I
am talking about potential future work.
   
   Here the attributes are thread extents. I think thread extents are global
attributes, although they are currently in a particular block. Hoisting them
to the outermost loops seems to be correct.









[GitHub] [incubator-tvm] roastduck commented on a change in pull request #6238: [TIR][Transform]Block scope hoisting added

2020-08-24 Thread GitBox


roastduck commented on a change in pull request #6238:
URL: https://github.com/apache/incubator-tvm/pull/6238#discussion_r476059547



##
File path: python/tvm/driver/build_module.py
##
@@ -181,7 +181,7 @@ def lower(sch,
         tvm.tir.transform.BF16Legalize(),
         tvm.tir.transform.NarrowDataType(32),
         tvm.tir.transform.Simplify(),
-        tvm.tir.transform.HoistIfThenElse(),
+        tvm.tir.transform.HoistIfThenElse("basic"),

Review comment:
   I don't think a little more compile-time complexity is a real problem.
Perhaps the pass can even be faster than those Python bindings. We can always
perform the "advanced" version as long as it won't break correctness or
introduce negative optimizations.









[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r476047623



##
File path: src/arith/rewrite_simplify.cc
##
@@ -460,7 +460,8 @@ PrimExpr RewriteSimplifier::Impl::VisitExpr_(const DivNode* op) {
 
   // x / 2.0 = x * 0.5
   if (const FloatImmNode* ptr = op->b.as<FloatImmNode>()) {
-    CHECK(op->dtype.is_float());
+    // TODO(@gussmith23) is this ok?
+    // CHECK(op->dtype.is_float());

Review comment:
   Why do we need to do this, @gussmith23?









[GitHub] [incubator-tvm] electriclilies commented on a change in pull request #6293: [DYN][RELAY] Resize support for NCHW-convertible layouts

2020-08-24 Thread GitBox


electriclilies commented on a change in pull request #6293:
URL: https://github.com/apache/incubator-tvm/pull/6293#discussion_r476014648



##
File path: python/tvm/relay/op/dyn/image/_image.py
##
@@ -67,10 +57,19 @@ def resize_shape_func(attrs, inputs, _):
     Shape function for dyn.image.resize op.
     """
     layout = attrs.layout
-    if layout == 'NHWC':
-        out = [_NHWC_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)))]
-    elif (layout == 'NCHW') or nchw_pack_layout(layout) or nchw_xc_layout(layout):
-        out = [_NCHW_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)))]
+    if nchw_pack_layout(layout) or nchw_xc_layout(layout):
+        out = [_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)),
+                                  convert(2), convert(3), convert(1))]

Review comment:
   I guess it is redundant, I will remove it









[GitHub] [incubator-tvm] vinx13 commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


vinx13 commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679437092


   +1 (non-binding)







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r476001349



##
File path: python/tvm/target/datatype.py
##
@@ -135,8 +166,40 @@ def lower(op):
     dtype = "uint" + str(t.bits)
     if t.lanes > 1:
         dtype += "x" + str(t.lanes)
-    if isinstance(op, (_Cast, _FloatImm)):
-        return tvm.tir.call_pure_extern(dtype, extern_func_name, op.value)
-    return tvm.tir.call_pure_extern(dtype, extern_func_name, op.a, op.b)
+    if isinstance(op, _Cast):
+        src_bits = bit_length(op.value.dtype)
+        return call_pure_extern(dtype, extern_func_map[(src_bits, t.bits)], op.value)
+    if isinstance(op, _FloatImm):
+        return call_pure_extern(dtype, extern_func_map[t.bits], op.value)
+    if isinstance(op, _Call):
+        return call_pure_extern(dtype, extern_func_map[t.bits], *op.args)
+    if isinstance(op, _BinaryOpExpr):
+        return call_pure_extern(dtype, extern_func_map[t.bits], op.a, op.b)

Review comment:
   @gussmith23 should we improve the debugging message here?
   
   If the map does not contain the bit width, this throws a KeyError on
`extern_func_map[t.bits]`, which may be somewhat cryptic to the user.
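
(A hedged sketch of the kind of guard being suggested; the helper name and error text are hypothetical, while `extern_func_map` and the bit width come from the diff above.)

```
def lookup_extern_func(extern_func_map, bits):
    """Hypothetical helper: fail with a readable message instead of a KeyError."""
    try:
        return extern_func_map[bits]
    except KeyError:
        raise ValueError(
            "no extern function registered for %d-bit operands; "
            "registered widths: %s" % (bits, sorted(extern_func_map))) from None
```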









[GitHub] [incubator-tvm] kazum commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


kazum commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679431456


   +1 (non-binding)







[GitHub] [incubator-tvm] masahi commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


masahi commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679430321


   +1 (binding)







[GitHub] [incubator-tvm] kazum commented on pull request #6278: [Frontend][Relay] Keras softmax and prelu fix under NHWC

2020-08-24 Thread GitBox


kazum commented on pull request #6278:
URL: https://github.com/apache/incubator-tvm/pull/6278#issuecomment-679427809


   Thanks @domin1985 @leandron @jwfromm @yongwww!







[incubator-tvm] branch master updated (6b5176d -> a189fe0)

2020-08-24 Thread kazum
This is an automated email from the ASF dual-hosted git repository.

kazum pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 6b5176d  [OpFusion] Make the max number of fused ops configurable (#6327)
 add a189fe0  [Frontend][Relay] Keras softmax and prelu fix (#6278) (#6278)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/keras.py  | 21 +
 tests/python/frontend/keras/test_forward.py |  1 +
 2 files changed, 14 insertions(+), 8 deletions(-)



[GitHub] [incubator-tvm] kazum merged pull request #6278: [Frontend][Relay] Keras softmax and prelu fix under NHWC

2020-08-24 Thread GitBox


kazum merged pull request #6278:
URL: https://github.com/apache/incubator-tvm/pull/6278


   







[GitHub] [incubator-tvm] comaniac commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


comaniac commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679424892


   +1 (non-binding)







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r475971136



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,396 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import tvm.topi.testing
+import numpy as np
+import pytest
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.layers import batch_norm_infer
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite, lower_call_pure_extern
+from tvm.tir.op import call_pure_extern
+
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, array):
+    """Converts NDArray(s) into the specified datatype"""
+    x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+    cast = relay.Function([x], x.astype(dst_dtype))
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        return relay.create_executor('graph').evaluate(cast)(array)
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = {k: convert_ndarray(dst, v) for k, v in params.items()}
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params = {}, target='llvm'):
+    module = relay.transform.SimplifyInference()(module)
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, converted_params = change_dtype(src_dtype, dst_dtype, module, params)
+    ex = relay.create_executor("graph", mod=module, target=target)
+    # converts all inputs to dst_dtype
+    x_converted = [convert_ndarray(dst_dtype, arr) for arr in input]
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **converted_params)
+        # currently this only works for comparing single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+@pytest.fixture(scope="session", autouse=True)
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'
+    }), "Sub", "llvm", "posites2")
+

[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r475969594



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)

Review comment:
   I think we may want to remove this since it's not very relevant.









[GitHub] [incubator-tvm] zhiics commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


zhiics commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679418594


   +1 (binding)







[GitHub] [incubator-tvm] icemelon9 commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


icemelon9 commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679411683


   +1 (binding)







[GitHub] [incubator-tvm] jroesch commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


jroesch commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679409069


   +1 (binding) 







[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6293: [DYN][RELAY] Resize support for NCHW-convertible layouts

2020-08-24 Thread GitBox


icemelon9 commented on a change in pull request #6293:
URL: https://github.com/apache/incubator-tvm/pull/6293#discussion_r475940086



##
File path: python/tvm/relay/op/dyn/image/_image.py
##
@@ -67,10 +57,19 @@ def resize_shape_func(attrs, inputs, _):
     Shape function for dyn.image.resize op.
     """
     layout = attrs.layout
-    if layout == 'NHWC':
-        out = [_NHWC_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)))]
-    elif (layout == 'NCHW') or nchw_pack_layout(layout) or nchw_xc_layout(layout):
-        out = [_NCHW_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)))]
+    if nchw_pack_layout(layout) or nchw_xc_layout(layout):
+        out = [_resize_shape_func(inputs[0].shape, inputs[1], convert(len(inputs[0].shape)),
+                                  convert(2), convert(3), convert(1))]

Review comment:
   I see.
   Btw, why do we need `out[channel_axis] = int64(dshape[channel_axis])` since
it's already copied?









[GitHub] [incubator-tvm] areusch commented on pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-24 Thread GitBox


areusch commented on pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333#issuecomment-679401382


   @tqchen @jroesch @tmoreau89 







[GitHub] [incubator-tvm] areusch opened a new pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-24 Thread GitBox


areusch opened a new pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333


   This PR adds a script that automates the process of running dockerized lint
scripts locally, specifically:
- consulting `Jenkinsfile` to determine which docker container to use
- invoking one or all of the lint scripts, depending on which ones need to be
fixed up
   
   It also fixes the ASF check to ignore gitignore'd and untracked files
locally, so that the overall process doesn't bomb out if you have an
uncommitted text file in the repo.
   
   A couple of design choices here:
- moved the `make (cpp|py|jni)lint` steps to dedicated scripts in tests/lint
- explicitly avoided adding interactive logic to
`tests/scripts/task_lint.sh`; this should remain simple and readable so it's
clear what the CI does
- maintained existing make targets for backwards compatibility with everyone's
workflows
- two changes likely usable for other workflow improvements:
   - a script to filter untracked/gitignore'd files
   - added a `-i` flag to `docker/bash.sh` so that more automation that
supports ctrl+c can be added







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r475922169



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)
+class ChangeDatatype(ExprMutator):
+    """Mutator for changing the datatype of Relay programs.
+
+    Example:
+
+    .. code-block:: python
+
+        from tvm.relay.testing.inception_v3 import get_workload
+        expr, params = get_workload()
+
+        def change_dtype(src, dst, expr, params):
+            cdtype = ChangeDatatype(src, dst)
+            expr = cdtype.visit(expr)
+            expr = relay.ir_pass.infer_type(expr)
+            params = dict((p, tvm.nd.array(params[p].asnumpy().astype(dst))) for p in params)
+            return expr, params
+    """
+    def __init__(self, src, dst):
+        self.src = src
+        self.dst = dst
+        super().__init__()
+
+    def transform_function(self, func, mod, ctx):
+        return self.visit(func)
+
+    def visit_constant(self, const):
+        if const.data.dtype == self.src:
+            return const.astype(self.dst)
+        # TODO(hypercubestart): should we raise an error in this case, or return const?
+        return const

Review comment:
   @gussmith23 I added this here to avoid the lint error, but I think casting
it in every case makes more sense. What do you think?









[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r475920138



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)

Review comment:
   Yeah, I think this is fine. opt_level is only relevant when used in
transform.Sequential, which determines which transforms can be run. But our
use case doesn't really involve transform.Sequential, so any opt_level should
be okay here.
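
(A minimal sketch of that interaction, assuming stock Relay passes: InferType is registered at opt_level 0 and FoldConstant at opt_level 2, so the latter is skipped when the enclosing PassContext's opt_level is lower.)

```
import tvm
from tvm import relay

seq = tvm.transform.Sequential([
    relay.transform.InferType(),      # opt_level 0: always runs
    relay.transform.FoldConstant(),   # opt_level 2: gated by the context
])
mod = tvm.IRModule.from_expr(relay.add(relay.const(1.0), relay.const(2.0)))

with tvm.transform.PassContext(opt_level=0):
    low = seq(mod)   # FoldConstant is skipped here
with tvm.transform.PassContext(opt_level=3):
    high = seq(mod)  # all passes run; the add folds to a constant 3.0
```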









[GitHub] [incubator-tvm] electriclilies commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


electriclilies commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679387753


   +1 (non-binding)
   
   







[GitHub] [incubator-tvm] hsaputra commented on issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-08-24 Thread GitBox


hsaputra commented on issue #6299:
URL: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-679386711


   Thanks, Tianqi. I think the community would love to have you as the VP for
TVM as a TLP.
   







[GitHub] [incubator-tvm] hsaputra commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


hsaputra commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679385681


   +1 (binding)







[GitHub] [incubator-tvm] mbrookhart commented on pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


mbrookhart commented on pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#issuecomment-679382342


   I don't think so; it's already targeting the dynamic op:
   
   ```
   # squeeze the two outputs of nms for strided_slice
   size = get_relay_op("squeeze")(nms_ret[1], axis=[1])
   data_slice = get_relay_op("squeeze")(nms_ret[0], axis=[0])
   
   # strided slice to get the dynamic result
   return get_relay_op("strided_slice")(data_slice, begin=_expr.const([0]),
                                        end=size, slice_mode="size")
   ```







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475911255



##
File path: src/relay/op/type_relations.cc
##
@@ -126,6 +126,7 @@ bool BroadcastCompRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
       return true;
     }
   }
+  reporter->Assign(types[0], types[1]);

Review comment:
   Removed, and added a type annotation to the test. Thanks!









[GitHub] [incubator-tvm] electriclilies commented on a change in pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


electriclilies commented on a change in pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#discussion_r475909697



##
File path: python/tvm/relay/op/transform.py
##
@@ -827,13 +828,17 @@ def strided_slice(data, begin, end, strides=None, slice_mode="end"):
     ret : relay.Expr
         The computed result.
     """
-    strides = strides or const([1], dtype="int32")
-    if isinstance(begin, (tuple, list)):
-        begin = const(list(begin))
-    if isinstance(end, (tuple, list)):
-        end = const(list(end))
-    if isinstance(strides, (tuple, list)):
-        strides = const(list(strides))
+    strides = strides or [1]
+    if (isinstance(begin, Expr) or isinstance(end, Expr) or isinstance(strides, Expr)):
+        if isinstance(begin, (tuple, list)):
+            begin = const(list(begin))
+        if isinstance(end, (tuple, list)):
+            end = const(list(end))
+        if isinstance(strides, (tuple, list)):
+            strides = const(list(strides))
+        normalized_begin = _make.where(begin < cast_like(const(0), begin),

Review comment:
   How does this make it less useful?









[GitHub] [incubator-tvm] masahi commented on pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


masahi commented on pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#issuecomment-679379965


   @mbrookhart #6314 introduced another strided slice usage; you might need to
update that too.







[GitHub] [incubator-tvm] tmoreau89 commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


tmoreau89 commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679378081


   +1 (binding)







[GitHub] [incubator-tvm] ZihengJiang commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


ZihengJiang commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679377654


   +1 (binding)







[GitHub] [incubator-tvm] junrushao1994 commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


junrushao1994 commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679370009


   +1 (non-binding)
   
   Super excited!!







[GitHub] [incubator-tvm] tqchen edited a comment on issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-08-24 Thread GitBox


tqchen edited a comment on issue #6299:
URL: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-679359605


   Thanks everyone for the great discussion; here is the formal voting thread:
   - https://github.com/apache/incubator-tvm/issues/6332
   - https://lists.apache.org/thread.html/rd5b8eefe49af09a2d0913758a5e5737b3fdb9072bc0becf4a2b2c7ee%40%3Cdev.tvm.apache.org%3E







[GitHub] [incubator-tvm] tqchen edited a comment on issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-08-24 Thread GitBox


tqchen edited a comment on issue #6299:
URL: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-679359605


   Thanks everyone for the great discussion; here is the formal voting thread:
   - #6332
   - https://lists.apache.org/thread.html/rd5b8eefe49af09a2d0913758a5e5737b3fdb9072bc0becf4a2b2c7ee%40%3Cdev.tvm.apache.org%3E







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


zhiics commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475869132



##
File path: tests/python/contrib/test_random.py
##
@@ -151,3 +138,5 @@ def test_rpc(dtype):
     test_uniform()
     test_normal()
     test_random_fill()
+

Review comment:
   No need for these extra lines.

##
File path: tests/python/relay/dyn/test_dynamic_op_level3.py
##
@@ -36,6 +37,8 @@ def verify_func(func, data, ref_res):
     tvm.testing.assert_allclose(op_res.asnumpy(), ref_res, rtol=1e-5)
     relay.backend.compile_engine.get().clear()
 
+@gpu
+@gpu

Review comment:
   only one decorator is needed?









[GitHub] [incubator-tvm] tvm-archiver commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


tvm-archiver commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679360562


   +1 (binding)
   
   TVM is ready to graduate.
   
   Markus
   
   On Mon, Aug 24, 2020 at 1:51 PM Tianqi Chen  wrote:
   >
   > Dear Community:
   >
   > Thanks to everyone who participated in the discussion about graduation[1].
   > This is a formal voting thread for Apache TVM’s graduation.
   >
   > If this vote passes, the next step would be to submit the resolution below
   > to the Incubator PMC, who would vote on sending it on to the Apache Board.
   >
   > Vote:
   > [ ] +1 - Recommend graduation of Apache TVM as a TLP
   > [ ]  0 - I don't feel strongly about it, but don't object
   > [ ] -1 - Do not recommend graduation of Apache TVM because...
   >
   > The VOTE will open for at least 72 hours.
   >
   > This thread is mirrored to dev@; please vote by replying to this thread.
   >
   > --
   > The TVM project has been an Apache incubator project for nearly 1.5 years
   > now. In the past one and a half years, the community grew healthily under
   > the Apache way. Some highlights include:
   >
   > - A successful developer conference that we are continuing to host this year
   > - Great community growth: as of now, the community contains 16 PPMC members
   >   and 31 committers from a diverse list of organizations. We are actively
   >   growing the list monthly.
   > - Active contributions: ~150 PRs merged each month.
   >
   > The community has produced two formal Apache releases. While we could wait
   > for more releases, we feel that the community is mature enough to push for
   > graduation as it is, and continue to push for high-quality releases
   > concurrently.
   >
   > For reference, we also put together a maturity evaluation doc[2] under the
   > Apache maturity model.
   >
   > One additional note about the resolution below: the current PPMC will be
   > transitioned to the PMC. We have invited all the mentors in the current
   > PPMC who would like to stay involved.
   >
   > -
   >
   > Establish the Apache TVM Project
   >
   > WHEREAS, the Board of Directors deems it to be in the best interests of
   > the Foundation and consistent with the Foundation's purpose to establish
   > a Project Management Committee charged with the creation and maintenance
   > of open-source software, for distribution at no charge to the public,
   > related to compilation of machine learning models to run on a wide range
   > of hardware platforms...
   >
   > NOW, THEREFORE, BE IT RESOLVED, that a Project Management Committee
   > (PMC), to be known as the "Apache TVM Project", be and hereby is
   > established pursuant to Bylaws of the Foundation; and be it further
   >
   > RESOLVED, that the Apache TVM Project be and hereby is responsible for
   > the creation and maintenance of software related to compilation of
   > machine learning models to run on a wide range of hardware platforms;
   > and be it further
   >
   > RESOLVED, that the office of "Vice President, Apache TVM" be and
   > hereby is created, the person holding such office to serve at the
   > direction of the Board of Directors as the chair of the Apache TVM
   > Project, and to have primary responsibility for management of the
   > projects within the scope of responsibility of the Apache TVM
   > Project; and be it further
   >
   > RESOLVED, that the persons listed immediately below be and hereby are
   > appointed to serve as the initial members of the Apache TVM Project:
   >
   >  * Tianqi Chen 
   >  * Timothy Chen 
   >  * Zhi Chen 
   >  * Byung-Gon Chun 
   >  * Ziheng Jiang 
   >  * Furkan Kamaci 
   >  * YiZhi Liu 
   >  * Masahiro Masuda 
   >  * Thierry Moreau 
   >  * Jared Roesch 
   >  * Henry Saputra 
   >  * Haichen Shen 
   >  * Markus Weimer 
   >  * Eddie Yan 
   >  * Lianmin Zheng 
   >
   > NOW, THEREFORE, BE IT FURTHER RESOLVED, that Tianqi Chen be appointed to
   > the office of Vice President, Apache TVM, to serve in accordance
   > with and subject to the direction of the Board of Directors and the
   > Bylaws of the Foundation until death, resignation, retirement, removal
   > or disqualification, or until a successor is appointed; and be it
   > further
   >
   > RESOLVED, that the Apache TVM Project be and hereby is tasked with
   > the migration and rationalization of the Apache Incubator TVM
   > podling; and be it further
   >
   > RESOLVED, that all responsibilities pertaining to the Apache Incubator
   > TVM podling encumbered upon the Apache Incubator PMC are hereafter
   > discharged.
   >
   > - [1] https://lists.apache.org/thread.html/r91b8f469c6a54769869bb2435b7334a28bcff885ae078ab5612dae00%40%3Cdev.tvm.apache.org%3E
   > - [2] https://docs.google.com/document/d/18nyAH-fcptVezAxPQe6H3FeTKPRkujOp1tc1YRSPLok/edit?usp=sharing
   >
   > --
   > You are receiving this because you are subscribed to this thread.
   > Reply to this email directly or view it on GitHub:
   > https://github.com

[GitHub] [incubator-tvm] tqchen commented on issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-08-24 Thread GitBox


tqchen commented on issue #6299:
URL: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-679359605


   Thanks everyone for the great discussion; here is the formal voting thread:
   - https://github.com/apache/incubator-tvm/issues/6299
   - https://lists.apache.org/thread.html/rd5b8eefe49af09a2d0913758a5e5737b3fdb9072bc0becf4a2b2c7ee%40%3Cdev.tvm.apache.org%3E







[GitHub] [incubator-tvm] tqchen opened a new issue #6332: [VOTE] Apache TVM Graduation

2020-08-24 Thread GitBox


tqchen opened a new issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332


   Dear Community:
   
   Thanks to everyone who participated in the discussion about graduation[1]. 
This is a formal voting thread for Apache TVM’s graduation.
   
   If this vote passes, the next step would be to submit the resolution below
   to the Incubator PMC, who would vote on sending it on to the Apache Board.
   
   Vote:
   [ ] +1 - Recommend graduation of Apache TVM as a TLP
   [ ]  0 - I don't feel strongly about it, but don't object
   [ ] -1 - Do not recommend graduation of Apache TVM because...
   
   The VOTE will open for at least 72 hours.
   
   This thread is mirrored to dev@, please vote by replying to this thread
   
   --
   The TVM project has been an Apache incubator project for nearly 1.5 years 
now. In the past one and a half years, the community has grown healthily under 
the Apache way. Some highlights include:
   
   - A successful developer conference that we are continuing to host this year
   - Great community growth: as of now, the community contains 16 PPMC members 
and 31 committers from a diverse list of organizations. We are actively growing 
the list monthly.
   - Active contributions: ~ 150 PRs merged each month.
   
   The community has produced two formal Apache releases. While we could wait 
for more releases, we feel that the community is mature enough to push for 
graduation as it is, and to continue delivering high-quality releases 
concurrently.
   
   For reference, we also put together a maturity evaluation doc[2] under the 
Apache maturity model.
   
   One additional note about the resolution below: the current PPMC will be 
transitioned to the PMC. We have invited all the mentors in the current PPMC 
who would like to stay involved.
   
   -
   
   Establish the Apache TVM Project
   
   WHEREAS, the Board of Directors deems it to be in the best interests of
   the Foundation and consistent with the Foundation's purpose to establish
   a Project Management Committee charged with the creation and maintenance
   of open-source software, for distribution at no charge to the public,
   related to compilation of machine learning models to run on a wide range of 
hardware platforms...
   
   NOW, THEREFORE, BE IT RESOLVED, that a Project Management Committee
   (PMC), to be known as the "Apache TVM Project", be and hereby is
   established pursuant to Bylaws of the Foundation; and be it further
   
   RESOLVED, that the Apache TVM Project be and hereby is responsible for the
   creation and maintenance of software related to compilation of machine 
learning models to run on a wide range of hardware platforms; and be it further
   
   RESOLVED, that the office of "Vice President, Apache TVM" be and
   hereby is created, the person holding such office to serve at the
   direction of the Board of Directors as the chair of the Apache TVM
   Project, and to have primary responsibility for management of the
   projects within the scope of responsibility of the Apache TVM
   Project; and be it further
   
   RESOLVED, that the persons listed immediately below be and hereby are
   appointed to serve as the initial members of the Apache TVM Project:
   
* Tianqi Chen 
* Timothy Chen 
* Zhi Chen 
* Byung-Gon Chun 
* Ziheng Jiang 
* Furkan Kamaci  
* YiZhi Liu
* Masahiro Masuda 
* Thierry Moreau 
* Jared Roesch 
* Henry Saputra 
* Haichen Shen 
* Markus Weimer 
* Eddie Yan 
* Lianmin Zheng 
   
   NOW, THEREFORE, BE IT FURTHER RESOLVED, that Tianqi Chen be appointed to
   the office of Vice President, Apache TVM, to serve in accordance
   with and subject to the direction of the Board of Directors and the
   Bylaws of the Foundation until death, resignation, retirement, removal
   or disqualification, or until a successor is appointed; and be it
   further
   
   RESOLVED, that the Apache TVM Project be and hereby is tasked with
   the migration and rationalization of the Apache Incubator TVM
   podling; and be it further
   
   RESOLVED, that all responsibilities pertaining to the Apache Incubator
   TVM  podling encumbered upon the Apache Incubator PMC are hereafter
   Discharged.
   
   - [1] 
https://lists.apache.org/thread.html/r91b8f469c6a54769869bb2435b7334a28bcff885ae078ab5612dae00%40%3Cdev.tvm.apache.org%3E
   - [2] 
https://docs.google.com/document/d/18nyAH-fcptVezAxPQe6H3FeTKPRkujOp1tc1YRSPLok/edit?usp=sharing
 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] electriclilies commented on a change in pull request #6293: [DYN][RELAY] Resize support for NCHW-convertible layouts

2020-08-24 Thread GitBox


electriclilies commented on a change in pull request #6293:
URL: https://github.com/apache/incubator-tvm/pull/6293#discussion_r475884598



##
File path: python/tvm/relay/op/dyn/image/_image.py
##
@@ -67,10 +57,19 @@ def resize_shape_func(attrs, inputs, _):
 Shape function for dyn.image.resize op.
 """
 layout = attrs.layout
-if layout == 'NHWC':
-out = [_NHWC_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)))]
-elif (layout == 'NCHW') or nchw_pack_layout(layout) or 
nchw_xc_layout(layout):
-out = [_NCHW_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)))]
+if nchw_pack_layout(layout) or nchw_xc_layout(layout):
+out = [_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)),
+  convert(2), convert(3), convert(1))]

Review comment:
   It should. The shape function copies the entire shape by iterating over 
ndim, so all dimensions are copied before any swapping happens. 
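   
   For readers skimming the thread, a plain-Python sketch of that 
copy-then-overwrite behaviour (illustration only; the height/width positions 
mirror the `convert(2), convert(3)` arguments in the diff above):
   
   ```python
   def resize_shape(data_shape, size, height_axis=2, width_axis=3):
       """Copy every input dim first, then overwrite only H and W."""
       out = list(data_shape)         # all ndim entries are copied up front
       out[height_axis] = size[0]     # swapping happens only after the copy
       out[width_axis] = size[1]
       return out

   # NCHW4c input: the trailing packed dim is preserved untouched.
   assert resize_shape((1, 8, 16, 16, 4), (32, 32)) == [1, 8, 32, 32, 4]
   ```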





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tkonolige commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475884123



##
File path: python/tvm/testing.py
##
@@ -285,4 +288,105 @@ def _check_forward(constraints1, constraints2, varmap, 
backvarmap):
constraints_trans.dst_to_src, constraints_trans.src_to_dst)
 
 
+def gpu(f):
+"""Mark to differentiate tests that use the GPU is some capacity. These
+tests will be run on CPU-only nodes and on nodes with GPUS.
+
+To mark a test that must have a GPU present to run, use `@requires_gpu`.
+"""
+return pytest.mark.gpu(f)
+
+
+def requires_gpu(f):

Review comment:
   I don't think this is possible with pytest. We don't want to have to run 
every test to see if they require a gpu or not.
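   
   For context, a minimal sketch of how a test would consume the decorator 
under discussion (assumes this PR's `tvm.testing.requires_gpu`; the rest is the 
0.7-era TVM API):
   
   ```python
   import numpy as np
   import tvm
   import tvm.testing
   from tvm import te

   @tvm.testing.requires_gpu  # skipped (not failed) on CPU-only nodes
   def test_add_one_gpu():
       n = 128
       A = te.placeholder((n,), name="A")
       B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
       s = te.create_schedule(B.op)
       s[B].bind(B.op.axis[0], te.thread_axis("threadIdx.x"))
       f = tvm.build(s, [A, B], "cuda")
       ctx = tvm.gpu(0)
       a = tvm.nd.array(np.random.rand(n).astype("float32"), ctx)
       b = tvm.nd.array(np.zeros(n, dtype="float32"), ctx)
       f(a, b)
       np.testing.assert_allclose(b.asnumpy(), a.asnumpy() + 1.0)
   ```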





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-08-24 Thread GitBox


tqchen commented on issue #6299:
URL: https://github.com/apache/incubator-tvm/issues/6299#issuecomment-679357130


   We also need a VP who handles the PMC's reporting to the board, I self 
nominate to serve that role for the community.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6293: [DYN][RELAY] Resize support for NCHW-convertible layouts

2020-08-24 Thread GitBox


icemelon9 commented on a change in pull request #6293:
URL: https://github.com/apache/incubator-tvm/pull/6293#discussion_r475880977



##
File path: python/tvm/relay/op/dyn/image/_image.py
##
@@ -67,10 +57,19 @@ def resize_shape_func(attrs, inputs, _):
 Shape function for dyn.image.resize op.
 """
 layout = attrs.layout
-if layout == 'NHWC':
-out = [_NHWC_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)))]
-elif (layout == 'NCHW') or nchw_pack_layout(layout) or 
nchw_xc_layout(layout):
-out = [_NCHW_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)))]
+if nchw_pack_layout(layout) or nchw_xc_layout(layout):
+out = [_resize_shape_func(inputs[0].shape, inputs[1], 
convert(len(inputs[0].shape)),
+  convert(2), convert(3), convert(1))]

Review comment:
   Does the shape function work for `NCHWc` layout?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tkonolige commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475877787



##
File path: python/tvm/_ffi/runtime_ctypes.py
##
@@ -197,6 +199,10 @@ def _GetDeviceAttr(self, device_type, device_id, attr_id):
 @property
 def exist(self):
 """Whether this device exist."""
+allowed_ctxs = os.environ.get("TVM_TEST_CTXS")

Review comment:
   Sorry, this is old code. Forgot to remove it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r475874742



##
File path: 3rdparty/posit/posit-wrapper.cc
##
@@ -0,0 +1,211 @@
+#include <tvm/runtime/c_runtime_api.h>
+
+#include <cstdint>
+
+#include "universal/posit/posit.hpp"
+// must go after posit.hpp
+#include "universal/posit/math/exponent.hpp"
+#include "universal/posit/math/hyperbolic.hpp"
+#include "universal/posit/math/logarithm.hpp"
+#include "universal/posit/math/sqrt.hpp"
+
+TVM_DLL sw::unum::posit<8, 2> Uint8ToPosit8es2(uint8_t in) {
+  sw::unum::bitblock<8> bb;
+  bb = static_cast<unsigned long long>(in);
+  return sw::unum::posit<8, 2>().set(bb);
+}
+
+extern "C" {
+TVM_DLL uint8_t RawPosit8es2(uint8_t in) { return in; }
+
+TVM_DLL uint8_t Posit8es2toUint8(sw::unum::posit<8, 2> in) {
+  return static_cast<uint8_t>(in.get().to_ullong());
+}
+
+TVM_DLL float Posit8es2ToFloat(uint8_t in) { return 
Uint8ToPosit8es2(in).operator float(); }
+
+TVM_DLL uint8_t FloatToPosit8es2(float in) {
+  auto posit = sw::unum::posit<8, 2>(in);
+  return Posit8es2toUint8(posit);
+}
+
+// TODO(gus) how wide should the input be?
+TVM_DLL uint8_t IntToPosit8es2(int in) { return 
Posit8es2toUint8(sw::unum::posit<8, 2>(in)); }
+
+TVM_DLL uint8_t Posit8es2Add(uint8_t a, uint8_t b) {
+  return Posit8es2toUint8(Uint8ToPosit8es2(a) + Uint8ToPosit8es2(b));
+}
+
+TVM_DLL uint8_t Posit8es2Sub(uint8_t a, uint8_t b) {
+  return Posit8es2toUint8(Uint8ToPosit8es2(a) - Uint8ToPosit8es2(b));
+}
+
+TVM_DLL uint8_t Posit8es2Mul(uint8_t a, uint8_t b) {
+  return Posit8es2toUint8(Uint8ToPosit8es2(a) * Uint8ToPosit8es2(b));
+}
+
+TVM_DLL uint8_t Posit8es2Div(uint8_t a, uint8_t b) {
+  return Posit8es2toUint8(Uint8ToPosit8es2(a) / Uint8ToPosit8es2(b));
+}
+
+TVM_DLL uint8_t Posit8es2Max(uint8_t a, uint8_t b) {
+  auto a_p = Uint8ToPosit8es2(a);
+  auto b_p = Uint8ToPosit8es2(b);
+  return Posit8es2toUint8(a_p > b_p ? a_p : b_p);
+}
+
+TVM_DLL uint8_t Posit8es2Sqrt(uint8_t a) {
+  return Posit8es2toUint8(sw::unum::sqrt(Uint8ToPosit8es2(a)));
+}
+
+TVM_DLL uint8_t Posit8es2Exp(uint8_t a) {
+  return Posit8es2toUint8(sw::unum::exp(Uint8ToPosit8es2(a)));
+}
+
+TVM_DLL uint8_t Posit8es2Log(uint8_t a) {
+  return Posit8es2toUint8(sw::unum::log(Uint8ToPosit8es2(a)));
+}
+
+TVM_DLL uint8_t Posit8es2Sigmoid(uint8_t a) {
+  auto posit_one = sw::unum::posit<8, 2>(1);
+  return Posit8es2toUint8(posit_one / (sw::unum::exp(-Uint8ToPosit8es2(a)) + 
posit_one));
+}
+
+TVM_DLL uint8_t Posit8es2Tanh(uint8_t a) {
+  return Posit8es2toUint8(sw::unum::tanh(Uint8ToPosit8es2(a)));
+}
+}
+
+TVM_DLL sw::unum::posit<16, 2> Uint16ToPosit16es2(uint16_t in) {
+  sw::unum::bitblock<16> bb;
+  bb = static_cast<unsigned long long>(in);
+  return sw::unum::posit<16, 2>().set(bb);
+}
+
+extern "C" {
+TVM_DLL uint16_t RawPosit16es2(uint16_t in) { return in; }
+
+TVM_DLL uint16_t Posit16es2toUint16(sw::unum::posit<16, 2> in) {
+  return static_cast<uint16_t>(in.get().to_ullong());
+}
+
+TVM_DLL float Posit16es2ToFloat(uint16_t in) { return 
Uint16ToPosit16es2(in).operator float(); }
+
+TVM_DLL uint16_t FloatToPosit16es2(float in) {
+  auto posit = sw::unum::posit<16, 2>(in);
+  return Posit16es2toUint16(posit);
+}
+
+// TODO(gus) how wide should the input be?
+TVM_DLL uint16_t IntToPosit16es2(int in) { return 
Posit16es2toUint16(sw::unum::posit<16, 2>(in)); }
+
+TVM_DLL uint16_t Posit16es2Add(uint16_t a, uint16_t b) {
+  return Posit16es2toUint16(Uint16ToPosit16es2(a) + Uint16ToPosit16es2(b));
+}
+
+TVM_DLL uint16_t Posit16es2Sub(uint16_t a, uint16_t b) {
+  return Posit16es2toUint16(Uint16ToPosit16es2(a) - Uint16ToPosit16es2(b));
+}
+
+TVM_DLL uint16_t Posit16es2Mul(uint16_t a, uint16_t b) {
+  return Posit16es2toUint16(Uint16ToPosit16es2(a) * Uint16ToPosit16es2(b));
+}
+
+TVM_DLL uint16_t Posit16es2Div(uint16_t a, uint16_t b) {
+  return Posit16es2toUint16(Uint16ToPosit16es2(a) / Uint16ToPosit16es2(b));
+}
+
+TVM_DLL uint16_t Posit16es2Max(uint16_t a, uint16_t b) {
+  auto a_p = Uint16ToPosit16es2(a);
+  auto b_p = Uint16ToPosit16es2(b);
+  return Posit16es2toUint16(a_p > b_p ? a_p : b_p);
+}
+
+TVM_DLL uint16_t Posit16es2Sqrt(uint16_t a) {
+  return Posit16es2toUint16(sw::unum::sqrt(Uint16ToPosit16es2(a)));
+}
+
+TVM_DLL uint16_t Posit16es2Exp(uint16_t a) {
+  return Posit16es2toUint16(sw::unum::exp(Uint16ToPosit16es2(a)));
+}
+
+TVM_DLL uint16_t Posit16es2Log(uint16_t a) {
+  return Posit16es2toUint16(sw::unum::log(Uint16ToPosit16es2(a)));
+}
+
+TVM_DLL uint16_t Posit16es2Sigmoid(uint16_t a) {
+  auto posit_one = sw::unum::posit<16, 2>(1);
+  return Posit16es2toUint16(posit_one / (sw::unum::exp(-Uint16ToPosit16es2(a)) 
+ posit_one));
+}
+
+TVM_DLL uint16_t Posit16es2Tanh(uint16_t a) {
+  return Posit16es2toUint16(sw::unum::tanh(Uint16ToPosit16es2(a)));
+}
+}
+
+TVM_DLL sw::unum::posit<32, 2> Uint32ToPosit32es2(uint32_t in) {
+  sw::unum::bitblock<32> bb;
+  bb = static_cast<unsigned long long>(in);
+  return sw::unum::posit<32, 2>().set(bb);
+}
+
+extern "C" {
+TVM_DLL uint32_t RawPosit32es2(uint32_t in) { return in; }

Review comment:
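
   As background on the pattern in this wrapper: the custom type travels 
through TVM as an opaque integer bit pattern, and every operation converts at 
the boundary. A pure-Python analogue with a toy 8-bit fixed-point type 
standing in for posit<8, 2> (illustration only):
   
   ```python
   def float_to_fix8(x):
       """Encode as a two's-complement byte with 4 fractional bits."""
       return max(-128, min(127, round(x * 16))) & 0xFF

   def fix8_to_float(b):
       b &= 0xFF
       return (b - 256 if b >= 128 else b) / 16.0

   def fix8_add(a, b):
       # Same shape as Posit8es2Add above: decode, compute, re-encode.
       return float_to_fix8(fix8_to_float(a) + fix8_to_float(b))

   x, y = float_to_fix8(1.5), float_to_fix8(0.25)
   assert fix8_to_float(fix8_add(x, y)) == 1.75
   ```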

[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tkonolige commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475871867



##
File path: Jenkinsfile
##
@@ -202,8 +202,8 @@ stage('Unit Test') {
 unpack_lib('gpu', tvm_multilib)
 timeout(time: max_time, unit: 'MINUTES') {
   sh "${docker_run} ${ci_gpu} ./tests/scripts/task_sphinx_precheck.sh"
-  sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_unittest.sh"
-  sh "${docker_run} ${ci_gpu} 
./tests/scripts/task_python_integration.sh"
+  sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_unittest.sh 
gpu"

Review comment:
   Without any arguments, the scripts should be the same as before this PR. 
The gpu or cpu arguments are completely optional.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] MarisaKirisame commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


MarisaKirisame commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475869124



##
File path: src/relay/op/type_relations.cc
##
@@ -126,6 +126,7 @@ bool BroadcastCompRel(const Array<Type>& types, int num_inputs, const Attrs& att
   return true;
 }
   }
+  reporter->Assign(types[0], types[1]);

Review comment:
   Even if you add an explicit type? Not doing anything is the correct thing, 
and I don't think you should do this (looks unsound to me) to make the type 
checker stronger - it is probably too strong.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tkonolige commented on pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tkonolige commented on pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#issuecomment-679343397


   I agree that an RFC could be useful, but maybe I could just add information 
to the docs instead?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6327: [OpFusion] Make the max number of fused ops configurable

2020-08-24 Thread GitBox


tqchen commented on pull request #6327:
URL: https://github.com/apache/incubator-tvm/pull/6327#issuecomment-679342779


   Thanks @masahi @zhiics @junrushao1994 !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (37cbbd7 -> 6b5176d)

2020-08-24 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 37cbbd7  [Relay] Support for PyTorch Non-Maximum Suppression (#6314)
 add 6b5176d  [OpFusion] Make the max number of fused ops configurable 
(#6327)

No new revisions were added by this update.

Summary of changes:
 src/relay/transforms/fuse_ops.cc | 52 -
 tests/python/relay/test_pass_fuse_ops.py | 66 
 2 files changed, 101 insertions(+), 17 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #6327: [OpFusion] Make the max number of fused ops configurable

2020-08-24 Thread GitBox


tqchen merged pull request #6327:
URL: https://github.com/apache/incubator-tvm/pull/6327


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tqchen commented on pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#issuecomment-679342371


   It might also be helpful to send a quick RFC given that the change of test 
infra will affect quite a lot of people



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-08-24 Thread GitBox


tqchen commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-679341931


   @hgt312 feel free to take over, right now this issue has not been addressed



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tqchen commented on pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#issuecomment-679339864


   cc @zhiics @yzhliu @junrushao1994 @merrymercy @ajtulloch @kparzysz-quic 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tqchen commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475852720



##
File path: Jenkinsfile
##
@@ -202,8 +202,8 @@ stage('Unit Test') {
 unpack_lib('gpu', tvm_multilib)
 timeout(time: max_time, unit: 'MINUTES') {
   sh "${docker_run} ${ci_gpu} ./tests/scripts/task_sphinx_precheck.sh"
-  sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_unittest.sh"
-  sh "${docker_run} ${ci_gpu} 
./tests/scripts/task_python_integration.sh"
+  sh "${docker_run} ${ci_gpu} ./tests/scripts/task_python_unittest.sh 
gpu"

Review comment:
   Let us consider adding new scripts, e.g. 
`tests/scripts/task_python_unittest_gpuonly.sh`, that set up the environment 
and run `task_python_unittest.sh`. This way devs can always directly go and 
run the scripts.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tqchen commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475851274



##
File path: tests/scripts/setup-pytest-env.sh
##
@@ -26,5 +26,20 @@ else
 fi
 set -u
 
+export TVM_TEST_DEVICES=""
+while test $# -gt 0
+do
+case "$1" in
+cpu) export TVM_TEST_DEVICES="llvm;llvm 
-device=arm_cpu;$TVM_TEST_DEVICES"
+;;

Review comment:
   I feel it is a bit overly complicated to have multiple levels of indirection 
here:
   - D0: arguments in the test script
   - D1: env variable (TVM_TEST_DEVICES)
   - D2: pytest flag.
   
   It would be awesome if we can reduce it to a single level, e.g. an environment 
variable, and ensure good default behavior so that we can:
   - Directly run a specific test via pytest without worrying about setting up 
the env and running setup-pytest-env.sh
   - Have good defaults when nothing is set (e.g. run everything that is 
available).
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tqchen commented on a change in pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#discussion_r475843056



##
File path: python/tvm/testing.py
##
@@ -285,4 +288,105 @@ def _check_forward(constraints1, constraints2, varmap, 
backvarmap):
constraints_trans.dst_to_src, constraints_trans.src_to_dst)
 
 
+def gpu(f):
+"""Mark to differentiate tests that use the GPU is some capacity. These
+tests will be run on CPU-only nodes and on nodes with GPUS.
+

Review comment:
   Use numpydoc style to document all the arguments

##
File path: python/tvm/testing.py
##
@@ -285,4 +288,105 @@ def _check_forward(constraints1, constraints2, varmap, 
backvarmap):
constraints_trans.dst_to_src, constraints_trans.src_to_dst)
 
 
+def gpu(f):
+"""Mark to differentiate tests that use the GPU is some capacity. These
+tests will be run on CPU-only nodes and on nodes with GPUS.
+
+To mark a test that must have a GPU present to run, use `@requires_gpu`.
+"""
+return pytest.mark.gpu(f)
+
+
+def requires_gpu(f):

Review comment:
   Perhaps it is more useful to mark a region instead of marking a function, e.g. 
   
   ```python
   with tvm.testing.cuda_region():
   pass
   ```
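   
   One possible shape for that helper, as a hedged sketch (`cuda_region` does 
not exist in `tvm.testing` today; the name and behavior are assumptions):
   
   ```python
   import contextlib
   import pytest
   import tvm

   @contextlib.contextmanager
   def cuda_region():
       # Hypothetical helper: skip the rest of the test when CUDA is
       # unavailable, so only the region's body (not the whole function)
       # needs a GPU.
       if not tvm.runtime.enabled("cuda") or not tvm.gpu().exist:
           pytest.skip("CUDA not available")
       yield
   ```
   
   A test could then keep its CPU checks unconditional and wrap only the CUDA 
tail in `with cuda_region():`.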

##
File path: python/tvm/testing.py
##
@@ -285,4 +288,105 @@ def _check_forward(constraints1, constraints2, varmap, 
backvarmap):
constraints_trans.dst_to_src, constraints_trans.src_to_dst)
 
 
+def gpu(f):
+"""Mark to differentiate tests that use the GPU is some capacity. These
+tests will be run on CPU-only nodes and on nodes with GPUS.
+
+To mark a test that must have a GPU present to run, use `@requires_gpu`.
+"""
+return pytest.mark.gpu(f)
+
+
+def requires_gpu(f):
+"""Mark a test as requiring a GPU to run. Tests with this mark will not be
+run unless a gpu is present.
+"""
+return pytest.mark.skipif(not tvm.gpu().exist, reason="No GPU 
present")(gpu(f))
+
+
+def requires_cuda(f):
+"""Mark a test as requiring the CUDA runtime. This does not mean the tests
+also requires a gpu. For that, use `@requires_gpu` and `@requires_cuda`
+"""
+return pytest.mark.cuda(
+pytest.mark.skipif(
+not tvm.runtime.enabled("cuda"), reason="CUDA support not enabled"
+)(requires_gpu(f))
+)
+
+
+def requires_opencl(f):
+"""Mark a test as requiring the OpenCL runtime. This does not mean the 
tests
+also requires a gpu. For that, use `@requires_gpu` and `@requires_cuda`.
+"""
+return pytest.mark.opencl(
+pytest.mark.skipif(
+not tvm.runtime.enabled("opencl"), reason="OpenCL support not 
enabled"
+)(f)
+)
+
+
+def requires_tpu(f):

Review comment:
   requires_tensorcore

##
File path: tests/python/topi/python/test_topi_conv2d_nhwc_winograd.py
##
@@ -114,6 +111,8 @@ def check_device(device):
 check_device(devices)
 
 
+@requires_cuda

Review comment:
   Should requires_cuda imply requires_gpu?

##
File path: python/tvm/_ffi/runtime_ctypes.py
##
@@ -197,6 +199,10 @@ def _GetDeviceAttr(self, device_type, device_id, attr_id):
 @property
 def exist(self):
 """Whether this device exist."""
+allowed_ctxs = os.environ.get("TVM_TEST_CTXS")

Review comment:
   This seems to be a hack that we should not put in here, but instead put 
in tvm.testing

##
File path: python/tvm/testing.py
##
@@ -285,4 +288,105 @@ def _check_forward(constraints1, constraints2, varmap, 
backvarmap):
constraints_trans.dst_to_src, constraints_trans.src_to_dst)
 
 
+def gpu(f):

Review comment:
   The function name is not informative; e.g. `use_gpu`?

##
File path: tests/python/topi/python/test_topi_conv2d_nhwc_winograd.py
##
@@ -114,6 +111,8 @@ def check_device(device):
 check_device(devices)
 
 
+@requires_cuda

Review comment:
   Perhaps it is better to use the absolute name `tvm.testing.requires_cuda` to 
give better context; do the same for the other annotations.

##
File path: tests/scripts/setup-pytest-env.sh
##
@@ -26,5 +26,20 @@ else
 fi
 set -u
 
+export TVM_TEST_DEVICES=""
+while test $# -gt 0
+do
+case "$1" in
+cpu) export TVM_TEST_DEVICES="llvm;llvm 
-device=arm_cpu;$TVM_TEST_DEVICES"
+;;

Review comment:
   I feel it is a bit overly complicated to have multiple levels of indirection 
here:
   - D0: arguments in the test script
   - D1: env variable (TVM_TEST_DEVICES)
   - D2: pytest flag.
   
   It would be awesome if we can reduce it to a single level, e.g. an environment 
variable, and ensure good default behavior so that we can:
   - Directly run a specific test via pytest without worrying about setting up 
the env and running setup-pytest-env.sh
   - Have good defaults when nothing is set (e.g. run everything that is 
available).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475824761



##
File path: src/relay/transforms/type_infer.cc
##
@@ -86,6 +86,31 @@ struct ResolvedTypeInfo {
  Array<Type> type_args = Array<Type>(ObjectPtr<Object>(nullptr));
 };
 
+// helper class to dedup typevars of a type
+// - types do not have to be already typechecked
+//
+// This is used to Dedup GlobalVar type to avoid
+// incorrect type resolving across different usages
+class DeDupType : public TypeMutator, public ExprMutator, public 
PatternMutator {

Review comment:
   moved to `de_duplicate.cc`, thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475825103



##
File path: src/relay/op/type_relations.cc
##
@@ -126,6 +126,7 @@ bool BroadcastCompRel(const Array<Type>& types, int num_inputs, const Attrs& att
   return true;
 }
   }
+  reporter->Assign(types[0], types[1]);

Review comment:
   ```
   # f(x) = if x > 0 then g(x - 1) else 0
   # g(y) = if y > 0 then f(y - 1) else 0
   ```
   this test doesn't typecheck otherwise





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475823330



##
File path: tests/python/relay/test_type_infer.py
##
@@ -362,6 +365,147 @@ def test_let_polymorphism():
 int32 = relay.TensorType((), "int32")
 tvm.ir.assert_structural_equal(body.checked_type, relay.TupleType([int32, 
relay.TupleType([])]))
 
+def test_mutual_recursion():
+# f(x) = if x > 0 then g(x - 1) else 0
+# g(y) = if y > 0 then f(y - 1) else 0
+tensortype = relay.TensorType((), 'float32')
+
+x = relay.Var("x")
+y = relay.Var("y")
+
+zero = relay.Constant(tvm.nd.array(np.array(0, dtype='float32')))
+one = relay.Constant(tvm.nd.array(np.array(1, dtype='float32')))
+
+f_gv = relay.GlobalVar('f')
+g_gv = relay.GlobalVar('g')
+
+def body(var, call_func):
+subtract_one = relay.op.subtract(var, one)
+cond = relay.If(relay.op.greater(var, zero),
+relay.Call(call_func, [subtract_one]),
+zero)
+func = relay.Function([var], cond)
+return func
+
+f = body(x, g_gv)
+g = body(y, f_gv)
+
+mod = tvm.IRModule()
+# p = Prelude(mod)
+mod.add_unchecked(f_gv, f)
+mod.add_unchecked(g_gv, g)
+mod = transform.InferTypeAll()(mod)
+
+expected = relay.FuncType([tensortype], tensortype)
+tvm.ir.assert_structural_equal(mod[f_gv].checked_type, expected)
+tvm.ir.assert_structural_equal(mod[g_gv].checked_type, expected)
+
+def test_mutual_recursion_adt():
+# f[A](x: A) = match x {
+#   Cons(a, Nil) => a
+#   Cons(_, b) => g(b)
+# }
+# g[B](y: B) = match y {
+#   Cons(a, Nil) => a
+#   Cons(_, b) => f(b)
+# }
+p = Prelude()
+l = p.l
+
+A = relay.TypeVar("A")
+B = relay.TypeVar("B")
+
+x = relay.Var("x")
+y = relay.Var("y")
+
+f_gv = relay.GlobalVar('f')
+g_gv = relay.GlobalVar('g')
+
+def body(var, call_func, type_param):
+a = relay.Var("a", type_param)
+b = relay.Var("b")
+body = relay.Match(
+var, 
+[
+relay.Clause(relay.PatternConstructor(p.cons, 
[relay.PatternVar(a), relay.PatternConstructor(p.nil)]), a),
+relay.Clause(relay.PatternConstructor(p.cons, 
[relay.PatternWildcard(), relay.PatternVar(b)]), relay.Call(call_func, [b]))
+],
+complete=False
+)
+func = relay.Function([var], body, type_params=[type_param])
+return func
+
+f = body(x, g_gv, A)
+g = body(y, f_gv, B)
+
+mod = p.mod
+mod.add_unchecked(f_gv, f)
+mod.add_unchecked(g_gv, g)
+mod = transform.InferTypeAll()(mod)
+
+tv = relay.TypeVar("test")
+expected = relay.FuncType([l(tv)], tv, [tv])
+tvm.ir.assert_structural_equal(mod[f_gv].checked_type, expected)
+tvm.ir.assert_structural_equal(mod[g_gv].checked_type, expected)
+
+def test_mutual_recursion_peano():
+# even and odd function for peano function
+# even(x) = match x {
+#   z => true
+#   s(a: nat) => odd(a)
+# }
+# odd(x) = match x {
+#   z => false
+#   s(a: nat) => even(a)
+# }
+p = Prelude()
+add_nat_definitions(p)
+z = p.z

Review comment:
   The parser fails to handle a module with mutually recursive functions; we 
would have to make changes to the parser, which feels out of the scope of 
this PR.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475823726



##
File path: src/relay/analysis/type_solver.h
##
@@ -65,6 +65,9 @@ class TypeSolver {
  public:
   TypeSolver(const GlobalVar& current_func, const IRModule& _mod, 
ErrorReporter* err_reporter);
   ~TypeSolver();
+
+  void SetCurrentFunc(GlobalVar current_func) { this->current_func = 
current_func; }

Review comment:
   done thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475823330



##
File path: tests/python/relay/test_type_infer.py
##
@@ -362,6 +365,147 @@ def test_let_polymorphism():
 int32 = relay.TensorType((), "int32")
 tvm.ir.assert_structural_equal(body.checked_type, relay.TupleType([int32, 
relay.TupleType([])]))
 
+def test_mutual_recursion():
+# f(x) = if x > 0 then g(x - 1) else 0
+# g(y) = if y > 0 then f(y - 1) else 0
+tensortype = relay.TensorType((), 'float32')
+
+x = relay.Var("x")
+y = relay.Var("y")
+
+zero = relay.Constant(tvm.nd.array(np.array(0, dtype='float32')))
+one = relay.Constant(tvm.nd.array(np.array(1, dtype='float32')))
+
+f_gv = relay.GlobalVar('f')
+g_gv = relay.GlobalVar('g')
+
+def body(var, call_func):
+subtract_one = relay.op.subtract(var, one)
+cond = relay.If(relay.op.greater(var, zero),
+relay.Call(call_func, [subtract_one]),
+zero)
+func = relay.Function([var], cond)
+return func
+
+f = body(x, g_gv)
+g = body(y, f_gv)
+
+mod = tvm.IRModule()
+# p = Prelude(mod)
+mod.add_unchecked(f_gv, f)
+mod.add_unchecked(g_gv, g)
+mod = transform.InferTypeAll()(mod)
+
+expected = relay.FuncType([tensortype], tensortype)
+tvm.ir.assert_structural_equal(mod[f_gv].checked_type, expected)
+tvm.ir.assert_structural_equal(mod[g_gv].checked_type, expected)
+
+def test_mutual_recursion_adt():
+# f[A](x: A) = match x {
+#   Cons(a, Nil) => a
+#   Cons(_, b) => g(b)
+# }
+# g[B](y: B) = match y {
+#   Cons(a, Nil) => a
+#   Cons(_, b) => f(b)
+# }
+p = Prelude()
+l = p.l
+
+A = relay.TypeVar("A")
+B = relay.TypeVar("B")
+
+x = relay.Var("x")
+y = relay.Var("y")
+
+f_gv = relay.GlobalVar('f')
+g_gv = relay.GlobalVar('g')
+
+def body(var, call_func, type_param):
+a = relay.Var("a", type_param)
+b = relay.Var("b")
+body = relay.Match(
+var, 
+[
+relay.Clause(relay.PatternConstructor(p.cons, 
[relay.PatternVar(a), relay.PatternConstructor(p.nil)]), a),
+relay.Clause(relay.PatternConstructor(p.cons, 
[relay.PatternWildcard(), relay.PatternVar(b)]), relay.Call(call_func, [b]))
+],
+complete=False
+)
+func = relay.Function([var], body, type_params=[type_param])
+return func
+
+f = body(x, g_gv, A)
+g = body(y, f_gv, B)
+
+mod = p.mod
+mod.add_unchecked(f_gv, f)
+mod.add_unchecked(g_gv, g)
+mod = transform.InferTypeAll()(mod)
+
+tv = relay.TypeVar("test")
+expected = relay.FuncType([l(tv)], tv, [tv])
+tvm.ir.assert_structural_equal(mod[f_gv].checked_type, expected)
+tvm.ir.assert_structural_equal(mod[g_gv].checked_type, expected)
+
+def test_mutual_recursion_peano():
+# even and odd function for peano function
+# even(x) = match x {
+#   z => true
+#   s(a: nat) => odd(a)
+# }
+# odd(x) = match x {
+#   z => false
+#   s(a: nat) => even(a)
+# }
+p = Prelude()
+add_nat_definitions(p)
+z = p.z

Review comment:
   The parser isn't set up to handle mutually recursive functions; we would 
have to make changes to the parser, which feels out of the scope of this PR.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #6238: [TIR][Transform]Block scope hoisting added

2020-08-24 Thread GitBox


ANSHUMAN87 commented on a change in pull request #6238:
URL: https://github.com/apache/incubator-tvm/pull/6238#discussion_r475821907



##
File path: python/tvm/driver/build_module.py
##
@@ -181,7 +181,7 @@ def lower(sch,
 tvm.tir.transform.BF16Legalize(),
 tvm.tir.transform.NarrowDataType(32),
 tvm.tir.transform.Simplify(),
-tvm.tir.transform.HoistIfThenElse(),
+tvm.tir.transform.HoistIfThenElse("basic"),

Review comment:
   Yes, we can combine both and place the pass in the End phase. 
   I understand the dilemma here; I had the same one :)
   The reasoning I came up with later is as below.
   The current PR supports the feature for "Block scope vars" or "Attr nodes", 
which happens to be more applicable in specific cases (for example in CUDA 
kernels). Also there is a slight increase in time complexity (linear).
   
   So to sum up, we have 2 cases: 
   Case 1: "Basic" or "Default": the scenarios covered here should be more 
general (a simpler version) across targets.
   Case 2: "Advanced": the scenarios covered here should be enabled only in 
particular settings.
   
   Please let me know your thoughts on the above.
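   
   A short sketch of how the two cases are selected, mirroring the tests later 
in this thread (the config key is the one this PR registers):
   
   ```python
   import tvm

   ib = tvm.tir.ir_builder.create()
   data = ib.pointer("float32", name="data")
   with ib.for_range(0, 10, "i") as i:
       with ib.for_range(0, 10, "j") as j:
           with ib.if_scope(i >= 3):  # invariant w.r.t. the inner j loop
               data[i * 10 + j] = data[i * 10 + j] + 0.5
   mod = tvm.IRModule.from_expr(tvm.tir.PrimFunc([], ib.get()))

   # Case 1: "Basic"/"Default" hoisting.
   basic = tvm.tir.transform.HoistIfThenElse()(mod)

   # Case 2: "Advanced" block-scope (attr) hoisting, opted in explicitly.
   with tvm.transform.PassContext(
           config={"tir.HoistIfThenElse": {"support_block_scope_hosting": True}}):
       advanced = tvm.tir.transform.HoistIfThenElse()(mod)
   ```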





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6326: [Relay] Make check stricter by using Feature.

2020-08-24 Thread GitBox


junrushao1994 commented on a change in pull request #6326:
URL: https://github.com/apache/incubator-tvm/pull/6326#discussion_r475821511



##
File path: tests/python/relay/test_pass_merge_composite.py
##
@@ -999,14 +1000,4 @@ def _check_type_false(extract):
 
 if __name__ == "__main__":
 test_simple_merge()
-test_branch_merge()
-test_multiple_patterns()
-test_optional_pattern()
-test_merge_order()
-test_parallel_merge()
-test_multiple_input_subgraphs()
-test_reuse_call_merge()
-test_tuple_get_item_merge()
-test_pattern_with_check()
-test_diamond_not_merge()
-test_type_check()
+#pytest.main([__file__])

Review comment:
   recover this?

##
File path: include/tvm/relay/feature.h
##
@@ -124,6 +125,13 @@ class FeatureSet {
*/
   bool is_subset_of(const FeatureSet& rhs) const { return ((*this) - 
rhs).bs_.none(); }
 
+  /*!
+   * \brief Pretty Print the FeatureSet.
+   *
+   * \return a string representation.
+   */
+  std::string Print() const;

Review comment:
   Maybe `ToString` sounds better?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tkonolige opened a new pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-24 Thread GitBox


tkonolige opened a new pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331


   Much of the time spent in testing is duplicated work between CPU and GPU 
test nodes. The main reason is that there is no way to control which TVM 
devices are enabled at runtime, so tests that use LLVM will run on both GPU and 
CPU nodes.
   
   This patch adds an environment variable, `TVM_TEST_DEVICES`, which controls 
which TVM devices should be used by tests. Devices not in `TVM_TEST_DEVICES` 
can still be used, so tests must be careful to check that the desired device is 
enabled with `tvm.testing.device_enabled` or by enumerating all devices with 
`tvm.testing.enabled_devices`. All tests have been retrofitted with these 
checks.
   
   This patch also provides the decorator `@tvm.testing.gpu` to mark a test as 
possibly using the gpu. Tests that require the gpu can use 
`@tvm.testing.requires_gpu`. Tests without these flags will not be run on GPU 
nodes.
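   
   An illustrative guard built on the helpers named above (sketch only; the 
exact signatures are whatever this PR lands):
   
   ```python
   import tvm
   import tvm.testing

   def test_ctx_exists_if_selected():
       # Runs only when "cuda" is listed in TVM_TEST_DEVICES; otherwise a no-op.
       if not tvm.testing.device_enabled("cuda"):
           return
       ctx = tvm.context("cuda", 0)
       assert ctx.exist
   ```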



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


mbrookhart commented on a change in pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#discussion_r475819380



##
File path: python/tvm/relay/op/_transform.py
##
@@ -165,6 +138,8 @@ def _strided_slice_shape_func_input_shape(data_shape, 
begin, end, strides, slice
 cstride = int64(strides[i])
 if len(begin) > i:
 cbegin = int64(begin[i])
+if cbegin < 0:
+cbegin += int64(data_shape[i])

Review comment:
   I attempted to do this with an assert in the hybrid script, but 
something seems a little off in the compiler, even after these lines it was 
still checking against the original value.
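   
   The normalization itself is just NumPy-style index wrapping; a plain-Python 
equivalent of the two added lines:
   
   ```python
   def normalize_begin(cbegin, dim_extent):
       # A negative begin counts from the end of the axis, as in NumPy slicing.
       return cbegin + dim_extent if cbegin < 0 else cbegin

   assert normalize_begin(-1, 10) == 9
   assert normalize_begin(-10, 10) == 0
   assert normalize_begin(3, 10) == 3
   ```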





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #6238: [TIR][Transform]Block scope hoisting added

2020-08-24 Thread GitBox


ANSHUMAN87 commented on a change in pull request #6238:
URL: https://github.com/apache/incubator-tvm/pull/6238#discussion_r475818271



##
File path: tests/python/unittest/test_tir_transform_hoist_if.py
##
@@ -255,6 +259,488 @@ def test_multi_if():
('tir.For', 'i'): (('tir.IfThenElse', ('i',)),)}
 verify_structure(new_stmt, expected_struct)
 
+def test_no_hoisting_1():
+ib = tvm.tir.ir_builder.create()
+data = ib.pointer("float32", name="data")
+n = te.var("n")
+
+with ib.for_range(0, 10, "i") as i:
+with ib.for_range(0, 10, "j") as j:
+with ib.for_range(0, 10, "k") as k:
+with ib.if_scope(k >= 3):
+data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 
0.5
+
+stmt = ib.get()
+mod = tvm.IRModule.from_expr(tvm.tir.PrimFunc([], stmt))
+new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+with tvm.transform.PassContext(config={
+"tir.HoistIfThenElse": {"support_block_scope_hosting": True}
+}):
+new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+def test_no_hoisting_2():
+ib = tvm.tir.ir_builder.create()
+data = ib.pointer("float32", name="data")
+n = te.var("n")
+x = te.var("x")
+
+with ib.for_range(0, 10, "i") as i:
+with ib.for_range(0, 10, "j") as j:
+with ib.for_range(0, 10, "k") as k:
+with ib.if_scope(i >= 3):
+data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 
0.3
+data[i * 100 + j * 10 + k] = data[i * 100 + j * 10 + k] + 0.5
+
+stmt = ib.get()
+mod = tvm.IRModule.from_expr(tvm.tir.PrimFunc([], stmt))
+new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+with tvm.transform.PassContext(config={
+"tir.HoistIfThenElse": {"support_block_scope_hosting": True}
+}):
+new_stmt = tvm.tir.transform.HoistIfThenElse()(mod)["main"].body
+tvm.ir.assert_structural_equal(new_stmt, stmt)
+
+def test_no_hoisting_3():
+ib = tvm.tir.ir_builder.create()
+dshape = (32, 64)
+dshape_inner = (33, 63)
+data = ib.pointer("float32", name="data")
+l = te.var('l')
+m = te.var('m')
+n = te.var('n')
+
+tx = te.thread_axis("threadIdx.x")
+bx = te.thread_axis("blockIdx.x")
+ib.scope_attr(tx, "thread_extent", dshape[0])
+ib.scope_attr(bx, "thread_extent", dshape[1])
+with ib.for_range(0, l, "i") as i:
+with ib.for_range(0, m, "j") as j:
+with ib.for_range(0, n, "k") as k:
+ib.scope_attr(tx, "thread_extent", dshape_inner[0])
+ib.scope_attr(bx, "thread_extent", dshape_inner[1])

Review comment:
   I am sorry! I am not sure whether I understand your point here clearly, so 
please correct me if my reply misses it.
   In this case I am trying to show that there won't be any hoisting when the 
scope of a variable is redefined.
   And I think it may not be appropriate to hoist an Attr node, as it is meant 
for a particular block.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5881: [Relay] Mutual Recursion Support

2020-08-24 Thread GitBox


hypercubestart commented on a change in pull request #5881:
URL: https://github.com/apache/incubator-tvm/pull/5881#discussion_r475802507



##
File path: src/relay/transforms/type_infer.cc
##
@@ -109,6 +134,44 @@ class TypeInferencer : private ExprFunctor<Type(const Expr&)>,
   // inference the type of expr.
   Expr Infer(Expr expr);
 
+  void SetCurrentFunc(GlobalVar current_func) {
+this->current_func_ = current_func;
+this->solver_.SetCurrentFunc(current_func);
+  }
+
+  void Solve();
+  Expr ResolveType(Expr expr);
+
+  // Lazily get type for expr
+  // expression, we will populate it now, and return the result.
+  Type GetType(const Expr& expr) {
+    auto it = type_map_.find(expr);
+    if (it != type_map_.end() && it->second.checked_type.defined()) {
+      if (expr.as<GlobalVarNode>() != nullptr) {
+        // if we don't dedup GlobalVarNode, two functions that use the same
+        // GlobalVar may resolve to the same type incorrectly
+        return DeDupType().VisitType(it->second.checked_type);

Review comment:
   If we DeDup a TypeVar, this will always be incorrect because there is no 
substitution taking place.
   
   i.e.
   ```
   In `id`: 
 fn [A](%x: A) -> A {
 %x
   } unable to unify: `A` and `A`;
   ```
   %x is DeDup'd into a different type var, which is incorrect
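   
   A small Relay program of the kind at stake (sketch only): two 
differently-typed uses of one GlobalVar, which is exactly where stale or 
over-eagerly deduplicated type variables would collide.
   
   ```python
   import tvm
   from tvm import relay

   f = relay.GlobalVar("f")
   A = relay.TypeVar("A")
   x = relay.Var("x", A)
   mod = tvm.IRModule({f: relay.Function([x], x, A, [A])})  # polymorphic identity

   a = relay.Var("a", relay.TensorType((), "int32"))
   b = relay.Var("b", relay.TensorType((), "float32"))
   # Each call must instantiate f's type with fresh type variables.
   mod["main"] = relay.Function([a, b], relay.Tuple([f(a), f(b)]))
   ```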





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6310: [Ansor][AutoTVM v2.0] Phase 2: Evolutionary Search

2020-08-24 Thread GitBox


comaniac commented on a change in pull request #6310:
URL: https://github.com/apache/incubator-tvm/pull/6310#discussion_r475802287



##
File path: src/auto_scheduler/search_policy/sketch_policy_rules.cc
##
@@ -580,5 +580,254 @@ InitPopulationRule::ResultKind 
InitVectorization::Apply(SketchPolicyNode* policy
   return ResultKind::kValid;
 }
 
+MutationRule::ResultKind MutateTileSize::Apply(SketchPolicyNode* policy, 
State* state) const {
+  int max_innermost_split_factor =
+  GetIntParam(policy->params, SketchParamKey::max_innermost_split_factor);
+
+  // Extract all SplitStep
+  std::vector<size_t> split_step_ids;
+  for (size_t i = 0; i < (*state)->transform_steps.size(); ++i) {
+    if (auto ps = (*state)->transform_steps[i].as<SplitStepNode>()) {
+      if (!ps->extent.defined() || !ps->extent.value()->IsInstance<IntImmNode>()) {
+        continue;
+      }
+      auto innermost_factor = ps->lengths.back().value_or(max_innermost_split_factor + 1);
+      if (GetIntImm(innermost_factor) <= max_innermost_split_factor) {
+        split_step_ids.push_back(i);
+      }
+    }
+  }
+  if (split_step_ids.empty()) {
+// No tile size could be mutated.
+return ResultKind::kInvalid;
+  }
+
+  // Select a SplitStep with extent larger than one to mutate.
+  int retry_ct = 0;
+  int64_t extent = 1;
+  int step_id;
+  const SplitStepNode* ps;
+
+  do {
+step_id = split_step_ids[(policy->rand_gen)() % split_step_ids.size()];
+    ps = (*state)->transform_steps[step_id].as<SplitStepNode>();
+CHECK(ps != nullptr);
+extent = GetIntImm(ps->extent.value());
+retry_ct += 1;
+  } while (retry_ct < static_cast<int>(split_step_ids.size()) << 2 && (extent == 1 || extent == 0));
+
+  if (extent <= 1) {
+// Cannot find a step with extent larger than one.
+return ResultKind::kInvalid;
+  }
+
+  // Fetch the current tile sizes.
+  std::vector<int> lengths(ps->lengths.size() + 1, 1);
+  for (int i = 0; i < static_cast<int>(ps->lengths.size()); ++i) {
+    lengths[i + 1] = GetIntImm(ps->lengths[i].value());
+  }
+  lengths[0] = extent / ElementProduct(lengths);
+
+  // Random permute the tile size order.
+  std::vector random_perm;
+  RandomPermutation(lengths.size(), &random_perm, &(policy->rand_gen));
+
+  // Try to divide a factor from one tile size and multiple it to another.
+  for (size_t i = 0; i < random_perm.size(); ++i) {
+size_t src_idx = random_perm[i];
+int length = lengths[src_idx];
+if (length == 1) {
+  continue;
+}
+
+size_t dst_idx = random_perm[(i + 1) % random_perm.size()];
+    const std::vector<int>& factors = policy->split_memo.GetFactors(length);
+CHECK_GE(factors.size(), 1);
+
+int divide_factor;
+if (dst_idx == lengths.size() - 1) {
+  // Maintain the restriction of 
hardware_params.max_innermost_split_factor.
+      int max_factor_index = static_cast<int>(factors.size()) - 1;
+  for (; max_factor_index >= 1; max_factor_index--) {
+if (factors[max_factor_index] * lengths[dst_idx] <= 
max_innermost_split_factor) {
+  break;
+}
+  }
+  if (max_factor_index == 0) {
+// Failed on this dst_idx, try next one.
+continue;
+  }
+  divide_factor = factors[1 + (policy->rand_gen)() % (max_factor_index)];
+} else {
+  divide_factor = factors[1 + (policy->rand_gen)() % (factors.size() - 1)];
+}
+
+// Divide one factor from lengths[src_idx] and multiply it to 
lengths[dst_idx].
+    Array<Integer> new_lengths;
+for (size_t j = 1; j < lengths.size(); ++j) {
+  if (j == src_idx) {
+new_lengths.push_back(Integer(lengths[j] / divide_factor));
+  } else if (j == dst_idx) {
+new_lengths.push_back(Integer(lengths[j] * divide_factor));
+  } else {
+new_lengths.push_back(Integer(lengths[j]));
+  }
+}
+
+StateNode* pstate = state->CopyOnWrite();
+pstate->transform_steps.Set(
+step_id, SplitStep(ps->stage_id, ps->iter_id, ps->extent,
+   Array>(new_lengths.begin(), 
new_lengths.end()),
+   ps->inner_to_outer));
+return ResultKind::kValid;
+  }
+  return ResultKind::kInvalid;
+}
+
+MutationRule::ResultKind MutateMaxUnrollFactor::Apply(SketchPolicyNode* policy,
+  State* state) const {
+  // Extract all auto_unroll_max_step pragma steps.
+  std::vector<size_t> annotate_steps;
+  for (size_t i = 0; i < (*state)->transform_steps.size(); ++i) {
+    if (auto ps = (*state)->transform_steps[i].as<PragmaStepNode>()) {
+  if (StrStartsWith(ps->pragma_type, "auto_unroll_max_step")) {
+annotate_steps.push_back(i);
+  }
+}
+  }
+  if (annotate_steps.empty()) {
+return ResultKind::kInvalid;
+  }
+
+  // Randomly pick one unroll factor candidate.
+  auto cands = IsGPUTask(policy->search_task) ? &gpu_unroll_cands_ : &cpu_unroll_cands_;
+  auto new_factor = std::to_string((*cands)[(policy->rand_gen)() % 
cands->size()]);
+
+  // Randomly pick and mutate an unroll step.
+  auto step_id = annotate_steps[(policy->rand_gen)() % annotate_steps.size()];
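
A toy Python rendition of the tile-size move in `MutateTileSize` above 
(illustration only): peel one factor off a source tile and multiply it onto 
another, so the product - the loop extent - is preserved.

```python
import random

def factors(n):
    return [f for f in range(2, n + 1) if n % f == 0]

def mutate_tile_sizes(lengths, rng):
    order = list(range(len(lengths)))
    rng.shuffle(order)
    for i, src in enumerate(order):
        if lengths[src] == 1:
            continue                      # nothing to move from this tile
        dst = order[(i + 1) % len(order)]
        f = rng.choice(factors(lengths[src]))
        new = list(lengths)
        new[src] //= f
        new[dst] *= f
        return new
    return None                           # every tile had length one

tiles = [4, 2, 8]
mutated = mutate_tile_sizes(tiles, random.Random(0))
assert mutated is not None
assert tiles[0] * tiles[1] * tiles[2] == mutated[0] * mutated[1] * mutated[2]
```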

[GitHub] [incubator-tvm] ANSHUMAN87 commented on a change in pull request #6238: [TIR][Transform]Block scope hoisting added

2020-08-24 Thread GitBox


ANSHUMAN87 commented on a change in pull request #6238:
URL: https://github.com/apache/incubator-tvm/pull/6238#discussion_r475800494



##
File path: src/tir/transforms/hoist_if_then_else.cc
##
@@ -93,11 +112,33 @@ using HoistForIfTuple = std::tuple;
  *if (likely(j > 2))
  *A[i+j+k] = B[i+j+k]
  *
+ *
+ * This pass also does hoisting for Block scope variables.

Review comment:
   Yes. It is referring to Attr nodes.
   "Block scope variables" is just for my internal mapping :)
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-24 Thread GitBox


comaniac commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r475776056



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -99,6 +95,24 @@ def _get_remote(cls):
 
 return device
 
+@classmethod
+def load(cls, file_name):
+"""Load test config
+
+Load the test configuration by looking for file_name relative
+to the test_arm_compute_lib directory.
+"""
+location = os.path.realpath(os.path.join(os.getcwd(), 
os.path.dirname(__file__)))
+with open(os.path.join(location, file_name), mode="r") as config:

Review comment:
   Make sure the file exists.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] videetparekh opened a new issue #6330: [WASM] Cannot find global function runtime.SystemLib

2020-08-24 Thread GitBox


videetparekh opened a new issue #6330:
URL: https://github.com/apache/incubator-tvm/issues/6330


   ### Issue
   While trying to load a wasm file into TVM web runtime, I get: 
   
   ```
   Error: Cannot find global function runtime.SystemLib
   Error: Cannot find global function runtime.SystemLib
   
   /path/to/wasm/tvmjs_runtime.wasi.js:3
   var Module=typeof Module!=="undefined"?Module:{};var __wasmLib={};function 
__wasmLibInstantiateWasm(imports,successCallback){__wasmLib.imports=imports;__wasmLib.successCallback=successCallback}function
 
__wasmLibStart(wasmInstance){__wasmLib.successCallback(wasmInstance)}__wasmLib.start=__wasmLibStart;var
 
Module={"instantiateWasm":__wasmLibInstantiateWasm,"wasmLibraryProvider":__wasmLib};var
 moduleOverrides={};var key;for(key in 
Module){if(Module.hasOwnProperty(key)){moduleOverrides[key]=Module[key]}}var 
arguments_=[];var thisProgram="./this.program";var 
quit_=function(status,toThrow){throw toThrow};var ENVIRONMENT_IS_WEB=false;var 
ENVIRONMENT_IS_WORKER=false;var ENVIRONMENT_IS_NODE=false;var 
ENVIRONMENT_HAS_NODE=false;var 
ENVIRONMENT_IS_SHELL=false;ENVIRONMENT_IS_WEB=typeof 
window==="object";ENVIRONMENT_IS_WORKER=typeof 
importScripts==="function";ENVIRONMENT_HAS_NODE=typeof process==="object
   abort(Error: Cannot find global function runtime.SystemLib). Build with -s 
ASSERTIONS=1 for more info.
   (Use `node --trace-uncaught ...` to show where the exception was thrown)
   ```
   
   ### Steps to Reproduce
   
   Build Steps:
   `make` in `tvm/web` gives me the following but proceeds to generate the 
`dist` folder:
   ```sh
   warning: undefined symbol: TVMWasmPackedCFunc
   warning: undefined symbol: TVMWasmPackedCFuncFinalizer
   python3 emcc/decorate_as_wasi.py dist/wasm/tvmjs_runtime.js 
dist/wasm/tvmjs_runtime.wasi.js
   ```
   
   Compiling a model to WASM:
   ```py
   graph, lib, params = tvm.relay.build(relay_function, target='llvm 
-mtriple=wasm32-unknown-emscripten -system-lib', params=relay_function_params)
   lib.save('path/to/modelLibrary.bc')
   emcc.create_tvmjs_wasm('path/to/modelLibrary.js', 'path/to/modelLibrary.bc', 
options=["-s", "USE_GLFW=3"])
   ```
   
   The model compiles as expected and generates a wasm file. The runtime fails while loading the system library, as follows.
   ```js
   const lrejs = require("../runtime_dist");
   const wasmPath = lrejs.wasmPath();
   const EmccWASI = require(path.join(wasmPath, "tvmjs_runtime.wasi.js"));
   var WasiObj = new EmccWASI()
   const wasmSource = fs.readFileSync('path/to/modelLibrary.wasm')
   const lre = await lrejs.instantiate(wasmSource, WasiObj)
   var ctx = lre.cpu(0)
   const sysLib = lre.systemLib() // Fails
   const executor = lre.createGraphRuntime(graphJson, sysLib, ctx)
   executor.loadParams(paramsBinary)
   ```
   ### Env
   EMSDK: 1.39.0
   LLVM: release/10.x _(LLVM 12 breaks `make` in `tvm/web`)_
   TVM Commit ID: 5046ff2



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] MarisaKirisame commented on pull request #6326: [Relay] Make check stricter by using Feature.

2020-08-24 Thread GitBox


MarisaKirisame commented on pull request #6326:
URL: https://github.com/apache/incubator-tvm/pull/6326#issuecomment-679258085


   @junrushao1994 it is. I am trying to fix the CI, though.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


mbrookhart commented on a change in pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#discussion_r475751392



##
File path: src/relay/op/tensor/transform.cc
##
@@ -2069,12 +2070,11 @@ bool StridedSliceRel(const Array& types, int 
num_inputs, const Attrs& attr
   oshape[i] = tir::make_const(dshape[i].dtype(), (slice_range + step - 1) 
/ step);
 }
   } else {
-for (int64_t i = 0; i < num_axis; ++i) {
-  oshape[i] = Any();
-}
+CHECK(param->begin) << "strided_slice recieved invalid begin";

Review comment:
   If this check fails, it will be a nullptr. Should I print the null?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6316: Dynamic Strided Slice

2020-08-24 Thread GitBox


mbrookhart commented on a change in pull request #6316:
URL: https://github.com/apache/incubator-tvm/pull/6316#discussion_r475750902



##
File path: python/tvm/relay/op/transform.py
##
@@ -827,13 +828,17 @@ def strided_slice(data, begin, end, strides=None, 
slice_mode="end"):
 ret : relay.Expr
 The computed result.
 """
-strides = strides or const([1], dtype="int32")
-if isinstance(begin, (tuple, list)):
-begin = const(list(begin))
-if isinstance(end, (tuple, list)):
-end = const(list(end))
-if isinstance(strides, (tuple, list)):
-strides = const(list(strides))
+strides = strides or [1]
+if (isinstance(begin, Expr) or isinstance(end, Expr) or 
isinstance(strides, Expr)):
+if isinstance(begin, (tuple, list)):
+begin = const(list(begin))
+if isinstance(end, (tuple, list)):
+end = const(list(end))
+if isinstance(strides, (tuple, list)):
+strides = const(list(strides))
+normalized_begin = _make.where(begin < cast_like(const(0), begin),

Review comment:
   Hmm, yeah, seems a little odd to produce a subgraph as part of the 
constructor of an op, which is why I put it here. That being said, this makes 
the op less useful from other frontends, so...
   
   Any other votes?
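   For context, the `where` expression above maps negative indices to their positive equivalents; a NumPy sketch of the intended semantics (illustrative only, not the Relay implementation):
   
   ```py
   import numpy as np
   
   def normalize_begin(begin, shape):
       # A negative index counts from the end of the axis, so map it to
       # its positive equivalent: where(begin < 0, begin + shape, begin).
       begin = np.asarray(begin)
       shape = np.asarray(shape)
       return np.where(begin < 0, begin + shape, begin)
   
   print(normalize_begin([-1, 2], [4, 5]))  # -> [3 2]
   ```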





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6326: [Relay] Make check stricter by using Feature.

2020-08-24 Thread GitBox


junrushao1994 commented on pull request #6326:
URL: https://github.com/apache/incubator-tvm/pull/6326#issuecomment-679209084


   Please ping me when this PR is ready for review :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hgt312 commented on issue #3879: Cast from float16 to uint8 was not supported by CUDA

2020-08-24 Thread GitBox


hgt312 commented on issue #3879:
URL: https://github.com/apache/incubator-tvm/issues/3879#issuecomment-679203498


   What is the current status of this issue? Is someone working on it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jainris commented on a change in pull request #6303: [Relay/TOPI][TFLite] Implemented MATRIX_SET_DIAG Operator for Relay/TOPI and TFLite Frontend.

2020-08-24 Thread GitBox


jainris commented on a change in pull request #6303:
URL: https://github.com/apache/incubator-tvm/pull/6303#discussion_r475695170



##
File path: src/relay/op/tensor/transform.cc
##
@@ -3093,5 +3093,55 @@ RELAY_REGISTER_OP("sparse_to_dense")
 .set_attr("FInferCorrectLayout", 
ElemwiseArbitraryLayout)
 .set_attr("FTVMCompute", SparseToDenseCompute);
 
+// relay.matrix_set_diag
+bool MatrixSetDiagRel(const Array& types, int num_inputs, const Attrs& 
attrs,
+  const TypeReporter& reporter) {
+  // `types` contains: [input, diagonal, result]
+  CHECK_EQ(types.size(), 3);
+
+  const auto* input = types[0].as();
+  CHECK(input);
+
+  const auto* diagonal = types[1].as();
+  CHECK(diagonal);
+
+  int d_ndims = diagonal->shape.size();
+  for (int i = 0; i < d_ndims - 1; i++) {
+reporter->AssertEQ(input->shape[i], diagonal->shape[i]);
+  }
+  auto min_dim = if_then_else(input->shape[d_ndims - 1] >= 
input->shape[d_ndims],
+  input->shape[d_ndims], input->shape[d_ndims - 
1]);
+  reporter->Assert(diagonal->shape[d_ndims - 1] >= min_dim);
+
+  reporter->Assign(types[2], TensorType(input->shape, input->dtype));
+  return true;
+}
+
+Array MatrixSetDiagCompute(const Attrs& attrs, const 
Array& inputs,
+   const Type& out_type) {
+  return Array{topi::matrix_set_diag(inputs[0], inputs[1])};
+}
+
+Expr MakeMatrixSetDiag(Expr input, Expr diagonal) {
+  static const Op& op = Op::Get("matrix_set_diag");
+  return Call(op, {input, diagonal}, Attrs(), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op._make.matrix_set_diag").set_body_typed(MakeMatrixSetDiag);
+
+RELAY_REGISTER_OP("matrix_set_diag")
+.describe(
+R"code(Returns a tensor with the diagonal of input tensor replaced 
with the provided diagonal values.
+**input** Input tensor.
+**diagonal** Values to be filled in the diagonal.
+)code" TVM_ADD_FILELINE)
+.set_num_inputs(2)
+.add_argument("input", "Tensor", "Input Tensor.")
+.add_argument("diagonal", "Tensor", "Values to be filled in the diagonal.")
+.set_support_level(10)
+.add_type_rel("MatrixSetDiag", MatrixSetDiagRel)
+.set_attr("FTVMCompute", MatrixSetDiagCompute)
+.set_attr("TOpPattern", kBroadcast);

Review comment:
   Thanks for reviewing.
   Changed it to be injective.
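   For anyone following the type relation above, a small NumPy sketch of what `matrix_set_diag` computes (main diagonal only, mirroring the op at this point in the PR; not TVM code):
   
   ```py
   import numpy as np
   
   def matrix_set_diag(data, diagonal):
       # Replace the main diagonal of the innermost 2-D matrices with `diagonal`.
       out = data.copy()
       k = min(data.shape[-2], data.shape[-1])
       idx = np.arange(k)
       out[..., idx, idx] = diagonal[..., :k]
       return out
   
   x = np.zeros((3, 3), dtype="int32")
   print(matrix_set_diag(x, np.array([1, 2, 3], dtype="int32")))
   # [[1 0 0]
   #  [0 2 0]
   #  [0 0 3]]
   ```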





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] giuseros commented on a change in pull request #6117: Use auto-tuner to improve conv2d_gemm performance

2020-08-24 Thread GitBox


giuseros commented on a change in pull request #6117:
URL: https://github.com/apache/incubator-tvm/pull/6117#discussion_r475669627



##
File path: python/tvm/topi/arm_cpu/conv2d_int8.py
##
@@ -142,6 +142,7 @@ def schedule_conv2d_NHWC_quantized(cfg, outs):
 n, h, w, c = out.op.axis
 outer, inner = s[out].split(c, 4)
 s[out].vectorize(inner)
+s[out].parallel(h)

Review comment:
   I also fused the batch and first outer dimensions in all `conv2d_gemm`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6303: [Relay/TOPI][TFLite] Implemented MATRIX_SET_DIAG Operator for Relay/TOPI and TFLite Frontend.

2020-08-24 Thread GitBox


siju-samuel commented on a change in pull request #6303:
URL: https://github.com/apache/incubator-tvm/pull/6303#discussion_r475647341



##
File path: src/relay/op/tensor/transform.cc
##
@@ -3093,5 +3093,55 @@ RELAY_REGISTER_OP("sparse_to_dense")
 .set_attr("FInferCorrectLayout", 
ElemwiseArbitraryLayout)
 .set_attr("FTVMCompute", SparseToDenseCompute);
 
+// relay.matrix_set_diag
+bool MatrixSetDiagRel(const Array& types, int num_inputs, const Attrs& 
attrs,
+  const TypeReporter& reporter) {
+  // `types` contains: [input, diagonal, result]
+  CHECK_EQ(types.size(), 3);
+
+  const auto* input = types[0].as();
+  CHECK(input);
+
+  const auto* diagonal = types[1].as();
+  CHECK(diagonal);
+
+  int d_ndims = diagonal->shape.size();
+  for (int i = 0; i < d_ndims - 1; i++) {
+reporter->AssertEQ(input->shape[i], diagonal->shape[i]);
+  }
+  auto min_dim = if_then_else(input->shape[d_ndims - 1] >= 
input->shape[d_ndims],
+  input->shape[d_ndims], input->shape[d_ndims - 
1]);
+  reporter->Assert(diagonal->shape[d_ndims - 1] >= min_dim);
+
+  reporter->Assign(types[2], TensorType(input->shape, input->dtype));
+  return true;
+}
+
+Array MatrixSetDiagCompute(const Attrs& attrs, const 
Array& inputs,
+   const Type& out_type) {
+  return Array{topi::matrix_set_diag(inputs[0], inputs[1])};
+}
+
+Expr MakeMatrixSetDiag(Expr input, Expr diagonal) {
+  static const Op& op = Op::Get("matrix_set_diag");
+  return Call(op, {input, diagonal}, Attrs(), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op._make.matrix_set_diag").set_body_typed(MakeMatrixSetDiag);
+
+RELAY_REGISTER_OP("matrix_set_diag")
+.describe(
+R"code(Returns a tensor with the diagonal of input tensor replaced 
with the provided diagonal values.
+**input** Input tensor.
+**diagonal** Values to be filled in the diagonal.
+)code" TVM_ADD_FILELINE)
+.set_num_inputs(2)
+.add_argument("input", "Tensor", "Input Tensor.")
+.add_argument("diagonal", "Tensor", "Values to be filled in the diagonal.")
+.set_support_level(10)
+.add_type_rel("MatrixSetDiag", MatrixSetDiagRel)
+.set_attr("FTVMCompute", MatrixSetDiagCompute)
+.set_attr("TOpPattern", kBroadcast);

Review comment:
   Why kBroadcast? I think it should be injective.

##
File path: include/tvm/topi/transform.h
##
@@ -1511,6 +1511,35 @@ inline Tensor sparse_to_dense(const Tensor& 
sparse_indices, const Array
   name, tag);
 }
 
+/*!
+ * \brief Returns a tensor with the diagonal of input tensor replaced with the 
provided diagonal.
+ * \param input input tensor.
+ * \param diagonal values to be filled in the diagonal.
+ * \param name output tensor name.
+ * \param tag output tensor tag.
+ * \return new tensor with given diagonal values.
+ */
+inline Tensor matrix_set_diag(const Tensor& input, const Tensor& diagonal,

Review comment:
   A suggestion: it may be good if we can support `alignment` and `k` (offset) similar to `MatrixSetDiagV3` in TF. Then we can support the TensorFlow op directly as well.
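   For reference, this is roughly the TensorFlow surface being suggested; `tf.linalg.set_diag` exposes both knobs (shown only to illustrate the semantics, assuming a recent TF build where `k` and `align` are available):
   
   ```py
   import tensorflow as tf
   
   x = tf.zeros((3, 4))
   d = tf.constant([1.0, 2.0, 3.0])
   # k selects which diagonal is written (0 = main, +1 = super-, -1 = subdiagonal);
   # align controls how shorter diagonal bands are padded when k is a range.
   y = tf.linalg.set_diag(x, d, k=0, align="RIGHT_LEFT")
   ```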

##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -2652,6 +2652,77 @@ def test_forward_reverse_v2():
 _test_reverse_v2((5, 6, 4, 2), np.array([2], dtype='int32'), dtype)
 
 
+###
+# MATRIX_SET_DIAG
+# ---
+
+def _test_matrix_set_diag(input_shape, input_type, quantized=False):
+""" One iteration of MATRIX_SET_DIAG """
+with tf.Graph().as_default():
+diagonal_shape = list(input_shape[:-2])
+diagonal_shape.append(min(input_shape[-2], input_shape[-1]))
+
+if quantized:
+# ignoring input_type as quantized requires uint8
+input = np.random.uniform(0, 256, input_shape).astype('uint8')
+in_input = tf.placeholder(dtype='float32', shape=input.shape, 
name="input")
+inq_input = tf.quantization.fake_quant_with_min_max_args(
+in_input,
+min=-100,
+max=100,
+name="q_input")
+
+diagonal = np.random.uniform(0, 256, 
diagonal_shape).astype('uint8')
+in_diagonal = tf.placeholder(dtype='float32', 
shape=diagonal.shape, name="diagonal")
+inq_diagonal = tf.quantization.fake_quant_with_min_max_args(
+in_diagonal,
+min=-100,
+max=100,
+name="q_diagonal")
+
+input_range = {'q_input': (-100, 100), 'q_diagonal': (-100, 100)}
+
+out = array_ops.matrix_set_diag(inq_input, inq_diagonal)
+out = tf.quantization.fake_quant_with_min_max_args(
+out,
+min=-100,
+max=100,
+name="out")
+
+compare_tflite_with_tvm(
+[input, diagonal],
+["q_input", "q_diagonal"

[GitHub] [incubator-tvm] jainris commented on a change in pull request #6303: [Relay/TOPI][TFLite] Implemented MATRIX_SET_DIAG Operator for Relay/TOPI and TFLite Frontend.

2020-08-24 Thread GitBox


jainris commented on a change in pull request #6303:
URL: https://github.com/apache/incubator-tvm/pull/6303#discussion_r475643471



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -2652,6 +2652,77 @@ def test_forward_reverse_v2():
 _test_reverse_v2((5, 6, 4, 2), np.array([2], dtype='int32'), dtype)
 
 
+###
+# MATRIX_SET_DIAG
+# ---
+
+def _test_matrix_set_diag(input_shape, input_type, quantized=False):
+""" One iteration of MATRIX_SET_DIAG """
+with tf.Graph().as_default():
+diagonal_shape = list(input_shape[:-2])
+diagonal_shape.append(min(input_shape[-2], input_shape[-1]))

Review comment:
   TFLite MATRIX_SET_DIAG doesn't seem to be a broadcast operator. So, I'll 
change the registration to be injective.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on a change in pull request #6327: [OpFusion] Make the max number of fused ops configurable

2020-08-24 Thread GitBox


masahi commented on a change in pull request #6327:
URL: https://github.com/apache/incubator-tvm/pull/6327#discussion_r475603574



##
File path: src/relay/transforms/fuse_ops.cc
##
@@ -83,6 +83,8 @@ constexpr uint32_t kMaxFusedOps = 256;
 
 static const Op& stop_fusion_op = Op::Get("annotation.stop_fusion");
 
+TVM_REGISTER_PASS_CONFIG_OPTION("relay.max_fuse_depth", Integer);

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6327: [OpFusion] Make the max number of fused ops configurable

2020-08-24 Thread GitBox


tqchen commented on a change in pull request #6327:
URL: https://github.com/apache/incubator-tvm/pull/6327#discussion_r475599518



##
File path: src/relay/transforms/fuse_ops.cc
##
@@ -83,6 +83,8 @@ constexpr uint32_t kMaxFusedOps = 256;
 
 static const Op& stop_fusion_op = Op::Get("annotation.stop_fusion");
 
+TVM_REGISTER_PASS_CONFIG_OPTION("relay.max_fuse_depth", Integer);

Review comment:
   Consider using the naming convention: `relay.PassName.option_name`
   
   `relay.FuseOps.max_depth`
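   For illustration, a pass-config option named this way would be set from Python like so, assuming the option lands under the suggested name (a sketch, not tested against this PR):
   
   ```py
   import tvm
   from tvm import relay
   
   x = relay.var("x", shape=(1, 8))
   mod = tvm.IRModule.from_expr(relay.nn.relu(x + x))
   
   # Cap fusion depth through the pass-config option.
   with tvm.transform.PassContext(opt_level=3,
                                  config={"relay.FuseOps.max_depth": 10}):
       lib = relay.build(mod, target="llvm")
   ```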





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #6314: [Relay] Support for PyTorch Non-Maximum Suppression

2020-08-24 Thread GitBox


masahi commented on pull request #6314:
URL: https://github.com/apache/incubator-tvm/pull/6314#issuecomment-679122047


   Thanks @yongwww @kevinthesun @zhiics 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi merged pull request #6314: [Relay] Support for PyTorch Non-Maximum Suppression

2020-08-24 Thread GitBox


masahi merged pull request #6314:
URL: https://github.com/apache/incubator-tvm/pull/6314


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (b1f8f15 -> 37cbbd7)

2020-08-24 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b1f8f15  [Ansor][AutoTVM v2.0] Phase 2: Basic GPU Sketch Search Policy 
(#6269)
 add 37cbbd7  [Relay] Support for PyTorch Non-Maximum Suppression (#6314)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py  | 50 ++-
 tests/python/frontend/pytorch/test_forward.py | 59 ++-
 2 files changed, 97 insertions(+), 12 deletions(-)



[GitHub] [incubator-tvm] leandron removed a comment on pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-24 Thread GitBox


leandron removed a comment on pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#issuecomment-679113518


   I was thinking about something along the lines of:
   ```
   Device.load("some_file.json") # populates the class
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-24 Thread GitBox


leandron commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r475585124



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -25,15 +26,56 @@
 from tvm.contrib import graph_runtime
 from tvm.relay.op.contrib import arm_compute_lib
 from tvm.contrib import util
+from tvm.autotvm.measure import request_remote
 
 
 class Device:
-"""Adjust the following settings to connect to and use a remote device for 
tests."""
-use_remote = False
-target = "llvm -mtriple=aarch64-linux-gnu -mattr=+neon"
-# Enable cross compilation when connecting a remote device from a non-arm 
platform.
-cross_compile = None
-# cross_compile = "aarch64-linux-gnu-g++"
+"""
+Configuration for Arm Compute Library tests.
+
+Check tests/python/contrib/arm_compute_lib/ for the presence of an 
test_config.json file.
+This file can be used to override the default configuration here which 
will attempt to run the Arm
+Compute Library runtime tests locally if the runtime is available. 
Changing the configuration
+will allow these runtime tests to be offloaded to a remote Arm device via 
a tracker for example.
+
+Notes
+-
+The test configuration will be loaded once when the class is 
created. If the configuration
+changes between tests, any changes will not be picked up.
+
+Parameters
+--
+device : RPCSession
+Allows tests to connect to and use remote device.
+
+Attributes
+--
+connection_type : str
+Details the type of RPC connection to use. Options:
+local - Use the local device,
+tracker - Connect to a tracker to request a remote device,
+remote - Connect to a remote device directly.
+host : str
+Specify IP address or hostname of remote target.
+port : int
+Specify port number of remote target.
+target : str
+The compilation target.
+device_key : str
+The device key of the remote target. Use when connecting to a remote 
device via a tracker.
+cross_compile : str
+Specify path to cross compiler to use when connecting a remote device 
from a non-arm platform.
+"""
+_location = os.path.realpath(os.path.join(os.getcwd(), 
os.path.dirname(__file__)))
+with open(os.path.join(_location, "test_config.json"), mode="r") as config:
+_test_config = json.load(config)
+
+connection_type = _test_config["connection_type"]
+host = _test_config["host"]
+port = _test_config["port"]
+target = _test_config["target"]
+device_key = _test_config.get("device_key") or ""
+cross_compile = _test_config.get("cross_compile") or ""

Review comment:
   I was thinking about something along the lines of:
   ```
   Device.load("some_file.json") # populates the class
   ```
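   A minimal sketch of that pattern, where the class attributes are populated from a file on demand instead of at import time (names hypothetical):
   
   ```py
   import json
   import os
   
   class Device:
       connection_type = "local"   # defaults, overridden by load()
       host = "localhost"
   
       @classmethod
       def load(cls, file_name):
           """Populate class attributes from a JSON test config."""
           location = os.path.dirname(os.path.realpath(__file__))
           with open(os.path.join(location, file_name), mode="r") as f:
               for key, value in json.load(f).items():
                   setattr(cls, key, value)
   
   Device.load("test_config.json")  # populates the class, as suggested above
   ```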





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leandron commented on pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-24 Thread GitBox


leandron commented on pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#issuecomment-679113518


   I was thinking about something along the lines of:
   ```
   Device.load("some_file.json") # populates the class
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



