[GitHub] [tvm] comaniac commented on pull request #7145: [AutoScheduler] Improve SearchTask and ComputeDAG serialization

2020-12-22 Thread GitBox


comaniac commented on pull request #7145:
URL: https://github.com/apache/tvm/pull/7145#issuecomment-749989310


   Per offline discussion, we now only support (de)serialization of a ComputeDAG constructed by compute, because this limitation largely simplifies the design.
   
   @merrymercy @jcf94 PTAL.
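
   A minimal sketch of the intended usage, assuming the pickle support this PR adds and the public `auto_scheduler.ComputeDAG` constructor (illustrative, not taken from the PR):

   ```python
   # Hedged sketch: (de)serializing a ComputeDAG built directly from te.compute.
   import pickle
   from tvm import te, auto_scheduler

   A = te.placeholder((128, 128), name="A")
   B = te.compute((128, 128), lambda i, j: A[i, j] * 2.0, name="B")
   dag = auto_scheduler.ComputeDAG([A, B])  # constructed by compute, as required

   blob = pickle.dumps(dag)   # serialization
   dag2 = pickle.loads(blob)  # deserialization round-trip
   ```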







[GitHub] [tvm] masahi opened a new pull request #7157: [TOPI] GPU sort IR refactor to enable sort by keys

2020-12-22 Thread GitBox


masahi opened a new pull request #7157:
URL: https://github.com/apache/tvm/pull/7157


   







[GitHub] [tvm] jcf94 commented on pull request #7156: [AutoScheduler] Update layout rewrite option setting for measuring

2020-12-22 Thread GitBox


jcf94 commented on pull request #7156:
URL: https://github.com/apache/tvm/pull/7156#issuecomment-749974856


   Will update the log version after #7144 since this PR has modified the log 
structure of SearchTask.







[GitHub] [tvm] jcf94 opened a new pull request #7156: [AutoScheduler] Update layout rewrite option setting for measuring

2020-12-22 Thread GitBox


jcf94 opened a new pull request #7156:
URL: https://github.com/apache/tvm/pull/7156


   AutoScheduler uses a cost model to guide the search.
   
   We now have three options when applying a schedule from AutoScheduler: NO_REWRITE, INSERT_TRANSFORM_STAGE, and REWRITE_FOR_PRE_TRANSFORMED.
   In my tests, if we set REWRITE_FOR_PRE_TRANSFORMED during program measuring, the final schedule tends to perform better in REWRITE_FOR_PRE_TRANSFORMED mode. Though such a schedule also works with the other options, it will not deliver the best performance if we want a kernel with NO_REWRITE.
   
   This PR:
   1. Adds a layout rewrite option to SearchTask, which is passed on to program measuring.
   2. Sets this option to NO_REWRITE by default, and to REWRITE_FOR_PRE_TRANSFORMED when working on an end-to-end network task (see the sketch after this list).
   3. Updates the schedule for the inserted transform stage in option INSERT_TRANSFORM_STAGE, according to `python/tvm/topi/x86/injective.py`.
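
   A minimal sketch of how the new option might be requested, with the parameter and enum names assumed from the description above (hypothetical until the PR lands):

   ```python
   # Hedged sketch: choosing a layout rewrite mode when creating a SearchTask.
   from tvm import auto_scheduler

   task = auto_scheduler.SearchTask(
       func="matmul",            # a registered workload name; hypothetical here
       args=(512, 512, 512, "float32"),
       target="llvm",
       layout_rewrite_option=auto_scheduler.LayoutRewriteOption.NO_REWRITE,
   )
   ```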







[GitHub] [tvm] comaniac commented on pull request #7143: [AutoScheduler] Python based measure callbacks

2020-12-22 Thread GitBox


comaniac commented on pull request #7143:
URL: https://github.com/apache/tvm/pull/7143#issuecomment-749962826


   Per offline discussion, we cast the abstract SearchPolicy to the actual 
instance so that we can pass it to the packed function.
   
   @merrymercy @jcf94 PTAL.
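
   For context, a hedged sketch of the Python-side callback this PR enables; the base-class name and hook signature are assumptions based on the PR title, not verified against the final code:

   ```python
   # Hypothetical sketch: a measure callback implemented in Python.
   from tvm import auto_scheduler

   class LogCosts(auto_scheduler.measure.PythonBasedMeasureCallback):
       def callback(self, policy, inputs, results):
           # `policy` is the concrete SearchPolicy instance mentioned above;
           # costs are assumed to be FloatImm objects with a .value field.
           for res in results:
               print([c.value for c in res.costs])
   ```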







[GitHub] [tvm] comaniac merged pull request #7151: Add a FunctionPattern, remove unused attributes in CallPattern

2020-12-22 Thread GitBox


comaniac merged pull request #7151:
URL: https://github.com/apache/tvm/pull/7151


   







[tvm] branch main updated: Add a FunctionPattern, remove unused attributes in CallPattern (#7151)

2020-12-22 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 4a7503d  Add a FunctionPattern, remove unused attributes in 
CallPattern (#7151)
4a7503d is described below

commit 4a7503d9036ffcd3323959709c92a5e13816fd73
Author: Matthew Brookhart 
AuthorDate: Tue Dec 22 22:56:51 2020 -0700

Add a FunctionPattern, remove unused attributes in CallPattern (#7151)

* Add a FunctionPattern, remove unused attributes in CallPattern

* update docs
---
 docs/langref/relay_pattern.rst| 19 
 include/tvm/relay/dataflow_pattern.h  | 69 +--
 include/tvm/relay/dataflow_pattern_functor.h  |  3 ++
 python/tvm/relay/dataflow_pattern/__init__.py | 34 -
 src/relay/ir/dataflow_matcher.cc  | 42 
 src/relay/ir/dataflow_pattern.cc  | 30 
 src/relay/ir/dataflow_pattern_functor.cc  |  7 +++
 src/relay/ir/indexed_graph.cc |  7 +++
 src/relay/transforms/simplify_expr.cc |  2 +-
 tests/python/relay/test_dataflow_pattern.py   | 61 +++
 10 files changed, 220 insertions(+), 54 deletions(-)

diff --git a/docs/langref/relay_pattern.rst b/docs/langref/relay_pattern.rst
index 8b34b76..ff02e50 100644
--- a/docs/langref/relay_pattern.rst
+++ b/docs/langref/relay_pattern.rst
@@ -167,6 +167,19 @@ The next example is matching a pattern of batch_norm -> 
get(0) -> relu. Note tha
 out = relay.nn.relu(tuple_get_item_node)
 pat.match(out)
 
+If we have a pattern that crosses a function boundary, we might want to match 
the Function itself
+
+
+.. code-block:: python
+
+  def test_match_func():
+  x = relay.var("x")
+  y = relay.var("y")
+  wc1 = wildcard()
+  wc2 = wildcard()
+  func_pattern = FunctionPattern([wc1, wc2], wc1 + wc2)
+  assert func_pattern.match(relay.Function([x, y], x + y))
+
 The next example is matching a constant node regarding its values. This is 
useful to check
 if a specific parameter in a subgraph has been bound or not.
 
@@ -283,6 +296,7 @@ The high level design is to introduce a language of 
patterns for now we propose
 | is_tuple_get_item(pattern, index = None)
 | pattern1 `|` pattern2
 | dominates(parent_pattern, path_pattern, child_pattern)
+| FunctionPattern(params, body)
 
The above language then provides a matching interface which can both select sub-graphs and verify that the graph matches the pattern.
 
@@ -332,6 +346,11 @@ Domination
 
Match child pattern, find a match for the parent pattern, ensuring that the child ultimately dominates the parent (i.e., no nodes outside the pattern use outputs of the parent), and that every node between the child and the pattern matches the path pattern.
 
+Function Pattern
+
+
+Match a Function with a body and parameters
+
 Applications
 
 
diff --git a/include/tvm/relay/dataflow_pattern.h 
b/include/tvm/relay/dataflow_pattern.h
index 11ac7e3..909a4fe 100644
--- a/include/tvm/relay/dataflow_pattern.h
+++ b/include/tvm/relay/dataflow_pattern.h
@@ -148,34 +148,9 @@ class CallPatternNode : public DFPatternNode {
   /*! \brief The arguments(inputs) of the call */
  tvm::Array<DFPattern> args;
 
-  /*! \brief The additional attributes */
-  Attrs attrs;
-
-  /*!
-   * \brief The type arguments passed to polymorphic(template) function.
-   *
-   * This is the advance feature that is only used when the function is
-   * polymorphic. It is safe to be ignored in most cases. For example, in the
-   * following code, the type_args of addone call is [int].
-   *
-   * \code
-   *
-   * template
-   * T addone(T a) { return a + 1; }
-   *
-   * void main() {
-   *   int x = addone(10);
-   * }
-   *
-   * \endcode
-   */
-  tvm::Array<Type> type_args;
-
   void VisitAttrs(tvm::AttrVisitor* v) {
 v->Visit("op", &op);
 v->Visit("args", &args);
-v->Visit("attrs", &attrs);
-v->Visit("type_args", &type_args);
   }
 
   static constexpr const char* _type_key = 
"relay.dataflow_pattern.CallPattern";
@@ -184,10 +159,52 @@ class CallPatternNode : public DFPatternNode {
 
 class CallPattern : public DFPattern {
  public:
-  TVM_DLL CallPattern(DFPattern op, Array<DFPattern> args, Attrs attrs, Array<Type> type_args);
+  TVM_DLL CallPattern(DFPattern op, Array<DFPattern> args);
   TVM_DEFINE_OBJECT_REF_METHODS(CallPattern, DFPattern, CallPatternNode);
 };
 
+/*!
+ * \brief Relay Function container
+ * \sa Function
+ */
+class FunctionPatternNode : public DFPatternNode {
+ public:
+  /*! \brief Function parameters */
+  tvm::Array<DFPattern> params;
+  /*!
+   * \brief
+   * The expression which represents the computation of the function,
+   * the expression may reference the parameters, and the type of it
+   * or sub-expressions may reference the ty

[GitHub] [tvm] comaniac commented on pull request #7151: Add a FunctionPattern, remove unused attributes in CallPattern

2020-12-22 Thread GitBox


comaniac commented on pull request #7151:
URL: https://github.com/apache/tvm/pull/7151#issuecomment-749952354


   Thanks @mbrookhart @jwfromm 







[GitHub] [tvm] jwfromm commented on a change in pull request #7146: [CUDA]batch_matmul tensorcore schedule

2020-12-22 Thread GitBox


jwfromm commented on a change in pull request #7146:
URL: https://github.com/apache/tvm/pull/7146#discussion_r547662242



##
File path: tests/python/topi/python/test_topi_batch_matmul_tensorcore.py
##
@@ -0,0 +1,75 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Test code for batch_matmul operator"""
+import numpy as np
+import tvm
+from tvm import te
+from tvm import topi
+import tvm.topi.testing
+from tvm.topi.utils import get_const_tuple
+from tvm.contrib.pickle_memoize import memoize
+
+import tvm.testing
+
+_batch_matmul_implement = {
+"gpu": (topi.cuda.batch_matmul_tensorcore, 
topi.cuda.schedule_batch_matmul_tensorcore),
+}
+
+
+def verify_batch_matmul(x_batch, y_batch, M, N, K):
+x = te.placeholder((x_batch, M, K), name="x")
+y = te.placeholder((y_batch, N, K), name="y")
+dtype = x.dtype

Review comment:
   It may be worth testing other datatypes as well, especially `float16`.
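
   A small sketch of what parameterizing the dtype could look like (illustrative only; names follow the test above):

   ```python
   # Hedged sketch: let the placeholders accept a dtype rather than the default.
   from tvm import te

   def make_matmul_placeholders(x_batch, y_batch, M, N, K, dtype="float16"):
       x = te.placeholder((x_batch, M, K), name="x", dtype=dtype)
       y = te.placeholder((y_batch, N, K), name="y", dtype=dtype)
       return x, y
   ```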









[GitHub] [tvm] jwfromm commented on pull request #7146: [CUDA]batch_matmul tensorcore schedule

2020-12-22 Thread GitBox


jwfromm commented on pull request #7146:
URL: https://github.com/apache/tvm/pull/7146#issuecomment-749941299


   @Meteorix out of curiosity can you share some of your benchmarking results? 
I'd love to know how much faster this performs than cublas.







[GitHub] [tvm] jwfromm commented on a change in pull request #7147: [CUDA][PASS]Legalize tensorcore

2020-12-22 Thread GitBox


jwfromm commented on a change in pull request #7147:
URL: https://github.com/apache/tvm/pull/7147#discussion_r547655745



##
File path: python/tvm/topi/cuda/conv2d_alter_op.py
##
@@ -345,4 +347,49 @@ def _conv2d_legalize(attrs, inputs, arg_types):
 else:
 out = relay.nn.conv2d(data, kernel, **new_attrs)
 return out
+elif data_dtype in ['float16', 'float32']:

Review comment:
   Why are we checking for 'float32'? My understanding is that tensorcores 
only work on smaller datatypes. You should consider looking at some of the 
support TVM has for running int8 and int4 workloads on tensorcores as well.









[GitHub] [tvm] junrushao1994 commented on a change in pull request #7153: [RUNTIME] Add libbacktrace for backtraces with line numbers

2020-12-22 Thread GitBox


junrushao1994 commented on a change in pull request #7153:
URL: https://github.com/apache/tvm/pull/7153#discussion_r547579013



##
File path: include/tvm/target/tag.h
##
@@ -139,7 +139,7 @@ inline TargetTagRegEntry& TargetTagRegEntry::set_name() {
 }
 
 #define TVM_TARGET_TAG_REGISTER_VAR_DEF \
-  static DMLC_ATTRIBUTE_UNUSED ::tvm::TargetTagRegEntry& __make_##TargetTag
+  static __attribute__((unused)) ::tvm::TargetTagRegEntry& __make_##TargetTag

Review comment:
   Let's define `TVM_ATTRIBUTE_UNUSED` instead

##
File path: include/tvm/support/with.h
##
@@ -65,7 +63,7 @@ class With {
 ctx_.EnterWithScope();
   }
   /*! \brief destructor, leaves the scope of the context. */
-  ~With() DMLC_THROW_EXCEPTION { ctx_.ExitWithScope(); }
+  ~With() noexcept(false) { ctx_.ExitWithScope(); }

Review comment:
   Let's define `TVM_THROW_EXCEPTION` instead

##
File path: CMakeLists.txt
##
@@ -526,3 +530,25 @@ if(MSVC)
   target_compile_definitions(tvm_objs PRIVATE -DTVM_EXPORTS)
   target_compile_definitions(tvm_runtime_objs PRIVATE -DTVM_EXPORTS)
 endif()
+
+set(IS_DEBUG_BUILD OFF)

Review comment:
   let's prefix it with TVM_, like `TVM_IS_DEBUG`









[GitHub] [tvm] masahi commented on a change in pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi commented on a change in pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#discussion_r547615181



##
File path: python/tvm/relay/frontend/pytorch_utils.py
##
@@ -25,3 +35,98 @@ def is_version_greater_than(ver):
 return "".join(re.findall(r"(\d+\.)(\d+\.)(\d)", torch.__version__)[0]) > 
"".join(
 re.findall(r"(\d+\.)(\d+\.)(\d)", ver)[0]
 )
+
+
+def batched_nms_pattern(boxes, scores, idxs, iou_threshold):
+"""A pattern to detect batched_nms function in torchvision"""

Review comment:
   Ok, I'll update after the CI finishes. In the meantime, you can have a look at the rewrite test cases in https://github.com/apache/tvm/blob/main/tests/python/relay/test_dataflow_pattern.py#L698-L736 to get an idea of how they work.
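
   For readers unfamiliar with that file, a self-contained toy example of the rewrite idiom used there (not the NMS rewrite from this PR):

   ```python
   # Hedged sketch: a DFPatternCallback that rewrites x + x into x * 2.
   from tvm import relay
   from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, rewrite, wildcard

   class AddToMulRewrite(DFPatternCallback):
       def __init__(self):
           super().__init__()
           self.x = wildcard()
           self.pattern = is_op("add")(self.x, self.x)

       def callback(self, pre, post, node_map):
           x = node_map[self.x][0]
           return relay.multiply(x, relay.const(2.0))

   x = relay.var("x", shape=(4,), dtype="float32")
   out = rewrite(AddToMulRewrite(), relay.add(x, x))  # becomes multiply(x, 2.0)
   ```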









[GitHub] [tvm] masahi commented on a change in pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi commented on a change in pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#discussion_r547566314



##
File path: tests/python/frontend/pytorch/test_object_detection.py
##
@@ -102,38 +105,55 @@ def test_detection_models():
 scripted_model = generate_jit_model(1)
 mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
 
-with tvm.transform.PassContext(opt_level=3, 
disabled_pass=["FoldScaleAxis"]):
-vm_exec = relay.vm.compile(mod, target=target, params=params)
+def compile_and_run_vm(mod, params, data_np):
+with tvm.transform.PassContext(opt_level=3, 
disabled_pass=["FoldScaleAxis"]):
+vm_exec = relay.vm.compile(mod, target=target, params=params)
 
-ctx = tvm.cpu()
-vm = VirtualMachine(vm_exec, ctx)
-data = process_image(img)
-pt_res = scripted_model(data)
-data = data.detach().numpy()
-vm.set_input("main", **{input_name: data})
-tvm_res = vm.run()
+ctx = tvm.context(target, 0)
+vm = VirtualMachine(vm_exec, ctx)
+vm.set_input("main", **{input_name: data_np})
+return vm.run()
 
+data = process_image(img)
+data_np = data.detach().numpy()
+tvm_res = compile_and_run_vm(mod, params, data_np)
 # Note: due to accumulated numerical error, we can't directly compare 
results
 # with pytorch output. Some boxes might have a quite tiny difference in 
score
 # and the order can become different. We just measure how many valid boxes
 # there are for input image.
+pt_res = scripted_model(data)
 pt_scores = pt_res[1].detach().numpy().tolist()
 tvm_scores = tvm_res[1].asnumpy().tolist()
-num_pt_valid_scores = num_tvm_valid_scores = 0
 
-for score in pt_scores:
-if score >= score_threshold:
-num_pt_valid_scores += 1
-else:
-break
+def count_valid_scores(scores):
+    num_valid_scores = 0
+    for score in scores:
+        if score >= score_threshold:
+            num_valid_scores += 1
+        else:
+            break
+    return num_valid_scores
 
-for score in tvm_scores:
-if score >= score_threshold:
-num_tvm_valid_scores += 1
-else:
-break
+num_pt_valid_scores = count_valid_scores(pt_scores)
+num_tvm_valid_scores = count_valid_scores(tvm_scores)
 
 assert num_pt_valid_scores == num_tvm_valid_scores, (
 "Output mismatch: Under score threshold {}, Pytorch has {} valid "
 "boxes while TVM has {}.".format(score_threshold, num_pt_valid_scores, 
num_tvm_valid_scores)
 )
+
+before = mod["main"]
+after = rewrite(NMSRewrite(), before)
+# TODO(masahi): Is there a better way to test if the desired rewrite has 
happened?

Review comment:
   @mbrookhart Any suggestion here? Specifically, I am looking for a search-and-match function.









[GitHub] [tvm] roger-zhao opened a new issue #7155: Potential bug for SearchSpace length

2020-12-22 Thread GitBox


roger-zhao opened a new issue #7155:
URL: https://github.com/apache/tvm/issues/7155


   
https://github.com/apache/tvm/blob/08a69d4f92742c8c526d6a7c2a5805d00f5dc725/python/tvm/autotvm/task/space.py#L839
   
   Here, when `_length` is None, it is initialized to the current search space length. This is fine most of the time; however, if a user/developer gets `__len__` of the search space while the space is empty, an unreasonable `_length` of 1 will be produced (np.prod of an empty list is 1). So it would be better to add an extra condition such as `len(self.space_map.values()) != 0`. :)
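
   A quick illustration of the reported behavior, plus the kind of guard being suggested (the `space_map` lines are hypothetical, mirroring the issue text):

   ```python
   # np.prod over an empty sequence is 1, so an empty space reports length 1.
   import numpy as np

   print(np.prod([]))  # 1.0

   # Hypothetical guard along the lines suggested above:
   # if self._length is None and len(self.space_map.values()) != 0:
   #     self._length = int(np.prod([len(v) for v in self.space_map.values()]))
   ```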







[GitHub] [tvm] codeislife99 commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


codeislife99 commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547597810



##
File path: include/tvm/topi/transform.h
##
@@ -1386,6 +1386,96 @@ inline Array meshgrid(const Array& 
inputs, const std::string& in
   return result;
 }
 
+/*!
+ * \brief Fill Empty rows of a sparse tensor with default value
+ *
+ * \param sparse_indices Indices where values of the dense tensor exist
+ * \param sparse_values Values at the above indices respectively
+ * \param default_value Default value to be used at empty rows
+ * \param dense_shape Dense shape of the sparse tensor
+ * \param name The name of the operation
+ * \param tag The tag to mark the operation
+ *
+ * \return A Tensor whose op member is the SparseFillEmptyRows operation
+ */
+inline Array<Tensor> SparseFillEmptyRows(const Tensor& sparse_indices, const Tensor& sparse_values,
+                                         const Tensor& default_value,
+                                         const Array<PrimExpr>& dense_shape,
+                                         const std::string name = "T_sparse_fill_empty_rows",
+                                         std::string tag = kInjective) {
+  Array<Tensor> result;
+  Array<PrimExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  auto empty_row_indicator =
+      compute(Array<PrimExpr>{dense_shape[0]}, [&](const Array<Var>& indices) {
+        PrimExpr ret = PrimExpr(Bool(1));
+        for (int i = 0; i < GetConstInt(sparse_indices->shape[0]); ++i) {

Review comment:
   I am a beginner in op implementation; do you mind sharing some PR/code examples on how I can achieve the same goal (loop etc.) and set the output shape in `OpRel`?
   Then I will follow them to make this compatible with dynamically shaped input tensors.
   There is also probably no need for the strided_slice to be outside the op implementation if I change it to support dynamic shapes.









[GitHub] [tvm] codeislife99 commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


codeislife99 commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547596393



##
File path: include/tvm/topi/transform.h
##
@@ -1386,6 +1386,96 @@ inline Array meshgrid(const Array& 
inputs, const std::string& in
   return result;
 }
 
+/*!
+ * \brief Fill Empty rows of a sparse tensor with default value
+ *
+ * \param sparse_indices Indices where values of the dense tensor exist
+ * \param sparse_values Values at the above indices respectively
+ * \param default_value Default value to be used at empty rows
+ * \param dense_shape Dense shape of the sparse tensor
+ * \param name The name of the operation
+ * \param tag The tag to mark the operation
+ *
+ * \return A Tensor whose op member is the SparseFillEmptyRows operation
+ */
+inline Array<Tensor> SparseFillEmptyRows(const Tensor& sparse_indices, const Tensor& sparse_values,
+                                         const Tensor& default_value,
+                                         const Array<PrimExpr>& dense_shape,
+                                         const std::string name = "T_sparse_fill_empty_rows",
+                                         std::string tag = kInjective) {
+  Array<Tensor> result;
+  Array<PrimExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  auto empty_row_indicator =
+      compute(Array<PrimExpr>{dense_shape[0]}, [&](const Array<Var>& indices) {
+        PrimExpr ret = PrimExpr(Bool(1));
+        for (int i = 0; i < GetConstInt(sparse_indices->shape[0]); ++i) {

Review comment:
   Yes, there are four sparse ops that we are trying to target explicitly for a customer's TF model.
   These are:
   1. 
[sparse_reshape](https://www.tensorflow.org/api_docs/python/tf/sparse/reshape) 
   2. 
[sparse_segment_sum](https://www.tensorflow.org/api_docs/python/tf/sparse/segment_sum?hl=bn)
   3. 
[sparse_fill_empty_rows](https://www.tensorflow.org/api_docs/python/tf/sparse/fill_empty_rows)
   4. 
[sparse_segment_sum_sqrt_n](https://www.tensorflow.org/api_docs/python/tf/sparse/segment_sqrt_n?hl=bn)









[GitHub] [tvm] kevinthesun commented on a change in pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


kevinthesun commented on a change in pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#discussion_r547594012



##
File path: python/tvm/relay/frontend/pytorch_utils.py
##
@@ -25,3 +35,98 @@ def is_version_greater_than(ver):
 return "".join(re.findall(r"(\d+\.)(\d+\.)(\d)", torch.__version__)[0]) > 
"".join(
 re.findall(r"(\d+\.)(\d+\.)(\d)", ver)[0]
 )
+
+
+def batched_nms_pattern(boxes, scores, idxs, iou_threshold):
+"""A pattern to detect batched_nms function in torchvision"""

Review comment:
   Can we have more comments about this pattern matching processing? I'm a 
bit confused of how class id is restored.









[GitHub] [tvm] jcf94 commented on a change in pull request #7145: [AutoScheduler] Improve SearchTask and ComputeDAG serialization

2020-12-22 Thread GitBox


jcf94 commented on a change in pull request #7145:
URL: https://github.com/apache/tvm/pull/7145#discussion_r547593358



##
File path: python/tvm/auto_scheduler/search_task.py
##
@@ -221,10 +221,6 @@ def __init__(
 target_host = Target(target_host)
 
 self.dag = compute_dag

Review comment:
   Oh, thanks! Yeah, I finally get your point. Let's have a discussion in the meeting.









[GitHub] [tvm] masahi commented on pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi commented on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-749860080


   I should mention that this rewrite is not run by default, so there is no 
perf risk.







[GitHub] [tvm] zhiics commented on pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


zhiics commented on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-749858927


   @masahi I think this is plausible as well, particularly as it is only in the parser. @kevinthesun please help take a look as well. Thanks.







[GitHub] [tvm] masahi edited a comment on pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi edited a comment on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-749858279


   @zhiics Sure, updated the description. Unfortunately I cannot claim that this is a perf improvement. The regression is only 200 us on CPU, so it may just be measurement noise, though.
   
   I have no idea why I'm not getting a good speed up. IOU tests, including memory accesses to boxes, should definitely be reduced. The only additional overhead I can think of is that the input to NMS is one column wider, due to storing class ids.
   
   Performance is not great, but I believe having access to class ids should not be a bad idea...







[GitHub] [tvm] masahi commented on pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi commented on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-749858279


   @zhiics Sure, updated the description. The regression is only 200 us on CPU, so it may just be measurement noise.
   
   I have no idea why I'm not getting a good speed up. IOU tests, including memory accesses to boxes, should definitely be reduced. The only additional overhead I can think of is that the input to NMS is one column wider, due to storing class ids.
   
   Performance is not great, but I believe having access to class ids should not be a bad idea...







[GitHub] [tvm] zhiics commented on pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


zhiics commented on pull request #7154:
URL: https://github.com/apache/tvm/pull/7154#issuecomment-749852977


   @masahi Thanks for the perf improvement. Could you provide the CPU numbers 
as well?







[tvm] branch main updated: [Rust] Impl IsObjectRef for Array (#7138)

2020-12-22 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 08a69d4  [Rust] Impl IsObjectRef for Array (#7138)
08a69d4 is described below

commit 08a69d4f92742c8c526d6a7c2a5805d00f5dc725
Author: Andrew Liu 
AuthorDate: Tue Dec 22 16:21:52 2020 -0800

[Rust] Impl IsObjectRef for Array (#7138)

* impl isobjectref for array

* array test

* cargo fmt
---
 rust/tvm-rt/src/array.rs | 34 --
 1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/rust/tvm-rt/src/array.rs b/rust/tvm-rt/src/array.rs
index 1b0ce83..5abf667 100644
--- a/rust/tvm-rt/src/array.rs
+++ b/rust/tvm-rt/src/array.rs
@@ -45,6 +45,26 @@ external! {
 fn array_size(array: ObjectRef) -> i64;
 }
 
+impl<T: IsObjectRef> IsObjectRef for Array<T> {
+    type Object = Object;
+    fn as_ptr(&self) -> Option<&ObjectPtr<Self::Object>> {
+        self.object.as_ptr()
+    }
+    fn into_ptr(self) -> Option<ObjectPtr<Self::Object>> {
+        self.object.into_ptr()
+    }
+    fn from_ptr(object_ptr: Option<ObjectPtr<Self::Object>>) -> Self {
+        let object_ref = match object_ptr {
+            Some(o) => o.into(),
+            _ => panic!(),
+        };
+        Array {
+            object: object_ref,
+            _data: PhantomData,
+        }
+    }
+}
+
 impl<T: IsObjectRef> Array<T> {
     pub fn from_vec(data: Vec<T>) -> Result<Array<T>> {
         let iter = data.into_iter().map(T::into_arg_value).collect();
@@ -131,8 +151,8 @@ impl<T: IsObjectRef> FromIterator<T> for Array<T> {
 }
 }
 
-impl<T: IsObjectRef> From<Array<T>> for ArgValue<'static> {
-    fn from(array: Array<T>) -> ArgValue<'static> {
+impl<'a, T: IsObjectRef> From<Array<T>> for ArgValue<'a> {
+    fn from(array: Array<T>) -> ArgValue<'a> {
 array.object.into()
 }
 }
@@ -172,6 +192,7 @@ impl<'a, T: IsObjectRef> TryFrom<ArgValue<'a>> for Array<T> {
 mod tests {
 use super::Array;
 use crate::function::Result;
+use crate::object::{IsObjectRef, ObjectRef};
 use crate::string::String;
 
 #[test]
@@ -183,4 +204,13 @@ mod tests {
 assert_eq!(array.get(2)?.to_string(), "baz");
 Ok(())
 }
+
+    #[test]
+    fn downcast() -> Result<()> {
+        let vec: Vec<String> = vec!["foo".into(), "bar".into(), "baz".into()];
+        let array: ObjectRef = ObjectRef::from_ptr(Array::from_vec(vec)?.into_ptr());
+        let array: Array<ObjectRef> = array.downcast::<Array<ObjectRef>>().unwrap();
+        assert_eq!(array.get(1)?.downcast::<String>().unwrap(), "bar");
+        Ok(())
+    }
 }



[GitHub] [tvm] jroesch merged pull request #7138: [Rust] Impl IsObjectRef for Array

2020-12-22 Thread GitBox


jroesch merged pull request #7138:
URL: https://github.com/apache/tvm/pull/7138


   







[GitHub] [tvm] jroesch commented on pull request #7138: [Rust] Impl IsObjectRef for Array

2020-12-22 Thread GitBox


jroesch commented on pull request #7138:
URL: https://github.com/apache/tvm/pull/7138#issuecomment-749847986


   LGTM







[GitHub] [tvm] junrushao1994 edited a comment on pull request #7153: [RUNTIME] Add libbacktrace for backtraces with line numbers

2020-12-22 Thread GitBox


junrushao1994 edited a comment on pull request #7153:
URL: https://github.com/apache/tvm/pull/7153#issuecomment-749842135


   Thank you Tristan for the hard work! Would love to ask some questions just 
for clarification on the forum first :-)
   
   Would you like to also copy your bullet points to the forum as well? Thanks!







[GitHub] [tvm] junrushao1994 commented on pull request #7153: [RUNTIME] Add libbacktrace for backtraces with line numbers

2020-12-22 Thread GitBox


junrushao1994 commented on pull request #7153:
URL: https://github.com/apache/tvm/pull/7153#issuecomment-749842135


   Thank you Tristan for the hard work! Would love to ask some questions just 
for clarification on the forum first :-)







[GitHub] [tvm] masahi opened a new pull request #7154: [Torch] Restore class-aware NMS for detection models by graph rewrite

2020-12-22 Thread GitBox


masahi opened a new pull request #7154:
URL: https://github.com/apache/tvm/pull/7154


   







[GitHub] [tvm] mbrookhart commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


mbrookhart commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547556384



##
File path: include/tvm/topi/transform.h
##
@@ -1386,6 +1386,96 @@ inline Array meshgrid(const Array& 
inputs, const std::string& in
   return result;
 }
 
+/*!
+ * \brief Fill Empty rows of a sparse tensor with default value
+ *
+ * \param sparse_indices Indices where values of the dense tensor exist
+ * \param sparse_values Values at the above indices respectively
+ * \param default_value Default value to be used at empty rows
+ * \param dense_shape Dense shape of the sparse tensor
+ * \param name The name of the operation
+ * \param tag The tag to mark the operation
+ *
+ * \return A Tensor whose op member is the SparseFillEmptyRows operation
+ */
+inline Array<Tensor> SparseFillEmptyRows(const Tensor& sparse_indices, const Tensor& sparse_values,
+                                         const Tensor& default_value,
+                                         const Array<PrimExpr>& dense_shape,
+                                         const std::string name = "T_sparse_fill_empty_rows",
+                                         std::string tag = kInjective) {
+  Array<Tensor> result;
+  Array<PrimExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  auto empty_row_indicator =
+      compute(Array<PrimExpr>{dense_shape[0]}, [&](const Array<Var>& indices) {
+        PrimExpr ret = PrimExpr(Bool(1));
+        for (int i = 0; i < GetConstInt(sparse_indices->shape[0]); ++i) {

Review comment:
   I would like to see this done in a way that supports dynamic shapes; this will crash at compile time if we have a dynamic input.









[GitHub] [tvm] tkonolige opened a new pull request #7153: [RUNTIME] Add libbacktrace for backtraces with line numbers

2020-12-22 Thread GitBox


tkonolige opened a new pull request #7153:
URL: https://github.com/apache/tvm/pull/7153


   - Added libbacktrace to 3rdparty
   - Changed build settings to give absolute paths in debug symbols
   - Move CHECK and LOG to tvm/support/logging.h
   - Rename tvm::Error to tvm::CompileError
   - Create new tvm::Error that contains structured information including a 
backtrace.
   - Replace dmlc::Error with new tvm::Error.
   - Unify dmlc headers to `include/tvm/support/dmlc.h` and hide LOG and CHECK 
macros from dmlc.
   - Rename ICHECK, CHECK, and LOG to TVM_ICHECK, TVM_CHECK, TVM_LOG.
   
   This branch is not up to date with main because the changes to CHECK hit a lot of places in the codebase. I'll update it when we are in agreement to merge it.
   
   RFC: https://discuss.tvm.apache.org/t/rfc-line-numbers-in-backtraces/8711
   
   @areusch @junrushao1994 @u99127 







[GitHub] [tvm] tkonolige opened a new pull request #7152: [RUNTIME] Improve error messages for TypedPackedFunc

2020-12-22 Thread GitBox


tkonolige opened a new pull request #7152:
URL: https://github.com/apache/tvm/pull/7152


   This PR is an alternative to #7108; it captures the name of a TypedPackedFunc in a lambda instead of adding it as a field to the class. Right now, naming a TypedPackedFunc is optional, but I suggest we make it mandatory. I've already converted some of the places in the codebase to add a name (in this PR).







[GitHub] [tvm] comaniac commented on pull request #7142: Asymmetric padding in conv2d workload

2020-12-22 Thread GitBox


comaniac commented on pull request #7142:
URL: https://github.com/apache/tvm/pull/7142#issuecomment-749830449


   I see what you meant. How about we simply add a test in `test_topi_conv2d_int8.py` that directly calls `fallback_schedule_cpu_common_int8` with a workload generated by `_get_workload`? Something like:
   ```python
   def test_workload_with_asymmetric_padding():
     cfg = ...
     wkl = _get_workload(...) # with asymmetric padding
     int32_lanes = ...
     num_int8_elements = ...
     fallback_schedule_cpu_common_int8(cfg, wkl, int32_lanes, num_int8_elements)
     assert cfg["tile_ow"] ... # check if tile_ow candidates are factors of the right output width.
   ```
   
   The same applies to the other ops changed by this PR.







[GitHub] [tvm] Wheest commented on pull request #7142: Asymmetric padding in conv2d workload

2020-12-22 Thread GitBox


Wheest commented on pull request #7142:
URL: https://github.com/apache/tvm/pull/7142#issuecomment-749822386


   Thanks, I understand better what a good test for this PR would be: one that 
fails on the current `main` branch but not this PR.
   
   I've been working on devising a test like this, but haven't got one that 
fails yet.  Will keep working on it, but here's my reasoning so far:
   
   AFAIK the workload data is only used in the creation of fallback schedules, e.g. [for creating the `"tile_ow"` parameter for spatial pack convolution](https://github.com/apache/tvm/blob/main/python/tvm/topi/x86/conv2d_avx_common.py#L54).

   So I imagine I would want to create a test that has padding such that 
something like `"tile_ow"` is generated with an incorrect value, creating a 
schedule that causes an invalid transformation.
   
   e.g. `verify_conv2d_nchw(1, 64, 8, 128, 3, 1, (6, 6, 0, 0))`
   
   If we focus on NCHWc convolution, which uses `"tile_ow"` in its schedule (via SplitEntity with `reg` in `_fallback_schedule`, see [`python/tvm/topi/x86/conv2d_avx_common.py#L119`](https://github.com/apache/tvm/blob/main/python/tvm/topi/x86/conv2d_avx_common.py#L119)), there is no value we can set `reg` or `"tile_ow"` to that will cause an invalid transformation, even through silly hardcoding like `out_width = 1` or `ow_chunk, ow_block = s[C].split(ow, factor=1)`.
   
   It always works.  Though perhaps it suffers a performance regression? 
   
   It could be happenstance that none of the conv2d schedules make 
transformations that are rendered invalid by having an incorrect value for the 
output height/width.  That being the case, I'm unsure how I would devise a test 
for this.







[GitHub] [tvm] tkonolige commented on a change in pull request #7107: [Tutorial] Add output validation to sparse tutorial

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7107:
URL: https://github.com/apache/tvm/pull/7107#discussion_r547540336



##
File path: tests/scripts/task_ci_python_setup.sh
##
@@ -31,3 +31,4 @@ set -o pipefail
 echo "Addtiional setup in" ${CI_IMAGE_NAME}
 
 python3 -m pip install --user tlcpack-sphinx-addon==0.1.3 synr==0.2.1
+python3 -m pip install --user tokenizers==0.9.4 transformers==4.0.1

Review comment:
   Ok, I think I have this figured out. This tutorial depends on 
transformers, but because the import was local to the download function (which 
wasn't run), we didn't hit it. That means that we need to install transformers 
in the ci_script (like you had done before). However, this model is pretty 
large (500MB), so I don't think we want to be downloading it for every run on 
CI. Do you have a small sparse model we could use instead? If not, I suggest we leave the final run of the script commented out and let the user run it if they so choose.









[GitHub] [tvm] comaniac commented on pull request #7142: Asymmetric padding in conv2d workload

2020-12-22 Thread GitBox


comaniac commented on pull request #7142:
URL: https://github.com/apache/tvm/pull/7142#issuecomment-749775593


   As you pointed out, the workload doesn't handle asymmetric padding the way the compute implementation does, which looks like a bug to me. However, it has never triggered CI errors, meaning that there are no existing test cases for it. As a result, I'm expecting a test case that requires this PR to pass. For example, `conv2d_avx_1x1` has `out_height = (wkl.height + 2 * HPAD - wkl.hkernel) // HSTR + 1` before this PR. Then you will get a wrong `out_height` if you provide a workload with asymmetric padding without this PR.
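
   A quick numeric check of that point (illustrative values assumed):

   ```python
   # pad = (6, 0) on a height-8 input with a 3x3 kernel, stride 1:
   height, hkernel, HSTR = 8, 3, 1
   pad_top, pad_down = 6, 0

   # Pre-PR formula, which assumes symmetric padding (HPAD = pad_top):
   out_height_wrong = (height + 2 * pad_top - hkernel) // HSTR + 1         # 18
   # Asymmetric-aware formula:
   out_height_right = (height + pad_top + pad_down - hkernel) // HSTR + 1  # 12
   print(out_height_wrong, out_height_right)
   ```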







[GitHub] [tvm] comaniac commented on pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


comaniac commented on pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#issuecomment-749772392


   > @comaniac Every time I select request changes on GitHub, it switches to "suggested changes". Maybe because I am not a committer?
   
   Yes that's the case, but we'll do our best to make sure all comments are 
addressed before merging.







[GitHub] [tvm] Wheest commented on pull request #7142: Asymmetric padding in conv2d workload

2020-12-22 Thread GitBox


Wheest commented on pull request #7142:
URL: https://github.com/apache/tvm/pull/7142#issuecomment-749763887


   Happy to add a test case if necessary, though I'm still getting familiar with the testing infrastructure for TVM.
   
   Existing specific TOPI conv2d implementations are tested with asymmetric 
padding under `tests/python/topi/python/` (e.g. 
[test_topi_conv2d_nchw.py#L227](https://github.com/apache/tvm/blob/main/tests/python/topi/python/test_topi_conv2d_nchw.py#L227)).
   
   This change is just ensuring that data is held in the workload too.  If all 
of the existing tests pass, is that sufficient? 
   







[GitHub] [tvm] mbrookhart opened a new pull request #7151: Add a FunctionPattern, remove unused attributes in CallPattern

2020-12-22 Thread GitBox


mbrookhart opened a new pull request #7151:
URL: https://github.com/apache/tvm/pull/7151


   Thanks!
   
   cc @comaniac 
   







[GitHub] [tvm] tkonolige commented on a change in pull request #7149: Sparse segment sum op

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#discussion_r547482617



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,59 @@ RELAY_REGISTER_OP("meshgrid")
 .set_attr("FTVMCompute", MeshgridCompute)
 .set_attr("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseSegmentSumAttrs);
+
+bool SparseSegmentSumRel(const Array& types, int num_inputs, const 
Attrs& attrs,
+ const TypeReporter& reporter) {
+  // types: [data, indices, segment_ids, result]
+  ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but 
provided " << types.size();

Review comment:
   ```suggestion
 ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but " 
<< types.size() << " were provided.";
   ```

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,59 @@ RELAY_REGISTER_OP("meshgrid")
 .set_attr("FTVMCompute", MeshgridCompute)
 .set_attr("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseSegmentSumAttrs);
+
+bool SparseSegmentSumRel(const Array& types, int num_inputs, const 
Attrs& attrs,
+ const TypeReporter& reporter) {
+  // types: [data, indices, segment_ids, result]
+  ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but 
provided " << types.size();
+  auto data = types[0].as();
+  auto indices = types[1].as();
+  const auto* param = attrs.as();
+  ICHECK(param != nullptr);

Review comment:
   ```suggestion
 ICHECK_NOTNULL(param);
   ```

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,59 @@ RELAY_REGISTER_OP("meshgrid")
 .set_attr("FTVMCompute", MeshgridCompute)
 .set_attr("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseSegmentSumAttrs);
+
+bool SparseSegmentSumRel(const Array& types, int num_inputs, const 
Attrs& attrs,
+ const TypeReporter& reporter) {
+  // types: [data, indices, segment_ids, result]
+  ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but 
provided " << types.size();
+  auto data = types[0].as();
+  auto indices = types[1].as();
+  const auto* param = attrs.as();
+  ICHECK(param != nullptr);
+  Array new_data_shape;
+  new_data_shape.push_back(tvm::max(indices->shape[0], param->num_segments));
+  for (int i = 1; i < static_cast(data->shape.size()); ++i) {
+new_data_shape.push_back(data->shape[i]);
+  }
+  std::vector fields;
+  fields.push_back(TensorType(new_data_shape, data->dtype));
+  fields.push_back(TensorType(Array{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array(fields)));
+  return true;
+}
+
+Array SparseSegmentSumCompute(const Attrs& attrs, const 
Array& inputs,
+  const Type& out_type) {
+  ICHECK_EQ(inputs.size(), 3) << "SparseSegmentSumCompute expects 3 input but 
provided "
+  << inputs.size();
+  const auto* param = attrs.as();
+  ICHECK(param != nullptr);

Review comment:
   ```suggestion
 ICHECK_NOTNULL(param);
   ```

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,59 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseSegmentSumAttrs);
+
+bool SparseSegmentSumRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                         const TypeReporter& reporter) {
+  // types: [data, indices, segment_ids, result]
+  ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but provided " << types.size();
+  auto data = types[0].as<TensorTypeNode>();
+  auto indices = types[1].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseSegmentSumAttrs>();
+  ICHECK(param != nullptr);
+  Array<IndexExpr> new_data_shape;
+  new_data_shape.push_back(tvm::max(indices->shape[0], param->num_segments));
+  for (int i = 1; i < static_cast<int>(data->shape.size()); ++i) {
+    new_data_shape.push_back(data->shape[i]);
+  }
+  std::vector<Type> fields;
+  fields.push_back(TensorType(new_data_shape, data->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseSegmentSumCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                          const Type& out_type) {
+  ICHECK_EQ(inputs.size(), 3) << "SparseSegmentSumCompute expects 3 input but provided "
+                              << inputs.size();

Review comment:
   ```suggestion
  ICHECK_EQ(inputs.size(), 3) << "SparseSegmentSumCompute expects 3 inputs but " << inputs.size() << " were provided.";
   ```

##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,55 @@ def adv_index(inputs):
         Output tensor.
     """
     return _make.adv_index(Tuple(inputs))
+
+
+def sparse_segment_sum(data, indices, segment_ids, num_segments=None):
+"""

[GitHub] [tvm] tkonolige commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547480430



##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,84 @@ def adv_index(inputs):
 Output tensor.
 """
 return _make.adv_index(Tuple(inputs))
+
+
+def sparsefillemptyrows(sparse_indices, sparse_values, dense_shape, default_value):
+    """
+    Fill first column of the empty rows with default values for a sparse array.
+
+    Parameters
+    ----------
+    sparse_indices : relay.Expr
+        A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the
+        number of sparse values and n_dim is the number of dimensions of the dense_shape
+
+    sparse_values : relay.Expr
+        A 1-D tensor[N] containing the sparse values for the sparse indices.
+
+    dense_shape : relay.Expr
+        A list of integers. Shape of the dense output tensor.
+
+    default_value : relay.Expr
+        A 0-D tensor containing the default value for the remaining locations.
+        Defaults to 0.
+
+    Returns
+    -------
+    TupleWrapper with the following four outputs
+
+    new_sparse_indices : relay.Expr
+        A 2-D tensor[N + dense_shape[0], n_dim] of integers containing location of new sparse
+        indices where N is the number of sparse values. It is filled with -1 at to_be_discarded
+        indices.
+
+    empty_row_indicator : relay.Expr
+        A 1-D Boolean tensor[dense_shape[0]] indicating whether the particular row is empty
+
+    new_sparse_values : relay.Expr
+        A 1-D tensor[dense_shape[0]] containing the sparse values for the sparse indices. It is
+        filled with -1 at to_be_discarded indices.
+
+    slice_element_index : relay.Expr
+        A 1-D tensor containing the amount of elements in the sparse_indices and new_sparse_values
+        expression to be sliced in a future op discarding non-useful elements in new_sparse_indices
+        and new_sparse_values
+
+    Examples
+    --------
+
+    .. code-block:: python
+
+        sparse_indices = [[0, 1],
+                          [0, 3],
+                          [2, 0],
+                          [3, 1]]
+        sparse_values = [1, 2, 3, 4]
+        default_value = [10]
+        dense_shape = [5, 6]
+        new_sparse_indices, empty_row_indicator, new_sparse_values, slice_element_index =
+            relay.sparsereshape(
+                sparse_indices,
+                sparse_values,
+                prev_shape,
+                new_shape)
+        new_sparse_indices = [[0, 1],
+                              [0, 3],
+                              [2, 0],
+                              [3, 1],
+                              [1, 0],
+                              [4, 0],
+                              [-1, -1],

Review comment:
   Given you are using it for a single op, I would follow NMS's example and use dynamic shape.
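   For illustration, a minimal sketch of what that could look like on the Python side (the shape and dtype here are assumptions, not taken from this PR; `relay.Any()` is the existing mechanism NMS-style ops use for runtime-determined extents):

   ```python
   from tvm import relay

   # The number of valid rows is only known at runtime, so mark the first
   # dimension as Any() instead of padding to N + dense_shape[0] with -1.
   new_sparse_indices_ty = relay.TensorType((relay.Any(), 2), "int64")
   ```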





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


tkonolige commented on pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#issuecomment-749747412


   @comaniac Every time I select "request changes" on GitHub, it switches to "suggested changes". Maybe it is because I am not a committer?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547476827



##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,83 @@ def adv_index(inputs):
 Output tensor.
 """
 return _make.adv_index(Tuple(inputs))
+
+
+def sparse_fill_empty_rows(sparse_indices, sparse_values, dense_shape, default_value):
+    """
+    Fill first column of the empty rows with default values for a sparse array.
+    It returns a TupleWrapper with four outputs
+
+    Parameters
+    ----------
+    sparse_indices : relay.Expr
+        A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the
+        number of sparse values and n_dim is the number of dimensions of the dense_shape
+
+    sparse_values : relay.Expr
+        A 1-D tensor[N] containing the sparse values for the sparse indices.
+
+    dense_shape : relay.Expr
+        A list of integers. Shape of the dense output tensor.
+
+    default_value : relay.Expr
+        A 0-D tensor containing the default value for the remaining locations.
+        Defaults to 0.
+
+    Returns
+    -------
+    new_sparse_indices : relay.Expr
+        A 2-D tensor[N + dense_shape[0], n_dim] of integers containing location of new sparse
+        indices where N is the number of sparse values. It is filled with -1 at irrelevant indices
+        which will be sliced in a future op discarding non-useful elements. This is done since the
+        real rows of new_sparse_indices depends on the input.
+
+    empty_row_indicator : relay.Expr
+        A 1-D Boolean tensor[dense_shape[0]] indicating whether the particular row is empty
+
+    new_sparse_values : relay.Expr
+        A 1-D tensor[dense_shape[0]] containing the sparse values for the sparse indices. It is
+        filled with -1 at to_be_discarded indices

Review comment:
   Could you update the wording here? `to_be_discarded` refers to nothing.

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,65 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4) << "SparseFillEmptyRowsRel expects 4 arguments but provided "
+                             << types.size();
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  ICHECK(param != nullptr);
+
+  Array<IndexExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(param->dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  fields.push_back(TensorType(sp_ordered_output_shape, sparse_indices->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{param->dense_shape[0]}, tvm::DataType::Bool()));
+  fields.push_back(TensorType(Array<IndexExpr>{sp_ordered_output_shape[0]}, default_value->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseFillEmptyRowsCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                             const Type& out_type) {
+  ICHECK_EQ(inputs.size(), 3) << "SparseFillEmptyRowsCompute expects 3 arguments but provided "
+                              << inputs.size();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  ICHECK(param != nullptr);

Review comment:
   ```suggestion
  ICHECK_NOTNULL(param);
   ```

##
File path: include/tvm/topi/transform.h
##
@@ -1386,6 +1386,96 @@ inline Array<Tensor> meshgrid(const Array<Tensor>& inputs, const std::string& indexing)
   return result;
 }
 
+/*!
+ * \brief Fill Empty rows of a sparse tensor with default value

Review comment:
   ```suggestion
* \brief Fill empty rows of a sparse tensor with default values
   ```

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,65 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4) << "SparseFillEmptyRowsRel expects 4 arguments but provided "
+                             << types.size();
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  ICHECK(param != nullptr);

Re

[GitHub] [tvm] jwfromm commented on pull request #7086: [Relay][Op] Remove reverse attribute from reshape and reverse_reshape operators.

2020-12-22 Thread GitBox


jwfromm commented on pull request #7086:
URL: https://github.com/apache/tvm/pull/7086#issuecomment-749726587


   I think @tqchen and @icemelon9 need to take another look to confirm that 
these changes look good before we can merge. I believe this solution addresses 
their concerns while removing the hidden reverse attribute as elegantly as 
possible.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7149: Sparse segment sum op

2020-12-22 Thread GitBox


comaniac commented on pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#issuecomment-749725438


   The Relay part LGTM. However, since I'm not familiar with the implementation 
of those operators, I would ask @tkonolige and @mbrookhart to review this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


comaniac commented on pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#issuecomment-749724064


   @tkonolige @mbrookhart PTAL and approve or request changes explicitly. 
Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7145: [AutoScheduler][Bugfix] Hardware params is not serialized properly

2020-12-22 Thread GitBox


comaniac commented on a change in pull request #7145:
URL: https://github.com/apache/tvm/pull/7145#discussion_r547452564



##
File path: python/tvm/auto_scheduler/search_task.py
##
@@ -221,10 +221,6 @@ def __init__(
 target_host = Target(target_host)
 
 self.dag = compute_dag

Review comment:
   @jcf94 the root cause is that we have two ComputeDAG constructors in C++: one takes `compute` and the other takes `schedule`. Since we can only register one `auto_scheduler.ComputeDAG` symbol to Python, I made the constructor dispatch as follows. Recall that this is to preserve the stage order if we already have a schedule and wish to use it to create a ComputeDAG.
   
   ```
 if (tensors) {
   return ComputeDAG(tensors.value());
 }
 ICHECK(sch) << "Both tensors and schedule are null";
 return ComputeDAG(sch.value());
   ```
   
   It means we need both `tensors` (named `compute` on the Python side) and `sch` to call the ComputeDAG constructor from Python. If we really want to keep only one, we should keep `sch`, because it carries more information than `tensors`. Since `sch` is not stored in the C++ object, we cannot access it via FFI; the current workaround is a Python-side-only attribute `self.sche`. However, this attribute is wiped out when mapping to the C++ object, so the current solution is keeping a Python object.
   
   Another solution would be keeping `sch` in the ComputeDAG C++ object. I made 
a new commit for this solution. Please comment which one you prefer.
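   
   For readers following the thread, a rough Python-side sketch of the dispatch being described (the function name, the `sche` parameter, and the `_ffi_api.ComputeDAG` call are illustrative assumptions, not the final API):

   ```python
   # Hypothetical sketch: one registered Python symbol dispatching to the two
   # C++ ComputeDAG constructors described above.
   def make_compute_dag(compute=None, sche=None):
       if compute is not None:
           return _ffi_api.ComputeDAG(compute, None)
       assert sche is not None, "Both compute and schedule are None"
       return _ffi_api.ComputeDAG(None, sche)
   ```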





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] codeislife99 opened a new pull request #7150: Tf front end for sparse reshape op

2020-12-22 Thread GitBox


codeislife99 opened a new pull request #7150:
URL: https://github.com/apache/tvm/pull/7150


   This PR builds the TF frontend for the sparse_reshape op (#7125) [TF: https://www.tensorflow.org/api_docs/python/tf/sparse/reshape]
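   
   For reference, a minimal illustration of the TF semantics the frontend needs to match, per the linked docs (values are illustrative):

   ```python
   import tensorflow as tf

   # Two non-zero entries in a 2x3 sparse tensor.
   sp = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[2, 3])
   # Reshaping to 3x2 remaps flat positions 0 and 5 to [0, 0] and [2, 1].
   reshaped = tf.sparse.reshape(sp, [3, 2])
   ```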



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on a change in pull request #7148: [Frontend][Tensorflow] Sparse_Dense Op CSR scheduling issue resolved for Cuda & X86

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7148:
URL: https://github.com/apache/tvm/pull/7148#discussion_r547395050



##
File path: python/tvm/topi/cuda/sparse.py
##
@@ -311,6 +339,8 @@ def sparse_dense_padded(data, weight_data, weight_indices, weight_indptr):
     output : tvm.te.Tensor
         2-D with shape [M, N]
     """
+    # TODO(ANSHUMAN87): Handle for sparse_lhs case too
+    assert not sparse_lhs

Review comment:
   Can you add a message to this one?
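   One way this could look, as a sketch (the message wording is illustrative, not from the PR):

   ```python
   # Hypothetical: explain why the padded kernel bailed out instead of failing silently.
   assert not sparse_lhs, "sparse_dense_padded does not support sparse_lhs yet"
   ```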

##
File path: python/tvm/topi/x86/sparse.py
##
@@ -28,15 +28,17 @@ def schedule_sparse_dense(outs):
 
     def _callback(op):
         simd_width = get_fp32_len()
-        if op.tag == "sparse_dense_csrmm" and op != outs[0].op:
-            (_, v_i) = s[op].op.axis
-            s[op].vectorize(v_i)
-            (y_o, y_i) = s[outs[0].op].split(s[outs[0].op].op.axis[1], 2 * simd_width)
-            s[op].compute_at(s[outs[0]], y_o)
-            s[outs[0].op].vectorize(y_i)
-        if op.tag == "sparse_dense_bsrmm":
+        if op.tag == "sparse_dense_csrmm_v2" or op.tag == "sparse_dense_csrmm_v1":
+            (y_o, y_i) = s[op].split(s[op].op.axis[1], 2)
+            fused = s[op].fuse(s[op].op.axis[0], y_o)
+            s[op].parallel(fused)
+            s[op].vectorize(y_i)
+        elif op.tag == "sparse_dense_bsrmm_v2" or op.tag == "sparse_dense_bsrmm_v1":

Review comment:
   What is the difference between v1 and v2? Can we consolidate the tags? If not, maybe we should rename them to make it clearer what each one does.
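   If the tags do have to stay separate, a small sketch of how the branches could at least be tightened (tag names are the ones already in this diff; whether v1/v2 can share one tag is the open question above):

   ```python
   # Membership tests instead of chained equality comparisons.
   if op.tag in ("sparse_dense_csrmm_v1", "sparse_dense_csrmm_v2"):
       ...
   elif op.tag in ("sparse_dense_bsrmm_v1", "sparse_dense_bsrmm_v2"):
       ...
   ```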





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on a change in pull request #7125: Sparse reshape op

2020-12-22 Thread GitBox


tkonolige commented on a change in pull request #7125:
URL: https://github.com/apache/tvm/pull/7125#discussion_r547393612



##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,52 @@ def adv_index(inputs):
 Output tensor.
 """
 return _make.adv_index(Tuple(inputs))
+
+
+def sparsereshape(sparse_indices, sparse_values, prev_shape, new_shape):
+"""
+Reshape a Sparse Tensor

Review comment:
   The convention is the same as `sparse_to_dense`. However, `sparse_dense` uses CSR and BSR formats. We should probably add documentation to `sparse_to_dense` too.
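   For context, a minimal usage sketch of the `sparse_to_dense` convention in question (values are illustrative; the signature is the one in `python/tvm/relay/op/transform.py`):

   ```python
   from tvm import relay

   indices = relay.const([[0, 0], [1, 2]], dtype="int64")  # COO-style [N, n_dim] indices
   values = relay.const([1, 2], dtype="int64")
   dense = relay.sparse_to_dense(indices, (2, 3), values)  # default_value is 0
   ```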





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7144: [AutoScheduler] Support string processing to records

2020-12-22 Thread GitBox


comaniac commented on a change in pull request #7144:
URL: https://github.com/apache/tvm/pull/7144#discussion_r547386833



##
File path: python/tvm/auto_scheduler/measure_record.py
##
@@ -98,6 +98,46 @@ def __iter__(self):
         yield ret[0], ret[1]  # (input, result)
 
 
+def load_record_from_string(record):
+    """
+    Load the measure record from string.
+
+    Parameters
+    ----------
+    record: str
+        A record string, including the serialized MeasureInput and MeasureResult.
+
+    Returns
+    -------
+    ret: Tuple[MeasureInput, MeasureResult, str]
+        A tuple of MeasureInput, MeasureResult, and the log version.
+    """
+    return _ffi_api.ReadMeasureRecord(record)
+
+
+def dump_record_to_string(inp, res, log_version):
+    """
+    Dump the measure record to a string.
+
+    Parameters
+    ----------
+    inp: MeasureInput
+        The measure input.
+
+    res: MeasureResult
+        The measure result.
+
+    log_version: str
+        The log version of the given record.
+
+    Returns
+    -------
+    ret: str
+        The dumped string.
+    """
+    return _ffi_api.WriteMeasureRecords(inp, res, log_version)

Review comment:
   I was thinking about this too, but if we don't, the load and dump APIs cannot reverse each other, because dump writes the latest version by default. It means if you load a record in v0.3 and write it back, it becomes v0.4.
   
   A more general solution would be to embed the log version in MeasureInput, like AutoTVM does.
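   As a concrete sketch of the round-trip concern, using the two APIs from this diff (`line` is assumed to be one record read from a log file):

   ```python
   # Load a record and write it back with its original version, so that
   # load and dump stay inverses of each other.
   inp, res, version = load_record_from_string(line)
   same_line = dump_record_to_string(inp, res, version)  # a v0.3 record stays v0.3
   ```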





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7107: [Tutorial] Add output validation to sparse tutorial

2020-12-22 Thread GitBox


ANSHUMAN87 commented on a change in pull request #7107:
URL: https://github.com/apache/tvm/pull/7107#discussion_r547302078



##
File path: tests/scripts/task_ci_python_setup.sh
##
@@ -31,3 +31,4 @@ set -o pipefail
 echo "Addtiional setup in" ${CI_IMAGE_NAME}
 
 python3 -m pip install --user tlcpack-sphinx-addon==0.1.3 synr==0.2.1
+python3 -m pip install --user tokenizers==0.9.4 transformers==4.0.1

Review comment:
   @tkonolige: I think the error is reproduced now. Would you please take a look at the latest CI report?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (66744d9 -> bc43ed4)

2020-12-22 Thread mbaret
This is an automated email from the ASF dual-hosted git repository.

mbaret pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 66744d9  [TFLite] pack operation extended with const args (#6984)
 add bc43ed4  [BYOC] [ACL] include_non_call_ops = False (#7121)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/arm_compute_lib.py |  2 +-
 .../contrib/test_arm_compute_lib/test_network.py   | 25 ++
 2 files changed, 26 insertions(+), 1 deletion(-)



[GitHub] [tvm] mbaret merged pull request #7121: [BYOC] [ACL] include_non_call_ops = False

2020-12-22 Thread GitBox


mbaret merged pull request #7121:
URL: https://github.com/apache/tvm/pull/7121


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbaret commented on pull request #7121: [BYOC] [ACL] include_non_call_ops = False

2020-12-22 Thread GitBox


mbaret commented on pull request #7121:
URL: https://github.com/apache/tvm/pull/7121#issuecomment-749529776


   Thanks @d-smirnov 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (968b6f6 -> 66744d9)

2020-12-22 Thread sijusamuel
This is an automated email from the ASF dual-hosted git repository.

sijusamuel pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 968b6f6  Add `is_floating_point()` test and better type support in `verify_model_vm()` (#7134)
 add 66744d9  [TFLite] pack operation extended with const args (#6984)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  8 
 tests/python/frontend/tflite/test_forward.py | 26 ++
 2 files changed, 22 insertions(+), 12 deletions(-)



[GitHub] [tvm] siju-samuel merged pull request #6984: [TFLite] pack operation extended with const args

2020-12-22 Thread GitBox


siju-samuel merged pull request #6984:
URL: https://github.com/apache/tvm/pull/6984


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] codeislife99 commented on pull request #7149: Sparse segment sum op

2020-12-22 Thread GitBox


codeislife99 commented on pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#issuecomment-749495823


   cc: @trevor-m @zhiics @comaniac @anijain2305 PTAL!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] codeislife99 opened a new pull request #7149: Sparse segment sum op

2020-12-22 Thread GitBox


codeislife99 opened a new pull request #7149:
URL: https://github.com/apache/tvm/pull/7149


   This PR adds support for the sparse segment sum op (https://www.tensorflow.org/api_docs/python/tf/sparse/segment_sum?hl=bn) as part of a larger effort to add sparse operator support. (#7125, #7126)
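   
   For reference, a minimal illustration of the TF op's semantics per the linked docs (values are illustrative):

   ```python
   import tensorflow as tf

   data = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
   indices = tf.constant([0, 1])       # which rows of `data` participate
   segment_ids = tf.constant([0, 0])   # both selected rows sum into segment 0
   out = tf.sparse.segment_sum(data, indices, segment_ids)
   # out == [[4.0, 6.0]]
   ```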



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] codeislife99 commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


codeislife99 commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547197106



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,63 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4);
+  ICHECK_EQ(num_inputs, 3);
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);
+
+  Array<IndexExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(param->dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  fields.push_back(TensorType(sp_ordered_output_shape, sparse_indices->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{param->dense_shape[0]}, tvm::DataType::Bool()));
+  fields.push_back(TensorType(Array<IndexExpr>{sp_ordered_output_shape[0]}, default_value->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseFillEmptyRowsCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                             const Type& out_type) {
+  CHECK_EQ(inputs.size(), 3);

Review comment:
   Done.

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,63 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4);

Review comment:
   Done

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,63 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4);
+  ICHECK_EQ(num_inputs, 3);
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);

Review comment:
   Done

##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,84 @@ def adv_index(inputs):
         Output tensor.
     """
     return _make.adv_index(Tuple(inputs))
+
+
+def sparsefillemptyrows(sparse_indices, sparse_values, dense_shape, default_value):
+    """
+    Fill first column of the empty rows with default values for a sparse array.
+
+    Parameters
+    ----------
+    sparse_indices : relay.Expr
+        A 2-D tensor[N, n_dim] of integers containing location of sparse values, where N is the
+        number of sparse values and n_dim is the number of dimensions of the dense_shape
+
+    sparse_values : relay.Expr
+        A 1-D tensor[N] containing the sparse values for the sparse indices.
+
+    dense_shape : relay.Expr
+        A list of integers. Shape of the dense output tensor.
+
+    default_value : relay.Expr
+        A 0-D tensor containing the default value for the remaining locations.
+        Defaults to 0.
+
+    Returns
+    -------
+    TupleWrapper with the following four outputs
+
+    new_sparse_indices : relay.Expr
+        A 2-D tensor[N + dense_shape[0], n_dim] of integers containing location of new sparse
+        indices where N is the number of sparse values. It is filled with -1 at to_be_discarded

Review comment:
   Changed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] codeislife99 commented on a change in pull request #7126: Sparse fill empty rows op

2020-12-22 Thread GitBox


codeislife99 commented on a change in pull request #7126:
URL: https://github.com/apache/tvm/pull/7126#discussion_r547196976



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,63 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4);
+  ICHECK_EQ(num_inputs, 3);
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);
+
+  Array<IndexExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(param->dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  fields.push_back(TensorType(sp_ordered_output_shape, sparse_indices->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{param->dense_shape[0]}, tvm::DataType::Bool()));
+  fields.push_back(TensorType(Array<IndexExpr>{sp_ordered_output_shape[0]}, default_value->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseFillEmptyRowsCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                             const Type& out_type) {
+  CHECK_EQ(inputs.size(), 3);
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);
+  return {topi::SparseFillEmptyRows(inputs[0], inputs[1], inputs[2], param->dense_shape)};
+}
+
+Expr MakeSparseFillEmptyRows(Expr sparse_indices, Expr sparse_values, Expr default_value,
+                             Array<Integer> dense_shape) {
+  auto attrs = make_object<SparseFillEmptyRowsAttrs>();
+  attrs->dense_shape = std::move(dense_shape);
+  static const Op& op = Op::Get("sparsefillemptyrows");
+  return Call(op, {sparse_indices, sparse_values, default_value}, Attrs(attrs), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op._make.sparsefillemptyrows").set_body_typed(MakeSparseFillEmptyRows);
+
+RELAY_REGISTER_OP("sparsefillemptyrows")
+    .describe(R"code(Return twice of normal addition of two tensors.

Review comment:
   Done

##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,63 @@ RELAY_REGISTER_OP("meshgrid")
     .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
     .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseFillEmptyRowsAttrs);
+
+bool SparseFillEmptyRowsRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                            const TypeReporter& reporter) {
+  // types: [ sparse_indices, sparse_values, default_values, result]
+  ICHECK_EQ(types.size(), 4);
+  ICHECK_EQ(num_inputs, 3);
+  std::vector<Type> fields;
+  auto sparse_indices = types[0].as<TensorTypeNode>();
+  auto default_value = types[2].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);
+
+  Array<IndexExpr> sp_ordered_output_shape;
+  sp_ordered_output_shape.push_back(param->dense_shape[0] + sparse_indices->shape[0]);
+  if (sparse_indices->shape.size() > 1) {
+    sp_ordered_output_shape.push_back(sparse_indices->shape[1]);
+  }
+  fields.push_back(TensorType(sp_ordered_output_shape, sparse_indices->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{param->dense_shape[0]}, tvm::DataType::Bool()));
+  fields.push_back(TensorType(Array<IndexExpr>{sp_ordered_output_shape[0]}, default_value->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseFillEmptyRowsCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                             const Type& out_type) {
+  CHECK_EQ(inputs.size(), 3);
+  const auto* param = attrs.as<SparseFillEmptyRowsAttrs>();
+  CHECK(param != nullptr);

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7134: Add `is_floating_point()` test and better type support in `verify_model_vm()`

2020-12-22 Thread GitBox


masahi commented on pull request #7134:
URL: https://github.com/apache/tvm/pull/7134#issuecomment-749440133


   Thanks @TylerADavis 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: Add `is_floating_point()` test and better type support in `verify_model_vm()` (#7134)

2020-12-22 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 968b6f6  Add `is_floating_point()` test and better type support in `verify_model_vm()` (#7134)
968b6f6 is described below

commit 968b6f60da37d85232af6f9a6070d8ff2ed4be8a
Author: Tyler Davis 
AuthorDate: Tue Dec 22 01:20:36 2020 -0800

    Add `is_floating_point()` test and better type support in `verify_model_vm()` (#7134)

* Add div_ and is_floating_point operators

* Add handling of exprs to op, update tests

* add test + supporting functions

* Revert whitespace changes

* Properly assign dtype to random integers

* Reformat with black

* Switched default dtype logic, removed extra line
---
 tests/python/frontend/pytorch/test_forward.py | 85 +--
 1 file changed, 80 insertions(+), 5 deletions(-)

diff --git a/tests/python/frontend/pytorch/test_forward.py b/tests/python/frontend/pytorch/test_forward.py
index 2dda675..74d9c78 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -1889,9 +1889,10 @@ def _get_default_vm_targets():
 return [tgt for (tgt, _) in tvm.testing.enabled_targets()]
 
 
-def verify_script_model(pt_model, ishapes, targets):
+def verify_script_model(pt_model, ishapes, targets, idtype=None):
     script_module = torch.jit.script(pt_model)
-    verify_model_vm(script_module, ishapes, targets=targets)
+
+    verify_model_vm(script_module, ishapes, idtype=idtype, targets=targets)
 
 
 def verify_trace_model(pt_model, idata, targets):
@@ -1900,10 +1901,60 @@ def verify_trace_model(pt_model, idata, targets):
     verify_model_vm(traced_model, ishapes, idata=idata, targets=targets)
 
 
-def verify_model_vm(input_model, ishapes, idtype=torch.float, idata=None, targets=["llvm"]):
+def convert_pt_to_tvm_type(idtype):
+    """ Accepts a pytorch dtype and returns string TVM dtype."""
+    # TVM does not support PyTorch complex dtypes
+    if idtype == torch.float64:
+        curr_dtype = "float64"
+    elif idtype == torch.float32:
+        curr_dtype = "float32"
+    elif idtype == torch.float16:
+        curr_dtype = "float16"
+    elif idtype == torch.bfloat16:
+        curr_dtype = "bfloat16"
+    elif idtype == torch.int64:
+        curr_dtype = "int64"
+    elif idtype == torch.int32:
+        curr_dtype = "int32"
+    elif idtype == torch.int16:
+        curr_dtype = "int16"
+    elif idtype == torch.int8:
+        curr_dtype = "int8"
+    elif idtype == torch.uint8:
+        curr_dtype = "uint8"
+    elif idtype == torch.bool:
+        curr_dtype = "bool"
+    else:
+        raise NotImplementedError("Unsupported dtype: {}".format(idtype))
+    return curr_dtype
+
+
+def verify_model_vm(input_model, ishapes, idtype=None, idata=None, targets=["llvm"]):
+    if not idtype:
+        idtype = torch.float
+
     input_names = ["i{}".format(idx) for idx, ish in enumerate(ishapes)]
-    input_shapes = list(zip(input_names, ishapes))
-    input_data = idata if idata else [torch.randn(shape, dtype=idtype) for shape in ishapes]
+    tvm_dtype = convert_pt_to_tvm_type(idtype)
+    input_dtypes = [tvm_dtype] * len(input_names)
+    input_shapes = list(zip(input_names, list(zip(ishapes, input_dtypes))))
+
+    if idata:
+        input_data = idata
+    # If no input_data provided, generate random data of specified dtype
+    else:
+        if idtype == torch.bool:
+            input_data = [
+                torch.Tensor.bool(torch.randint(low=0, high=2, size=shape)) for shape in ishapes
+            ]
+        # Torch dtype can be float, complex, int, or Bool. Complex not supported,
+        # so if not float or Bool, dtype must be int!
+        elif not idtype.is_floating_point:
+            input_data = [
+                torch.randint(low=0, high=10, size=shape, dtype=idtype) for shape in ishapes
+            ]
+        else:
+            input_data = [torch.randn(shape, dtype=idtype) for shape in ishapes]
+
     # Compile via VM
     mod, params = relay.frontend.from_pytorch(input_model, input_shapes)
 
@@ -2951,6 +3002,29 @@ def test_forward_true_divide():
 
 
 @tvm.testing.uses_gpu
+def test_forward_is_floating_point():
+    torch.set_grad_enabled(False)
+
+    class IsFloatingPoint(Module):
+        def forward(self, arg):
+            # `torch.jit.trace` cannot accept something that outputs
+            # a Bool, so `torch.jit.script` will be used instead
+            return torch.is_floating_point(arg)
+
+    targets = _get_default_vm_targets()
+    verify_script_model(IsFloatingPoint(), [(1, 1)], targets, idtype=torch.float64)
+    verify_script_model(IsFloatingPoint(), [(1, 1)], targets, idtype=torch.float32)
+    verify_script_model(IsFloatingPoint(), [(1, 1)], targets, idtype=torch.float16)
+   

[GitHub] [tvm] masahi merged pull request #7134: Add `is_floating_point()` test and better type support in `verify_model_vm()`

2020-12-22 Thread GitBox


masahi merged pull request #7134:
URL: https://github.com/apache/tvm/pull/7134


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 closed issue #6943: [MaskRCNN][AnnotateTarget]: Fails while reconciling shapes of concatenate inputs

2020-12-22 Thread GitBox


junrushao1994 closed issue #6943:
URL: https://github.com/apache/tvm/issues/6943


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on issue #6943: [MaskRCNN][AnnotateTarget]: Fails while reconciling shapes of concatenate inputs

2020-12-22 Thread GitBox


junrushao1994 commented on issue #6943:
URL: https://github.com/apache/tvm/issues/6943#issuecomment-749415022


   Thanks for reporting. We use the discussion forum 
(https://discuss.tvm.apache.org/) for general usage issues. Please open a 
thread in the forum. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 closed issue #6931: Problems when import BERT model from tensorflow Relay

2020-12-22 Thread GitBox


junrushao1994 closed issue #6931:
URL: https://github.com/apache/tvm/issues/6931


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on issue #6931: Problems when import BERT model from tensorflow Relay

2020-12-22 Thread GitBox


junrushao1994 commented on issue #6931:
URL: https://github.com/apache/tvm/issues/6931#issuecomment-749414663


   Thanks for reporting. We use the discussion forum 
(https://discuss.tvm.apache.org/) for general usage issues. Please open a 
thread in the forum. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on issue #7057: [Bug] RecursionError: maximum recursion depth exceeded while calling a Python object

2020-12-22 Thread GitBox


junrushao1994 commented on issue #7057:
URL: https://github.com/apache/tvm/issues/7057#issuecomment-749413853


   AFAIK it happens when an error is thrown in the constructor. Thanks for 
reporting. We use the discussion forum (https://discuss.tvm.apache.org/) for 
general usage issues. Please open a thread in the forum. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 closed issue #7057: [Bug] RecursionError: maximum recursion depth exceeded while calling a Python object

2020-12-22 Thread GitBox


junrushao1994 closed issue #7057:
URL: https://github.com/apache/tvm/issues/7057


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on issue #7077: tvm_mobilefacenet running error

2020-12-22 Thread GitBox


junrushao1994 commented on issue #7077:
URL: https://github.com/apache/tvm/issues/7077#issuecomment-749413539


   Thanks for reporting. We use the discussion forum 
(https://discuss.tvm.apache.org/) for general usage issues. Please open a 
thread in the forum. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 closed issue #7077: tvm_mobilefacenet running error

2020-12-22 Thread GitBox


junrushao1994 closed issue #7077:
URL: https://github.com/apache/tvm/issues/7077


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org