[GitHub] [tvm] junrushao1994 commented on a change in pull request #7152: [RUNTIME] Improve error messages for TypedPackedFunc

2021-01-26 Thread GitBox


junrushao1994 commented on a change in pull request #7152:
URL: https://github.com/apache/tvm/pull/7152#discussion_r565075157



##
File path: include/tvm/runtime/packed_func.h
##
@@ -562,6 +597,44 @@ class TVMMovableArgValue_ : public TVMPODValue_ {
   TVMArgValue AsArgValue() const { return TVMArgValue(value_, type_code_); }
 };
 
+/*!
+ * \brief Internal auxiliary struct for TypedPackedFunc to indicate a movable argument
+ * with additional context information (function name and argument index) for better
+ * error reporting.
+ *
+ * \sa MovableArgValue_
+ * \note For internal development purpose only.
+ */
+class TVMMovableArgValueWithContext_ {
+ public:
+  /*!
+   * \brief move constructor from another return value.
+   * \param value The other return value.
+   * \param type_code The code associated with the type of the value.
+   * \param arg_index In a function call, this argument is at index arg_index (0-indexed).
+   * \param optional_name Name of the function being called. Can be nullptr if the function
+   * is not named.
+   */
+  TVMMovableArgValueWithContext_(TVMValue value, int type_code, int arg_index,
+                                 const std::string* optional_name)
+      : value_(value, type_code), arg_index_(arg_index), optional_name_(optional_name) {}
+
+  template <typename T>
+  operator T() const {
+    try {
+      return value_;  // implicit conversion happens here
+    } catch (dmlc::Error& e) {
+      LOG(FATAL) << "In function " << (optional_name_ == nullptr ? "<anonymous>" : *optional_name_)
+                 << ": error while converting argument " << arg_index_ << ": " << e.what();
+      throw "";  // never reached, LOG(FATAL) throws, but this silences a warning.

Review comment:
   Looks like we don't have to throw anything haha
   
   ```suggestion
      throw;  // never reached, LOG(FATAL) throws, but this silences a warning.
   ```
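The struct under review wraps each argument conversion so that failures report the enclosing function name and the 0-indexed argument position. A minimal Python analogue of that pattern (illustrative only, not TVM's actual FFI; the helper name `convert_args` is made up for this sketch):

```python
def convert_args(func_name, converters, args):
    """Convert each argument, annotating failures with the call context."""
    out = []
    for i, (conv, arg) in enumerate(zip(converters, args)):
        try:
            out.append(conv(arg))
        except Exception as e:
            # Mirror the C++ behavior: fall back to "<anonymous>" when the
            # function has no name, and report the failing argument index.
            name = func_name if func_name is not None else "<anonymous>"
            raise TypeError(
                f"In function {name}: error while converting argument {i}: {e}"
            ) from e
    return out
```

The point of the wrapper, as in the C++ code, is that a bare conversion error ("invalid literal for int()") becomes actionable once it names the call site and the argument that failed.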





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch ci-docker-staging updated (03e2edc -> 0fd91fb)

2021-01-26 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 03e2edc  try stashing entire standalone_crt in hopes it will not upset jenkins
 add fc9e264  Made tensorflow IsNan actually work (#7320)
 add 7b6a1a7  Fix an issue with dynamic functions overwritting call arg types (#7295)
 add 17ae44d  add a shape function and dynamic test for round (#7324)
 add 790344c  relax tolerance for dlpack test (#7325)
 add 6787d74  get_top_results works on a copy of output (#7327)
 add af9d1d2  [BYOC][Verilator] add support to dynamically load hardware library (#7286)
 add 3ec67f0  [AutoScheduler] Fix conv3d's op strategy for auto-scheduler (#7328)
 add e889def  [PatternLang] Add a relay LetPattern (#7332)
 add 218048e  [FIX,AUTOTVM] Add flop counts to cublas (#7297)
 add 42eb55d  add Verilator to CI (#7098)
 add 5d33491  [Tutorial] Autoscheduler on ARM devices (#7326)
 add e6d5318  [AutoScheduler] Separate shapes from DAG hash and enable schedule sharing (#7317)
 add f3b852d  [FIX] Infer input shape in sparse_dense_padded's alter_op if one does not exist (#7308)
 add da446af  Fix warning showed with GCC10 (#7336)
 add 6f75cff  [Relay][Training] Add more gradients (#7323)
 add 3d13809  fix tanh gradient and update tests to use downstream gradient (#7340)
 add c53030f  [CMake] use wrong flag name (#7341)
 add ab8bc0a  Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC. (#7338)
 add 1e0d356  [Relay, TOPI] Add numpy style cumsum op (#7334)
 add 0fd91fb  Merge remote-tracking branch 'origin/main' into standalone-crt-build-tree

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |   4 +-
 Jenkinsfile|   4 +-
 cmake/config.cmake |   4 +-
 cmake/modules/contrib/Verilator.cmake  |   8 +-
 docs/langref/relay_pattern.rst |  29 +++
 include/tvm/auto_scheduler/compute_dag.h   |   7 +
 include/tvm/relay/attrs/transform.h|  10 +
 include/tvm/relay/dataflow_pattern.h   |  36 +++
 include/tvm/relay/dataflow_pattern_functor.h   |  11 +-
 include/tvm/runtime/packed_func.h  |   4 +-
 python/tvm/auto_scheduler/compute_dag.py   |  35 +--
 python/tvm/auto_scheduler/measure_record.py| 126 -
 python/tvm/auto_scheduler/relay_integration.py |   6 +-
 python/tvm/auto_scheduler/search_task.py   |   8 +-
 python/tvm/auto_scheduler/utils.py |  27 ++
 python/tvm/auto_scheduler/workload_registry.py |  37 ++-
 python/tvm/autotvm/task/task.py|   1 +
 python/tvm/contrib/cublas.py   |   4 +-
 python/tvm/driver/tvmc/runner.py   |   2 +-
 python/tvm/relay/dataflow_pattern/__init__.py  |  44 
 python/tvm/relay/frontend/tensorflow.py|   1 +
 python/tvm/relay/op/_tensor.py |   1 +
 python/tvm/relay/op/_tensor_grad.py|  56 +++-
 python/tvm/relay/op/_transform.py  |  12 +-
 python/tvm/relay/op/strategy/cuda.py   |  12 +
 python/tvm/relay/op/strategy/generic.py|  21 ++
 python/tvm/relay/op/strategy/x86.py|   2 +-
 python/tvm/relay/op/transform.py   |  49 
 python/tvm/topi/__init__.py|   1 +
 python/tvm/topi/cuda/__init__.py   |   1 +
 python/tvm/topi/cuda/dense.py  |   5 +-
 python/tvm/topi/cuda/nms.py|   3 +-
 python/tvm/topi/cuda/scan.py   | 255 +--
 python/tvm/topi/cuda/sort.py   |   7 +-
 python/tvm/topi/cuda/sparse.py |   9 +-
 python/tvm/topi/cumsum.py  | 106 
 python/tvm/topi/utils.py   |   5 +
 src/auto_scheduler/compute_dag.cc  | 109 
 src/relay/analysis/type_solver.cc  |  18 +-
 src/relay/analysis/type_solver.h   |   3 +-
 src/relay/backend/contrib/verilator/codegen.cc |  30 ++-
 src/relay/ir/dataflow_matcher.cc   |  11 +-
 src/relay/ir/dataflow_pattern.cc   |  22 ++
 src/relay/ir/dataflow_pattern_functor.cc   |   6 +
 src/relay/ir/indexed_graph.cc  |   6 +
 src/relay/op/tensor/transform.cc   |  52 
 src/relay/transforms/alter_op_layout.cc|   1 +
 src/relay/transforms/type_infer.cc |  12 +-
 src/runtime/contrib/thrust/thrust.cu   |  73 +-
 src/runtime/contrib/verilator/verilator_runtime.cc |  69 -
 tests/cpp/ir_functor_test.cc   |   2 +-
 

[GitHub] [tvm] masahi opened a new pull request #7346: [Torch] More graph rewrites for Faster RCNN / MaskRCNN

2021-01-26 Thread GitBox


masahi opened a new pull request #7346:
URL: https://github.com/apache/tvm/pull/7346


   







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7267: [Frontend][Tensorflow] Sparse dense matmul adjoint option added

2021-01-26 Thread GitBox


ANSHUMAN87 commented on a change in pull request #7267:
URL: https://github.com/apache/tvm/pull/7267#discussion_r565007656



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -941,9 +934,48 @@ def _impl(inputs, attr, params, mod):
 (values_tensor, (rows, cols)), shape=tuple(dense_shape_tensor.tolist())
 )
 
+# As per tensorflow implementation, we have 4 possible input combination
+# and the first input(A) is always sparse and second input(B) is always dense.
+# Case 1: A , B , adjoint_a=False, adjoint_b=False  --> A * B
+# Case 2: A , B , adjoint_a=True,   adjoint_b=False  --> A.T * B
+# Case 3: A , B , adjoint_a=False, adjoint_b=True--> A * B.T
+# Case 4: A , B , adjoint_a=True,   adjoint_b=True--> (A.T * B.T).T
+#
+# Topi implementation for sparse_dense(matmul) has 2 possible input
+# combination where first input(A) is always dense
+# and second input(B) is always sparse.
+# Case 1: A , B, sparse_lhs = False  --> A * B.T
+# Case 2: A , B, sparse_lhs = True--> B * A.T
+#
+# The mapping would be as below:
+# TF Case 1: A , B , adjoint_a=False, adjoint_b=False

Review comment:
   Done!









[GitHub] [tvm] masahi edited a comment on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


masahi edited a comment on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767987598


   Yes, I believe dynamic tuning and codegen is one of the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem.
   
   MaskRCNN should serve as a good benchmark: it has both dynamic dense (very large) and dynamic conv2d + conv2d transpose. All of them are currently bottlenecks; without tuning them I cannot beat PyTorch.







[GitHub] [tvm] masahi commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


masahi commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767987598


   Yes, I believe dynamic tuning and codegen is one of the biggest challenges for TVM this year. I'm glad at least there are folks looking at the problem. MaskRCNN should serve as a good benchmark: it has both dynamic dense (very large) and dynamic conv2d + conv2d transpose.







[GitHub] [tvm] jcf94 commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


jcf94 commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767986306


   > > @comaniac Does this mean tuning support for dynamic workload (dynamic 
batch size etc) is coming soon? I'm very excited for this, that would 
tremendously help my MaskRCNN!!
   > 
   > Ah this is not the perfect solution for dynamic shape. This is more like a 
solution to make tuned logs more useful. For example, you can apply the tuning 
log with batch 1 to all batch sizes. You can even tune several batch sizes in 
prime numbers to achieve better performance to their multiples. Meanwhile, we 
do work on the dynamic shape support in auto_scheduler, but it may not be ready 
to be upstreamed before this summer or fall.
   
     Looking forward to the dynamic shape support, too! It will be very useful.







[GitHub] [tvm] ANSHUMAN87 commented on a change in pull request #7267: [Frontend][Tensorflow] Sparse dense matmul adjoint option added

2021-01-26 Thread GitBox


ANSHUMAN87 commented on a change in pull request #7267:
URL: https://github.com/apache/tvm/pull/7267#discussion_r564993711



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -941,8 +934,47 @@ def _impl(inputs, attr, params, mod):
 (values_tensor, (rows, cols)), shape=tuple(dense_shape_tensor.tolist())
 )
 
-if sparse_lhs:
+# As per tensorflow implementation, we have 4 possible input combination
+# and the first input(A) is always sparse and second input(B) is always dense.
+# Case 1: A , B , adjoint_a=False, adjoint_b=False  --> A * B
+# Case 2: A , B , adjoint_a=True,   adjoint_b=False  --> A.T * B
+# Case 3: A , B , adjoint_a=False, adjoint_b=True--> A * B.T
+# Case 4: A , B , adjoint_a=True,   adjoint_b=True--> (A.T * B.T).T
+#
+# Topi implementation for sparse_dense(matmul) has 2 possible input
+# combination where first input(A) is always dense
+# and second input(B) is always sparse.
+# Case 1: A , B, sparse_lhs = False  --> A * B.T
+# Case 2: A , B, sparse_lhs = True--> B * A.T
+#
+# The mapping would be as below:
+# TF Case 1: A , B , adjoint_a=False, adjoint_b=False
+#   --> sparse_dense(transpose(B), A, sparse_lhs=True)
+#
+# TF Case 2: A , B , adjoint_a=True, adjoint_b=False
+#   --> sparse_dense(transpose(B), transpose(A), sparse_lhs=True)
+#
+# TF Case 3: A , B , adjoint_a=False, adjoint_b=True
+#   --> sparse_dense(B, A, sparse_lhs=True)
+#
+# TF Case 4: A , B , adjoint_a=True, adjoint_b=True
+#   --> transpose(sparse_dense(B, transpose(A), sparse_lhs=False))
+
+# By default, in tensorflow the first input ,i.e., data is sparse
+sparse_lhs = True
+
+# TF Case 1:
+if not attr.get("adjoint_a") and not attr.get("adjoint_b"):
+data = _op.transpose(data)
+# TF Case 2:
+elif attr.get("adjoint_a") and not attr.get("adjoint_b"):
 data = _op.transpose(data)
+weight_sp = csr_matrix(weight_sp.transpose())
+# TF Case 3:
+elif not attr.get("adjoint_a") and attr.get("adjoint_b"):
+pass
+# TF Case 4:
+# attr.get("adjoint_a") and attr.get("adjoint_b"):
 else:
 weight_sp = csr_matrix(weight_sp.transpose())

Review comment:
   Good catch!
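As a sanity check of the TF-to-topi case mapping in the diff above, the four identities can be verified numerically. This sketch models sparsity with plain dense arrays for clarity, and `sparse_dense` below is a reference reimplementation of the topi semantics described in the code comment (dense first input, sparse second), not the actual topi kernel:

```python
import numpy as np

def sparse_dense(dense, sparse, sparse_lhs):
    # Reference semantics from the comment above:
    #   sparse_lhs=False -> dense @ sparse.T
    #   sparse_lhs=True  -> sparse @ dense.T
    return sparse @ dense.T if sparse_lhs else dense @ sparse.T

rng = np.random.default_rng(0)
A = rng.random((3, 3)) * (rng.random((3, 3)) > 0.5)  # the "sparse" TF input
B = rng.random((3, 3))                               # the dense TF input

# TF Case 1: A @ B     == sparse_dense(transpose(B), A, sparse_lhs=True)
np.testing.assert_allclose(A @ B, sparse_dense(B.T, A, sparse_lhs=True))
# TF Case 2: A.T @ B   == sparse_dense(transpose(B), transpose(A), sparse_lhs=True)
np.testing.assert_allclose(A.T @ B, sparse_dense(B.T, A.T, sparse_lhs=True))
# TF Case 3: A @ B.T   == sparse_dense(B, A, sparse_lhs=True)
np.testing.assert_allclose(A @ B.T, sparse_dense(B, A, sparse_lhs=True))
# TF Case 4: A.T @ B.T == transpose(sparse_dense(B, transpose(A), sparse_lhs=False))
np.testing.assert_allclose(A.T @ B.T, sparse_dense(B, A.T, sparse_lhs=False).T)
```

All four assertions pass, confirming that each TF adjoint combination reduces to one of the two topi input layouts.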









[GitHub] [tvm] huajsj opened a new pull request #7345: pipeline graph patch 1

2021-01-26 Thread GitBox


huajsj opened a new pull request #7345:
URL: https://github.com/apache/tvm/pull/7345


   Issue:
   SOC hardware platforms have multiple types of compute chipsets, such as
   GPU, FPGA, APU, RPU, etc. There is a requirement to use these compute
   units in parallel to reach the best performance.
   
   Solution:
   In this pipeline solution, we first split the compute graph into
   a group of subgraphs, then run these subgraphs in a pipeline module
   to make parallel GPU/FPGA/APU/RPU execution possible.
   
   This patch addresses the compute graph split issue.
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [tvm] comaniac commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


comaniac commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767974936


   > @comaniac Does this mean tuning support for dynamic workload (dynamic 
batch size etc) is coming soon? I'm very excited for this, that would 
tremendously help my MaskRCNN!!
   
   Ah this is not the perfect solution for dynamic shape. This is more like a 
solution to make tuned logs more useful. For example, you can apply the tuning 
log with batch 1 to all batch sizes. You can even tune several batch sizes in 
prime numbers to achieve better performance to their multiples. Meanwhile, we 
do work on the dynamic shape support in auto_scheduler, but it may not be ready 
to be upstreamed before this summer or fall.
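The reuse idea described above (apply the batch-1 log to all batch sizes, and tune a few prime batch sizes to cover their multiples) can be sketched as a lookup rule. This is purely illustrative; `pick_tuned_batch` is not auto_scheduler's actual API:

```python
def pick_tuned_batch(tuned_batches, query_batch):
    # Prefer the largest tuned batch size that divides the query batch,
    # so its schedule transfers to the multiple; batch 1 divides everything.
    divisors = [b for b in tuned_batches if query_batch % b == 0]
    return max(divisors) if divisors else None

tuned = [1, 2, 3, 5, 7]  # batch 1 plus a few primes, as suggested above
print(pick_tuned_batch(tuned, 6))   # 3
print(pick_tuned_batch(tuned, 35))  # 7
print(pick_tuned_batch(tuned, 11))  # 1 (fall back to the batch-1 log)
```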







[GitHub] [tvm] masahi commented on pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


masahi commented on pull request #7344:
URL: https://github.com/apache/tvm/pull/7344#issuecomment-767970847


   @comaniac Does this mean tuning support for dynamic workload (dynamic batch 
size etc) is coming soon? I'm very excited for this, that would tremendously 
help my MaskRCNN!!







[GitHub] [tvm] comaniac opened a new pull request #7344: [AutoScheduler] Enable schedule sharing in dispatch context

2021-01-26 Thread GitBox


comaniac opened a new pull request #7344:
URL: https://github.com/apache/tvm/pull/7344


   This is the follow-up PR for #7317 to enable schedule sharing in the auto_scheduler dispatch context.
   
   cc @merrymercy @jcf94 







[GitHub] [tvm] junrushao1994 commented on pull request #7330: [FFI] Improve error messages when array/map types do not match in function calls

2021-01-26 Thread GitBox


junrushao1994 commented on pull request #7330:
URL: https://github.com/apache/tvm/pull/7330#issuecomment-767910998


   I think this PR is good to merge once the CI is green







[GitHub] [tvm] junrushao1994 commented on pull request #7152: [RUNTIME] Improve error messages for TypedPackedFunc

2021-01-26 Thread GitBox


junrushao1994 commented on pull request #7152:
URL: https://github.com/apache/tvm/pull/7152#issuecomment-767910081


   Will do tonight!







[GitHub] [tvm] masahi commented on pull request #7334: [Relay, TOPI] Add numpy style cumsum op

2021-01-26 Thread GitBox


masahi commented on pull request #7334:
URL: https://github.com/apache/tvm/pull/7334#issuecomment-767907682


   Thanks @tkonolige @mbrookhart 







[tvm] branch main updated: [Relay, TOPI] Add numpy style cumsum op (#7334)

2021-01-26 Thread masahi

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1e0d356  [Relay, TOPI] Add numpy style cumsum op (#7334)
1e0d356 is described below

commit 1e0d3569b94f650243f4d0ac204d196e3be8b0aa
Author: masahi 
AuthorDate: Wed Jan 27 08:54:36 2021 +0900

[Relay, TOPI] Add numpy style cumsum op (#7334)

* Add cumsum relay/topi op

* relay tests working

* add torch frontend converter

* fix for importing detr

* fix bad merge

* begin cuda cumsum

* support non innermost axis

* support rank higher than 3

* making binop parameter

* fix overflow issue in thrust scan

* generic binop parameter working

* relay test working

* fixed for bool input

* remove pytorch change

* fix pylint

* doc update

* Update python/tvm/topi/cumsum.py

Co-authored-by: Tristan Konolige 

* Update tests/python/relay/test_op_level3.py

Co-authored-by: Tristan Konolige 

* add example outputs

* add supported input and output dtype in thrust log

* adding more loop var names

* fix cpplint

* fix missing check for the cuda target in nms thrust sort

* parallelize cpu cumsum

* making binop argument tir function

* update doc for binop

* doc update

Co-authored-by: Tristan Konolige 
---
 include/tvm/relay/attrs/transform.h  |  10 ++
 python/tvm/relay/op/_transform.py|  12 +-
 python/tvm/relay/op/strategy/cuda.py |  12 ++
 python/tvm/relay/op/strategy/generic.py  |  21 +++
 python/tvm/relay/op/transform.py |  49 +
 python/tvm/topi/__init__.py  |   1 +
 python/tvm/topi/cuda/__init__.py |   1 +
 python/tvm/topi/cuda/nms.py  |   3 +-
 python/tvm/topi/cuda/scan.py | 255 +++
 python/tvm/topi/cuda/sort.py |   7 +-
 python/tvm/topi/cumsum.py| 106 +++
 python/tvm/topi/utils.py |   5 +
 src/relay/op/tensor/transform.cc |  52 ++
 src/runtime/contrib/thrust/thrust.cu |  73 ++--
 tests/python/contrib/test_thrust.py  |   4 +-
 tests/python/relay/test_op_level3.py |  36 
 tests/python/topi/python/test_topi_cumsum.py |  72 
 17 files changed, 625 insertions(+), 94 deletions(-)
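For reference, the new op follows numpy-style cumsum semantics, including the flattening behavior when no axis is given and the optional output dtype. A plain-numpy sketch (illustrative numpy usage, not TVM code):

```python
import numpy as np

# Numpy-style semantics the new relay/topi cumsum op follows.
x = np.array([[1, 1, 1],
              [1, 1, 1]], dtype="int32")

flat = np.cumsum(x)                    # axis omitted: flatten, then scan
along_axis0 = np.cumsum(x, axis=0)     # scan down the rows
widened = np.cumsum(x, dtype="int64")  # widen output dtype to avoid overflow

print(flat.tolist())         # [1, 2, 3, 4, 5, 6]
print(along_axis0.tolist())  # [[1, 1, 1], [2, 2, 2]]
```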

diff --git a/include/tvm/relay/attrs/transform.h b/include/tvm/relay/attrs/transform.h
index efa44e0..4316624 100644
--- a/include/tvm/relay/attrs/transform.h
+++ b/include/tvm/relay/attrs/transform.h
@@ -438,6 +438,16 @@ struct MatrixSetDiagAttrs : public tvm::AttrsNode<MatrixSetDiagAttrs> {
   }
 };  // struct MatrixSetDiagAttrs
 
+/*! \brief Attributes used in cumsum operator */
+struct CumsumAttrs : public tvm::AttrsNode<CumsumAttrs> {
+  Integer axis;
+  DataType dtype;
+  TVM_DECLARE_ATTRS(CumsumAttrs, "relay.attrs.CumsumAttrs") {
+    TVM_ATTR_FIELD(axis).describe("The axis to sum over").set_default(NullValue<Integer>());
+    TVM_ATTR_FIELD(dtype).describe("Output data type").set_default(NullValue<DataType>());
+  }
+};
+
+
 }  // namespace relay
 }  // namespace tvm
 #endif  // TVM_RELAY_ATTRS_TRANSFORM_H_
diff --git a/python/tvm/relay/op/_transform.py b/python/tvm/relay/op/_transform.py
index 05ca6d2..fd07c98 100644
--- a/python/tvm/relay/op/_transform.py
+++ b/python/tvm/relay/op/_transform.py
@@ -103,7 +103,7 @@ def compute_scatter_add(attrs, inputs, output_type):
 
 _reg.register_strategy("scatter_add", strategy.scatter_add_strategy)
 
-# scatter
+# scatter_nd
 @_reg.register_compute("scatter_nd")
 def compute_scatter_nd(attrs, inputs, output_type):
 """Compute definition of scatter_nd"""
@@ -112,6 +112,16 @@ def compute_scatter_nd(attrs, inputs, output_type):
 
 _reg.register_strategy("scatter_nd", strategy.scatter_nd_strategy)
 
+# cumsum
+@_reg.register_compute("cumsum")
+def compute_cumsum(attrs, inputs, output_type):
+"""Compute definition of cumsum"""
+return [topi.cumsum(inputs[0], attrs.axis, attrs.dtype)]
+
+
+_reg.register_strategy("cumsum", strategy.cumsum_strategy)
+_reg.register_shape_func("cumsum", False, elemwise_shape_func)
+
 #
 #  Shape functions  #
 #
diff --git a/python/tvm/relay/op/strategy/cuda.py b/python/tvm/relay/op/strategy/cuda.py
index 3863df0..346e934 100644
--- a/python/tvm/relay/op/strategy/cuda.py
+++ b/python/tvm/relay/op/strategy/cuda.py
@@ -996,3 +996,15 @@ def argwhere_strategy_cuda(attrs, inputs, out_type, target):
 name="argwhere.cuda",
 )
 return strategy
+
+
+@cumsum_strategy.register(["cuda", "gpu"])
+def cumsum_strategy_cuda(attrs, inputs, out_type, target):
+"""cumsum cuda strategy"""
+strategy = _op.OpStrategy()
+

[GitHub] [tvm] masahi merged pull request #7334: [Relay, TOPI] Add numpy style cumsum op

2021-01-26 Thread GitBox


masahi merged pull request #7334:
URL: https://github.com/apache/tvm/pull/7334


   







[GitHub] [tvm] tqchen commented on pull request #7152: [RUNTIME] Improve error messages for TypedPackedFunc

2021-01-26 Thread GitBox


tqchen commented on pull request #7152:
URL: https://github.com/apache/tvm/pull/7152#issuecomment-767892174


   @junrushao1994 please take another look and manage the PR.
   @tkonolige Thanks for continuing to improve the code through the review process.







[tvm] branch ci-docker-staging updated (75b565a -> 03e2edc)

2021-01-26 Thread jroesch

jroesch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


omit 75b565a  add Verilator to CI
omit 3ec67f0  [AutoScheduler] Fix conv3d's op strategy for auto-scheduler (#7328)
omit af9d1d2  [BYOC][Verilator] add support to dynamically load hardware library (#7286)
omit 6787d74  get_top_results works on a copy of output (#7327)
omit 790344c  relax tolerance for dlpack test (#7325)
omit 17ae44d  add a shape function and dynamic test for round (#7324)
omit 7b6a1a7  Fix an issue with dynamic functions overwritting call arg types (#7295)
omit fc9e264  Made tensorflow IsNan actually work (#7320)
 add 9db961d  Build microTVM using standalone_crt in build tree.
 add 4ae8dcb  black format
 add 0935452  pylint
 add 03e2edc  try stashing entire standalone_crt in hopes it will not upset jenkins

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (75b565a)
\
 N -- N -- N   refs/heads/ci-docker-staging (03e2edc)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile|   6 +-
 cmake/config.cmake |   4 +-
 cmake/modules/StandaloneCrt.cmake  |   8 +-
 cmake/modules/contrib/Verilator.cmake  |   8 +-
 pyproject.toml |  93 +++
 python/tvm/driver/tvmc/runner.py   |   2 +-
 python/tvm/micro/__init__.py   |   4 +-
 python/tvm/micro/build.py  | 174 ++---
 python/tvm/micro/compiler.py   |   5 +-
 python/tvm/relay/frontend/tensorflow.py|   1 -
 python/tvm/relay/op/_tensor.py |   1 -
 python/tvm/relay/op/strategy/x86.py|   2 +-
 src/relay/analysis/type_solver.cc  |  18 +--
 src/relay/analysis/type_solver.h   |   3 +-
 src/relay/backend/contrib/verilator/codegen.cc |  30 +---
 src/relay/transforms/type_infer.cc |  12 +-
 src/runtime/contrib/verilator/verilator_runtime.cc |  69 ++--
 tests/micro/qemu/test_zephyr.py|   3 +-
 tests/python/contrib/test_dlpack.py|   2 +-
 .../contrib/test_verilator/infrastructure.py   |  39 +
 tests/python/frontend/tensorflow/test_forward.py   |   4 -
 tests/python/relay/test_any.py |   1 -
 tests/python/relay/test_type_infer.py  |  14 --
 tests/python/unittest/test_crt.py  |  13 +-
 tests/python/unittest/test_link_params.py  |  13 +-
 tests/scripts/task_config_build_cpu.sh |   1 -
 tests/scripts/task_config_build_i386.sh|   1 -
 tutorials/micro/micro_tflite.py|  13 +-
 28 files changed, 278 insertions(+), 266 deletions(-)



[GitHub] [tvm] jinchenglee commented on pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


jinchenglee commented on pull request #7338:
URL: https://github.com/apache/tvm/pull/7338#issuecomment-767875785


   @areusch , created PR #7343 .







[GitHub] [tvm] jinchenglee opened a new pull request #7343: Add resource_handle to both TVM_DLL_EXPORT_TYPED_FUNC and TVM_DLL_EXP…

2021-01-26 Thread GitBox


jinchenglee opened a new pull request #7343:
URL: https://github.com/apache/tvm/pull/7343


   …ORT_PACKED_FUNC macros in packed_func.h. This is a patch PR for #7338.
   @areusch @tqchen 
   







[GitHub] [tvm] mdw-octoml commented on a change in pull request #7331: Update uTVM code to work with the nRF5340DK dev board.

2021-01-26 Thread GitBox


mdw-octoml commented on a change in pull request #7331:
URL: https://github.com/apache/tvm/pull/7331#discussion_r564827199



##
File path: python/tvm/target/target.py
##
@@ -234,7 +234,10 @@ def micro(model="unknown", options=None):
 trans_table = {
 "host": [],
 "stm32f746xx": ["-mcpu=cortex-m7", "-march=armv7e-m"],
+"nrf5340dk": ["-keys=arm_cpu", "-mcpu=cortex-m33"],

Review comment:
   @areusch Done - PTAL









[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564773562



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in `./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   thanks! added





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


leandron commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564759432



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   @areusch the version restrictions come from 
https://github.com/apache/tvm/issues/4953









[GitHub] [tvm] areusch commented on pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#issuecomment-767738967


   @comaniac @leandron @tqchen @mdw-octoml @tkonolige please take a look at the 
[latest 
update](https://discuss.tvm.apache.org/t/rfc-consolidating-tvm-python-dependencies/8329/27)
 on the RFC forum thread.







[GitHub] [tvm] comaniac commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


comaniac commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564731209



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   Yes. Previously there were some reported issues with older XGBoost 
versions, so the required XGBoost version was bumped from 0.9.0 to 1.1.0.
   
   Here are two examples I could think of at this moment:
   https://github.com/apache/tvm/issues/4953
   
https://discuss.tvm.apache.org/t/segfault-in-auto-tuning-tutorial-tune-relay-x86-py/5928/9

##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended 

[GitHub] [tvm] Laurawly commented on pull request #7147: [CUDA][PASS]Legalize tensorcore

2021-01-26 Thread GitBox


Laurawly commented on pull request #7147:
URL: https://github.com/apache/tvm/pull/7147#issuecomment-767732899


   @jwfromm @jcf94 Could you update your reviews?







[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564728329



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   @comaniac @tqchen any idea why it is restricted to that version?









[GitHub] [tvm] areusch commented on a change in pull request #7331: Update uTVM code to work with the nRF5340DK dev board.

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7331:
URL: https://github.com/apache/tvm/pull/7331#discussion_r564727395



##
File path: python/tvm/target/target.py
##
@@ -234,7 +234,10 @@ def micro(model="unknown", options=None):
 trans_table = {
 "host": [],
 "stm32f746xx": ["-mcpu=cortex-m7", "-march=armv7e-m"],
+"nrf5340dk": ["-keys=arm_cpu", "-mcpu=cortex-m33"],

Review comment:
   I think this was because we originally intended to enable ARM schedules 
based on `-march`: I had read some documentation suggesting `-march` was the 
proper way to specify the ISA, but it turns out that is specific to x86 
targets. For ARM targets, `-mcpu` is the canonical flag, so we may need to 
improve our ISA parser to handle both.
   
   Actually, I think `-keys` here is also unnecessary--my apologies. 
@mdw-octoml can you remove it? I believe it should get auto-added.
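A hypothetical sketch (not the actual TVM API) of how a model-to-flags table like the `trans_table` quoted above could be rendered into a target string; the `c` target kind and the helper name `micro_target_string` are illustrative assumptions, and `-keys=arm_cpu` is assumed to be auto-added elsewhere:

```python
# Illustrative model -> compiler-flag table, mirroring the quoted diff.
trans_table = {
    "host": [],
    "stm32f746xx": ["-mcpu=cortex-m7", "-march=armv7e-m"],
    "nrf5340dk": ["-mcpu=cortex-m33"],  # -keys=arm_cpu assumed auto-added
}

def micro_target_string(model):
    """Join the target kind with the per-model flags (hypothetical helper)."""
    if model not in trans_table:
        raise ValueError(f"unknown micro model: {model}")
    return " ".join(["c"] + trans_table[model])

print(micro_target_string("nrf5340dk"))  # -> "c -mcpu=cortex-m33"
```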









[GitHub] [tvm] mdw-octoml commented on a change in pull request #7331: Update uTVM code to work with the nRF5340DK dev board.

2021-01-26 Thread GitBox


mdw-octoml commented on a change in pull request #7331:
URL: https://github.com/apache/tvm/pull/7331#discussion_r564724227



##
File path: python/tvm/target/target.py
##
@@ -234,7 +234,10 @@ def micro(model="unknown", options=None):
 trans_table = {
 "host": [],
 "stm32f746xx": ["-mcpu=cortex-m7", "-march=armv7e-m"],
+"nrf5340dk": ["-keys=arm_cpu", "-mcpu=cortex-m33"],

Review comment:
   TBH, I have no idea. @areusch ?
   
   









[GitHub] [tvm] comaniac commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


comaniac commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564718576



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   The upstream python/setup.py suggests `xgboost>=1.1.0`









[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564710839



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),

Review comment:
   @comaniac can you point me to something more concrete that specifies the 
min required version?
   
   @leandron added









[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564709209



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),
+
+# Development requirements
+("dev", ["matplotlib", "pillow"]),
+]
+
+# Maps a named Python package (which should appear in REQUIREMENTS_BY_PIECE 
above) to a
+# semver or pip version constraint. Semver constraints are translated into 
requirements.txt
+# constraints.
+CONSTRAINTS = [
+  ("onnx", ">=1.7.0"),
+  ("onnxruntime", ">=1.0.0"),
+  ("pillow", "<7"),
+  ("synr", ">=0.2.1"),
+  ("tensorflow", ">=2.1.0"),
+  ("tflite", ">=2.1.0"),
+  ("torch", "^1.7.0"),
+  ("torchvision", ">=0.5.0"),
+]
+
+
+# End of configuration options.
+
+
+
+
+
+# Required keys in REQUIREMENTS_BY_PIECE.
+REQUIRED_PIECES = ["core", "dev"]
+
+# Regex to validates piece names.
+PIECE_REGEX = re.compile(r"^[a-z0-9][a-z0-9-]*", re.IGNORECASE)
+
+# Regex to match a constraint specification. Multiple constraints are not 
supported.
+CONSTRAINT_REGEX = re.compile(r"(?:\^|\<|(?:<=)|(?:==)|(?:>=)|\>)[^<>=\^,]+")
+
+# Regex for parsing semantic versions. See
+# 
https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
+SEMVER_REGEX = re.compile(r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$")
+
+
+def validate_requirements_by_piece():
+  problems = []
+
+  unseen_required_pieces = set(REQUIRED_PIECES)
+  seen_pieces = set()
+
+  # Ensure that core is listed 
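The semver regex and `CONSTRAINTS` map quoted above imply a caret-to-pip translation step. A minimal sketch, assuming the common caret semantics (`^X.Y.Z` meaning `>=X.Y.Z,<(X+1).0.0` for `X >= 1`) rather than this PR's exact logic; the helper name `caret_to_pip` is an illustrative assumption:

```python
import re

# Suggested regex from semver.org, with named capture groups.
SEMVER_REGEX = re.compile(
    r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

def caret_to_pip(constraint):
    """Translate a caret constraint like "^1.7.0" into a pip range.

    Hypothetical helper; handles only the major >= 1 case for brevity.
    """
    assert constraint.startswith("^"), "expected a caret constraint"
    m = SEMVER_REGEX.match(constraint[1:])
    assert m is not None, f"not a semver version: {constraint}"
    major = int(m.group("major"))
    base = f"{m.group('major')}.{m.group('minor')}.{m.group('patch')}"
    return f">={base},<{major + 1}.0.0"

print(caret_to_pip("^1.7.0"))  # -> ">=1.7.0,<2.0.0"
```

Note that caret semantics differ for `0.x` versions (`^0.5.0` conventionally means `>=0.5.0,<0.6.0`), which this sketch does not handle.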

[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564708746



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various 
other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is the same as
+   the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),

Review comment:
   added





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
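The docstring in the diff above describes per-piece requirements files plus a consolidated `all.txt` that excludes `dev`. A hypothetical sketch of that consolidation step (the helper name and trimmed data are illustrative, not the PR's actual code):

```python
# Illustrative subset of the piece map from the diff above.
REQUIREMENTS_BY_PIECE = [
    ("core", ["attrs", "decorator", "numpy", "psutil", "scipy", "synr"]),
    ("importer-pytorch", ["future", "torch", "torchvision"]),
    ("dev", ["matplotlib", "pillow"]),
]


def consolidate(pieces):
    """Union of all piece package lists except "dev", sorted for all.txt."""
    return sorted({pkg for name, pkgs in pieces if name != "dev" for pkg in pkgs})


print(consolidate(REQUIREMENTS_BY_PIECE))
# ['attrs', 'decorator', 'future', 'numpy', 'psutil', 'scipy', 'synr', 'torch', 'torchvision']
```

The dev exclusion mirrors the docstring's note that `all.txt` consolidates every piece except dev.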




[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564707316



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is
+   the same as the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),
+
+# Development requirements
+("dev", ["matplotlib", "pillow"]),
+]
+
+# Maps a named Python package (which should appear in REQUIREMENTS_BY_PIECE 
above) to a
+# semver or pip version constraint. Semver constraints are translated into 
requirements.txt
+# constraints.
+CONSTRAINTS = [
+  ("onnx", ">=1.7.0"),
+  ("onnxruntime", ">=1.0.0"),
+  ("pillow", "<7"),
+  ("synr", ">=0.2.1"),
+  ("tensorflow", ">=2.1.0"),
+  ("tflite", ">=2.1.0"),
+  ("torch", "^1.7.0"),
+  ("torchvision", ">=0.5.0"),
+]
+
+
+# End of configuration options.
+
+
+
+
+
+# Required keys in REQUIREMENTS_BY_PIECE.
+REQUIRED_PIECES = ["core", "dev"]
+
+# Regex to validate piece names.
+PIECE_REGEX = re.compile(r"^[a-z0-9][a-z0-9-]*", re.IGNORECASE)
+
+# Regex to match a constraint specification. Multiple constraints are not 
supported.
+CONSTRAINT_REGEX = re.compile(r"(?:\^|\<|(?:<=)|(?:==)|(?:>=)|\>)[^<>=\^,]+")
+
+# Regex for parsing semantic versions. See
+# 
https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
+SEMVER_REGEX = re.compile(r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$")
+
+
+def validate_requirements_by_piece():

Review comment:
   I will add type annotations before submitting this PR
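The `SEMVER_REGEX` in the diff above follows the pattern suggested at semver.org (linked in the diff); a self-contained demonstration of the named groups that pattern captures:

```python
import re

# The canonical semver regex from semver.org, shown self-contained here.
SEMVER = re.compile(
    r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)"
    r"(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?"
    r"(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$"
)

m = SEMVER.match("1.7.0-rc1+build.5")
print(m.group("major"), m.group("minor"), m.group("patch"))  # 1 7 0
print(m.group("prerelease"), m.group("buildmetadata"))       # rc1 build.5
```

Non-conforming strings (e.g. `"1.7"` or `"^1.7.0"`) simply fail to match, which is what lets the generator distinguish semver constraints from pip-style ones.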





[GitHub] [tvm] tkonolige opened a new pull request #7342: [FIX] Don't add $TVM_HOME/.. to the include path when compiling code

2021-01-26 Thread GitBox


tkonolige opened a new pull request #7342:
URL: https://github.com/apache/tvm/pull/7342


   If the user has a dmlc-core directory next to the tvm directory, this 
dmlc-core directory will be incorrectly used when compiling files with cc.py.
   
   @Mutinifni @Huyuwei 
   







[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564706852



##
File path: python/gen_requirements.py
##
@@ -0,0 +1,391 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""TVM Python requirements.txt generator.
+
+This script generates a set of requirements.txt files (stored in 
`./requirements`) that describe
+TVM's Python dependencies.
+
+## Pieces
+
+TVM can be roughly broken into these named pieces along the lines of Python 
dependencies:
+
+- "core": A core piece, which is intended to be buildable with very few 
external dependencies. Users
+  can use Relay, compile models, and run autotuning with this part.
+- "importer-<tool>": Model importers, which convert models defined in various other tools (i.e.
+  TensorFlow, PyTorch, etc) into Relay models.
+- Extra features (i.e. XGBoost in AutoTVM). These enhance TVM's functionality, 
but aren't required
+  for basic operation.
+
+## What this tool does
+
+From these pieces, this tool builds:
+ - requirements/<piece>.txt - Python dependencies for each named piece above, `<piece>` is
+   the same as the quoted piece name.
+ - requirements/all.txt - Consolidated Python dependencies for all pieces, 
excluding dev below.
+ - requirements/dev.txt - Python dependencies needed to develop TVM, such as 
lint and test tools.
+
+The data representing each piece is contained in the two maps below.
+"""
+
+import argparse
+import collections
+import os
+import re
+import textwrap
+import sys
+
+# Maps named TVM piece (see description above) to a list of names of Python 
packages. Please use
+# alphabetical order for each package list, and do not add version constraints 
here!
+REQUIREMENTS_BY_PIECE = [
+# Base requirements needed to install tvm with no extras.
+("core", [
+"attrs",
+"decorator",
+"numpy",
+"psutil",
+"scipy",
+"synr",
+]),
+
+# Relay frontends.
+("importer-caffe2", ["torch"]),
+("importer-coreml", ["coremltools"]),
+("importer-darknet", ["opencv-python"]),
+("importer-keras", ["tensorflow", "tensorflow-estimator"]),
+("importer-onnx", ["future", "onnx", "onnxruntime", "torch", 
"torchvision"]),
+("importer-pytorch", ["future", "torch", "torchvision"]),
+("importer-tensorflow", ["tensorflow", "tensorflow-estimator"]),
+("importer-tflite", ["tensorflow", "tensorflow-estimator", "tflite"]),
+
+("tvmc", ["onnx", "onnxruntime", "tensorflow", "tflite", "torch", 
"torchvision"]),
+
+# XGBoost, useful for autotuning on some targets.
+("xgboost", ["torch"]),
+
+# Development requirements
+("dev", ["matplotlib", "pillow"]),
+]
+
+# Maps a named Python package (which should appear in REQUIREMENTS_BY_PIECE 
above) to a
+# semver or pip version constraint. Semver constraints are translated into 
requirements.txt
+# constraints.
+CONSTRAINTS = [
+  ("onnx", ">=1.7.0"),
+  ("onnxruntime", ">=1.0.0"),
+  ("pillow", "<7"),
+  ("synr", ">=0.2.1"),
+  ("tensorflow", ">=2.1.0"),
+  ("tflite", ">=2.1.0"),
+  ("torch", "^1.7.0"),
+  ("torchvision", ">=0.5.0"),
+]
+
+
+# End of configuration options.
+
+
+
+
+
+# Required keys in REQUIREMENTS_BY_PIECE.
+REQUIRED_PIECES = ["core", "dev"]
+
+# Regex to validate piece names.
+PIECE_REGEX = re.compile(r"^[a-z0-9][a-z0-9-]*", re.IGNORECASE)
+
+# Regex to match a constraint specification. Multiple constraints are not 
supported.
+CONSTRAINT_REGEX = re.compile(r"(?:\^|\<|(?:<=)|(?:==)|(?:>=)|\>)[^<>=\^,]+")
+
+# Regex for parsing semantic versions. See
+# 
https://semver.org/#is-there-a-suggested-regular-expression-regex-to-check-a-semver-string
+SEMVER_REGEX = re.compile(r"^(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$")
+
+
+def validate_requirements_by_piece():
+  problems = []
+
+  unseen_required_pieces = set(REQUIRED_PIECES)
+  seen_pieces = set()
+
+  # Ensure that core is listed 
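The `CONSTRAINTS` table in the diff above mixes pip-style specs (`>=`, `<`, `==`) with a semver caret (`"^1.7.0"` for torch). A hedged sketch of how a caret could be lowered to an equivalent pip range, assuming a major version of at least 1 (true caret semantics for `0.x` majors pin the minor instead; the helper name is illustrative, not the PR's code):

```python
def caret_to_pip(constraint):
    """Translate a semver caret like "^1.7.0" into a pip range constraint."""
    assert constraint.startswith("^"), "only caret constraints handled here"
    version = constraint[1:]
    major = int(version.split(".", 1)[0])
    assert major >= 1, "0.x carets pin the minor version instead"
    # ^X.Y.Z allows anything >= X.Y.Z but below the next major release.
    return ">={},<{}.0.0".format(version, major + 1)


print(caret_to_pip("^1.7.0"))  # >=1.7.0,<2.0.0
```

Lowering happens at generation time, so requirements.txt consumers never see the caret syntax.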

[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564706472



##
File path: python/requirements/all.txt
##
@@ -0,0 +1,16 @@
+numpy

Review comment:
   I added a way to specify a comment to each piece and that's now placed 
at the top of each requirements.txt file
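The comment above describes emitting each piece's comment at the top of its generated requirements file; a hypothetical sketch of that output step (function name and sample inputs are illustrative, not the PR's code):

```python
def write_piece_requirements(comment, packages, constraints):
    """Render one requirements.txt body: piece comment first, then packages."""
    lines = ["# " + comment]
    for name in sorted(packages):
        lines.append(name + constraints.get(name, ""))  # append constraint if any
    return "\n".join(lines) + "\n"


text = write_piece_requirements(
    "Base requirements needed to install tvm with no extras.",
    ["numpy", "synr"],
    {"synr": ">=0.2.1"},
)
print(text)
```

Keeping the comment in the generated file preserves the provenance of each list for anyone reading `requirements/` directly.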









[GitHub] [tvm] areusch commented on a change in pull request #7289: Generate requirements.txt from Python spec

2021-01-26 Thread GitBox


areusch commented on a change in pull request #7289:
URL: https://github.com/apache/tvm/pull/7289#discussion_r564706225



##
File path: python/setup.py
##
@@ -171,38 +171,25 @@ def get_package_data_files():
 return ["relay/std/prelude.rly", "relay/std/core.rly"]
 
 
+# Temporarily add this directory to the path so we can import the requirements 
generator
+# tool.
+sys.path.insert(0, os.path.dirname(__file__))
+import gen_requirements
+sys.path.pop(0)
+
+requirements = gen_requirements.join_requirements()

Review comment:
   what do you mean exactly?
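For context, the `sys.path` dance in the setup.py snippet above (insert the script's directory, import a sibling module, then restore the path) can be demonstrated standalone; `gen_requirements` itself is not available here, so a stdlib module and a throwaway directory stand in:

```python
import os
import sys

# Temporarily prepend a directory so a sibling module could be imported.
extra_dir = os.path.join(os.getcwd(), "nonexistent_helper_dir")
sys.path.insert(0, extra_dir)
try:
    import json  # stands in for "import gen_requirements"
finally:
    sys.path.pop(0)  # restore sys.path so later imports are unaffected

print(extra_dir in sys.path)  # False
```

The try/finally form is slightly safer than the bare insert/pop pair in the diff: the path is restored even if the import raises.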









[GitHub] [tvm] areusch commented on pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


areusch commented on pull request #7338:
URL: https://github.com/apache/tvm/pull/7338#issuecomment-767697337


   @jinchenglee gotcha, do you want to submit a PR?







[GitHub] [tvm] jinchenglee edited a comment on pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


jinchenglee edited a comment on pull request #7338:
URL: https://github.com/apache/tvm/pull/7338#issuecomment-767695790


   @areusch , there's another macro in packed_func.h that needs the same fix. You are 
actually fixing TVM_DLL_EXPORT_PACKED_FUNC in the commit. 
   
   ```
#define TVM_DLL_EXPORT_TYPED_FUNC(ExportName, Function)                                     \
  extern "C" {                                                                              \
  TVM_DLL int ExportName(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, \
                         int* out_type_code) {                                              \
    try {                                                                                   \
      auto f = Function;                                                                    \
      using FType = ::tvm::runtime::detail::function_signature<decltype(f)>::FType;         \
      ::tvm::runtime::TVMRetValue rv;                                                       \
      ::tvm::runtime::detail::unpack_call_by_signature<FType>::run(                         \
          f, ::tvm::runtime::TVMArgs(args, type_code, num_args), &rv);                      \
      rv.MoveToCHost(out_value, out_type_code);                                             \
      return 0;                                                                             \
    } catch (const ::std::runtime_error& _except_) {                                        \
      TVMAPISetLastError(_except_.what());                                                  \
      return -1;                                                                            \
    }                                                                                       \
  }                                                                                         \
  }
   ```







[GitHub] [tvm] jinchenglee commented on pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


jinchenglee commented on pull request #7338:
URL: https://github.com/apache/tvm/pull/7338#issuecomment-767695790


   @areusch , there's another macro in packed_func.h that needs the same fix:
   
   ```
#define TVM_DLL_EXPORT_TYPED_FUNC(ExportName, Function)                                     \
  extern "C" {                                                                              \
  TVM_DLL int ExportName(TVMValue* args, int* type_code, int num_args, TVMValue* out_value, \
                         int* out_type_code) {                                              \
    try {                                                                                   \
      auto f = Function;                                                                    \
      using FType = ::tvm::runtime::detail::function_signature<decltype(f)>::FType;         \
      ::tvm::runtime::TVMRetValue rv;                                                       \
      ::tvm::runtime::detail::unpack_call_by_signature<FType>::run(                         \
          f, ::tvm::runtime::TVMArgs(args, type_code, num_args), &rv);                      \
      rv.MoveToCHost(out_value, out_type_code);                                             \
      return 0;                                                                             \
    } catch (const ::std::runtime_error& _except_) {                                        \
      TVMAPISetLastError(_except_.what());                                                  \
      return -1;                                                                            \
    }                                                                                       \
  }                                                                                         \
  }
   ```







[GitHub] [tvm] tkonolige commented on a change in pull request #7334: [Relay, TOPI] Add numpy style cumsum op

2021-01-26 Thread GitBox


tkonolige commented on a change in pull request #7334:
URL: https://github.com/apache/tvm/pull/7334#discussion_r564640703



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -251,99 +269,103 @@ def scan_thrust(data, output_dtype, exclusive=True, 
return_reduction=False):
 Whether or not do exclusive or inclusive scan.
 
 return_reduction: bool, optional
-Whether or not return a 1-D tensor storing the reduction of each row.
+Whether or not return a (N-1)-D tensor storing the reduction of each 
scan axis.
 Reductions are computed as part of the upsweep pass, so there is no 
extra cost.
-If False, reductions are ignored.
+If False, reductions are ignored. It must be False when exclusive is 
False.
+
+binop: function, optional
+A binary associative op to use for scan. Since we need to look up the 
corresponding
+thrust function, arbitrary callables are not supported. Currently only
+tvm.tir.generic.add can be passed in.
 
 Returns
 ---
 output : tvm.te.Tensor
-1-D tensor that is the exclusive scan of the input, or
-2-D tensor storing the exclusive scan of each row.
+An N-D tensor of the same rank N and shape as the input data.
 
 reduction : tvm.te.Tensor, optional
-1-D tensor storing the reduction of each row.
+(N-1)-D tensor storing the reduction of each scan axis.
 Returned if return_reduction is True.
 """
 data_buf = tvm.tir.decl_buffer(data.shape, data.dtype, "data_buf", 
data_alignment=8)
 output_buf = tvm.tir.decl_buffer(data.shape, output_dtype, "output_buf", 
data_alignment=8)
+
 output = te.extern(
 [data.shape],
 [data],
 lambda ins, outs: tvm.tir.call_packed(
-"tvm.contrib.thrust.sum_scan", ins[0], outs[0], exclusive
+_get_thrust_func_name(binop), ins[0], outs[0], exclusive
 ),
 dtype=[output_dtype],
 in_buffers=[data_buf],
 out_buffers=[output_buf],
-name="exclusive_sum_scan2d",
-tag="exclusive_sum_scan2d_gpu",
+name="exclusive_scan_thrust",
+tag="exclusive_scan_thrust_gpu",
 )
 
 if return_reduction:
 assert exclusive, "return_reduction should be False for inclusive scan"
-reduction = get_reduction_from_exclusive_scan(data, output)
+reduction = get_reduction_from_exclusive_scan(data, output, binop)
 return output, reduction
 
 return output
 
 
-def exclusive_scan(data, axis=-1, return_reduction=False, output_dtype=None):
-"""Do exclusive scan on 1D input or along rows of 2D input.
+def exclusive_scan(
+data, axis=-1, return_reduction=False, output_dtype=None, 
binop=tvm.tir.generic.add
+):
+"""Do exclusive scan on 1D or multidimensional input.
 
 Parameters
 --
 data : tvm.te.Tensor
-Input data. 1-D tensor with shape [scan_axis_size], or
-2-D tensor with shape [batch_size, scan_axis_size].
+Input data of any shape.
 
 axis: int, optional
-The axis to do scan on. For now, only the inner most axis is supported.
+The axis to do scan on. By default, scan is done on the innermost axis.
 
 return_reduction: bool, optional
-Whether or not return a 1-D tensor storing the reduction of each row.
+Whether or not return a tensor storing the reduction over each scan 
axis.
+If the input rank is N, this tensor is of rank N - 1.
 Reductions are computed as part of the upsweep pass, so there is no 
extra cost.
 If False, reductions are ignored.
 
 output_dtype: string, optional
 The dtype of the output scan tensor. If not provided, the dtype of the 
input is used.
 
+binop: function, optional

Review comment:
   I think you should say that this defaults to add.
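Independent of the TVM/thrust implementation discussed above, a plain-Python sketch of an exclusive scan with a configurable binary op, also returning the reduction (which the docstring notes comes free from the upsweep pass):

```python
import operator


def exclusive_scan_1d(xs, binop=operator.add, identity=0):
    """Exclusive scan of a 1-D sequence; also returns the full reduction."""
    out, acc = [], identity
    for x in xs:
        out.append(acc)     # element i holds the fold of xs[:i]
        acc = binop(acc, x)
    return out, acc         # scan result and the row reduction


print(exclusive_scan_1d([1, 2, 3, 4]))  # ([0, 1, 3, 6], 10)
```

Note the identity element must match the op (0 for add, 1 for mul), which is one reason the PR restricts `binop` to ops with known thrust counterparts.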

##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,50 @@ def adv_index(inputs):
 Output tensor.
 """
 return _make.adv_index(Tuple(inputs))
+
+
+def cumsum(data, axis=None, dtype=None):
+"""Numpy style cumsum op. Return the cumulative inclusive sum of the 
elements along
+a given axis.
+
+Parameters
+--
+data : relay.Expr
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative sum is computed. The default (None) is 
to compute
+the cumsum over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are summed.
+If dtype is not specified, it defaults to the dtype of data.
+
+Returns
+---
+result : relay.Expr
+The result has the same size as data, and the same shape as data if 
axis is not None.
+If axis is None, the result is a 1-d array.
+
+Examples:

Review comment:
   I think this formatting is necessary for rst?
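The relay `cumsum` docstring above follows NumPy semantics, which can be illustrated directly with NumPy: `axis=None` computes over the flattened array (so the result is 1-D), while an explicit axis preserves the input shape.

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
print(np.cumsum(a))           # flattened cumulative sums, 1-D result
print(np.cumsum(a, axis=1))   # row-wise cumulative sums, same shape as input
```

Passing `dtype` would likewise control the accumulator type, exactly as the relay docstring describes.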
   

[GitHub] [tvm] tqchen closed issue #7305: how to estimate a model's flops number in tvm?

2021-01-26 Thread GitBox


tqchen closed issue #7305:
URL: https://github.com/apache/tvm/issues/7305


   







[GitHub] [tvm] tqchen commented on issue #7305: how to estimate a model's flops number in tvm?

2021-01-26 Thread GitBox


tqchen commented on issue #7305:
URL: https://github.com/apache/tvm/issues/7305#issuecomment-767589149


   Thank you for asking the question. Please open a new thread on 
https://discuss.tvm.apache.org/ where the community collectively answers 
related questions







[GitHub] [tvm] tqchen commented on pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


tqchen commented on pull request #7338:
URL: https://github.com/apache/tvm/pull/7338#issuecomment-767588682


   Thanks @jinchenglee @areusch !







[tvm] branch main updated (c53030f -> ab8bc0a)

2021-01-26 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from c53030f  [CMake] use wrong flag name (#7341)
 add ab8bc0a  Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC. (#7338)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/packed_func.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)



[GitHub] [tvm] tqchen merged pull request #7338: Add resource_handle to TVM_DLL_EXPORT_TYPED_FUNC.

2021-01-26 Thread GitBox


tqchen merged pull request #7338:
URL: https://github.com/apache/tvm/pull/7338


   







[GitHub] [tvm] leandron commented on pull request #7331: Update uTVM code to work with the nRF5340DK dev board.

2021-01-26 Thread GitBox


leandron commented on pull request #7331:
URL: https://github.com/apache/tvm/pull/7331#issuecomment-767570078


   cc @tom-gall and @gromero might want to have a look as well







[tvm] branch main updated: [CMake] use wrong flag name (#7341)

2021-01-26 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new c53030f  [CMake] use wrong flag name (#7341)
c53030f is described below

commit c53030f40e6911a10555097230f69809bc5af73f
Author: windclarion 
AuthorDate: Tue Jan 26 21:51:24 2021 +0800

[CMake] use wrong flag name (#7341)

Signed-off-by: windclarion 
---
 CMakeLists.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6929dd6..98dd7de 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -375,9 +375,9 @@ add_library(tvm_objs OBJECT ${COMPILER_SRCS} ${RUNTIME_SRCS})
 add_library(tvm_runtime_objs OBJECT ${RUNTIME_SRCS})
 
 add_library(tvm SHARED $<TARGET_OBJECTS:tvm_objs>)
-set_property(TARGET tvm APPEND PROPERTY LINK_OPTIONS "${TVM_VISIBILITY_FLAGS}")
+set_property(TARGET tvm APPEND PROPERTY LINK_OPTIONS "${TVM_VISIBILITY_FLAG}")
 add_library(tvm_runtime SHARED $<TARGET_OBJECTS:tvm_runtime_objs>)
-set_property(TARGET tvm_runtime APPEND PROPERTY LINK_OPTIONS "${TVM_VISIBILITY_FLAGS}")
+set_property(TARGET tvm_runtime APPEND PROPERTY LINK_OPTIONS "${TVM_VISIBILITY_FLAG}")
 
 if(USE_MICRO)
   # NOTE: cmake doesn't track dependencies at the file level across 
subdirectories. For the



[GitHub] [tvm] FrozenGene commented on pull request #7341: [CMake] use wrong flag name

2021-01-26 Thread GitBox


FrozenGene commented on pull request #7341:
URL: https://github.com/apache/tvm/pull/7341#issuecomment-767553996


   thanks @windclarion @leandron 







[GitHub] [tvm] FrozenGene merged pull request #7341: [CMake] use wrong flag name

2021-01-26 Thread GitBox


FrozenGene merged pull request #7341:
URL: https://github.com/apache/tvm/pull/7341


   







[GitHub] [tvm] Meteorix commented on pull request #7147: [CUDA][PASS]Legalize tensorcore

2021-01-26 Thread GitBox


Meteorix commented on pull request #7147:
URL: https://github.com/apache/tvm/pull/7147#issuecomment-767434897


   @Laurawly Thanks! The ci passed. 


