[tvm] branch main updated (23bd825 -> 1812060)

2021-01-05 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 23bd825  [AutoScheduler] Add custom build function (#7185)
 add 1812060  Fix prelu bug in onnx frontend. (#7208)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 12 +---
 tests/python/frontend/onnx/test_forward.py |  5 -
 2 files changed, 9 insertions(+), 8 deletions(-)
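
For context, a minimal NumPy sketch of the ONNX PRelu semantics the importer is being aligned with (the per-channel slope shape below is illustrative, not taken from the patch):

```python
import numpy as np

# ONNX PRelu: f(x) = x for x >= 0, f(x) = slope * x for x < 0,
# with slope unidirectionally broadcastable to x.
def prelu_reference(x, slope):
    return np.where(x >= 0, x, slope * x)

x = np.random.randn(1, 3, 4, 4).astype("float32")
slope = np.array([0.1, 0.2, 0.3], dtype="float32").reshape(1, 3, 1, 1)  # per-channel slope
y = prelu_reference(x, slope)
```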



[GitHub] [tvm] masahi merged pull request #7208: [Relay][Frontend][Onnx] Fix mismatch between Onnx Prelu definition and importer.

2021-01-05 Thread GitBox


masahi merged pull request #7208:
URL: https://github.com/apache/tvm/pull/7208


   







[GitHub] [tvm] masahi closed issue #7202: [Bug] [Relay] Error when compiling a simple ONNX model

2021-01-05 Thread GitBox


masahi closed issue #7202:
URL: https://github.com/apache/tvm/issues/7202


   







[GitHub] [tvm] masahi closed issue #7203: [Bug] [Relay] Error when compiling a simple ONNX model with Abs and PRelu

2021-01-05 Thread GitBox


masahi closed issue #7203:
URL: https://github.com/apache/tvm/issues/7203


   







[GitHub] [tvm] fantasyRqg opened a new pull request #7212: test tvm ci

2021-01-05 Thread GitBox


fantasyRqg opened a new pull request #7212:
URL: https://github.com/apache/tvm/pull/7212


   just test tvm ci







[GitHub] [tvm] hzfan commented on a change in pull request #7045: [Arith] Simplify cast

2021-01-05 Thread GitBox


hzfan commented on a change in pull request #7045:
URL: https://github.com/apache/tvm/pull/7045#discussion_r551789221



##
File path: src/arith/canonical_simplify.cc
##
@@ -77,6 +77,25 @@ inline PrimExpr DivImpl(PrimExpr a, PrimExpr b, DivMode mode) {
   }
 }
 
+bool CheckCastImpl(DataType dtype, PrimExpr value, Analyzer* analyzer) {

Review comment:
   I guess `CastIsSafe` would be a better name, since it checks both upcast and downcast.









[GitHub] [tvm] masahi closed pull request #7212: test tvm ci

2021-01-05 Thread GitBox


masahi closed pull request #7212:
URL: https://github.com/apache/tvm/pull/7212


   







[GitHub] [tvm] masahi commented on pull request #7212: test tvm ci

2021-01-05 Thread GitBox


masahi commented on pull request #7212:
URL: https://github.com/apache/tvm/pull/7212#issuecomment-754497941


   Do not open a PR like this.







[GitHub] [tvm] fantasyRqg commented on pull request #7212: test tvm ci

2021-01-05 Thread GitBox


fantasyRqg commented on pull request #7212:
URL: https://github.com/apache/tvm/pull/7212#issuecomment-754498507


   The tests run OK on my Ubuntu machine; I don't know why the CI test failed.







[GitHub] [tvm] masahi commented on pull request #7212: test tvm ci

2021-01-05 Thread GitBox


masahi commented on pull request #7212:
URL: https://github.com/apache/tvm/pull/7212#issuecomment-754500048


   If you want to run CI again, you can do a dummy commit and push. Make sure the failure is not due to your change. If you believe it is indeed a flaky test, you can open an issue.
   
   But since the test fails with a segfault, which should not happen, it is likely there is some problem with your change.







[GitHub] [tvm] jcf94 commented on a change in pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


jcf94 commented on a change in pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#discussion_r551816877



##
File path: python/tvm/auto_scheduler/task_scheduler.py
##
@@ -82,11 +82,12 @@ def make_search_policies(
     if isinstance(search_policy, str):
         policy_type, model_type = search_policy.split(".")
         if model_type == "xgb":
-            cost_model = XGBModel(num_warmup_sample=len(tasks) * num_measures_per_round)
-            if load_model_file:
-                logger.info("TaskScheduler: Load pretrained model...")
-                cost_model.load(load_model_file)
-            elif load_log_file:
+            cost_model = XGBModel(
+                num_warmup_sample=len(tasks) * num_measures_per_round,
+                model_file=load_model_file,
+            )
+            if load_log_file:
+                logger.info("TaskScheduler: Reload measured states and train the model...")

Review comment:
   Oh, the old one is fine. I was just going to add a `self.model_file` for saving the cost model after training, and this got modified along the way.









[GitHub] [tvm] jcf94 commented on a change in pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


jcf94 commented on a change in pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#discussion_r551816877



##
File path: python/tvm/auto_scheduler/task_scheduler.py
##
@@ -82,11 +82,12 @@ def make_search_policies(
     if isinstance(search_policy, str):
         policy_type, model_type = search_policy.split(".")
         if model_type == "xgb":
-            cost_model = XGBModel(num_warmup_sample=len(tasks) * num_measures_per_round)
-            if load_model_file:
-                logger.info("TaskScheduler: Load pretrained model...")
-                cost_model.load(load_model_file)
-            elif load_log_file:
+            cost_model = XGBModel(
+                num_warmup_sample=len(tasks) * num_measures_per_round,
+                model_file=load_model_file,
+            )
+            if load_log_file:
+                logger.info("TaskScheduler: Reload measured states and train the model...")

Review comment:
   The old one is fine. I was just going to add a `self.model_file` for saving the cost model after training, and this got modified along the way.









[GitHub] [tvm] jcf94 commented on a change in pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


jcf94 commented on a change in pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#discussion_r551830554



##
File path: python/tvm/auto_scheduler/cost_model/xgb_model.py
##
@@ -141,6 +147,12 @@ def update(self, inputs, results):
         self.inputs.extend(inputs)
         self.results.extend(results)
 
+        if len(self.inputs) - self.last_train_length < self.last_train_length / 5:

Review comment:
   Added.









[GitHub] [tvm] jcf94 commented on a change in pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


jcf94 commented on a change in pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#discussion_r551830662



##
File path: python/tvm/auto_scheduler/cost_model/xgb_model.py
##
@@ -116,12 +117,17 @@ def __init__(self, verbose_eval=25, num_warmup_sample=100, seed=None):
         self.plan_size = 32
         self.num_warmup_sample = num_warmup_sample
         self.verbose_eval = verbose_eval
+        self.model_file = model_file
+        if model_file:
+            logger.info("XGBModel: Load pretrained model from %s...", model_file)
+            self.load(model_file)

Review comment:
   Removed.









[GitHub] [tvm] tqchen commented on a change in pull request #7045: [Arith] Simplify cast

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7045:
URL: https://github.com/apache/tvm/pull/7045#discussion_r551976566



##
File path: src/arith/canonical_simplify.cc
##
@@ -1071,6 +1208,33 @@ PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const ReduceNode* op) {
   return ret;
 }
 
+PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const CastNode* op) {
+  if (!IsIndexType(op->dtype)) {
+    return Rewriter::VisitExpr_(op);
+  }
+  // normalize
+  PrimExpr value = this->CanonicalMutate(op->value);
+  PrimExpr ret;
+  // PushCastToChildren
+  if (value.as<SumExprNode>()) {
+    SumExpr se = Downcast<SumExpr>(value);
+    if (se->CanPushCastToChildren(op->dtype, analyzer_)) {
+      se.CopyOnWrite()->PushCastToChildren(op->dtype);
+      ret = se;

Review comment:
   consider directly return here.

##
File path: src/arith/canonical_simplify.cc
##
@@ -1071,6 +1208,33 @@ PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const ReduceNode* op) {
   return ret;
 }
 
+PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const CastNode* op) {
+  if (!IsIndexType(op->dtype)) {
+    return Rewriter::VisitExpr_(op);
+  }
+  // normalize
+  PrimExpr value = this->CanonicalMutate(op->value);
+  PrimExpr ret;
+  // PushCastToChildren
+  if (value.as<SumExprNode>()) {
+    SumExpr se = Downcast<SumExpr>(value);
+    if (se->CanPushCastToChildren(op->dtype, analyzer_)) {
+      se.CopyOnWrite()->PushCastToChildren(op->dtype);
+      ret = se;
+    }
+  } else if (value.as<SplitExprNode>()) {
+    SplitExpr se = Downcast<SplitExpr>(value);
+    if (se->CanPushCastToChildren(op->dtype, analyzer_)) {
+      se.CopyOnWrite()->PushCastToChildren(op->dtype);
+      ret = se;

Review comment:
   consider directly return here.

##
File path: src/arith/canonical_simplify.cc
##
@@ -1071,6 +1208,33 @@ PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const ReduceNode* op) {
   return ret;
 }
 
+PrimExpr CanonicalSimplifier::Impl::VisitExpr_(const CastNode* op) {
+  if (!IsIndexType(op->dtype)) {
+    return Rewriter::VisitExpr_(op);
+  }
+  // normalize
+  PrimExpr value = this->CanonicalMutate(op->value);
+  PrimExpr ret;
+  // PushCastToChildren
+  if (value.as<SumExprNode>()) {
+    SumExpr se = Downcast<SumExpr>(value);
+    if (se->CanPushCastToChildren(op->dtype, analyzer_)) {
+      se.CopyOnWrite()->PushCastToChildren(op->dtype);
+      ret = se;
+    }
+  } else if (value.as<SplitExprNode>()) {
+    SplitExpr se = Downcast<SplitExpr>(value);
+    if (se->CanPushCastToChildren(op->dtype, analyzer_)) {
+      se.CopyOnWrite()->PushCastToChildren(op->dtype);
+      ret = se;
+    }
+  }
+  if (!ret.defined()) {
+    ret = Rewriter::VisitExpr_(op);

Review comment:
   return Rewriter::VisitExpr_(op);









[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r551977702



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~

Review comment:
   rst requires the underline to have the same length as the title to avoid 
warning

##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~
+
+Before working with microTVM, we recommend you have a supported development 
board. Then, follow these
+tutorials to get started with microTVM:
+
+1. :doc:`Start the microTVM Reference VM 
`. The microTVM tutorials

Review comment:
   consider use label and reference. 
   
https://stackoverflow.com/questions/15394347/adding-a-cross-reference-to-a-subheading-or-anchor-in-another-page


[GitHub] [tvm] mbrookhart commented on issue #7201: [Bug] Error when compiling a simple ONNX model for opt_level=3

2021-01-05 Thread GitBox


mbrookhart commented on issue #7201:
URL: https://github.com/apache/tvm/issues/7201#issuecomment-754725862


   This is simply because the DynamicToStatic pass is defined at opt level 3, not 2. We can lower its opt level to make this case pass, if we think that's a robust thing to do more generally.
   
   cc @tqchen @jroesch @masahi @jwfromm 
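
   For reference, a minimal sketch (assuming the standard Relay ONNX import and build APIs; the model path is hypothetical) of working around this today by compiling at opt_level=3, or by running the pass explicitly:

```python
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("model.onnx")  # hypothetical model path
mod, params = relay.frontend.from_onnx(onnx_model)

# Option 1: compile at opt_level=3 so DynamicToStatic is included.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Option 2: run the pass explicitly before building at a lower opt level.
mod = relay.transform.DynamicToStatic()(mod)
```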
   







[GitHub] [tvm] mbrookhart removed a comment on issue #7201: [Bug] Error when compiling a simple ONNX model for opt_level=3

2021-01-05 Thread GitBox


mbrookhart removed a comment on issue #7201:
URL: https://github.com/apache/tvm/issues/7201#issuecomment-754725862


   This is simply because the DynamicToStatic pass is defined at opt level 3, not 2. We can lower its opt level to make this case pass, if we think that's a robust thing to do more generally.
   
   cc @tqchen @jroesch @masahi @jwfromm 
   







[GitHub] [tvm] mbrookhart commented on issue #7201: [Bug] Error when compiling a simple ONNX model for opt_level=3

2021-01-05 Thread GitBox


mbrookhart commented on issue #7201:
URL: https://github.com/apache/tvm/issues/7201#issuecomment-754726173


   I will debug this morning.







[GitHub] [tvm] mbrookhart commented on issue #7200: [Bug] Error when compiling a simple ONNX model with MatMul operator for opt_level=2

2021-01-05 Thread GitBox


mbrookhart commented on issue #7200:
URL: https://github.com/apache/tvm/issues/7200#issuecomment-754726305


   This is simply because the DynamicToStatic pass is defined at opt level 3, not 2. We can lower its opt level to make this case pass, if we think that's a robust thing to do more generally.
   
   cc @tqchen @jroesch @masahi @jwfromm 







[GitHub] [tvm] tqchen commented on issue #7200: [Bug] Error when compiling a simple ONNX model with MatMul operator for opt_level=2

2021-01-05 Thread GitBox


tqchen commented on issue #7200:
URL: https://github.com/apache/tvm/issues/7200#issuecomment-754737437


   I agree that given DynamicToStatic brings no harm, it should be applied more 
generally.







[GitHub] [tvm] trevor-m commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


trevor-m commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754739627


   Hi @mbrookhart, thanks for this performance improvement!
   
   I found that this PR is causing `CUDA: an illegal memory access was encountered` during inference for a TensorFlow SSD object detection model. I can't reproduce it in a standalone unit test, so I think there may be some race condition or code relying on uninitialized memory. I'll let you know if I find out anything more.







[GitHub] [tvm] mbrookhart commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


mbrookhart commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754745252


   Thanks, Trevor. If you can share the model script you're using, I can also 
work to debug today.







[GitHub] [tvm] mbrookhart opened a new pull request #7213: Reorder dynamic to static and simplify inference, lower DynamicToStatic Opt Level

2021-01-05 Thread GitBox


mbrookhart opened a new pull request #7213:
URL: https://github.com/apache/tvm/pull/7213


   Add a dropout unit test.
   
   Fixes #7201 and #7200 
   
   cc @tqchen @merrymercy @luyaor
   
   Thanks for reporting the issues, @luyaor







[GitHub] [tvm] areusch commented on pull request #7165: [µTVM] Raise a better error when project_dir does not exist

2021-01-05 Thread GitBox


areusch commented on pull request #7165:
URL: https://github.com/apache/tvm/pull/7165#issuecomment-754770183


   @tqchen 







[GitHub] [tvm] mbrookhart commented on pull request #7210: [VM] Per-input, data dependence specification for shape func

2021-01-05 Thread GitBox


mbrookhart commented on pull request #7210:
URL: https://github.com/apache/tvm/pull/7210#issuecomment-754802181


   I like this, I think it makes a lot of sense, but I'll defer mainly to @zhiics and @icemelon9 since they implemented most of the infrastructure for heterogeneous shape functions.







[GitHub] [tvm] comaniac opened a new pull request #7214: [ConvertLayout] Support transpose

2021-01-05 Thread GitBox


comaniac opened a new pull request #7214:
URL: https://github.com/apache/tvm/pull/7214


   Support layout inference for `transpose`.
   
   cc @anijain2305 
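
   For illustration, a sketch of how `ConvertLayout` is typically driven (the op/layout choices and the toy model below are examples, not taken from this PR); with this change, `transpose` ops encountered during the pass get their layout inferred as well:

```python
import tvm
from tvm import relay

# A toy module containing a conv2d followed by a transpose.
data = relay.var("data", shape=(1, 3, 32, 32))
weight = relay.var("weight", shape=(8, 3, 3, 3))
out = relay.transpose(relay.nn.conv2d(data, weight, kernel_size=(3, 3)), axes=(0, 2, 3, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))

desired_layouts = {"nn.conv2d": ["NHWC", "default"]}  # example target layouts
seq = tvm.transform.Sequential([relay.transform.ConvertLayout(desired_layouts)])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```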







[GitHub] [tvm] altanh commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-05 Thread GitBox


altanh commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r552146139



##
File path: python/tvm/relay/op/random/kernel.py
##
@@ -0,0 +1,134 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Splittable and parallelizable PRNG kernels."""
+# pylint: disable=invalid-name,unused-argument
+from __future__ import absolute_import
+
+import sys
+import numpy as np
+
+from ...expr import Constant
+from .... import nd
+from . import _make
+
+
+def threefry_key(seed):
+"""Create a new Threefry random number generator key.
+
+Example
+---
+
+.. code-block:: python
+
+gen = threefry_key(0)
+_, random_number = threefry_generate(gen, (4,))
+
+Parameters
+--
+seed : int
+Starting seed for the key
+
+Returns
+---
+key : relay.Expr
+New key to pass to future uses of :py:func:`threefry_split` or
+:py:func:`threefry_generate`.
+"""
+s = np.frombuffer(seed.to_bytes(32, sys.byteorder), dtype="uint64")
+a = np.concatenate((s, np.array([0, 0, 0, 0, 1 << 63, 0], dtype="uint64")))
+return Constant(nd.array(a))
+
+
+def threefry_generate(key, shape):
+"""Generate an array of random bits (`uint64`) using the Threefry algorithm
+
+Example
+---
+
+.. code-block:: python
+
+key = threefry_key(0)
+new_key, random1 = threefry_generate(key, (4,))
+_, random2 = threefry_generate(new_key, (4,))
+# random1 and random2 are different random numbers
+
+Parameters
+--
+key : relay.Expr
+key that uniquely determines the random values. Multiple uses with the
+same key will generate the same random values. This key should be
+treated as an opaque pointer. You can create one from calling
+:py:func:`threefry_key`, :py:func:`threefry_split`, or
+:py:func:`threefry_generate`. **Do not use this key again after calling
+this function.**
+
+shape : Sequence[int]
+Desired outputs shape of random numbers. **Currently the total
+number of elements must be a multiple of 4.**
+
+Returns
+---
+new_key : relay.Expr
+New key to pass to future uses of :py:func:`threefry_split` or
+:py:func:`threefry_generate`.
+
+random_array : relay.Expr
+Array of random numbers. Has shape `shape`.
+"""
+return _make.threefry_generate(key, shape)
+
+
+def threefry_split(key):
+"""Split an existing Threefry key into two new ones.
+
+This is useful if you have two subsequent calls which each need their own
+independent random number generation.
+
+Example
+---
+
+.. code-block:: python
+
+def foo(key):
+new_key, num = threefry_generate(key, (1,))

Review comment:
   fix this example to use multiple of 4 for now, until we support 
non-multiples
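
   A corrected version of that docstring example might look like the following sketch; the import path mirrors the file path above and is an assumption about the final public API:

```python
from tvm.relay.op.random.kernel import threefry_generate, threefry_key

def foo(key):
    # generate a multiple-of-4 number of elements, as the current kernel requires
    new_key, num = threefry_generate(key, (4,))
    return new_key, num

key = threefry_key(0)
new_key, num = foo(key)
```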

##
File path: python/tvm/topi/random/kernel.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pseudorandom number kernels."""
+import tvm
+import tvm.topi
+from ... import tir
+from ...tir import ir_builder
+
+
+# Threefry PRNG with splitting based on
+# - J. K. Salmon, M. A. Moraes, R. O. Dror and D. E. Shaw, "Parallel random 
numbers: As easy as 1,
+#   2, 3," SC '11: Proceedings of 2011 International Conference for High 
Performance Computing,
+#   Networking, Storage and Analysis, Seattle, WA, 2011, 

[GitHub] [tvm] jwfromm opened a new pull request #7215: [Relay][Frontend][ONNX] Allow condition in if op to be an array.

2021-01-05 Thread GitBox


jwfromm opened a new pull request #7215:
URL: https://github.com/apache/tvm/pull/7215


   This PR is a very small addition to our if converter that unpacks an input 
condition for an If operator into a scalar if it's passed as an array, which 
the onnx spec allows.
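
   A sketch of the idea (the helper name is hypothetical, not the code in this PR): ONNX allows the `If` condition to be a one-element tensor, so the converter can reduce it to a Python scalar before building the Relay `if` expression.

```python
import numpy as np

def _as_scalar_condition(cond):
    """Hypothetical helper: accept a scalar or a 1-element array as the If condition."""
    cond = np.asarray(cond)
    assert cond.size == 1, "ONNX If expects a single boolean condition value"
    return bool(cond.reshape(-1)[0])

assert _as_scalar_condition(np.array([True])) is True
assert _as_scalar_condition(False) is False
```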
   







[GitHub] [tvm] merrymercy commented on a change in pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


merrymercy commented on a change in pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#discussion_r551763767



##
File path: python/tvm/auto_scheduler/cost_model/xgb_model.py
##
@@ -116,12 +117,17 @@ def __init__(self, verbose_eval=25, num_warmup_sample=100, seed=None):
         self.plan_size = 32
         self.num_warmup_sample = num_warmup_sample
         self.verbose_eval = verbose_eval
+        self.model_file = model_file
+        if model_file:
+            logger.info("XGBModel: Load pretrained model from %s...", model_file)
+            self.load(model_file)

Review comment:
   Remove this. No PyTorch/sklearn model has an API or behavior like this.
   Calling `model.load(model_file)` explicitly is cleaner than adding 
`model_file` to the constructor.
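
   A minimal sketch of the pattern being suggested, reusing the names from the diff above (a fragment that assumes the surrounding make_search_policies context): keep the constructor free of I/O and call `load` explicitly.

```python
cost_model = XGBModel(num_warmup_sample=len(tasks) * num_measures_per_round)
if load_model_file:
    logger.info("TaskScheduler: Load pretrained model...")
    cost_model.load(load_model_file)  # explicit restore, no I/O hidden in __init__
```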









[GitHub] [tvm] masahi commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


masahi commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754891319


   I can reproduce the issue by running the ssd test in tensorflow/test_forward.py with the cuda target:
   
   ```
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [05:42:13] 
/home/masa/projects/dev/tvm/src/runtime/cuda/cuda_device_api.cc:126: 
   ---
   An internal invariant was violated during the execution of TVM.
   Please read TVM's error reporting guidelines.
   More details can be found here: 
https://discuss.tvm.ai/t/error-reporting/7793.
   ---
 Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading == false: 
CUDA: an illegal memory access was encountered
   Stack trace:
 [bt] (0) /home/masa/projects/dev/tvm/build/libtvm.so(+0x14aa8e8) 
[0x7f4fcb8ca8e8]
 [bt] (1) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::CUDADeviceAPI::FreeDataSpace(DLContext,
 void*)+0xe4) [0x7f4fcb8cabe4]
 [bt] (2) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::NDArray::Internal::DefaultDeleter(tvm::runtime::Object*)+0x5b)
 [0x7f4fcb8593fb]
 [bt] (3) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::NDArray::CopyTo(DLContext
 const&) const+0x325) [0x7f4fcb5e4915]
 [bt] (4) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::CopyTo(tvm::runtime::ObjectRef,
 DLContext const&)+0x311) [0x7f4fcb884b11]
 [bt] (5) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::VirtualMachine::RunLoop()+0x2aee)
 [0x7f4fcb880dde]
 [bt] (6) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::VirtualMachine::Invoke(tvm::runtime::vm::VMFunction
 const&, std::vector > const&)+0x27) [0x7f4fcb881c17]
 [bt] (7) /home/masa/projects/dev/tvm/build/libtvm.so(+0x14621f0) 
[0x7f4fcb8821f0]
 [bt] (8) /home/masa/projects/dev/tvm/build/libtvm.so(TVMFuncCall+0x63) 
[0x7f4fcb835613]
   ```
   
   @trevor-m Are you sure this is caused by the `get_valid_counts` change? I've also changed NMS in https://github.com/apache/tvm/pull/7172; I hope that change is fine.







[GitHub] [tvm] masahi edited a comment on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


masahi edited a comment on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754891319


   I can reproduce the issue by running the ssd test in tensorflow/test_forward.py with the cuda target (I looked at this test yesterday for my PR, so I have fresh memory of it):
   
   ```
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [05:42:13] 
/home/masa/projects/dev/tvm/src/runtime/cuda/cuda_device_api.cc:126: 
   ---
   An internal invariant was violated during the execution of TVM.
   Please read TVM's error reporting guidelines.
   More details can be found here: 
https://discuss.tvm.ai/t/error-reporting/7793.
   ---
 Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading == false: 
CUDA: an illegal memory access was encountered
   Stack trace:
 [bt] (0) /home/masa/projects/dev/tvm/build/libtvm.so(+0x14aa8e8) 
[0x7f4fcb8ca8e8]
 [bt] (1) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::CUDADeviceAPI::FreeDataSpace(DLContext,
 void*)+0xe4) [0x7f4fcb8cabe4]
 [bt] (2) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::NDArray::Internal::DefaultDeleter(tvm::runtime::Object*)+0x5b)
 [0x7f4fcb8593fb]
 [bt] (3) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::NDArray::CopyTo(DLContext
 const&) const+0x325) [0x7f4fcb5e4915]
 [bt] (4) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::CopyTo(tvm::runtime::ObjectRef,
 DLContext const&)+0x311) [0x7f4fcb884b11]
 [bt] (5) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::VirtualMachine::RunLoop()+0x2aee)
 [0x7f4fcb880dde]
 [bt] (6) 
/home/masa/projects/dev/tvm/build/libtvm.so(tvm::runtime::vm::VirtualMachine::Invoke(tvm::runtime::vm::VMFunction
 const&, std::vector > const&)+0x27) [0x7f4fcb881c17]
 [bt] (7) /home/masa/projects/dev/tvm/build/libtvm.so(+0x14621f0) 
[0x7f4fcb8821f0]
 [bt] (8) /home/masa/projects/dev/tvm/build/libtvm.so(TVMFuncCall+0x63) 
[0x7f4fcb835613]
   ```
   
   @trevor-m Are you sure this is caused by the `get_valid_counts` change? I've also changed NMS in https://github.com/apache/tvm/pull/7172; I hope that change is fine.







[GitHub] [tvm] icemelon9 merged pull request #7120: [PatternLang] Add Syntactic Sugar to the C++ pattern API and support DataType Attribute Matching

2021-01-05 Thread GitBox


icemelon9 merged pull request #7120:
URL: https://github.com/apache/tvm/pull/7120


   







[GitHub] [tvm] icemelon9 commented on pull request #7120: [PatternLang] Add Syntactic Sugar to the C++ pattern API and support DataType Attribute Matching

2021-01-05 Thread GitBox


icemelon9 commented on pull request #7120:
URL: https://github.com/apache/tvm/pull/7120#issuecomment-754903121


   Thanks @mbrookhart 







[tvm] branch main updated (1812060 -> d3bb762)

2021-01-05 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 1812060  Fix prelu bug in onnx frontend. (#7208)
 add d3bb762  [PatternLang] Add Syntactic Sugar to the C++ pattern API and support DataType Attribute Matching (#7120)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/dataflow_pattern.h  |  56 ++--
 python/tvm/relay/dataflow_pattern/__init__.py |   4 +-
 src/relay/ir/dataflow_matcher.cc  |  14 +-
 src/relay/ir/dataflow_pattern.cc  |  67 +++--
 src/relay/transforms/simplify_expr.cc |   9 +-
 tests/cpp/dataflow_pattern_test.cc| 200 ++
 tests/python/relay/test_dataflow_pattern.py   |   6 +
 7 files changed, 319 insertions(+), 37 deletions(-)
 create mode 100644 tests/cpp/dataflow_pattern_test.cc



[GitHub] [tvm] tqchen commented on a change in pull request #7165: [µTVM] Raise a better error when project_dir does not exist

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7165:
URL: https://github.com/apache/tvm/pull/7165#discussion_r552199320



##
File path: python/tvm/micro/contrib/zephyr.py
##
@@ -58,6 +58,10 @@ def run(self, cmd, **kw):
 return subprocess.check_output(cmd, env=env, **kw)
 
 
+class ProjectNotFoundError(Exception):

Review comment:
   This can be handled later. Consider adding the exception to the hierarchy in tvm.error.









[GitHub] [tvm] tqchen merged pull request #7165: [µTVM] Raise a better error when project_dir does not exist

2021-01-05 Thread GitBox


tqchen merged pull request #7165:
URL: https://github.com/apache/tvm/pull/7165


   







[tvm] branch main updated: [µTVM] Raise a better error when project_dir does not exist (#7165)

2021-01-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8b44741  [µTVM] Raise a better error when project_dir does not exist 
(#7165)
8b44741 is described below

commit 8b447411b1948bea3785059ffae4daa890b5a971
Author: Andrew Reusch 
AuthorDate: Tue Jan 5 13:16:31 2021 -0800

[µTVM] Raise a better error when project_dir does not exist (#7165)
---
 python/tvm/micro/contrib/zephyr.py | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/python/tvm/micro/contrib/zephyr.py 
b/python/tvm/micro/contrib/zephyr.py
index 6625498..61aec2b 100644
--- a/python/tvm/micro/contrib/zephyr.py
+++ b/python/tvm/micro/contrib/zephyr.py
@@ -58,6 +58,10 @@ class SubprocessEnv(object):
         return subprocess.check_output(cmd, env=env, **kw)
 
 
+class ProjectNotFoundError(Exception):
+    """Raised when the project_dir supplied to ZephyrCompiler does not exist."""
+
+
 class FlashRunnerNotSupported(Exception):
     """Raised when the FLASH_RUNNER for a project isn't supported by this Zephyr adapter."""
 
@@ -95,6 +99,13 @@ class ZephyrCompiler(tvm.micro.Compiler):
             If given, additional environment variables present when invoking west, cmake, or make.
         """
         self._project_dir = project_dir
+        if not os.path.exists(project_dir):
+            # Raise this error instead of a potentially-more-cryptic compiler error due to a missing
+            # prj.conf.
+            raise ProjectNotFoundError(
+                f"project_dir supplied to ZephyrCompiler does not exist: {project_dir}"
+            )
+
         self._board = board
         if west_cmd is None:
             self._west_cmd = [sys.executable, "-mwest.app.main"]



[GitHub] [tvm] tqchen opened a new pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var hint.

2021-01-05 Thread GitBox


tqchen opened a new pull request #7216:
URL: https://github.com/apache/tvm/pull/7216


   This is a refactoring step to clean up a legacy issue of opaque buffer vars without type information. Now every allocation comes with the right pointer data type. Places touched:
   
   - TVMScript Parser: add the right info to get the correct pointer type.
   - Cross thread all reduce: set the right pointer type.
   - Storage rewrite: setup the right pointer type.
   - Custom dtype: remap the variables with new pointer type.
   
   







[GitHub] [tvm] tqchen edited a comment on pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var hint.

2021-01-05 Thread GitBox


tqchen edited a comment on pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#issuecomment-754921537


   cc @spectrometerHBH @junrushao1994 @Hzfengsy @tkonolige @ZihengJiang 
@hypercubestart 







[GitHub] [tvm] areusch commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552234705



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~
+
+Before working with microTVM, we recommend you have a supported development 
board. Then, follow these
+tutorials to get started with microTVM:
+
+1. :doc:`Start the microTVM Reference VM 
`. The microTVM tutorials

Review comment:
   I tried this but it didn't seem to work--unfortunately now it's been a 
few weeks and I can't recall why. I think because I was linking to a tutorial?









[GitHub] [tvm] masahi edited a comment on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


masahi edited a comment on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754912340


   hmm strange, after running the ssd test on GPU a few times, I cannot 
reproduce the error anymore. Could this error be random? 
   
   One annoying thing about this model is that compilation time is extremely 
slow. It also requires increasing the stack size limit, otherwise it segfaults. 
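
   For what it's worth, a sketch of one way to get around the stack limit from Python, assuming the segfault comes from deep recursion during compilation on the default thread stack (the helper is illustrative, not an existing TVM API):

```python
import sys
import threading

def run_with_big_stack(fn, stack_mb=64):
    """Run fn() in a worker thread that has a larger stack."""
    sys.setrecursionlimit(10000)
    threading.stack_size(stack_mb * 1024 * 1024)  # applies to threads created afterwards
    result = {}
    worker = threading.Thread(target=lambda: result.setdefault("out", fn()))
    worker.start()
    worker.join()
    return result.get("out")

# e.g. lib = run_with_big_stack(lambda: relay.build(mod, target="cuda", params=params))
```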







[GitHub] [tvm] areusch commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552234486



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~
+
+Before working with microTVM, we recommend you have a supported development 
board. Then, follow these
+tutorials to get started with microTVM:
+
+1. :doc:`Start the microTVM Reference VM 
`. The microTVM tutorials
+   depend on Zephyr and on a compiler toolchain for your hardware. The 
reference VM is a convenient
+   way to install those dependencies.
+2. Try the :doc:`microTVM with TFLite Tutorial 
`.
+3. Try running a more complex `CIFAR10-CNN model 
`_.
+
+
+How microTVM Works
+

Review comment:
   done

##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~

[GitHub] [tvm] tqchen commented on pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var hint.

2021-01-05 Thread GitBox


tqchen commented on pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#issuecomment-754921537


   cc @spectrometerHBH @junrushao1994 @Hzfengsy @tkonolige 







[GitHub] [tvm] areusch commented on pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#issuecomment-754938818


   @tqchen ready for another look







[GitHub] [tvm] masahi commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


masahi commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754912340


   hmm strange, after running the ssd test on GPU a few times, I cannot 
reproduce the error anymore. Could this error be random? 
   
   One annoying thing about this model is that compilation time is extremely 
slow and it requires increasing the stack size limit. 







[GitHub] [tvm] tkonolige commented on a change in pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var pointer hint.

2021-01-05 Thread GitBox


tkonolige commented on a change in pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#discussion_r552228734



##
File path: python/tvm/tir/buffer.py
##
@@ -247,7 +247,9 @@ def decl_buffer(
         shape_dtype = shape[0].dtype if hasattr(shape[0], "dtype") else "int32"
         elem_offset = Var("%s_elem_offset" % name, shape_dtype)
     if data is None:
-        data = Var(name, PointerType(PrimType(dtype)), span)
+        # store bool as i8
+        storage_dtype = "int8" if dtype == "bool" else dtype

Review comment:
   Why is there a special case for bool here?
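
   For illustration, a small sketch of what the special case appears to do under this patch (an assumption based on the diff, not verified output): the buffer keeps its logical `bool` dtype while the backing pointer is declared with `int8` storage.

```python
import tvm
from tvm import tir

buf = tir.decl_buffer((16,), dtype="bool", name="mask")
print(buf.dtype)                 # "bool": the element type seen by schedules
print(buf.data.type_annotation)  # under this patch, expected to point at int8 storage
```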

##
File path: src/target/source/codegen_cuda.cc
##
@@ -581,7 +581,10 @@ void CodeGenCUDA::VisitStmt_(const AllocateNode* op) {
   int32_t constant_size = op->constant_allocation_size();
   ICHECK_GT(constant_size, 0) << "Can only handle constant size stack allocation for now";
   const VarNode* buffer = op->buffer_var.as<VarNode>();
-  std::string scope = alloc_storage_scope_.at(buffer);
+  auto it = alloc_storage_scope_.find(buffer);
+  ICHECK(it != alloc_storage_scope_.end())

Review comment:
   I think we could be a little more specific here. We need an `AttrStmt` 
with a key of `storage_scope` right? So maybe say `"Buffer " << op->buffer_var 
<< " is missing an AttrStmt with a \"storage_scope\" key"`.

##
File path: src/tir/ir/stmt.cc
##
@@ -274,9 +274,10 @@ TVM_STATIC_IR_FUNCTOR(ReprPrinter, vtable)
 // Allocate
 Allocate::Allocate(Var buffer_var, DataType dtype, Array<PrimExpr> extents, PrimExpr condition,
                    Stmt body, Span span) {
-  // TODO(tvm-team): Add invariant check to make sure
-  // IsPointerPType(buffer_var->type_annotation, dtype)
-  // once we fix the allocate tvm script printing.
+  ICHECK(IsPointerType(buffer_var->type_annotation, dtype))
+      << "Allocate: buffer_var expect to have the right pointer type annotation"
+      << " annotation=" << buffer_var->type_annotation << ", dtype=" << dtype;

Review comment:
   This should probably be a `CHECK` too if people are writing tvmscript.

##
File path: src/tir/ir/stmt.cc
##
@@ -274,9 +274,10 @@ TVM_STATIC_IR_FUNCTOR(ReprPrinter, vtable)
 // Allocate
 Allocate::Allocate(Var buffer_var, DataType dtype, Array<PrimExpr> extents, PrimExpr condition,
                    Stmt body, Span span) {
-  // TODO(tvm-team): Add invariant check to make sure
-  // IsPointerPType(buffer_var->type_annotation, dtype)
-  // once we fix the allocate tvm script printing.
+  ICHECK(IsPointerType(buffer_var->type_annotation, dtype))
+      << "Allocate: buffer_var expect to have the right pointer type annotation"
+      << " annotation=" << buffer_var->type_annotation << ", dtype=" << dtype;

Review comment:
   This might be a better error message:
   ```suggestion
 << "The allocated data type (" << dtype << ") does not match the type 
annotation of the buffer " << buffer_var << " (" << buffer_var->type_annotation 
<< "). The data type should be an element of the pointer type."
   ```
   
   It does seem like we could infer the type from the buffer_var though...









[GitHub] [tvm] mbrookhart commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


mbrookhart commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754952503


   Yeah, ouch:
   `447.33s (0:07:27)`
   
   I didn't need to increase the stack limit, and I haven't gotten this test to fail yet.







[GitHub] [tvm] jwfromm commented on pull request #7215: [Relay][Frontend][ONNX] Allow condition in if op to be an array.

2021-01-05 Thread GitBox


jwfromm commented on pull request #7215:
URL: https://github.com/apache/tvm/pull/7215#issuecomment-754863440


   @mbrookhart @tmoreau89 can you take a look at this PR?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var pointer hint.

2021-01-05 Thread GitBox


junrushao1994 commented on a change in pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#discussion_r552250872



##
File path: python/tvm/tir/buffer.py
##
@@ -247,7 +247,9 @@ def decl_buffer(
 shape_dtype = shape[0].dtype if hasattr(shape[0], "dtype") else "int32"
 elem_offset = Var("%s_elem_offset" % name, shape_dtype)
 if data is None:
-data = Var(name, PointerType(PrimType(dtype)), span)
+# store bool as i8
+storage_dtype = "int8" if dtype == "bool" else dtype

Review comment:
   I think bool is usually treated as "uint1" though...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


trevor-m commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754954515


   > hmm strange, after running the ssd test on GPU a few times, I cannot 
reproduce the error anymore. Could this error be random?
   > 
   > One annoying thing about this model is that compilation time is extremely 
slow. It also requires increasing the stack size limit, otherwise it segfaults.
   
   Yeah the error is a bit random. However, I was able to reproduce it 100% of 
the time with TRT offload enabled. I can share a script shortly.
   
   > @trevor-m Are you sure this is caused by get_valid_counts change? I've 
also changed NMS in #7172, I hope that change is fine.
   
   Yeah, I did a git bisect to determine this PR was the source of the issue, 
and #7172 was fine.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] anijain2305 commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


anijain2305 commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754954590


   > hmm strange, after running the ssd test on GPU a few times, I cannot 
reproduce the error anymore. Could this error be random?
   > 
   > One annoying thing about this model is that compilation time is extremely 
slow. It also requires increasing the stack size limit, otherwise it segfaults.
   
   Maybe it depends on the input data. Trevor and I ran it across a bunch of 
models, and it fails for a few of them (not all). I believe it can be caused 
by the input data (the number of boxes etc. changes with the input image).
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var pointer hint.

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#discussion_r552251047



##
File path: python/tvm/tir/buffer.py
##
@@ -247,7 +247,9 @@ def decl_buffer(
 shape_dtype = shape[0].dtype if hasattr(shape[0], "dtype") else "int32"
 elem_offset = Var("%s_elem_offset" % name, shape_dtype)
 if data is None:
-data = Var(name, PointerType(PrimType(dtype)), span)
+# store bool as i8
+storage_dtype = "int8" if dtype == "bool" else dtype

Review comment:
   In the current convention, bool is stored as int8, so the pointer type for a 
bool buffer is `i8*`, while bool itself is represented as `i1` in the IR.
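   
   A small sketch of that convention (mirroring the patch above; treat it as an illustration rather than the exact `decl_buffer` code):
   
   ```python
   # Sketch of the bool-storage convention: values are "uint1" (i1) in the IR,
   # but the backing buffer, and so the pointer annotation, uses int8.
   from tvm.ir import PointerType, PrimType
   from tvm.tir import Var

   def make_buffer_var(name, dtype, span=None):
       storage_dtype = "int8" if dtype == "bool" else dtype  # store bool as i8
       return Var(name, PointerType(PrimType(storage_dtype)), span)

   data = make_buffer_var("A", "bool")  # pointer is int8*, loads/stores use uint1
   ```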





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (8b44741 -> 197594b)

2021-01-05 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 8b44741  [µTVM] Raise a better error when project_dir does not exist 
(#7165)
 add 197594b  Allow condition in if op to be an array. (#7215)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  |  3 +++
 tests/python/frontend/onnx/test_forward.py | 15 ---
 2 files changed, 15 insertions(+), 3 deletions(-)



[GitHub] [tvm] tmoreau89 merged pull request #7215: [Relay][Frontend][ONNX] Allow condition in if op to be an array.

2021-01-05 Thread GitBox


tmoreau89 merged pull request #7215:
URL: https://github.com/apache/tvm/pull/7215


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tmoreau89 commented on pull request #7215: [Relay][Frontend][ONNX] Allow condition in if op to be an array.

2021-01-05 Thread GitBox


tmoreau89 commented on pull request #7215:
URL: https://github.com/apache/tvm/pull/7215#issuecomment-754964148


   Thanks everyone, the PR has been merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


mbrookhart commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754965956


   @trevor-m I'm in mountain time, so I'll need to leave in about half an hour. 
If you can post the script that consistently fails tonight, I'll jump in first 
thing tomorrow morning and start hunting for which line causes the issue.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7123: Parallelize cumsum in get_valid_counts

2021-01-05 Thread GitBox


masahi commented on pull request #7123:
URL: https://github.com/apache/tvm/pull/7123#issuecomment-754966258


   @anijain2305 @trevor-m We should definitely use a fixed, real image for CI 
testing, like the PyTorch MaskRCNN test does. Please send a PR:
   
   
https://github.com/apache/tvm/blob/4c13ae9d17d1709ed7a777ce1bb72212e8d2559d/tests/python/frontend/pytorch/test_object_detection.py#L90-L95
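   
   A rough sketch of what such a test helper could look like, assuming the `tvm.contrib.download.download_testdata` utility; the URL and file name below are placeholders, not the actual test data:
   
   ```python
   # Hedged sketch of loading a fixed, real image for the SSD/NMS tests.
   # The URL and file name are placeholders; only download_testdata is a real helper.
   import numpy as np
   from PIL import Image
   from tvm.contrib.download import download_testdata

   def get_fixed_test_image(size=(512, 512)):
       img_url = "https://example.com/test_street_small.jpg"  # placeholder
       img_path = download_testdata(img_url, "test_street_small.jpg", module="data")
       img = Image.open(img_path).resize(size)
       return np.expand_dims(np.asarray(img).astype("float32"), 0)
   ```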



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #7209: [Frontend][MXNet] add _npi_stack, issue #7186

2021-01-05 Thread GitBox


junrushao1994 commented on pull request #7209:
URL: https://github.com/apache/tvm/pull/7209#issuecomment-754969011


   Thanks for the contribution! CC @sxjscience for a second look



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen edited a comment on pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var hint.

2021-01-05 Thread GitBox


tqchen edited a comment on pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#issuecomment-754921537


   cc @spectrometerHBH @junrushao1994 @Hzfengsy @tkonolige @ZihengJiang 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on a change in pull request #7083: [RELAY,TOPI] Threefry PRNG: splittable and stateless

2021-01-05 Thread GitBox


tkonolige commented on a change in pull request #7083:
URL: https://github.com/apache/tvm/pull/7083#discussion_r552271978



##
File path: python/tvm/topi/random/kernel.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Pseudorandom number kernels."""
+import tvm
+import tvm.topi
+from ... import tir
+from ...tir import ir_builder
+
+
+# Threefry PRNG with splitting based on
+# - J. K. Salmon, M. A. Moraes, R. O. Dror and D. E. Shaw, "Parallel random 
numbers: As easy as 1,
+#   2, 3," SC '11: Proceedings of 2011 International Conference for High 
Performance Computing,
+#   Networking, Storage and Analysis, Seattle, WA, 2011, pp. 1-12, doi: 
10.1145/2063384.2063405.
+# - Claessen, K. ; Palka, M. (2013) "Splittable Pseudorandom Number Generators 
using Cryptographic
+#   Hashing". Proceedings of Haskell Symposium 2013 pp. 47-58.  MLA
+# - Ferguson, Niels, et al. "The Skein hash function family." Submission to 
NIST (round 3) 7.7.5
+#   (2010): 3.
+
+
+# Threefry is a counter based PRNG: given a unique input, it generates a 
unique random number. As
+# there is no state to maintain, we can apply it to a sequence of numbers 
(0..N) to generate a
+# sequence of random numbers in parallel. In order to make the PRNG splittable 
(that is we can
+# generate a sequence of random numbers in one place, and another sequence in 
another), we add a
+# path and key in addition to the counter. The path allows us to encode a 
sequence of splits (a 0 in
+# the path indicates the left result of a split, a 1 indicates the right). To 
avoid continuously
+# growing the path, we can compress an existing path into the key portion of 
the generator by
+# hashing the current key, path, and counter to create the new key (this same 
technique is used if
+# we run out of room for the counter).
+
+# This module use encoding e4 from the appendix of "Splittable Pseudorandom 
Number Generators using
+# Cryptographic Hashing" (confusingly, the definition in the paper uses e3 to 
define the encoding
+# function). This encoding uses a 10 element uint64 tensor where each byte 
means the following:
+
+# .. code-block:
+
+# gen:
+# words: 0 1 2 3 | 4 5  | 6 7 | 8 9
+# usage: key | path | counter | position of next step in path encoded 
in binary
+#   ex: 0b00010 -> next path entry goes 
one from the right
+
+# Right now, counter only uses the rightmost word.
+
+# Threefry rotation constants from the Skein paper ("The Skein Hash Function 
Family"
+# https://www.schneier.com/wp-content/uploads/2015/01/skein.pdf)
+_ROTATIONS = {
+4: [[14, 16], [52, 57], [23, 40], [5, 37], [25, 33], [46, 12], [58, 22], 
[32, 32]],
+8: [
+[46, 36, 19, 37],
+[33, 27, 14, 42],
+[17, 49, 36, 39],
+[44, 9, 54, 56],
+[39, 30, 34, 24],
+[13, 50, 10, 17],
+[25, 29, 39, 43],
+[8, 35, 56, 22],
+],
+16: [
+[24, 13, 8, 47, 8, 17, 22, 37],
+[38, 19, 10, 55, 49, 18, 23, 52],
+[33, 4, 51, 13, 34, 41, 59, 17],
+[5, 20, 48, 41, 47, 28, 16, 25],
+[41, 9, 37, 31, 12, 47, 44, 30],
+[16, 34, 56, 51, 4, 53, 42, 41],
+[31, 44, 47, 46, 19, 42, 44, 25],
+[9, 48, 35, 52, 23, 31, 37, 20],
+],
+}
+
+# Threefry permutation constants from the Skein paper ("The Skein Hash 
Function Family"
+# https://www.schneier.com/wp-content/uploads/2015/01/skein.pdf)
+_PERMUTATIONS = {
+4: [0, 3, 2, 1],
+8: [2, 1, 4, 7, 6, 5, 0, 3],
+16: [0, 9, 2, 13, 6, 11, 4, 15, 10, 7, 12, 3, 14, 5, 8, 1],
+}
+
+
+def _threefry(
+irb, key_buf, key_offset, counter_buf, counter_offset, out_buf, 
out_offset, out_shape
+):
+"""IRBuilder code for running Threefry
+
+Parameters
+--
+irb: IRBuilder
+IRBuilder that this code will be generated for.
+
+key_buf: BufferVar
+Buffer to read the key from.
+
+key_offset: number
+Threefry will write to :code:`key_buf[key_offset:key_offset+4]`
+
+counter_buf: BufferVar
+Buffer to read the counter from.
+
+counter_offset: number
+Threefry will write to 
:code:`counter_buf[counter_offset:counter_offset+4]`
+
+ou
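
(Aside, for readers skimming the truncated patch above: the generator layout described in the quoted comments can be pictured as a 10-word uint64 vector. The slicing below is illustrative only and is not part of the patch.)

```python
# Illustrative only: the e4-style generator layout described in the comments
# of kernel.py, shown as numpy slices over a 10-word uint64 vector.
import numpy as np

gen = np.zeros(10, dtype=np.uint64)
key = gen[0:4]       # words 0-3: key
path = gen[4:6]      # words 4-5: path (encodes the sequence of splits)
counter = gen[6:8]   # words 6-7: counter (currently only the rightmost word is used)
position = gen[8:10] # words 8-9: position of the next path entry, in binary
```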

[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552274427



##
File path: tutorials/micro/micro_tflite.py
##
@@ -15,83 +15,110 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Micro TVM with TFLite Models
+microTVM with TFLite Models
 

Review comment:
   length matching





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552274489



##
File path: tutorials/micro/README.txt
##
@@ -1,4 +1,4 @@
 .. _tutorial-micro:
 
-Micro TVM 
+microTVM
 -

Review comment:
   length matching

##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+

Review comment:
   length matching





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552275015



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~

Review comment:
   length matching
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552275237



##
File path: tutorials/micro/micro_tflite.py
##
@@ -15,83 +15,110 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Micro TVM with TFLite Models
+microTVM with TFLite Models
 

Review comment:
   length matching
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552275670



##
File path: docs/dev/index.rst
##
@@ -396,3 +396,11 @@ Security
:maxdepth: 1
 
security
+
+
+microTVM
+-

Review comment:
   length matching
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552277964



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~
+
+Before working with microTVM, we recommend you have a supported development 
board. Then, follow these
+tutorials to get started with microTVM:
+
+1. :doc:`Start the microTVM Reference VM 
`. The microTVM tutorials

Review comment:
   see
   - 
https://github.com/apache/tvm/blob/main/tutorials/get_started/cross_compilation_and_rpc.py#L18
   - https://raw.githubusercontent.com/apache/tvm/main/docs/deploy/index.rst 
(`:ref:`tutorial-cross-compilation-and-rpc`_
   - https://tvm.apache.org/docs/deploy/





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552290652



##
File path: tutorials/micro/micro_tflite.py
##
@@ -15,83 +15,110 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Micro TVM with TFLite Models
+microTVM with TFLite Models
 

Review comment:
   done

##
File path: tutorials/micro/README.txt
##
@@ -1,4 +1,4 @@
 .. _tutorial-micro:
 
-Micro TVM 
+microTVM
 -

Review comment:
   done

##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+

Review comment:
   done

##
File path: tutorials/micro/micro_tflite.py
##
@@ -15,83 +15,110 @@
 # specific language governing permissions and limitations
 # under the License.
 """
-Micro TVM with TFLite Models
+microTVM with TFLite Models
 

Review comment:
   done

##
File path: docs/dev/index.rst
##
@@ -396,3 +396,11 @@ Security
:maxdepth: 1
 
security
+
+
+microTVM
+-

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552291036



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~

Review comment:
   done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r552300343



##
File path: docs/microtvm/index.rst
##
@@ -0,0 +1,72 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+.. _microtvm-index:
+
+microTVM: TVM on bare-metal
+
+
+microTVM runs TVM models on bare-metal (i.e. IoT) devices. microTVM depends 
only on the C standard
+library, and doesn't require an operating system to execute. microTVM is 
currently under heavy
+development.
+
+.. figure:: 
https://raw.githubusercontent.com/tvmai/web-data/main/images/dev/microtvm_workflow.svg
+   :align: center
+   :width: 85%
+
+microTVM is:
+
+* an extension to TVM's compiler to allow it to target microcontrollers
+* a way to run the TVM RPC server on-device, to allow autotuning
+* a minimal C runtime that supports standalone model inference on bare metal 
devices.
+
+Supported Hardware
+~~~
+
+microTVM currently tests against Cortex-M microcontrollers with the Zephyr 
RTOS; however, it is
+flexible and portable to other processors such as RISC-V and does not require 
Zephyr. The current
+demos run against QEMU and the following hardware:
+
+* `STM Nucleo-F746ZG 
`_
+* `nRF 5340 Preview Development Kit 
`_
+
+
+Getting Started with microTVM
+~
+
+Before working with microTVM, we recommend you have a supported development 
board. Then, follow these
+tutorials to get started with microTVM:
+
+1. :doc:`Start the microTVM Reference VM 
`. The microTVM tutorials

Review comment:
   fixed, not sure what I did before but this works fine





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #7164: [µTVM] Add documentation

2021-01-05 Thread GitBox


areusch commented on pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#issuecomment-754996397


   thanks for the review! sorry my editor was a bit messed up so it was hard to 
see the alignment issues. I believe they're all good now



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 merged pull request #7209: [Frontend][MXNet] add _npi_stack, issue #7186

2021-01-05 Thread GitBox


junrushao1994 merged pull request #7209:
URL: https://github.com/apache/tvm/pull/7209


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Frontend][MXNet] add _npi_stack, issue #7186 (#7209)

2021-01-05 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 3f0dc42  [Frontend][MXNet] add _npi_stack, issue #7186 (#7209)
3f0dc42 is described below

commit 3f0dc420ff9b891a79f55181886b536ecc337796
Author: insop 
AuthorDate: Tue Jan 5 17:17:02 2021 -0800

[Frontend][MXNet] add _npi_stack, issue #7186 (#7209)

- https://github.com/apache/tvm/issues/7186
- add MxNet stack, `_npi_stack`
- 
https://mxnet.apache.org/versions/master/api/python/docs/api/np/generated/mxnet.np.stack.html?highlight=stack
---
 python/tvm/relay/frontend/mxnet.py  |  9 +
 tests/python/frontend/mxnet/test_forward.py | 28 
 2 files changed, 37 insertions(+)

diff --git a/python/tvm/relay/frontend/mxnet.py 
b/python/tvm/relay/frontend/mxnet.py
index 1085e90..b272ead 100644
--- a/python/tvm/relay/frontend/mxnet.py
+++ b/python/tvm/relay/frontend/mxnet.py
@@ -2335,6 +2335,14 @@ def _mx_npi_concatenate(inputs, attrs):
 return _op.concatenate(tuple(inputs), axis=int(axis))
 
 
+def _mx_npi_stack(inputs, attrs):
+axis = attrs.get_str("axis", "0")
+if axis == "None":
+return _op.reshape(_op.stack(tuple(inputs), axis=0), (-1,))
+else:
+return _op.stack(tuple(inputs), axis=int(axis))
+
+
 def _mx_npx_reshape(inputs, attrs):
 shape = attrs.get_int_tuple("newshape")
 reverse = attrs.get_bool("reverse", False)
@@ -2700,6 +2708,7 @@ _convert_map = {
 "_npi_less_equal": _mx_compare(_op.less_equal, _rename),
 "_npi_tanh": _rename(_op.tanh),
 "_npi_true_divide_scalar": _binop_scalar(_op.divide),
+"_npi_stack": _mx_npi_stack,
 }
 
 # set identity list
diff --git a/tests/python/frontend/mxnet/test_forward.py 
b/tests/python/frontend/mxnet/test_forward.py
index d3be8c0..537349e 100644
--- a/tests/python/frontend/mxnet/test_forward.py
+++ b/tests/python/frontend/mxnet/test_forward.py
@@ -2012,6 +2012,34 @@ def test_forward_npi_concatenate(data_shape1, 
data_shape2, axis, dtype, target,
 tvm.testing.assert_allclose(op_res.asnumpy(), ref_res.asnumpy(), rtol=1e-5)
 
 
+@pytest.mark.parametrize(
+"data_shape1, data_shape2, axis",
+[
+((3,), (3,), 0),
+((3,), (3,), -1),
+((1, 3, 2), (1, 3, 2), 2),
+((1, 3, 3), (1, 3, 3), 1),
+((1, 3), (1, 3), 0),
+],
+)
+@pytest.mark.parametrize("dtype", ["float64", "float32", "int64", "int32"])
+@tvm.testing.parametrize_targets
+@pytest.mark.parametrize("kind", ["graph", "vm", "debug"])
+def test_forward_npi_stack(data_shape1, data_shape2, axis, dtype, target, ctx, 
kind):
+data_np1 = np.random.uniform(size=data_shape1).astype(dtype)
+data_np2 = np.random.uniform(size=data_shape2).astype(dtype)
+data1 = mx.sym.var("data1")
+data2 = mx.sym.var("data2")
+ref_res = mx.np.stack([mx.np.array(data_np1), mx.np.array(data_np2)], 
axis=axis)
+mx_sym = mx.sym.np.stack([data1.as_np_ndarray(), data2.as_np_ndarray()], 
axis=axis)
+mod, _ = relay.frontend.from_mxnet(
+mx_sym, shape={"data1": data_shape1, "data2": data_shape2}, dtype=dtype
+)
+intrp = relay.create_executor(kind, mod=mod, ctx=ctx, target=target)
+op_res = intrp.evaluate()(data_np1, data_np2)
+tvm.testing.assert_allclose(op_res.asnumpy(), ref_res.asnumpy(), rtol=1e-5)
+
+
 @pytest.mark.parametrize("data_shape", [(2, 2, 2), (2, 7, 2), (2, 2, 2, 1, 2, 
3, 1), (1, 8)])
 @pytest.mark.parametrize("dtype", ["float64", "float32", "int64", "int32", 
"bool"])
 @tvm.testing.parametrize_targets



[GitHub] [tvm] hypercubestart commented on a change in pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var pointer hint.

2021-01-05 Thread GitBox


hypercubestart commented on a change in pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#discussion_r552314233



##
File path: python/tvm/tir/buffer.py
##
@@ -247,7 +247,10 @@ def decl_buffer(
 shape_dtype = shape[0].dtype if hasattr(shape[0], "dtype") else "int32"
 elem_offset = Var("%s_elem_offset" % name, shape_dtype)
 if data is None:
-data = Var(name, PointerType(PrimType(dtype)), span)
+# Bool is represented as uint1 in the IR, but stored as uint8

Review comment:
   is this supposed to be "stored as int8"?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7216: [TIR][REFACTOR] Enforce allocate to use the correct var pointer hint.

2021-01-05 Thread GitBox


tqchen commented on a change in pull request #7216:
URL: https://github.com/apache/tvm/pull/7216#discussion_r552316119



##
File path: python/tvm/tir/buffer.py
##
@@ -247,7 +247,10 @@ def decl_buffer(
 shape_dtype = shape[0].dtype if hasattr(shape[0], "dtype") else "int32"
 elem_offset = Var("%s_elem_offset" % name, shape_dtype)
 if data is None:
-data = Var(name, PointerType(PrimType(dtype)), span)
+# Bool is represented as uint1 in the IR, but stored as uint8

Review comment:
   good catch





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jcf94 commented on a change in pull request #7214: [ConvertLayout] Support transpose

2021-01-05 Thread GitBox


jcf94 commented on a change in pull request #7214:
URL: https://github.com/apache/tvm/pull/7214#discussion_r552322291



##
File path: tests/python/relay/test_pass_convert_op_layout.py
##
@@ -1473,29 +1557,30 @@ def expected():
 
 
 if __name__ == "__main__":
-test_qnn_binary_no_convert_layout()
-test_no_convert_layout()
-test_conv_convert_layout()
-test_conv_nhwc_convert_layout()
-test_conv_bias_pool_convert_layout()
-test_conv_concat_convert_layout()
-test_dual_path_convert_layout()
-test_bn_convert_layout()
-test_slice_like_convert_layout()
-test_resnet_convert_layout()
-test_scalar_convert_layout()
-test_conv_bn_convert_layout()
-test_qnn_conv_requantize_convert_layout()
-test_qnn_conv_concat_convert_layout()
-test_qnn_conv_add_convert_layout()
-test_qnn_conv_nhwc_convert_layout()
-test_conv_convert_kernel_layout()
-test_conv_transpose_convert_layout()
-test_conv_roi_align_convert_layout()
-test_conv_roi_pool_convert_layout()
-test_conv_strided_slice_convert_layout()
-test_deformable_conv_bias_pool_convert_layout()
-test_default_keyword()
-test_different_ops_convert_layout()
-test_no_desired_layout()
-test_convert_with_config()
+# test_qnn_binary_no_convert_layout()
+# test_no_convert_layout()
+# test_conv_convert_layout()
+# test_conv_nhwc_convert_layout()
+# test_conv_bias_pool_convert_layout()
+# test_conv_concat_convert_layout()
+# test_dual_path_convert_layout()
+# test_bn_convert_layout()
+# test_slice_like_convert_layout()
+test_transpose_convert_layout()
+# test_resnet_convert_layout()
+# test_scalar_convert_layout()
+# test_conv_bn_convert_layout()
+# test_qnn_conv_requantize_convert_layout()
+# test_qnn_conv_concat_convert_layout()
+# test_qnn_conv_add_convert_layout()
+# test_qnn_conv_nhwc_convert_layout()
+# test_conv_convert_kernel_layout()
+# test_conv_transpose_convert_layout()
+# test_conv_roi_align_convert_layout()
+# test_conv_roi_pool_convert_layout()
+# test_conv_strided_slice_convert_layout()
+# test_deformable_conv_bias_pool_convert_layout()
+# test_default_keyword()
+# test_different_ops_convert_layout()
+# test_no_desired_layout()
+# test_convert_with_config()

Review comment:
   Did you forget to uncomment these tests?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7214: [ConvertLayout] Support transpose

2021-01-05 Thread GitBox


comaniac commented on a change in pull request #7214:
URL: https://github.com/apache/tvm/pull/7214#discussion_r552322831



##
File path: tests/python/relay/test_pass_convert_op_layout.py
##
@@ -1473,29 +1557,30 @@ def expected():
 
 
 if __name__ == "__main__":
-test_qnn_binary_no_convert_layout()
-test_no_convert_layout()
-test_conv_convert_layout()
-test_conv_nhwc_convert_layout()
-test_conv_bias_pool_convert_layout()
-test_conv_concat_convert_layout()
-test_dual_path_convert_layout()
-test_bn_convert_layout()
-test_slice_like_convert_layout()
-test_resnet_convert_layout()
-test_scalar_convert_layout()
-test_conv_bn_convert_layout()
-test_qnn_conv_requantize_convert_layout()
-test_qnn_conv_concat_convert_layout()
-test_qnn_conv_add_convert_layout()
-test_qnn_conv_nhwc_convert_layout()
-test_conv_convert_kernel_layout()
-test_conv_transpose_convert_layout()
-test_conv_roi_align_convert_layout()
-test_conv_roi_pool_convert_layout()
-test_conv_strided_slice_convert_layout()
-test_deformable_conv_bias_pool_convert_layout()
-test_default_keyword()
-test_different_ops_convert_layout()
-test_no_desired_layout()
-test_convert_with_config()
+# test_qnn_binary_no_convert_layout()
+# test_no_convert_layout()
+# test_conv_convert_layout()
+# test_conv_nhwc_convert_layout()
+# test_conv_bias_pool_convert_layout()
+# test_conv_concat_convert_layout()
+# test_dual_path_convert_layout()
+# test_bn_convert_layout()
+# test_slice_like_convert_layout()
+test_transpose_convert_layout()
+# test_resnet_convert_layout()
+# test_scalar_convert_layout()
+# test_conv_bn_convert_layout()
+# test_qnn_conv_requantize_convert_layout()
+# test_qnn_conv_concat_convert_layout()
+# test_qnn_conv_add_convert_layout()
+# test_qnn_conv_nhwc_convert_layout()
+# test_conv_convert_kernel_layout()
+# test_conv_transpose_convert_layout()
+# test_conv_roi_align_convert_layout()
+# test_conv_roi_pool_convert_layout()
+# test_conv_strided_slice_convert_layout()
+# test_deformable_conv_bias_pool_convert_layout()
+# test_default_keyword()
+# test_different_ops_convert_layout()
+# test_no_desired_layout()
+# test_convert_with_config()

Review comment:
   Ah yeah. Thanks for pointing out.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime (#7197)

2021-01-05 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 040afb0  [Fix][Autoscheduler] Costmodel enhancement & bug fix for 
graph debug runtime (#7197)
040afb0 is described below

commit 040afb0245526e1cc71dc0ada6c3c5787394a5c6
Author: Chenfan 
AuthorDate: Wed Jan 6 10:04:34 2021 +0800

[Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug 
runtime (#7197)

* Enhancement for autoscheduler cost model

* Bug fix for graph_runtime_debug

* Update

* Lint fix

* Update

* Update

* Add file exist check for cost model load

* Update

* Update

* Lint fix

* Update

* Bug fix
---
 python/tvm/auto_scheduler/cost_model/xgb_model.py | 25 ++-
 python/tvm/auto_scheduler/task_scheduler.py   | 13 ++--
 src/auto_scheduler/feature.cc | 18 ++--
 3 files changed, 47 insertions(+), 9 deletions(-)

diff --git a/python/tvm/auto_scheduler/cost_model/xgb_model.py 
b/python/tvm/auto_scheduler/cost_model/xgb_model.py
index eb14dff..f426482 100644
--- a/python/tvm/auto_scheduler/cost_model/xgb_model.py
+++ b/python/tvm/auto_scheduler/cost_model/xgb_model.py
@@ -88,7 +88,14 @@ class XGBModel(PythonBasedModel):
 their predictions.
 """
 
-def __init__(self, verbose_eval=25, num_warmup_sample=100, seed=None):
+def __init__(
+self,
+verbose_eval=25,
+num_warmup_sample=100,
+seed=None,
+model_file=None,
+adapative_training=False,
+):
 global xgb
 try:
 if xgb is None:
@@ -116,12 +123,15 @@ class XGBModel(PythonBasedModel):
 self.plan_size = 32
 self.num_warmup_sample = num_warmup_sample
 self.verbose_eval = verbose_eval
+self.model_file = model_file
+self.adapative_training = adapative_training
 
 super().__init__()
 
 # cache measurement input/result pairs and extracted features
 self.inputs = []
 self.results = []
+self.last_train_length = 0
 self.inputs_feature_cache = []
 
 def update(self, inputs, results):
@@ -141,6 +151,15 @@ class XGBModel(PythonBasedModel):
 self.inputs.extend(inputs)
 self.results.extend(results)
 
+if (
+self.adapative_training
+and len(self.inputs) - self.last_train_length < 
self.last_train_length / 5
+):
+# Set a training threshold related to `last_train_length` to 
reduce the training
+# overhead when there're too many logs
+return
+self.last_train_length = len(self.inputs)
+
 # extract feature
 n_cached = len(self.inputs_feature_cache)
 features, normalized_throughputs, task_ids = 
get_per_store_features_from_measure_pairs(
@@ -176,6 +195,10 @@ class XGBModel(PythonBasedModel):
 ],
 )
 
+# Update the model file if it has been set
+if self.model_file:
+self.save(self.model_file)
+
 def predict(self, task, states):
 """Predict the scores of states
 Parameters
diff --git a/python/tvm/auto_scheduler/task_scheduler.py 
b/python/tvm/auto_scheduler/task_scheduler.py
index ab83ff4..975306f 100644
--- a/python/tvm/auto_scheduler/task_scheduler.py
+++ b/python/tvm/auto_scheduler/task_scheduler.py
@@ -47,6 +47,7 @@ def make_search_policies(
 verbose,
 load_model_file=None,
 load_log_file=None,
+adapative_training=False,
 ):
 """Make a list of search policies for a list of search tasks.
 It creates one policy per task.
@@ -70,6 +71,9 @@ def make_search_policies(
 load_log_file: Optional[str]
 Load measurement records from this file. If it is not None, the status 
of the
 task scheduler, search policies and cost models will be restored 
according to this file.
+adapative_training: bool = False
+Option used for XGBModel, which will reduce the model training 
frequency when there're too
+many logs.
 
 Returns
 ---
@@ -82,11 +86,16 @@ def make_search_policies(
 if isinstance(search_policy, str):
 policy_type, model_type = search_policy.split(".")
 if model_type == "xgb":
-cost_model = XGBModel(num_warmup_sample=len(tasks) * 
num_measures_per_round)
-if load_model_file:
+cost_model = XGBModel(
+num_warmup_sample=len(tasks) * num_measures_per_round,
+model_file=load_model_file,
+adapative_training=adapative_training,
+)
+if load_model_file and os.path.isfile(load_model_file):
 logger.info("TaskScheduler: Load pretrained model...")
 cost_model.load(load_mod

[GitHub] [tvm] comaniac merged pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


comaniac merged pull request #7197:
URL: https://github.com/apache/tvm/pull/7197


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7197: [Fix][Autoscheduler] Costmodel enhancement & bug fix for graph debug runtime

2021-01-05 Thread GitBox


comaniac commented on pull request #7197:
URL: https://github.com/apache/tvm/pull/7197#issuecomment-755028119


   Thanks @jcf94 @merrymercy @areusch @junrushao1994 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] yzhliu commented on pull request #6370: [TOPI] Add einsum operator

2021-01-05 Thread GitBox


yzhliu commented on pull request #6370:
URL: https://github.com/apache/tvm/pull/6370#issuecomment-755062862


   will do shortly.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] fantasyRqg removed a comment on pull request #7211: Build multi models into one system-lib

2021-01-05 Thread GitBox


fantasyRqg removed a comment on pull request #7211:
URL: https://github.com/apache/tvm/pull/7211#issuecomment-754468357


   ```log
   enabled targets: llvm -device=arm_cpu; llvm
   pytest marker:
   == test 
session starts 
==
   platform linux -- Python 3.8.0, pytest-6.1.1, py-1.9.0, pluggy-0.13.1 -- 
/usr/local/bin/python3
   cachedir: .pytest_cache
   rootdir: /root/tvm, configfile: pytest.ini
   collected 14 items
   
   tests/python/unittest/test_runtime_rpc.py::test_bigendian_rpc PASSED 
 [  7%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_simple PASSED
 [ 14%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_runtime_string PASSED
 [ 21%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_array PASSED 
 [ 28%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_large_array PASSED   
 [ 35%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_echo PASSED  
 [ 42%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_file_exchange PASSED 
 [ 50%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_remote_module PASSED 
 [ 57%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_return_func PASSED   
 [ 64%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_session_constructor_args 
PASSED   [ 71%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_return_ndarray PASSED
 [ 78%]
   tests/python/unittest/test_runtime_rpc.py::test_local_func PASSED
 [ 85%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_tracker_register PASSED  
 [ 92%]
   tests/python/unittest/test_runtime_rpc.py::test_rpc_tracker_request PASSED   
 [100%]
   
   == 14 passed 
in 7.20s ===
   ```
   
   `test_rpc_echo` passed on my ubuntu.  
   trigger ci again



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org