[GitHub] [incubator-tvm] windclarion opened a new pull request #6231: [uTVM] fix crt building and running error

2020-08-06 Thread GitBox


windclarion opened a new pull request #6231:
URL: https://github.com/apache/incubator-tvm/pull/6231


   1. include\tvm\runtime\crt\module.h: the function TVMSystemLibEntryPoint needs 
`extern "C"`, otherwise the linker complains that the symbol cannot be found.
   
   2. src\target\source\codegen_c_host.cc, function GenerateFuncRegistry: `f` 
needs a cast, otherwise the C++ compiler reports a type mismatch.
   
   L291: the array _tvm_func_array is missing its closing "};", so the build fails.
   
   system_lib_registry and system_lib need to use the new names introduced in PR #6145.
   
   3. src\support\str_escape.h, function StrEscape: converting a byte to octal 
needs 3 bits per digit, but unsigned char c only contributes its 2 LSBs because 
the mask macro is 0x03; it should be 0x07.
   
   '0' + ((c >> 6) & 0x03) also needs a cast to unsigned char: ostringstream 
otherwise treats it as an int, not an unsigned char, so the printed value is 
wrong. For example, c = 0x17 means we have 23 functions to register, so 
((c >> 6) & 0x03) == 0 and '0' + ((c >> 6) & 0x03) is the int value of '0', 
which is 48. ostringstream prints it as an int, so we get the string "485055" 
when in fact it should be "027".
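
   The octal-escape arithmetic can be sketched in plain Python (a model of the
C++ behaviour only, not TVM's actual `StrEscape` code; the function names
`octal_escape` and `buggy_escape` are hypothetical):

```python
def octal_escape(c: int) -> str:
    """Render byte c as three octal digits, e.g. 0x17 -> '027'.

    The top digit covers bits 7-6 (so mask 0x03 is enough there); the
    middle and low digits cover 3 bits each, so their mask must be 0x07.
    """
    digits = ((c >> 6) & 0x03, (c >> 3) & 0x07, c & 0x07)
    # Correct behaviour: append each digit as a *character* ('0' + digit).
    return "".join(chr(ord("0") + d) for d in digits)


def buggy_escape(c: int) -> str:
    """Model of the bug: without the unsigned char cast, C++'s
    ostringstream prints the integer value of '0' + digit instead
    of the character."""
    digits = ((c >> 6) & 0x03, (c >> 3) & 0x07, c & 0x07)
    return "".join(str(ord("0") + d) for d in digits)
```

   Here `octal_escape(0x17)` yields "027", while `buggy_escape(0x17)` yields
"485055" (48, 50, 55 are the int values of the characters '0', '2', '7'),
reproducing the symptom described above.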
   
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6213: fix compilation error with cuda 11

2020-08-06 Thread GitBox


icemelon9 commented on a change in pull request #6213:
URL: https://github.com/apache/incubator-tvm/pull/6213#discussion_r466838537



##
File path: src/runtime/contrib/cublas/cublas.cc
##
@@ -172,7 +172,11 @@ inline void CallLtIgemm(TVMArgs args, TVMRetValue* ret, 
cublasLtHandle_t hdl) {
   cublasLtOrder_t order_COL32 = CUBLASLT_ORDER_COL32;
   cublasLtOrder_t order_COL4_4R2_8C = CUBLASLT_ORDER_COL4_4R2_8C;
   cublasLtMatmulDesc_t operationDesc = nullptr;
+#if CUDART_VERSION >= 11000
+  CHECK_CUBLAS_ERROR(cublasLtMatmulDescCreate(&operationDesc, 
CUBLAS_COMPUTE_32I, CUDA_R_32I));
+#elif

Review comment:
   No condition in the `#elif`. This is causing a compilation error. 
@lanchongyizu 









[incubator-tvm] branch master updated: [C++ RPC] fix typo to keep same with source code (#6220)

2020-08-06 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 3d8ad7a  [C++ RPC] fix typo to keep same with source code (#6220)
3d8ad7a is described below

commit 3d8ad7a124265b0844c61b40d712443cca038d47
Author: windclarion 
AuthorDate: Fri Aug 7 12:23:39 2020 +0800

[C++ RPC] fix typo to keep same with source code (#6220)

Signed-off-by: windclarion 
---
 apps/cpp_rpc/main.cc   | 4 ++--
 apps/cpp_rpc/rpc_server.cc | 4 ++--
 apps/cpp_rpc/rpc_server.h  | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/apps/cpp_rpc/main.cc b/apps/cpp_rpc/main.cc
index ae2636d..777fffa 100644
--- a/apps/cpp_rpc/main.cc
+++ b/apps/cpp_rpc/main.cc
@@ -51,7 +51,7 @@ static const string kUsage =
 " server   - Start the server\n"
 "--host- The hostname of the server, Default=0.0.0.0\n"
 "--port- The port of the RPC, Default=9090\n"
-"--port-end- The end search port of the RPC, Default=9199\n"
+"--port-end- The end search port of the RPC, Default=9099\n"
 "--tracker - The RPC tracker address in host:port format e.g. 
10.1.1.2:9190 Default=\"\"\n"
 "--key - The key used to identify the device type in tracker. 
Default=\"\"\n"
 "--custom-addr - Custom IP Address to Report to RPC Tracker. 
Default=\"\"\n"
@@ -66,7 +66,7 @@ static const string kUsage =
  * \brief RpcServerArgs.
  * \arg host The hostname of the server, Default=0.0.0.0
  * \arg port The port of the RPC, Default=9090
- * \arg port_end The end search port of the RPC, Default=9199
+ * \arg port_end The end search port of the RPC, Default=9099
  * \arg tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
  * \arg key The key used to identify the device type in tracker. Default=""
  * \arg custom_addr Custom IP Address to Report to RPC Tracker. Default=""
diff --git a/apps/cpp_rpc/rpc_server.cc b/apps/cpp_rpc/rpc_server.cc
index 2628ff7..592a6db 100644
--- a/apps/cpp_rpc/rpc_server.cc
+++ b/apps/cpp_rpc/rpc_server.cc
@@ -86,7 +86,7 @@ static std::string getNextString(std::stringstream* iss) {
  * \brief RPCServer RPC Server class.
  * \param host The hostname of the server, Default=0.0.0.0
  * \param port The port of the RPC, Default=9090
- * \param port_end The end search port of the RPC, Default=9199
+ * \param port_end The end search port of the RPC, Default=9099
  * \param tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
  * \param key The key used to identify the device type in tracker. Default=""
  * \param custom_addr Custom IP Address to Report to RPC Tracker. Default=""
@@ -362,7 +362,7 @@ void ServerLoopFromChild(SOCKET socket) {
  * \brief RPCServerCreate Creates the RPC Server.
  * \param host The hostname of the server, Default=0.0.0.0
  * \param port The port of the RPC, Default=9090
- * \param port_end The end search port of the RPC, Default=9199
+ * \param port_end The end search port of the RPC, Default=9099
  * \param tracker_addr The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190
  * Default="" \param key The key used to identify the device type in tracker. 
Default="" \param
  * custom_addr Custom IP Address to Report to RPC Tracker. Default="" \param 
silent Whether run in
diff --git a/apps/cpp_rpc/rpc_server.h b/apps/cpp_rpc/rpc_server.h
index 0936c51..7a4bda5 100644
--- a/apps/cpp_rpc/rpc_server.h
+++ b/apps/cpp_rpc/rpc_server.h
@@ -44,7 +44,7 @@ void ServerLoopFromChild(SOCKET socket);
  * \brief RPCServerCreate Creates the RPC Server.
  * \param host The hostname of the server, Default=0.0.0.0
  * \param port The port of the RPC, Default=9090
- * \param port_end The end search port of the RPC, Default=9199
+ * \param port_end The end search port of the RPC, Default=9099
  * \param tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
  * \param key The key used to identify the device type in tracker. Default=""
  * \param custom_addr Custom IP Address to Report to RPC Tracker. Default=""



[GitHub] [incubator-tvm] FrozenGene merged pull request #6220: [C++ RPC] fix port_end wrong default value 9199 to 9099 for keeping same with source code

2020-08-06 Thread GitBox


FrozenGene merged pull request #6220:
URL: https://github.com/apache/incubator-tvm/pull/6220


   







[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6230: [runtime][cublas] fix typo

2020-08-06 Thread GitBox


cloud-mxd commented on pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230#issuecomment-670310107


   cc @lanchongyizu 







[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6230: [runtime][cublas] fix typo

2020-08-06 Thread GitBox


cloud-mxd commented on pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230#issuecomment-670308769


   
![image](https://user-images.githubusercontent.com/68592047/89607669-a785f900-d8a5-11ea-8c86-0442916136da.png)
   







[GitHub] [incubator-tvm] cloud-mxd opened a new pull request #6230: [runtime][cublas] fix typo

2020-08-06 Thread GitBox


cloud-mxd opened a new pull request #6230:
URL: https://github.com/apache/incubator-tvm/pull/6230


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [incubator-tvm] slyubomirsky edited a comment on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-06 Thread GitBox


slyubomirsky edited a comment on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-670304237


   Hm, if the CI fails because `TVM_HOME` can't be assumed to be defined, is 
there any way for the produced C++ files to refer to TVM data structure 
definitions like NDArrays? I will see if there is any way to reference the 
compiled TVM `so` file from `aot.py`
   
   edit: I guess I can use `find_include_path()` and `find_lib_path()` from 
`libinfo`







[GitHub] [incubator-tvm] slyubomirsky edited a comment on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-06 Thread GitBox


slyubomirsky edited a comment on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-670304237


   Hm, if the CI fails because `TVM_HOME` can't be assumed to be defined, is 
there any way for the produced C++ files to refer to TVM data structure 
definitions like NDArrays? I will see if there is any way to reference the 
compiled TVM `so` file from `aot.py`
   
   edit: I guess I can use `find_include_path()` from `libinfo`







[GitHub] [incubator-tvm] slyubomirsky commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-06 Thread GitBox


slyubomirsky commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-670304237


   Hm, if the CI fails because `TVM_HOME` can't be assumed to be defined, is 
there any way for the produced C++ files to refer to TVM data structure 
definitions like NDArrays? I will see if there is any way to reference the 
compiled TVM `so` file from `aot.py`







[incubator-tvm] branch master updated (87f9010 -> da27e6d)

2020-08-06 Thread zhaowu

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 87f9010  [ONNX]Mod operator, bug fix (#6160)
 add da27e6d  Reshape with dynamic shape arg (#6208)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 35 +---
 tests/python/frontend/tflite/test_forward.py | 29 +--
 2 files changed, 48 insertions(+), 16 deletions(-)



[GitHub] [incubator-tvm] FrozenGene commented on pull request #6208: RESHAPE with dynamic shape arg in TFLite frontend

2020-08-06 Thread GitBox


FrozenGene commented on pull request #6208:
URL: https://github.com/apache/incubator-tvm/pull/6208#issuecomment-670296339


   Thanks @d-smirnov @cbalint13 







[GitHub] [incubator-tvm] FrozenGene merged pull request #6208: RESHAPE with dynamic shape arg in TFLite frontend

2020-08-06 Thread GitBox


FrozenGene merged pull request #6208:
URL: https://github.com/apache/incubator-tvm/pull/6208


   







[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-06 Thread GitBox


FrozenGene commented on a change in pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#discussion_r466794546



##
File path: apps/cpp_rpc/README.md
##
@@ -19,24 +19,31 @@
 This folder contains a simple recipe to make RPC server in c++.
 
 ## Usage (Non-Windows)
-- Build tvm runtime
-- Make the rpc executable [Makefile](Makefile).
-  `make CXX=/path/to/cross compiler g++/ TVM_RUNTIME_DIR=/path/to/tvm runtime 
library directory/ OS=Linux`
-  if you want to compile it for embedded Linux, you should add `OS=Linux`.
-  if the target os is Android, you doesn't need to pass OS argument.
-  You could cross compile the TVM runtime like this:
-```
-  cd tvm
-  mkdir arm_runtime
-  cp cmake/config.cmake arm_runtime
-  cd arm_runtime
-  cmake .. -DCMAKE_CXX_COMPILER="/path/to/cross compiler g++/"
-  make runtime
+- Configure the tvm cmake build with `config.cmake` ensuring that 
`USE_CPP_RPC` is set to `ON` in the config.

Review comment:
   I think we should tell users how to cross compile the C++ RPC for embedded 
Linux platforms (like Ubuntu / Raspberry Pi), not only Android.









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-06 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r466767169



##
File path: include/tvm/auto_scheduler/auto_schedule.h
##
@@ -42,19 +42,14 @@ class TuningOptionsNode : public Object {
   int early_stopping;
   /*! \brief The number of programs to be measured at each search round. */
   int num_measures_per_round;
-  /*!
-   * \brief Verbosity level.
-   * 0 for silent, 1 to output information during schedule searching.
-   */
+  /*! \brief Verbosity level. 0 for silent, 1 to output information during 
schedule searching. */
   int verbose;
   /*! \brief ProgramBuilder which builds the program */
   ProgramBuilder builder;
   /*! \brief ProgramRunner which runs the program and measures time costs */
   ProgramRunner runner;
   /*! \brief MeasureCallback functions to be called after each measure batch */
   Optional<Array<MeasureCallback>> measure_callbacks;
-  /*! \brief SearchCallback functions to be called before schedule search */
-  Optional<Array<SearchCallback>> pre_search_callbacks;

Review comment:
   Oh, this was just moved to another position after the refactoring. See 
`SearchPolicy`.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-06 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r466760151



##
File path: cmake/config.cmake
##
@@ -198,6 +198,16 @@ set(USE_DNNL_CODEGEN OFF)
 set(USE_ARM_COMPUTE_LIB OFF)
 set(USE_ARM_COMPUTE_LIB_GRAPH_RUNTIME OFF)
 
+# Whether to build with Arm Ethos-N support
+# Possible values:
+# - OFF: disable Arm Ethos-N support
+# - path/to/arm-ethos-N-stack: use a specific version of the
+#   Ethos-N driver stack
+set(USE_ETHOSN OFF)
+# If USE_ETHOSN is enabled, use Ethos-N hardware (ON) or
+# software test infrastructure (OFF)
+set(USE_ETHOSN_HW ON)

Review comment:
   * This should be OFF by default?
   * In terms of naming, it seems to me that `USE_EHTOSN_CODEGEN` and 
`USE_ETHOSN_RUNTIME` are better. Otherwise it's a bit confusing to find that 
`USE_ETHOSN` actually just means the codegen.

##
File path: python/tvm/relay/op/contrib/ethosn.py
##
@@ -0,0 +1,64 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument
+"""Arm(R) Ethos(TM) -N NPU supported operators."""
+import tvm.ir
+from ... import qnn as _qnn
+from . import _ethosn as support
+
+
+@tvm.ir.register_op_attr("qnn.concatenate", "target.ethos-n")
+def qnn_concatenate(attrs, args):
+"""Check if a concatenate is supported by Ethos-N."""
+conc = _qnn.op.concatenate(*args, **attrs)
+if not support.concatenate(conc):

Review comment:
   You might need a checker for the functions from `support`. It's possible 
that users forget to enable ETHOSN, which will result in missed registration.

##
File path: src/relay/backend/contrib/ethosn/capabilities.h
##
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CAPABILITIES_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_CAPABILITIES_H_
+
+#include 
+
+static std::vector targets[3] = {

Review comment:
   `targets` is too vague and common. What's this for? You should give it a 
better name and put this vector in a proper namespace.









[incubator-tvm] branch master updated: [ONNX]Mod operator, bug fix (#6160)

2020-08-06 Thread masahi

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 87f9010  [ONNX]Mod operator, bug fix (#6160)
87f9010 is described below

commit 87f90107846841eba41409d65e8a77c82c033bf4
Author: Siju Samuel 
AuthorDate: Fri Aug 7 06:27:20 2020 +0530

[ONNX]Mod operator, bug fix (#6160)

* Onnx mod, bug fix

* Added comment for the mod/floor_mod behaviour difference between numpy & 
relay
---
 python/tvm/relay/frontend/onnx.py  |  7 ++-
 tests/python/frontend/onnx/test_forward.py | 29 +
 2 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py 
b/python/tvm/relay/frontend/onnx.py
index 1568c97..74626d4 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -530,10 +530,15 @@ class Mod(OnnxOpConverter):
 @classmethod
 def _impl_v1(cls, inputs, attr, params):
 assert len(inputs) == 2, "Mod op take 2 inputs, {} 
given".format(len(inputs))
-if attr['fmod'] == 1:
+
+# Note: attr['fmod'] determines whether the operator should behave 
like np.fmod or np.mod.
+# attr['fmod'] == 0 will behave as np.mod and attr['fmod'] == 1 will 
force fmod treatment.
+# The relay equivalent of np.fmod is relay.mod and np.mod is 
relay.floor_mod
+if attr['fmod'] == 0:
 op_name = "floor_mod"
 else:
 op_name = "mod"
+
 return AttrCvt(op_name)(inputs, {}, params)
 
 
diff --git a/tests/python/frontend/onnx/test_forward.py 
b/tests/python/frontend/onnx/test_forward.py
index 56ea96d..14b827c 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -2374,17 +2374,11 @@ def test_pooling():
auto_pad='SAME_UPPER')
 
 
-def verify_mod(x_shape, y_shape, fmod, dtype='float32'):
-x_np = np.random.uniform(size=x_shape).astype(dtype)
-y_np = np.random.uniform(size=y_shape).astype(dtype)
+def verify_mod(x_shape, y_shape, fmod, out_shape, dtype='float32'):
+x_np = np.random.uniform(-100.0, 100.0, x_shape).astype(dtype)
+y_np = np.random.uniform(-100.0, 100.0, y_shape).astype(dtype)
 y_np = np.where(y_np==0, 1, y_np) #remove 0's to avoid division by zero 
error
 
-if fmod:
-np_out = np.fmod(x_np, y_np)
-else:
-np_out = np.mod(x_np, y_np)
-
-out_shape = np_out.shape
 mod_node = helper.make_node("Mod",
 inputs=["x", "y"],
 outputs=["z"],
@@ -2401,22 +2395,25 @@ def verify_mod(x_shape, y_shape, fmod, dtype='float32'):
 
onnx_dtype, list(out_shape))])
 model = helper.make_model(graph, producer_name='mod_test')
 
+onnx_out = get_onnxruntime_output(model, [x_np, y_np], dtype)[0]
+
 for target, ctx in ctx_list():
 tvm_out = get_tvm_output(
 model, [x_np, y_np], target, ctx, out_shape)
-tvm.testing.assert_allclose(np_out, tvm_out, rtol=1e-5, atol=1e-5)
+tvm.testing.assert_allclose(onnx_out, tvm_out, rtol=1e-5, atol=1e-5)
 
 
 def test_mod():
 # Mod
-verify_mod(x_shape=[1, 32, 32], y_shape=[1, 32, 32], fmod=0)
-
-verify_mod(x_shape=[1, 32, 32], y_shape=[1, 1, 32], fmod=0, dtype="int32")
+verify_mod(x_shape=[1, 32, 32], y_shape=[1, 1, 32], fmod=0, out_shape=(1, 
32, 32), dtype="int32")
+verify_mod(x_shape=[1, 32, 32, 32], y_shape=[1, 32, 32, 32], fmod=0, 
out_shape=(1, 32, 32, 32), dtype="int32")
 
 # fmod
-verify_mod(x_shape=[1, 1, 32], y_shape=[1, 32, 32], fmod=1)
-
-verify_mod(x_shape=[1, 32, 32], y_shape=[1, 32, 32], fmod=1, dtype="int32")
+verify_mod(x_shape=[1, 32, 32], y_shape=[1, 32, 32], fmod=1, out_shape=(1, 
32, 32), dtype="int32")
+verify_mod(x_shape=[1, 1, 32, 32], y_shape=[1, 32, 32, 32], fmod=1, 
out_shape=(1, 32, 32, 32))
+verify_mod(x_shape=[1, 32, 32, 32], y_shape=[1, 1, 32, 32], fmod=1, 
out_shape=(1, 32, 32, 32))
+verify_mod(x_shape=[1, 32, 32, 32], y_shape=[1, 32, 32, 32], fmod=1, 
out_shape=(1, 32, 32, 32), dtype="int32")
+verify_mod(x_shape=[1, 32, 32, 32], y_shape=[1, 32, 32, 32], fmod=1, 
out_shape=(1, 32, 32, 32))
 
 
 def verify_xor(x_shape, y_shape):


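The fmod-attribute mapping in the commit above can be checked against plain 
Python, whose `%` operator behaves like np.mod (floor modulo) and whose 
`math.fmod` behaves like np.fmod (truncated modulo); the variable names here 
are illustrative:

```python
import math

# fmod attr == 0 -> floor modulo (np.mod): result takes the divisor's sign,
#                   which is what relay.floor_mod computes.
# fmod attr == 1 -> truncated modulo (np.fmod): result takes the dividend's
#                   sign, which is what relay.mod computes.
x, y = -5.0, 3.0

floor_mod = x % y            # Python % is floor modulo, like np.mod
trunc_mod = math.fmod(x, y)  # math.fmod is truncated modulo, like np.fmod
```

For x = -5 and y = 3 the two disagree: floor modulo gives 1.0 while truncated 
modulo gives -2.0, which is why swapping the two mappings only shows up on 
inputs with mixed signs.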

[GitHub] [incubator-tvm] masahi commented on pull request #6160: [ONNX]Mod operator, bug fix

2020-08-06 Thread GitBox


masahi commented on pull request #6160:
URL: https://github.com/apache/incubator-tvm/pull/6160#issuecomment-670262900


   Thanks @siju-samuel @jwfromm 







[GitHub] [incubator-tvm] masahi merged pull request #6160: [ONNX]Mod operator, bug fix

2020-08-06 Thread GitBox


masahi merged pull request #6160:
URL: https://github.com/apache/incubator-tvm/pull/6160


   







[GitHub] [incubator-tvm] masahi commented on pull request #6226: [PYTORCH]Std op without specified dimensions support

2020-08-06 Thread GitBox


masahi commented on pull request #6226:
URL: https://github.com/apache/incubator-tvm/pull/6226#issuecomment-670262467


   Thanks @shiwenloong 







[incubator-tvm] branch master updated: [PYTORCH]Std op without specified dimensions support (#6226)

2020-08-06 Thread masahi

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new c1eb315  [PYTORCH]Std op without specified dimensions support (#6226)
c1eb315 is described below

commit c1eb31566ac7321809f4b9734df97edf378573f6
Author: shiwenloong <52487098+shiwenlo...@users.noreply.github.com>
AuthorDate: Fri Aug 7 08:55:46 2020 +0800

[PYTORCH]Std op without specified dimensions support (#6226)
---
 python/tvm/relay/frontend/pytorch.py  | 11 ---
 tests/python/frontend/pytorch/test_forward.py |  5 +
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
index 3dfdb2f..bbc684e 100644
--- a/python/tvm/relay/frontend/pytorch.py
+++ b/python/tvm/relay/frontend/pytorch.py
@@ -1253,9 +1253,14 @@ def _frobenius_norm():
 def _std():
 def _impl(inputs, input_types):
 data = inputs[0]
-axis = list(_infer_shape(inputs[1]))
-keepdims = bool(inputs[3])
-unbiased = bool(inputs[2])
+if len(inputs) == 2:
+axis = None
+keepdims = False
+unbiased = bool(inputs[1])
+else:
+axis = list(_infer_shape(inputs[1]))
+keepdims = bool(inputs[3])
+unbiased = bool(inputs[2])
 
 if unbiased:
 msg = "Currently only supports standard-deviation calculated via 
the biased "\
diff --git a/tests/python/frontend/pytorch/test_forward.py 
b/tests/python/frontend/pytorch/test_forward.py
index e370cd5..3c9dfb1 100644
--- a/tests/python/frontend/pytorch/test_forward.py
+++ b/tests/python/frontend/pytorch/test_forward.py
@@ -1869,12 +1869,17 @@ def test_forward_std():
 def forward(self, *args):
 return args[0].std(dim=(2,3), keepdim=False, unbiased=False)
 
+class Std6(Module):
+def forward(self, *args):
+return args[0].std(unbiased=False)
+
 input_data = torch.rand(input_shape).float()
 verify_model(Std1().float().eval(), input_data=input_data)
 verify_model(Std2().float().eval(), input_data=input_data)
 verify_model(Std3().float().eval(), input_data=input_data)
 verify_model(Std4().float().eval(), input_data=input_data)
 verify_model(Std5().float().eval(), input_data=input_data)
+verify_model(Std6().float().eval(), input_data=input_data)
 
 
 def test_forward_variance():

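The behaviour the fix enables (reducing over every element when no dim is 
given, with unbiased=False) can be sketched with the standard library's 
population standard deviation; the data here is made up for illustration:

```python
from statistics import pstdev  # population (biased) std, i.e. unbiased=False

data = [[1.0, 2.0], [3.0, 4.0]]

# No dim given: reduce over every element (axis=None, keepdims=False).
flat = [v for row in data for v in row]
overall = pstdev(flat)

# dim given: reduce along one axis only (here, per row).
per_row = [pstdev(row) for row in data]
```

`overall` is sqrt(1.25) while `per_row` is [0.5, 0.5], matching the 
distinction between `std(unbiased=False)` and 
`std(dim=..., keepdim=False, unbiased=False)`.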


[GitHub] [incubator-tvm] masahi merged pull request #6226: [PYTORCH]Std op without specified dimensions support

2020-08-06 Thread GitBox


masahi merged pull request #6226:
URL: https://github.com/apache/incubator-tvm/pull/6226


   







[GitHub] [incubator-tvm] weberlo commented on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


weberlo commented on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670256324


   @tom-gall I inspected the generated C from the example code you posted, and 
I'm seeing this at the end of the file:
   ```c

   
   static TVMBackendPackedCFunc _tvm_func_array[] = {   
   
   fused_reshape,   
   
   fused_reshape_1, 
   
   fused_nn_dense_nn_bias_add,  
   
   fused_nn_dense_nn_bias_add_nn_relu,  
   
   fused_nn_dense_nn_bias_add_nn_relu_1,
   
   static const TVMFuncRegistry _tvm_func_registry = {  
   
   
"\484849fused_reshape\484848fused_reshape_1\484848fused_nn_dense_nn_bias_add\484848fused_nn_dense_nn_bias_..."
   };   
   
   static const TVMModule _tvm_system_lib = {   
   
   &system_lib_registry,
   
   };   
   
   const TVMModule* TVMSystemLibEntryPoint(void) {  
   
   return &system_lib;  
   
   }
   ```
   So my guess is there's a closing brace not being generated 
[here](https://github.com/apache/incubator-tvm/pull/6145/files#diff-544046339cee2c05d342a785aaa55779R283).
   
   @areusch Maybe we should add a test that compiles the module source?







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tmoreau89 commented on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670234979


   @weberlo can help address your question @tom-gall 







[GitHub] [incubator-tvm] tmoreau89 commented on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tmoreau89 commented on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670234785


   Thanks @areusch , @liangfu , @tom-gall for the comments. I made the decision 
to merge the PR in order to avoid bit-rot.







[GitHub] [incubator-tvm] tmoreau89 merged pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tmoreau89 merged pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145


   







[incubator-tvm] branch master updated: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend (#6145)

2020-08-06 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b485d47  [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM 
backend (#6145)
b485d47 is described below

commit b485d478f280938cebf5d3072a4420c2cef56c6e
Author: Andrew Reusch 
AuthorDate: Thu Aug 6 16:08:19 2020 -0700

[µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend (#6145)

* need to fill address of globals in tvmfuncregistry

* llvm func registry generator works!

* lint fixes

* rm hexdump include

* bring bundle_deploy back to life and add to CI

* revert gcda additions

* git-clang-format

* fix check for --system-lib and test_runtime_micro target

* fixup compile flags for bundle_deploy CRT and improve robustness

* git-clang-format

* add debugging info

* git-clang-format

* initialize ret_values in PackedFunc_Call.

* retrigger CI

* fix log messages

* git-clang-format

* remove default for --runtime target opt

* put backtrace behind a flag and enable it

* simpify ReadString(), fixing bad instruction exception on os x.

* git-clang-format

* uncomment tests

* reorder backtrace ldflags for linux gcc
---
 apps/bundle_deploy/Makefile| 141 +
 apps/bundle_deploy/backtrace.c |  57 +
 apps/bundle_deploy/{bundle.h => backtrace.h}   |  22 ++--
 apps/bundle_deploy/build_model.py  |  69 +-
 apps/bundle_deploy/bundle.c|  67 +-
 apps/bundle_deploy/bundle.cc   |   1 +
 apps/bundle_deploy/bundle.h|   2 +-
 apps/bundle_deploy/bundle_static.c |  12 +-
 apps/bundle_deploy/demo.cc |  74 +--
 apps/bundle_deploy/demo_static.c   |  12 +-
 apps/bundle_deploy/{bundle.h => runtime.cc}|  35 ++---
 apps/bundle_deploy/test.cc |  81 
 apps/bundle_deploy/test_static.c   |   2 +-
 include/tvm/target/target_kind.h   |   6 +
 src/runtime/crt/Makefile   |   4 +-
 src/runtime/crt/common/crt_runtime_api.c   |   2 -
 src/runtime/crt/common/memory.c|   2 -
 src/runtime/crt/common/packed_func.c   |   2 +
 src/runtime/crt/graph_runtime/graph_runtime.c  |  58 +++--
 src/runtime/crt/graph_runtime/load_json.c  |  52 
 .../runtime/crt/internal/graph_runtime/load_json.h |   4 +-
 src/support/str_escape.h   |  12 +-
 .../target/func_registry_generator.cc  |  29 +++--
 .../target/func_registry_generator.h   |  26 ++--
 src/target/llvm/codegen_amdgpu.cc  |   2 +-
 src/target/llvm/codegen_cpu.cc |  71 ++-
 src/target/llvm/codegen_cpu.h  |  13 +-
 src/target/llvm/codegen_llvm.cc|   5 +-
 src/target/llvm/codegen_llvm.h |   6 +-
 src/target/llvm/codegen_nvptx.cc   |   2 +-
 src/target/llvm/llvm_module.cc |  15 ++-
 src/target/source/codegen_c_host.cc|  46 ++-
 src/target/source/codegen_c_host.h |  11 ++
 src/target/target_kind.cc  |  11 +-
 tests/python/unittest/test_runtime_micro.py|   2 +-
 tests/python/unittest/test_target_codegen_llvm.py  |  11 ++
 tests/scripts/task_python_integration.sh   |   2 +-
 37 files changed, 671 insertions(+), 298 deletions(-)

diff --git a/apps/bundle_deploy/Makefile b/apps/bundle_deploy/Makefile
index eeea539..adb8d33 100644
--- a/apps/bundle_deploy/Makefile
+++ b/apps/bundle_deploy/Makefile
@@ -21,13 +21,16 @@
 TVM_ROOT=$(shell cd ../..; pwd)
 CRT_ROOT ?= ../../src/runtime/crt
 
+ENABLE_TVM_PLATFORM_ABORT_BACKTRACE ?= 1
+
 DMLC_CORE=${TVM_ROOT}/3rdparty/dmlc-core
-PKG_CXXFLAGS = -g -Wall -std=c++14 -O2 -fPIC \
+PKG_COMPILE_OPTS = -g -Wall -O2 -fPIC
+PKG_CXXFLAGS = ${PKG_COMPILE_OPTS} -std=c++14 \
-I${TVM_ROOT}/include \
-I${DMLC_CORE}/include \
-I${TVM_ROOT}/3rdparty/dlpack/include \
-Icrt_config
-PKG_CFLAGS = -g -Wall -std=c99 -O2 -fPIC \
+PKG_CFLAGS = ${PKG_COMPILE_OPTS} \
-I${TVM_ROOT}/include \
-I${DMLC_CORE}/include \
-I${TVM_ROOT}/3rdparty/dlpack/include \
@@ -37,90 +40,116 @@ PKG_LDFLAGS = -pthread
 
 build_dir := build
 
+BACKTRACE_SRCS =
+BACKTRACE_LDFLAGS =
+BACKTRACE_CFLAGS =
+$(ifeq ENABLE_TVM_PLATFORM_ABORT_BACKTRACE,1)
+BACKTRACE_SRCS += backtrace.c
+BACKTRACE_LDFLAGS += -ldl
+BACKTRACE_CFLAGS += -DENABLE_TVM_PLATFORM_ABORT_BAC

[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tmoreau89 commented on a change in pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#discussion_r466731917



##
File path: apps/bundle_deploy/bundle_static.c
##
@@ -22,7 +22,11 @@
 #include 
 #include 
 #include 
+#include 

Review comment:
   I suggest this gets trimmed in a follow up PR









[GitHub] [incubator-tvm] slyubomirsky commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-06 Thread GitBox


slyubomirsky commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-670229770


   Thank you for the suggestions! I will move the files to `backend`. I will 
see about getting `PackedFunc`s for operators directly instead of using the JIT 
to register them (this might fix some of the other weird bugs we've seen in the 
research prototype).







[GitHub] [incubator-tvm] tqchen commented on pull request #6225: fix cuda half math function is undefined: hpow, htanh

2020-08-06 Thread GitBox


tqchen commented on pull request #6225:
URL: https://github.com/apache/incubator-tvm/pull/6225#issuecomment-670200841


   cc @yongfeng-nv @wpan11nv @yzhliu @vinx13 







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-06 Thread GitBox


junrushao1994 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r466691384



##
File path: include/tvm/auto_scheduler/auto_schedule.h
##
@@ -42,19 +42,14 @@ class TuningOptionsNode : public Object {
   int early_stopping;
   /*! \brief The number of programs to be measured at each search round. */
   int num_measures_per_round;
-  /*!
-   * \brief Verbosity level.
-   * 0 for silent, 1 to output information during schedule searching.
-   */
+  /*! \brief Verbosity level. 0 for silent, 1 to output information during 
schedule searching. */
   int verbose;
   /*! \brief ProgramBuilder which builds the program */
   ProgramBuilder builder;
   /*! \brief ProgramRunner which runs the program and measures time costs */
   ProgramRunner runner;
   /*! \brief MeasureCallback functions to be called after each measure batch */
   Optional<Array<MeasureCallback>> measure_callbacks;
-  /*! \brief SearchCallback functions to be called before schedule search */
-  Optional<Array<SearchCallback>> pre_search_callbacks;

Review comment:
   Just curious: why were pre_search_callbacks deleted from the codebase?









[GitHub] [incubator-tvm] tom-gall edited a comment on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tom-gall edited a comment on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670194174


   Since there isn't an RFC, I'll drop this comment here: what does a somewhat 
working example look like? 
   
   Updating the micro tvm tutorial as inspiration : 
   
   target = "c  --system-lib  --runtime=c"
   input_tensor = "dense_4_input"
   input_shape = (1,)
   input_dtype = "float32"
   
   dev_config = micro.device.arm.stm32f746xx.generate_config("127.0.0.1", )
   
   mod, params = relay.frontend.from_tflite(tflite_model,
shape_dict={input_tensor: 
input_shape},
dtype_dict={input_tensor: 
input_dtype})
   
   with micro.Session(dev_config) as sess:
   ctx = tvm.micro_dev(0)
   
   with tvm.transform.PassContext(disabled_pass={'FuseOps'}, 
config={"tir.disable_vectorize": True}):
   graph, c_mod, params = relay.build(mod, target=target, params=params)
   
   micro_mod = micro.create_micro_mod(c_mod, dev_config)
   mod = graph_runtime.create(graph, micro_mod, ctx)
   
   mod.set_input(**params)
   mod.set_input(input_tensor, tvm.nd.array(np.array([0.5], 
dtype="float32")))
   
   mod.run()
   
   # Get output
   tvm_output = mod.get_output(0).asnumpy()
   
   print("result is: "+str(tvm_output))
   
   This feels like it should be close however it fails at 
micro.create_micro_mod 
   
   /tmp/tmpok80dx2r/temp.c:232:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 232 | fused_nn_dense_nn_bias_add_nn_relu,
 | ^~
   /tmp/tmpok80dx2r/temp.c:232:5: note: (near initialization for 
'_tvm_func_array[0]')
   /tmp/tmpok80dx2r/temp.c:233:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 233 | fused_reshape_1,
 | ^~~
   /tmp/tmpok80dx2r/temp.c:233:5: note: (near initialization for 
'_tvm_func_array[1]')
   /tmp/tmpok80dx2r/temp.c:234:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 234 | fused_reshape,
 | ^
   /tmp/tmpok80dx2r/temp.c:234:5: note: (near initialization for 
'_tvm_func_array[2]')
   /tmp/tmpok80dx2r/temp.c:235:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 235 | fused_nn_dense_nn_bias_add,
 | ^~
   /tmp/tmpok80dx2r/temp.c:235:5: note: (near initialization for 
'_tvm_func_array[3]')
   /tmp/tmpok80dx2r/temp.c:236:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 236 | fused_nn_dense_nn_bias_add_nn_relu_1,
 | ^~~~
   /tmp/tmpok80dx2r/temp.c:236:5: note: (near initialization for 
'_tvm_func_array[4]')
   /tmp/tmpok80dx2r/temp.c:237:1: error: expected expression before 'static'
 237 | static const TVMFuncRegistry _tvm_func_registry = {
   
   Should that be expected for the time being?







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-06 Thread GitBox


junrushao1994 commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r466603974



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+--
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+---
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for n records is:
+# {
+#   int n;
+#   int[n+2] sizes

Review comment:
   nitpick on the doc
   ```suggestion
   #   int sizes[0]
   #   ...
   #   int sizes[n + 1]
   ```
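As a side note on the packed format sketched in the comment (`int n;` followed
by `n + 2` sizes), a short hypothetical decoder shows how that header would be
unpacked in Python (the example buffer and field names are illustrative, not
TVM's actual serializer):

```python
import struct

SIZE_OF_INT = 4  # matches the constant defined in feature.py above

def unpack_header(byte_arr):
    # First int: n, the number of per-store feature records.
    (n,) = struct.unpack_from("<i", byte_arr, 0)
    # Next n + 2 ints: one size per record, plus the sizes of the
    # normalized-throughput and task-id arrays.
    sizes = struct.unpack_from("<%di" % (n + 2), byte_arr, SIZE_OF_INT)
    return n, list(sizes)

# Hypothetical example: 2 records of lengths 3 and 5, then two
# trailing arrays of length 2 each.
buf = struct.pack("<5i", 2, 3, 5, 2, 2)
print(unpack_header(buf))  # -> (2, [3, 5, 2, 2])
```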

##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for Feature extraction. The extracted features vector are used by 
cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this feature as "Per Store" feature.
+The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+the predicted score of each BufferStoreNode as the score of a TIR Stmt.
+
+The feature specification is defined by 
`src/auto_scheduler/feature.cc::FeatureSet`
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, 
np.ndarray]:
+"""Unpack the flatten feature (in byte array format) from c++
+
+Parameters
+--
+byte_arr: bytearray
+The two-dimensional feature vector in serialized byte array format
+
+Returns
+---
+features: np.ndarray
+Feature vectors
+normalized_throughputs: np.ndarray
+Normalized throughputs
+task_ids: np.ndarray
+Task ids
+"""
+
+# The format for 

[GitHub] [incubator-tvm] tom-gall commented on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tom-gall commented on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670194174


   Since there isn't an RFC, I'll drop this comment here: what does a somewhat 
working example look like? 
   
   Updating the micro tvm tutorial as inspiration : 
   
   input_tensor = "dense_4_input"
   input_shape = (1,)
   input_dtype = "float32"
   
   dev_config = micro.device.arm.stm32f746xx.generate_config("127.0.0.1", )
   
   mod, params = relay.frontend.from_tflite(tflite_model,
shape_dict={input_tensor: 
input_shape},
dtype_dict={input_tensor: 
input_dtype})
   
   with micro.Session(dev_config) as sess:
   ctx = tvm.micro_dev(0)
   
   with tvm.transform.PassContext(disabled_pass={'FuseOps'}, 
config={"tir.disable_vectorize": True}):
   graph, c_mod, params = relay.build(mod, target=target, params=params)
   
   micro_mod = micro.create_micro_mod(c_mod, dev_config)
   mod = graph_runtime.create(graph, micro_mod, ctx)
   
   mod.set_input(**params)
   mod.set_input(input_tensor, tvm.nd.array(np.array([0.5], 
dtype="float32")))
   
   mod.run()
   
   # Get output
   tvm_output = mod.get_output(0).asnumpy()
   
   print("result is: "+str(tvm_output))
   
   This feels like it should be close however it fails at 
micro.create_micro_mod 
   
   /tmp/tmpok80dx2r/temp.c:232:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 232 | fused_nn_dense_nn_bias_add_nn_relu,
 | ^~
   /tmp/tmpok80dx2r/temp.c:232:5: note: (near initialization for 
'_tvm_func_array[0]')
   /tmp/tmpok80dx2r/temp.c:233:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 233 | fused_reshape_1,
 | ^~~
   /tmp/tmpok80dx2r/temp.c:233:5: note: (near initialization for 
'_tvm_func_array[1]')
   /tmp/tmpok80dx2r/temp.c:234:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 234 | fused_reshape,
 | ^
   /tmp/tmpok80dx2r/temp.c:234:5: note: (near initialization for 
'_tvm_func_array[2]')
   /tmp/tmpok80dx2r/temp.c:235:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 235 | fused_nn_dense_nn_bias_add,
 | ^~
   /tmp/tmpok80dx2r/temp.c:235:5: note: (near initialization for 
'_tvm_func_array[3]')
   /tmp/tmpok80dx2r/temp.c:236:5: warning: initialization of 'int (*)(TVMValue 
*, int *, int,  TVMValue *, int *, void *)' {aka 'int (*)(union  *, 
int *, int,  union  *, int *, void *)'} from incompatible pointer 
type 'int32_t (*)(void *, void *, int32_t,  void *, void *, void *)' {aka 'long 
int (*)(void *, void *, long int,  void *, void *, void *)'} 
[-Wincompatible-pointer-types]
 236 | fused_nn_dense_nn_bias_add_nn_relu_1,
 | ^~~~
   /tmp/tmpok80dx2r/temp.c:236:5: note: (near initialization for 
'_tvm_func_array[4]')
   /tmp/tmpok80dx2r/temp.c:237:1: error: expected expression before 'static'
 237 | static const TVMFuncRegistry _tvm_func_registry = {
   
   Should that be expected for the time being?







[GitHub] [incubator-tvm] tom-gall edited a comment on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tom-gall edited a comment on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670168389


   As part of this PR there is a small change to 
tvm/tests/micro/test_runtime_micro_on_arm.py; attempting to run the tests 
fails. 







[GitHub] [incubator-tvm] tom-gall commented on pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


tom-gall commented on pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#issuecomment-670168389


   As part of this PR, we should update 
tvm/tests/micro/test_runtime_micro_on_arm.py, though I don't see it as a 
blocking issue.







[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout

2020-08-06 Thread GitBox


tqchen commented on pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-670165520


   Great, thanks @csullivan. It seems that we can go ahead and merge it then; 
please help to fix the CI error.







[GitHub] [incubator-tvm] anijain2305 commented on pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-06 Thread GitBox


anijain2305 commented on pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#issuecomment-670157341


   @siju-samuel @FrozenGene Please review when you get time







[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-06 Thread GitBox


anijain2305 commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r466646173



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
 # convert to tflite model
 converter = tf.lite.TFLiteConverter.from_session(
 sess, input_tensors, output_tensors)
-
+converter.experimental_new_converter=experimental_new_converter

Review comment:
   I am still not fully comfortable about this. 
https://www.tensorflow.org/api_docs/python/tf/lite/TFLiteConverter shows that 
that API is subject to change. What do @u99127 @siju-samuel think about this?









[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-06 Thread GitBox


anijain2305 commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r466645505



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1089,7 +1093,7 @@ def convert_square(self, op):
 
 return out
 
-def _convert_elemwise(self, relay_op, op):
+def _convert_elemwise(self, relay_op, op, use_real_qnn=True):

Review comment:
   That makes sense now. Thanks for your patience. I would suggest renaming 
`use_real_qnn` to `ignore_qnn_params`.









[GitHub] [incubator-tvm] csullivan commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout

2020-08-06 Thread GitBox


csullivan commented on pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-670144900


   Again though, this doesn't appear to have entirely fixed the issue. Less 
frequent instabilities, but they still exist with finish() and still seem to 
have something to do with the timeout behavior. Unfortunately, any 
instabilities that cause resets or crashes to the lock screen are enough to 
make long unmonitored autotuning runs untenable. 
   
   For this reason I’ve moved to cross compiling the C++ RPC app (#6229) and 
running it from the android shell which is much more stable. After battle 
testing it I'd like to update our android docs to recommend using it. 







[GitHub] [incubator-tvm] csullivan commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout

2020-08-06 Thread GitBox


csullivan commented on pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-670140359


   @tqchen I added a counter to the RPCProcessor loop:
   ```
class RPCProcessor extends Thread {
  private String host;
  private int port;
  private String key;
  private boolean running = false;
  private long startTime;
  private ConnectTrackerServerProcessor currProcessor;
  private boolean first = true;
   +  private int counter = 0;
   
  @Override public void run() {
RPCWatchdog watchdog = new RPCWatchdog();
watchdog.start();
while (true) {
  synchronized (this) {
currProcessor = null;
while (!running) {
  try {
this.wait();
  } catch (InterruptedException e) {
  }
}
try {
  currProcessor = new ConnectTrackerServerProcessor(host, port, 
key, watchdog);
} catch (Throwable e) {
  e.printStackTrace();
  // kill if creating a new processor failed
   +  System.err.println("Creating a new processor failed, exiting");
  System.exit(0);
}
  }
   +  System.err.println("RPCProcessor infinite loop: " + counter);
   +  counter += 1;
  if (currProcessor != null)
currProcessor.run();
  watchdog.finishTimeout();
}
  }
   ```
   
   In the logs I see that after the watchdog wakes up and calls finish, the 
counter is reset. Seems to give evidence that the thread is exiting as expected.
   
   ```
   ...
   08-06 14:42:31.157 24737 24761 W System.err: Connection from 
/192.168.1.10:60626
   08-06 14:42:31.158 24737 24762 W System.err: waiting for timeout: 1
   08-06 14:42:31.160 24737 24761 W System.err: starting server loop...
   08-06 14:42:31.192 24737 24761 W System.err: Load module from 
/data/user/0/org.apache.tvm.tvmrpc/cache/tvm4j_rpc_7111901762916576085/tmp_func_19a48fccba43c2c0.so
   08-06 14:42:31.809 24737 24761 W System.err: done server loop...
   08-06 14:42:31.809 24737 24761 W System.err: Finish serving 
/192.168.1.10:60626
   08-06 14:42:31.812 24737 24762 W System.err: watchdog woken up, ok...
   08-06 14:42:31.812 24737 24761 W System.err: using port: 5001
   08-06 14:42:31.812 24737 24761 W System.err: RPCProcessor infinite loop: 7 
<---
   08-06 14:42:31.812 24737 24761 W System.err: currProcessor.run()
   08-06 14:42:31.922 24737 24761 W System.err: registered with tracker...
   08-06 14:42:31.922 24737 24761 W System.err: waiting for requests...
   08-06 14:42:31.923 24737 24761 W System.err: 
matchKey:android:0.16312182579570966
   08-06 14:42:31.923 24737 24761 W System.err: key: 
client:android:0.16312182579570966 -timeout=10
   08-06 14:42:31.924 24737 24761 W System.err: alloted timeout: 10
   08-06 14:42:31.924 24737 24761 W System.err: Connection from 
/192.168.1.10:60628
   08-06 14:42:31.924 24737 24762 W System.err: waiting for timeout: 1
   08-06 14:42:31.925 24737 24761 W System.err: starting server loop...
   08-06 14:42:32.001 24737 24761 W System.err: Load module from 
/data/user/0/org.apache.tvm.tvmrpc/cache/tvm4j_rpc_1859845163597930021/tmp_func_d08fd3eb029d7c7.so
   08-06 14:42:41.925 24737 24762 W System.err: watchdog woke up! <---
   08-06 14:42:41.925 24737 24762 W System.err: terminating... <--- calls 
finish()
   08-06 14:42:41.926 24737 24813 W System.err: Deleting 
/data/user/0/org.apache.tvm.tvmrpc/cache/tvm4j6272654684694666821
   08-06 14:42:41.983 24511 24511 W System.err: MainActivity onResume...
   08-06 14:42:46.987 24511 24511 W System.err: relaunching RPC activity...
   08-06 14:42:46.987 24511 24511 W System.err: updating preferences...
   08-06 14:42:47.048 24818 24818 W System.err: rpc activity onCreate...
   08-06 14:42:47.049 24818 24842 W System.err: using port: 5001
   08-06 14:42:47.049 24818 24842 W System.err: RPCProcessor infinite loop: 0 
<---
   ...
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-06 Thread GitBox


d-smirnov commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r466594422



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1089,7 +1093,7 @@ def convert_square(self, op):
 
 return out
 
-def _convert_elemwise(self, relay_op, op):
+def _convert_elemwise(self, relay_op, op, use_real_qnn=True):

Review comment:
   I might be not correct here, but the whole idea of using same qnn 
parameters is about being able to re-use non-quantized version of the 
operation. In case of Slice op #6217 there is only a check and the 
non-quantized operation is always used. In case of maximum and minimum this is 
not possible without changes either in the command operands (the qnn_params 
should stripped off) or, alternatively changes in _convert_elemwise in order 
explicitly prevent it going via quantized version. 









[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-06 Thread GitBox


junrushao1994 commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r466588655



##
File path: include/tvm/auto_scheduler/feature.h
##
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/feature.h
+ * \brief Feature extraction for the cost model.
+ * We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+ * so we call this feature as "Per Store" feature.
+ * The cost model also does prediction for each BufferStoreNode statement and 
aggregates
+ * the predictions as the whole score for a TVM IR (Stmt).
+ *
+ * The feature specification is defined by `src/auto_scheduler/feature.cc:: 
FeatureSet`
+ */
+
+#ifndef TVM_AUTO_SCHEDULER_FEATURE_H_
+#define TVM_AUTO_SCHEDULER_FEATURE_H_
+
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+/*!
+ * \brief Get PerStore feature from a TIR Stmt

Review comment:
   ```suggestion
* \brief Get per-store feature from a TIR Stmt
   ```









[GitHub] [incubator-tvm] csullivan opened a new pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-06 Thread GitBox


csullivan opened a new pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229


   Standardize build support for building and cross compiling apps/cpp_rpc with 
cmake.
   * Add cmake coverage for building the C++ RPC server binary and update 
documentation.
   * Add support for linking against a custom OpenCL SDK via a custom find_opencl macro. This can be useful when cross compiling with a custom OpenCL device driver.







[GitHub] [incubator-tvm] areusch commented on a change in pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


areusch commented on a change in pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#discussion_r466559503



##
File path: apps/bundle_deploy/bundle_static.c
##
@@ -22,7 +22,11 @@
 #include 
 #include 
 #include 
+#include 

Review comment:
   Ah, I think I missed this one; it's probably not needed. But I won't be able to address this for a week, so perhaps we can merge as is? Happy to open an issue and fix it in a follow-up PR if that's ok with you.









[GitHub] [incubator-tvm] LiyouZhou edited a comment on pull request #6112: TVMC - a command line driver for TVM

2020-08-06 Thread GitBox


LiyouZhou edited a comment on pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#issuecomment-670047293


   The command name `tvmc` sounds strange; it is not immediately obvious what the "c" stands for. Can the shell command be called just `tvm compile`, `tvm tune`, etc., the same as the [aws cli](https://aws.amazon.com/cli/) or the [github cli](https://github.com/cli/cli)?







[GitHub] [incubator-tvm] LiyouZhou commented on pull request #6112: TVMC - a command line driver for TVM

2020-08-06 Thread GitBox


LiyouZhou commented on pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#issuecomment-670047293


   The command name `tvmc` sounds strange; it is not immediately obvious what the "c" stands for. Can the shell command be called just `tvm compile`, `tvm tune`, etc., like the [aws cli](https://aws.amazon.com/cli/) or the [github cli](https://github.com/cli/cli)?







[GitHub] [incubator-tvm] liangfu commented on a change in pull request #6145: [µTVM] Add --runtime=c, remove micro_dev target, enable LLVM backend

2020-08-06 Thread GitBox


liangfu commented on a change in pull request #6145:
URL: https://github.com/apache/incubator-tvm/pull/6145#discussion_r466522943



##
File path: apps/bundle_deploy/bundle_static.c
##
@@ -22,7 +22,11 @@
 #include 
 #include 
 #include 
+#include 

Review comment:
   why do we need this header file here?









[incubator-tvm] branch master updated: [Relay][Dynamic] OneHot operation (#6209)

2020-08-06 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new da75d85  [Relay][Dynamic] OneHot operation (#6209)
da75d85 is described below

commit da75d85cdce6fa189f3662793e0a68e0f84309f1
Author: Matthew Brookhart 
AuthorDate: Thu Aug 6 08:46:58 2020 -0700

[Relay][Dynamic] OneHot operation (#6209)

* Dynamic OneHot Op

* refactor dynamic_to_static

* add onehot to dynamic_to_static pass
---
 include/tvm/topi/transform.h  |  19 ++--
 python/tvm/relay/op/dyn/_transform.py |  35 +--
 python/tvm/relay/op/transform.py  |  15 ++-
 src/relay/op/dyn/tensor/transform.cc  |  70 ++
 src/relay/op/make_op.h|   2 +
 src/relay/transforms/dynamic_to_static.cc | 113 +++---
 tests/python/relay/dyn/test_dynamic_op_level10.py |  64 ++--
 tests/python/relay/test_pass_dynamic_to_static.py |  28 ++
 8 files changed, 285 insertions(+), 61 deletions(-)

diff --git a/include/tvm/topi/transform.h b/include/tvm/topi/transform.h
index cd19436..19b2ef4 100644
--- a/include/tvm/topi/transform.h
+++ b/include/tvm/topi/transform.h
@@ -1421,22 +1421,25 @@ inline Tensor ndarray_size(const Tensor& src, const 
DataType& dtype,
  * \param depth depth of the one-hot dimension.
  * \param axis axis to fill.
  * \param dtype data type of the output tensor.
+ * \param oshape shape of the output tensor.
  * \param name output tensor name.
  * \param tag output tensor tag.
  * \return one-hot tensor.
  */
 inline Tensor one_hot(const Tensor& indices, const PrimExpr on_value, const 
PrimExpr off_value,
   int depth, int axis, const DataType& dtype,
+  Array oshape = Array(),
   const std::string name = "T_one_hot", const std::string 
tag = kInjective) {
-  Array oshape;
-  int ndim = indices->shape.size() + 1;
-  int indices_index = 0;
   int true_axis = (axis == -1) ? indices->shape.size() : axis;
-  for (int i = 0; i < ndim; i++) {
-if (i == true_axis) {
-  oshape.push_back(Integer(depth));
-} else {
-  oshape.push_back(indices->shape[indices_index++]);
+  if (oshape.size() == 0) {
+int ndim = indices->shape.size() + 1;
+int indices_index = 0;
+for (int i = 0; i < ndim; i++) {
+  if (i == true_axis) {
+oshape.push_back(Integer(depth));
+  } else {
+oshape.push_back(indices->shape[indices_index++]);
+  }
 }
   }
 
diff --git a/python/tvm/relay/op/dyn/_transform.py 
b/python/tvm/relay/op/dyn/_transform.py
index e2704bc..3a80f5a 100644
--- a/python/tvm/relay/op/dyn/_transform.py
+++ b/python/tvm/relay/op/dyn/_transform.py
@@ -25,11 +25,13 @@ from .. import op as _reg
 _reg.register_broadcast_schedule("dyn.broadcast_to")
 _reg.register_injective_schedule("dyn.reshape")
 _reg.register_broadcast_schedule("dyn.tile")
+_reg.register_injective_schedule("dyn.one_hot")
+
 
 @script
 def _reshape_shape_func_input_data(data, newshape, ndim):
-out = output_tensor((ndim,), "int64")
-data_shape = allocate((len(data.shape),), "int64")
+out = output_tensor((ndim, ), "int64")
+data_shape = allocate((len(data.shape), ), "int64")
 for x in const_range(len(data.shape)):
 data_shape[x] = int64(data.shape[x])
 src_idx = 0
@@ -59,7 +61,7 @@ def _reshape_shape_func_input_data(data, newshape, ndim):
 elif newshape[i] == -3:
 assert data_shape.shape[0] - src_idx > 1, \
 "Not enough dims in input shape for -3"
-out[dst_idx] = data_shape[src_idx] * data_shape[src_idx+1]
+out[dst_idx] = data_shape[src_idx] * data_shape[src_idx + 1]
 src_idx += 2
 dst_idx += 1
 elif newshape[i] == -4:
@@ -82,6 +84,7 @@ def _reshape_shape_func_input_data(data, newshape, ndim):
 out[infer_idx] = old_size // new_size
 return out
 
+
 @_reg.register_shape_func("dyn.reshape", True)
 def dynamic_reshape_shape_func(attrs, inputs, out_ndims):
 return [_reshape_shape_func_input_data(*inputs, out_ndims[0])]
@@ -89,7 +92,7 @@ def dynamic_reshape_shape_func(attrs, inputs, out_ndims):
 
 @script
 def _tile_shape_func(data, reps, ndim, tndim, rndim):
-out = output_tensor((tndim,), "int64")
+out = output_tensor((tndim, ), "int64")
 
 if ndim == rndim:
 for i in const_range(tndim):
@@ -120,5 +123,25 @@ def tile_shape_func(attrs, inputs, _):
 ndim = len(inputs[0].shape)
 rndim = inputs[1].shape[0].value
 tndim = ndim if ndim > rndim else rndim
-return [_tile_shape_func(inputs[0], reps, convert(ndim),
- convert(tndim), convert(rndim))]
+return [_tile_shape_func(inputs[0], reps, convert(ndim), convert(tndim), 
convert(rndim))]
+
+
+
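The output-shape handling added to `one_hot` above can be summarized outside of C++: when no explicit `oshape` is given, the shape is inferred by inserting `depth` at `axis` (appending it when `axis == -1`). A minimal Python sketch of that inference branch (the function name `one_hot_oshape` is a hypothetical illustration, not TVM API):

```python
def one_hot_oshape(indices_shape, depth, axis):
    # Mirrors the fallback branch in topi::one_hot: insert `depth` at
    # `axis`, appending it to the end when axis == -1.
    true_axis = len(indices_shape) if axis == -1 else axis
    oshape = list(indices_shape)
    oshape.insert(true_axis, depth)
    return oshape

print(one_hot_oshape([2, 3], 4, -1))  # [2, 3, 4]
print(one_hot_oshape([2, 3], 4, 0))   # [4, 2, 3]
```

Passing a precomputed `oshape` (as the dynamic op now does) simply bypasses this inference.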

[GitHub] [incubator-tvm] zhiics commented on pull request #6209: [Relay][Dynamic] OneHot operation

2020-08-06 Thread GitBox


zhiics commented on pull request #6209:
URL: https://github.com/apache/incubator-tvm/pull/6209#issuecomment-670010971


   Thanks @mbrookhart @jroesch 







[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-06 Thread GitBox


d-smirnov commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r466509216



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
 # convert to tflite model
 converter = tf.lite.TFLiteConverter.from_session(
 sess, input_tensors, output_tensors)
-
+converter.experimental_new_converter=experimental_new_converter

Review comment:
   I understood that it is not an experimental feature any more; however, the name "experimental_new_converter" was preserved. I don't see any harm in using this feature and having this test, especially if we plan to migrate to a newer version of TFLite.

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1089,7 +1093,7 @@ def convert_square(self, op):
 
 return out
 
-def _convert_elemwise(self, relay_op, op):
+def _convert_elemwise(self, relay_op, op, use_real_qnn=True):

Review comment:
   Extracting the "use_real_qnn" functionality into the _convert_minimum and _convert_maximum methods (L1225 and L1229) would require either changing the operation's operands (stripping the qnn_params from at least the lhs input tensor) or adding an extra flag that forces _convert_elemwise to use the non-quantized version of the operation. Alternatively, I may be misunderstanding your point here.









[GitHub] [incubator-tvm] zhiics merged pull request #6209: [Relay][Dynamic] OneHot operation

2020-08-06 Thread GitBox


zhiics merged pull request #6209:
URL: https://github.com/apache/incubator-tvm/pull/6209


   







[GitHub] [incubator-tvm] tqchen commented on pull request #6216: [JVM] Remove destructive call to System.exit(0) upon timeout

2020-08-06 Thread GitBox


tqchen commented on pull request #6216:
URL: https://github.com/apache/incubator-tvm/pull/6216#issuecomment-670007953


   One quick way to confirm the behavior is to write the main loop as an infinite loop (that sleeps periodically) and increments a counter, while the watchdog thread calls the finish function.
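That experiment can be sketched in Python as a stand-in for the Java RPC server loop (hedged: `finished`/`watchdog` here are just an Event and a thread, not the actual RPCProcessor API; the point is that the loop survives the watchdog firing instead of being killed by System.exit(0)):

```python
import threading
import time

finished = threading.Event()
counter = 0

def watchdog(timeout_s):
    # Wakes up after the timeout and calls finish() (sets the event)
    # instead of terminating the whole process.
    time.sleep(timeout_s)
    finished.set()

threading.Thread(target=watchdog, args=(0.2,), daemon=True).start()
while not finished.is_set():  # the "infinite" main loop
    counter += 1
    time.sleep(0.01)
print("main loop exited cleanly after", counter, "iterations")
```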







[GitHub] [incubator-tvm] tqchen commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-06 Thread GitBox


tqchen commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-669998984


   Thanks @slyubomirsky. Some high-level comments.
   
   First of all, AOT should not be part of the runtime. The runtime contains the minimum set of things we need to execute the program. Most of the AOT logic is actually part of the target translation phase. Taking the current organization of relay into account, it should be moved to `relay/backend` (eventually a better place might be `target`).
   
   Of course, additional runtime features and wrappers are needed to run a program compiled via AOT. From the interface point of view, we should remove this wrapper code and rely completely on the PackedFunc and runtime.Module interfaces.
   
   So AOT compilation should take in an IRModule and output a runtime.Module, which contains the functions necessary to run the generated program. Ideally the runtime.Module should expose a similar interface to other compiled programs, such as the vm and the graph runtime.







[GitHub] [incubator-tvm] d-smirnov opened a new pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-06 Thread GitBox


d-smirnov opened a new pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228


   This PR adds the ability to handle a constant input attribute to the "fully connected" operation. Unit tests have been amended accordingly.
   







[GitHub] [incubator-tvm] tqchen commented on pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-06 Thread GitBox


tqchen commented on pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#issuecomment-669994507


   Thanks @spectrometerHBH , some quick items:
   - run `./tests/lint/git-clang-format.sh -i` to format the code
   - add testcases to cover some of the existing logics







[GitHub] [incubator-tvm] spectrometerHBH commented on pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-06 Thread GitBox


spectrometerHBH commented on pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#issuecomment-669980481


   cc @tqchen @junrushao1994 @Hzfengsy 







[GitHub] [incubator-tvm] spectrometerHBH opened a new pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-06 Thread GitBox


spectrometerHBH opened a new pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227


   In [[RFC] Hybrid Script Support for TIR](https://discuss.tvm.ai/t/rfc-hybrid-script-support-for-tir/7516), we plan to utilize a subset of the Python AST that can express every TIR node. In this PR, we introduce the hybrid script printer and parser, with the basic infrastructure and complete parsing/printing support.







[GitHub] [incubator-tvm] shiwenloong opened a new pull request #6226: [PYTORCH]Std op without specified dimensions support

2020-08-06 Thread GitBox


shiwenloong opened a new pull request #6226:
URL: https://github.com/apache/incubator-tvm/pull/6226


   Std op in torchscript supports overloading 
(https://pytorch.org/docs/stable/jit_builtin_functions.html#builtin-functions) :
   ```
   torch.std(self : Tensor,
 unbiased : bool=True) -> Tensor
   
   torch.std(self : Tensor,
 dim : List[int],
 unbiased : bool=True,
 keepdim : bool=False) -> Tensor
   ```
   The std op without specified dimensions was not supported in the current pytorch frontend, so a pytorch module like the one below can't be converted.
   ```
   class StdModule(nn.Module):
   def forward(self, *args):
   return args[0].std(unbiased=False)
   ```
   This PR fixes the problem.
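For reference, the overload without `dim` reduces over all elements of the tensor. A minimal numpy sketch of the semantics the frontend needs to emulate (`std_all` is a hypothetical helper name, and the biased/unbiased distinction maps onto the ddof term):

```python
import numpy as np

def std_all(x, unbiased=False):
    # torch.std(self, unbiased) semantics: reduce over ALL elements.
    # unbiased=True applies Bessel's correction (divide by n - 1).
    x = np.asarray(x, dtype=np.float64)
    ddof = 1 if unbiased else 0
    return float(np.sqrt(((x - x.mean()) ** 2).sum() / (x.size - ddof)))

print(std_all([1.0, 2.0, 3.0, 4.0]))        # biased, matches unbiased=False
print(std_all([1.0, 2.0, 3.0, 4.0], True))  # unbiased
```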







[GitHub] [incubator-tvm] cloud-mxd opened a new pull request #6225: fix cuda half math function is undefined: hpow, htanh

2020-08-06 Thread GitBox


cloud-mxd opened a new pull request #6225:
URL: https://github.com/apache/incubator-tvm/pull/6225


   I was trying to port a BERT model to TVM, but encountered problems with the undefined functions hpow and htanh.
   
![image](https://user-images.githubusercontent.com/68592047/89543223-3dcd0700-d833-11ea-9706-b05e6586c4e8.png)
   
![image](https://user-images.githubusercontent.com/68592047/89543244-445b7e80-d833-11ea-8f1f-10e9ad544951.png)
   
   Referring to the [cuda user manual](https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATHHALF__FUNCTIONS.html#group__CUDA__MATHHALF__FUNCTIONS), I simply emulate them with the float32 functions.
   
   Thanks !
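The float32-emulation pattern described above (compute in float32, round back to half) can be sketched in Python with numpy half precision; the helper names mirror the missing CUDA intrinsics but are purely illustrative:

```python
import numpy as np

def hpow(x, y):
    # Emulate the missing half intrinsic: widen to float32, compute,
    # then round the result back to float16.
    return np.float16(np.power(np.float32(x), np.float32(y)))

def htanh(x):
    return np.float16(np.tanh(np.float32(x)))

print(hpow(np.float16(2.0), np.float16(3.0)))  # 8.0
print(htanh(np.float16(0.0)))                  # 0.0
```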
   







[GitHub] [incubator-tvm] windclarion commented on pull request #6221: [TFLite] axis can be a scalar

2020-08-06 Thread GitBox


windclarion commented on pull request #6221:
URL: https://github.com/apache/incubator-tvm/pull/6221#issuecomment-669905184


   ok, I will add a UT







[GitHub] [incubator-tvm] lhutton1 opened a new pull request #6224: [BYOC][JSON] json_node.h should include data_type.h

2020-08-06 Thread GitBox


lhutton1 opened a new pull request #6224:
URL: https://github.com/apache/incubator-tvm/pull/6224


   Fixes a compilation issue introduced by #6214.
   
   cc @comaniac @zhiics @tqchen 
   







[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-06 Thread GitBox


FrozenGene commented on a change in pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223#discussion_r466338957



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -2886,6 +2887,56 @@ def convert_detection_postprocess(self, op):
 ret = _expr.TupleWrapper(_expr.Tuple([boxes, cls_ids, scores, 
valid_count]), size=4)
 return ret
 
+def convert_one_hot(self, op):
+"""Convert TFLite ONE_HOT"""
+try:
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.OneHotOptions import OneHotOptions
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 4, "Input tensor's length should be 4"
+
+# Ensuring input isn't quantized
+for t in input_tensors:
+assert not t.qnn_params, "Quantized input is not expected."

Review comment:
   we could use `assert all(not i.qnn_params for i in input_tensors)`
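The suggested `all(...)` idiom can be checked in isolation; a minimal sketch with a hypothetical stand-in for the tflite input tensor wrapper:

```python
class Tensor:  # hypothetical stand-in: real tensors carry a qnn_params dict
    def __init__(self, qnn_params=None):
        self.qnn_params = qnn_params

floats = [Tensor(), Tensor(), Tensor()]
quant = [Tensor(), Tensor({"scale": 0.1})]

assert all(not t.qnn_params for t in floats)      # passes: none quantized
assert not all(not t.qnn_params for t in quant)   # one tensor is quantized
```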

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -2886,6 +2887,56 @@ def convert_detection_postprocess(self, op):
 ret = _expr.TupleWrapper(_expr.Tuple([boxes, cls_ids, scores, 
valid_count]), size=4)
 return ret
 
+def convert_one_hot(self, op):
+"""Convert TFLite ONE_HOT"""
+try:
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.OneHotOptions import OneHotOptions
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 4, "Input tensor's length should be 4"
+
+# Ensuring input isn't quantized
+for t in input_tensors:
+assert not t.qnn_params, "Quantized input is not expected."
+
+# TFlite ONE_HOT requires both on_value
+# and off_value, making dtype redundant.
+indices = input_tensors[0]
+depth = input_tensors[1]
+on_value = input_tensors[2]
+off_value = input_tensors[3]
+
+assert on_value.tensor.Type() == off_value.tensor.Type(), \
+"on_value and off_value should be of the same type"

Review comment:
   no `of`









[GitHub] [incubator-tvm] leandron commented on pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-06 Thread GitBox


leandron commented on pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223#issuecomment-669862215


   cc @anijain2305 @u99127 @mbaret @FrozenGene @tqchen 







[GitHub] [incubator-tvm] jainris opened a new pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-06 Thread GitBox


jainris opened a new pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223


   * Added implementation for ONE_HOT Operator.
   * Added tests for ONE_HOT Operator. 







[GitHub] [incubator-tvm] FrozenGene merged pull request #6187: [Ansor][AutoTVM v2.0] Phase 1: The base class for cost models

2020-08-06 Thread GitBox


FrozenGene merged pull request #6187:
URL: https://github.com/apache/incubator-tvm/pull/6187


   







[incubator-tvm] branch master updated (5721387 -> 7ef89ad)

2020-08-06 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 5721387  [DOCS] Update pass infra tutorial (#6193)
 add 7ef89ad  [Ansor][AutoTVM v2.0] Phase 1: The base class for cost models 
(#6187)

No new revisions were added by this update.

Summary of changes:
 include/tvm/auto_scheduler/cost_model.h| 160 +
 python/tvm/auto_scheduler/__init__.py  |   3 +-
 .../cost_model}/__init__.py|   6 +-
 python/tvm/auto_scheduler/cost_model/cost_model.py | 150 +++
 src/auto_scheduler/cost_model.cc   | 156 
 ...ofiler.py => test_auto_scheduler_cost_model.py} |  38 ++---
 6 files changed, 491 insertions(+), 22 deletions(-)
 create mode 100644 include/tvm/auto_scheduler/cost_model.h
 copy python/tvm/{contrib/tf_op => auto_scheduler/cost_model}/__init__.py (84%)
 create mode 100644 python/tvm/auto_scheduler/cost_model/cost_model.py
 create mode 100644 src/auto_scheduler/cost_model.cc
 copy tests/python/unittest/{test_runtime_vm_profiler.py => 
test_auto_scheduler_cost_model.py} (57%)



[GitHub] [incubator-tvm] mbaret commented on pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-06 Thread GitBox


mbaret commented on pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#issuecomment-669853364


   cc @zhiics @comaniac @masahi 
   
   Note that we're still awaiting an update to the CI docker image; until then, don't expect CI to pass.







[GitHub] [incubator-tvm] cloud-mxd commented on pull request #5691: [COMMUNITY] @masahi -> PPMC

2020-08-06 Thread GitBox


cloud-mxd commented on pull request #5691:
URL: https://github.com/apache/incubator-tvm/pull/5691#issuecomment-669852635


   Congratulations!







[GitHub] [incubator-tvm] mbaret opened a new pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-06 Thread GitBox


mbaret opened a new pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222


   This is the first of 3 PRs to introduce the Ethos-N integration into TVM via 
the BYOC framework. It adds support for partitioning and compiling for the 
Ethos-N77 target with CPU fallback for unsupported operators. Additionally, 
runtime support is added in the form of an Ethos-N runtime module. In this 
initial PR, only quantized concatenate and split are supported with follow-up 
PRs adding support for many further operators.
   
   
   Co-authored-by: Leo Blonk  @Leo-arm 
   Co-authored-by: Tristan O'Connor  @tristan-arm 
   Co-authored-by: Leandro Nunes  @leandron 
   Co-authored-by: Ramana Radhakrishnan  @u99127 
   Co-authored-by: Luke Hutton  @lhutton1 







[GitHub] [incubator-tvm] wrongtest commented on pull request #6062: [Relay][Pass] Support combine multiple dense op just into dense

2020-08-06 Thread GitBox


wrongtest commented on pull request #6062:
URL: https://github.com/apache/incubator-tvm/pull/6062#issuecomment-669775071


   Sorry for the late reply; a wrapped function BatchingOps() has been added.


