[GitHub] [incubator-tvm] redpanda3 opened a new pull request #4863: vta+nvdla integration

2020-02-10 Thread GitBox
redpanda3 opened a new pull request #4863: vta+nvdla integration
URL: https://github.com/apache/incubator-tvm/pull/4863
 
 
   somnia is a pipeline of cmac+cacc, which can be a good starting point. 
   
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
FrozenGene commented on a change in pull request #4847: Use dummy func when no 
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377444157
 
 

 ##
 File path: src/relay/backend/build_module.cc
 ##
 @@ -438,13 +442,19 @@ class RelayBuildModule : public runtime::ModuleNode {
 
 auto lowered_funcs = graph_codegen_->GetLoweredFunc();
 if (lowered_funcs.size() == 0) {
-  LOG(WARNING) << "no lowered funcs exist in the compiled module";
-} else {
-  ret_.mod = tvm::build(
-lowered_funcs,
-target_host_,
-BuildConfig::Current());
+  LOG(WARNING) << "No lowered funcs exist in the compiled module, "
+   << "a dummy function \"__dummy__\" will be created.";
+  Stmt body = EvaluateNode::make(0);
+  Array<ObjectRef> api_args;
+  auto dummy_func = MakeAPI(body, "__dummy__", api_args, 0, false);
+  lowered_funcs.Set("llvm", Array<LoweredFunc>({dummy_func}));
 
 Review comment:
   I think we should set `target_host_` here. Even if we have LLVM support, it is 
still not correct; imagine our target host is ARM.




[GitHub] [incubator-tvm] tqchen opened a new pull request #4862: [WIP][REFACTOR][PY] establish tvm.ir, migrate base, expr, type

2020-02-10 Thread GitBox
tqchen opened a new pull request #4862: [WIP][REFACTOR][PY] establish tvm.ir, 
migrate base, expr, type
URL: https://github.com/apache/incubator-tvm/pull/4862
 
 
   




[GitHub] [incubator-tvm] hlu1 opened a new pull request #4861: [TVM] const auto p -> const auto& p

2020-02-10 Thread GitBox
hlu1 opened a new pull request #4861: [TVM] const auto p -> const auto& p
URL: https://github.com/apache/incubator-tvm/pull/4861
 
 
   Clang 10 was complaining about copies being made in the range-for 
loops. The warning can be silenced by making the loop variable a const reference.




[GitHub] [incubator-tvm] tqchen commented on issue #4856: [RUNTIME] Fix memory leakage of TVMByteArray

2020-02-10 Thread GitBox
tqchen commented on issue #4856: [RUNTIME] Fix memory leakage of TVMByteArray
URL: https://github.com/apache/incubator-tvm/pull/4856#issuecomment-584471907
 
 
   please send another commit to re-trigger the CI, due to a flaky issue:
   
   https://github.com/apache/incubator-tvm/issues/4860




[GitHub] [incubator-tvm] tqchen opened a new issue #4860: [TEST][FLAKY] topi/tests/python/test_topi_sort.py::test_argsort

2020-02-10 Thread GitBox
tqchen opened a new issue #4860: [TEST][FLAKY] 
topi/tests/python/test_topi_sort.py::test_argsort
URL: https://github.com/apache/incubator-tvm/issues/4860
 
 
   
https://ci.tvm.ai/blue/rest/organizations/jenkins/pipelines/tvm/branches/PR-4856/runs/2/nodes/245/log/?start=0
   
   Likely due to a tie; we need to find a way to make sure the result values are 
well separated, perhaps by generating a sorted array and then shuffling it.
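   The suggestion above can be sketched in plain Python: build a strictly
   increasing (hence tie-free) array, shuffle it, and compare against a
   reference argsort. The helper names below are hypothetical, not TVM's
   actual test code.

```python
import random

def make_tie_free_input(n, seed=0):
    """Generate n distinct values in random order so argsort has a
    unique correct answer: start from a strictly increasing sequence
    (no ties by construction) and shuffle it."""
    rng = random.Random(seed)
    data = [float(i) for i in range(n)]  # strictly increasing -> no ties
    rng.shuffle(data)
    return data

def argsort(data):
    """Reference argsort: the indices that would sort `data`."""
    return sorted(range(len(data)), key=lambda i: data[i])
```

   Because all values are distinct, there is exactly one valid index
   permutation, so the comparison cannot flake on tie-breaking order.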
   
   




[GitHub] [incubator-tvm] hlu1 commented on a change in pull request #4859: [LLVM] Explicit llvm::StringRef to std::string conversion

2020-02-10 Thread GitBox
hlu1 commented on a change in pull request #4859: [LLVM] Explicit 
llvm::StringRef to std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859#discussion_r377438520
 
 

 ##
 File path: src/target/llvm/codegen_llvm.cc
 ##
 @@ -88,7 +88,11 @@ void CodeGenLLVM::InitTarget(llvm::TargetMachine* tm) {
   native_vector_bits_ = 128;
 } else {
   native_vector_bits_ = 128;
+#if TVM_LLVM_VERSION >= 100
+  std::string arch_name = std::string(tm->getTargetTriple().getArchName());
 
 Review comment:
   Done




[GitHub] [incubator-tvm] zhiics commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
zhiics commented on issue #4847: Use dummy func when no lowered_funcs exists in 
Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-584462639
 
 
   CSourceModule with an empty string looks good to me as well. @kumasento could you 
do that instead of creating a dummy LLVM module? Thanks.




[GitHub] [incubator-tvm] FrozenGene merged pull request #4816: [TFLite] Using real image for QNN testing.

2020-02-10 Thread GitBox
FrozenGene merged pull request #4816: [TFLite] Using real image for QNN testing.
URL: https://github.com/apache/incubator-tvm/pull/4816
 
 
   




[GitHub] [incubator-tvm] FrozenGene commented on issue #4816: [TFLite] Using real image for QNN testing.

2020-02-10 Thread GitBox
FrozenGene commented on issue #4816: [TFLite] Using real image for QNN testing.
URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-584459220
 
 
   Thanks @anijain2305 @inadob. It is merged now.




[incubator-tvm] branch master updated (b7364b4 -> 902e21b)

2020-02-10 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b7364b4  reverse changes in pr #4849 (#4853)
 add 902e21b  [TFLite] Using real image for QNN testing. (#4816)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 21 +++--
 tests/python/frontend/tflite/test_forward.py | 70 +---
 2 files changed, 72 insertions(+), 19 deletions(-)



[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4859: [LLVM] Explicit llvm::StringRef to std::string conversion

2020-02-10 Thread GitBox
FrozenGene commented on a change in pull request #4859: [LLVM] Explicit 
llvm::StringRef to std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859#discussion_r377427382
 
 

 ##
 File path: src/target/llvm/codegen_llvm.cc
 ##
 @@ -88,7 +88,11 @@ void CodeGenLLVM::InitTarget(llvm::TargetMachine* tm) {
   native_vector_bits_ = 128;
 } else {
   native_vector_bits_ = 128;
+#if TVM_LLVM_VERSION >= 100
+  std::string arch_name = std::string(tm->getTargetTriple().getArchName());
 
 Review comment:
   I agree. We could just use `std::string arch_name = 
std::string(tm->getTargetTriple().getArchName());` for all LLVM versions.




[GitHub] [incubator-tvm] sergei-grechanik commented on issue #2498: [TVM] Automatic differentiation for tensor expressions

2020-02-10 Thread GitBox
sergei-grechanik commented on issue #2498: [TVM] Automatic differentiation for 
tensor expressions
URL: https://github.com/apache/incubator-tvm/pull/2498#issuecomment-584458721
 
 
   @yzhliu Yeah, absolutely, I don't mind.




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377425280
 
 

 ##
 File path: python/tvm/runtime/vm.py
 ##
 @@ -0,0 +1,357 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, undefined-variable, 
invalid-name, redefined-builtin
+"""
+The Relay Virtual Machine runtime.
+
+Implements a Python interface to executing the compiled VM object.
+"""
+import numpy as np
+
+import tvm
+from tvm.runtime import Object, container
+from tvm.runtime import _ffi_api
+from tvm._ffi.runtime_ctypes import TVMByteArray
+from tvm._ffi import base as _base
+
+def _convert(arg, cargs):
+    if isinstance(arg, Object):
+        cargs.append(arg)
+    elif isinstance(arg, np.ndarray):
+        nd_arr = tvm.nd.array(arg, ctx=tvm.cpu(0))
+        cargs.append(nd_arr)
+    elif isinstance(arg, tvm.nd.NDArray):
+        cargs.append(arg)
+    elif isinstance(arg, (tuple, list)):
+        field_args = []
+        for field in arg:
+            _convert(field, field_args)
+        cargs.append(container.tuple_object(field_args))
+    elif isinstance(arg, (_base.numeric_types, bool)):
+        dtype = "int32" if isinstance(arg, (int, bool)) else "float32"
+        value = tvm.nd.array(np.array(arg, dtype=dtype), ctx=tvm.cpu(0))
+        cargs.append(value)
+    else:
+        raise TypeError("Unsupported type: %s" % (type(arg)))
+
+
+def convert(args):
+    cargs = []
+    for arg in args:
+        _convert(arg, cargs)
+
+    return cargs
+
+
+class Executable(object):
+    """Relay VM executable"""
+    def __init__(self, mod):
+        self.mod = mod
+        self._function_params = {}
+        self._save = self.mod["save"]
+        self._get_lib = self.mod["get_lib"]
+        self._get_bytecode = self.mod["get_bytecode"]
+        self._get_stats = self.mod["get_stats"]
+        self._get_function_arity = self.mod["get_function_arity"]
+        self._get_function_param_name = self.mod["get_function_param_name"]
+
+    def save(self):
+        """Save the Relay VM Executable.
+
+        Returns
+        -------
+        code : bytearray
+            The binary blob representing a serialized Relay VM executable. It
+            can then be saved to disk and later deserialized into a new
+            Executable.
+
+        lib : :py:class:`~tvm.runtime.Module`
+            The runtime module that contains the generated code. It is
+            basically a library that is composed of hardware dependent code.
+
+        Notes
+        -----
+        The returned code is organized with the following sections in order.
+         - Global section. This section contains the globals used by the
+           virtual machine.
+         - Constant section. This section is used to store the constant pool of
+           a virtual machine.
+         - Primitive name section. This section is introduced to accommodate
+           the list of primitive operator names that will be invoked by the
+           virtual machine.
+         - Code section. The VM functions, including bytecode, are sitting in
+           this section.
+
+        Examples
+        --------
+
+        .. code-block:: python
+
+            import numpy as np
+            import tvm
+            from tvm import relay
+            # define a simple network.
+            x = relay.var('x', shape=(10, 10))
+            f = relay.Function([x], x + x)
+            mod = relay.Module({"main": f})
+            # create a Relay VM.
+            ctx = tvm.cpu()
+            target = "llvm"
+            executable = relay.vm.compile(mod, target)
+            code, lib = executable.save()
+            # save and load the code and lib file.
+            tmp = tvm.contrib.util.tempdir()
+            path_lib = tmp.relpath("lib.so")
+            lib.export_library(path_lib)
+            with open(tmp.relpath("code.ro"), "wb") as fo:
+                fo.write(code)
+            loaded_lib = tvm.runtime.load_module(path_lib)
+            loaded_code = bytearray(open(tmp.relpath("code.ro"), "rb").read())
+            # deserialize.
+            des_exec = tvm.run

[GitHub] [incubator-tvm] tqchen commented on issue #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on issue #4855: [Refactor] move vm.py under runtime and adt to 
runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#issuecomment-584457283
 
 
   cc @icemelon9 please also help to take a look




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377425484
 
 

 ##
 File path: python/tvm/runtime/vm.py
 ##
 @@ -0,0 +1,357 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=no-else-return, unidiomatic-typecheck, undefined-variable, 
invalid-name, redefined-builtin
+"""
+The Relay Virtual Machine runtime.
+
+Implements a Python interface to executing the compiled VM object.
+"""
+import numpy as np
+
+import tvm
+from tvm.runtime import Object, container
+from tvm.runtime import _ffi_api
 
 Review comment:
   Given that runtime is local (in the same dir), consider doing 
`from .object import Object` instead.




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4859: [LLVM] Explicit llvm::StringRef to std::string conversion

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4859: [LLVM] Explicit 
llvm::StringRef to std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859#discussion_r377425029
 
 

 ##
 File path: src/target/llvm/codegen_llvm.cc
 ##
 @@ -88,7 +88,11 @@ void CodeGenLLVM::InitTarget(llvm::TargetMachine* tm) {
   native_vector_bits_ = 128;
 } else {
   native_vector_bits_ = 128;
+#if TVM_LLVM_VERSION >= 100
+  std::string arch_name = std::string(tm->getTargetTriple().getArchName());
 
 Review comment:
   It seems it is always safe to do the explicit conversion directly, so I think we 
can remove the macro guard and use this implementation.




[GitHub] [incubator-tvm] hlu1 opened a new pull request #4859: [LLVM] Explicit llvm::StringRef to std::string conversion

2020-02-10 Thread GitBox
hlu1 opened a new pull request #4859: [LLVM] Explicit llvm::StringRef to 
std::string conversion
URL: https://github.com/apache/incubator-tvm/pull/4859
 
 
   LLVM recently made the conversion from llvm::StringRef to std::string 
explicit. See
   
   
https://github.com/llvm/llvm-project/commit/adcd02683856c30ba6f349279509acecd90063df
   




[GitHub] [incubator-tvm] KindleHe opened a new issue #4858: Error in Build NNPACK

2020-02-10 Thread GitBox
KindleHe opened a new issue #4858: Error in Build NNPACK
URL: https://github.com/apache/incubator-tvm/issues/4858
 
 
   There might be some error in `Build NNPACK`
   ```
   # Add PIC option in CFLAG and CXXFLAG to build NNPACK shared library
   sed -i "s|gnu99|gnu99 -fPIC|g" CMakeLists.txt
   sed -i "s|gnu++11|gnu++11 -fPIC|g" CMakeLists.txt
   ```
   when running `sed ...`, I get the error:
   ```
   sed: 1: "CMakeLists.txt": invalid command code C
   ```
   I also found that there are no `gnu99` and `gnu++11` flags in `CMakeLists.txt`.
   Could you give some advice? Thanks very much! 
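   For what it's worth, `invalid command code C` is what BSD/macOS sed prints when
   `-i` is used without a backup-suffix argument: it consumes the `s|...|...|g`
   script as the suffix and then tries to parse `CMakeLists.txt` as the script.
   On macOS the commands above would need `sed -i '' ...` or `sed -i.bak ...`.
   A portable stand-in for the two sed commands, sketched with Python's stdlib
   (the helper name is made up for illustration):

```python
import re
from pathlib import Path

def add_fpic(path="CMakeLists.txt"):
    """Append -fPIC after the gnu99/gnu++11 flags, if they are present.

    Mirrors the two sed substitutions above; returns False when neither
    flag was found (as reported for newer NNPACK revisions)."""
    p = Path(path)
    text = p.read_text()
    new = re.sub(r"gnu99\b", "gnu99 -fPIC", text)
    new = re.sub(r"gnu\+\+11\b", "gnu++11 -fPIC", new)
    p.write_text(new)
    return new != text
```

   Unlike the silent sed invocation, the return value makes it obvious when the
   flags are simply absent from the file.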




[GitHub] [incubator-tvm] jmorrill commented on issue #4548: Windows support for autotvm - Do not merge

2020-02-10 Thread GitBox
jmorrill commented on issue #4548: Windows support for autotvm - Do not merge
URL: https://github.com/apache/incubator-tvm/pull/4548#issuecomment-584445719
 
 
   > @jmorrill have you gotten a chance to work on the CPP server PR?
   
   So sorry @soiferj! It's the time of year when kids bring home sickness.
   Anyway, I created a PR here.
   https://github.com/apache/incubator-tvm/pull/4857




[GitHub] [incubator-tvm] jmorrill opened a new pull request #4857: Windows Support for cpp_rpc

2020-02-10 Thread GitBox
jmorrill opened a new pull request #4857: Windows Support for cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857
 
 
   Cherry-picked from PR #4548.
   
   This adds Windows support to the C++ RPC server.
   To enable building it make sure the CMake option USE_CXX_RPC is set to ON.
   
   Notables:
   
   - To work around the lack of the fork() syscall on Windows, the RPC socket is 
passed to a new child process.
   - To work around the lack of tar on Windows, WSL must be installed, as tar is 
executed with "wsl tar -C..."
   - Clang is used for compilation, so make sure [LLVM binaries are installed
](http://releases.llvm.org/download.html) and in the system path.
   - Updated CMakeLists.txt to require CMake v3.9 on Windows so that 
INTERPROCEDURAL_OPTIMIZATION can be used for better optimization of the libraries 
tvm.dll and topi.dll.  
   - CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS is used on tvm_runtime.dll so exports 
always work when using the graph runtime from C++ projects.
   
   I have not verified whether I have broken the Linux build in some way.  
Certain liberties were also taken, as I didn't expect to ever be making this PR, 
so apologies :)
   
   @soiferj @FrozenGene 




[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
jwfromm commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377411817
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,158 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+
+def make_add_sub_mul_pattern():
+    """Create a pattern to match the following graph.
+
+       add  sub
+        \   /
+         \ /
+         mul
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    add_node = relay.add(x, y)
+    sub_node = relay.subtract(x, y)
+    mul_node = relay.multiply(add_node, sub_node)
+    return mul_node
+
+
+def make_add_relu_pattern():
+    """Create a pattern to match the following graph.
+
+       add
+        |
+       relu
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    add_node = relay.add(x, y)
+    r = relay.nn.relu(add_node)
+    return r
+
+
+def test_simple_merge():
+    """Test composite function is correctly produced from simple graph.
+
+    We could expect the pattern `make_add_relu_pattern` to be merged
+    into a single op `add_relu`.
+
+       a  b
+       \ /              a  b
+       add    ---->     \ /
+        |             add_relu
+       relu
+
+    """
+    pattern_table = {
+        "add_relu": make_add_relu_pattern()
+    }
+
+    def before():
+        a = relay.var('a', shape=(10, 10))
+        b = relay.var('b', shape=(10, 10))
+        add_node = relay.add(a, b)
+        r = relay.nn.relu(add_node)
+        return relay.Function([a, b], r)
+
+    def expected():
+        a = relay.var('a', shape=(10, 10))
+        b = relay.var('b', shape=(10, 10))
+
+        # add_relu function
+        in_1 = relay.var('in_1', shape=(10, 10))
+        in_2 = relay.var('in_2', shape=(10, 10))
+        add_node = relay.add(in_1, in_2)
+        relu_node = relay.nn.relu(add_node)
+        add_relu = relay.Function([in_1, in_2], relu_node)
+
+        # merged function
+        r = relay.Call(add_relu, [a, b])
+        return relay.Function([a, b], r)
+
+    result = run_opt_pass(before(), relay.transform.MergeComposite(pattern_table))
+    expected = run_opt_pass(expected(), relay.transform.InferType())
+    assert relay.analysis.alpha_equal(result, expected)
+
 
 Review comment:
   What's stopping Composite from being exposed? It seems pretty important to be 
able to see the composite name in Python for a lot of optimizations.
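   The merge exercised by `test_simple_merge` can be illustrated on a toy
   expression representation (nested tuples rather than Relay IR; the function
   below is a hypothetical sketch of the idea, not the actual pass):

```python
def merge_add_relu(expr):
    """Toy analogue of the MergeComposite rewrite discussed above.

    Expressions are nested tuples: ("add", a, b), ("relu", x), or leaf
    variable names. A relu whose input is an add is collapsed into a
    single composite ("add_relu", a, b) node. The real pass works on
    Relay IR with a user-supplied pattern table."""
    if not isinstance(expr, tuple):
        return expr  # leaf variable
    op, *args = expr
    args = [merge_add_relu(a) for a in args]  # rewrite bottom-up
    if op == "relu" and isinstance(args[0], tuple) and args[0][0] == "add":
        _, a, b = args[0]
        return ("add_relu", a, b)
    return (op, *args)
```

   For example, `("mul", ("relu", ("add", "x", "y")), "z")` becomes
   `("mul", ("add_relu", "x", "y"), "z")`, matching the before/expected
   pair in the test.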




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377403904
 
 

 ##
 File path: include/tvm/relay/op_attr_types.h
 ##
 @@ -207,13 +216,137 @@ enum AnyCodegenStrategy {
   kVariableDimensions
 };
 
-/* \brief A runtime representation of shape. */
+/*! \brief A runtime representation of shape. */
 using Shape = Array<IndexExpr>;
 
 using FShapeFunc = runtime::TypedPackedFunc<
   Array<te::Tensor>(const Attrs& attrs,
-                const Array<te::Tensor>& inputs,
-                const Array<IndexExpr>& out_ndims)>;
+                const Array<te::Tensor>& inputs,
+                const Array<IndexExpr>& out_ndims)>;
+
+/*!
+ * \brief Operator implementation in TVM.
+ */
+class OpImplementNode : public Object {
+ public:
+  /*! \brief Compute function */
+  FTVMCompute fcompute;
+  /*! \brief Schedule function */
+  FTVMSchedule fschedule;
+  /*! \brief Name of the implementation */
+  std::string name;
+  /*! \brief Priority level */
+  int plevel;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("name", &name);
+v->Visit("plevel", &plevel);
+  }
+
+  static constexpr const char* _type_key = "relay.OpImplement";
+  TVM_DECLARE_FINAL_OBJECT_INFO(OpImplementNode, Object);
+};
+
+/*!
+ * \brief Operator implementation class.
+ */
+class OpImplement : public ObjectRef {
+ public:
+  /*!
+   * \brief Invoke the operator compute function.
+   * \param attrs The attribute of the primitive
+   * \param inputs The input tensors.
+   * \param out_type The output type information.
+   * \return The output compute description of the operator.
+   */
+  Array<te::Tensor> Compute(const Attrs& attrs,
+                            const Array<te::Tensor>& inputs,
+                            const Type& out_type);
+  /*!
+   * \brief Build the computation schedule.
+   * \param attrs The attribute of the node.
+   * \param outs The output tensors.
+   * \param target The build target.
+   * \return The computation schedule.
+   */
+  te::Schedule Schedule(const Attrs& attrs,
+                        const Array<te::Tensor>& outs,
+                        const Target& target);
+
+  TVM_DEFINE_OBJECT_REF_METHODS(OpImplement, ObjectRef, OpImplementNode);
+};
+
+/*!
+ * \brief Specialized implementations for operators under certain conditions.
+ */
+class OpSpecializationNode : public Object {
+ public:
+  /*! \brief List of implementations. */
+  Array<OpImplement> implements;
+  /*! \brief Condition to enable the specialization.
+   *Could be undefined to represent generic case. */
+  te::SpecializedCondition condition;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("condition", &condition);
+v->Visit("implements", &implements);
+  }
+
+  static constexpr const char* _type_key = "relay.OpSpecialization";
+  TVM_DECLARE_FINAL_OBJECT_INFO(OpSpecializationNode, ExprNode);
+};
+
+/*!
+ * \brief Operator specialization class.
+ */
+class OpSpecialization : public ObjectRef {
+ public:
+  /*!
+   * \brief Add an implementation.
+   * \param compute Compute function
 
 Review comment:
   fixed




[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-10 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584390133
 
 
   > @alexwong It seems you have problems with alexnet, vgg and mobilenet v2 on 
cuda. In my refactored version, I have no problem with these three. Have a look 
and try my script below. You can parse the module in two ways and compare the 
difference.
   > 
https://github.com/masahi/torchscript-to-tvm/blob/master/torchvision_test.py#L51-L60
   > 
   > I guess the issue is in dtype or optional arguments handling in your op 
conversions. I've prepared [a 
branch](https://github.com/masahi/tvm/tree/torch-refactor) for the refactoring 
PR based on your current implementation, and I can reproduce errors on alexnet, 
vgg and mobilenet v2.
   > 
   > The difference between this branch and the implementation at 
`torchscript-to-tvm` is mostly on op conversion map, that's why I think 
problems are there.
   
   I compared the produced relay graph for mobilenet, vgg, and alexnet and they 
look the same so I'm not sure if it's a parsing issue. VGG and AlexNet have had 
issues with accuracy but the mobilenet issue is a memory thing I think.
   
   ```
   CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX
   ```
   
   I'm reverting to what was passing previously and will re-apply the recent 
changes later tonight. For memory issues, I'm not sure what else I can try at 
this point. It's already pretty extreme about cleaning everything after testing 
a model.




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377403887
 
 

 ##
 File path: include/tvm/relay/op_attr_types.h
 ##
 @@ -207,13 +216,137 @@ enum AnyCodegenStrategy {
   kVariableDimensions
 };
 
-/* \brief A runtime representation of shape. */
+/*! \brief A runtime representation of shape. */
 using Shape = Array<IndexExpr>;
 
 using FShapeFunc = runtime::TypedPackedFunc<
   Array<te::Tensor>(const Attrs& attrs,
-                const Array<te::Tensor>& inputs,
-                const Array<IndexExpr>& out_ndims)>;
+                const Array<te::Tensor>& inputs,
+                const Array<IndexExpr>& out_ndims)>;
+
+/*!
+ * \brief Operator implementation in TVM.
+ */
+class OpImplementNode : public Object {
+ public:
+  /*! \brief Compute function */
+  FTVMCompute fcompute;
+  /*! \brief Schedule function */
+  FTVMSchedule fschedule;
+  /*! \brief Name of the implementation */
+  std::string name;
+  /*! \brief Priority level */
+  int plevel;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+    v->Visit("name", &name);
+    v->Visit("plevel", &plevel);
+  }
+
+  static constexpr const char* _type_key = "relay.OpImplement";
+  TVM_DECLARE_FINAL_OBJECT_INFO(OpImplementNode, Object);
+};
+
+/*!
+ * \brief Operator implementation class.
+ */
+class OpImplement : public ObjectRef {
+ public:
+  /*!
+   * \brief Invoke the operator compute function.
+   * \param attrs The attribute of the primitive
+   * \param inputs The input tensors.
+   * \param out_type The output type information.
+   * \return The output compute description of the operator.
+   */
+  Array<te::Tensor> Compute(const Attrs& attrs,
+                            const Array<te::Tensor>& inputs,
+                            const Type& out_type);
+  /*!
+   * \brief Build the computation schedule.
+   * \param attrs The attribute of the node.
+   * \param outs The output tensors.
+   * \param target The build target.
+   * \return The computation schedule.
+   */
+  te::Schedule Schedule(const Attrs& attrs,
+                        const Array<te::Tensor>& outs,
+                        const Target& target);
+
+  TVM_DEFINE_OBJECT_REF_METHODS(OpImplement, ObjectRef, OpImplementNode);
+};
+
+/*!
+ * \brief Specialized implementations for operators under certain conditions.
+ */
+class OpSpecializationNode : public Object {
+ public:
+  /*! \brief List of implementations. */
+  Array<OpImplement> implements;
+  /*! \brief Condition to enable the specialization.
+   *    Could be undefined to represent generic case. */
+  te::SpecializedCondition condition;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+    v->Visit("condition", &condition);
+    v->Visit("implements", &implements);
+  }
+
+  static constexpr const char* _type_key = "relay.OpSpecialization";
+  TVM_DECLARE_FINAL_OBJECT_INFO(OpSpecializationNode, ExprNode);
+};
+
+/*!
+ * \brief Operator specialization class.
+ */
+class OpSpecialization : public ObjectRef {
+ public:
+  /*!
+   * \brief Add an implementation.
+   * \param compute Compute function
+   * \param schedule Schedule function
 
 Review comment:
   fixed




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377403933
 
 

 ##
 File path: include/tvm/te/schedule.h
 ##
 @@ -742,6 +743,55 @@ class SingletonNode : public IterVarRelationNode {
   TVM_DECLARE_FINAL_OBJECT_INFO(SingletonNode, IterVarRelationNode);
 };
 
+class SpecializedConditionNode;
+
+/*!
+ * \brief Specialized condition to enable op specialization
+ */
+class SpecializedCondition : public ObjectRef {
+ public:
+  SpecializedCondition() {}
+  explicit SpecializedCondition(ObjectPtr<Object> n) : ObjectRef(n) {}
+  /*!
+   * \brief Get the current specialized condition.
+   * \return The current specialized condition.
+   */
+  TVM_DLL static SpecializedCondition Current();
+
+  const SpecializedConditionNode* operator->() const;
+
+  using ContainerType = SpecializedConditionNode;
+  class Internal;
+ private:
+  // enable with syntax.
+  friend class Internal;
+  friend class With<SpecializedCondition>;
+  /*! \brief Push a new specialized condition onto the thread local stack. */
+  TVM_DLL void EnterWithScope();
+  /*! \brief Pop a specialized condition off the thread local context stack. */
+  TVM_DLL void ExitWithScope();
+};
+
+/*! \brief Container for specialization conditions. */
+class SpecializedConditionNode : public Object {
+ public:
+  /*!
+   * \brief List of conditions in conjunctive normal form (CNF).
+   *   Each condition should be a simple expression, e.g., n > 16, m % 8 == 0, etc.,
+   *   where n, m are tvm::Var that represents a dimension in the tensor shape.
+   */
+  Array<PrimExpr> clauses;
+
+  void VisitAttrs(AttrVisitor* v) {
+    v->Visit("clauses", &clauses);
+  }
+
+  static SpecializedCondition make(Array<PrimExpr> conditions);
 
 Review comment:
   fixed




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377403861
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +85,317 @@ def _get_cache_key(source_func, target):
 return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            # check if all the clauses in the specialized condition are true
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
+                    continue
+                flag = False
+                break
+            if flag:
+                for impl in spec.implements:
+                    ret.append(impl)
+        else:
+            for impl in spec.implements:
+                ret.append(impl)
+    return ret
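The filtering in `get_valid_implements` reduces to a simple rule: an implementation is admitted when its specialization has no condition, or when every CNF clause simplifies to a true constant. A minimal plain-Python sketch of that rule follows; the `Impl` and `Spec` containers and the pre-simplified boolean clauses are illustrative stand-ins for the TVM objects, not the real API:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Impl:
    name: str
    plevel: int

@dataclass
class Spec:
    implements: List[Impl]
    # None models the generic (unconditional) specialization; otherwise a
    # list of already-simplified boolean clauses in conjunctive normal form.
    condition: Optional[List[bool]] = None

def valid_implements(specs):
    """Keep implementations whose condition is absent or fully satisfied."""
    ret = []
    for spec in specs:
        if spec.condition is None or all(spec.condition):
            ret.extend(spec.implements)
    return ret

generic = Spec([Impl("conv2d.generic", 10)])
# one clause holds but the other does not, so this specialization is rejected
tiled = Spec([Impl("conv2d.tiled", 15)], condition=[True, False])
print([i.name for i in valid_implements([generic, tiled])])  # ['conv2d.generic']
```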
+
+
+def select_implement(op, attrs, inputs, out_type, target, use_autotvm=True):
+    """Select the best implement from the op strategy.
+
+    If use_autotvm is True, it'll first try to find the best implementation
+    based on AutoTVM profile results. If no AutoTVM profile result is found,
+    it'll choose the implementation with highest plevel.
+
+    If use_autotvm is False, it'll directly choose the implementation with
+    highest plevel.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list[tvm.Tensor]
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    use_autotvm : bool
+        Whether to query AutoTVM to pick the best.
+
+    Returns
+    -------
+    ret : tuple(relay.op.OpImplement, list[tvm.Tensor])
+        The best op implementation and the corresponding output tensors.
+    """
+    all_impls = get_valid_implements(op, attrs, inputs, out_type, target)
+
+    best_plevel_impl = None
+    for impl in all_impls:
+        if best_plevel_impl is None or impl.plevel > best_plevel_impl.plevel:
+            best_plevel_impl = impl
+    if not use_autotvm:
+        outs = best_plevel_impl.compute(attrs, inputs, out_type)
+        return best_plevel_impl, outs
+
+    outputs = {}
+    best_autotvm_impl = None
+    best_cfg = None
+    dispatch_ctx = autotvm.task.DispatchContext.current
+    for impl in all_impls:
+        outs = impl.compute(attrs, inputs, out_type)
+        outputs[impl] = outs
+        workload = autotvm.task.get_workload(outs)
+        if workload is None:
+            continue
+        cfg = dispatch_ctx.query(target, workload)
+        if cfg.cost is None:
+            # It's a fallback config
+            continue
+        if best_cfg is None or best_cfg.cost > cfg.cost:
+            best_autotvm_impl = impl
+            best_cfg = cfg
+    if best_autotvm_impl:
+        return best_autotvm_impl, outputs[best_autotvm_impl]
+    return best_plevel_impl, outputs[best_plevel_impl]
+
+
+class ScheduleGetter(ExprVisitor):
 
 Review comment:
   Now only port part of ScheduleGetter to python.
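The selection logic in `select_implement` boils down to a two-level policy: prefer the tuned implementation with the lowest AutoTVM cost, and fall back to the highest `plevel` when no tuning records exist. A self-contained sketch of that policy, with plain tuples and a cost dict standing in for the real `OpImplement` objects and dispatch context (names here are illustrative, not TVM's API):

```python
def pick_implement(impls, tuned_cost):
    """Pick an implementation following the plevel/AutoTVM policy.

    impls: list of (name, plevel) pairs.
    tuned_cost: dict mapping name -> measured cost, for implementations
    that have an AutoTVM tuning record.
    """
    # fallback: the highest priority level wins when nothing was tuned
    best_plevel = max(impls, key=lambda impl: impl[1])
    tuned = [impl for impl in impls if impl[0] in tuned_cost]
    if not tuned:
        return best_plevel
    # tuned results override plevel: the cheapest measured cost wins
    return min(tuned, key=lambda impl: tuned_cost[impl[0]])

impls = [("conv2d.generic", 10), ("conv2d.winograd", 15)]
print(pick_implement(impls, {}))                       # ('conv2d.winograd', 15)
print(pick_implement(impls, {"conv2d.generic": 0.4}))  # ('conv2d.generic', 10)
```

Note the asymmetry this reproduces: a single tuned record beats an untuned implementation with a higher priority level, which matches the "profile results first, plevel second" description in the docstring.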



[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377403661
 
 

 ##
 File path: python/tvm/schedule.py
 ##
 @@ -650,4 +650,38 @@ def opengl(self):
         """
         _api_internal._StageOpenGL(self)
 
+@tvm._ffi.register_object
+class SpecializedCondition(Object):
+    """Specialized condition to enable op specialization."""
+    def __init__(self, conditions):
+        """Create a specialized condition.
+
+        .. note::
+            Conditions are represented in conjunctive normal form (CNF).
+            Each condition should be a simple expression, e.g., n > 16,
+            m % 8 == 0, etc., where n, m are tvm.Var that represents a
+            dimension in the tensor shape.
+
+        Parameters
+        ----------
+        conditions : List of tvm.Expr
+            List of conditions in conjunctive normal form (CNF).
+        """
+        if not isinstance(conditions, (list, _container.Array)):
+            conditions = [conditions]
+        self.__init_handle_by_constructor__(
+            _api_internal._CreateSpecializedCondition, conditions)
+
+    def __enter__(self):
+        _api_internal._EnterSpecializationScope(self)
+        return self
+
+    def __exit__(self, ptype, value, trace):
+        _api_internal._ExitSpecializationScope(self)
+
+
 
 Review comment:
   Fixed
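The `__enter__`/`__exit__` pair in the diff above mirrors TVM's C++ `With<SpecializedCondition>` scoping: entering pushes the condition onto a thread-local stack and exiting pops it, so the innermost active condition is always visible. A rough Python-only sketch of that pattern, independent of TVM's actual FFI internals (class and method names here are illustrative):

```python
import threading

class ScopedCondition:
    """Context manager keeping a per-thread stack of active conditions."""
    _local = threading.local()

    def __init__(self, clauses):
        self.clauses = clauses

    @classmethod
    def current(cls):
        # returns the innermost active condition, or None outside any scope
        stack = getattr(cls._local, "stack", [])
        return stack[-1] if stack else None

    def __enter__(self):
        if not hasattr(self._local, "stack"):
            self._local.stack = []
        self._local.stack.append(self)
        return self

    def __exit__(self, ptype, value, trace):
        self._local.stack.pop()

with ScopedCondition(["n > 16"]) as outer:
    assert ScopedCondition.current() is outer
    with ScopedCondition(["m % 8 == 0"]) as inner:
        assert ScopedCondition.current() is inner
    # leaving the inner scope restores the outer condition
    assert ScopedCondition.current() is outer
assert ScopedCondition.current() is None
```

Using `threading.local` keeps each thread's condition stack independent, which is why the C++ side documents its Enter/Exit methods as operating on a thread-local stack.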




[GitHub] [incubator-tvm] merrymercy opened a new pull request #4856: [RUNTIME] Fix memory leakage of TVMByteArray

2020-02-10 Thread GitBox
merrymercy opened a new pull request #4856: [RUNTIME] Fix memory leakage of 
TVMByteArray
URL: https://github.com/apache/incubator-tvm/pull/4856
 
 
   cc @tqchen 
   




[GitHub] [incubator-tvm] soiferj commented on issue #4548: Windows support for autotvm - Do not merge

2020-02-10 Thread GitBox
soiferj commented on issue #4548: Windows support for autotvm - Do not merge
URL: https://github.com/apache/incubator-tvm/pull/4548#issuecomment-584421317
 
 
   @jmorrill have you gotten a chance to work on the CPP server PR?




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
zhiics commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377388698
 
 

 ##
 File path: src/runtime/vm/vm.cc
 ##
 @@ -1057,7 +1057,7 @@ runtime::Module CreateVirtualMachine(const Executable* exec) {
   return runtime::Module(vm);
 }
 
-TVM_REGISTER_GLOBAL("relay._vm._VirtualMachine")
+TVM_REGISTER_GLOBAL("runtime._vm._VirtualMachine")
 
 Review comment:
   Sure. Will update in a bit.




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377376605
 
 

 ##
 File path: src/runtime/vm/vm.cc
 ##
 @@ -1057,7 +1057,7 @@ runtime::Module CreateVirtualMachine(const Executable* exec) {
   return runtime::Module(vm);
 }
 
-TVM_REGISTER_GLOBAL("relay._vm._VirtualMachine")
+TVM_REGISTER_GLOBAL("runtime._vm._VirtualMachine")
 
 Review comment:
   then it will be available under runtime._ffi_api




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377376524
 
 

 ##
 File path: src/runtime/vm/vm.cc
 ##
 @@ -1057,7 +1057,7 @@ runtime::Module CreateVirtualMachine(const Executable* exec) {
   return runtime::Module(vm);
 }
 
-TVM_REGISTER_GLOBAL("relay._vm._VirtualMachine")
+TVM_REGISTER_GLOBAL("runtime._vm._VirtualMachine")
 
 Review comment:
   consider just make it ```runtime.VirtualMachine```




[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-10 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584390133
 
 
   > @alexwong It seems you have problems with alexnet, vgg and mobilenet v2 on 
cuda. In my refactored version, I have no problem with these three. Have a look 
and try my script below. You can parse the module in two ways and compare the 
difference.
   > 
https://github.com/masahi/torchscript-to-tvm/blob/master/torchvision_test.py#L51-L60
   > 
   > I guess the issue is in dtype or optional arguments handling in your op 
conversions. I've prepared [a 
branch](https://github.com/masahi/tvm/tree/torch-refactor) for the refactoring 
PR based on your current implementation, and I can reproduce errors on alexnet, 
vgg and mobilenet v2.
   > 
   > The difference between this branch and the implementation at 
`torchscript-to-tvm` is mostly on op conversion map, that's why I think 
problems are there.
   
   I compared the produced relay graph for mobilenet, vgg, and alexnet and they 
look the same so I'm not sure if it's a parsing issue. VGG and AlexNet have had 
issues with accuracy but the mobilenet issue is a memory thing I think.
   
   `
   CUDAError: Check failed: ret == 0 (-1 vs. 0) : 
cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: 
CUDA_ERROR_INVALID_PTX
   `




[GitHub] [incubator-tvm] tqchen commented on issue #4748: [RELAY] Support RelayBuild with Only Constants

2020-02-10 Thread GitBox
tqchen commented on issue #4748: [RELAY] Support RelayBuild with Only Constants
URL: https://github.com/apache/incubator-tvm/issues/4748#issuecomment-584385580
 
 
   I would also like to comment about the future direction here. 
   
   In our vision, eventually everything will be stored in an IRModule, and it 
would be fine to create an empty IRModule and export it. We could also 
in-theory create an EmptyRuntimeModule, or use CSourceModule(with no content) 
as an empty module.




[GitHub] [incubator-tvm] tqchen commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
tqchen commented on issue #4847: Use dummy func when no lowered_funcs exists in 
Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-584384721
 
 
   I agree that perhaps an empty module provides useful middle ground. The 
closest thing so far might be CSourceModule with an empty string 
https://github.com/apache/incubator-tvm/blob/master/src/target/source/source_module.cc#L190




[incubator-tvm] branch master updated (0dbe70c -> b7364b4)

2020-02-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 0dbe70c  [Relay] Added Merge Composite pass (#4771)
 add b7364b4  reverse changes in pr #4849 (#4853)

No new revisions were added by this update.

Summary of changes:
 topi/python/topi/intel_graphics/conv2d.py | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #4853: [Fix] reverse some changes made for intel_graphics/conv2d.py in PR #4849

2020-02-10 Thread GitBox
tqchen merged pull request #4853: [Fix] reverse some changes made for 
intel_graphics/conv2d.py in PR #4849
URL: https://github.com/apache/incubator-tvm/pull/4853
 
 
   




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377348247
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -49,7 +50,7 @@ def _lower(mod,
         grc = graph_runtime_codegen.GraphRuntimeCodegen(None, target)
         grc.codegen(mod["main"])
     # default case
-    compiler = relay.vm.VMCompiler()
+    compiler = runtime.vm.VMCompiler()
 
 Review comment:
   Yes, the idea is that we want the structure to reflect the structure in the 
C++ project.
   
   More importantly, we should be able to isolate the files under 
python/tvm/runtime for runtime only packaging.




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
zhiics commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377346473
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -49,7 +50,7 @@ def _lower(mod,
         grc = graph_runtime_codegen.GraphRuntimeCodegen(None, target)
         grc.codegen(mod["main"])
     # default case
-    compiler = relay.vm.VMCompiler()
+    compiler = runtime.vm.VMCompiler()
 
 Review comment:
   So we can just have a relay/backend/vm.py and python/tvm/runtime/vm.py, 
right?




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
tqchen commented on a change in pull request #4855: [Refactor] move vm.py under 
runtime and adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855#discussion_r377345255
 
 

 ##
 File path: python/tvm/autotvm/task/relay_integration.py
 ##
 @@ -49,7 +50,7 @@ def _lower(mod,
         grc = graph_runtime_codegen.GraphRuntimeCodegen(None, target)
         grc.codegen(mod["main"])
     # default case
-    compiler = relay.vm.VMCompiler()
+    compiler = runtime.vm.VMCompiler()
 
 Review comment:
   we still want to keep the compiler under the original namespace, but only 
move the runtime only component




[GitHub] [incubator-tvm] masahi edited a comment on issue #4741: [External codegen] Add test cases for fused ops with manual annotation

2020-02-10 Thread GitBox
masahi edited a comment on issue #4741: [External codegen] Add test cases for 
fused ops with manual annotation
URL: https://github.com/apache/incubator-tvm/pull/4741#issuecomment-584368589
 
 
   yes I want to update this PR but we don't have a way to hook `Composite` and 
`Compiler` attributes yet, so I couldn't "see" a composite conv + bias + relu 
in CodegenDNNL atm. Refer to the comments below.
   https://github.com/apache/incubator-tvm/pull/4771#issuecomment-578066583 
   https://github.com/apache/incubator-tvm/pull/4771#discussion_r377029670




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
masahi commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377332546
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,439 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import expr
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+"""
+The merge composite pass is designed to merge multiple relay operators, that
+match a given pattern, and combine them into a single relay function.
+
+For example suppose we have the graph:
+
+    conv2d
+      |                      (merge composite pass)
+   bias_add        ====>      conv2d_bias_relu
+      |                          (our target)
+     relu
+
+Our Relay IR before the pass:
+    fn (%data: Tensor[(1, 512, 28, 28), float32], %kernel: Tensor[(256, 512, 1, 1), float32],
+        %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+        %0 = nn.conv2d(%data, %kernel, kernel_size=[1, 1])
+            /* ty=Tensor[(1, 256, 28, 28), float32] */;
+        %1 = nn.bias_add(%0, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+        nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+    }
+
+Our Relay IR after the pass:
+    fn (%data: Tensor[(1, 512, 28, 28), float32], %kernel: Tensor[(256, 512, 1, 1), float32],
+        %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+      %2 = fn (%x: Tensor[(1, 512, 28, 28), float32], %y: Tensor[(256, 512, 1, 1), float32],
+               %z: Tensor[(256), float32], Primitive=1, Composite="conv2d_bias_relu") ->
+               Tensor[(1, 256, 28, 28), float32] {
+        %0 = nn.conv2d(%x, %y, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+        %1 = nn.bias_add(%0, %z) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+        nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+      };
+      %2(%data, %kernel, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */
+    }
+
+As you can see in the second relay example, the pattern we specified has been wrapped
+in a function. The function is then called, producing the same result as the first relay
+example.
+
+One convenient use for this pass is to offload multiple operators to a single external
+codegen function.
+"""
+
+
+def make_add_sub_mul_pattern():
+    """Create a pattern to match the following graph.
+
+        add  sub
+         \   /
+          \ /
+          mul
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    add_node = relay.add(x, y)
+    sub_node = relay.subtract(x, y)
+    mul_node = relay.multiply(add_node, sub_node)
+    return mul_node
+
+
+def make_add_relu_pattern():
+    """Create a pattern to match the following graph.
+
+        add
+         |
+        relu
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    add_node = relay.add(x, y)
+    r = relay.nn.relu(add_node)
+    return r
+
+
+def make_conv_bias_relu_pattern():
+    """Create a pattern to match the following graph.
+
+        conv2d
+          |
+       bias_add
+          |
+         relu
+    """
+    x = relay.var('x')
+    y = relay.var('y')
+    z = relay.var('z')
+    conv_node = relay.nn.conv2d(x, y)
+    bias_node = relay.nn.bias_add(conv_node, z)
+    r = relay.nn.relu(bias_node)
+    return r
+
+
+def test_simple_merge():
+    """Test composite function is correctly produced from simple graph.
+
+    We could expect the pattern `make_add_relu_pattern` to be merged
+    into a single op `add_relu`.
+
+        a  b
+        \ /               a  b
+        add    ====>      \ /
+         |              add_relu
+        relu
+
+    """
+    pattern_table = [
+        ("add_relu", make_add_relu_pattern())
+    ]
+
+    def before():
+        a = relay.var('a', shape=(10, 10))
+        b = relay.var('b', shape=(10, 10))
+        add_node = relay.add(a, b)
+        r = relay.nn.relu(add_node)
+        return relay.Function([a, b], r)
+
+    def expected():
+        a = relay.var('a', shape=(10, 10))
+        b = relay.var('b', shape=(10, 10))
+
+        # add_relu function
+        in_1 = relay.var('in_1', shape=(10, 10))
+        in_2 = relay.var('in_2', shape=(

[GitHub] [incubator-tvm] masahi commented on issue #4741: [External codegen] Add test cases for fused ops with manual annotation

2020-02-10 Thread GitBox
masahi commented on issue #4741: [External codegen] Add test cases for fused 
ops with manual annotation
URL: https://github.com/apache/incubator-tvm/pull/4741#issuecomment-584368589
 
 
   yes I want to update this PR but we don't have a way to hook `Composite` and 
`Compiler` attributes yet, so I couldn't "see" a composite conv + bias + relu 
in CodegenDNNL atm. Refer to the comment below.
   https://github.com/apache/incubator-tvm/pull/4771#issuecomment-578066583 




[GitHub] [incubator-tvm] zhiics commented on issue #4854: [REFACTOR] Move python vm runtime into runtime/vm.py

2020-02-10 Thread GitBox
zhiics commented on issue #4854: [REFACTOR] Move python vm runtime into 
runtime/vm.py
URL: https://github.com/apache/incubator-tvm/issues/4854#issuecomment-584364753
 
 
   #4855 




[GitHub] [incubator-tvm] zhiics opened a new pull request #4855: [Refactor] move vm.py under runtime and adt to runtime.container.py

2020-02-10 Thread GitBox
zhiics opened a new pull request #4855: [Refactor] move vm.py under runtime and 
adt to runtime.container.py
URL: https://github.com/apache/incubator-tvm/pull/4855
 
 
   #4854 
   This PR moves vm.py under the runtime folder and adt to runtime.container
   
   cc @tqchen @wweic @icemelon9 @jroesch 




[GitHub] [incubator-tvm] comaniac commented on issue #4741: [External codegen] Add test cases for fused ops with manual annotation

2020-02-10 Thread GitBox
comaniac commented on issue #4741: [External codegen] Add test cases for fused 
ops with manual annotation
URL: https://github.com/apache/incubator-tvm/pull/4741#issuecomment-584359173
 
 
   As #4771 has been merged, we can revisit this PR for DNNL fuse patterns.




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-10 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r377309086
 
 

 ##
 File path: include/tvm/te/schedule.h
 ##
 @@ -742,6 +743,55 @@ class SingletonNode : public IterVarRelationNode {
   TVM_DECLARE_FINAL_OBJECT_INFO(SingletonNode, IterVarRelationNode);
 };
 
+class SpecializedConditionNode;
+
+/*!
+ * \brief Specialized condition to enable op specialization
+ */
+class SpecializedCondition : public ObjectRef {
+ public:
+  SpecializedCondition() {}
+  explicit SpecializedCondition(ObjectPtr<Object> n) : ObjectRef(n) {}
+  /*!
+   * \brief Get the current specialized condition.
+   * \return The current specialized condition.
+   */
+  TVM_DLL static SpecializedCondition Current();
+
+  const SpecializedConditionNode* operator->() const;
+
+  using ContainerType = SpecializedConditionNode;
+  class Internal;
+ private:
+  // enable with syntax.
+  friend class Internal;
+  friend class With<SpecializedCondition>;
+  /*! \brief Push a new specialized condition onto the thread local stack. */
+  TVM_DLL void EnterWithScope();
+  /*! \brief Pop a specialized condition off the thread local context stack. */
+  TVM_DLL void ExitWithScope();
+};
+
+/*! \brief Container for specialization conditions. */
+class SpecializedConditionNode : public Object {
+ public:
+  /*!
+   * \brief List of conditions in conjunctive joint form (CNF).
+   *   Each condition should be a simple expression, e.g., n > 16, m % 8 == 0, 
etc.,
+   *   where n, m are tvm::Var that represents a dimension in the tensor shape.
+   */
+  Array<PrimExpr> clauses;
+
+  void VisitAttrs(AttrVisitor* v) {
+v->Visit("clauses", &clauses);
+  }
+
+  static SpecializedCondition make(Array<PrimExpr> conditions);
+
+  static constexpr const char* _type_key = "SpecializedCondition";
 
 Review comment:
   It seems none of the existing type keys in te include a "te" prefix.
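   The `Current()` / `EnterWithScope()` / `ExitWithScope()` trio in the diff above is a thread-local context stack driven by RAII `With` scopes. A rough, self-contained Python analogue of that pattern (the names below are illustrative, not the TVM API):

   ```python
   import threading


   class SpecializedCondition:
       # Per-thread stack of active conditions, mirroring the
       # thread-local stack pushed/popped by Enter/ExitWithScope.
       _stack = threading.local()

       def __init__(self, clauses):
           self.clauses = clauses  # e.g. ["n > 16", "m % 8 == 0"]

       @classmethod
       def current(cls):
           """Return the innermost active condition, or None."""
           items = getattr(cls._stack, "items", [])
           return items[-1] if items else None

       def __enter__(self):  # EnterWithScope: push onto the stack
           if not hasattr(self._stack, "items"):
               self._stack.items = []
           self._stack.items.append(self)
           return self

       def __exit__(self, *exc):  # ExitWithScope: pop off the stack
           self._stack.items.pop()


   assert SpecializedCondition.current() is None
   with SpecializedCondition(["n > 16"]) as cond:
       assert SpecializedCondition.current() is cond
   assert SpecializedCondition.current() is None
   ```

   Because the stack is thread-local, nested `with` scopes on different threads cannot observe each other's conditions, which matches the "thread local context stack" wording in the C++ doc comments.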




[GitHub] [incubator-tvm] mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584334732
 
 
   @masahi @comaniac @zhiics Thanks for the reviews. An RFC on alternative 
annotation mechanisms would be great.




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4847: Use dummy func when no 
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377289530
 
 

 ##
 File path: src/relay/backend/build_module.cc
 ##
 @@ -438,13 +442,19 @@ class RelayBuildModule : public runtime::ModuleNode {
 
 auto lowered_funcs = graph_codegen_->GetLoweredFunc();
 if (lowered_funcs.size() == 0) {
-  LOG(WARNING) << "no lowered funcs exist in the compiled module";
-} else {
-  ret_.mod = tvm::build(
-lowered_funcs,
-target_host_,
-BuildConfig::Current());
+  LOG(WARNING) << "No lowered funcs exist in the compiled module, "
+   << "a dummy function \"__dummy__\" will be created.";
+  Stmt body = EvaluateNode::make(0);
+  Array api_args;
+  auto dummy_func = MakeAPI(body, "__dummy__", api_args, 0, false);
+  lowered_funcs.Set("llvm", Array({dummy_func}));
 
 Review comment:
   Is defaulting to LLVM the correct behaviour here (e.g. will this fall over if we build without LLVM support)?




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4847: Use dummy func when no 
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r377290273
 
 

 ##
 File path: src/relay/backend/build_module.cc
 ##
 @@ -438,13 +442,19 @@ class RelayBuildModule : public runtime::ModuleNode {
 
 auto lowered_funcs = graph_codegen_->GetLoweredFunc();
 if (lowered_funcs.size() == 0) {
-  LOG(WARNING) << "no lowered funcs exist in the compiled module";
-} else {
-  ret_.mod = tvm::build(
-lowered_funcs,
-target_host_,
-BuildConfig::Current());
+  LOG(WARNING) << "No lowered funcs exist in the compiled module, "
 
 Review comment:
   Do we need to retain this warning? With external codegen, having no lowered 
funcs can be a perfectly normal mode of operation.




[GitHub] [incubator-tvm] masahi commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
masahi commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584313916
 
 
   Thanks @mbarrett97 @comaniac @zhiics @anijain2305 this is merged.




[incubator-tvm] branch master updated: [Relay] Added Merge Composite pass (#4771)

2020-02-10 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 0dbe70c  [Relay] Added Merge Composite pass (#4771)
0dbe70c is described below

commit 0dbe70c16dd6d8a2f7596a175544589e6b05e711
Author: mbarrett97 <55580676+mbarret...@users.noreply.github.com>
AuthorDate: Mon Feb 10 19:39:20 2020 +

[Relay] Added Merge Composite pass (#4771)

* [Relay] Added MergeComposite pass

This pass allows for patterns to be wrapped
in a function marked with 'Composite' and a
composite function name. This is intended to be
used with the external codegen for the cases where
an external operator maps to multiple Relay
operators. In that case, the mapping can be expressed
as a pattern and assigned a name.

For more information on this pass and its motivation,
see the RFC:

https://discuss.tvm.ai/t/rfc-external-codegen-defining-composite-relay-operators/5470

Change-Id: Icb1b803a9f0ac57c529143200228f3bb5793afc0

* [Relay] Merge composite tests

Added tests for the merge_composite pass.

Change-Id: I1728b4a05b0c1c36140a40f1afe028fde62185dd

* Merge composite additional test

Change-Id: I9bc7d6053c575e9468ac5abc31214c6ad8507e46

* Support priority order in merge_composite

The order in which the patterns are matched
was currently random as an unordered_map was
used to store the pattern table. This uses
arrays instead so that a distinct priority
order of matching can be defined. Additional
tests have also been added to verify this
behaviour.

Change-Id: Ief347df4262639138d5d9d7c8cee7ef233af7b56

* Improved merge composite docs

Change-Id: Ie3a72045ecc3f13ad3c302fbdf192b7296a306a8

* Removed unused variable

Change-Id: I7814d5fde368ffaf1b3d6d806060c774c7720364

* Remove unnecessary op check

Change-Id: I38e78d2acd5b86cb8e837be72ff9d72cd10bcf33

* Improve styling on composite function creation

Change-Id: I37add1c3134e0b5d5085fe1eb9daf8e06890fa8c

* Comment reword

Change-Id: Ie05872dcbbe0c3e1190b0597083b9a64e6b66c66

* Stylistic changes to avoid std::move

Change-Id: I43a93995bbf10530399900c992aa99dd4ae4575f

* Relax a check in ExtractPattern

Change-Id: I0faef77a66c55f83f09e6e47c561ffaea63dedfa

* Remove new line

Change-Id: Ifdd02c12087a7e1a0a9b54825669bc0de8f13c3d

* Removed MatchPattern from MergeComposite

This is not necessary now that ExtractPattern
can fulfill the same purpose.

Change-Id: I14dc020afa8e50f2df4c0a2efb88a011987f8196

* Removed a new line

Change-Id: I8b50f0c9069aa1bcaccbe68eb421031f01a64842

* Improved docs for merge composite

Change-Id: Ib1959a35c856e7ea5639de2e4ef314a54f44caf5

* Fixed free vars in test

Change-Id: I2b7f273db275964ec0e9820560663f0808adee79

* Handle case where root arg might not be a call

Change-Id: I4eeea3ce723d3ba337d110dcc690377daebe8626

* Removed blank line

Change-Id: I07f5392c0e95cfe3cfa5c333703cc6f82d6034fb

* Change to CHECK_EQ

Change-Id: I5c5d62d3cd57f72508b30b926f72091ae6f0d1cc

* Revised a conditional

Change-Id: I23a7897ca15a7cd076db5039dc653a4b8c27e803

* Improved doc styling

Change-Id: I377f0a1c1ac70f3b8d7584b0c49bddc8c6c134ef

* Fail extraction if vars conflict

Change-Id: I78e36d805e8ed6b55e61d490212a967c857554a4

* Added further merge composite tests

Change-Id: Ib1d800409fca4c1834c7fe0cab5a26ab99a26820

Co-authored-by: lhutton1 <35535092+lhutt...@users.noreply.github.com>
---
 include/tvm/relay/expr.h|   2 +
 python/tvm/relay/transform.py   |  25 +
 src/relay/pass/merge_composite.cc   | 218 +
 tests/python/relay/test_pass_merge_composite.py | 609 
 4 files changed, 854 insertions(+)

diff --git a/include/tvm/relay/expr.h b/include/tvm/relay/expr.h
index 64f2278..1dcf957 100644
--- a/include/tvm/relay/expr.h
+++ b/include/tvm/relay/expr.h
@@ -561,6 +561,8 @@ constexpr const char* kParams = "__params__";
 constexpr const char* kExternalSymbol = "ExternalSymbol";
 /*! \brief Mark if the function should be avoided being optimized. */
 constexpr const char* kSkipOptimization = "SkipOptimization";
+/*! \brief Treat the function as a composite operator. */
+constexpr const char* kComposite = "Composite";
 }  // namespace attr
 
 }  // namespace relay
diff --git a/python/tvm/relay/transform.py b/python/tvm/relay/transform.py
index 26b20e0..cfca4a6 100644
--- a/python/tvm/relay/transform.py
+++ b/python/tvm/relay/transform.py

[GitHub] [incubator-tvm] masahi commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
masahi commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584313221
 
 
   @zhiics @comaniac It is worth discussing if we can use composite and 
partitioning passes to remove the annotation pass, as mentioned by @mbarrett97  
  




[GitHub] [incubator-tvm] masahi merged pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
masahi merged pull request #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771
 
 
   




[GitHub] [incubator-tvm] zhiics commented on issue #4854: [REFACTOR] Move python vm runtime into runtime/vm.py

2020-02-10 Thread GitBox
zhiics commented on issue #4854: [REFACTOR] Move python vm runtime into 
runtime/vm.py
URL: https://github.com/apache/incubator-tvm/issues/4854#issuecomment-584312758
 
 
   Yeah, I just noticed this a few days ago. Will take a stab at it.




[GitHub] [incubator-tvm] tqchen opened a new issue #4854: [REFACTOR] Move python vm runtime into runtime/vm.py

2020-02-10 Thread GitBox
tqchen opened a new issue #4854: [REFACTOR] Move python vm runtime into 
runtime/vm.py
URL: https://github.com/apache/incubator-tvm/issues/4854
 
 
   As in https://github.com/apache/incubator-tvm/pull/4818, we are moving the 
core runtime-related modules in Python to the runtime folder.
   
   We will need to do the same thing for relay related codes, in particular:
   - vm.py perhaps needs to go to the runtime folder
   - Data structures (e.g. ADT) go to runtime/container.py
   




[GitHub] [incubator-tvm] tqchen commented on issue #4854: [REFACTOR] Move python vm runtime into runtime/vm.py

2020-02-10 Thread GitBox
tqchen commented on issue #4854: [REFACTOR] Move python vm runtime into 
runtime/vm.py
URL: https://github.com/apache/incubator-tvm/issues/4854#issuecomment-584311898
 
 
   cc @zhiics @wweic please see if you are interested




[GitHub] [incubator-tvm] tqchen commented on issue #4852: [Bugfix] Fixed crash caused by reversing bitwise operations

2020-02-10 Thread GitBox
tqchen commented on issue #4852: [Bugfix] Fixed crash caused by reversing 
bitwise operations
URL: https://github.com/apache/incubator-tvm/pull/4852#issuecomment-584310813
 
 
   Thanks @dpankratz !




[incubator-tvm] branch master updated (ee2d3cc -> d55e21f)

2020-02-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ee2d3cc  [Frontend][TFlite] use qnn helper function in softmax (#4840)
 add d55e21f  Fixed bug in ExprOp that caused bitwise operators to fail 
when a basic python type was on the left hand side of the expression. Added 
regression test for crashing cases. (#4852)

No new revisions were added by this update.

Summary of changes:
 python/tvm/expr.py   | 9 +
 tests/python/unittest/test_lang_basic.py | 3 +++
 2 files changed, 12 insertions(+)
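The fix summarized above concerns Python's reflected operator protocol: `2 & expr` dispatches to `expr.__rand__` once `int.__and__` returns `NotImplemented`, so a class that only defines `__and__` crashes when the plain value is on the left-hand side. A minimal self-contained sketch of the failure mode and fix (the `Expr` class here is a toy stand-in, not TVM's `ExprOp`):

```python
class Expr:
    """Toy expression wrapper standing in for a TVM-style ExprOp mixin."""

    def __init__(self, value):
        self.value = value

    def __and__(self, other):
        # Handles `Expr & x` (Expr on the left).
        other = other.value if isinstance(other, Expr) else other
        return Expr(self.value & other)

    def __rand__(self, other):
        # Handles `x & Expr` (plain int on the left). Without this
        # reflected method, `2 & Expr(3)` raises TypeError.
        return Expr(other & self.value)


assert (Expr(6) & 3).value == 2   # normal path
assert (6 & Expr(3)).value == 2   # reflected path exercised by the fix
```

The same pairing (`__or__`/`__ror__`, `__xor__`/`__rxor__`) applies to the other bitwise operators the regression test covers.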



[GitHub] [incubator-tvm] tqchen merged pull request #4852: [Bugfix] Fixed crash caused by reversing bitwise operations

2020-02-10 Thread GitBox
tqchen merged pull request #4852: [Bugfix] Fixed crash caused by reversing 
bitwise operations
URL: https://github.com/apache/incubator-tvm/pull/4852
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4847: Use dummy func when no lowered_funcs exists in Relay mod

2020-02-10 Thread GitBox
tqchen commented on issue #4847: Use dummy func when no lowered_funcs exists in 
Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-584305730
 
 
   cc @zhiics @FrozenGene 




[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-10 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-584303447
 
 
   > @alexwong It seems you have problems with alexnet, vgg and mobilenet v2 on 
cuda. In my refactored version, I have no problem with these three. Have a look 
and try my script below. You can parse the module in two ways and compare the 
difference.
   > 
https://github.com/masahi/torchscript-to-tvm/blob/master/torchvision_test.py#L51-L60
   > 
   > I guess the issue is in dtype or optional arguments handling in your op 
conversions. I've prepared [a 
branch](https://github.com/masahi/tvm/tree/torch-refactor) for the refactoring 
PR based on your current implementation, and I can reproduce errors on alexnet, 
vgg and mobilenet v2.
   > 
   > The difference between this branch and the implementation at 
`torchscript-to-tvm` is mostly on op conversion map, that's why I think 
problems are there.
   
   Okay, will check it out. I was assuming the vgg and alexnet errors were due 
to the topi implementation of one of the adaptive poolings as it didn't seem 
like any other frontend used these. If it's in op conversion though, that's an 
easier fix.




[GitHub] [incubator-tvm] tqchen commented on issue #4853: [Fix] reverse some changes made for intel_graphics/conv2d.py in PR #4849

2020-02-10 Thread GitBox
tqchen commented on issue #4853: [Fix] reverse some changes made for 
intel_graphics/conv2d.py in PR #4849
URL: https://github.com/apache/incubator-tvm/pull/4853#issuecomment-584296417
 
 
   sorry for the oversight




[GitHub] [incubator-tvm] Laurawly opened a new pull request #4853: [Fix] reverse some changes made for `intel_graphics/conv2d.py` in PR #4849

2020-02-10 Thread GitBox
Laurawly opened a new pull request #4853: [Fix] reverse some changes made for 
`intel_graphics/conv2d.py` in PR #4849
URL: https://github.com/apache/incubator-tvm/pull/4853
 
 
   In recent PR #4849, code after line 51 was mistakenly moved under the for loop in [line 48](https://github.com/apache/incubator-tvm/commit/b528acc143dd8a09b322ba0845743e18ae206e22#diff-f42aee9848a02ce033e17e390757f49bL48), which caused `tile_ic` and other `cfg` params to be undefined when the output channel size is a multiple of 16. This PR corrects that change.
   @comaniac @tqchen 
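The failure mode described in this PR (a definition accidentally indented under a branch or loop, so later code sees it only on some paths) can be reproduced in miniature; the function and variable names below are hypothetical, not the actual `intel_graphics/conv2d.py` code:

```python
def configure_broken(out_channel):
    """Bug: the cfg param is defined only inside the branch."""
    tile_ic = None
    if out_channel % 16 != 0:   # branch entered only for some shapes
        tile_ic = 8             # definition mistakenly moved under it
    return tile_ic              # None when out_channel % 16 == 0


def configure_fixed(out_channel):
    """Fix: the definition is restored to run on every path."""
    if out_channel % 16 != 0:
        pass                    # shape-specific handling stays here
    tile_ic = 8                 # cfg param defined unconditionally
    return tile_ic


assert configure_broken(32) is None  # param undefined for multiples of 16
assert configure_fixed(32) == 8
```

This is why the symptom only appeared when the output channel size was a multiple of 16: exactly the shapes for which the mis-indented branch was skipped.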




[GitHub] [incubator-tvm] yzhliu commented on issue #2498: [TVM] Automatic differentiation for tensor expressions

2020-02-10 Thread GitBox
yzhliu commented on issue #2498: [TVM] Automatic differentiation for tensor 
expressions
URL: https://github.com/apache/incubator-tvm/pull/2498#issuecomment-584290098
 
 
   @sgrechanik-h would you mind if someone take over and rebuild it on top of 
you work?




[GitHub] [incubator-tvm] anijain2305 commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
anijain2305 commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584286159
 
 
   Oh, just saw. The tests are already added. Thanks, it's good from my side :)




[GitHub] [incubator-tvm] anijain2305 commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
anijain2305 commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584284517
 
 
   My only request is to add more test cases. In my experience, things get very 
ugly as the networks get bigger, and what seem like corner cases become very 
common. But I am ok with the scope of this PR and delaying aggressive testing 
to a later PR. I think it helps in a quick flush of the e2e pipeline.




[GitHub] [incubator-tvm] zhiics commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
zhiics commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584274614
 
 
   @mbarrett97 Considering that this is a relatively standalone pass and it 
helps fusion with external functions, I think it is okay to take it in.
   
   For the pattern matching cases, one possible way I am thinking of is 
automatically generating possible patterns given some metadata.
   
   @comaniac and I will start an RFC to discuss how we can add the whitelist 
based annotation back. It probably will leverage the pattern matching here.
   
   @masahi @anijain2305 PTAL and share your comments if there is any. Thanks.




[GitHub] [incubator-tvm] anijain2305 commented on issue #4816: [TFLite] Using real image for QNN testing.

2020-02-10 Thread GitBox
anijain2305 commented on issue #4816: [TFLite] Using real image for QNN testing.
URL: https://github.com/apache/incubator-tvm/pull/4816#issuecomment-584254304
 
 
   @FrozenGene Good to merge?




[GitHub] [incubator-tvm] tqchen commented on issue #4840: [Frontend][TFlite] use qnn helper function in softmax

2020-02-10 Thread GitBox
tqchen commented on issue #4840: [Frontend][TFlite] use qnn helper function in 
softmax
URL: https://github.com/apache/incubator-tvm/pull/4840#issuecomment-584249745
 
 
   Thanks @u99127 @wyc-ruiker @inadob !




[incubator-tvm] branch master updated (13f2155 -> ee2d3cc)

2020-02-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 13f2155  [CI][DOCKER] Update ci-lint to pylint2.4.4 (#4851)
 add ee2d3cc  [Frontend][TFlite] use qnn helper function in softmax (#4840)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py | 11 ++-
 1 file changed, 2 insertions(+), 9 deletions(-)



[GitHub] [incubator-tvm] tqchen merged pull request #4840: [Frontend][TFlite] use qnn helper function in softmax

2020-02-10 Thread GitBox
tqchen merged pull request #4840: [Frontend][TFlite] use qnn helper function in 
softmax
URL: https://github.com/apache/incubator-tvm/pull/4840
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-02-10 Thread GitBox
tqchen commented on issue #4845: [DEV] TVM v0.7 Roadmap
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-584249293
 
 
   @Coderx7 as far as I know, Intel graphics is already supported in a limited 
setting by @Laurawly. @etom42 it would be great if you could start an RFC thread 
in the discuss forum to keep the community aware of what you are working on -- 
it also brings chances for collaboration and suggestions :)




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377133432
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,158 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+
+def make_add_sub_mul_pattern():
+"""Create a pattern to match the following graph.
+
+add  sub
+ \   /
+  \ /
+  mul
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+sub_node = relay.subtract(x, y)
+mul_node = relay.multiply(add_node, sub_node)
+return mul_node
+
+
+def make_add_relu_pattern():
+"""Create a pattern to match the following graph.
+
+add
+ |
+   ReLu
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+r = relay.nn.relu(add_node)
+return r
+
+
+def test_simple_merge():
+"""Test composite function is correctly produced from simple graph.
+
+We could expect the pattern `make_add_relu_pattern` to be merged
+into a single op `add_relu`.
+
+a  b
+\ /   a  b
+add>  \ /
+ | add_relu
+   ReLu
+
+"""
+pattern_table = {
+"add_sub_mul": make_add_relu_pattern()
+}
+
+def before():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+add_node = relay.add(a, b)
+r = relay.nn.relu(add_node)
+return relay.Function([a, b], r)
+
+def expected():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+
+# add_relu function
+in_1 = relay.var('in_1', shape=(10, 10))
+in_2 = relay.var('in_2', shape=(10, 10))
+add_node = relay.add(in_1, in_2)
+relu_node = relay.nn.relu(add_node)
+add_relu = relay.Function([in_1, in_2], relu_node)
+
+# merged function
+r = relay.Call(add_relu, [a, b])
+return relay.Function([a, b], r)
+
+result = run_opt_pass(before(), 
relay.transform.MergeComposite(pattern_table))
+expected = run_opt_pass(expected(), relay.transform.InferType())
+assert relay.analysis.alpha_equal(result, expected)
+
 
 Review comment:
   Unfortunately this doesn't seem to be exposed by the Python API (I can set 
attributes but not retrieve them).




[GitHub] [incubator-tvm] Hzfengsy commented on issue #4651: Tensor Expression Debug Display (TEDD)

2020-02-10 Thread GitBox
Hzfengsy commented on issue #4651: Tensor Expression Debug Display (TEDD)
URL: https://github.com/apache/incubator-tvm/pull/4651#issuecomment-584170658
 
 
   Thank you for the update.
   
   I think the location on `dmlc/web-data` is good, but it may need others' help 
to commit.
   
   You can render the tutorial at local by
   ```
   python3 -m pip install sphinx-gallery
   cd tvm_path/docs
   make html
   ```




[GitHub] [incubator-tvm] yongfeng-nv commented on issue #4651: Tensor Expression Debug Display (TEDD)

2020-02-10 Thread GitBox
yongfeng-nv commented on issue #4651: Tensor Expression Debug Display (TEDD)
URL: https://github.com/apache/incubator-tvm/pull/4651#issuecomment-584163670
 
 
   @tqchen, @Hzfengsy
   I have updated this PR with changes listed in the forum:
   https://discuss.tvm.ai/t/visualize-tensor-expression/5174/12?u=maplegu
   
   Two questions about the tutorial: 
   1. As graphviz is not always available in CI, shall I use static images in 
the tutorial?  If so, my current submission uses image locations such as 
https://github.com/dmlc/web-data/raw/master/tvm/tutorial/tedd_st.png.  I 
haven't committed any image yet.  Is it a good location?
   
   2. I would like to inspect the rendered tutorial.  How to view it in my repo?




[GitHub] [incubator-tvm] etom42 commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-02-10 Thread GitBox
etom42 commented on issue #4845: [DEV] TVM v0.7 Roadmap
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-584123110
 
 
   External codegen integration with uTVM is highly interesting for us at CEVA.
   We are looking at compiling networks using TVM, generating code with our 
implementation of codegen, and then executing on CEVA hardware via uTVM.
   The goal (besides getting it done) is to share it via the TVM GitHub repo.




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377029670
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,439 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import expr
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+"""
+The merge composite pass is designed to merge multiple relay operators, that
+match a given pattern, and combine them into a single relay function.
+
+For example suppose we have the graph:
+
+        conv2d
+          |                    (merge composite pass)
+       bias_add      ---->      conv2d_bias_relu
+          |                        (our target)
+         relu
+
+Our Relay IR before the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32],
+    %kernel: Tensor[(256, 512, 1, 1), float32],
+    %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+    %0 = nn.conv2d(%data, %kernel, kernel_size=[1, 1])
+        /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    %1 = nn.bias_add(%0, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+Our Relay IR after the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32],
+    %kernel: Tensor[(256, 512, 1, 1), float32],
+    %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+  %2 = fn (%x: Tensor[(1, 512, 28, 28), float32],
+           %y: Tensor[(256, 512, 1, 1), float32],
+           %z: Tensor[(256), float32],
+           Primitive=1, Composite="conv2d_bias_relu") ->
+           Tensor[(1, 256, 28, 28), float32] {
+    %0 = nn.conv2d(%x, %y, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    %1 = nn.bias_add(%0, %z) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+  };
+  %2(%data, %kernel, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+As you can see in the second relay example, the pattern we specified has been
+wrapped in a function. The function is then called, producing the same result
+as the first relay example.
+
+One convenient use for this pass is to offload multiple operators to a single
+external codegen function.
+"""
+
+
+def make_add_sub_mul_pattern():
+"""Create a pattern to match the following graph.
+
+add  sub
+ \   /
+  \ /
+  mul
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+sub_node = relay.subtract(x, y)
+mul_node = relay.multiply(add_node, sub_node)
+return mul_node
+
+
+def make_add_relu_pattern():
+"""Create a pattern to match the following graph.
+
+add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+r = relay.nn.relu(add_node)
+return r
+
+
+def make_conv_bias_relu_pattern():
+"""Create a pattern to match the following graph.
+
+   conv2d
+ |
+  bias_add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+z = relay.var('z')
+conv_node = relay.nn.conv2d(x, y)
+bias_node = relay.nn.bias_add(conv_node, z)
+r = relay.nn.relu(bias_node)
+return r
+
+
+def test_simple_merge():
+"""Test composite function is correctly produced from simple graph.
+
+We could expect the pattern `make_add_relu_pattern` to be merged
+into a single op `add_relu`.
+
+    a  b
+    \ /               a  b
+    add      ---->    \ /
+     |              add_relu
+    relu
+
+"""
+pattern_table = [
+("add_relu", make_add_relu_pattern())
+]
+
+def before():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+add_node = relay.add(a, b)
+r = relay.nn.relu(add_node)
+return relay.Function([a, b], r)
+
+def expected():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+
+# add_relu function
+in_1 = relay.var('in_1', shape=(10, 10))
+in_2 = relay.var('in_2', sha

[GitHub] [incubator-tvm] FrozenGene commented on issue #4628: [Object] Add String container

2020-02-10 Thread GitBox
FrozenGene commented on issue #4628: [Object] Add String container
URL: https://github.com/apache/incubator-tvm/pull/4628#issuecomment-584097003
 
 
   > @FrozenGene In general I agree that we should avoid std::experimental.
   > 
   > In this particular case, I think the usage is fair, because it is guarded 
by macro tests and is only used in a very limited case where we need a 
std::hash function that can hash a string without copying it (instead of using 
the string_view data structure).
   > 
   > * T0: We could have implemented a hash function by ourselves, but the hash 
itself may be inconsistent with the std version.
   > * T1: While std::experimental::string_view's hash could have been 
inconsistent with the std::string version depending on the compiler version 
(because it is experimental), in practice it is consistent with std::string as 
per the string_view proposal (and this can be confirmed using different 
compilers).  More importantly, it is also fine if the hash is inconsistent with 
the std version (then we are simply in the case of T0).
   > 
   > Given the above considerations, I think it is fine to permit the limited 
use case. However, I agree that we should have more careful documentation 
about the std::experimental use here and limit it to this specific use case.
   
   OK. One minor comment: let us add a comment in the C++14 experimental part 
noting which compiler versions we have tested, e.g. GCC 5.4 / Clang 7.0.




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377023736
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'. This is primarily intended to be used alongside the
+ * external codegen infrastructure to support the case where multiple
+ * Relay operators map to a single external operator.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const std::string& pattern_name, const Expr& pattern)
+    : pattern_name_(pattern_name), pattern_(pattern) {}
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      // if we haven't encountered this var yet, make a new free var and
+      // associate it with the value at 'root'
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return std::move(free_var);
+    } else {
+      // if we have encountered this var already, return the free var that was created
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  /* How does this work?
+   *
+   * A pattern consists of a Relay expression containing only operator call
+   * nodes, constants and free variables. The free variables indicate where
+   * the pattern can 'attach' in your graph. This function takes the final
+   * call node of the pattern and the call node currently being traversed in
+   * the Relay graph. It traverses through the pattern in lockstep with the
+   * call node from the graph (referred to as the 'root' node here) to check
+   * they're identical. If at any point they differ, an empty expression is
+   * returned to signify the extraction failed. If a free var is reached in
+   * the pattern, the corresponding value in the root is associated with the
+   * name of the free var (via the var_map) so that when we construct the
+   * composite function, the inputs match up correctly with the rest of the
+   * graph. The return value of this function, when successful, is a new
+   * Relay expression ready to be wrapped into a composite function.
+   */
+  Expr ExtractPattern(const Call& pattern, const Call& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    // check to make sure both calls are to operators (not functions)
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return Expr();
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return Expr();
+
+    unsigned int i = 0;
+    Array<Expr> new_args;
+    for (const auto& arg : pattern->args) {
+      Expr new_arg;
+      if (arg->IsInstance<CallNode>()) {
+        // fail if the root argument is not also a call node
+        if (!root->args[i]->IsInstance<CallNode>()) {
+          return Expr();
+        }
+        // if it's a call node, recursively call this function
+        new_arg = ExtractPattern(Downcast<Call>(arg),
+                                 Downcast<Call>(root->args[i]),
+                                 var_map);
+      } else if (arg->IsInstance<VarNode>()) {
+        // if there's a var in the pattern, it must be a free var
+        // so call the function to update the var_map
+        new_arg = ExtractPattern(Downcast<Var>(arg),
+                                 root->args[i],
+                                 var_map);
+      } else if (arg->IsInstance<ConstantNode>()) {
+        // if there's a constant, simply get the corresponding
+        // value of the constant from the root
+        new_arg = ExtractPattern(Downcast<Constant>(arg),
+                                 root->args[i],
+                                 var_map);
+      }
+      if (!new_arg.defined()) {
+        return Expr();
+      }

[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377003977
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,439 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import expr
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+"""
+The merge composite pass is designed to merge multiple relay operators, that
+match a given pattern, and combine them into a single relay function.
+
+For example suppose we have the graph:
+
+        conv2d
+          |                    (merge composite pass)
+       bias_add      ---->      conv2d_bias_relu
+          |                        (our target)
+         relu
+
+Our Relay IR before the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32],
+    %kernel: Tensor[(256, 512, 1, 1), float32],
+    %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+    %0 = nn.conv2d(%data, %kernel, kernel_size=[1, 1])
+        /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    %1 = nn.bias_add(%0, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+Our Relay IR after the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32],
+    %kernel: Tensor[(256, 512, 1, 1), float32],
+    %bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), float32] {
+  %2 = fn (%x: Tensor[(1, 512, 28, 28), float32],
+           %y: Tensor[(256, 512, 1, 1), float32],
+           %z: Tensor[(256), float32],
+           Primitive=1, Composite="conv2d_bias_relu") ->
+           Tensor[(1, 256, 28, 28), float32] {
+    %0 = nn.conv2d(%x, %y, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    %1 = nn.bias_add(%0, %z) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+    nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+  };
+  %2(%data, %kernel, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+As you can see in the second relay example, the pattern we specified has been
+wrapped in a function. The function is then called, producing the same result
+as the first relay example.
+
+One convenient use for this pass is to offload multiple operators to a single
+external codegen function.
+"""
+
+
+def make_add_sub_mul_pattern():
+"""Create a pattern to match the following graph.
+
+add  sub
+ \   /
+  \ /
+  mul
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+sub_node = relay.subtract(x, y)
+mul_node = relay.multiply(add_node, sub_node)
+return mul_node
+
+
+def make_add_relu_pattern():
+"""Create a pattern to match the following graph.
+
+add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+r = relay.nn.relu(add_node)
+return r
+
+
+def make_conv_bias_relu_pattern():
+"""Create a pattern to match the following graph.
+
+   conv2d
+ |
+  bias_add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+z = relay.var('z')
+conv_node = relay.nn.conv2d(x, y)
+bias_node = relay.nn.bias_add(conv_node, z)
+r = relay.nn.relu(bias_node)
+return r
+
+
+def test_simple_merge():
+"""Test composite function is correctly produced from simple graph.
+
+We could expect the pattern `make_add_relu_pattern` to be merged
+into a single op `add_relu`.
+
+    a  b
+    \ /               a  b
+    add      ---->    \ /
+     |              add_relu
+    relu
+
+"""
+pattern_table = [
+("add_relu", make_add_relu_pattern())
+]
+
+def before():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+add_node = relay.add(a, b)
+r = relay.nn.relu(add_node)
+return relay.Function([a, b], r)
+
+def expected():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+
+# add_relu function
+in_1 = relay.var('in_1', shape=(10, 10))
+in_2 = relay.var('in_2', sha

[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r377003354
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,439 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import expr
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+"""
 
 Review comment:
   To be more general, yes, we will need to think about this case. For this 
first iteration I've just considered patterns which are composed of Calls. I'd 
prefer to start with this and add the additional functionality as and when 
it's required.




[GitHub] [incubator-tvm] mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass

2020-02-10 Thread GitBox
mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-584074285
 
 
   @zhiics Regarding the add-sub/sub-add case, yes, this would require two 
patterns, with the order of merging controlled by their priority. I can't think 
of any general way to express both of these cases as a single pattern, but if 
you have any thoughts I'd be glad to hear them. There is potentially an issue 
with requiring lots of patterns, and if we can come up with some concrete 
examples where that may be the case then I can try to reason about how to 
improve the pattern matching.
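   To illustrate the two-pattern idea, here is a toy sketch in plain Python 
(not the actual Relay pattern infrastructure -- the tuple encoding and the 
`matches`/`first_match` helpers are hypothetical): expressions are nested 
`(op, *args)` tuples, free variables are strings, and the table is tried in 
priority order, so the first matching pattern wins.

```python
def matches(pattern, expr, bindings=None):
    """Walk pattern and expr in lockstep; free vars bind to subtrees.

    Returns the binding dict on success, or None on mismatch.
    """
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str):
        # free var: bind it, or check consistency with an earlier binding
        if pattern in bindings:
            return bindings if bindings[pattern] == expr else None
        bindings[pattern] = expr
        return bindings
    # operator node: ops and arities must agree, then match args in lockstep
    if not isinstance(expr, tuple) or expr[0] != pattern[0] \
            or len(expr) != len(pattern):
        return None
    for p_arg, e_arg in zip(pattern[1:], expr[1:]):
        if matches(p_arg, e_arg, bindings) is None:
            return None
    return bindings

# Two separate entries cover the two orderings; list order encodes priority.
pattern_table = [
    ("add_sub", ("sub", ("add", "x", "y"), "z")),  # add feeding a sub
    ("sub_add", ("add", ("sub", "x", "y"), "z")),  # sub feeding an add
]

def first_match(expr):
    """Return the name of the highest-priority pattern matching expr."""
    for name, pat in pattern_table:
        if matches(pat, expr) is not None:
            return name
    return None
```

For example, `first_match(("sub", ("add", 1, 2), 3))` reports `"add_sub"`, 
while the mirrored graph reports `"sub_add"` -- two patterns, one table.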




[GitHub] [incubator-tvm] inadob removed a comment on issue #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-02-10 Thread GitBox
inadob removed a comment on issue #4805: [Frontend][TFlite] Add parser support 
for relu6, leaky_relu, relu_n1_to_1, log_softmax
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-584053528
 
 
   > Relu and Clip implementation does not look right.
   > 
   > We can keep the computation in integer domain. The way to do that is to 
subtract the input zero point, and then call Relu, then requantize to the 
output scale (only if output scale/zero point are different from input 
scale/zero point).
   
   Is that lowering to int8, not int32? @anijain2305 




[GitHub] [incubator-tvm] inadob edited a comment on issue #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-02-10 Thread GitBox
inadob edited a comment on issue #4805: [Frontend][TFlite] Add parser support 
for relu6, leaky_relu, relu_n1_to_1, log_softmax
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-584053528
 
 
   > Relu and Clip implementation does not look right.
   > 
   > We can keep the computation in integer domain. The way to do that is to 
subtract the input zero point, and then call Relu, then requantize to the 
output scale (only if output scale/zero point are different from input 
scale/zero point).
   
   Is that lowering to int8, not int32? @anijain2305 




[GitHub] [incubator-tvm] inadob commented on issue #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-02-10 Thread GitBox
inadob commented on issue #4805: [Frontend][TFlite] Add parser support for 
relu6, leaky_relu, relu_n1_to_1, log_softmax
URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-584053528
 
 
   > Relu and Clip implementation does not look right.
   > 
   > We can keep the computation in integer domain. The way to do that is to 
subtract the input zero point, and then call Relu, then requantize to the 
output scale (only if output scale/zero point are different from input 
scale/zero point).
   
   Is that lowering to int8, not int32?
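   For reference, a small sketch in plain Python of the integer-domain scheme 
described in the quote (this is not the TVM QNN API -- `qrelu_same_qparams` 
and `qrelu` are hypothetical helpers operating on lists of quantized ints):

```python
def qrelu_same_qparams(q_in, zero_point):
    # When input and output scale/zero_point are identical, no requantize
    # is needed: real value >= 0 exactly when q >= zero_point, so ReLU in
    # the integer domain is just a clamp at the zero point.
    return [max(q, zero_point) for q in q_in]

def qrelu(q_in, in_scale, in_zp, out_scale, out_zp, qmin=-128, qmax=127):
    # General case: subtract the input zero point, apply ReLU, then
    # requantize to the output scale/zero point and clamp to the int8 range.
    out = []
    for q in q_in:
        real = in_scale * (q - in_zp)            # dequantized value
        real = max(real, 0.0)                    # ReLU
        re_q = round(real / out_scale) + out_zp  # requantize
        out.append(min(qmax, max(qmin, re_q)))
    return out
```

The first helper is the fast path the quote alludes to; the second shows why a 
requantize step appears only when the output qparams differ from the input's.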




[GitHub] [incubator-tvm] Coderx7 commented on issue #4845: [DEV] TVM v0.7 Roadmap

2020-02-10 Thread GitBox
Coderx7 commented on issue #4845: [DEV] TVM v0.7 Roadmap
URL: https://github.com/apache/incubator-tvm/issues/4845#issuecomment-584047567
 
 
   Could you also add support for Intel graphics as well? It seems it is not 
supported at the moment.

