[GitHub] [incubator-tvm] slyubomirsky commented on a change in pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-14 Thread GitBox


slyubomirsky commented on a change in pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#discussion_r470935641



##
File path: python/tvm/relay/backend/aot/aot.py
##
@@ -0,0 +1,282 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Defines the entry point into the AoT compiler.
+"""
+import ctypes
+import os
+import subprocess
+import tempfile
+import time
+
+import tvm
+from tvm import relay, get_global_func, register_func
+from tvm.relay.function import Function
+from tvm.relay.expr import Expr, Let, GlobalVar
+from tvm.relay.adt import Constructor
+from tvm.relay.expr_functor import ExprFunctor
+from tvm.relay.backend import compile_engine
+from .little_cpp import (PackedCall, CPPFunction, Invoke, Decl, CPPIf,
+                         CPPTuple, CPPMatch, CPPConstructor, CPPTupleGetItem,
+                         CPPRefCreate, CPPRefRead, CPPRefWrite)
+from . import to_source
+from .convert import convert
+
+TVM_PATH = os.environ['TVM_HOME']
+
+def must_run_process(args):
+    proc = subprocess.run(args, check=True)
+    assert proc.returncode == 0
+
+def compile_cpp(source, lib_name, flags=None, lib_path=None):
+    """
+    Compiles the given source into a C++ library
+    and returns the full path to the compiled library.
+    """
+    if flags is None:
+        flags = []
+
+    if lib_path is None:
+        lib_path = os.curdir
+
+    debug_source_path = os.path.join(lib_path, 'source.cc')
+    # Write out the file for debugging.
+    with open(debug_source_path, 'w') as source_file:
+        source_file.write(source)
+
+    # with tempfile.TemporaryDirectory() as tmpdir:
+    tmpdir = tempfile.mkdtemp(prefix="relay_aot_compiler")
+    lib_path = os.path.join(tmpdir, lib_name)
+    source_path = os.path.join(tmpdir, 'source.cc')
+    with open(source_path, 'w') as source_file:
+        source_file.write(source)
+
+    must_run_process(["clang-format", "-i", debug_source_path])
+
+    system = os.uname()[0]
+    include_paths = [
+        f"-I{TVM_PATH}/3rdparty/dmlc-core/include",
+        f"-I{TVM_PATH}/3rdparty/dlpack/include",
+        f"-I{TVM_PATH}/3rdparty/HalideIR/src",
+        f"-I{TVM_PATH}/include",
+        f"-L{TVM_PATH}/build"
+    ]
+
+    if system == 'Darwin':
+        command = [
+            "clang", "-std=c++14", "-shared", "-undefined", "dynamic_lookup",
+            "-o", lib_path,
+            source_path,
+            *include_paths,
+            "-ltvm"
+        ] + flags
+    else:
+        command = [
+            "clang", "-std=c++14", "-shared", "-fPIC", "-o", lib_path,
+            source_path,
+            *include_paths,
+            "-ltvm"
+        ] + flags
+
+    must_run_process(command)
+    return lib_path
+
+def load_lib(name):
+    return ctypes.CDLL(name, ctypes.RTLD_GLOBAL)
+
+def is_primitive(expr: relay.Expr):
+    return (isinstance(expr, relay.Function)
+            and expr.attrs
+            and expr.attrs.Primitive.value == 1)
+
+class AoTCompiler(ExprFunctor):
+    """
+    Takes a Relay program and converts it into a Little CPP program
+    that can in turn be converted into C++ source code.
+    """
+    def __init__(self, mod, tgt) -> None:
+        super().__init__()
+        self.mod = mod
+        self.tgt = tgt
+        self.engine = compile_engine.get()
+        self.bindings = [[]]
+        self.gv_map = {}
+
+    def add_binding(self, var, value):
+        self.bindings[-1].append((var, value))
+
+    def optimize(self, expr: Function) -> Function:
+        opts = tvm.transform.Sequential([
+            relay.transform.SimplifyInference(),
+            relay.transform.FuseOps(),
+            relay.transform.ToANormalForm()])
+        self.mod['main'] = expr
+        self.mod = opts(self.mod)
+        ret = self.mod['main']
+        return ret
+
+    def mk_primitive_op(self, func: Expr, args, output_type) -> Expr:
+        cc_key = compile_engine.CCacheKey(func, self.tgt)
+        func_hash = tvm.ir.structural_hash(func)
+        name = f"op_{func_hash}"
+        if not get_global_func(name, allow_missing=True):
+            jit_func = self.engine.jit(cc_key, self.tgt)
+            register_func(name, jit_func)
+        return PackedCall(name, args, [x.che
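One detail worth noting in the quoted diff is the scoped binding stack: `AoTCompiler` starts with `self.bindings = [[]]` and `add_binding` always appends to the innermost scope, so let-bindings produced while lowering land in the enclosing function body. The pattern can be sketched in isolation (a minimal illustration only; `BindingScopes` and the variable names are made up, not the PR's actual class):

```python
class BindingScopes:
    """A stack of scopes; each scope collects (var, value) bindings in order."""

    def __init__(self):
        self.scopes = [[]]  # start with one open scope, like self.bindings = [[]]

    def push(self):
        # Entering a nested function body opens a fresh scope.
        self.scopes.append([])

    def pop(self):
        # Leaving the body yields its bindings, in emission order.
        return self.scopes.pop()

    def add_binding(self, var, value):
        # Always attach to the innermost open scope.
        self.scopes[-1].append((var, value))

scopes = BindingScopes()
scopes.add_binding("x", 1)   # lands in the outer scope
scopes.push()                # entering a nested function body
scopes.add_binding("y", 2)   # lands in the inner scope
inner = scopes.pop()         # bindings for just the nested body
```

After the `pop`, the outer scope still holds only its own bindings, which is what lets the compiler emit each function's declarations locally.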

[GitHub] [incubator-tvm] slyubomirsky commented on pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-14 Thread GitBox


slyubomirsky commented on pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#issuecomment-674346394


   > [Clarification question] How is memory allocated for the tensors passed 
between primitive functions? Please point me to the code if it's there -- it 
seems I have missed it. Do you do storage_id usage optimizations such as those 
done in graph plan memory?
   
   To my knowledge there are no memory planning optimizations in the AoT 
prototype (@MarisaKirisame wrote most of the compiler), though they would 
certainly be a good addition. I never looked specifically into the memory 
allocation behavior (it was an area we ignored in the prototype altogether), 
but I believe allocations simply happen when the NDArray constructor is called 
in the generated code -- I will check that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] slyubomirsky commented on a change in pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-14 Thread GitBox


slyubomirsky commented on a change in pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#discussion_r470935131




[GitHub] [incubator-tvm] slyubomirsky commented on a change in pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-14 Thread GitBox


slyubomirsky commented on a change in pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#discussion_r470935087



##
File path: python/tvm/relay/backend/aot/aot.py
##

Review comment:
   Yes, I think this should be written in a more maintainable manner. I will 
see whether it can be made to match up programmatically with TVM's own C++ 
build configuration, for example.
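   One possible direction, sketched below: centralize the per-platform link 
flags in a single table instead of branching inline. This is a rough 
illustration only; the flag sets are copied from the quoted `compile_cpp`, not 
derived from TVM's build system, and `build_command` is a hypothetical helper.

```python
# Shared-library link flags per platform, as used inline in compile_cpp.
PLATFORM_FLAGS = {
    "Darwin": ["-shared", "-undefined", "dynamic_lookup"],
    "Linux": ["-shared", "-fPIC"],
}

def build_command(system, lib_path, source_path, include_paths, extra_flags=None):
    """Assemble the clang invocation for the given platform name."""
    try:
        platform_flags = PLATFORM_FLAGS[system]
    except KeyError:
        # Fail loudly instead of silently assuming Linux flags.
        raise ValueError(f"no build configuration for platform: {system}")
    return (["clang", "-std=c++14"] + platform_flags
            + ["-o", lib_path, source_path]
            + include_paths + ["-ltvm"] + (extra_flags or []))

cmd = build_command("Linux", "out.so", "source.cc", ["-I/opt/tvm/include"])
```

   The table could later be populated from whatever TVM's own build records 
expose, which is the "match up programmatically" idea above.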









[GitHub] [incubator-tvm] slyubomirsky commented on a change in pull request #6219: [Runtime][WIP] Add prototype Relay AoT compiler directly into TVM

2020-08-14 Thread GitBox


slyubomirsky commented on a change in pull request #6219:
URL: https://github.com/apache/incubator-tvm/pull/6219#discussion_r470934995




[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


junrushao1994 edited a comment on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-674344661


   @tqchen I updated the script accordingly. Now if it is not in a git repo, or 
the git executable is not found, the result will be:
   
   ```
   > python -c "import tvm; print(tvm.support.libinfo())"
   {"GIT_COMMIT_HASH": "NOT-FOUND"}
   ```
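   The fallback behavior described above can be sketched as follows (a 
standalone illustration; `git_commit_hash` is a hypothetical helper, not the 
actual CMake/script logic in this PR):

```python
import subprocess

def git_commit_hash(repo_dir=".", git_exe="git"):
    """Return the current commit hash, or "NOT-FOUND" if the directory is
    not a git repo or the git executable is missing."""
    try:
        proc = subprocess.run(
            [git_exe, "rev-parse", "HEAD"],
            cwd=repo_dir, capture_output=True, text=True)
    except (FileNotFoundError, OSError):
        return "NOT-FOUND"  # git executable not found
    if proc.returncode != 0:
        return "NOT-FOUND"  # not inside a git repository
    return proc.stdout.strip()
```

   Either failure mode collapses to the same sentinel, matching the 
`{"GIT_COMMIT_HASH": "NOT-FOUND"}` output shown above.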







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-674344661


   @tqchen I updated the script accordingly. Now if it is not in a git repo, 
the result will be:
   
   ```
   > python -c "import tvm; print(tvm.support.libinfo())"
   {"GIT_COMMIT_HASH": "NOT-FOUND"}
   ```







[GitHub] [incubator-tvm] tqchen commented on pull request #6171: [CI][ETHOSN] Enable CI for Ethos-N

2020-08-14 Thread GitBox


tqchen commented on pull request #6171:
URL: https://github.com/apache/incubator-tvm/pull/6171#issuecomment-674342900


   https://github.com/apache/incubator-tvm/pull/6283







[incubator-tvm] branch master updated: [CI] Update ci-cpu to the latest (#6283)

2020-08-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new d76df0a  [CI] Update ci-cpu to the latest (#6283)
d76df0a is described below

commit d76df0aea380ae2857adb8f9c090fc0078c78eed
Author: Tianqi Chen 
AuthorDate: Fri Aug 14 20:41:30 2020 -0700

[CI] Update ci-cpu to the latest (#6283)
---
 Jenkinsfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 12bee04..7df0d3f 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -45,7 +45,7 @@
 
 ci_lint = "tvmai/ci-lint:v0.61"
 ci_gpu = "tvmai/ci-gpu:v0.64"
-ci_cpu = "tvmai/ci-cpu:v0.64"
+ci_cpu = "tvmai/ci-cpu:v0.65"
 ci_wasm = "tvmai/ci-wasm:v0.60"
 ci_i386 = "tvmai/ci-i386:v0.52"
 



[GitHub] [incubator-tvm] tqchen merged pull request #6283: [CI] Update ci-cpu to the latest

2020-08-14 Thread GitBox


tqchen merged pull request #6283:
URL: https://github.com/apache/incubator-tvm/pull/6283


   







[GitHub] [incubator-tvm] tqchen merged pull request #6281: Improve error messages for memory verifier and gpu memory verifier

2020-08-14 Thread GitBox


tqchen merged pull request #6281:
URL: https://github.com/apache/incubator-tvm/pull/6281


   







[incubator-tvm] branch master updated: Improve error messages for memory verifier and gpu memory verifier (#6281)

2020-08-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 25bcd1c  Improve error messages for memory verifier and gpu memory verifier (#6281)
25bcd1c is described below

commit 25bcd1ceae299371a56b26358b224d07d34119d7
Author: Tristan Konolige 
AuthorDate: Fri Aug 14 20:06:24 2020 -0700

Improve error messages for memory verifier and gpu memory verifier (#6281)

* [FIX] Print exactly what issues the GPU memory verifier encountered.

* [FIX] Print exactly why memory verifier failed.
---
 src/tir/analysis/verify_gpu_code.cc | 112 +++-
 src/tir/analysis/verify_memory.cc   |  56 +-
 2 files changed, 115 insertions(+), 53 deletions(-)

diff --git a/src/tir/analysis/verify_gpu_code.cc b/src/tir/analysis/verify_gpu_code.cc
index cce0823..5ef755a 100644
--- a/src/tir/analysis/verify_gpu_code.cc
+++ b/src/tir/analysis/verify_gpu_code.cc
@@ -35,9 +35,10 @@ namespace tir {
 
 class GPUCodeVerifier : public StmtExprVisitor {
  public:
-  bool Verify(Stmt stmt, int64_t max_local_memory_per_block, int64_t max_shared_memory_per_block,
-              int64_t max_threads_per_block, int64_t max_thread_x, int64_t max_thread_y,
-              int64_t max_thread_z, int64_t max_vthread, int64_t max_vector_bytes) {
+  std::vector<String> Verify(Stmt stmt, int64_t max_local_memory_per_block,
+                             int64_t max_shared_memory_per_block, int64_t max_threads_per_block,
+                             int64_t max_thread_x, int64_t max_thread_y, int64_t max_thread_z,
+                             int64_t max_vthread, int64_t max_vector_bytes) {
     max_local_memory_per_block_ = static_cast<size_t>(max_local_memory_per_block);
     max_shared_memory_per_block_ = static_cast<size_t>(max_shared_memory_per_block);
     max_threads_per_block_ = static_cast<size_t>(max_threads_per_block);
@@ -52,7 +53,7 @@ class GPUCodeVerifier : public StmtExprVisitor {
     // TODO(jcf94): Add support of detecting CUDA Misaligned Address error
     this->VisitStmt(stmt);
 
-    return valid_;
+    return errors_;
   }
 
   void VisitStmt_(const AllocateNode* op) final {
@@ -66,7 +67,13 @@
       shared_memory_per_block_ += size * op->dtype.bytes() * op->dtype.lanes();
     }
     if (op->dtype.lanes() > 1) {
-      valid_ &= static_cast<size_t>(op->dtype.lanes() * op->dtype.bytes()) <= max_vector_bytes_;
+      if (static_cast<size_t>(op->dtype.lanes() * op->dtype.bytes()) > max_vector_bytes_) {
+        std::stringstream s;
+        s << "Number of lanes (" << op->dtype.lanes() << ") times number of bytes ("
+          << op->dtype.bytes() << ") for dtype " << op->dtype
+          << " is greater than the maximum number of vector bytes (" << max_vector_bytes_ << ")";
+        errors_.push_back(s.str());
+      }
     }
   }
 
@@ -98,27 +105,39 @@ class GPUCodeVerifier : public StmtExprVisitor {
       visited_threads_.insert(name);
       thread_per_block_ *= length;
 
+      auto err = [this](std::string id, size_t ext, size_t m) {
+        if (ext > m) {
+          std::stringstream s;
+          s << "Extent of " << id << " (" << ext << ") is greater than maximum allowed (" << m
+            << ");";
+          errors_.push_back(s.str());
+        }
+      };
+
       if (name == "threadIdx.x") {
-        valid_ &= length <= max_thread_x_;
+        err("threadIdx.x", length, max_thread_x_);
         thread_x_extent_ = length;
       } else if (name == "threadIdx.y") {
-        valid_ &= length <= max_thread_y_;
+        err("threadIdx.y", length, max_thread_y_);
         thread_y_extent_ = length;
       } else if (name == "threadIdx.z") {
-        valid_ &= length <= max_thread_z_;
+        err("threadIdx.z", length, max_thread_z_);
        thread_z_extent_ = length;
       } else if (name == "vthread") {
-        valid_ &= length <= max_vthread_;
+        err("vthread", length, max_vthread_);
       }
     } else {
       // the thread should be bound to axes with the same length
-      if (name == "threadIdx.x") {
-        valid_ &= length == thread_x_extent_;
-      } else if (name == "threadIdx.y") {
-        valid_ &= length == thread_y_extent_;
-      } else if (name == "threadIdx.z") {
-        valid_ &= length == thread_z_extent_;
-      }
+      auto err = [this, name](std::string id, size_t ext, size_t m) {
+        if (name == id && ext != m) {
+          std::stringstream s;
+          s << "Extent of " << id << " (" << ext << ") does not match the bound " << m;
+          errors_.push_back(s.str());
+        }
+      };
+      err("threadIdx.x", length, thread_x_extent_
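The shift in this commit, from folding every check into a single `valid_` 
boolean to collecting a list of descriptive error strings, is a general 
pattern. A minimal Python rendering of the same idea (hypothetical limits and 
function name, not the C++ verifier itself):

```python
def verify_thread_extents(extents, limits):
    """Check each thread-axis extent against its limit, collecting every
    violation instead of merging them into one opaque bool."""
    errors = []
    for name, ext in extents.items():
        limit = limits.get(name)
        if limit is not None and ext > limit:
            errors.append(
                f"Extent of {name} ({ext}) is greater than maximum allowed ({limit})")
    return errors  # an empty list means verification passed

errs = verify_thread_extents(
    {"threadIdx.x": 2048, "threadIdx.y": 8},
    {"threadIdx.x": 1024, "threadIdx.y": 1024})
```

The caller can then report all violations at once, which is exactly what makes 
the new verifier's error messages actionable.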

[GitHub] [incubator-tvm] tqchen commented on pull request #6221: [TFLite] axis can be a scalar

2020-08-14 Thread GitBox


tqchen commented on pull request #6221:
URL: https://github.com/apache/incubator-tvm/pull/6221#issuecomment-674329985


   I think it should be in 
https://github.com/apache/incubator-tvm/blob/master/tests/python/frontend/tflite
   Feel free to add a unit test to that folder.







[incubator-tvm] branch master updated: [Target] Creating Target from JSON-like Configuration (#6218)

2020-08-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 38a44c8  [Target] Creating Target from JSON-like Configuration (#6218)
38a44c8 is described below

commit 38a44c82b1af6652b83fbbf9f055e9aa7c0b5ba0
Author: Junru Shao 
AuthorDate: Fri Aug 14 18:33:37 2020 -0700

[Target] Creating Target from JSON-like Configuration (#6218)

* [Target] Creating Target from JSON-like Configuration

* Address comments from Cody

* fix unittest

* More testcases as suggested by @comaniac
---
 include/tvm/target/target.h  |  46 -
 include/tvm/target/target_kind.h |  20 +-
 src/target/target.cc | 413 ++-
 src/target/target_kind.cc| 289 ---
 tests/cpp/target_test.cc | 131 ++---
 5 files changed, 516 insertions(+), 383 deletions(-)

diff --git a/include/tvm/target/target.h b/include/tvm/target/target.h
index 4a83579..258b2d8 100644
--- a/include/tvm/target/target.h
+++ b/include/tvm/target/target.h
@@ -31,6 +31,7 @@
 #include 
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -62,6 +63,13 @@ class TargetNode : public Object {
 v->Visit("attrs", &attrs);
   }
 
+  /*!
+   * \brief Get an entry from attrs of the target
+   * \tparam TObjectRef Type of the attribute
+   * \param attr_key The name of the attribute key
+   * \param default_value The value returned if the key is not present
+   * \return An optional, NullOpt if not found, otherwise the value found
+   */
   template 
   Optional GetAttr(
   const std::string& attr_key,
@@ -75,15 +83,19 @@ class TargetNode : public Object {
   return default_value;
 }
   }
-
+  /*!
+   * \brief Get an entry from attrs of the target
+   * \tparam TObjectRef Type of the attribute
+   * \param attr_key The name of the attribute key
+   * \param default_value The value returned if the key is not present
+   * \return An optional, NullOpt if not found, otherwise the value found
+   */
   template 
   Optional GetAttr(const std::string& attr_key, TObjectRef 
default_value) const {
 return GetAttr(attr_key, Optional(default_value));
   }
-
   /*! \brief Get the keys for this target as a vector of string */
   TVM_DLL std::vector GetKeys() const;
-
   /*! \brief Get the keys for this target as an unordered_set of string */
   TVM_DLL std::unordered_set GetLibs() const;
 
@@ -93,6 +105,26 @@ class TargetNode : public Object {
  private:
   /*! \brief Internal string repr. */
   mutable std::string str_repr_;
+  /*!
+   * \brief Parsing TargetNode::attrs from a list of raw strings
+   * \param obj The attribute to be parsed
+   * \param info The runtime type information for parsing
+   * \return The attribute parsed
+   */
+  ObjectRef ParseAttr(const ObjectRef& obj, const 
TargetKindNode::ValueTypeInfo& info) const;
+  /*!
+   * \brief Parsing TargetNode::attrs from a list of raw strings
+   * \param options The raw string of fields to be parsed
+   * \return The attributes parsed
+   */
+  Map ParseAttrsFromRaw(const std::vector& 
options) const;
+  /*!
+   * \brief Serialize the attributes of a target to raw string
+   * \param attrs The attributes to be converted to string
+   * \return The string converted, NullOpt if attrs is empty
+   */
+  Optional StringifyAttrsToRaw(const Map& attrs) 
const;
+
   friend class Target;
 };
 
@@ -103,10 +135,18 @@ class TargetNode : public Object {
 class Target : public ObjectRef {
  public:
   Target() {}
+  /*! \brief Constructor from ObjectPtr */
   explicit Target(ObjectPtr n) : ObjectRef(n) {}
   /*!
+   * \brief Create a Target using a JSON-like configuration
+   * \param config The JSON-like configuration
+   * \return The target created
+   */
+  TVM_DLL static Target FromConfig(const Map& config);
+  /*!
* \brief Create a Target given a string
* \param target_str the string to parse
+   * \return The target created
*/
   TVM_DLL static Target Create(const String& target_str);
   /*!
diff --git a/include/tvm/target/target_kind.h b/include/tvm/target/target_kind.h
index a661efa..e4e7c2f 100644
--- a/include/tvm/target/target_kind.h
+++ b/include/tvm/target/target_kind.h
@@ -45,9 +45,6 @@ struct ValueTypeInfoMaker;
 
 class Target;
 
-/*! \brief Perform schema validation */
-TVM_DLL void TargetValidateSchema(const Map& config);
-
 template 
 class TargetKindAttrMap;
 
@@ -67,14 +64,14 @@ class TargetKindNode : public Object {
 v->Visit("default_keys", &default_keys);
   }
 
-  Map<String, ObjectRef> ParseAttrsFromRaw(const std::vector<std::string>& options) const;
-
-  Optional<String> StringifyAttrsToRaw(const Map<String, ObjectRef>& attrs) const;
-
   static constexpr const char* _type_key = "TargetKind";
   TVM_DECLARE_FINAL_OBJECT_INFO(TargetKindNode, Object);
 
  private:
+  /*! \brief Return the index stored in attr
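The `Target::FromConfig` API declared above accepts a JSON-like configuration dictionary. As a rough illustration of the kind of field handling the PR describes (a Python sketch under stated assumptions, not TVM's actual implementation; the returned plain dict is a hypothetical stand-in for a `Target` object):

```python
def target_from_config(config):
    """Illustrative sketch of Target::FromConfig-style validation:
    'kind' is required, 'tag' and 'keys' are optional, the 'device'
    name is appended to keys, and keys are de-duplicated in order."""
    config = dict(config)  # copy so we can pop recognized fields
    if "kind" not in config:
        raise AttributeError("Field 'kind' is not found")
    kind = config.pop("kind")
    if not isinstance(kind, str):
        raise AttributeError(
            "Expect type of field 'kind' is string, but get: %s"
            % type(kind).__name__)
    tag = config.pop("tag", "")
    keys = list(config.pop("keys", []))
    if "device" in config:
        keys.append(config["device"])
    # de-duplicate keys while preserving their order
    seen, deduped = set(), []
    for k in keys:
        if k not in seen:
            seen.add(k)
            deduped.append(k)
    # remaining fields stay as free-form attributes
    return {"kind": kind, "tag": tag, "keys": deduped, "attrs": config}
```

For example, `{"kind": "llvm", "device": "arm_cpu", "keys": ["cpu", "arm_cpu"]}` yields kind `llvm` with keys `["cpu", "arm_cpu"]`, while a config without `"kind"` fails validation.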

[GitHub] [incubator-tvm] tqchen commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


tqchen commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-674329761


   Thanks @junrushao1994 @comaniac !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


tqchen merged pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218


   







[GitHub] [incubator-tvm] windclarion edited a comment on pull request #6221: [TFLite] axis can be a scalar

2020-08-14 Thread GitBox


windclarion edited a comment on pull request #6221:
URL: https://github.com/apache/incubator-tvm/pull/6221#issuecomment-670817189


   @leandron @tqchen I didn't find a tflite converter test case in 
tests/python/frontend/tflite/test_forward.py; that code doesn't use any function 
in python/tvm/relay/frontend/tflite.py. Where can I find some similar 
tflite converter test cases?







[GitHub] [incubator-tvm] electriclilies opened a new pull request #6284: [RELAY][DYN] Implementation of the dynamic pad operator

2020-08-14 Thread GitBox


electriclilies opened a new pull request #6284:
URL: https://github.com/apache/incubator-tvm/pull/6284


   Implementation of the dynamic pad operator 
   
   @mbrookhart @zhiics please take a look and let me know if you have any 
suggestions
   
   Also, clang-format changed a bunch of stuff in include/tvm/relay/attrs/nn.h 
-- what files are supposed to be formatted using clang-format?







[GitHub] [incubator-tvm] chanwutk commented on pull request #6277: [Parser] Add support for parsing the any dimension.

2020-08-14 Thread GitBox


chanwutk commented on pull request #6277:
URL: https://github.com/apache/incubator-tvm/pull/6277#issuecomment-674319586


   Thank you!







[GitHub] [incubator-tvm] csullivan commented on pull request #6276: [ONNX] Update slice to infer attributes when not graph inputs

2020-08-14 Thread GitBox


csullivan commented on pull request #6276:
URL: https://github.com/apache/incubator-tvm/pull/6276#issuecomment-674315293


   cc @mbrookhart @jwfromm 







[GitHub] [incubator-tvm] tqchen commented on issue #6265: KeyError: ‘InceptionResnetV1/Logits/Flatten/flatten/Reshape/shape/1’

2020-08-14 Thread GitBox


tqchen commented on issue #6265:
URL: https://github.com/apache/incubator-tvm/issues/6265#issuecomment-674306860


   Thanks for reporting the problem, the community uses https://discuss.tvm.ai/ 
for quick trouble shooting and discussions, please open a new thread there







[GitHub] [incubator-tvm] tqchen commented on issue #6266: Why is the inference time obtained with module.time_evaluator method less, but the inference time obtained with the method defined by myself i

2020-08-14 Thread GitBox


tqchen commented on issue #6266:
URL: https://github.com/apache/incubator-tvm/issues/6266#issuecomment-674306737


   Thanks for reporting the problem, the community uses https://discuss.tvm.ai/ 
for quick trouble shooting and discussions, please open a new thread there







[GitHub] [incubator-tvm] tqchen commented on issue #6247: NotImplementedError occur when i convert pytorch model to tvm

2020-08-14 Thread GitBox


tqchen commented on issue #6247:
URL: https://github.com/apache/incubator-tvm/issues/6247#issuecomment-674306825


   Thanks for reporting the problem, the community uses https://discuss.tvm.ai/ 
for quick trouble shooting and discussions, please open a new thread there







[GitHub] [incubator-tvm] tqchen closed issue #6247: NotImplementedError occur when i convert pytorch model to tvm

2020-08-14 Thread GitBox


tqchen closed issue #6247:
URL: https://github.com/apache/incubator-tvm/issues/6247


   







[GitHub] [incubator-tvm] tqchen closed issue #6265: KeyError: ‘InceptionResnetV1/Logits/Flatten/flatten/Reshape/shape/1’

2020-08-14 Thread GitBox


tqchen closed issue #6265:
URL: https://github.com/apache/incubator-tvm/issues/6265


   







[GitHub] [incubator-tvm] tqchen closed issue #6266: Why is the inference time obtained with module.time_evaluator method less, but the inference time obtained with the method defined by myself is slow

2020-08-14 Thread GitBox


tqchen closed issue #6266:
URL: https://github.com/apache/incubator-tvm/issues/6266


   







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-674305020


   I see. We should set it to something like “git-not-found” then







[GitHub] [incubator-tvm] jroesch commented on pull request #6281: Improve error messages for memory verifier and gpu memory verifier

2020-08-14 Thread GitBox


jroesch commented on pull request #6281:
URL: https://github.com/apache/incubator-tvm/pull/6281#issuecomment-674303717


   LGTM, cc @junrushao1994 







[GitHub] [incubator-tvm] jroesch commented on pull request #6281: Improve error messages for memory verifier and gpu memory verifier

2020-08-14 Thread GitBox


jroesch commented on pull request #6281:
URL: https://github.com/apache/incubator-tvm/pull/6281#issuecomment-674303639


   Hopefully we can port this to diagnostics when we get there. 







[GitHub] [incubator-tvm] tqchen commented on pull request #6283: [CI] Update ci-cpu to the latest

2020-08-14 Thread GitBox


tqchen commented on pull request #6283:
URL: https://github.com/apache/incubator-tvm/pull/6283#issuecomment-674302212


   cc @jroesch @tmoreau89 







[GitHub] [incubator-tvm] tqchen opened a new pull request #6283: [CI] Update ci-cpu to the latest

2020-08-14 Thread GitBox


tqchen opened a new pull request #6283:
URL: https://github.com/apache/incubator-tvm/pull/6283


   







[GitHub] [incubator-tvm] tqchen commented on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


tqchen commented on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-674302138


   Yes, we can skip the info if that is the case







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-67438


   @tqchen If so, we don't have a git commit id, is that correct?







[GitHub] [incubator-tvm] interesaaat opened a new issue #6282: [Frontend][PyTorch]NotImplementedError: The following operators are not implemented: ['aten::index_select']

2020-08-14 Thread GitBox


interesaaat opened a new issue #6282:
URL: https://github.com/apache/incubator-tvm/issues/6282


   ```
  model, params = relay.frontend.from_pytorch(ts_model, test_input)
 File 
"/.pyenv/versions/3.6.7/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-macosx-10.15-x86_64.egg/tvm/relay/frontend/pytorch.py",
 line 2641, in from_pytorch
   _report_missing_conversion(op_names, convert_map)
 File 
"/.pyenv/versions/3.6.7/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-macosx-10.15-x86_64.egg/tvm/relay/frontend/pytorch.py",
 line 2127, in _report_missing_conversion
   raise NotImplementedError(msg)
   NotImplementedError: The following operators are not implemented: 
['aten::index_select']
   ```
   
   Any chance to have this supported?
   
   torch==1.6.0 and latest tvm from master.
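For reference, `aten::index_select` picks entries along one dimension using a list of indices. A minimal Python sketch of its semantics for 2-D inputs (illustrative only, not the TVM frontend's conversion, which would typically lower such an op to a gather/take along an axis):

```python
def index_select(tensor, dim, indices):
    """Select entries along `dim` of a nested-list 2-D 'tensor',
    mirroring the semantics of PyTorch's aten::index_select."""
    if dim == 0:
        # pick whole rows
        return [tensor[i] for i in indices]
    # dim == 1: pick the same columns from every row
    return [[row[i] for i in indices] for row in tensor]
```

For example, selecting columns `[2, 0]` of `[[1, 2, 3], [4, 5, 6]]` gives `[[3, 1], [6, 4]]`.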







[GitHub] [incubator-tvm] tqchen commented on pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


tqchen commented on pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280#issuecomment-674299602


   Thanks @junrushao1994! We will also need to support builds where the code 
base is not part of a git repo (e.g. in a tarball during release).







[GitHub] [incubator-tvm] tkonolige opened a new pull request #6281: Improve error messages for memory verifier and gpu memory verifier

2020-08-14 Thread GitBox


tkonolige opened a new pull request #6281:
URL: https://github.com/apache/incubator-tvm/pull/6281


   This PR improves the TIR analysis passes memory verifier and gpu memory 
verifier to print the actual error they encountered. Previous they just printed 
a generic error message.







[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #6280: [Build] Reflect Compile-Time CMake Options into libtvm.so

2020-08-14 Thread GitBox


junrushao1994 opened a new pull request #6280:
URL: https://github.com/apache/incubator-tvm/pull/6280


   Reflecting this information is very helpful in TVM development, packaging, and 
especially troubleshooting. This PR showcases how to incorporate compile-time 
information into the TVM library, specifically the git commit hash. Other 
flags can be handled in similar fashion, by appending lines like 
"TVM_USE_CUDA=${USE_CUDA}" in CMakeLists.txt.
   
   **How to use this functionality.** Run the command below:
   ```
   > python -c "import tvm; print(tvm.support.libinfo())"
   {"GIT_COMMIT_HASH": "921d4e0efea280c149940669bb23ef8c4de366e9"}
   ```
   
   Per discussion with @jroesch. CC: @jwfromm @tqchen.







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6274: [Diagnostics][Relay][InferType] Refactor InferType to work on whole module, and use new diagnostics.

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6274:
URL: https://github.com/apache/incubator-tvm/pull/6274#issuecomment-674260555


   This may be out of the scope of this PR, but shall we also consider making type 
inference non-recursive at some point?







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-674260296


   Thank you @comaniac for the valuable feedback! It made me think deeper and I 
really learned a lot!







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6274: [Diagnostics][Relay][InferType] Refactor InferType to work on whole module, and use new diagnostics.

2020-08-14 Thread GitBox


junrushao1994 commented on a change in pull request #6274:
URL: https://github.com/apache/incubator-tvm/pull/6274#discussion_r470848299



##
File path: include/tvm/ir/diagnostic.h
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_H_
+#define TVM_IR_DIAGNOSTIC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+
+using tvm::parser::SourceMap;
+
+static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM" \
+  "please read TVM's error reporting 
guidelines at discuss.tvm.ai/thread";
+
+static const char* kINDENT = "  ";
+
+#define ICHECK_BINARY_OP(name, op, x, y)  \
+  if (dmlc::LogCheckError _check_err = dmlc::LogCheck##name(x, y))\
+dmlc::LogMessageFatal(__FILE__, __LINE__).stream()\
+  << kTVM_INTERNAL_ERROR_MESSAGE << std::endl \
+  << kIndent << "Check failed: " << #x " " #op " " #y << *(_check_err.str) 
<< ": "
+
+#define ICHECK(x)  \
+  if (!(x))\
+dmlc::LogMessageFatal(__FILE__, __LINE__).stream() \
+  << kTVM_INTERNAL_ERROR_MESSAGE   \
+  << kINDENT << "Check failed: " #x << ": "
+
+#define ICHECK_LT(x, y) ICHECK_BINARY_OP(_LT, <, x, y)
+#define ICHECK_GT(x, y) ICHECK_BINARY_OP(_GT, >, x, y)
+#define ICHECK_LE(x, y) ICHECK_BINARY_OP(_LE, <=, x, y)
+#define ICHECK_GE(x, y) ICHECK_BINARY_OP(_GE, >=, x, y)
+#define ICHECK_EQ(x, y) ICHECK_BINARY_OP(_EQ, ==, x, y)
+#define ICHECK_NE(x, y) ICHECK_BINARY_OP(_NE, !=, x, y)
+#define ICHECK_NOTNULL(x) \
+  ((x) == NULL ? dmlc::LogMessageFatal(__FILE__, __LINE__).stream() \
+  << kTVM_INTERNAL_ERROR_MESSAGE\
+  << kINDENT << "Check not null: "  #x << ' ', (x) : (x)) // NOLINT(*)
+
+/*! \brief The diagnostic level, controls the printing of the message. */
+enum class DiagnosticLevel {
+  kBug,
+  kError,
+  kWarning,
+  kNote,
+  kHelp,
+};

Review comment:
   Maybe using exact integers?
   
   ```suggestion
   enum class DiagnosticLevel : int {
 kBug = 50,
 kError = 40,
 kWarning = 30,
 kNote = 20,
 kHelp = 10,
   };
   ```
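The spaced integer values in this suggestion mirror Python's `logging` module levels, which lets a reporter filter diagnostics with a single threshold comparison. A small sketch of that design choice (the `emit` helper is hypothetical, not part of TVM's diagnostic API):

```python
# Numeric levels spaced like Python's logging module, so severities
# can be compared and new levels can be slotted in between later.
BUG, ERROR, WARNING, NOTE, HELP = 50, 40, 30, 20, 10

def emit(diagnostics, minimum_level=WARNING):
    """Keep only diagnostics at or above the threshold.

    `diagnostics` is a list of (level, message) pairs.
    """
    return [msg for level, msg in diagnostics if level >= minimum_level]
```

With string-less numeric levels, suppressing notes and hints while keeping errors is just `emit(diags, minimum_level=WARNING)`.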

##
File path: include/tvm/ir/diagnostic.h
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_H_
+#define TVM_IR_DIAGNOSTIC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+
+using tvm::parser::SourceMap;
+
+static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM" \
+  

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6274: [Diagnostics][Relay][InferType] Refactor InferType to work on whole module, and use new diagnostics.

2020-08-14 Thread GitBox


junrushao1994 commented on a change in pull request #6274:
URL: https://github.com/apache/incubator-tvm/pull/6274#discussion_r470847640



##
File path: include/tvm/ir/diagnostic.h
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_H_
+#define TVM_IR_DIAGNOSTIC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+
+using tvm::parser::SourceMap;
+
+static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM" \
+  "please read TVM's error reporting 
guidelines at discuss.tvm.ai/thread";

Review comment:
   ```suggestion
   static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM." \
 "Please read TVM's error reporting 
guidelines at discuss.tvm.ai/thread";
   ```

##
File path: include/tvm/ir/diagnostic.h
##
@@ -0,0 +1,254 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_H_
+#define TVM_IR_DIAGNOSTIC_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+
+using tvm::parser::SourceMap;
+
+static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM" \
+  "please read TVM's error reporting 
guidelines at discuss.tvm.ai/thread";

Review comment:
   ```suggestion
   static const char* kTVM_INTERNAL_ERROR_MESSAGE = "An internal invariant was 
violated during the execution of TVM. " \
 "Please read TVM's error reporting 
guidelines at discuss.tvm.ai/thread";
   ```









[GitHub] [incubator-tvm] junrushao1994 edited a comment on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 edited a comment on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-674253829


   @comaniac I just added a few more tests, covering the cases
   
   * Creation failure: "kind" is not provided
   * Creation failure: type mismatch
   * Duplicate keys
   * "Device"/"Keys" is provided
   
   The test for tag will be added after tag is supported.







[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-674253829


   @comaniac I just added a few more tests, covering the cases
   
   * Creation failure: "kind" is not provided
   * Creation failure: type mismatch
   * Duplicate keys
   * "Device" is provided
   
   The test for tag will be added after tag is supported.







[GitHub] [incubator-tvm] 652994331 commented on issue #4464: [RFC] Add TVMDSOOp to integrate any TVM operator with TensorFlow

2020-08-14 Thread GitBox


652994331 commented on issue #4464:
URL: https://github.com/apache/incubator-tvm/issues/4464#issuecomment-674247367


   @tobegit3hub It seems there are some lines about the TensorFlow path in the 
CMakeLists.txt of the tftvm project (which is deprecated, as you said): 
https://github.com/tobegit3hub/tftvm/blob/master/CMakeLists.txt
   However, in the CMakeLists.txt of the incubator-tvm project (I am using the 
master branch), I can't find these lines about the TensorFlow path.
   I think this is the problem. Am I right to use the master branch of 
incubator-tvm to build TVM with TVMDSOOp?
   
   Thanks







[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r470837423



##
File path: src/target/target.cc
##
@@ -162,14 +314,164 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+                                const TargetKindNode::ValueTypeInfo& info) const {
+  if (info.type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    const auto* v = obj.as<IntImmNode>();
+    CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+    return GetRef<Integer>(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    const auto* v = obj.as<StringObj>();
+    CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+    return GetRef<String>(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<MapNode>())
+        << "Expect type 'dict' to construct Target, but get: " << obj->GetTypeKey();
+    return Target::FromConfig(Downcast<Map<String, ObjectRef>>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<ArrayNode>()) << "Expect type 'list', but get: " << obj->GetTypeKey();
+    Array<ObjectRef> array = Downcast<Array<ObjectRef>>(obj);
+    std::vector<ObjectRef> result;
+    int i = 0;
+    for (const ObjectRef& e : array) {
+      ++i;
+      try {
+        result.push_back(TargetNode::ParseAttr(e, *info.key));
+      } catch (const dmlc::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing element " << i << " of the array: " << array
+                   << ". Details:\n"
+                   << e.what();
+      }
+    }
+    return Array<ObjectRef>(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<MapNode>()) << "Expect type 'dict', but get: " << obj->GetTypeKey();
+    std::unordered_map<ObjectRef, ObjectRef, ObjectPtrHash, ObjectPtrEqual> result;
+    for (const auto& kv : Downcast<Map<ObjectRef, ObjectRef>>(obj)) {
+      ObjectRef key, val;
+      try {
+        key = TargetNode::ParseAttr(kv.first, *info.key);
+      } catch (const tvm::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing a key of the dict: " << kv.first
+                   << ". Details:\n"
+                   << e.what();
+      }
+      try {
+        val = TargetNode::ParseAttr(kv.second, *info.val);
+      } catch (const tvm::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing a value of the dict: " << kv.second
+                   << ". Details:\n"
+                   << e.what();
+      }
+      result[key] = val;
+    }
+    return Map<ObjectRef, ObjectRef>(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+ << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map<String, ObjectRef>& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map<String, ObjectRef> config(config_dict.begin(), config_dict.end());
+  ObjectPtr<TargetNode> target = make_object<TargetNode>();
+  // parse 'kind'
+  if (config.count(kKind)) {
+    const auto* kind = config[kKind].as<StringObj>();
+    CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is string, but get: "
+                           << config[kKind]->GetTypeKey();
+    target->kind = TargetKind::Get(GetRef<String>(kind));
+    config.erase(kKind);
+  } else {
+    LOG(FATAL) << "AttributeError: Field 'kind' is not found";
+  }
+  // parse "tag"
+  if (config.count(kTag)) {
+    const auto* tag = config[kTag].as<StringObj>();
+    CHECK(tag != nullptr) << "AttributeError: Expect type of field 'tag' is string, but get: "
+                          << config[kTag]->GetTypeKey();
+    target->tag = GetRef<String>(tag);
+    config.erase(kTag);
+  } else {
+    target->tag = "";
+  }
+  // parse "keys"
+  if (config.count(kKeys)) {
+std::vector keys;
+// user provided keys
+const auto* cfg_keys = config[kKeys].as();
+CHECK(cfg_keys != nullptr)
+<< "AttributeError: Expect type of field 'keys' is an Array, but get: "
+<< config[kTag]->GetTypeKey();
+for (const ObjectRef& e : *cfg_keys) {
+  const auto* key = e.as();
+  CHECK(key != nullptr) << "AttributeError: Expect 'keys' to be an array 
of strings, but it "
+   "contains an element of type: "
+<< e->GetTypeKey();
+  keys.push_back(GetRef(key));
+}
+// add device name
+if (config_dict.count("device") && 
config_dict.at("device")->IsInstance()) {
+  keys.push_back(Downcast(config_dict.at("device")));
+}
+// add default keys
+for (const auto& key : target->kind->default_keys) {
+  keys.push_back(key);
+}
+// de-duplicate keys
+target->keys = DeduplicateKeys(keys);

Review comment:
   > {"kind": "llvm"} for X86, {"kind": "llvm", "device": "arm_cpu"} for ARM.
   
   This can be done via the helper functions we already have.
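The configs quoted above map onto the `Target::FromConfig` flow under discussion. A minimal standalone Python sketch of that parsing logic (hypothetical illustration only, not TVM's actual FFI or API) might look like:

```python
def target_from_config(config):
    """Parse a JSON-like target config dict, loosely mirroring Target::FromConfig.

    Illustrative sketch: 'kind' is required, 'tag' defaults to "",
    and remaining entries are kept as attributes.
    """
    config = dict(config)  # do not mutate the caller's dict
    if "kind" not in config:
        raise AttributeError("Field 'kind' is not found")
    kind = config.pop("kind")
    if not isinstance(kind, str):
        raise AttributeError("Expect type of field 'kind' is string, but get: %s"
                             % type(kind).__name__)
    tag = config.pop("tag", "")
    return {"kind": kind, "tag": tag, "attrs": config}

# {"kind": "llvm"} for X86; {"kind": "llvm", "device": "arm_cpu"} for ARM
print(target_from_config({"kind": "llvm", "device": "arm_cpu"}))
```

The missing-field error mirrors the `AttributeError`-style messages in the C++ code above.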

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


comaniac commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r470828288



##
File path: src/target/target.cc
##
@@ -162,14 +314,164 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+ObjectRef TargetNode::ParseAttr(const ObjectRef& obj,
+                                const TargetKindNode::ValueTypeInfo& info) const {
+  if (info.type_index == Integer::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    const auto* v = obj.as<IntImmNode>();
+    CHECK(v != nullptr) << "Expect type 'int', but get: " << obj->GetTypeKey();
+    return GetRef<Integer>(v);
+  }
+  if (info.type_index == String::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    const auto* v = obj.as<StringObj>();
+    CHECK(v != nullptr) << "Expect type 'str', but get: " << obj->GetTypeKey();
+    return GetRef<String>(v);
+  }
+  if (info.type_index == Target::ContainerType::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<MapNode>())
+        << "Expect type 'dict' to construct Target, but get: " << obj->GetTypeKey();
+    return Target::FromConfig(Downcast<Map<String, ObjectRef>>(obj));
+  }
+  if (info.type_index == ArrayNode::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<ArrayNode>()) << "Expect type 'list', but get: " << obj->GetTypeKey();
+    Array<ObjectRef> array = Downcast<Array<ObjectRef>>(obj);
+    std::vector<ObjectRef> result;
+    int i = 0;
+    for (const ObjectRef& e : array) {
+      ++i;
+      try {
+        result.push_back(TargetNode::ParseAttr(e, *info.key));
+      } catch (const dmlc::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing element " << i << " of the array: " << array
+                   << ". Details:\n"
+                   << e.what();
+      }
+    }
+    return Array<ObjectRef>(result);
+  }
+  if (info.type_index == MapNode::_GetOrAllocRuntimeTypeIndex()) {
+    CHECK(obj->IsInstance<MapNode>()) << "Expect type 'dict', but get: " << obj->GetTypeKey();
+    std::unordered_map<ObjectRef, ObjectRef, ObjectHash, ObjectEqual> result;
+    for (const auto& kv : Downcast<Map<ObjectRef, ObjectRef>>(obj)) {
+      ObjectRef key, val;
+      try {
+        key = TargetNode::ParseAttr(kv.first, *info.key);
+      } catch (const tvm::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing a key of the dict: " << kv.first
+                   << ". Details:\n"
+                   << e.what();
+      }
+      try {
+        val = TargetNode::ParseAttr(kv.second, *info.val);
+      } catch (const tvm::Error& e) {
+        LOG(FATAL) << "Error occurred when parsing a value of the dict: " << kv.second
+                   << ". Details:\n"
+                   << e.what();
+      }
+      result[key] = val;
+    }
+    return Map<ObjectRef, ObjectRef>(result);
+  }
+  LOG(FATAL) << "Unsupported type registered: \"" << info.type_key
+             << "\", and the type given is: " << obj->GetTypeKey();
+  throw;
+}
+
+Target Target::FromConfig(const Map<String, ObjectRef>& config_dict) {
+  const String kKind = "kind";
+  const String kTag = "tag";
+  const String kKeys = "keys";
+  std::unordered_map<String, ObjectRef> config(config_dict.begin(), config_dict.end());
+  ObjectPtr<TargetNode> target = make_object<TargetNode>();
+  // parse 'kind'
+  if (config.count(kKind)) {
+    const auto* kind = config[kKind].as<StringObj>();
+    CHECK(kind != nullptr) << "AttributeError: Expect type of field 'kind' is string, but get: "
+                           << config[kKind]->GetTypeKey();
+    target->kind = TargetKind::Get(GetRef<String>(kind));
+    config.erase(kKind);
+  } else {
+    LOG(FATAL) << "AttributeError: Field 'kind' is not found";
+  }
+  // parse "tag"
+  if (config.count(kTag)) {
+    const auto* tag = config[kTag].as<StringObj>();
+    CHECK(tag != nullptr) << "AttributeError: Expect type of field 'tag' is string, but get: "
+                          << config[kTag]->GetTypeKey();
+    target->tag = GetRef<String>(tag);
+    config.erase(kTag);
+  } else {
+    target->tag = "";
+  }
+  // parse "keys"
+  if (config.count(kKeys)) {
+    std::vector<String> keys;
+    // user provided keys
+    const auto* cfg_keys = config[kKeys].as<ArrayNode>();
+    CHECK(cfg_keys != nullptr)
+        << "AttributeError: Expect type of field 'keys' is an Array, but get: "
+        << config[kTag]->GetTypeKey();
+    for (const ObjectRef& e : *cfg_keys) {
+      const auto* key = e.as<StringObj>();
+      CHECK(key != nullptr) << "AttributeError: Expect 'keys' to be an array of strings, but it "
+                               "contains an element of type: "
+                            << e->GetTypeKey();
+      keys.push_back(GetRef<String>(key));
+    }
+    // add device name
+    if (config_dict.count("device") && config_dict.at("device")->IsInstance<StringObj>()) {
+      keys.push_back(Downcast<String>(config_dict.at("device")));
+    }
+    // add default keys
+    for (const auto& key : target->kind->default_keys) {
+      keys.push_back(key);
+    }
+    // de-duplicate keys
+    target->keys = DeduplicateKeys(keys);

Review comment:
   Fair enough. Then I would just have 3 points accordingly:
   - We don't aim to have a comprehensive semantic checking in the target 
parser. Ins
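The recursive, type-dispatched parsing that `ParseAttr` performs can be sketched in plain Python (a hypothetical standalone illustration, not TVM's API): a schema value such as `int`, `str`, `[int]`, or `{str: int}` plays the role of `ValueTypeInfo`.

```python
def parse_attr(obj, expected):
    """Recursively type-check a JSON-like attribute value against a schema."""
    if expected is int or expected is str:
        if not isinstance(obj, expected):
            raise TypeError("Expect type '%s', but get: %s"
                            % (expected.__name__, type(obj).__name__))
        return obj
    if isinstance(expected, list):  # e.g. [int] means "array of int"
        if not isinstance(obj, list):
            raise TypeError("Expect type 'list', but get: %s" % type(obj).__name__)
        out = []
        for i, e in enumerate(obj, 1):
            try:
                out.append(parse_attr(e, expected[0]))
            except TypeError as err:
                raise TypeError("Error occurred when parsing element %d of the "
                                "array: %s. Details:\n%s" % (i, obj, err))
        return out
    if isinstance(expected, dict):  # e.g. {str: int} means "dict of str -> int"
        if not isinstance(obj, dict):
            raise TypeError("Expect type 'dict', but get: %s" % type(obj).__name__)
        (kt, vt), = expected.items()
        return {parse_attr(k, kt): parse_attr(v, vt) for k, v in obj.items()}
    raise TypeError("Unsupported type registered: %r" % expected)

print(parse_attr([1, 2, 3], [int]))  # -> [1, 2, 3]
```

As in the C++ version, element errors are re-raised with the 1-based index so the failing entry is easy to find.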

[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r470823089



##
File path: src/target/target.cc
##
@@ -162,14 +314,164 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+  // parse "tag"
+  if (config.count(kTag)) {

Review comment:
   Good points! I will add more testcases :-)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on a change in pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#discussion_r470822681



##
File path: src/target/target.cc
##
@@ -162,14 +314,164 @@ Target Target::Create(const String& target_str) {
   return CreateTarget(splits[0], {splits.begin() + 1, splits.end()});
 }
 
+  // parse "keys"
+  if (config.count(kKeys)) {
+    std::vector<String> keys;
+    // user provided keys
+    const auto* cfg_keys = config[kKeys].as<ArrayNode>();
+    CHECK(cfg_keys != nullptr)
+        << "AttributeError: Expect type of field 'keys' is an Array, but get: "
+        << config[kTag]->GetTypeKey();
+    for (const ObjectRef& e : *cfg_keys) {
+      const auto* key = e.as<StringObj>();
+      CHECK(key != nullptr) << "AttributeError: Expect 'keys' to be an array of strings, but it "
+                               "contains an element of type: "
+                            << e->GetTypeKey();
+      keys.push_back(GetRef<String>(key));
+    }
+    // add device name
+    if (config_dict.count("device") && config_dict.at("device")->IsInstance<StringObj>()) {
+      keys.push_back(Downcast<String>(config_dict.at("device")));
+    }
+    // add default keys
+    for (const auto& key : target->kind->default_keys) {
+      keys.push_back(key);
+    }
+    // de-duplicate keys
+    target->keys = DeduplicateKeys(keys);

Review comment:
   **In response to the second example.** We can see that the second example exposes an extreme case where the keys can be semantically unnatural
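The key-composition order being debated here (user keys, then the device name, then the kind's default keys, with duplicates collapsed to their first occurrence) can be sketched as standalone Python (illustrative only; `compose_keys` is a hypothetical helper, not TVM code):

```python
def deduplicate_keys(keys):
    """Order-preserving de-duplication, as DeduplicateKeys does for target keys."""
    seen, out = set(), []
    for k in keys:
        if k not in seen:
            seen.add(k)
            out.append(k)
    return out

def compose_keys(user_keys, device, default_keys):
    # User-provided keys come first, then the device name (if any),
    # then the kind's default keys; later duplicates are dropped.
    keys = list(user_keys)
    if isinstance(device, str):
        keys.append(device)
    keys.extend(default_keys)
    return deduplicate_keys(keys)

print(compose_keys(["cpu"], "arm_cpu", ["arm_cpu", "cpu"]))  # -> ['cpu', 'arm_cpu']
```

Because de-duplication keeps the first occurrence, a user-supplied key always outranks a default key with the same name.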

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-14 Thread GitBox


anijain2305 commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r470812443



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
     # convert to tflite model
     converter = tf.lite.TFLiteConverter.from_session(
         sess, input_tensors, output_tensors)
-
+    converter.experimental_new_converter = experimental_new_converter

Review comment:
   Ok, I understand. If a CI upgrade to a future TFLite version causes problems, we can remove the flag.
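One way to make the flag survive a CI upgrade is to set it only when the converter actually exposes it. A sketch (hypothetical helper and `FakeConverter` stand-in; the real object would be a `tf.lite.TFLiteConverter`):

```python
def set_experimental_converter(converter, enabled):
    """Set experimental_new_converter only when the attribute exists.

    Older or future TFLite versions may add or drop the flag, so guarding
    with hasattr keeps the test harness working across upgrades.
    """
    if hasattr(converter, "experimental_new_converter"):
        converter.experimental_new_converter = enabled
        return True
    return False

class FakeConverter:  # stand-in for tf.lite.TFLiteConverter in this sketch
    experimental_new_converter = False

fc = FakeConverter()
set_experimental_converter(fc, True)
print(fc.experimental_new_converter)  # -> True
```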





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-14 Thread GitBox


anijain2305 commented on pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#issuecomment-674224325


   Thanks @d-smirnov @u99127 This is merged



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (8d91058 -> aa0271e)

2020-08-14 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8d91058  Update precision in the ONNX strided_slice, update precision 
of ToScalar (#6272)
 add aa0271e  Added support for tflite quantized maximum and minimum (#6018)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  42 +-
 tests/python/frontend/tflite/test_forward.py | 118 ++-
 2 files changed, 86 insertions(+), 74 deletions(-)



[GitHub] [incubator-tvm] anijain2305 merged pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-14 Thread GitBox


anijain2305 merged pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6218: [Target] Creating Target from JSON-like Configuration

2020-08-14 Thread GitBox


junrushao1994 commented on pull request #6218:
URL: https://github.com/apache/incubator-tvm/pull/6218#issuecomment-674212797


   Sorry for the late reply. I was in PTO last two days. Thank you @comaniac 
for the very constructive comments, and I will reply correspondingly :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] csullivan commented on pull request #6251: [ONNX] Add Clip importer to handle when min/max are provided as inputs.

2020-08-14 Thread GitBox


csullivan commented on pull request #6251:
URL: https://github.com/apache/incubator-tvm/pull/6251#issuecomment-674210859


   cc @jwfromm 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6275: [Support] Add parallel_for support to run a loop in parallel

2020-08-14 Thread GitBox


tqchen commented on pull request #6275:
URL: https://github.com/apache/incubator-tvm/pull/6275#issuecomment-674184384


   @jcf94 it would be great if we can simplify the parallel for implementation, 
e.g. std::thread has pretty low launching overhead, and we can likely drop the 
threadpool and start std::thread on each parallel_for round. As long as the 
function cost is not high, it should be a simpler implementation.
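The simplification suggested above — spawn plain threads each round instead of keeping a pool — can be sketched as standalone Python (illustrative only; the PR itself proposes a C++ `parallel_for` over `std::thread`):

```python
import threading

def parallel_for(begin, end, body, step=1):
    """Run body(i) for each i in range(begin, end, step), one thread per index.

    No thread pool: threads are launched and joined per call, which is fine
    as long as per-call work dominates the (small) launch overhead.
    """
    threads = [threading.Thread(target=body, args=(i,))
               for i in range(begin, end, step)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

squares = [0] * 8
parallel_for(0, 8, lambda i: squares.__setitem__(i, i * i))
print(squares)  # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

Joining before returning gives the same completion guarantee a pool-based implementation would provide.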



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] yzhliu commented on pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-14 Thread GitBox


yzhliu commented on pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#issuecomment-674181792


   @tqchen it's ready



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] samskalicky edited a comment on pull request #5986: Fixes for GraphRuntime destruction

2020-08-14 Thread GitBox


samskalicky edited a comment on pull request #5986:
URL: https://github.com/apache/incubator-tvm/pull/5986#issuecomment-674181057


   > the particular error message seems is still due to the use of global 
states(perhaps ndarray given that the graph rt is now resolved) 
somewhere(perhaps in the python),
   
   True, I'm running TVM inside a custom subgraph operator in MXNet, so the 
subgraph operator is stateful and loads the GraphRuntime in its constructor. So 
the DeviceAPI objects will be destructed before the runtime is.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] samskalicky commented on pull request #5986: Fixes for GraphRuntime destruction

2020-08-14 Thread GitBox


samskalicky commented on pull request #5986:
URL: https://github.com/apache/incubator-tvm/pull/5986#issuecomment-674181057


   > the particular error message seems is still due to the use of global 
states(perhaps ndarray given that the graph rt is now resolved) 
somewhere(perhaps in the python),
   
   True, I'm running TVM inside a custom subgraph operator in MXNet, so the 
subgraph operator is stateful and loads the GraphRuntime in its constructor.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics merged pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


zhiics merged pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


zhiics commented on pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#issuecomment-674181036


   Thanks @mbrookhart 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics closed issue #6263: #4312 broke Huggingface BERT ONNX import

2020-08-14 Thread GitBox


zhiics closed issue #6263:
URL: https://github.com/apache/incubator-tvm/issues/6263


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (37912a1 -> 8d91058)

2020-08-14 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 37912a1  [TESTS] Decrease test times by introducing testing model 
(#6235)
 add 8d91058  Update precision in the ONNX strided_slice, update precision 
of ToScalar (#6272)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  |  8 
 src/relay/transforms/pattern_util.h|  6 +++---
 tests/python/frontend/onnx/test_forward.py | 11 ++-
 3 files changed, 13 insertions(+), 12 deletions(-)



[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470754362



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params = {}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+        # TODO(andrew) this only works on single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "int", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'
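The dispatch pattern used above — a dict mapping bit widths (or `(src_bits, dst_bits)` pairs for casts) to external lowering-function names — can be sketched standalone (illustrative only; TVM's real `create_lower_func` returns a lowering callback, and the posit function names are the ones registered in the diff):

```python
def create_lower_func(func_names):
    """Sketch: select an external function name by the bit-width key of an op."""
    def lower(op_bits):
        if op_bits not in func_names:
            raise KeyError("no lowering registered for %r" % (op_bits,))
        return func_names[op_bits]
    return lower

lower_cast = create_lower_func({(32, 32): "FloatToPosit32es2",
                                (32, 16): "FloatToPosit16es2",
                                (32, 8):  "FloatToPosit8es2"})
print(lower_cast((32, 16)))  # -> FloatToPosit16es2
```

Keying casts on both source and destination widths is what lets one registration cover every float-to-posit conversion.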

[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470754362



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, 
create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params={}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+        # TODO(andrew) this only works on single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "int", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'
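
The `compare` helper above ultimately boils down to an `np.testing.assert_allclose` call with relative and absolute tolerances. A minimal, self-contained sketch of that tolerance check (plain NumPy, no TVM required):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0], dtype="float32")  # the "correct" float32 result
b = a * (1 + 1e-6)  # small relative error, as a lossy cast round-trip might introduce

# passes: every element satisfies |b - a| <= atol + rtol * |a|
np.testing.assert_allclose(b, a, rtol=1e-5, atol=1e-5)
```

The choice of `rtol`/`atol` per test is what lets lower-precision custom datatypes (e.g. 8-bit posits) still pass against a float32 reference.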

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5913: [random] support random fill

2020-08-14 Thread GitBox


comaniac commented on a change in pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#discussion_r470753329



##
File path: cmake/config.cmake
##
@@ -140,7 +140,7 @@ set(USE_MKLDNN OFF)
 set(USE_OPENMP none)
 
 # Whether use contrib.random in runtime
-set(USE_RANDOM OFF)
+set(USE_RANDOM ON)

Review comment:
   Would this flag name be too vague? At the least, we should improve the
description by mentioning what happens if we set it to OFF.
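
For instance, the entry could spell out what turning the flag off actually disables. A possible rewording of the `config.cmake` entry (the exact phrasing here is only a suggestion, not the final text):

```cmake
# Whether to build the tvm.contrib.random runtime module.
# If OFF, random number generation ops (e.g. random_fill)
# are unavailable at runtime.
set(USE_RANDOM ON)
```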





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


mbrookhart commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470747397



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   On x86, long double has 63 bits of mantissa and 1 bit of sign, just like 
int64. On PowerPC and ARM, it's a 128-bit floating point with 106 bits of 
mantissa.









[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


mbrookhart commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470747397



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   long double has 63 bits of mantissa and 1 bit of sign on x86, just like 
int64. On PowerPC and ARM, it's a 128-bit floating point with 106 bits of 
mantissa.









[GitHub] [incubator-tvm] tqchen commented on pull request #5986: Fixes for GraphRuntime destruction

2020-08-14 Thread GitBox


tqchen commented on pull request #5986:
URL: https://github.com/apache/incubator-tvm/pull/5986#issuecomment-674175359


   The particular error message seems to still be due to the use of global 
state (perhaps an NDArray, given that the graph runtime issue is now resolved) 
somewhere (perhaps in the Python bindings).







[GitHub] [incubator-tvm] comaniac commented on pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


comaniac commented on pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#issuecomment-674175010


   `TVM_ARM_RPC_*` is confusing, especially since it is only used by the testing 
infra. IMHO, `TVM_TEST_ARM_RPC_*` would be slightly better. On the other hand, 
`TVM_RPC_*` seems too general; I don't think we need a series of `TVM_RPC_*` 
environment variables to control RPC settings elsewhere.
   
   Meanwhile, to be honest, I am not a fan of this style. From the unit test and 
CI point of view, we should not depend on environment variables if we don't have 
to. I would like to hear others' opinions.
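
Whatever names end up being chosen, the usual pattern is to read such variables with defaults and skip the remote test when they are unset. A hedged sketch (the variable names below are purely illustrative, not an agreed-upon API):

```python
import os

# Hypothetical variable names, for illustration only.
host = os.environ.get("TVM_TEST_ARM_RPC_HOST")           # None when unset
port = int(os.environ.get("TVM_TEST_ARM_RPC_PORT", "9090"))  # default port

if host is None:
    # In a test harness this would be pytest.skip(...) rather than print.
    print("RPC host not configured; skipping remote test")
else:
    print("connecting to %s:%d" % (host, port))
```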







[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


mbrookhart commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470747397



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   long double has 63 bits of mantissa and 1 bit of sign, just like int64









[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


mbrookhart commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470746434



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   So we get rounding errors if we pass in large int64_t values









[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


mbrookhart commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470746250



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   Because a double only has 52 bits of mantissa, it can't store the full 
precision of an int64_t.
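
The point is easy to verify from Python, where a `float` is an IEEE-754 double (a standalone sketch, no TVM involved):

```python
# A double has a 52-bit mantissa, so integers above 2**53
# can no longer all be represented exactly.
assert float(2**53) == 2**53                # still exact
assert float(2**53 + 1) == float(2**53)     # rounded: precision lost
assert int(float(2**63 - 1)) != 2**63 - 1   # INT64_MAX rounds up to 2**63
```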









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6272: Update precision in the ONNX strided_slice, update precision of ToScalar

2020-08-14 Thread GitBox


zhiics commented on a change in pull request #6272:
URL: https://github.com/apache/incubator-tvm/pull/6272#discussion_r470738929



##
File path: src/relay/transforms/pattern_util.h
##
@@ -374,7 +374,7 @@ inline bool IsEqualScalar(const Expr& a, const Expr& b) {
  * \param i element index
  * \return Converted scalar value.
  */
-static inline double ToScalar(const runtime::NDArray& array, size_t i = 0) {
+static inline long double ToScalar(const runtime::NDArray& array, size_t i = 0) {

Review comment:
   Why is double not sufficient?









[GitHub] [incubator-tvm] samskalicky commented on pull request #5986: Fixes for GraphRuntime destruction

2020-08-14 Thread GitBox


samskalicky commented on pull request #5986:
URL: https://github.com/apache/incubator-tvm/pull/5986#issuecomment-674158779


   Lots of testing over the past month; making the runtime not static has 
definitely reduced the occurrence of the problem, but we are still seeing 
intermittent failures (depending on the model, they can be more prevalent).
   ```
   Segmentation fault: 11
   
   *** Error in `python': double free or corruption (!prev): 0x55becd8c4460 
***
   ======= Backtrace: =========
   /lib/x86_64-linux-gnu/libc.so.6(+0x777f5)[0x7fd5a64827f5]
   /lib/x86_64-linux-gnu/libc.so.6(+0x8038a)[0x7fd5a648b38a]
   /lib/x86_64-linux-gnu/libc.so.6(cfree+0x4c)[0x7fd5a648f58c]
   /lib/x86_64-linux-gnu/libc.so.6(+0x3a035)[0x7fd5a6445035]
   /lib/x86_64-linux-gnu/libc.so.6(+0x3a055)[0x7fd5a6445055]
   
/home/ubuntu/anaconda3/lib/python3.7/site-packages/mxnet/libmxnet.so(+0x7fc3125)[0x7fd5498ea125]
   /lib/x86_64-linux-gnu/libc.so.6(+0x354c0)[0x7fd5a64404c0]
   /usr/local/cuda/lib64/libcudart.so.10.0(+0x1d9fe)[0x7fd4fc1909fe]
   /usr/local/cuda/lib64/libcudart.so.10.0(+0x2296b)[0x7fd4fc19596b]
   /usr/local/cuda/lib64/libcudart.so.10.0(cudaSetDevice+0x47)[0x7fd4fc1bd087]
   
/home/ubuntu/anaconda3/lib/python3.7/site-packages/neomxnet/libdlr.so(_ZN3tvm7runtime13CUDADeviceAPI13FreeDataSpaceE9DLContextPv+0x3a)[0x7fd4eda8652a]
   ```







[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470731513



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params={}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+        # TODO(andrew) this only works on single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "int", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'

[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470731292



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params={}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+        # TODO(andrew) this only works on single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "int", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'

[GitHub] [incubator-tvm] jroesch merged pull request #6235: [TESTS] Decrease test times by introducing testing model

2020-08-14 Thread GitBox


jroesch merged pull request #6235:
URL: https://github.com/apache/incubator-tvm/pull/6235


   







[incubator-tvm] branch master updated (4b2c01a -> 37912a1)

2020-08-14 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 4b2c01a  [Parser] Add support for parsing the any dimension.  (#6277)
 add 37912a1  [TESTS] Decrease test times by introducing testing model 
(#6235)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/testing/__init__.py   |   1 +
 python/tvm/relay/testing/init.py   |  10 ++
 python/tvm/relay/testing/synthetic.py  | 120 +
 tests/micro/test_runtime_micro_on_arm.py   |   1 -
 .../relay/test_analysis_extract_fused_functions.py |   4 +-
 tests/python/relay/test_change_batch.py|  10 +-
 tests/python/relay/test_pass_auto_quantize.py  |  13 +--
 tests/python/relay/test_vm.py  |   2 +-
 tests/python/relay/test_vm_serialization.py|   9 +-
 .../unittest/test_autotvm_graph_tuner_utils.py |   4 +-
 tests/python/unittest/test_runtime_micro.py|   1 -
 .../test_runtime_module_based_interface.py |  61 ++-
 .../python/unittest/test_runtime_module_export.py  |  30 +++---
 tests/python/unittest/test_target_codegen_blob.py  |  16 +--
 14 files changed, 209 insertions(+), 73 deletions(-)
 create mode 100644 python/tvm/relay/testing/synthetic.py



[GitHub] [incubator-tvm] jroesch commented on pull request #6235: [TESTS] Decrease test times by introducing testing model

2020-08-14 Thread GitBox


jroesch commented on pull request #6235:
URL: https://github.com/apache/incubator-tvm/pull/6235#issuecomment-674150100


   cc @areusch 







[incubator-tvm] branch master updated: [Parser] Add support for parsing the any dimension. (#6277)

2020-08-14 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 4b2c01a  [Parser] Add support for parsing the any dimension.  (#6277)
4b2c01a is described below

commit 4b2c01a8fcba1f5941ccd18d2b1940fe8cefa7f1
Author: Jared Roesch 
AuthorDate: Fri Aug 14 09:13:42 2020 -0700

[Parser] Add support for parsing the any dimension.  (#6277)

* Add case for any dimensions

* Fix second test case
---
 src/parser/parser.cc |  5 +++--
 tests/python/relay/test_ir_parser.py | 28 
 2 files changed, 31 insertions(+), 2 deletions(-)

diff --git a/src/parser/parser.cc b/src/parser/parser.cc
index 71d4304..8055d91 100644
--- a/src/parser/parser.cc
+++ b/src/parser/parser.cc
@@ -1502,6 +1502,8 @@ class Parser {
   tvm::PrimExpr dim;
   if (Peek()->token_type == TokenType::kMetaReference) {
 dim = Downcast(ParseMetaRef());
+  } else if (WhenMatch(TokenType::kQuestion)) {
+dim = tvm::tir::Any();
   } else {
 dim = Downcast(Match(TokenType::kInteger)->data);
   }
@@ -1585,8 +1587,7 @@ class Parser {
   return ParseNonPrimitiveType(tok);
 }
   }
-}
-if (WhenMatch(TokenType::kUnderscore)) {
+} else if (WhenMatch(TokenType::kUnderscore)) {
   return IncompleteType();
 } else {
   this->diag_ctx->EmitFatal(Diagnostic::Error(tok->span)
diff --git a/tests/python/relay/test_ir_parser.py 
b/tests/python/relay/test_ir_parser.py
index 3fcc7da..6d581b6 100644
--- a/tests/python/relay/test_ir_parser.py
+++ b/tests/python/relay/test_ir_parser.py
@@ -591,6 +591,16 @@ def test_tensor_type():
 )
 )
 
+assert_parses_as(
+"let %_ : Tensor[(?, 1), float32] = (); ()",
+relay.Let(
+relay.Var("_", relay.TensorType((tvm.tir.Any(), 1), "float32")),
+UNIT,
+UNIT
+)
+)
+
+
 
 def test_function_type():
 assert_parses_as(
@@ -678,6 +688,24 @@ def test_adt_defn():
 mod
 )
 
+def test_adt_any():
+code = """
+type my_dtype {
+my_cons(Tensor[(?, 1), uint16]),
+}
+"""
+mod = parse_module(code)
+items = mod.type_definitions.items()
+global_type_var, type_data = items[0]
+assert global_type_var.name_hint == "my_dtype"
+ctors = type_data.constructors
+assert len(ctors) == 1
+my_cons = ctors[0]
+assert my_cons.name_hint == "my_cons"
+ty_shape = my_cons.inputs[0].shape
+assert isinstance(ty_shape[0], tvm.tir.Any)
+assert ty_shape[1] == 1
+
 
 def test_empty_adt_defn():
 mod = tvm.IRModule()



[GitHub] [incubator-tvm] tmoreau89 merged pull request #6277: [Parser] Add support for parsing the any dimension.

2020-08-14 Thread GitBox


tmoreau89 merged pull request #6277:
URL: https://github.com/apache/incubator-tvm/pull/6277


   







[GitHub] [incubator-tvm] tkonolige commented on pull request #6275: [Support] Add parallel_for support to run a loop in parallel

2020-08-14 Thread GitBox


tkonolige commented on pull request #6275:
URL: https://github.com/apache/incubator-tvm/pull/6275#issuecomment-674148457


   I think I'm a little late to this discussion, but what is the reason for 
having our own thread pool/parallel_for implementation? OpenMP is already an 
optional dependency; we could use it when available.
   
   I can understand that having our own implementation means we don't have to 
depend on another library, but it also means we need to maintain it and add 
features as we need them (though this doesn't seem like much code).
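
For reference, the shape of such a `parallel_for` utility is roughly the following (this is only a Python sketch of the concept under discussion, not TVM's C++ implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(begin, end, fn, step=1):
    """Run fn(i) for each i in range(begin, end, step) on a thread pool."""
    with ThreadPoolExecutor() as pool:
        # list() forces all tasks to complete and re-raises
        # any exception thrown inside a worker
        list(pool.map(fn, range(begin, end, step)))

# Each index is written by a (potentially different) worker thread.
results = [0] * 8
parallel_for(0, 8, lambda i: results.__setitem__(i, i * i))
print(results)
```

The maintenance question raised above is essentially about exactly this kind of code: small, but another piece of concurrency logic to own.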







[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470716989



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params={}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+        # TODO(andrew) this only works on single output
+        maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "int", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: 'Posit8es2Sub'

[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470715346



##
File path: tests/python/unittest/test_custom_datatypes.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, 
create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
+
+def convert_ndarray(dst_dtype, *arrays):
+"""Converts NDArray(s) into the specified datatype"""
+def convert(array):
+x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+cast = relay.Function([x], x.astype(dst_dtype))
+with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+return relay.create_executor('graph').evaluate(cast)(array)
+
+return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+module = relay.frontend.ChangeDatatype(src, dst)(module)
+module = relay.transform.InferType()(module)
+params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params = {}):
+ex = relay.create_executor("graph", mod=module)
+
+correct = ex.evaluate()(*input, **params)
+
+module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+ex = relay.create_executor("graph", mod=module)
+# converts all inputs to dst_dtype
+x_converted = convert_ndarray(dst_dtype, *input)
+
+# Vectorization is not implemented with custom datatypes
+with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+maybe_correct = ex.evaluate()(*x_converted, **params)
+# TODO(andrew) this only works on single output
+maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+correct.asnumpy(),
+rtol=rtol,
+atol=atol)
+
+def setup():
+"""Set up tests
+
+Currently, this registers some custom datatypes using the Bring Your
+Own Datatypes framework.
+"""
+
+# To use datatype operations in an external library, you should first load
+# the library containing the datatype implementation:
+# CDLL("libposit.so", RTLD_GLOBAL)
+# In this case, the datatype library we are using is built right into TVM,
+# so we do not need to explicitly load any library.
+
+# You can pick a code for your datatype arbitrarily, as long as it is
+# greater than 128 and has not already been chosen.
+
+register("posites2", 131)
+
+register_op(create_lower_func(
+{
+(32, 32): "FloatToPosit32es2",
+(32, 16): "FloatToPosit16es2",
+(32, 8): 'FloatToPosit8es2',
+}), 
+"Cast", "llvm", "float", "posites2")
+register_op(create_lower_func(
+{
+(32, 32): "Posit32es2ToFloat",
+(16, 32): 'Posit16es2ToFloat',
+(8, 32): 'Posit8es2ToFloat',
+}), 
+"Cast", "llvm", "posites2", "float")
+register_op(create_lower_func(
+{
+(4, 32): 'IntToPosit32es2',
+(4, 16): 'IntToPosit16es2',
+(4, 8): 'IntToPosit8es2'
+}), 
+"Cast", "llvm", "int", "posites2")
+register_op(create_lower_func({
+32: 'Posit32es2Add',
+16: 'Posit16es2Add',
+8: 'Posit8es2Add'
+}), "Add", "llvm", "posites2")
+register_op(create_lower_func({
+32: 'Posit32es2Sub',
+16: 'Posit16es2Sub',
+8: 'Posit8es2Sub'
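The `compare` helper in the diff above reduces to a cast round-trip checked with `np.testing.assert_allclose`. NumPy has no posit types, so the float16 round-trip below is purely a stand-in to illustrate that tolerance check (a sketch, not part of the PR):

```python
import numpy as np

def roundtrip_check(array, via_dtype, rtol, atol):
    # Cast to a lower-precision dtype and back, then verify the result
    # still matches the original within the given tolerances -- the same
    # shape of check that compare() performs after change_dtype().
    converted = array.astype(via_dtype).astype(array.dtype)
    np.testing.assert_allclose(converted, array, rtol=rtol, atol=atol)
    return converted

x = np.linspace(-1.0, 1.0, 16, dtype="float32")
y = roundtrip_check(x, "float16", rtol=1e-3, atol=1e-3)
```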

[GitHub] [incubator-tvm] leandron commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-14 Thread GitBox


leandron commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r470713940



##
File path: tvmc/README.md
##
@@ -0,0 +1,122 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# TVMC
+
+```tvmc``` is a tool that provides useful command line invocations to compile,
+run and tune models using the TVM graph runtime.
+
+In order to compile and tune, ```tvmc``` takes a model file and parameters as 
inputs,
+and outputs a TAR file that contains the TVM modules that represent the
+input model, graph and weights, for the required target. Target can be native 
or
+cross-compiled.
+
+When running a given model, ```tvmc``` expects a compiled model and input 
tensor values, so
+that it can produce the outputs when running on the required target, local or 
remote.
+
+This document presents an overview and a short tutorial about ```tvmc```.
+
+## Installation
+
+```tvmc``` is a Python tool and, provided TVM and its dependencies are 
available, it can be
+installed in various ways.
+
+The recommended way to install ```tvmc``` is via its ```setuptools``` 
configuration file,
located at ```tvm/tvmc/setup.py```. To do that, go to the TVM directory 
and run the
+installation command, as described below:
+
+cd tvm/tvmc
+python setup.py install
+
+The command above should install everything needed to get started with 
```tvmc```, including
+all the supported frontends.
+
+Once ```tvmc``` is installed, the main entry-point is the ```tvmc``` command 
line. A set of
+sub-commands is available to run the specific tasks offered by ```tvmc```: 
```tune```,
+```compile``` and ```run```.
+
+The simplest way to get more information about a specific sub-command is 
+```tvmc <subcommand> --help```, for example:
+
+tvmc compile --help
+
+##  Usage
+
+Now, let's compile a network and generate a few predictions using ```tvmc```.
+
+As described above, in order to compile a model using ```tvmc```, the first 
thing we need is
+a model file. For the sake of this example, let's use a MobileNet V1 model, in 
TFLite format.
+More information about the model is available on
+[this page](https://www.tensorflow.org/lite/guide/hosted_models).
+
+To download and uncompress the ```.tgz``` file (34 MB), so that we can access 
the TFLite model,
+run the command lines below:
+
+wget 
https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
+tar xvzf mobilenet_v1_1.0_224_quant.tgz
+
+With these commands, we should be able to provide the MobileNet V1 file 
(```mobilenet_v1_1.0_224_quant.tflite```)
+to ```tvmc```, and obtain our TVM compiled model as an output. To do that, run 
the
+following command line:
+
+tvmc compile -v mobilenet_v1_1.0_224_quant.tflite -o compiled_model.tar
+
+As an output, you will notice a ```compiled_model.tar```, in the same 
directory.
+
+Now it is time to feed the model with some input that will generate a 
prediction using TVM.
+As models are very diverse in terms of input formats and the source of those 
inputs (images, streams,
+sensors, sound, to name a few), ```tvmc``` supports ```.npz``` (serialized 
NumPy arrays) as the
+main format for ```tvmc run```. To learn more about the ```.npz``` format, 
please read the
+[documentation](https://numpy.org/doc/stable/reference/generated/numpy.savez.html)
 on NumPy website.
+
+MobileNet V1 expects a ```(224, 224, 3)``` input tensor. The Python code 
snippet below can be used
+as an example on how to convert a PNG file into a ```.npz``` file in the 
expected shape.
+The example below uses [PIL](https://pillow.readthedocs.io/en/stable/) and
+[NumPy](https://numpy.org) functions to import the image and generate the 
expected file.
+
+from tvm.contrib.download import download_testdata
+from PIL import Image
+import numpy as np
+
+cat_url = 
'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
+image_path = download_testdata(cat_url, 'imagenet_cat.png', module='data')
+resized_image = Image.open(image_path).resize((224, 224))
+image_data = np.asarray(resized_image).astype("float32")
+image_data = np.expand_dims(image_data, axis=0)
+
+np.savez("imagenet_cat", input=image_data)

Review comment:
   The specific points discussed here will make more sense in the next PR, 
when introducing the `compile` subcommand, so I'll mark it as resolved for now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] u99127 commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


u99127 commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r470713079



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -25,15 +26,22 @@
 from tvm.contrib import graph_runtime
 from tvm.relay.op.contrib import arm_compute_lib
 from tvm.contrib import util
+from tvm.autotvm.measure import request_remote
 
 
 class Device:
-"""Adjust the following settings to connect to and use a remote device for 
tests."""
-use_remote = False
-target = "llvm -mtriple=aarch64-linux-gnu -mattr=+neon"
-# Enable cross compilation when connecting a remote device from a non-arm 
platform.
-cross_compile = None
-# cross_compile = "aarch64-linux-gnu-g++"
+"""
+Use the following environment variables to connect to and use a remote 
device.
+The RPC mechanism can be used in the case of compiling an ACL module on an 
x86 machine
+but running on an AArch64 machine.
+"""
+rpc_host = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_HOST", "localhost")
+rpc_port = int(os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_PORT", 9090))
+rpc_device_key = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_DEVICE_KEY", "")
+target = os.environ.get("TVM_ARM_COMPUTE_LIB_TARGET", "llvm 
-mtriple=aarch64-linux-gnu -mattr=+neon")

Review comment:
   Yeah we'd need ways of skipping the test on devices that aren't 
supported. 









[GitHub] [incubator-tvm] lhutton1 commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


lhutton1 commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r470711363



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -25,15 +26,22 @@
 from tvm.contrib import graph_runtime
 from tvm.relay.op.contrib import arm_compute_lib
 from tvm.contrib import util
+from tvm.autotvm.measure import request_remote
 
 
 class Device:
-"""Adjust the following settings to connect to and use a remote device for 
tests."""
-use_remote = False
-target = "llvm -mtriple=aarch64-linux-gnu -mattr=+neon"
-# Enable cross compilation when connecting a remote device from a non-arm 
platform.
-cross_compile = None
-# cross_compile = "aarch64-linux-gnu-g++"
+"""
+Use the following environment variables to connect to and use a remote 
device.
+The RPC mechanism can be used in the case of compiling an ACL module on an 
x86 machine
+but running on an AArch64 machine.
+"""
+rpc_host = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_HOST", "localhost")
+rpc_port = int(os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_PORT", 9090))
+rpc_device_key = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_DEVICE_KEY", "")
+target = os.environ.get("TVM_ARM_COMPUTE_LIB_TARGET", "llvm 
-mtriple=aarch64-linux-gnu -mattr=+neon")

Review comment:
   I did have a think about this, although different tests may require 
different machines. For example CoreML uses similar environment variables to 
connect to a machine running ios. I feel like this approach may enforce that we 
use the same device for every test which uses RPC?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
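The environment-variable pattern being discussed is easy to factor so that each test suite points at its own device; a minimal sketch (the helper name, the ```prefix``` parameter, and the dict layout are illustrative, not part of the PR):

```python
import os

def read_rpc_config(environ=os.environ, prefix="TVM_ARM_COMPUTE_LIB"):
    # Each setting falls back to a default when its variable is unset,
    # mirroring the Device class in the diff above. A per-suite prefix
    # (e.g. one for ACL, another for CoreML) keeps remote devices separate.
    return {
        "host": environ.get(f"{prefix}_RPC_HOST", "localhost"),
        "port": int(environ.get(f"{prefix}_RPC_PORT", 9090)),
        "device_key": environ.get(f"{prefix}_RPC_DEVICE_KEY", ""),
    }

defaults = read_rpc_config(environ={})
overridden = read_rpc_config(environ={"TVM_ARM_COMPUTE_LIB_RPC_PORT": "9191"})
```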




[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470710592



##
File path: Makefile
##
@@ -90,6 +90,10 @@ scalalint:
 
 lint: cpplint pylint jnilint
 
+# Test scripts
+pyunittest:
+   ./tests/scripts/task_python_unittest.sh

Review comment:
   - [ ] @gussmith23 remove









[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470710364



##
File path: 3rdparty/nop-type/nop-type.cc
##
@@ -0,0 +1,30 @@
+#include 

Review comment:
   Similarly here, please resolve these if noptype is removed









[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470710164



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -81,163 +81,109 @@ def setup():
 # You can pick a code for your datatype arbitrarily, as long as it is
 # greater than 128 and has not already been chosen.
 
-register("posit32", 131)
-
-register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
-"posit32", "float")
-register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
-"float", "posit32")
-register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
-"posit32", "int")
-register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
-register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
-"posit32")
-register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Sqrt"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="sqrt")
-# TODO(gus) not sure if this will work...
-register_op(lower_ite,
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="tvm_if_then_else")
-register_op(create_lower_func("Posit32es2Exp"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="exp")
-register_op(create_lower_func("Posit32es2Log"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="log")
-register_op(create_lower_func("Posit32es2Sigmoid"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="sigmoid")
-register_op(create_lower_func("Posit32es2Tanh"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="tanh")
-register_min_func(lambda num_bits: 
-1.329227995784915872903807060280344576e36, "posit32")
-
-register("posit8", 132)
-register_op(create_lower_func("FloatToPosit8es2"), "Cast", "llvm",
-"posit8", "float")
-register_op(create_lower_func("Posit8es2ToFloat"), "Cast", "llvm", "float",
-"posit8")
-register_op(create_lower_func("IntToPosit8es2"), "Cast", "llvm", "posit8",
-"int")
-register_op(create_lower_func("Posit8es2Add"), "Add", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Sub"), "Sub", "llvm", "posit8")
-register_op(create_lower_func("FloatToPosit8es2"), "FloatImm", "llvm",
-"posit8")
-register_op(create_lower_func("Posit8es2Mul"), "Mul", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Div"), "Div", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Max"), "Max", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Sqrt"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="sqrt")
-# TODO(gus) not sure if this will work...
-register_op(lower_ite,
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="tvm_if_then_else")
-register_op(create_lower_func("Posit8es2Exp"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="exp")
-register_op(create_lower_func("Posit8es2Log"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="log")
-register_op(create_lower_func("Posit8es2Sigmoid"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="sigmoid")
-register_op(create_lower_func("Posit8es2Tanh"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="tanh")
-register_min_func(lambda num_bits: -16777216, "posit8")
-
-register("posit16", 133)
-register_op(create_lower_func("FloatToPosit16es2"), "Cast", "llvm",
-"posit16", "float")
-register_op(create_lower_func("Posit16es2ToFloat"), "Cast", "llvm",
-"float", "posit16")
-register_op(create_lower_func("IntToPosit16es2"), "Cast", "llvm",
-"posit16", "int")
-register_op(create_lower_func("Posit16es2Add"), "Add", "llvm", "posit16")
-register_op(create_lower_func("Posit16es2Sub"), "Sub", "llvm", "posit16")
-register_op(create_lower_func("FloatToPosit16es2"), "FloatImm", "llvm",
-"posit16")
-

[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470709788



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -0,0 +1,553 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, 
create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+
+
+def convert_ndarray(dst_dtype, array):
+"""Converts an NDArray into the specified datatype"""
+x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+cast = relay.Function([x], x.astype(dst_dtype))
+with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+return relay.create_executor('graph').evaluate(cast)(array)
+
+
+def change_dtype(src, dst, module, params):
+module = relay.frontend.ChangeDatatype(src, dst)(module)
+module = relay.transform.InferType()(module)
+params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+return module, params
+
+
+def setup():
+"""Set up tests
+
+Currently, this registers some custom datatypes using the Bring Your
+Own Datatypes framework.
+"""
+
+# To use datatype operations in an external library, you should first load
+# the library containing the datatype implementation:
+# CDLL("libposit.so", RTLD_GLOBAL)
+# In this case, the datatype library we are using is built right into TVM,
+# so we do not need to explicitly load any library.
+
+# You can pick a code for your datatype arbitrarily, as long as it is
+# greater than 128 and has not already been chosen.
+
+register("posit32", 131)
+
+register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
+"posit32", "float")
+register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
+"float", "posit32")
+register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
+"posit32", "int")
+register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
+register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
+"posit32")
+register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Sqrt"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="sqrt")
+# TODO(gus) not sure if this will work...
+register_op(lower_ite,
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="tvm_if_then_else")
+register_op(create_lower_func("Posit32es2Exp"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="exp")
+register_op(create_lower_func("Posit32es2Log"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="log")
+register_op(create_lower_func("Posit32es2Sigmoid"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="sigmoid")
+register_op(create_lower_func("Posit32es2Tanh"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="tanh")
+# TODO(gus) these aren't actually right. these are double min(actually 
lowest)/max.
+register_min_func(lambda num_bits: -1.79769e+308, "posit32")
+
+register("posit8", 132)
+register_op(create_lower_func("FloatToPosit8es0"), "Cast", "llvm",
+"posit8", "float")
+

[GitHub] [incubator-tvm] tkonolige commented on pull request #6235: [TESTS] Decrease test times by introducing testing model

2020-08-14 Thread GitBox


tkonolige commented on pull request #6235:
URL: https://github.com/apache/incubator-tvm/pull/6235#issuecomment-674139468


   I seem to have fixed the sporadically failing tests too!







[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470708020



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -0,0 +1,553 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, 
create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+
+
+def convert_ndarray(dst_dtype, array):
+"""Converts an NDArray into the specified datatype"""
+x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+cast = relay.Function([x], x.astype(dst_dtype))
+with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+return relay.create_executor('graph').evaluate(cast)(array)
+
+
+def change_dtype(src, dst, module, params):
+module = relay.frontend.ChangeDatatype(src, dst)(module)
+module = relay.transform.InferType()(module)
+params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+return module, params
+
+
+def setup():
+"""Set up tests
+
+Currently, this registers some custom datatypes using the Bring Your
+Own Datatypes framework.
+"""
+
+# To use datatype operations in an external library, you should first load
+# the library containing the datatype implementation:
+# CDLL("libposit.so", RTLD_GLOBAL)
+# In this case, the datatype library we are using is built right into TVM,
+# so we do not need to explicitly load any library.
+
+# You can pick a code for your datatype arbitrarily, as long as it is
+# greater than 128 and has not already been chosen.
+
+register("posit32", 131)
+
+register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
+"posit32", "float")
+register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
+"float", "posit32")
+register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
+"posit32", "int")
+register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
+register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
+"posit32")
+register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
+register_op(create_lower_func("Posit32es2Sqrt"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="sqrt")
+# TODO(gus) not sure if this will work...
+register_op(lower_ite,
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="tvm_if_then_else")
+register_op(create_lower_func("Posit32es2Exp"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="exp")
+register_op(create_lower_func("Posit32es2Log"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="log")
+register_op(create_lower_func("Posit32es2Sigmoid"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="sigmoid")
+register_op(create_lower_func("Posit32es2Tanh"),
+"Call",
+"llvm",
+"posit32",
+intrinsic_name="tanh")
+# TODO(gus) these aren't actually right. these are double min(actually 
lowest)/max.

Review comment:
   great!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r470707899



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -0,0 +1,553 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+
+
+def convert_ndarray(dst_dtype, array):
+    """Converts an NDArray into the specified datatype"""
+    x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+    cast = relay.Function([x], x.astype(dst_dtype))
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        return relay.create_executor('graph').evaluate(cast)(array)
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posit32", 131)
+
+    register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
+                "posit32", "float")
+    register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
+                "float", "posit32")
+    register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
+                "posit32", "int")
+    register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
+    register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
+                "posit32")
+    register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
+    register_op(create_lower_func("Posit32es2Sqrt"),
+                "Call",
+                "llvm",
+                "posit32",
+                intrinsic_name="sqrt")
+    # TODO(gus) not sure if this will work...

Review comment:
   Great, can you document this in the test that tests different ops?









[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-14 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r469616234



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -0,0 +1,481 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Utilities for changing datatypes of models."""
+import tvm
+import topi.testing
+import numpy as np
+from numpy.random import MT19937, RandomState, SeedSequence
+from tvm import relay
+from tvm.relay.testing.inception_v3 import get_workload as get_inception
+from tvm.relay.testing.resnet import get_workload as get_resnet
+from tvm.relay.testing.mobilenet import get_workload as get_mobilenet
+from tvm.target.datatype import register, register_min_func, register_op, create_lower_func, lower_ite
+from nose.tools import nottest
+
+tgt = "llvm"
+# we use a random seed to generate input_data
+# to guarantee stable tests
+rs = RandomState(MT19937(SeedSequence(123456789)))
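The seeding idiom above can be checked in isolation: two generators built from the same `SeedSequence` produce identical streams, which is exactly what makes inputs drawn from `rs` stable across test runs. A minimal sketch:

```python
import numpy as np
from numpy.random import MT19937, RandomState, SeedSequence

# Two independent generators seeded identically yield the same draws, so
# any test input derived from the seed is reproducible run-to-run.
rs_a = RandomState(MT19937(SeedSequence(123456789)))
rs_b = RandomState(MT19937(SeedSequence(123456789)))

draw_a = rs_a.rand(3, 4)
draw_b = rs_b.rand(3, 4)
assert np.array_equal(draw_a, draw_b)
```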
+
+def convert_ndarray(dst_dtype, *arrays):
+    """Converts NDArray(s) into the specified datatype"""
+    def convert(array):
+        x = relay.var('x', shape=array.shape, dtype=str(array.dtype))
+        cast = relay.Function([x], x.astype(dst_dtype))
+        with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+            return relay.create_executor('graph').evaluate(cast)(array)
+
+    return tuple([convert(x) for x in arrays])
+
+
+def change_dtype(src, dst, module, params):
+    module = relay.frontend.ChangeDatatype(src, dst)(module)
+    module = relay.transform.InferType()(module)
+    params = dict((p, convert_ndarray(dst, params[p])) for p in params)
+    return module, params
+
+def compare(module, input, src_dtype, dst_dtype, rtol, atol, params={}):
+    ex = relay.create_executor("graph", mod=module)
+
+    correct = ex.evaluate()(*input, **params)
+
+    module, _ = change_dtype(src_dtype, dst_dtype, module, [])
+    ex = relay.create_executor("graph", mod=module)
+    # converts all inputs to dst_dtype
+    x_converted = convert_ndarray(dst_dtype, *input)
+
+    # Vectorization is not implemented with custom datatypes
+    with tvm.transform.PassContext(config={"tir.disable_vectorize": True}):
+        maybe_correct = ex.evaluate()(*x_converted, **params)
+    # TODO(andrew) this only works on single output
+    maybe_correct_converted = convert_ndarray(src_dtype, maybe_correct)[0]
+    np.testing.assert_allclose(maybe_correct_converted.asnumpy(),
+                               correct.asnumpy(),
+                               rtol=rtol,
+                               atol=atol)
+
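The structure of `compare` — cast the inputs to the custom dtype, run, cast back, and check against the float reference within a tolerance — can be illustrated without TVM by round-tripping through a narrower float type. This is only a sketch of the comparison pattern; `np.float16` stands in here for the custom posit datatype:

```python
import numpy as np

# Stand-in for the cast-to-custom-dtype step: float64 -> float16 loses
# precision, just as a float -> posit cast would, which is why compare
# takes rtol/atol instead of checking exact equality.
reference = np.linspace(0.1, 4.0, 16)
round_tripped = reference.astype(np.float16).astype(np.float64)

# Exact equality fails after the lossy round trip...
assert not np.array_equal(round_tripped, reference)
# ...but the values agree within the narrow type's precision.
np.testing.assert_allclose(round_tripped, reference, rtol=1e-2, atol=1e-3)
```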
+def setup():
+    """Set up tests
+
+    Currently, this registers some custom datatypes using the Bring Your
+    Own Datatypes framework.
+    """
+
+    # To use datatype operations in an external library, you should first load
+    # the library containing the datatype implementation:
+    # CDLL("libposit.so", RTLD_GLOBAL)
+    # In this case, the datatype library we are using is built right into TVM,
+    # so we do not need to explicitly load any library.
+
+    # You can pick a code for your datatype arbitrarily, as long as it is
+    # greater than 128 and has not already been chosen.
+
+    register("posites2", 131)
+
+    register_op(create_lower_func(
+        {
+            (32, 32): "FloatToPosit32es2",
+            (32, 16): "FloatToPosit16es2",
+            (32, 8): 'FloatToPosit8es2',
+        }),
+        "Cast", "llvm", "posites2", "float")
+    register_op(create_lower_func(
+        {
+            (32, 32): "Posit32es2ToFloat",
+            (16, 32): 'Posit16es2ToFloat',
+            (8, 32): 'Posit8es2ToFloat',
+        }),
+        "Cast", "llvm", "float", "posites2")
+    register_op(create_lower_func(
+        {
+            (4, 32): 'IntToPosit32es2',
+            (4, 16): 'IntToPosit16es2',
+            (4, 8): 'IntToPosit8es2'
+        }),
+        "Cast", "llvm", "posites2", "int")
+    register_op(create_lower_func({
+        32: 'Posit32es2Add',
+        16: 'Posit16es2Add',
+        8: 'Posit8es2Add'
+    }), "Add", "llvm", "posites2")
+    register_op(create_lower_func({
+        32: 'Posit32es2Sub',
+        16: 'Posit16es2Sub',
+        8: '

[GitHub] [incubator-tvm] u99127 commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


u99127 commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r470704519



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -25,15 +26,22 @@
 from tvm.contrib import graph_runtime
 from tvm.relay.op.contrib import arm_compute_lib
 from tvm.contrib import util
+from tvm.autotvm.measure import request_remote
 
 
 class Device:
-    """Adjust the following settings to connect to and use a remote device for tests."""
-    use_remote = False
-    target = "llvm -mtriple=aarch64-linux-gnu -mattr=+neon"
-    # Enable cross compilation when connecting a remote device from a non-arm platform.
-    cross_compile = None
-    # cross_compile = "aarch64-linux-gnu-g++"
+    """
+    Use the following environment variables to connect to and use a remote device.
+    The RPC mechanism can be used in the case of compiling an ACL module on an x86 machine
+    but running on an AArch64 machine.
+    """
+    rpc_host = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_HOST", "localhost")
+    rpc_port = int(os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_PORT", 9090))
+    rpc_device_key = os.environ.get("TVM_ARM_COMPUTE_LIB_RPC_DEVICE_KEY", "")
+    target = os.environ.get("TVM_ARM_COMPUTE_LIB_TARGET", "llvm -mtriple=aarch64-linux-gnu -mattr=+neon")

Review comment:
   I wonder if we can make these generic environment variables provided in a common manner in the test infrastructure: TVM_RPC_HOST, TVM_RPC_PORT and TVM_RPC_DEVICE_KEY. There's nothing ACL-specific about this ...

   I am not sure if there are such provisions in pytest for such extensions, which would be pretty cool.
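A backend-agnostic version along these lines could look like the sketch below. The `TVM_RPC_*` variable names are the ones proposed in this comment, not an existing TVM convention, and `rpc_config` is a hypothetical helper:

```python
import os

def rpc_config(environ=os.environ, prefix="TVM"):
    """Hypothetical helper: read generic RPC settings from the environment,
    falling back to the same defaults the ACL test infrastructure uses."""
    return {
        "host": environ.get(prefix + "_RPC_HOST", "localhost"),
        "port": int(environ.get(prefix + "_RPC_PORT", "9090")),
        "device_key": environ.get(prefix + "_RPC_DEVICE_KEY", ""),
    }
```

Each contrib test suite could then call `rpc_config()` instead of reading its own backend-prefixed variables.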
   
   









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


leandron commented on a change in pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279#discussion_r470687919



##
File path: tests/python/contrib/test_arm_compute_lib/infrastructure.py
##
@@ -42,27 +50,15 @@ def __init__(self):
     @classmethod
     def _get_remote(cls):
         """Get a remote (or local) device to use for testing."""
-        if cls.use_remote:
-            # Here you may adjust settings to run the ACL unit tests via a remote
-            # device using the RPC mechanism. Use this in the case you want to compile
-            # an ACL module on a different machine to what you run the module on i.e.
-            # x86 -> AArch64.
-            #
-            # Use the following to connect directly to a remote device:
-            # device = rpc.connect(
-            #     hostname="0.0.0.0",
-            #     port=9090)
-            #
-            # Or connect via a tracker:
-            # device = tvm.autotvm.measure.request_remote(
-            #     host="0.0.0.0",
-            #     port=9090,
-            #     device_key="device_key",
-            #     timeout=1000)
-            #
-            # return device
-            raise NotImplementedError(
-                "Please adjust these settings to connect to your remote device.")
+        if cls.rpc_host != "localhost":
+            if cls.rpc_device_key:
+                device = request_remote(cls.rpc_device_key,
+                                        cls.rpc_host,
+                                        cls.rpc_port,
+                                        timeout=1000)
+            else:
+                device = rpc.connect(cls.rpc_host, cls.rpc_port)
+            return device

Review comment:
   nitpick: You can simplify this function by having only one return statement. I suggest taking this return out of the `if` (also the one on line 64)
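The suggested single-return refactor might look like this sketch. The connection functions are injected as parameters so the control flow can be shown standalone; in the real method they would be `rpc.connect` and `request_remote`:

```python
def get_remote(host, port, device_key, connect, request_remote):
    """Single-return sketch of Device._get_remote: initialise `device` to
    None (local run) and overwrite it only when an RPC host is configured."""
    device = None
    if host != "localhost":
        if device_key:
            # Connect via a tracker when a device key is given.
            device = request_remote(device_key, host, port, timeout=1000)
        else:
            # Otherwise connect directly to the RPC server.
            device = connect(host, port)
    return device
```

Returning `None` for the localhost case preserves the original implicit-return behaviour while keeping one exit point.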









[GitHub] [incubator-tvm] lhutton1 opened a new pull request #6279: [BYOC][ACL] Enable remote device via environment variables

2020-08-14 Thread GitBox


lhutton1 opened a new pull request #6279:
URL: https://github.com/apache/incubator-tvm/pull/6279


   Improves the ACL remote testing infrastructure by allowing a remote device to be specified via environment variables. This means external scripts can be used to enable the runtime tests. By default an RPC server will not be used and the runtime tests will be skipped.
   
   cc @leandron @u99127 @comaniac @zhiics 
   







[GitHub] [incubator-tvm] u99127 commented on a change in pull request #6018: Added support for tflite quantized maximum and minimum

2020-08-14 Thread GitBox


u99127 commented on a change in pull request #6018:
URL: https://github.com/apache/incubator-tvm/pull/6018#discussion_r470636451



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -250,7 +256,7 @@ def compare_tflite_with_tvm(in_data, in_name, input_tensors,
 # convert to tflite model
 converter = tf.lite.TFLiteConverter.from_session(
 sess, input_tensors, output_tensors)
-
+converter.experimental_new_converter = experimental_new_converter

Review comment:
   @anijain2305 - sorry, for some reason GitHub is refusing to send notifications to email when tagged here :( and that's the reason for my delay in responding to this, in addition to holidays.
   
   AFAIUI there is no way of freezing a tflite model that contains quantized max or min using the toco converter, and thus we need to use the API in that form to get the testsuite coverage. While the API to use this is "subject to change", from my pov it's a use in the testsuite; we aren't using it in the main code base, and thus using it is less risky.
   
   Note also that the tflite converter in tensorflow now defaults to the mlir-based converter as per the latest docs, so this use is still a conservative move: we are sticking to the default, but it gives us additional operator coverage.
   
   Does that help?
   
   Ramana
   








