[GitHub] [incubator-tvm] jcf94 commented on pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


jcf94 commented on pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#issuecomment-671763210


   > Overall LGTM, but I'd like to raise a discussion about the file 
organization for the search policy. `sketch_search_policy.cc` is now about one 
thousand lines and will likely continue to grow. Here is the 
organization I have in mind:
   > 
   > ```
   > auto_scheduler
   > |- search_policy
   >   |- sketch_search
   > |- sketch_search_policy.{h,cc}
   > |- sketch_rules.{h,cc}
   > |- utils.{h,cc}
   >   |- empty_search
   >   |- utils.{h,cc}
   > ```
   > 
   > * Have `auto_scheduler/search_policy/{sketch_policy, empty_policy}`.
   > * Separate all `SketchGenerationRule` and `InitPopulationRule` into 
`search_policy/sketch_policy/sketch_rules`.
   > * Rename `src/auto_scheduler/search_policy/utils.{h,cc}` to 
`src/auto_scheduler/search_policy/utils.{h,cc}` (still under `search_policy`), 
and move all sketch-search-specific functions, such as Mutation (not included in 
this PR), to `auto_scheduler/search_policy/sketch_policy/utils.{h,cc}`.
   
   Currently I have just split out `sketch_policy_rules.{h,cc}`; we can continue 
to discuss the directory structure.
   We can also move the evolutionary search into a separate file.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-10 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468359065



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for feature extraction. The extracted feature vectors are used by cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this the "per-store" feature.
+The cost model also makes a prediction for each BufferStoreNode statement and
+aggregates the predicted scores of the BufferStoreNodes as the score of a TIR Stmt.
+
+The feature specification is defined by `src/auto_scheduler/feature.cc::FeatureSet`.
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Unpack the flattened feature (in byte array format) from C++
+
+    Parameters
+    ----------
+    byte_arr: bytearray
+        The two-dimensional feature vector in serialized byte array format
+
+    Returns
+    -------
+    features: np.ndarray
+        Feature vectors
+    normalized_throughputs: np.ndarray
+        Normalized throughputs
+    task_ids: np.ndarray
+        Task ids
+    """
+
+    # The format for n records is:
+    # {
+    #   int n;
+    #   int[n+2] sizes
+
+    #   float[sizes[0]]    feature for record 1
+    #   float[sizes[1]]    feature for record 2
+    #   ...                feature for record i...
+    #   float[sizes[n-1]]  feature for record n
+
+    #   float[sizes[n]]    normalized throughput for n records
+    #   int[sizes[n+1]]    task id for n records
+    # }
+
+    vec_len = DEFAULT_FEATURE_VEC_LEN
+
+    # unpack sizes
+    offset = 0
+    n = struct.unpack_from("1i", byte_arr, offset=offset)[0]
+    offset += SIZE_OF_INT
+
+    sizes = struct.unpack_from("%di" % (n+2), byte_arr, offset=offset)
+    offset += SIZE_OF_INT * (n+2)
+
+    # unpack features
+    features = []
+    for size in sizes[:-2]:
+        row = []
+
+        # Now, we need to unpack the feature for multiple statements.
+        # The format is:
+        # {
+        #     int n_stmts
+        #     float[n_stmts][vec_len] feature_vecs
+        # }
+        # where vec_len can be calculated by `(size - 1) / n_stmts`
+
+        if size == 0:
+            # failed during lowering
+            features.append(np.zeros((1, vec_len)))
+        else:
+            n_stmts = struct.unpack_from("f", byte_arr, offset=offset)
+            offset += SIZE_OF_FLOAT
+
+            n_stmts = int(n_stmts[0] + 0.5)

Review comment:
   Some of them are ints while the others are floats. I want to store all of 
them in a single array, but we do not have a union type in tvm::Object, so I use 
a single float array to store both ints and floats.
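   The layout described in the diff above can be exercised with a small round-trip script. The sketch below is illustrative only: `pack_records`/`unpack_records` are hypothetical helpers, not part of the TVM API, and they assume 4-byte native ints/floats like the code under review. It also shows why `n_stmts` is recovered with `int(x + 0.5)` after being stored as a float.

```python
import struct

import numpy as np

SIZE_OF_INT = 4
SIZE_OF_FLOAT = 4

def pack_records(feature_rows, throughputs, task_ids):
    """Toy encoder for the layout: int n; int sizes[n+2]; then the payloads.

    Each feature row (an (n_stmts, vec_len) array) is stored as one float
    n_stmts followed by n_stmts * vec_len feature floats.
    """
    n = len(feature_rows)
    payloads = []
    for row in feature_rows:
        n_stmts = row.shape[0]
        payloads.append([float(n_stmts)] + row.ravel().tolist())
    sizes = [len(p) for p in payloads] + [len(throughputs), len(task_ids)]
    buf = struct.pack("1i", n)
    buf += struct.pack("%di" % (n + 2), *sizes)
    for p in payloads:
        buf += struct.pack("%df" % len(p), *p)
    buf += struct.pack("%df" % len(throughputs), *throughputs)
    buf += struct.pack("%di" % len(task_ids), *task_ids)
    return bytearray(buf)

def unpack_records(byte_arr):
    """Toy decoder mirroring the unpacking loop in the diff above."""
    offset = 0
    n = struct.unpack_from("1i", byte_arr, offset=offset)[0]
    offset += SIZE_OF_INT
    sizes = struct.unpack_from("%di" % (n + 2), byte_arr, offset=offset)
    offset += SIZE_OF_INT * (n + 2)

    features = []
    for size in sizes[:-2]:
        # n_stmts is stored as a float; round it back to an int.
        n_stmts = int(struct.unpack_from("f", byte_arr, offset=offset)[0] + 0.5)
        offset += SIZE_OF_FLOAT
        vec_len = (size - 1) // n_stmts
        vals = struct.unpack_from("%df" % (size - 1), byte_arr, offset=offset)
        offset += SIZE_OF_FLOAT * (size - 1)
        features.append(np.array(vals).reshape(n_stmts, vec_len))

    throughputs = list(struct.unpack_from("%df" % sizes[-2], byte_arr, offset=offset))
    offset += SIZE_OF_FLOAT * sizes[-2]
    task_ids = list(struct.unpack_from("%di" % sizes[-1], byte_arr, offset=offset))
    return features, throughputs, task_ids
```

   Packing two toy records and unpacking them recovers the original shapes and values, which is the property the real C++ serializer and this Python decoder have to agree on.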









[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-10 Thread GitBox


merrymercy commented on a change in pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#discussion_r468359834



##
File path: python/tvm/auto_scheduler/feature.py
##
@@ -0,0 +1,242 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""
+Python API for feature extraction. The extracted feature vectors are used by cost models.
+
+We extract one feature vector per BufferStoreNode statement in a TIR Stmt,
+so we call this the "per-store" feature.
+The cost model also makes a prediction for each BufferStoreNode statement and
+aggregates the predicted scores of the BufferStoreNodes as the score of a TIR Stmt.
+
+The feature specification is defined by `src/auto_scheduler/feature.cc::FeatureSet`.
+"""
+
+from typing import List, Tuple, Union, Optional
+import struct
+
+import numpy as np
+
+from .loop_state import State, StateObject
+from .measure import MeasureInput, MeasureResult
+from . import _ffi_api
+
+# The maximum number of extracted buffers for one statement
+DEFAULT_MAX_N_BUFS = 5
+
+# The length of the feature vector
+DEFAULT_FEATURE_VEC_LEN = 164
+
+# The size of int and float in bytes
+SIZE_OF_INT = 4
+SIZE_OF_FLOAT = 4
+
+def unpack_feature(byte_arr: bytearray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
+    """Unpack the flattened feature (in byte array format) from C++
+
+    Parameters
+    ----------
+    byte_arr: bytearray
+        The two-dimensional feature vector in serialized byte array format
+
+    Returns
+    -------
+    features: np.ndarray
+        Feature vectors
+    normalized_throughputs: np.ndarray
+        Normalized throughputs
+    task_ids: np.ndarray
+        Task ids
+    """
+
+    # The format for n records is:
+    # {
+    #   int n;
+    #   int[n+2] sizes

Review comment:
   The `int sizes[n + 1]` you proposed is not a valid array declaration in C; 
I think the existing form is better.





















[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r468346294



##
File path: tests/python/unittest/test_auto_scheduler_sketch_generation.py
##
@@ -0,0 +1,107 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+""" Test sketch generation. """
+
+import tvm
+from tvm import te, auto_scheduler
+
+from test_auto_scheduler_common import (matmul_auto_scheduler_test,
+                                        conv2d_nchw_bn_relu_auto_scheduler_test,
+                                        max_pool2d_auto_scheduler_test,
+                                        min_nm_auto_scheduler_test,
+                                        softmax_nm_auto_scheduler_test,
+                                        softmax_abcd_auto_scheduler_test,
+                                        conv2d_winograd_nhwc_auto_scheduler_test)
+
+def print_sketches(sketches):

Review comment:
   Merged this into `SketchPolicy::generate_sketches()`.









[GitHub] [incubator-tvm] zhouyongxyz opened a new issue #6247: NotImplementedError occur when i convert pytorch model to tvm

2020-08-10 Thread GitBox


zhouyongxyz opened a new issue #6247:
URL: https://github.com/apache/incubator-tvm/issues/6247


  File "compile_pytorch.py", line 48, in <module>
    net, params = relay.frontend.from_pytorch(scripted_model, shape_list)

  File "/home/haifan/haifan/zhouyong/mxnet/maskfaceapi/tvm/python/tvm/relay/frontend/pytorch.py", line 1616, in from_pytorch
    _report_missing_conversion(op_names)

  File "/home/haifan/haifan/zhouyong/mxnet/maskfaceapi/tvm/python/tvm/relay/frontend/pytorch.py", line 1212, in _report_missing_conversion
    raise NotImplementedError(msg)

    NotImplementedError: The following operators are not implemented: 
['aten::remainder', 'aten::stack', 'aten::arange', 'aten::grid_sampler', 
'aten::affine_grid_generator', 'aten::copy_']
   
   The model is from https://github.com/nicehuster/cpm-facial-landmarks 
   
   
   Thanks
   







[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6168: Gather operation with indices as tensor expr in TFLite frontend

2020-08-10 Thread GitBox


siju-samuel commented on a change in pull request #6168:
URL: https://github.com/apache/incubator-tvm/pull/6168#discussion_r468302077



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1321,14 +1321,15 @@ def convert_gather(self, op):
 input_tensors = self.get_input_tensors(op)
 assert len(input_tensors) == 2, "input tensors length should be 2"
 
-data = self.get_expr(input_tensors[0].tensor_idx)
-
+if self.has_expr(input_tensors[0].tensor_idx):
+    data = self.get_expr(input_tensors[0].tensor_idx)
+else:
+    data = self.exp_tab.new_const(self.get_tensor_value(input_tensors[0]),
+                                  dtype=self.get_tensor_type_str(input_tensors[0].tensor.Type()))

Review comment:
   I suggest using the `get_tensor_expr` helper for this code.
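   For reference, a `get_tensor_expr`-style helper could look like the sketch below. It is written as a free function over a hypothetical converter object so it can stand alone; the actual method in the TFLite frontend is bound to the converter class and may differ in detail.

```python
def get_tensor_expr(converter, tensor):
    """Return the recorded relay expr for `tensor` if one exists; otherwise
    wrap the tensor's constant value as a new constant expr.

    `converter` is assumed to expose has_expr/get_expr/get_tensor_value/
    get_tensor_type_str and an `exp_tab` with new_const, mirroring the
    TFLite frontend converter (hypothetical sketch, not the real API).
    """
    if converter.has_expr(tensor.tensor_idx):
        # The tensor is an intermediate value produced by an earlier op.
        return converter.get_expr(tensor.tensor_idx)
    # The tensor is a constant: materialize it once in the expr table.
    return converter.exp_tab.new_const(
        converter.get_tensor_value(tensor),
        dtype=converter.get_tensor_type_str(tensor.tensor.Type()))
```

   Centralizing the if/else here means every op converter (gather, concat, etc.) gets the constant-vs-expr handling for free instead of repeating it inline.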









[GitHub] [incubator-tvm] tqchen closed issue #6144: [DOCS][REFACTOR] Update Pass Infra Docs to Reflect the latest State

2020-08-10 Thread GitBox


tqchen closed issue #6144:
URL: https://github.com/apache/incubator-tvm/issues/6144


   







[GitHub] [incubator-tvm] tqchen commented on pull request #6182: [Topi,x86] Split MKL from BLAS.

2020-08-10 Thread GitBox


tqchen commented on pull request #6182:
URL: https://github.com/apache/incubator-tvm/pull/6182#issuecomment-671684265


   Thanks @tkonolige @icemelon9 !







[incubator-tvm] branch master updated (ed04cdd -> ee33056)

2020-08-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ed04cdd  fix cuda half math function is undefined: hpow, htanh (#6225)
 add ee33056  [Topi,x86] Split MKL from BLAS. (#6182)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |  2 +-
 cmake/config.cmake | 16 ++--
 cmake/modules/contrib/BLAS.cmake   | 56 +++--
 python/tvm/contrib/cblas.py| 33 
 python/tvm/contrib/{cblas.py => mkl.py}| 10 +--
 python/tvm/contrib/{rocblas.py => mkldnn.py}   | 27 ---
 python/tvm/relay/backend/compile_engine.py | 27 +--
 python/tvm/relay/op/strategy/x86.py| 30 ++-
 python/tvm/topi/x86/dense.py   | 49 ++--
 src/runtime/contrib/cblas/cblas.cc | 83 +---
 src/runtime/contrib/cblas/{cblas.cc => mkl.cc} | 91 +++---
 .../{nnpack/nnpack_utils.h => cblas/mkldnn.cc} | 33 +---
 tests/python/contrib/test_cblas.py | 79 +--
 13 files changed, 260 insertions(+), 276 deletions(-)
 copy python/tvm/contrib/{cblas.py => mkl.py} (91%)
 copy python/tvm/contrib/{rocblas.py => mkldnn.py} (73%)
 copy src/runtime/contrib/cblas/{cblas.cc => mkl.cc} (76%)
 copy src/runtime/contrib/{nnpack/nnpack_utils.h => cblas/mkldnn.cc} (56%)



[GitHub] [incubator-tvm] tqchen merged pull request #6182: [Topi,x86] Split MKL from BLAS.

2020-08-10 Thread GitBox


tqchen merged pull request #6182:
URL: https://github.com/apache/incubator-tvm/pull/6182


   







[GitHub] [incubator-tvm] tom-gall opened a new issue #6246: new --runtime=c fails for uTVM

2020-08-10 Thread GitBox


tom-gall opened a new issue #6246:
URL: https://github.com/apache/incubator-tvm/issues/6246


   @areusch 
   
   The failure looks like 
 graph, c_mod, params = relay.build(mod, target=target, params=params)
   Traceback (most recent call last):
  File "./working-micro-st-tflite.py", line 48, in <module>
   micro_mod = micro.create_micro_mod(c_mod, dev_config)
 File "/home/tgall/tvm/tvm/python/tvm/micro/base.py", line 213, in 
create_micro_mod
   micro_mod = tvm.runtime.load_module(lib_obj_path)
 File "/home/tgall/tvm/tvm/python/tvm/runtime/module.py", line 407, in 
load_module
   return _ffi_api.ModuleLoadFromFile(path, fmt)
 File "/home/tgall/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 
225, in __call__
   raise get_last_ffi_error()
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 [bt] (6) /home/tgall/tvm/tvm/build/libtvm.so(TVMFuncCall+0x63) 
[0x7f0b6f5594e3]
 [bt] (5) /home/tgall/tvm/tvm/build/libtvm.so(std::_Function_handler, std::allocator 
> const&, std::__cxx11::basic_string, 
std::allocator > const&)>::AssignTypedLambda, 
std::allocator > const&, std::__cxx11::basic_string, std::allocator > const&)>(tvm::runtime::Module 
(*)(std::__cxx11::basic_string, 
std::allocator > const&, std::__cxx11::basic_string, std::allocator > 
const&))::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, 
tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xa3) [0x7f0b6f57a9e3]
 [bt] (4) 
/home/tgall/tvm/tvm/build/libtvm.so(tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string, std::allocator > const&, 
std::__cxx11::basic_string, std::allocator > 
const&)+0x1d8) [0x7f0b6f575c38]
 [bt] (3) /home/tgall/tvm/tvm/build/libtvm.so(+0x18a270a) [0x7f0b6f5f070a]
 [bt] (2) 
/home/tgall/tvm/tvm/build/libtvm.so(tvm::runtime::MicroSession::LoadBinary(std::__cxx11::basic_string, std::allocator > const&, bool)+0x14f) 
[0x7f0b6f5f5e9f]
 [bt] (1) 
/home/tgall/tvm/tvm/build/libtvm.so(tvm::runtime::MicroSession::AllocateInSection(tvm::runtime::SectionKind,
 unsigned long)+0x1a5) [0x7f0b6f5f5795]
 [bt] (0) /home/tgall/tvm/tvm/build/libtvm.so(+0x18a4898) [0x7f0b6f5f2898]
 File "/home/tgall/tvm/tvm/src/runtime/micro/micro_section_allocator.h", 
line 68
   TVMError: Check failed: size_ + size < capacity_: cannot alloc 208 bytes in 
section "rodata" (start_addr=0x20004650, used=0, capacity=100)
   
   I think we've seen this before haven't we?
   
   code 
   
   import os
   import numpy as np
   import tvm
   import tvm.micro as micro
   from tvm.contrib import graph_runtime, util
   
   from tvm import relay
   from tvm.contrib.download import download_testdata
   
   target = "c --system-lib  --runtime=c"
   model_dir ="/home/tgall/tvm/utvm-exp/"
   tflite_model_file = os.path.join(model_dir, "sine_model.tflite")
   tflite_model_buf = open(tflite_model_file, "rb").read()
   
   try:
   import tflite
   tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
   version = tflite_model.Version()
    print("Model Version: " + str(version))
   except AttributeError:
   import tflite.Model
   tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
   version = tflite_model.Version()
   print ("Model Version: " + str(version))
   
   input_tensor = "dense_4_input"
   input_shape = (1,)
   input_dtype = "float32"
   
   dev_config = micro.device.arm.stm32f746xx.generate_config("127.0.0.1", )
   
    mod, params = relay.frontend.from_tflite(tflite_model,
                                             shape_dict={input_tensor: input_shape},
                                             dtype_dict={input_tensor: input_dtype})
   
   with micro.Session(dev_config) as sess:
   ctx = tvm.micro_dev(0)
   
    with tvm.transform.PassContext(disabled_pass={'FuseOps'}, config={"tir.disable_vectorize": True}):
   graph, c_mod, params = relay.build(mod, target=target, params=params)
   
   micro_mod = micro.create_micro_mod(c_mod, dev_config)
   mod = graph_runtime.create(graph, micro_mod, ctx)
   
   mod.set_input(**params)
   
   #throw a simple single bogus number at the model
    mod.set_input(input_tensor, tvm.nd.array(np.array([0.5], dtype="float32")))
   
   mod.run()
   
   tvm_output = mod.get_output(0).asnumpy()
   
   print("result is: "+str(tvm_output))
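   The failing check quoted in the traceback (`Check failed: size_ + size < capacity_` in `src/runtime/micro/micro_section_allocator.h`) can be illustrated with a minimal bump allocator. This is an illustrative Python sketch of the capacity logic only, not the actual C++ class; it shows why allocating 208 bytes into a 100-byte "rodata" section aborts.

```python
class SectionAllocator:
    """Minimal bump allocator mirroring the capacity check in
    micro_section_allocator.h (illustrative sketch, not the TVM class)."""

    def __init__(self, start_addr, capacity):
        self.start_addr = start_addr
        self.capacity = capacity
        self.used = 0

    def allocate(self, size):
        # Mirrors the C++ check `size_ + size < capacity_`:
        # the allocation fails when used + size >= capacity.
        if self.used + size >= self.capacity:
            raise MemoryError(
                "cannot alloc %d bytes (used=%d, capacity=%d)"
                % (size, self.used, self.capacity))
        addr = self.start_addr + self.used
        self.used += size
        return addr
```

   With the values from the error message (start_addr=0x20004650, used=0, capacity=100), a 208-byte request fails immediately, so the usual fix is to enlarge the section's capacity in the device memory configuration.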







[incubator-tvm] branch master updated (b29f79e -> ed04cdd)

2020-08-10 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b29f79e  [Relay]Refine tensorflow frontend 1.x & 2.x compatibility 
(#6240)
 add ed04cdd  fix cuda half math function is undefined: hpow, htanh (#6225)

No new revisions were added by this update.

Summary of changes:
 src/target/source/literal/cuda_half_t.h | 13 +
 1 file changed, 13 insertions(+)



[GitHub] [incubator-tvm] vinx13 merged pull request #6225: fix cuda half math function is undefined: hpow, htanh

2020-08-10 Thread GitBox


vinx13 merged pull request #6225:
URL: https://github.com/apache/incubator-tvm/pull/6225


   







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r468178808



##
File path: python/tvm/auto_scheduler/auto_schedule.py
##
@@ -88,10 +91,76 @@ class SearchPolicy(Object):
 class EmptyPolicy(SearchPolicy):
 """ This is an example empty search policy which will always generate
 the init state of ComputeDAG.
+
+Parameters
+--
+task : SearchTask
+The SearchTask for the computation declaration.
+init_search_callbacks : Optional[List[SearchCallback]]
+Callback functions called before the search process.
+"""
+def __init__(self, task, init_search_callbacks=None):
+self.__init_handle_by_constructor__(_ffi_api.EmptyPolicy, task, 
init_search_callbacks)
+
+
+@tvm._ffi.register_object("auto_scheduler.SketchSearchPolicy")
+class SketchSearchPolicy(SearchPolicy):
+"""  The search policy that searches in a hierarchical search space 
defined by sketches.
+The policy randomly samples programs from the space defined by sketches
+and use evolutionary search to fine-tune them.
+
+Parameters
+--
+task : SearchTask
+The SearchTask for the computation declaration.
+schedule_cost_model : CostModel = RandomModel()
+The cost model to estimate the complete schedules.
+params : Optional[Dict[str, value]]

Review comment:
   s/value/Any/

##
File path: python/tvm/auto_scheduler/auto_schedule.py
##
@@ -175,17 +236,6 @@ def auto_schedule(task, search_policy='default', 
tuning_options=None):
 raise ValueError("Invalid task: " + task +
  " . `auto_scheduler.auto_schedule` expects a 
SearchTask.")
 
-if isinstance(search_policy, str):
-if search_policy == 'default':
-# TODO(jcf94): This is an example policy for minimum system, will 
be upgrated to
-# formal search policy later.
-search_policy = EmptyPolicy()
-else:
-raise ValueError("Invalid search policy: " + search_policy)
-elif not isinstance(search_policy, SearchPolicy):
-raise ValueError("Invalid search policy: " + search_policy +
- " . `auto_scheduler.auto_schedule` expects a 
SearchPolicy or a string.")
-
-sch, tensors = _ffi_api.AutoSchedule(task, search_policy,
- tuning_options if tuning_options else 
TuningOptions())
+# TODO(jcf94): Remove EmptyPolicy after finish the porting of 
SketchSearchPolicy

Review comment:
   Maybe we can rename it to `NoPolicy` or something like that, so that it outputs one record with no transform steps and can serve as a baseline.

##
File path: python/tvm/auto_scheduler/auto_schedule.py
##
@@ -175,17 +236,6 @@ def auto_schedule(task, search_policy='default', 
tuning_options=None):
 raise ValueError("Invalid task: " + task +
  " . `auto_scheduler.auto_schedule` expects a 
SearchTask.")
 
-if isinstance(search_policy, str):
-if search_policy == 'default':
-# TODO(jcf94): This is an example policy for minimum system, will 
be upgrated to
-# formal search policy later.
-search_policy = EmptyPolicy()
-else:
-raise ValueError("Invalid search policy: " + search_policy)
-elif not isinstance(search_policy, SearchPolicy):
-raise ValueError("Invalid search policy: " + search_policy +
- " . `auto_scheduler.auto_schedule` expects a 
SearchPolicy or a string.")
-
-sch, tensors = _ffi_api.AutoSchedule(task, search_policy,
- tuning_options if tuning_options else 
TuningOptions())
+# TODO(jcf94): Remove EmptyPolicy after finish the porting of 
SketchSearchPolicy
+sch, tensors = _ffi_api.AutoSchedule(search_policy or EmptyPolicy(task), 
tuning_options)

Review comment:
   Should we make the sketch search policy the default one? Otherwise I can imagine lots of people asking in the discuss forum why the auto scheduler doesn't do any search...

##
File path: python/tvm/auto_scheduler/auto_schedule.py
##
@@ -88,10 +91,76 @@ class SearchPolicy(Object):
 class EmptyPolicy(SearchPolicy):
 """ This is an example empty search policy which will always generate
 the init state of ComputeDAG.
+
+Parameters
+--
+task : SearchTask
+The SearchTask for the computation declaration.
+init_search_callbacks : Optional[List[SearchCallback]]
+Callback functions called before the search process.
+"""
+def __init__(self, task, init_search_callbacks=None):
+self.__init_handle_by_constructor__(_ffi_api.EmptyPolicy, task, 
init_search_callbacks)
+
+
+@tvm._ffi.register_object("auto_scheduler.SketchSearchPolicy")
+class SketchSearchPolicy(SearchPolicy):
+"""  The search policy that searches in a hierarchical search space defined by sketches.

[GitHub] [incubator-tvm] trevor-m opened a new pull request #6245: [Mxnet] Support _contrib_SyncBatchNorm

2020-08-10 Thread GitBox


trevor-m opened a new pull request #6245:
URL: https://github.com/apache/incubator-tvm/pull/6245


   This op can be mapped directly to batch norm.







[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


merrymercy commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r468171563



##
File path: src/auto_scheduler/compute_dag.cc
##
@@ -342,11 +343,16 @@ AccessAnalyzer::AccessAnalyzer(const Array& 
tensors) {
 has_expensive_op |= HasExpensiveOp(expr);
   }
   if (has_expensive_op || has_branch[op]) {
-is_strict_inlineable = false;
+is_strictly_inlineable = false;
+  }
+
+  // constant tensor is strict-inlineable
+  if (node->read_from[op].empty()) {
+is_strictly_inlineable = true;
   }

Review comment:
   @FrozenGene is correct. The later rules have higher priority and 
overwrite previous ones.
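The overwrite semantics described here can be illustrated with a small standalone sketch (a hypothetical simplification for illustration, not TVM's actual analysis code): each later rule unconditionally overwrites the flag, so rule order encodes priority.

```python
def is_strictly_inlineable(has_expensive_op, has_branch, reads_from_nothing):
    # Rule 1: default to inlineable.
    result = True
    # Rule 2: expensive ops or branches disable strict inlining.
    if has_expensive_op or has_branch:
        result = False
    # Rule 3 (later, so higher priority): a constant tensor, which reads
    # from no other op, is always strictly inlineable.
    if reads_from_nothing:
        result = True
    return result

# A constant tensor stays inlineable even if flagged expensive.
print(is_strictly_inlineable(True, False, True))   # prints True
print(is_strictly_inlineable(True, False, False))  # prints False
```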

##
File path: src/auto_scheduler/search_policy/sketch_search_policy.cc
##
@@ -0,0 +1,985 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/search_policy/sketch_search_policy.h
+ * \brief The search policy that searches in a hierarchical search space 
defined by sketches.
+ * The policy randomly samples programs from the space defined by sketches
+ * and use evolutionary search to fine-tune them.
+ */
+
+#include "sketch_search_policy.h"
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+TVM_REGISTER_NODE_TYPE(SketchSearchPolicyNode);
+
+/** Sketch Generation Rule **/
+
+// The rule that simply skips the current stage(return a unchanged state and 
try the next stage).

Review comment:
   ```suggestion
   // The rule that simply skips the current stage. It returns an unchanged state and moves to the next stage.
   ```

##
File path: python/tvm/auto_scheduler/auto_schedule.py
##
@@ -175,17 +236,6 @@ def auto_schedule(task, search_policy='default', 
tuning_options=None):
 raise ValueError("Invalid task: " + task +
  " . `auto_scheduler.auto_schedule` expects a 
SearchTask.")
 
-if isinstance(search_policy, str):
-if search_policy == 'default':
-# TODO(jcf94): This is an example policy for minimum system, will 
be upgrated to
-# formal search policy later.
-search_policy = EmptyPolicy()
-else:
-raise ValueError("Invalid search policy: " + search_policy)
-elif not isinstance(search_policy, SearchPolicy):
-raise ValueError("Invalid search policy: " + search_policy +
- " . `auto_scheduler.auto_schedule` expects a 
SearchPolicy or a string.")
-
-sch, tensors = _ffi_api.AutoSchedule(task, search_policy,
- tuning_options if tuning_options else 
TuningOptions())
+# TODO(jcf94): Remove EmptyPolicy after finish the porting of 
SketchSearchPolicy

Review comment:
   We can keep it as an example

##
File path: include/tvm/auto_scheduler/search_policy.h
##
@@ -89,46 +100,54 @@ class SearchCallback : public ObjectRef {
   TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS(SearchCallback, ObjectRef, 
SearchCallbackNode);
 };
 
+/*! \brief Attribute keys of ops used for SearchPolicy. */
+struct SearchPolicyKey {
+  /*! \brief Always apply unroll to the inner most iterator of the specificed 
iterators. */
+  static constexpr const char* always_unroll_inner = 
"auto_scheduler_always_unroll_inner";
+  /*! \brief The specified iterators will not be placed as the inner most 
iterator. */
+  static constexpr const char* no_split_at_inner = 
"auto_scheduler_no_split_at_inner";
+  /*! \brief The specified iterators will not be placed as the outter most 
iterator. */
+  static constexpr const char* no_split_at_outer = 
"auto_scheduler_no_split_at_outer";

Review comment:
   "auto_scheduler_no_split_at_inner" is useful for some sparse operators. 
We can keep it.

##
File path: include/tvm/auto_scheduler/compute_dag.h
##
@@ -69,7 +69,7 @@ class AccessAnalyzerNode : public Object {
   /*! \brief Store whether the operation is strictly-inlineable
* (e.g., injective, broadcast and elementwise without reduction, branch or 
expenive operations)
*/

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468122151



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,90 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def add_help_parser(subparsers):
+""" Include parser for 'help' subcommand """
+
+parser = subparsers.add_parser("help", help="show help page")
+# 'func' points to a function that will receive all the arguments
+# provided by the user. This is the only required attribute
+parser.set_defaults(func=drive_help)
+
+
+def drive_help(args):
+""" Show help page """
+
+print("This is a placeholder command. Args = {0}".format(args))
+
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="TVM compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+subparsers = parser.add_subparsers(title="commands")
+
+add_help_parser(subparsers)

Review comment:
   Based on this I imagine we will have the following in the future:
   ```python
   add_help_parser(subparsers)
   add_compile_parser(subparsers)
   add_tune_parser(subparsers)
   # ...
   ```
   
   And maybe we will have something like `tvmc/compile.py`, and have `from 
compile import add_compile_parser` in this file.
   
   IIUC, we can set up a registration mechanism as follows, although I'm not sure if it's overkill:
   
   In `main.py`:
   
   ```python
   REGISTERED_PARSER = []
   def register_parser(make_subparser):
   REGISTERED_PARSER.append(make_subparser)
   return make_subparser
   
   def _main(argv):
   subparser = parser.add_subparsers()
   for make_subparser in REGISTERED_PARSER:
 make_subparser(subparser)
   ```
   
   In `compile.py`:
   
   ```python
   from main import register_parser
   
   @register_parser
   def _compile_parser(main_subparser):
   subparser = main_subparser.add_parser('compile', help='...')
   # ...
   ```
   
   In this way, we don't need to touch `main.py` anymore when we add a new 
subparser.
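
   A minimal, self-contained sketch of the registration idea above (the `compile` subcommand and its behavior are placeholders for illustration, not the real tvmc interface):

   ```python
   import argparse

   REGISTERED_PARSER = []

   def register_parser(make_subparser):
       # Collect subparser factories; main() invokes each one later,
       # so new subcommands never need to touch this file.
       REGISTERED_PARSER.append(make_subparser)
       return make_subparser

   @register_parser
   def _compile_parser(main_subparser):
       parser = main_subparser.add_parser("compile", help="compile a model")
       parser.add_argument("model")
       parser.set_defaults(func=lambda args: "compiled " + args.model)

   def main(argv):
       parser = argparse.ArgumentParser(prog="tvmc")
       subparsers = parser.add_subparsers(title="commands")
       for make_subparser in REGISTERED_PARSER:
           make_subparser(subparsers)
       args = parser.parse_args(argv)
       return args.func(args)

   print(main(["compile", "resnet.onnx"]))  # prints "compiled resnet.onnx"
   ```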









[GitHub] [incubator-tvm] yzhliu commented on pull request #6078: [Autodiff] Optimize and eliminate the Jacobian tensor for te.autodiff

2020-08-10 Thread GitBox


yzhliu commented on pull request #6078:
URL: https://github.com/apache/incubator-tvm/pull/6078#issuecomment-671536959


   @sergei-grechanik I agree. Perhaps we need a performance integration test run periodically, as mentioned in https://discuss.tvm.ai/t/efforts-on-benchmarking-for-tvm/
   
   @tqchen could you also take a look?







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468109265



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="tvm compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+# TODO: subparsers will come in follow-up patches (@leandron)
+_ = parser.add_subparsers(title="commands")
+
+args = parser.parse_args(argv)
+if args.verbose > 4:
+args.verbose = 4
+
+logging.getLogger().setLevel(40 - args.verbose * 10)
+
+if args.version:
+version = pkg_resources.get_distribution("tvm").version
+sys.stdout.write("%s\n" % version)
+return 0
+
+if not hasattr(args, "func"):
+parser.error("missing subcommand")

Review comment:
   make sense.









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


leandron commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468104675



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="tvm compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+# TODO: subparsers will come in follow-up patches (@leandron)
+_ = parser.add_subparsers(title="commands")
+
+args = parser.parse_args(argv)
+if args.verbose > 4:
+args.verbose = 4
+
+logging.getLogger().setLevel(40 - args.verbose * 10)
+
+if args.version:
+version = pkg_resources.get_distribution("tvm").version
+sys.stdout.write("%s\n" % version)
+return 0
+
+if not hasattr(args, "func"):
+parser.error("missing subcommand")

Review comment:
   Reading that again, it seems most cases will be covered by `argparse` 
and that should really be an assert, because lacking a `func` only is expected 
to happen during development.









[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-10 Thread GitBox


d-smirnov commented on a change in pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228#discussion_r468032556



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -2456,25 +2456,27 @@ def test_forward_sparse_to_dense():
 # Fully Connected
 # ---
 
-def _test_fully_connected(tensor_in_sizes, filter_in_sizes, bias_in_size=None):
+def _test_fully_connected(tensor_in_sizes, wrap_input, filter_in_sizes, 
bias_in_size=None):
 """ One iteration of fully connected """
 
-total_size_1 = 1
-total_size_2 = 1
-for s in tensor_in_sizes:
-total_size_1 *= s
-for s in filter_in_sizes:
-total_size_2 *= s
-# Initializes the input tensor with array containing incrementing
-# numbers from 1.
-data_array = [f * 1.0 for f in range(1, total_size_1 + 1)]
-filter_array = [f * 1.0 for f in range(1, total_size_2 + 1)]
+total_size_1 = np.prod( tensor_in_sizes )
+total_size_2 = np.prod( filter_in_sizes )
+
 assert int(total_size_1 / tensor_in_sizes[0]) == filter_in_sizes[0], \
 "input size and filter size are mismatched"
 
+# Initializes the input tensor with array containing incrementing
+# numbers from 1.
+data_array = np.arange(1, total_size_1 + 1, dtype=np.float32)
+filter_array = np.arange(1, total_size_2 + 1, dtype=np.float32)
+
 with tf.Graph().as_default():
-in_data = array_ops.placeholder(shape=tensor_in_sizes, dtype='float32')
-in_filter = constant_op.constant(filter_array, shape=filter_in_sizes, 
dtype='float32')
+in_name="input"

Review comment:
   Some networks use a literal value instead of a tensor placeholder for the first argument. The wrap flag switches the wrapping of the actual value of the first operand.
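
   The diff above also swaps the manual size/initialization loops for `np.prod` and `np.arange`; a quick sketch (with a made-up shape) confirming the two forms produce the same values:

   ```python
   import numpy as np

   tensor_in_sizes = [1, 150]  # hypothetical input shape

   # Original loop-based form.
   total_size = 1
   for s in tensor_in_sizes:
       total_size *= s
   data_loop = [f * 1.0 for f in range(1, total_size + 1)]

   # NumPy form from the diff.
   total_size_np = np.prod(tensor_in_sizes)
   data_np = np.arange(1, total_size_np + 1, dtype=np.float32)

   assert total_size == total_size_np
   assert np.allclose(data_loop, data_np)
   print("loop and numpy initializations match")
   ```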









[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #6024: [Relay][TF] Make StridedSlice support dynamic input and constant attrs

2020-08-10 Thread GitBox


kevinthesun commented on a change in pull request #6024:
URL: https://github.com/apache/incubator-tvm/pull/6024#discussion_r468093208



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1458,6 +1458,15 @@ def _impl(inputs, attr, params, mod):
 
 return ret
 
+def _dyn():
+for d in data_shape:
+if not isinstance(d, int):
+return True
+return False
+
+if _dyn():

Review comment:
   Why do we need to handle this specially in the tf frontend and skip the mask transformation?









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


leandron commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468086427



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="tvm compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+# TODO: subparsers will come in follow-up patches (@leandron)
+_ = parser.add_subparsers(title="commands")
+
+args = parser.parse_args(argv)
+if args.verbose > 4:
+args.verbose = 4
+
+logging.getLogger().setLevel(40 - args.verbose * 10)
+
+if args.version:
+version = pkg_resources.get_distribution("tvm").version
+sys.stdout.write("%s\n" % version)
+return 0
+
+if not hasattr(args, "func"):
+parser.error("missing subcommand")

Review comment:
   Sure, I'll add a small placeholder example, so that in the specific (real) subparsers we can review the whole functionality.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#discussion_r468076880



##
File path: python/tvm/driver/tvmc/main.py
##
@@ -0,0 +1,74 @@
+#!/usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+TVMC - TVM driver command-line interface
+"""
+import argparse
+import logging
+import sys
+
+import pkg_resources
+
+from tvm.driver.tvmc.common import TVMCException
+
+
+def _main(argv):
+""" TVM command line interface. """
+
+parser = argparse.ArgumentParser(
+prog='tvmc',
+formatter_class=argparse.RawDescriptionHelpFormatter,
+description="tvm compiler driver",
+epilog=__doc__,
+)
+parser.add_argument(
+"-v", "--verbose", action="count", default=0, help="increase verbosity"
+)
+parser.add_argument(
+"--version", action="store_true", help="print the version and exit"
+)
+
+# TODO: subparsers will come in follow-up patches (@leandron)
+_ = parser.add_subparsers(title="commands")
+
+args = parser.parse_args(argv)
+if args.verbose > 4:
+args.verbose = 4
+
+logging.getLogger().setLevel(40 - args.verbose * 10)
+
+if args.version:
+version = pkg_resources.get_distribution("tvm").version
+sys.stdout.write("%s\n" % version)
+return 0
+
+if not hasattr(args, "func"):
+parser.error("missing subcommand")

Review comment:
   Since you don't have any subparser in this PR, I guess you will require each subparser to define an argument `func` and set its default value to the entry function. IIUC, this is a purely internal error and the error message has to be improved.
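
   The convention under discussion can be sketched as follows (the `version` subcommand is a placeholder for illustration; the point is that `set_defaults(func=...)` is what makes `hasattr(args, "func")` hold):

   ```python
   import argparse

   def drive_version(args):
       return "tvmc 0.0.1"

   parser = argparse.ArgumentParser(prog="tvmc")
   subparsers = parser.add_subparsers(title="commands")
   version_parser = subparsers.add_parser("version", help="print the version")
   # Each subcommand registers its entry function as the "func" default.
   version_parser.set_defaults(func=drive_version)

   args = parser.parse_args(["version"])
   # A subcommand that forgets set_defaults(func=...) leaves args without
   # "func"; checking for it turns an internal mistake into a clear error.
   if not hasattr(args, "func"):
       raise SystemExit("tvmc: error: no entry function registered for this subcommand")
   print(args.func(args))  # prints "tvmc 0.0.1"
   ```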









[GitHub] [incubator-tvm] csullivan commented on pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-10 Thread GitBox


csullivan commented on pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#issuecomment-671475808


   @FrozenGene Thanks for the good suggestions. I updated the documentation and build system for embedded Linux to use the same flow as previously described in the docs.







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468047987



##
File path: tests/python/contrib/test_ethosn/infrastructure.py
##
@@ -0,0 +1,225 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Expose Ethos test functions to the Python front end"""
+
+from __future__ import absolute_import, print_function
+import tvm
+from tvm import relay
+from tvm.contrib import util, graph_runtime, download
+from tvm.relay.testing import run_opt_pass
+from enum import Enum
+from hashlib import md5
+from itertools import zip_longest, combinations
+import numpy as np
+from PIL import Image
+import os
+
+from . import _infrastructure
+from tvm.relay.op.contrib import get_pattern_table
+
+
+class Available(Enum):
+UNAVAILABLE = 0
+SW_ONLY = 1
+SW_AND_HW = 2
+
+
+def ethosn_available():
+"""Return whether Ethos-N software and hardware support is available"""
+if not tvm.get_global_func("relay.ethos-n.query", True):
+print("skip because Ethos-N module is not available")
+return Available.UNAVAILABLE
+else:
+hw = tvm.get_global_func("relay.ethos-n.query")()
+return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+def get_real_image(im_height, im_width):
+repo_base = 
'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+img_name = 'elephant-299.jpg'
+image_url = os.path.join(repo_base, img_name)
+img_path = download.download_testdata(image_url, img_name, module='data')
+image = Image.open(img_path).resize((im_height, im_width))
+x = np.array(image).astype('uint8')
+data = np.reshape(x, (1, im_height, im_width, 3))
+return data
+
+
+def assert_lib_hash(lib, golden):
+temp = util.tempdir()
+path = temp.relpath("lib.cmm")
+lib.imported_modules[1].save(path)
+lib_hash = md5(open(path, 'rb').read()).hexdigest()
+assert lib_hash == golden, "Expected hash: {} Got hash: {}".format(golden, 
lib_hash)
+
+
+def make_module(func, params):
+func = relay.Function(relay.analysis.free_vars(func), func)
+if len(params):
+relay.build_module.bind_params_by_name(func, params)
+return tvm.IRModule.from_expr(func)
+
+
+def make_ethosn_composite(ethosn_expr, name):
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function([relay.Var("a")], ethosn_expr)
+func = func.with_attr("Composite", name)
+call = relay.Call(func, vars)
+return call
+
+
+def make_ethosn_partition(ethosn_expr):
+# Create an Ethos-N global function
+mod = tvm.IRModule({})
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function(vars, ethosn_expr)
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", "ethos-n")
+func = func.with_attr("global_symbol", "ethos-n_0")
+g1 = relay.GlobalVar("ethos-n_0")
+mod[g1] = func
+
+# These are the vars to call the Ethos-N partition with
+more_vars = relay.analysis.free_vars(ethosn_expr)
+# Call the Ethos-N partition in main
+call_fn1 = g1(*more_vars)
+mod["main"] = relay.Function(more_vars, call_fn1)
+return mod
+
+
+def get_cpu_op_count(mod):
+class Counter(tvm.relay.ExprVisitor):
+def __init__(self):
+super().__init__()
+self.count = 0
+
+def visit_call(self, call):
+if isinstance(call.op, tvm.ir.Op):
+self.count += 1
+
+super().visit_call(call)
+
+c = Counter()
+c.visit(mod["main"])
+return c.count
+
+
+def build(mod, params, npu=True, cpu_ops=0, npu_partitions=1):
+relay.backend.compile_engine.get().clear()
+with tvm.transform.PassContext(opt_level=3, config={
+"relay.ext.ethos-n.options": {"variant": 0}
+}):
+with tvm.target.create("llvm -mcpu=core-avx2"):

Review comment:
   Oops...call @zhiics 
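   As an aside for readers, the `assert_lib_hash` helper in the diff above checks a serialized module against a golden md5 digest. A minimal stdlib sketch of the same idea, operating on raw bytes instead of a saved TVM module (the function name here is made up for illustration):

```python
import hashlib

def assert_bytes_hash(data: bytes, golden: str) -> None:
    # Hash the raw bytes and compare against the expected ("golden") digest.
    digest = hashlib.md5(data).hexdigest()
    assert digest == golden, "Expected hash: {} Got hash: {}".format(golden, digest)

# md5 of b"hello" is a well-known digest
assert_bytes_hash(b"hello", "5d41402abc4b2a76b9719d911017c592")
```

In the real test the bytes come from `lib.imported_modules[1].save(path)`; only the hashing pattern is shown here.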





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
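The `get_cpu_op_count` visitor in the diff above counts operator calls by walking the expression tree. The same visitor-with-counter pattern can be sketched with the stdlib `ast` module (the sample source string is made up):

```python
import ast

class CallCounter(ast.NodeVisitor):
    """Count function-call nodes, analogous to counting relay ops."""
    def __init__(self):
        self.count = 0

    def visit_Call(self, node):
        self.count += 1
        self.generic_visit(node)  # keep walking nested calls

tree = ast.parse("f(g(1), h(2, k(3)))")
counter = CallCounter()
counter.visit(tree)
print(counter.count)  # 4 calls: f, g, h, k
```

The relay version overrides `visit_call` on `tvm.relay.ExprVisitor` instead, but the shape of the traversal is the same.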

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468046750



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+struct ConcatenateParams {
+  sl::QuantizationInfo qInfo;
+  sl::ConcatenationInfo concat_info = sl::ConcatenationInfo(1, qInfo);
+  std::vector input_infos;
+};
+
+struct SplitParams {
+  sl::SplitInfo split_info = sl::SplitInfo(0, {});
+  sl::TensorInfo input_info;
+};
+
+class ErrStrm {
+ public:
+  template 
+  ErrStrm& operator<<(const T& val) {  // NOLINT(*)
+stream_ << val;
+return *this;
+  }
+
+ private:
+  std::stringstream stream_;
+  friend class EthosnError;
+};
+
+class EthosnError {
+ public:
+  EthosnError() {}
+  explicit EthosnError(const Array& msgs) : msgs(msgs) {}
+  explicit EthosnError(const String& msg) { msgs.push_back(msg); }
+  explicit EthosnError(const ErrStrm& err) : EthosnError(err.stream_.str()) {}
+
+  explicit operator bool() const { return !msgs.empty(); }
+
+  EthosnError& operator+=(const EthosnError& other) {
+msgs.insert(msgs.end(), other.msgs.begin(), other.msgs.end());
+return *this;
+  }
+
+  Array msgs;
+};
+
+class EthosnAPI {
+ public:
+  static std::unique_ptr Compile(std::shared_ptr network,
+  const sl::CompilationOptions& options);
+
+  static sl::CompilationOptions CreateOptions();
+
+  static bool IsEthosFunc(const Call& call, const std::string& op_name);
+  static bool IsEthosOp(const Call& call, const std::string& op_name);
+
+  static EthosnError Concatenate(const Expr& expr, ConcatenateParams* params);
+  static EthosnError Split(const Expr& expr, SplitParams* params);
+
+ private:
+  static EthosnError Tvm2Npu(const Array& shape, sl::TensorShape* npu_shape);
+  static EthosnError Tvm2Npu(const tvm::DataType& dtype, sl::DataType* data_type);

Review comment:
   This API actually takes the shape and type of a TVM tensor and allocates 
a support library tensor. To me, this is more like creating a support library 
tensor with provided spec (shape and type). "TVM data structures to Support 
Library data structures" is more like creating a Support Library tensor and 
filling in data from the given TVM tensor.
   
   In terms of the name you just proposed, I personally think `Tvm2Npu` is better than `Tvm2SL`, because "SL" to "Support Library" is not as straightforward as "NPU" to "Ethos-N".









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


comaniac commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r468041057



##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/codegen.cc
+ * \brief The Relay -> Ethos-N command stream compiler.
+ */
+#include 
+#include 
+
+#include "codegen_ethosn.h"
+#include "ethosn_api.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+sl::TensorInfo GetTensorInfo(std::map> tensor_table,
+ const Call& call) {
+  if (tensor_table.find(call) != tensor_table.end()) return tensor_table[call][0];
+
+  return sl::TensorInfo();
+}
+
+void InferTensorsVisitor::InferCall(const CallNode* cn) {

Review comment:
   Makes sense to me, given that more ops will be added to this function in the future, although the flow `Call -> HandleCall` looks a bit weird. Let's keep the current implementation for both functions for now and see if there is a better way once most ops are added.









[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-10 Thread GitBox


d-smirnov commented on a change in pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228#discussion_r468032556



##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -2456,25 +2456,27 @@ def test_forward_sparse_to_dense():
 # Fully Connected
 # ---
 
-def _test_fully_connected(tensor_in_sizes, filter_in_sizes, bias_in_size=None):
+def _test_fully_connected(tensor_in_sizes, wrap_input, filter_in_sizes, bias_in_size=None):
 """ One iteration of fully connected """
 
-total_size_1 = 1
-total_size_2 = 1
-for s in tensor_in_sizes:
-total_size_1 *= s
-for s in filter_in_sizes:
-total_size_2 *= s
-# Initializes the input tensor with array containing incrementing
-# numbers from 1.
-data_array = [f * 1.0 for f in range(1, total_size_1 + 1)]
-filter_array = [f * 1.0 for f in range(1, total_size_2 + 1)]
+total_size_1 = np.prod( tensor_in_sizes )
+total_size_2 = np.prod( filter_in_sizes )
+
 assert int(total_size_1 / tensor_in_sizes[0]) == filter_in_sizes[0], \
 "input size and filter size are mismatched"
 
+# Initializes the input tensor with array containing incrementing
+# numbers from 1.
+data_array = np.arange(1, total_size_1 + 1, dtype=np.float32)
+filter_array = np.arange(1, total_size_2 + 1, dtype=np.float32)
+
 with tf.Graph().as_default():
-in_data = array_ops.placeholder(shape=tensor_in_sizes, dtype='float32')
-in_filter = constant_op.constant(filter_array, shape=filter_in_sizes, dtype='float32')
+in_name="input"

Review comment:
   Some networks use a literal value instead of a tensor placeholder for the first argument. The wrap flag toggles whether the actual value of the first operand is wrapped in a placeholder.









[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-10 Thread GitBox


d-smirnov commented on a change in pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228#discussion_r468024543



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1720,7 +1719,7 @@ def convert_fully_connected(self, op):
 # Dense expected Weight shape: [out_dim, n_units]
 # Dense output shape: [batch_size, out_dim]
 target_shape = tuple((-1, weight_tensor_shape[1]))
-in_expr = self.get_expr(input_tensor_idx)
+in_expr = self.get_tensor_expr(input_tensor)

Review comment:
   Yes









[GitHub] [incubator-tvm] d-smirnov commented on a change in pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-10 Thread GitBox


d-smirnov commented on a change in pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228#discussion_r468024161



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1695,10 +1695,9 @@ def convert_fully_connected(self, op):
 raise ImportError("The tflite package must be installed")
 
 input_tensors = self.get_input_tensors(op)
-assert len(input_tensors) >= 2, "input tensors length should be >= 2"
+assert len(input_tensors) in (2, 3), "input tensors length should be two or three"

Review comment:
   line 1756. Related to bias, if it is provided.









[GitHub] [incubator-tvm] gussmith23 commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-10 Thread GitBox


gussmith23 commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r468018504



##
File path: tests/python/unittest/test_custom_datatypes_change_dtype.py
##
@@ -81,163 +81,109 @@ def setup():
 # You can pick a code for your datatype arbitrarily, as long as it is
 # greater than 128 and has not already been chosen.
 
-register("posit32", 131)
-
-register_op(create_lower_func("FloatToPosit32es2"), "Cast", "llvm",
-"posit32", "float")
-register_op(create_lower_func("Posit32es2ToFloat"), "Cast", "llvm",
-"float", "posit32")
-register_op(create_lower_func("IntToPosit32es2"), "Cast", "llvm",
-"posit32", "int")
-register_op(create_lower_func("Posit32es2Add"), "Add", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Sub"), "Sub", "llvm", "posit32")
-register_op(create_lower_func("FloatToPosit32es2"), "FloatImm", "llvm",
-"posit32")
-register_op(create_lower_func("Posit32es2Mul"), "Mul", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Div"), "Div", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Max"), "Max", "llvm", "posit32")
-register_op(create_lower_func("Posit32es2Sqrt"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="sqrt")
-# TODO(gus) not sure if this will work...
-register_op(lower_ite,
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="tvm_if_then_else")
-register_op(create_lower_func("Posit32es2Exp"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="exp")
-register_op(create_lower_func("Posit32es2Log"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="log")
-register_op(create_lower_func("Posit32es2Sigmoid"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="sigmoid")
-register_op(create_lower_func("Posit32es2Tanh"),
-"Call",
-"llvm",
-"posit32",
-intrinsic_name="tanh")
-register_min_func(lambda num_bits: -1.329227995784915872903807060280344576e36, "posit32")
-
-register("posit8", 132)
-register_op(create_lower_func("FloatToPosit8es2"), "Cast", "llvm",
-"posit8", "float")
-register_op(create_lower_func("Posit8es2ToFloat"), "Cast", "llvm", "float",
-"posit8")
-register_op(create_lower_func("IntToPosit8es2"), "Cast", "llvm", "posit8",
-"int")
-register_op(create_lower_func("Posit8es2Add"), "Add", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Sub"), "Sub", "llvm", "posit8")
-register_op(create_lower_func("FloatToPosit8es2"), "FloatImm", "llvm",
-"posit8")
-register_op(create_lower_func("Posit8es2Mul"), "Mul", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Div"), "Div", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Max"), "Max", "llvm", "posit8")
-register_op(create_lower_func("Posit8es2Sqrt"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="sqrt")
-# TODO(gus) not sure if this will work...
-register_op(lower_ite,
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="tvm_if_then_else")
-register_op(create_lower_func("Posit8es2Exp"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="exp")
-register_op(create_lower_func("Posit8es2Log"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="log")
-register_op(create_lower_func("Posit8es2Sigmoid"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="sigmoid")
-register_op(create_lower_func("Posit8es2Tanh"),
-"Call",
-"llvm",
-"posit8",
-intrinsic_name="tanh")
-register_min_func(lambda num_bits: -16777216, "posit8")
-
-register("posit16", 133)
-register_op(create_lower_func("FloatToPosit16es2"), "Cast", "llvm",
-"posit16", "float")
-register_op(create_lower_func("Posit16es2ToFloat"), "Cast", "llvm",
-"float", "posit16")
-register_op(create_lower_func("IntToPosit16es2"), "Cast", "llvm",
-"posit16", "int")
-register_op(create_lower_func("Posit16es2Add"), "Add", "llvm", "posit16")
-register_op(create_lower_func("Posit16es2Sub"), "Sub", "llvm", "posit16")
-register_op(create_lower_func("FloatToPosit16es2"), "FloatImm", "llvm",
-"posit16")
-

[incubator-tvm] branch master updated (fc7a705 -> b29f79e)

2020-08-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from fc7a705  [BYOC][ACL] Improve installation tutorial (#6170)
 add b29f79e  [Relay]Refine tensorflow frontend 1.x & 2.x compatibility (#6240)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tensorflow_parser.py | 14 +-
 1 file changed, 9 insertions(+), 5 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #6240: [Relay]Refine tensorflow frontend 1.x & 2.x compatibility

2020-08-10 Thread GitBox


tqchen commented on pull request #6240:
URL: https://github.com/apache/incubator-tvm/pull/6240#issuecomment-671442393


   Thanks @xutianming !







[GitHub] [incubator-tvm] tqchen merged pull request #6240: [Relay]Refine tensorflow frontend 1.x & 2.x compatibility

2020-08-10 Thread GitBox


tqchen merged pull request #6240:
URL: https://github.com/apache/incubator-tvm/pull/6240


   







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-10 Thread GitBox


tqchen edited a comment on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-671440740


   @weberlo please rebase to resolve the conflict. @ZihengJiang please help to 
manage this PR







[GitHub] [incubator-tvm] tqchen commented on pull request #5940: Add Quantize/Dequantize Partitioning

2020-08-10 Thread GitBox


tqchen commented on pull request #5940:
URL: https://github.com/apache/incubator-tvm/pull/5940#issuecomment-671440740


   @weberlo please rebase to resolve the conflict. @ZihengJiang please help 
manage this PR







[GitHub] [incubator-tvm] tqchen commented on pull request #6170: [BYOC][ACL] Improve installation tutorial

2020-08-10 Thread GitBox


tqchen commented on pull request #6170:
URL: https://github.com/apache/incubator-tvm/pull/6170#issuecomment-671439854


   Thanks @lhutton1 @leandron @comaniac 







[GitHub] [incubator-tvm] tqchen commented on pull request #6127: quanitze operation expanded to take const argument

2020-08-10 Thread GitBox


tqchen commented on pull request #6127:
URL: https://github.com/apache/incubator-tvm/pull/6127#issuecomment-671440132


   cc @anijain2305 please follow up







[GitHub] [incubator-tvm] tqchen merged pull request #6170: [BYOC][ACL] Improve installation tutorial

2020-08-10 Thread GitBox


tqchen merged pull request #6170:
URL: https://github.com/apache/incubator-tvm/pull/6170


   







[GitHub] [incubator-tvm] tqchen commented on pull request #6162: [Parser] Parser 2.0 part 2

2020-08-10 Thread GitBox


tqchen commented on pull request #6162:
URL: https://github.com/apache/incubator-tvm/pull/6162#issuecomment-671439329


   @jroesch please follow up to address the CI error







[incubator-tvm] branch master updated (3dfbae3 -> fc7a705)

2020-08-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3dfbae3  [TOPI, Cuda] Fix conv2d_transpose output padding (#6236)
 add fc7a705  [BYOC][ACL] Improve installation tutorial (#6170)

No new revisions were added by this update.

Summary of changes:
 cmake/modules/contrib/ArmComputeLib.cmake|  2 +
 docker/install/ubuntu_install_arm_compute_lib.sh | 11 +++-
 docs/deploy/arm_compute_lib.rst  | 64 
 3 files changed, 65 insertions(+), 12 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on pull request #6190: [Ansor][AutoTVM v2.0] Phase 1: feature extraction for cost models

2020-08-10 Thread GitBox


tqchen commented on pull request #6190:
URL: https://github.com/apache/incubator-tvm/pull/6190#issuecomment-671439086


   Per discussion with @merrymercy:
   - Split the feature extraction logic into a FeatureExtractor (called back by the visitor) and the visitor itself.
   - Provide a FeatureExtractor for each group of features.
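   One way to picture the proposed split — a single visitor that walks the IR and calls back into per-group extractors — is this hedged Python sketch (all names and the dict-based "IR" are hypothetical, not TVM code):

```python
class Visitor:
    """Walks a nested structure and hands every node to registered extractors."""
    def __init__(self, extractors):
        self.extractors = extractors

    def visit(self, node):
        for ex in self.extractors:
            ex.extract(node)          # callback: one call per feature group
        for child in node.get("children", []):
            self.visit(child)

class LoopCounter:
    """One feature group: count loop nodes."""
    def __init__(self):
        self.loops = 0
    def extract(self, node):
        if node.get("kind") == "loop":
            self.loops += 1

tree = {"kind": "loop", "children": [{"kind": "load"}, {"kind": "loop", "children": []}]}
counter = LoopCounter()
Visitor([counter]).visit(tree)
print(counter.loops)  # 2
```

Adding a new feature group then means adding a new extractor class, without touching the traversal.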







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6138: Add `init` member to ReduceNode

2020-08-10 Thread GitBox


tqchen edited a comment on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671433629


   Thanks @quic-sanirudh, what you said about rfactor makes sense. We can still support rfactor by checking the factor indices and only assigning the init value if the factor index equals the initial one; however, we may not be able to express the computation as a related primitive. Given that rfactor is not usually used together with init, this might be fine.
   
   It would also be great to add a few compiled test cases.







[GitHub] [incubator-tvm] tqchen commented on pull request #6138: Add `init` member to ReduceNode

2020-08-10 Thread GitBox


tqchen commented on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671433629


   Thanks @quic-sanirudh, what you said about rfactor makes sense. We can still support rfactor by checking the factor indices and only assigning the init value if the factor index equals the initial one. It would also be great to add a few compiled test cases.







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6138: Add `init` member to ReduceNode

2020-08-10 Thread GitBox


tqchen edited a comment on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-670992782


   Thanks @quic-sanirudh for proposing the new change. My only concern with the custom initialization value is that it might break follow-up primitives; e.g., rfactor and cross-thread allreduce require the init value to be the identity element. As a result, we might want to pause a bit.
   
   The initial value can still be added by introducing an additional stage (with a small overhead).
   
   There is an early plan to introduce scheduling for TIR, which might bring the possibility of including such a custom initialization stage, after which we can support this feature.







[GitHub] [incubator-tvm] tqchen commented on pull request #6138: Add `init` member to ReduceNode

2020-08-10 Thread GitBox


tqchen commented on pull request #6138:
URL: https://github.com/apache/incubator-tvm/pull/6138#issuecomment-671431691


   TIR-level scheduling is still at an early stage, so there is no RFC yet; I will keep you updated once the RFC is out.







[GitHub] [incubator-tvm] tqchen commented on pull request #6236: [TOPI, Cuda] Fix conv2d_transpose output padding

2020-08-10 Thread GitBox


tqchen commented on pull request #6236:
URL: https://github.com/apache/incubator-tvm/pull/6236#issuecomment-671429187


   Thanks @vinx13 !







[GitHub] [incubator-tvm] tqchen merged pull request #6236: [TOPI, Cuda] Fix conv2d_transpose output padding

2020-08-10 Thread GitBox


tqchen merged pull request #6236:
URL: https://github.com/apache/incubator-tvm/pull/6236


   







[GitHub] [incubator-tvm] tqchen closed issue #6179: Fix conv2d grad for strided cases under the new conv2d_transpose def

2020-08-10 Thread GitBox


tqchen closed issue #6179:
URL: https://github.com/apache/incubator-tvm/issues/6179


   







[incubator-tvm] branch master updated: [TOPI, Cuda] Fix conv2d_transpose output padding (#6236)

2020-08-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 3dfbae3  [TOPI, Cuda] Fix conv2d_transpose output padding (#6236)
3dfbae3 is described below

commit 3dfbae31b76e9c83f7387ee1bdbca3b26391f803
Author: Wuwei Lin 
AuthorDate: Mon Aug 10 11:37:09 2020 -0400

[TOPI, Cuda] Fix conv2d_transpose output padding (#6236)
---
 python/tvm/topi/cuda/conv2d_transpose_nchw.py | 4 ++--
 tests/python/relay/test_op_grad_level2.py | 3 +--
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/python/tvm/topi/cuda/conv2d_transpose_nchw.py b/python/tvm/topi/cuda/conv2d_transpose_nchw.py
index d0a683e..7e41209 100644
--- a/python/tvm/topi/cuda/conv2d_transpose_nchw.py
+++ b/python/tvm/topi/cuda/conv2d_transpose_nchw.py
@@ -65,13 +65,13 @@ def conv2d_transpose_nchw(cfg, data, kernel, stride, padding, out_dtype,
 out_width = (inp_width - 1) * stride_width + \
 kernel_width - pad_left - pad_right + outpad_width
 pad_left = kernel_width - 1 - pad_left
-pad_right = kernel_width - 1 - pad_right
+pad_right = kernel_width - 1 - pad_right + outpad_width
 dilated_width = stride_width * (inp_width - 1) + 1
 
 out_height = (inp_height - 1) * stride_height + \
 kernel_height - pad_top - pad_bottom + outpad_height
 pad_top = kernel_height - 1 - pad_top
-pad_bottom = kernel_height - 1 - pad_bottom
+pad_bottom = kernel_height - 1 - pad_bottom + outpad_height
 dilated_height = stride_height * (inp_height - 1) + 1
 
 # compute pad
diff --git a/tests/python/relay/test_op_grad_level2.py b/tests/python/relay/test_op_grad_level2.py
index 8b434d6..50e3585 100644
--- a/tests/python/relay/test_op_grad_level2.py
+++ b/tests/python/relay/test_op_grad_level2.py
@@ -151,8 +151,7 @@ def verify_conv2d_grad(dshape, wshape, strides, padding, dilation, groups=1, mod
 def test_conv2d_grad():
 verify_conv2d_grad((1, 4, 16, 16), (16, 4, 3, 3), [1, 1], [1, 1], [1, 1])
 verify_conv2d_grad((1, 4, 16, 16), (16, 4, 1, 1), [1, 1], [0, 0], [1, 1])
-# TODO(@vinx13) recover the test after we fix the conv2d grad.
-# verify_conv2d_grad((1, 4, 16, 16), (16, 4, 1, 1), [2, 2], [0, 0], [1, 1])
+verify_conv2d_grad((1, 4, 16, 16), (16, 4, 1, 1), [2, 2], [0, 0], [1, 1])
 verify_conv2d_grad((1, 4, 16, 16), (16, 4, 3, 3), [1, 1], [1, 1], [1, 1], mode='first_order')
 
 



[GitHub] [incubator-tvm] tqchen commented on pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-10 Thread GitBox


tqchen commented on pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227#issuecomment-671422431


   Thanks @spectrometerHBH @junrushao1994 @Hzfengsy. This PR is now merged.







[incubator-tvm] branch master updated (8bb99fb -> 87d6ccd)

2020-08-10 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8bb99fb  [TFLite] Implemented ONE_HOT Operator for TFLite (#6223)
 add 87d6ccd  [TIR][Hybrid] Hybrid Script Support for TIR (#6227)

No new revisions were added by this update.

Summary of changes:
 python/tvm/__init__.py |   3 +
 .../cost_model => hybrid}/__init__.py  |   6 +-
 python/tvm/{ir => hybrid}/_ffi_api.py  |   4 +-
 python/tvm/hybrid/intrin.py| 136 
 python/tvm/hybrid/meta_unparser.py |  50 ++
 python/tvm/hybrid/parser.py| 755 ++
 python/tvm/hybrid/registry.py  | 231 ++
 python/tvm/hybrid/scope_emitter.py |  62 ++
 python/tvm/hybrid/scope_handler.py |  89 +++
 python/tvm/hybrid/special_stmt.py  | 102 +++
 python/tvm/hybrid/ty.py|  63 ++
 python/tvm/hybrid/utils.py |  96 +++
 src/printer/tir_hybrid_printer.cc  | 845 +
 tests/python/unittest/test_hybrid_error_report.py  | 105 +++
 tests/python/unittest/test_hybrid_roundtrip.py | 536 +
 15 files changed, 3078 insertions(+), 5 deletions(-)
 copy python/tvm/{auto_scheduler/cost_model => hybrid}/__init__.py (83%)
 copy python/tvm/{ir => hybrid}/_ffi_api.py (91%)
 create mode 100644 python/tvm/hybrid/intrin.py
 create mode 100644 python/tvm/hybrid/meta_unparser.py
 create mode 100644 python/tvm/hybrid/parser.py
 create mode 100644 python/tvm/hybrid/registry.py
 create mode 100644 python/tvm/hybrid/scope_emitter.py
 create mode 100644 python/tvm/hybrid/scope_handler.py
 create mode 100644 python/tvm/hybrid/special_stmt.py
 create mode 100644 python/tvm/hybrid/ty.py
 create mode 100644 python/tvm/hybrid/utils.py
 create mode 100644 src/printer/tir_hybrid_printer.cc
 create mode 100644 tests/python/unittest/test_hybrid_error_report.py
 create mode 100644 tests/python/unittest/test_hybrid_roundtrip.py



[GitHub] [incubator-tvm] tqchen merged pull request #6227: [TIR][Hybrid] Hybrid Script Support for TIR

2020-08-10 Thread GitBox


tqchen merged pull request #6227:
URL: https://github.com/apache/incubator-tvm/pull/6227


   







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6228: Constant input attr added to fully connected operation in TFLite frontend

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6228:
URL: https://github.com/apache/incubator-tvm/pull/6228#discussion_r467972074



##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1695,10 +1695,9 @@ def convert_fully_connected(self, op):
 raise ImportError("The tflite package must be installed")
 
 input_tensors = self.get_input_tensors(op)
-assert len(input_tensors) >= 2, "input tensors length should be >= 2"
+assert len(input_tensors) in (2, 3), "input tensors length should be two or three"

Review comment:
   input_tensors can now have 3 elements, but I can't see the 3rd tensor 
(input_tensors[2]) being used anywhere?
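
   An illustrative sketch (not the actual TVM converter code) of what the 
review is asking for: when the assertion admits a third tensor, the 
conversion should also consume it. The function and dict layout below are 
hypothetical, only the 2-or-3 length contract comes from the diff above.

```python
def convert_fully_connected(input_tensors):
    """Hypothetical converter skeleton: branch on an optional 3rd bias tensor."""
    assert len(input_tensors) in (2, 3), "input tensors length should be two or three"
    data, weight = input_tensors[0], input_tensors[1]
    # The optional bias must actually be used, not just permitted by the assert.
    bias = input_tensors[2] if len(input_tensors) == 3 else None
    out = {"data": data, "weight": weight}
    if bias is not None:
        out["bias"] = bias
    return out
```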

##
File path: tests/python/frontend/tflite/test_forward.py
##
@@ -2456,25 +2456,27 @@ def test_forward_sparse_to_dense():
 # Fully Connected
 # ---
 
-def _test_fully_connected(tensor_in_sizes, filter_in_sizes, bias_in_size=None):
+def _test_fully_connected(tensor_in_sizes, wrap_input, filter_in_sizes, bias_in_size=None):
 """ One iteration of fully connected """
 
-total_size_1 = 1
-total_size_2 = 1
-for s in tensor_in_sizes:
-total_size_1 *= s
-for s in filter_in_sizes:
-total_size_2 *= s
-# Initializes the input tensor with array containing incrementing
-# numbers from 1.
-data_array = [f * 1.0 for f in range(1, total_size_1 + 1)]
-filter_array = [f * 1.0 for f in range(1, total_size_2 + 1)]
+total_size_1 = np.prod(tensor_in_sizes)
+total_size_2 = np.prod(filter_in_sizes)
+
 assert int(total_size_1 / tensor_in_sizes[0]) == filter_in_sizes[0], \
 "input size and filter size are mismatched"
 
+# Initializes the input tensor with array containing incrementing
+# numbers from 1.
+data_array = np.arange(1, total_size_1 + 1, dtype=np.float32)
+filter_array = np.arange(1, total_size_2 + 1, dtype=np.float32)
+
 with tf.Graph().as_default():
-in_data = array_ops.placeholder(shape=tensor_in_sizes, dtype='float32')
-in_filter = constant_op.constant(filter_array, shape=filter_in_sizes, dtype='float32')
+in_name="input"

Review comment:
   ```suggestion
   in_name = "input"
   ```
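
   The replacement of the loop-based size computation with `np.prod`, and of 
the list comprehensions with `np.arange`, is behavior-preserving. A small 
self-contained check (the sizes are arbitrary example values):

```python
import numpy as np

tensor_in_sizes = [1, 10]
# Original loop-based size computation from the deleted lines
total_size_1 = 1
for s in tensor_in_sizes:
    total_size_1 *= s
# Refactored form from the diff above
assert int(np.prod(tensor_in_sizes)) == total_size_1
# Incrementing data starting from 1, as in the original comprehension
data_array = np.arange(1, total_size_1 + 1, dtype=np.float32)
assert data_array[0] == 1.0 and data_array[-1] == float(total_size_1)
```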

##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -1720,7 +1719,7 @@ def convert_fully_connected(self, op):
 # Dense expected Weight shape: [out_dim, n_units]
 # Dense output shape: [batch_size, out_dim]
 target_shape = tuple((-1, weight_tensor_shape[1]))
-in_expr = self.get_expr(input_tensor_idx)
+in_expr = self.get_tensor_expr(input_tensor)

Review comment:
   Is this a related change?









[GitHub] [incubator-tvm] mbaret commented on pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#issuecomment-671392518


   In addition to the suggested changes, I've added some further 
comments/documentation and moved implementations out of header files.







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467945546



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+struct ConcatenateParams {
+  sl::QuantizationInfo qInfo;
+  sl::ConcatenationInfo concat_info = sl::ConcatenationInfo(1, qInfo);
+  std::vector<sl::TensorInfo> input_infos;
+};
+
+struct SplitParams {
+  sl::SplitInfo split_info = sl::SplitInfo(0, {});
+  sl::TensorInfo input_info;
+};
+
+class ErrStrm {
+ public:
+  template <typename T>
+  ErrStrm& operator<<(const T& val) {  // NOLINT(*)
+stream_ << val;
+return *this;
+  }
+
+ private:
+  std::stringstream stream_;
+  friend class EthosnError;
+};
+
+class EthosnError {
+ public:
+  EthosnError() {}
+  explicit EthosnError(const Array<String>& msgs) : msgs(msgs) {}
+  explicit EthosnError(const String& msg) { msgs.push_back(msg); }
+  explicit EthosnError(const ErrStrm& err) : EthosnError(err.stream_.str()) {}
+
+  explicit operator bool() const { return !msgs.empty(); }
+
+  EthosnError& operator+=(const EthosnError& other) {
+msgs.insert(msgs.end(), other.msgs.begin(), other.msgs.end());
+return *this;
+  }
+
+  Array<String> msgs;
+};
+
+class EthosnAPI {
+ public:
+  static std::unique_ptr<sl::CompiledNetwork> Compile(std::shared_ptr<sl::Network> network,
+                                                      const sl::CompilationOptions& options);
+
+  static sl::CompilationOptions CreateOptions();
+
+  static bool IsEthosFunc(const Call& call, const std::string& op_name);
+  static bool IsEthosOp(const Call& call, const std::string& op_name);
+
+  static EthosnError Concatenate(const Expr& expr, ConcatenateParams* params);
+  static EthosnError Split(const Expr& expr, SplitParams* params);
+
+ private:
+  static EthosnError Tvm2Npu(const Array<IndexExpr>& shape, sl::TensorShape* npu_shape);
+  static EthosnError Tvm2Npu(const tvm::DataType& dtype, sl::DataType* data_type);

Review comment:
   I've added some docs around them as well.
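
   The `EthosnError` class quoted above follows an error-accumulation 
pattern: each translation step can append messages, and the object is truthy 
only when errors exist. A minimal Python analogue of the same idiom (not TVM 
API, just an illustration of the pattern):

```python
class EthosnError:
    """Python analogue of the C++ EthosnError above: collects error
    messages via +=, and converts to True only when errors were recorded."""

    def __init__(self, msgs=None):
        # Accept a single message, a list of messages, or nothing.
        if msgs is None:
            self.msgs = []
        elif isinstance(msgs, str):
            self.msgs = [msgs]
        else:
            self.msgs = list(msgs)

    def __bool__(self):
        return bool(self.msgs)

    def __iadd__(self, other):
        self.msgs.extend(other.msgs)
        return self

err = EthosnError()
err += EthosnError("unsupported dtype")
err += EthosnError(["bad shape"])
assert bool(err) and err.msgs == ["unsupported dtype", "bad shape"]
```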









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467945163



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_

Review comment:
   To improve the clarity here, I've restricted the scope of ethosn_api to 
just the translation functions and moved the ones to do with compilation into 
codegen.cc.









[GitHub] [incubator-tvm] jainris commented on pull request #6243: [TFLite] Implemented EXPAND_DIMS Operator for TFLite.

2020-08-10 Thread GitBox


jainris commented on pull request #6243:
URL: https://github.com/apache/incubator-tvm/pull/6243#issuecomment-671357199


   cc @anijain2305 @u99127 @mbaret @FrozenGene @tqchen 







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467886421



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.cc
##
@@ -0,0 +1,268 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "ethosn_api.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "capabilities.h"
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+std::unique_ptr<sl::CompiledNetwork> EthosnAPI::Compile(std::shared_ptr<sl::Network> network,
+                                                        const sl::CompilationOptions& options) {
+  std::vector<std::unique_ptr<sl::CompiledNetwork>> compiled_network =
+      sl::Compile(*network, options);
+  CHECK_GE(compiled_network.size(), 1) << "Ethos-N compiler failed to compile network";
+
+  return std::move(compiled_network[0]);
+}
+
+struct EthosnCompilerConfigNode : public tvm::AttrsNode<EthosnCompilerConfigNode> {
+  int variant;
+  bool strategy0;
+  bool strategy1;
+  bool strategy3;
+  bool strategy4;
+  bool strategy6;
+  bool strategy7;
+  bool dump_ram;
+  bool initial_sram_dump;
+  bool block_config_16x16;
+  bool block_config_32x8;
+  bool block_config_8x32;
+  bool block_config_8x8;
+  bool enable_intermediate_compression;
+  bool disable_winograd;
+  bool dump_debug_files;
+  String debug_dir;
+  bool enable_cascading;
+
+  TVM_DECLARE_ATTRS(EthosnCompilerConfigNode, "ext.attrs.EthosnCompilerConfigNode") {
+TVM_ATTR_FIELD(variant)
+.describe("0 for Ethos-N77, 1 for Ethos-N57, 2 for Ethos-N37. See Ethos-N documentation.")
+.set_default(0);
+TVM_ATTR_FIELD(strategy0).set_default(true);
+TVM_ATTR_FIELD(strategy1).set_default(true);
+TVM_ATTR_FIELD(strategy3).set_default(true);
+TVM_ATTR_FIELD(strategy4).set_default(true);
+TVM_ATTR_FIELD(strategy6).set_default(true);
+TVM_ATTR_FIELD(strategy7).set_default(true);
+TVM_ATTR_FIELD(dump_ram).set_default(false);
+TVM_ATTR_FIELD(initial_sram_dump).set_default(false);
+TVM_ATTR_FIELD(block_config_16x16).set_default(true);
+TVM_ATTR_FIELD(block_config_32x8).set_default(true);
+TVM_ATTR_FIELD(block_config_8x32).set_default(true);
+TVM_ATTR_FIELD(block_config_8x8).set_default(true);
+TVM_ATTR_FIELD(enable_intermediate_compression).set_default(true);
+TVM_ATTR_FIELD(disable_winograd).set_default(false);
+TVM_ATTR_FIELD(dump_debug_files).set_default(false);
+TVM_ATTR_FIELD(debug_dir).set_default(".");
+TVM_ATTR_FIELD(enable_cascading).set_default(false);
+  }
+};
+
+class EthosnCompilerConfig : public Attrs {
+ public:
+  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(EthosnCompilerConfig, Attrs, EthosnCompilerConfigNode);
+};
+
+TVM_REGISTER_NODE_TYPE(EthosnCompilerConfigNode);
+TVM_REGISTER_PASS_CONFIG_OPTION("relay.ext.ethos-n.options", EthosnCompilerConfig);
+
+sl::CompilationOptions EthosnAPI::CreateOptions() {
+  auto ctx = transform::PassContext::Current();
+  auto cfg = ctx->GetConfig<EthosnCompilerConfig>("relay.ext.ethos-n.options");
+  if (!cfg.defined()) {
+cfg = AttrsWithDefaultValues<EthosnCompilerConfig>();
+  }
+
+  sl::CompilationOptions options(variants[cfg.value()->variant]);
+  options.m_Strategy0 = cfg.value()->strategy0;
+  options.m_Strategy1 = cfg.value()->strategy1;
+  options.m_Strategy3 = cfg.value()->strategy3;
+  options.m_Strategy4 = cfg.value()->strategy4;
+  options.m_Strategy6 = cfg.value()->strategy6;
+  options.m_Strategy7 = cfg.value()->strategy7;
+  options.m_DebugInfo.m_DumpRam = cfg.value()->dump_ram;
+  options.m_DebugInfo.m_InitialSramDump = cfg.value()->initial_sram_dump;
+  options.m_BlockConfig16x16 = cfg.value()->block_config_16x16;
+  options.m_BlockConfig32x8 = cfg.value()->block_config_32x8;
+  options.m_BlockConfig8x32 = cfg.value()->block_config_8x32;
+  options.m_BlockConfig8x8 = cfg.value()->block_config_8x8;
+  options.m_EnableIntermediateCompression = cfg.value()->enable_intermediate_compression;
+  options.m_DisableWinograd = cfg.value()->disable_winograd;
+  options.m_DebugInfo.m_DumpDebugFiles = cfg.value()->dump_debug_files;
+  options.m_DebugInfo.

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467859711



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_

Review comment:
   `ethosn_api` is there to translate calls in Relay to their Support 
Library equivalents. It's used both during codegen and, earlier, during the 
'supported' checks. `ethosn_codegen` is what actually traverses a Relay 
function and builds up a Support Library graph representation, using 
`ethosn_api` as it goes to translate each individual call.
   
   It also has some other functions which interact with the Support 
Library's API, in particular to get compilation options and compile a network.









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467886079



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_

Review comment:
   I've added docstrings now.









[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


FrozenGene commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r467880423



##
File path: src/auto_scheduler/search_policy/utils.cc
##
@@ -0,0 +1,301 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/search_policy/utils.cc
+ * \brief Common utilities
+ */
+
+#include "utils.h"
+
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+State DoMultiLevelTiling(const State& state, int stage_id, const std::string& format,
+ std::vector<int>* spatial_split_step_ids) {
+  // Temporary object to be used if the input pointer is nullptr
+  std::vector<int> temp_split_step_ids;
+  if (spatial_split_step_ids == nullptr) {
+spatial_split_step_ids = &temp_split_step_ids;
+  }
+  std::vector<std::vector<Iterator>> space_levels;
+  std::vector<std::vector<Iterator>> reduce_levels;
+  std::vector<Iterator> space_outer, space_inner, reduce_outer, reduce_inner;
+  Array<Iterator> split_res;
+
+  for (const auto c : format) {
+if (tolower(c) == 's') {
+  space_levels.emplace_back();
+} else if (tolower(c) == 'r') {
+  reduce_levels.emplace_back();
+} else {
+  LOG(FATAL) << "Invalid multi-level tiling format: " << format;
+}
+  }
+  size_t n_space = space_levels.size();
+  size_t n_reduce = reduce_levels.size();
+
+  spatial_split_step_ids->clear();
+
+  State tmp_s = state;
+  const Stage& stage = state->stages[stage_id];
+  const auto& no_split_name_pair = GetNoSplitAxisAttr(stage);  // handle special split strategy
+  const std::set<std::string>& no_split_at_inner_name_set = no_split_name_pair.first;
+  const std::set<std::string>& no_split_at_outer_name_set = no_split_name_pair.second;
+
+  for (const auto& iter : state->stages[stage_id]->iters) {
+if (iter->iter_kind == IteratorKind::kSpatial) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_space, 1);
+
+if (n_space == 1) {
+  space_levels[0].push_back(iter);
+} else {
+  split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_space - 1, NullOpt));
+  for (size_t i = 0; i < n_space; i++) {
+space_levels[i].push_back(split_res[i]);
+  }
+  spatial_split_step_ids->push_back(tmp_s->transform_steps.size() - 1);
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  space_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  space_outer.push_back(iter);
+}
+  }
+} else if (iter->iter_kind == IteratorKind::kReduction) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_reduce, 1);
+
+if (n_reduce == 1) {
+  reduce_levels[0].push_back(iter);
+} else {
+  split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_reduce - 1, NullOpt));
+  for (size_t i = 0; i < n_reduce; i++) {
+reduce_levels[i].push_back(split_res[i]);
+  }
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  reduce_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  reduce_outer.push_back(iter);
+}
+  }
+} else {
+  LOG(FATAL) << "Invalid iter type: " << int(iter->iter_kind);
+}
+  }
+
+  if (!space_outer.empty()) {
+CHECK(!space_levels.empty());
+space_levels.front().insert(space_levels.front().begin(),
+std::make_move_iterator(space_outer.begin()),
+std::make_move_iterator(space_outer.end()));
+  }
+  if (!space_inner.empty()) {
+CHECK(!space_levels.empty());
+space_levels.back().insert(space_levels.back().begin(),
+   std::make_move_iterator(space_inner.begin()),
+   std::make_move_iterator(space_inner.end()));
+  }
+
+  if (!reduce_outer.empty()) {
+CHECK(!reduce_levels.empty());
+reduce_levels.front().insert(reduce_levels.front().begin(),
+ std::make_move_iterator(reduce_outer.begin
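
A minimal Python sketch of the format-string handling at the top of 
`DoMultiLevelTiling` in the quoted code: each 's' adds a space tiling level 
and each 'r' adds a reduce level, case-insensitively, with anything else 
rejected. The function name is ours; only the parsing rule comes from the 
C++ loop above.

```python
def parse_tiling_format(fmt):
    """Count space ('s') and reduce ('r') tiling levels, mirroring the
    per-character loop in DoMultiLevelTiling; other characters are invalid."""
    n_space = n_reduce = 0
    for c in fmt.lower():
        if c == 's':
            n_space += 1
        elif c == 'r':
            n_reduce += 1
        else:
            raise ValueError("Invalid multi-level tiling format: " + fmt)
    return n_space, n_reduce

# e.g. the classic CPU structure "SSRSRS": 4 space levels, 2 reduce levels
assert parse_tiling_format("SSRSRS") == (4, 2)
```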

[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r467878808



##
File path: src/auto_scheduler/search_policy/utils.cc
##
@@ -0,0 +1,301 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/search_policy/utils.cc
+ * \brief Common utilities
+ */
+
+#include "utils.h"
+
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+State DoMultiLevelTiling(const State& state, int stage_id, const std::string& format,
+ std::vector<int>* spatial_split_step_ids) {
+  // Temporary object to be used if the input pointer is nullptr
+  std::vector<int> temp_split_step_ids;
+  if (spatial_split_step_ids == nullptr) {
+spatial_split_step_ids = &temp_split_step_ids;
+  }
+  std::vector<std::vector<Iterator>> space_levels;
+  std::vector<std::vector<Iterator>> reduce_levels;
+  std::vector<Iterator> space_outer, space_inner, reduce_outer, reduce_inner;
+  Array<Iterator> split_res;
+
+  for (const auto c : format) {
+if (tolower(c) == 's') {
+  space_levels.emplace_back();
+} else if (tolower(c) == 'r') {
+  reduce_levels.emplace_back();
+} else {
+  LOG(FATAL) << "Invalid multi-level tiling format: " << format;
+}
+  }
+  size_t n_space = space_levels.size();
+  size_t n_reduce = reduce_levels.size();
+
+  spatial_split_step_ids->clear();
+
+  State tmp_s = state;
+  const Stage& stage = state->stages[stage_id];
+  const auto& no_split_name_pair = GetNoSplitAxisAttr(stage);  // handle special split strategy
+  const std::set<std::string>& no_split_at_inner_name_set = no_split_name_pair.first;
+  const std::set<std::string>& no_split_at_outer_name_set = no_split_name_pair.second;
+
+  for (const auto& iter : state->stages[stage_id]->iters) {
+if (iter->iter_kind == IteratorKind::kSpatial) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_space, 1);
+
+if (n_space == 1) {
+  space_levels[0].push_back(iter);
+} else {
+  split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_space - 1, NullOpt));
+  for (size_t i = 0; i < n_space; i++) {
+space_levels[i].push_back(split_res[i]);
+  }
+  spatial_split_step_ids->push_back(tmp_s->transform_steps.size() - 1);
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  space_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  space_outer.push_back(iter);
+}
+  }
+} else if (iter->iter_kind == IteratorKind::kReduction) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_reduce, 1);
+
+if (n_reduce == 1) {
+  reduce_levels[0].push_back(iter);
+} else {
+  split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_reduce - 1, NullOpt));
+  for (size_t i = 0; i < n_reduce; i++) {
+reduce_levels[i].push_back(split_res[i]);
+  }
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  reduce_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  reduce_outer.push_back(iter);
+}
+  }
+} else {
+  LOG(FATAL) << "Invalid iter type: " << int(iter->iter_kind);
+}
+  }
+
+  if (!space_outer.empty()) {
+CHECK(!space_levels.empty());
+space_levels.front().insert(space_levels.front().begin(),
+std::make_move_iterator(space_outer.begin()),
+std::make_move_iterator(space_outer.end()));
+  }
+  if (!space_inner.empty()) {
+CHECK(!space_levels.empty());
+space_levels.back().insert(space_levels.back().begin(),
+   std::make_move_iterator(space_inner.begin()),
+   std::make_move_iterator(space_inner.end()));
+  }
+
+  if (!reduce_outer.empty()) {
+CHECK(!reduce_levels.empty());
+reduce_levels.front().insert(reduce_levels.front().begin(),
+ std::make_move_iterator(reduce_outer.begin()),

[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r467878505



##
File path: include/tvm/auto_scheduler/search_policy.h
##
@@ -89,46 +100,54 @@ class SearchCallback : public ObjectRef {
   TVM_DEFINE_MUTABLE_OBJECT_REF_METHODS(SearchCallback, ObjectRef, 
SearchCallbackNode);
 };
 
+/*! \brief Attribute keys of ops used for SearchPolicy. */
+struct SearchPolicyKey {
+  /*! \brief Always apply unroll to the innermost iterator of the specified iterators. */
+  static constexpr const char* always_unroll_inner = "auto_scheduler_always_unroll_inner";
+  /*! \brief The specified iterators will not be placed as the innermost iterator. */
+  static constexpr const char* no_split_at_inner = "auto_scheduler_no_split_at_inner";
+  /*! \brief The specified iterators will not be placed as the outermost iterator. */
+  static constexpr const char* no_split_at_outer = "auto_scheduler_no_split_at_outer";

Review comment:
   Ok, then I'll remove them.









[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


FrozenGene commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r467875817



##
File path: src/auto_scheduler/search_policy/utils.cc
##
@@ -0,0 +1,301 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_scheduler/search_policy/utils.cc
+ * \brief Common utilities
+ */
+
+#include "utils.h"
+
+#include 
+
+namespace tvm {
+namespace auto_scheduler {
+
+State DoMultiLevelTiling(const State& state, int stage_id, const std::string& format,
+ std::vector<int>* spatial_split_step_ids) {
+  // Temporary object to be used if the input pointer is nullptr
+  std::vector<int> temp_split_step_ids;
+  if (spatial_split_step_ids == nullptr) {
+spatial_split_step_ids = &temp_split_step_ids;
+  }
+  std::vector<std::vector<Iterator>> space_levels;
+  std::vector<std::vector<Iterator>> reduce_levels;
+  std::vector<Iterator> space_outer, space_inner, reduce_outer, reduce_inner;
+  Array<Iterator> split_res;
+
+  for (const auto c : format) {
+if (tolower(c) == 's') {
+  space_levels.emplace_back();
+} else if (tolower(c) == 'r') {
+  reduce_levels.emplace_back();
+} else {
+  LOG(FATAL) << "Invalid multi-level tiling format: " << format;
+}
+  }
+  size_t n_space = space_levels.size();
+  size_t n_reduce = reduce_levels.size();
+
+  spatial_split_step_ids->clear();
+
+  State tmp_s = state;
+  const Stage& stage = state->stages[stage_id];
+  const auto& no_split_name_pair = GetNoSplitAxisAttr(stage);  // handle special split strategy
+  const std::set<std::string>& no_split_at_inner_name_set = no_split_name_pair.first;
+  const std::set<std::string>& no_split_at_outer_name_set = no_split_name_pair.second;
+
+  for (const auto& iter : state->stages[stage_id]->iters) {
+if (iter->iter_kind == IteratorKind::kSpatial) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_space, 1);
+
+if (n_space == 1) {
+  space_levels[0].push_back(iter);
+} else {
+  split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_space - 1, NullOpt));
+  for (size_t i = 0; i < n_space; i++) {
+space_levels[i].push_back(split_res[i]);
+  }
+  spatial_split_step_ids->push_back(tmp_s->transform_steps.size() - 1);
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  space_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  space_outer.push_back(iter);
+}
+  }
+} else if (iter->iter_kind == IteratorKind::kReduction) {
+  if (!no_split_at_inner_name_set.count(iter->name) &&
+  !no_split_at_outer_name_set.count(iter->name)) {
+CHECK_GE(n_reduce, 1);
+
+if (n_reduce == 1) {
+  reduce_levels[0].push_back(iter);
+} else {
split_res = tmp_s.split(stage_id, iter, Array<Optional<Integer>>(n_reduce - 1, NullOpt));
+  for (size_t i = 0; i < n_reduce; i++) {
+reduce_levels[i].push_back(split_res[i]);
+  }
+}
+  } else {
+if (no_split_at_inner_name_set.count(iter->name)) {
+  reduce_inner.push_back(iter);
+}
+if (no_split_at_outer_name_set.count(iter->name)) {
+  reduce_outer.push_back(iter);
+}
+  }
+} else {
+  LOG(FATAL) << "Invalid iter type: " << int(iter->iter_kind);
+}
+  }
+
+  if (!space_outer.empty()) {
+CHECK(!space_levels.empty());
+space_levels.front().insert(space_levels.front().begin(),
+std::make_move_iterator(space_outer.begin()),
+std::make_move_iterator(space_outer.end()));
+  }
+  if (!space_inner.empty()) {
+CHECK(!space_levels.empty());
+space_levels.back().insert(space_levels.back().begin(),
+   std::make_move_iterator(space_inner.begin()),
+   std::make_move_iterator(space_inner.end()));
+  }
+
+  if (!reduce_outer.empty()) {
+CHECK(!reduce_levels.empty());
reduce_levels.front().insert(reduce_levels.front().begin(),
 std::make_move_iterator(reduce_outer.begin()),
 std::make_move_iterator(reduce_outer.end()));

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467867668



##
File path: src/runtime/contrib/ethosn/ethosn_device.cc
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ethosn_device.cc
+ * \brief Ethos-N NPU device integration.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "ethosn_driver_library/Buffer.hpp"
+#include "ethosn_support_library/Support.hpp"
+
+#if defined ETHOSN_HW
+
+#include "ethosn_driver_library/Inference.hpp"
+#include "ethosn_driver_library/Network.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+namespace dl = ::ethosn::driver_library;
+
+int64_t GetTensorSize(const DLTensor& tensor) {
+  int64_t size = 1;
+  for (int i = 0; i < tensor.ndim; i++) {
+size *= tensor.shape[i];
+  }
+  return size;
+}
+
+bool WaitForInference(dl::Inference* inference, int timeout) {
+  // Wait for inference to complete
+  int fd = inference->GetFileDescriptor();
+  struct pollfd fds;
+  memset(&fds, 0, sizeof(fds));
+  fds.fd = fd;
+  fds.events = POLLIN;  // Wait for any available input.
+
+  const int ms_per_seconds = 1000;
+  int poll_result = poll(&fds, 1, timeout * ms_per_seconds);
+  if (poll_result > 0) {
+dl::InferenceResult result;
+if (read(fd, &result, sizeof(result)) != sizeof(result)) {
+  return false;
+}
+if (result != dl::InferenceResult::Completed) {
+  return false;
+}
+  } else if (poll_result == 0) {
+return false;
+  } else {
+return false;
+  }
+  return true;
+}
+
+template <typename T>
+void CopyOutput(dl::Buffer* source_buffers[], std::vector<DLTensor*>* outputs) {
+  for (DLTensor* tensor : *outputs) {
+dl::Buffer* source_buffer = source_buffers[0];
+uint8_t* source_buffer_data = source_buffer->GetMappedBuffer();
+size_t size = source_buffer->GetSize();
+T* dest_pointer = static_cast<T*>(tensor->data);
+std::copy_backward(source_buffer_data, source_buffer_data + size, dest_pointer + size);
+source_buffers++;
+  }
+}
+
+void CreateBuffers(std::vector<std::shared_ptr<dl::Buffer>>* fm,
+   const std::vector<DLTensor*>& tensors) {
+  int index = 0;
+  for (auto buffer : tensors) {
+auto* data = static_cast<uint8_t*>(buffer->data);
+// The NPU only needs the size of the tensor * uint8_t.
+auto data_size = static_cast<uint32_t>(GetTensorSize(*buffer));
+(*fm)[index++] = std::make_shared<dl::Buffer>(data, data_size, dl::DataFormat::NHWC);
+  }
+}
+
+bool Inference(tvm::runtime::TVMArgs args, sl::CompiledNetwork* network,
+   std::vector<uint32_t> input_order, std::vector<uint32_t> output_order) {
+  // Unpack parameters
+  uint8_t argc = 0;
+  std::vector<DLTensor*> inputs(input_order.size());
+  for (uint8_t i = 0; i < network->GetInputBufferInfos().size(); i++) {
+inputs[input_order[i]] = args[argc++];
+  }
+  auto out_infos = network->GetOutputBufferInfos();
+  std::vector<DLTensor*> outputs(output_order.size());
+  for (uint8_t i = 0; i < network->GetOutputBufferInfos().size(); i++) {
+outputs[output_order[i]] = args[argc++];
+  }
+
+  // Set up input buffers
+  std::vector<std::shared_ptr<dl::Buffer>> ifm(inputs.size());
+  CreateBuffers(&ifm, inputs);
+
+  // Set up output buffers
+  std::vector<std::shared_ptr<dl::Buffer>> ofm(outputs.size());
+  CreateBuffers(&ofm, outputs);
+
+  // Raw pointers for the inference
+  dl::Buffer* ifm_raw[inputs.size()];
+  for (size_t i = 0; i < inputs.size(); i++) {
+ifm_raw[i] = ifm[i].get();
+  }
+  dl::Buffer* ofm_raw[outputs.size()];
+  for (size_t i = 0; i < outputs.size(); i++) {
+ofm_raw[i] = ofm[i].get();
+  }
+
+  auto npu = std::make_unique<dl::Network>(*network);
+
+  // Execute the inference.
+  std::unique_ptr<dl::Inference> result(
+  npu->ScheduleInference(ifm_raw, sizeof(ifm_raw) / sizeof(ifm_raw[0]), ofm_raw,
+ sizeof(ofm_raw) / sizeof(ofm_raw[0])));
+  bool inferenceCompleted = WaitForInference(result.get(), 60);
+  if (inferenceCompleted) {
+switch ((outputs)[0]->dtype.bits) {
+  case 8: {
+dl::Buffer** ofms = &ofm_raw[0];
+for (DLTensor* tensor : outputs) {
+  uint8_t* source_buffer_data = (*ofms++)->GetMappedBuffer();
+  uint8_t* dest_pointer = static_cast<uint8_t*>(tensor->data);

[GitHub] [incubator-tvm] jainris opened a new pull request #6243: [TFLite] Implemented EXPAND_DIMS Operator for TFLite.

2020-08-10 Thread GitBox


jainris opened a new pull request #6243:
URL: https://github.com/apache/incubator-tvm/pull/6243


   * Added implementation for EXPAND_DIMS Operator.
   * Added tests for EXPAND_DIMS Operator.
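   The diff itself is not quoted here; for reference, a minimal sketch of the operator's semantics using NumPy's equivalent (which TFLite frontend tests typically compare against) — an illustration, not the code added by the PR:

```python
import numpy as np

# EXPAND_DIMS inserts a new axis of length 1 at the given position.
x = np.array([[1, 2], [3, 4]])  # shape (2, 2)

assert np.expand_dims(x, axis=0).shape == (1, 2, 2)
assert np.expand_dims(x, axis=1).shape == (2, 1, 2)
assert np.expand_dims(x, axis=-1).shape == (2, 2, 1)  # negative axes count from the end
```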



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-10 Thread GitBox


FrozenGene commented on pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223#issuecomment-671322235


   Thanks @jainris @leandron 







[incubator-tvm] branch master updated (5ed7d31 -> 8bb99fb)

2020-08-10 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 5ed7d31  [COMMUNITY] jcf94 -> Reviewer (#6241)
 add 8bb99fb  [TFLite] Implemented ONE_HOT Operator for TFLite (#6223)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 51 
 tests/python/frontend/tflite/test_forward.py | 29 
 2 files changed, 80 insertions(+)
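For reference, ONE_HOT's semantics can be sketched in plain NumPy (an illustration only, not the frontend code merged above):

```python
import numpy as np

indices = np.array([0, 2, 1])
depth, on_value, off_value = 3, 1, 0

# Each index becomes a depth-length row: on_value at the index, off_value elsewhere.
result = np.full((indices.size, depth), off_value)
result[np.arange(indices.size), indices] = on_value

assert result.tolist() == [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```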



[GitHub] [incubator-tvm] FrozenGene merged pull request #6223: [TFLite] Implemented ONE_HOT Operator for TFLite.

2020-08-10 Thread GitBox


FrozenGene merged pull request #6223:
URL: https://github.com/apache/incubator-tvm/pull/6223


   







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467861916



##
File path: tests/python/contrib/test_ethosn/infrastructure.py
##
@@ -0,0 +1,225 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Expose Ethos test functions to the Python front end"""
+
+from __future__ import absolute_import, print_function
+import tvm
+from tvm import relay
+from tvm.contrib import util, graph_runtime, download
+from tvm.relay.testing import run_opt_pass
+from enum import Enum
+from hashlib import md5
+from itertools import zip_longest, combinations
+import numpy as np
+from PIL import Image
+import os
+
+from . import _infrastructure
+from tvm.relay.op.contrib import get_pattern_table
+
+
+class Available(Enum):
+UNAVAILABLE = 0
+SW_ONLY = 1
+SW_AND_HW = 2
+
+
+def ethosn_available():
+"""Return whether Ethos-N software and hardware support is available"""
+if not tvm.get_global_func("relay.ethos-n.query", True):
+print("skip because Ethos-N module is not available")
+return Available.UNAVAILABLE
+else:
+hw = tvm.get_global_func("relay.ethos-n.query")()
+return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+def get_real_image(im_height, im_width):
+repo_base = 'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+img_name = 'elephant-299.jpg'
+image_url = os.path.join(repo_base, img_name)
+img_path = download.download_testdata(image_url, img_name, module='data')
+image = Image.open(img_path).resize((im_height, im_width))
+x = np.array(image).astype('uint8')
+data = np.reshape(x, (1, im_height, im_width, 3))
+return data
+
+
+def assert_lib_hash(lib, golden):
+temp = util.tempdir()
+path = temp.relpath("lib.cmm")
+lib.imported_modules[1].save(path)
+lib_hash = md5(open(path, 'rb').read()).hexdigest()
+assert lib_hash == golden, "Expected hash: {} Got hash: {}".format(golden, lib_hash)
+
+
+def make_module(func, params):
+func = relay.Function(relay.analysis.free_vars(func), func)
+if len(params):
+func = relay.build_module.bind_params_by_name(func, params)
+return tvm.IRModule.from_expr(func)
+
+
+def make_ethosn_composite(ethosn_expr, name):
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function([relay.Var("a")], ethosn_expr)
+func = func.with_attr("Composite", name)
+call = relay.Call(func, vars)
+return call
+
+
+def make_ethosn_partition(ethosn_expr):
+# Create an Ethos-N global function
+mod = tvm.IRModule({})
+vars = relay.analysis.free_vars(ethosn_expr)
+func = relay.Function(vars, ethosn_expr)
+func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+func = func.with_attr("Compiler", "ethos-n")
+func = func.with_attr("global_symbol", "ethos-n_0")
+g1 = relay.GlobalVar("ethos-n_0")
+mod[g1] = func
+
+# These are the vars to call the Ethos-N partition with
+more_vars = relay.analysis.free_vars(ethosn_expr)
+# Call the Ethos-N partition in main
+call_fn1 = g1(*more_vars)
+mod["main"] = relay.Function(more_vars, call_fn1)
+return mod
+
+
+def get_cpu_op_count(mod):
+class Counter(tvm.relay.ExprVisitor):
+def __init__(self):
+super().__init__()
+self.count = 0
+
+def visit_call(self, call):
+if isinstance(call.op, tvm.ir.Op):
+self.count += 1
+
+super().visit_call(call)
+
+c = Counter()
+c.visit(mod["main"])
+return c.count
+
+
+def build(mod, params, npu=True, cpu_ops=0, npu_partitions=1):
+relay.backend.compile_engine.get().clear()
+with tvm.transform.PassContext(opt_level=3, config={
+"relay.ext.ethos-n.options": {"variant": 0}
+}):
+with tvm.target.create("llvm -mcpu=core-avx2"):
+if npu:
+f = relay.build_module.bind_params_by_name(mod["main"], params)
+mod = tvm.IRModule()
+mod["main"] = f
+mod = relay.transform.AnnotateTarget("ethos-n")(mod)
+mod = rela

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467861628



##
File path: src/runtime/contrib/ethosn/ethosn_device.cc
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file ethosn_device.cc
+ * \brief Ethos-N NPU device integration.
+ */
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+#include "ethosn_driver_library/Buffer.hpp"
+#include "ethosn_support_library/Support.hpp"
+
+#if defined ETHOSN_HW
+
+#include "ethosn_driver_library/Inference.hpp"
+#include "ethosn_driver_library/Network.hpp"
+
+namespace tvm {
+namespace runtime {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+namespace dl = ::ethosn::driver_library;
+
+int64_t GetTensorSize(const DLTensor& tensor) {

Review comment:
   Would be useful :)









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467860284



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.cc
##
@@ -0,0 +1,268 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "ethosn_api.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "capabilities.h"
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+std::unique_ptr<sl::CompiledNetwork> EthosnAPI::Compile(std::shared_ptr<sl::Network> network,
+                                                        const sl::CompilationOptions& options) {
+  std::vector<std::unique_ptr<sl::CompiledNetwork>> compiled_network =
+      sl::Compile(*network, options);
+  CHECK_GE(compiled_network.size(), 1) << "Ethos-N compiler failed to compile network";
+
+  return std::move(compiled_network[0]);
+}
+
+struct EthosnCompilerConfigNode : public tvm::AttrsNode<EthosnCompilerConfigNode> {
+  int variant;
+  bool strategy0;
+  bool strategy1;
+  bool strategy3;
+  bool strategy4;
+  bool strategy6;
+  bool strategy7;
+  bool dump_ram;
+  bool initial_sram_dump;
+  bool block_config_16x16;
+  bool block_config_32x8;
+  bool block_config_8x32;
+  bool block_config_8x8;
+  bool enable_intermediate_compression;
+  bool disable_winograd;
+  bool dump_debug_files;
+  String debug_dir;
+  bool enable_cascading;
+
+  TVM_DECLARE_ATTRS(EthosnCompilerConfigNode, "ext.attrs.EthosnCompilerConfigNode") {
+TVM_ATTR_FIELD(variant)
+.describe("0 for Ethos-N77, 1 for Ethos-N57, 2 for Ethos-N37. See Ethos-N documentation.")
+.set_default(0);
+TVM_ATTR_FIELD(strategy0).set_default(true);
+TVM_ATTR_FIELD(strategy1).set_default(true);
+TVM_ATTR_FIELD(strategy3).set_default(true);
+TVM_ATTR_FIELD(strategy4).set_default(true);
+TVM_ATTR_FIELD(strategy6).set_default(true);
+TVM_ATTR_FIELD(strategy7).set_default(true);
+TVM_ATTR_FIELD(dump_ram).set_default(false);
+TVM_ATTR_FIELD(initial_sram_dump).set_default(false);
+TVM_ATTR_FIELD(block_config_16x16).set_default(true);
+TVM_ATTR_FIELD(block_config_32x8).set_default(true);
+TVM_ATTR_FIELD(block_config_8x32).set_default(true);
+TVM_ATTR_FIELD(block_config_8x8).set_default(true);
+TVM_ATTR_FIELD(enable_intermediate_compression).set_default(true);
+TVM_ATTR_FIELD(disable_winograd).set_default(false);
+TVM_ATTR_FIELD(dump_debug_files).set_default(false);
+TVM_ATTR_FIELD(debug_dir).set_default(".");
+TVM_ATTR_FIELD(enable_cascading).set_default(false);
+  }
+};
+
+class EthosnCompilerConfig : public Attrs {
+ public:
+  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(EthosnCompilerConfig, Attrs, 
EthosnCompilerConfigNode);
+};
+
+TVM_REGISTER_NODE_TYPE(EthosnCompilerConfigNode);
+TVM_REGISTER_PASS_CONFIG_OPTION("relay.ext.ethos-n.options", EthosnCompilerConfig);
+
+sl::CompilationOptions EthosnAPI::CreateOptions() {
+  auto ctx = transform::PassContext::Current();
+  auto cfg = ctx->GetConfig<EthosnCompilerConfig>("relay.ext.ethos-n.options");
+  if (!cfg.defined()) {
+cfg = AttrsWithDefaultValues<EthosnCompilerConfig>();
+  }
+
+  sl::CompilationOptions options(variants[cfg.value()->variant]);
+  options.m_Strategy0 = cfg.value()->strategy0;
+  options.m_Strategy1 = cfg.value()->strategy1;
+  options.m_Strategy3 = cfg.value()->strategy3;
+  options.m_Strategy4 = cfg.value()->strategy4;
+  options.m_Strategy6 = cfg.value()->strategy6;
+  options.m_Strategy7 = cfg.value()->strategy7;
+  options.m_DebugInfo.m_DumpRam = cfg.value()->dump_ram;
+  options.m_DebugInfo.m_InitialSramDump = cfg.value()->initial_sram_dump;
+  options.m_BlockConfig16x16 = cfg.value()->block_config_16x16;
+  options.m_BlockConfig32x8 = cfg.value()->block_config_32x8;
+  options.m_BlockConfig8x32 = cfg.value()->block_config_8x32;
+  options.m_BlockConfig8x8 = cfg.value()->block_config_8x8;
+  options.m_EnableIntermediateCompression = cfg.value()->enable_intermediate_compression;
+  options.m_DisableWinograd = cfg.value()->disable_winograd;
+  options.m_DebugInfo.m_DumpDebugFiles = cfg.value()->dump_debug_files;
+  options.m_DebugInfo.m_DebugDir = cfg.value()->debug_dir;

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467859711



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_

Review comment:
   `ethosn_api` is there to translate calls in Relay to their Support 
Library equivalents. It's used both during the codegen and also earlier, during 
the 'supported' checks. The `ethosn_codegen` is what actually traverses through 
a Relay function and builds up a Support Library graph representation using 
`ethosn_api` as it goes to translate each individual call.









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467856126



##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/codegen.cc
+ * \brief The Relay -> Ethos-N command stream compiler.
+ */
+#include 
+#include 
+
+#include "codegen_ethosn.h"
+#include "ethosn_api.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+sl::TensorInfo GetTensorInfo(std::map<Expr, std::vector<sl::TensorInfo>> tensor_table,
+                             const Call& call) {
+  if (tensor_table.find(call) != tensor_table.end()) return tensor_table[call][0];
+
+  return sl::TensorInfo();
+}
+
+void InferTensorsVisitor::InferCall(const CallNode* cn) {

Review comment:
   The motivation behind this is principally clarity rather than necessity. 
The InferCall function ends up getting very long as more operators are 
introduced and we wanted to separate this lengthy function from the traversal 
logic so that it is quick to reason about the traversal without having to scan 
through a huge block of code. If you don't think this clarity is worthwhile, 
then we can inline it.
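
   The shape of that split can be sketched in a few lines (hypothetical names, not the actual TVM classes): the per-operator dispatch grows as operators are added, while the traversal stays fixed and easy to scan:

```python
# Dispatch: one entry per supported operator; this is the part that grows.
def infer_call(op_name):
    table = {"qnn.concatenate": "concat-info", "split": "split-info"}
    return table.get(op_name, "unknown operator")

# Traversal: walks an (op, args) tree and never changes as the table grows.
def visit(node, seen):
    op, args = node
    seen.append(infer_call(op))
    for arg in args:
        visit(arg, seen)

seen = []
visit(("qnn.concatenate", [("split", []), ("unknown", [])]), seen)
assert seen == ["concat-info", "split-info", "unknown operator"]
```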

##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -0,0 +1,214 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/ethosn/codegen.cc
+ * \brief The Relay -> Ethos-N command stream compiler.
+ */
+#include 
+#include 
+
+#include "codegen_ethosn.h"
+#include "ethosn_api.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+sl::TensorInfo GetTensorInfo(std::map<Expr, std::vector<sl::TensorInfo>> tensor_table,
+                             const Call& call) {
+  if (tensor_table.find(call) != tensor_table.end()) return tensor_table[call][0];
+
+  return sl::TensorInfo();
+}
+
+void InferTensorsVisitor::InferCall(const CallNode* cn) {
+  EthosnError err;
+  Call call = GetRef<Call>(cn);
+  // Determine call -> NPU mapping
+  if (EthosnAPI::IsEthosOp(call, "qnn.concatenate")) {
+ConcatenateParams params;
+err = EthosnAPI::Concatenate(call, ¶ms);
+tensor_table_[cn->args[0]] = params.input_infos;
+  } else if (EthosnAPI::IsEthosOp(call, "split")) {
+SplitParams params;
+params.input_info = GetTensorInfo(tensor_table_, call);
+err = EthosnAPI::Split(call, ¶ms);
+tensor_table_[cn->args[0]] = {params.input_info};
+  } else {
+err = EthosnError("unknown operator");
+  }
+  if (err) {
+ReportFatalError(call, err);
+  }
+}
+
+// This will only visit an expression if the expression's tensor info
+// has already been entirely inferred.
+// An example where this is important is a tuple node where each
+// get item node will only infer one field of the tuple's expression info.
+// We don't want to traverse the tuple until all of its fields have been inferred.
+void InferTensorsVisitor::VisitInferred(const Expr& expr) {
+  if (tensor_table_.find(expr) != tensor_table_.end()) {
+for (const auto& tensor_info : tensor_table_[expr]) {
+  if (tensor_info == sl::TensorInfo()) return;
+}
+VisitExpr(expr);
+  }
+}
+
+void InferTensorsVisitor::VisitExpr_(const CallNode* cn) {
+  InferCall(cn);
+  // Pre-order visitor
+  for (const auto& arg : cn->args) {
+VisitInferred(arg);
+  }
+}
+
+void InferTensorsVisitor::VisitExpr_(const TupleNode* 

[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467850167



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.h
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ETHOSN_ETHOSN_API_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+namespace sl = ::ethosn::support_library;
+
+struct ConcatenateParams {
+  sl::QuantizationInfo qInfo;
+  sl::ConcatenationInfo concat_info = sl::ConcatenationInfo(1, qInfo);
+  std::vector<sl::TensorInfo> input_infos;
+};
+
+struct SplitParams {
+  sl::SplitInfo split_info = sl::SplitInfo(0, {});
+  sl::TensorInfo input_info;
+};
+
+class ErrStrm {
+ public:
+  template <typename T>
+  ErrStrm& operator<<(const T& val) {  // NOLINT(*)
+stream_ << val;
+return *this;
+  }
+
+ private:
+  std::stringstream stream_;
+  friend class EthosnError;
+};
+
+class EthosnError {
+ public:
+  EthosnError() {}
+  explicit EthosnError(const Array<String>& msgs) : msgs(msgs) {}
+  explicit EthosnError(const String& msg) { msgs.push_back(msg); }
+  explicit EthosnError(const ErrStrm& err) : EthosnError(err.stream_.str()) {}
+
+  explicit operator bool() const { return !msgs.empty(); }
+
+  EthosnError& operator+=(const EthosnError& other) {
+msgs.insert(msgs.end(), other.msgs.begin(), other.msgs.end());
+return *this;
+  }
+
+  Array<String> msgs;
+};
+
+class EthosnAPI {
+ public:
+  static std::unique_ptr<sl::CompiledNetwork> Compile(std::shared_ptr<sl::Network> network,
+                                                      const sl::CompilationOptions& options);
+
+  static sl::CompilationOptions CreateOptions();
+
+  static bool IsEthosFunc(const Call& call, const std::string& op_name);
+  static bool IsEthosOp(const Call& call, const std::string& op_name);
+
+  static EthosnError Concatenate(const Expr& expr, ConcatenateParams* params);
+  static EthosnError Split(const Expr& expr, SplitParams* params);
+
+ private:
+  static EthosnError Tvm2Npu(const Array<IndexExpr>& shape, sl::TensorShape* npu_shape);
+  static EthosnError Tvm2Npu(const tvm::DataType& dtype, sl::DataType* data_type);

Review comment:
   The naming here generally refers to conversions from TVM data structures 
to Support Library data structures. Would Tvm2SL be clearer?









[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467850461



##
File path: src/relay/backend/contrib/ethosn/ethosn_api.cc
##
@@ -0,0 +1,268 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include "ethosn_api.h"
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "capabilities.h"
+#include "ethosn_support_library/Support.hpp"
+#include "ethosn_support_library/SupportQueries.hpp"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace ethosn {
+
+std::unique_ptr<sl::CompiledNetwork> EthosnAPI::Compile(std::shared_ptr<sl::Network> network,
+                                                        const sl::CompilationOptions& options) {
+  std::vector<std::unique_ptr<sl::CompiledNetwork>> compiled_network =
+      sl::Compile(*network, options);
+  CHECK_GE(compiled_network.size(), 1) << "Ethos-N compiler failed to compile network";
+
+  return std::move(compiled_network[0]);
+}
+
+struct EthosnCompilerConfigNode : public tvm::AttrsNode<EthosnCompilerConfigNode> {
+  int variant;
+  bool strategy0;
+  bool strategy1;
+  bool strategy3;
+  bool strategy4;
+  bool strategy6;
+  bool strategy7;
+  bool dump_ram;
+  bool initial_sram_dump;
+  bool block_config_16x16;
+  bool block_config_32x8;
+  bool block_config_8x32;
+  bool block_config_8x8;
+  bool enable_intermediate_compression;
+  bool disable_winograd;
+  bool dump_debug_files;
+  String debug_dir;
+  bool enable_cascading;
+
+  TVM_DECLARE_ATTRS(EthosnCompilerConfigNode, "ext.attrs.EthosnCompilerConfigNode") {
+    TVM_ATTR_FIELD(variant)
+        .describe("0 for Ethos-N77, 1 for Ethos-N57, 2 for Ethos-N37. See Ethos-N documentation.")
+        .set_default(0);
+    TVM_ATTR_FIELD(strategy0).set_default(true);
+    TVM_ATTR_FIELD(strategy1).set_default(true);
+    TVM_ATTR_FIELD(strategy3).set_default(true);
+    TVM_ATTR_FIELD(strategy4).set_default(true);
+    TVM_ATTR_FIELD(strategy6).set_default(true);
+    TVM_ATTR_FIELD(strategy7).set_default(true);
+    TVM_ATTR_FIELD(dump_ram).set_default(false);
+    TVM_ATTR_FIELD(initial_sram_dump).set_default(false);
+    TVM_ATTR_FIELD(block_config_16x16).set_default(true);
+    TVM_ATTR_FIELD(block_config_32x8).set_default(true);
+    TVM_ATTR_FIELD(block_config_8x32).set_default(true);
+    TVM_ATTR_FIELD(block_config_8x8).set_default(true);
+    TVM_ATTR_FIELD(enable_intermediate_compression).set_default(true);
+    TVM_ATTR_FIELD(disable_winograd).set_default(false);
+    TVM_ATTR_FIELD(dump_debug_files).set_default(false);
+    TVM_ATTR_FIELD(debug_dir).set_default(".");
+    TVM_ATTR_FIELD(enable_cascading).set_default(false);
+  }
+};
+
+class EthosnCompilerConfig : public Attrs {
+ public:
+  TVM_DEFINE_NOTNULLABLE_OBJECT_REF_METHODS(EthosnCompilerConfig, Attrs, 
EthosnCompilerConfigNode);
+};
+
+TVM_REGISTER_NODE_TYPE(EthosnCompilerConfigNode);
+TVM_REGISTER_PASS_CONFIG_OPTION("relay.ext.ethos-n.options", 
EthosnCompilerConfig);
+
+sl::CompilationOptions EthosnAPI::CreateOptions() {
+  auto ctx = transform::PassContext::Current();
+  auto cfg = ctx->GetConfig<EthosnCompilerConfig>("relay.ext.ethos-n.options");
+  if (!cfg.defined()) {
+    cfg = AttrsWithDefaultValues<EthosnCompilerConfig>();
+  }
+
+  sl::CompilationOptions options(variants[cfg.value()->variant]);
+  options.m_Strategy0 = cfg.value()->strategy0;
+  options.m_Strategy1 = cfg.value()->strategy1;
+  options.m_Strategy3 = cfg.value()->strategy3;
+  options.m_Strategy4 = cfg.value()->strategy4;
+  options.m_Strategy6 = cfg.value()->strategy6;
+  options.m_Strategy7 = cfg.value()->strategy7;
+  options.m_DebugInfo.m_DumpRam = cfg.value()->dump_ram;
+  options.m_DebugInfo.m_InitialSramDump = cfg.value()->initial_sram_dump;
+  options.m_BlockConfig16x16 = cfg.value()->block_config_16x16;
+  options.m_BlockConfig32x8 = cfg.value()->block_config_32x8;
+  options.m_BlockConfig8x32 = cfg.value()->block_config_8x32;
+  options.m_BlockConfig8x8 = cfg.value()->block_config_8x8;
+  options.m_EnableIntermediateCompression = cfg.value()->enable_intermediate_compression;
+  options.m_DisableWinograd = cfg.value()->disable_winograd;
+  options.m_DebugInfo.m_DumpDebugFiles = cfg.value()->dump_debug_files;
+  options.m_DebugInfo.
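The control flow in `CreateOptions` above is a read-config-or-fall-back-to-defaults pattern: fetch the registered pass-context option, substitute the attrs defaults when none was supplied, then copy field by field. A plain-Python, dict-based stand-in (not the TVM attrs API; the default values shown are the ones registered in the diff):

```python
# Defaults mirroring a few of the fields registered via TVM_ATTR_FIELD above.
ETHOSN_DEFAULTS = {
    "variant": 0,           # 0 = Ethos-N77, 1 = Ethos-N57, 2 = Ethos-N37
    "strategy0": True,
    "block_config_16x16": True,
    "disable_winograd": False,
    "debug_dir": ".",
}

def create_options(pass_context_config=None):
    """Merge user-supplied options over the registered defaults,
    sketching the GetConfig / AttrsWithDefaultValues fallback."""
    options = dict(ETHOSN_DEFAULTS)
    if pass_context_config:
        options.update(pass_context_config)
    return options
```

With no config supplied, every field keeps its registered default; a partial config such as `{"variant": 2}` overrides only that field.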

[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6242: [relay][ir] add string type to relay ir

2020-08-10 Thread GitBox


cloud-mxd commented on pull request #6242:
URL: https://github.com/apache/incubator-tvm/pull/6242#issuecomment-671304980


   At the same time, thanks to @MarisaKirisame for the help and support.







[GitHub] [incubator-tvm] leandron edited a comment on pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


leandron edited a comment on pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#issuecomment-671295731


   @comaniac here is a new version of this patch with only the minimal code to enable the command line. Follow-up patches, which I will send after this, will cover the `compile`, `run`, and `tune` functionality.
   
   So, here is a checklist:
   - [x]  tvmc enablement (this)
   - [ ] `compile`
   - [ ] `run`
   - [ ]  `tune`
   - [ ] Tutorial







[GitHub] [incubator-tvm] leandron commented on pull request #6112: TVMC - a command line driver for TVM (Part 1)

2020-08-10 Thread GitBox


leandron commented on pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#issuecomment-671295731


   @comaniac here is a new version of this patch with only the minimal code to enable the command line. Follow-up patches, which I will send after this, will cover the `compile`, `run`, and `tune` functionality.







[GitHub] [incubator-tvm] leandron commented on pull request #6112: TVMC - a command line driver for TVM

2020-08-10 Thread GitBox


leandron commented on pull request #6112:
URL: https://github.com/apache/incubator-tvm/pull/6112#issuecomment-671292078


   > The command name`tvmc` sounds strange, it is not immediately obvious what 
the c stands for. Can the shell command be called just `tvm compile` `tvm tune` 
etc. same as [aws cli](https://aws.amazon.com/cli/) or [github 
cli](https://github.com/cli/cli).
   
   This is open for discussion; for the moment I'm keeping the work's original name, `tvmc`, but I'm happy to consider other options (including `tvm`).







[GitHub] [incubator-tvm] cloud-mxd commented on pull request #6242: [relay][ir] add string type to relay ir

2020-08-10 Thread GitBox


cloud-mxd commented on pull request #6242:
URL: https://github.com/apache/incubator-tvm/pull/6242#issuecomment-671282626


   Thanks for the reply and support; the RFC link is below:
   
   https://discuss.tvm.ai/t/rfc-relay-containers-array-map-string/7560
   
   @tqchen 







[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6222: [BYOC][ETHOSN] Introduce the Ethos-N BYOC integration

2020-08-10 Thread GitBox


mbaret commented on a change in pull request #6222:
URL: https://github.com/apache/incubator-tvm/pull/6222#discussion_r467806804



##
File path: tests/python/contrib/test_ethosn/infrastructure.py
##
@@ -0,0 +1,225 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Expose Ethos test functions to the Python front end"""
+
+from __future__ import absolute_import, print_function
+import tvm
+from tvm import relay
+from tvm.contrib import util, graph_runtime, download
+from tvm.relay.testing import run_opt_pass
+from enum import Enum
+from hashlib import md5
+from itertools import zip_longest, combinations
+import numpy as np
+from PIL import Image
+import os
+
+from . import _infrastructure
+from tvm.relay.op.contrib import get_pattern_table
+
+
+class Available(Enum):
+    UNAVAILABLE = 0
+    SW_ONLY = 1
+    SW_AND_HW = 2
+
+
+def ethosn_available():
+    """Return whether Ethos-N software and hardware support is available"""
+    if not tvm.get_global_func("relay.ethos-n.query", True):
+        print("skip because Ethos-N module is not available")
+        return Available.UNAVAILABLE
+    else:
+        hw = tvm.get_global_func("relay.ethos-n.query")()
+        return Available.SW_AND_HW if hw else Available.SW_ONLY
+
+
+def get_real_image(im_height, im_width):
+    repo_base = 'https://github.com/dmlc/web-data/raw/master/tensorflow/models/InceptionV1/'
+    img_name = 'elephant-299.jpg'
+    image_url = os.path.join(repo_base, img_name)
+    img_path = download.download_testdata(image_url, img_name, module='data')
+    image = Image.open(img_path).resize((im_height, im_width))
+    x = np.array(image).astype('uint8')
+    data = np.reshape(x, (1, im_height, im_width, 3))
+    return data
+
+
+def assert_lib_hash(lib, golden):
+    temp = util.tempdir()
+    path = temp.relpath("lib.cmm")
+    lib.imported_modules[1].save(path)
+    lib_hash = md5(open(path, 'rb').read()).hexdigest()
+    assert lib_hash == golden, "Expected hash: {} Got hash: {}".format(golden, lib_hash)
+
+
+def make_module(func, params):
+    func = relay.Function(relay.analysis.free_vars(func), func)
+    if len(params):
+        relay.build_module.bind_params_by_name(func, params)
+    return tvm.IRModule.from_expr(func)
+
+
+def make_ethosn_composite(ethosn_expr, name):
+    vars = relay.analysis.free_vars(ethosn_expr)
+    func = relay.Function([relay.Var("a")], ethosn_expr)
+    func = func.with_attr("Composite", name)
+    call = relay.Call(func, vars)
+    return call
+
+
+def make_ethosn_partition(ethosn_expr):
+    # Create an Ethos-N global function
+    mod = tvm.IRModule({})
+    vars = relay.analysis.free_vars(ethosn_expr)
+    func = relay.Function(vars, ethosn_expr)
+    func = func.with_attr("Primitive", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Inline", tvm.tir.IntImm("int32", 1))
+    func = func.with_attr("Compiler", "ethos-n")
+    func = func.with_attr("global_symbol", "ethos-n_0")
+    g1 = relay.GlobalVar("ethos-n_0")
+    mod[g1] = func
+
+    # These are the vars to call the Ethos-N partition with
+    more_vars = relay.analysis.free_vars(ethosn_expr)
+    # Call the Ethos-N partition in main
+    call_fn1 = g1(*more_vars)
+    mod["main"] = relay.Function(more_vars, call_fn1)
+    return mod
+
+
+def get_cpu_op_count(mod):
+    class Counter(tvm.relay.ExprVisitor):
+        def __init__(self):
+            super().__init__()
+            self.count = 0
+
+        def visit_call(self, call):
+            if isinstance(call.op, tvm.ir.Op):
+                self.count += 1
+
+            super().visit_call(call)
+
+    c = Counter()
+    c.visit(mod["main"])
+    return c.count
+
+
+def build(mod, params, npu=True, cpu_ops=0, npu_partitions=1):
+    relay.backend.compile_engine.get().clear()
+    with tvm.transform.PassContext(opt_level=3, config={
+        "relay.ext.ethos-n.options": {"variant": 0}
+    }):
+        with tvm.target.create("llvm -mcpu=core-avx2"):

Review comment:
   It's in there as a workaround for 
https://discuss.tvm.ai/t/segfault-in-llvm/3567






[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6184: [Ansor][AutoTVM v2.0] Phase 2: Basic CPU Sketch Search Policy

2020-08-10 Thread GitBox


jcf94 commented on a change in pull request #6184:
URL: https://github.com/apache/incubator-tvm/pull/6184#discussion_r467778942



##
File path: src/auto_scheduler/compute_dag.cc
##
@@ -342,11 +343,16 @@ AccessAnalyzer::AccessAnalyzer(const Array& tensors) {
         has_expensive_op |= HasExpensiveOp(expr);
       }
       if (has_expensive_op || has_branch[op]) {
-        is_strict_inlineable = false;
+        is_strictly_inlineable = false;
+      }
+
+      // constant tensor is strict-inlineable
+      if (node->read_from[op].empty()) {
+        is_strictly_inlineable = true;
       }

Review comment:
   @merrymercy The transform matrices A and B of the winograd conv2d are constant tensors that contain branches, so the order of the `has_branch[op]` and `read_from[op]` checks produces different values of `is_strictly_inlineable`.
   Is this intended to be strictly inlined?
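   The ordering question can be made concrete with a small Python sketch (the flag names mirror the diff; the standalone function itself is only illustrative, not the AccessAnalyzer code):

```python
def strictly_inlineable(has_expensive_op, has_branch, reads_nothing):
    """Evaluate the checks in the order the diff uses: the
    expensive-op/branch check first, then the constant-tensor
    (empty read_from) check, which can override it."""
    flag = True
    if has_expensive_op or has_branch:
        flag = False
    if reads_nothing:  # constant tensor: forced back to inlineable
        flag = True
    return flag
```

   With the winograd transform matrices (constant tensors that contain branches, i.e. `has_branch=True`, `reads_nothing=True`) this order yields `True`; swapping the two checks would yield `False`, which is exactly the behavioural question being raised.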









[GitHub] [incubator-tvm] FrozenGene commented on pull request #6229: [RPC] Update build support for cross compiling apps/cpp_rpc with OpenCL

2020-08-10 Thread GitBox


FrozenGene commented on pull request #6229:
URL: https://github.com/apache/incubator-tvm/pull/6229#issuecomment-671245122


   ping @csullivan 







[incubator-tvm] branch master updated (7926a5d -> 5ed7d31)

2020-08-10 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 7926a5d  [Relay][Op] Add unbiased variance op and corresponding 
support in pytorch frontend (#6232)
 add 5ed7d31  [COMMUNITY] jcf94 -> Reviewer (#6241)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-tvm] FrozenGene merged pull request #6241: [COMMUNITY] jcf94 -> Reviewer

2020-08-10 Thread GitBox


FrozenGene merged pull request #6241:
URL: https://github.com/apache/incubator-tvm/pull/6241


   


