[GitHub] [tvm] VertexC edited a comment on issue #7399: TVM compile pytorch model multiple times seems to have memory leak

2021-02-19 Thread GitBox


VertexC edited a comment on issue #7399:
URL: https://github.com/apache/tvm/issues/7399#issuecomment-782576070


   Hi @masahi 
   
   Thanks for the reply.
   I tried with the following modified script and memory still increases.
   ```python3
   import os

   import psutil
   import torch
   import torchvision

   import tvm
   from tvm import relay
   from tvm.contrib import graph_runtime

   def main():
       process = psutil.Process(os.getpid())
       print('Used Memory before building:', process.memory_info().rss / 1024 / 1024, 'MB')

       total = 10
       input_name = "input0"
       shape = [1, 3, 224, 224]
       data = torch.randn(shape, dtype=torch.float32)
       model = torchvision.models.resnet50(pretrained=False, progress=True)

       shape_list = [(input_name, data.shape)]
       scripted_model = torch.jit.trace(model, data).eval()
       mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

       for i in range(0, total + 1):
           opt_level = 3
           with tvm.transform.PassContext(opt_level=opt_level):
               lib = relay.build(mod, target='llvm', target_host='llvm', params=params)

           ctx = tvm.cpu()
           module = graph_runtime.GraphModule(lib["default"](ctx))
           module.set_input(input_name, data)

           print('Used Memory after building {} times:'.format(i + 1),
                 process.memory_info().rss / 1024 / 1024, 'MB')
   ```
   
   ```bash
   Used Memory before building: 235.1640625 MB
   ...
   Used Memory after building 1 times: 907.3828125 MB
   Used Memory after building 2 times: 1153.3046875 MB
   Used Memory after building 3 times: 1273.3515625 MB
   Used Memory after building 4 times: 1386.25390625 MB
   Used Memory after building 5 times: 1489.9296875 MB
   ```
   
   By the way, if I comment out
   ```python3
   ctx = tvm.cpu()
   module = graph_runtime.GraphModule(lib["default"](ctx))
   module.set_input(input_name, data)
   ```
   the memory growth seems to ease:
   ```bash
   Used Memory before building: 236.8046875 MB
   ...
   Used Memory after building 1 times: 816.05859375 MB
   Used Memory after building 2 times: 1026.9375 MB
   Used Memory after building 3 times: 1054.21875 MB
   Used Memory after building 4 times: 1062.2734375 MB
   Used Memory after building 5 times: 1079.9296875 MB
   Used Memory after building 6 times: 1061.29296875 MB
   Used Memory after building 7 times: 1079.0703125 MB
   Used Memory after building 8 times: 1096.953125 MB
   Used Memory after building 9 times: 1078.56640625 MB
   Used Memory after building 10 times: 1105.34375 MB
   Used Memory after building 11 times: 1087.70703125 MB
   ```
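
   A minimal workaround sketch (assuming Linux's fork start method so `mod`/`params` are inherited, and that exporting the built module to a shared library is acceptable): run each `relay.build` in a subprocess so whatever memory it retains is reclaimed when the worker exits.
   ```python3
   import multiprocessing

   def build_once(path):
       # Build inside a child process; its memory is freed when it exits.
       with tvm.transform.PassContext(opt_level=3):
           lib = relay.build(mod, target='llvm', params=params)
       lib.export_library(path)  # hand the artifact back via the filesystem

   for i in range(total):
       p = multiprocessing.Process(target=build_once, args=('deploy.so',))
       p.start()
       p.join()
   ```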
   








[GitHub] [tvm] ANSHUMAN87 opened a new pull request #7484: [Frontend] Sparse reorder support

2021-02-19 Thread GitBox


ANSHUMAN87 opened a new pull request #7484:
URL: https://github.com/apache/tvm/pull/7484


   Sparse reorder support added for the TensorFlow frontend.







[GitHub] [tvm] tmoreau89 commented on pull request #6126: [VTA][OpenCL] intelfocl

2021-02-19 Thread GitBox


tmoreau89 commented on pull request #6126:
URL: https://github.com/apache/tvm/pull/6126#issuecomment-782559505


   You are indeed correct, @liangfu: the Chisel design doesn't derive parameters from the hardware_params.h file, so we'll have to reflect the parameterization in the Chisel design. CC-ing @vegaluisjose 







[GitHub] [tvm] FrozenGene commented on pull request #7445: [Frontend][Tensorflow] Support explicit_paddings for TF 2.x

2021-02-19 Thread GitBox


FrozenGene commented on pull request #7445:
URL: https://github.com/apache/tvm/pull/7445#issuecomment-782557714


   Thanks @trevor-m. Merged.







[tvm] branch main updated: [Frontend][Tensorflow] Support explicit_paddings for TF 2.x (#7445)

2021-02-19 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 5688068  [Frontend][Tensorflow] Support explicit_paddings for TF 2.x (#7445)
5688068 is described below

commit 5688068eb02912a4ec926a88f5cad3f0f370454e
Author: Trevor Morris 
AuthorDate: Fri Feb 19 20:26:55 2021 -0800

    [Frontend][Tensorflow] Support explicit_paddings for TF 2.x (#7445)

    * Ignore some TF2.0 attributes

    * Support explicit padding for conv2d, max_pool, conv3d

    * Remove conv3d explicit padding test since TF API doesn't allow it
---
 python/tvm/relay/frontend/tensorflow.py          | 44 +---
 tests/python/frontend/tensorflow/test_forward.py | 40 +
 2 files changed, 79 insertions(+), 5 deletions(-)

diff --git a/python/tvm/relay/frontend/tensorflow.py b/python/tvm/relay/frontend/tensorflow.py
index 6a29ce2..ac52ab7 100644
--- a/python/tvm/relay/frontend/tensorflow.py
+++ b/python/tvm/relay/frontend/tensorflow.py
@@ -268,6 +268,13 @@ def _pooling(name):
             pad_h = _get_pad_pair(in_w, kernel_w, stride_w)
 
             attr["padding"] = [pad_v[0], pad_h[0], pad_v[1], pad_h[1]]
+        elif attr["padding"] == "EXPLICIT":
+            paddings = attr["explicit_paddings"]
+            assert len(paddings) == 8
+            if flip_layout or attr["data_format"] == "NHWC":
+                attr["padding"] = [paddings[2], paddings[4], paddings[3], paddings[5]]
+            else:
+                attr["padding"] = [paddings[4], paddings[6], paddings[5], paddings[7]]
         else:
             msg = 'Value {} in attribute "padding" of operator Pooling is ' "not valid."
             raise tvm.error.OpAttributeInvalid(msg.format(attr["padding"]))
@@ -278,7 +285,7 @@ def _pooling(name):
     out = AttrCvt(
         op_name=_dimension_picker(name),
         transforms={"kernel_shape": "pool_size", "data_format": "layout"},
-        ignores=["ksize"],
+        ignores=["ksize", "explicit_paddings"],
         extras={"ceil_mode": False},
         custom_check=_dimension_constraint(),
     )(inputs, attr)
@@ -418,6 +425,13 @@ def _conv(opname):
             pad_h = _get_pad_pair(in_w, dilated_kernel_w, stride_w)
 
             attr["padding"] = [pad_v[0], pad_h[0], pad_v[1], pad_h[1]]
+        elif attr["padding"] == "EXPLICIT":
+            paddings = attr["explicit_paddings"]
+            assert len(paddings) == 8
+            if flip_layout or attr["data_format"] == "NHWC":
+                attr["padding"] = [paddings[2], paddings[4], paddings[3], paddings[5]]
+            else:
+                attr["padding"] = [paddings[4], paddings[6], paddings[5], paddings[7]]
         else:
             msg = 'Value {} in attribute "padding" of operator Conv is not ' "valid."
             raise tvm.error.OpAttributeInvalid(msg.format(attr["padding"]))
@@ -626,7 +640,27 @@ def _conv3d(opname):
             pad_h = _get_pad_pair(in_w, dilated_kernel_w, stride_w)
 
             attr["padding"] = [pad_d[0], pad_v[0], pad_h[0], pad_d[1], pad_v[1], pad_h[1]]
-
+        elif attr["padding"] == "EXPLICIT":
+            paddings = attr["explicit_paddings"]
+            assert len(paddings) == 10
+            if flip_layout or attr["data_format"] == "NDHWC":
+                attr["padding"] = [
+                    paddings[2],
+                    paddings[4],
+                    paddings[6],
+                    paddings[3],
+                    paddings[5],
+                    paddings[7],
+                ]
+            else:
+                attr["padding"] = [
+                    paddings[4],
+                    paddings[6],
+                    paddings[8],
+                    paddings[5],
+                    paddings[7],
+                    paddings[9],
+                ]
         else:
             msg = 'Value {} in attribute "padding" of operator Conv is not ' "valid."
             raise tvm.error.OpAttributeInvalid(msg.format(attr["padding"]))
@@ -1445,9 +1479,9 @@ def _squeeze():
     def _impl(inputs, attr, params, mod):
         if len(attr["squeeze_dims"]) == 0:
             attr["squeeze_dims"] = None
-        return AttrCvt(op_name="squeeze", transforms={"squeeze_dims": "axis"}, ignores=["T"])(
-            inputs, attr
-        )
+        return AttrCvt(
+            op_name="squeeze", transforms={"squeeze_dims": "axis"}, ignores=["T", "_cloned"]
+        )(inputs, attr)
 
     return _impl
 
diff --git a/tests/python/frontend/tensorflow/test_forward.py b/tests/python/frontend/tensorflow/test_forward.py
index f956ea0..ecf6441 100644
--- a/tests/python/frontend/tensorflow/test_forward.py
+++ b/tests/python/frontend/tensorflow/test_forward.py
@@ -414,6 +414,16 @@ def test_forward_pooling():
             pooling_type=pool_type,
             dil
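
For readers following the hunks above, here is a standalone sketch of the index mapping they implement for the 2-D case (the helper name is hypothetical, added for illustration):

```python
def explicit_to_tvm_padding(paddings, data_format="NHWC"):
    # TF flattens explicit_paddings as [before_0, after_0, before_1, after_1, ...]
    # over the four input dimensions.
    assert len(paddings) == 8
    if data_format == "NHWC":  # dims N, H, W, C -> H at 2/3, W at 4/5
        top, bottom, left, right = paddings[2], paddings[3], paddings[4], paddings[5]
    else:  # NCHW -> H at 4/5, W at 6/7
        top, bottom, left, right = paddings[4], paddings[5], paddings[6], paddings[7]
    # TVM's 2-D padding attribute is ordered [top, left, bottom, right].
    return [top, left, bottom, right]

print(explicit_to_tvm_padding([0, 0, 1, 2, 3, 4, 0, 0]))  # -> [1, 3, 2, 4]
```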

[GitHub] [tvm] FrozenGene merged pull request #7445: [Frontend][Tensorflow] Support explicit_paddings for TF 2.x

2021-02-19 Thread GitBox


FrozenGene merged pull request #7445:
URL: https://github.com/apache/tvm/pull/7445


   







[GitHub] [tvm] liangfu commented on pull request #6126: [VTA][OpenCL] intelfocl

2021-02-19 Thread GitBox


liangfu commented on pull request #6126:
URL: https://github.com/apache/tvm/pull/6126#issuecomment-782551208


   I think the root cause is that the Chisel VTA isn't updated with recent changes in the ISA.







[GitHub] [tvm] masahi commented on pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#issuecomment-782546478


   I can do the GPU version. It will likely require the IR builder. But let me know if you *want* to do the GPU version as well; you can certainly do it. The idea is identical to the CPU version, just with different parallelization.
   
   If `unique_with_counts` can be supported by adding another option to `unique`, that sounds good. We shouldn't add `relay.unique_with_counts` or `topi.unique_with_counts`.
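
   A sketch of that suggested API shape (hedged; the `return_counts` flag name is an assumption for illustration): one `relay.unique` whose tuple of outputs grows with an option, instead of a separate `unique_with_counts` operator.
   ```python
   from tvm import relay

   x = relay.var("x", shape=(100,), dtype="int32")
   outs = relay.unique(x, is_sorted=True)                              # unique, indices, num_unique
   outs_counts = relay.unique(x, is_sorted=True, return_counts=True)   # ... plus counts
   ```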







[GitHub] [tvm] ymwangg edited a comment on pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


ymwangg edited a comment on pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#issuecomment-782545404


   @masahi Yeah, I only added the CPU version in this PR. I'm not very familiar with the GPU IR right now, but I can do it later. If the overall structure looks good, I can add `unique_with_counts`, since their implementations are very similar.
   
   I'll add the PyTorch frontend in this PR.









[GitHub] [tvm] masahi commented on pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#issuecomment-782545316


   Can you also add the PyTorch frontend? Not all options need to be supported. It's likely the same as the TF conversion.







[GitHub] [tvm] ymwangg commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


ymwangg commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579581782



##
File path: python/tvm/topi/unique.py
##
@@ -0,0 +1,118 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name
+"""Unique operator"""
+from ..te import hybrid
+from .cumsum import cumsum
+from .sort import sort, argsort
+
+
+@hybrid.script
+def _calc_adjacent_diff(data):
+    output = output_tensor(data.shape, "int32")
+    output[0] = int32(0)
+    for i in range(1, data.shape[0]):
+        output[i] = int32(1) if data[i] != data[i - 1] else int32(0)
+    return output
+
+
+@hybrid.script
+def _calc_num_unique(data):
+    output = output_tensor((1,), "int32")
+    output[0] = data[data.shape[0] - 1] + 1
+    return output
+
+
+@hybrid.script
+def _calc_unique_sorted(data, argsorted_indices, inc_scan):
+    unique_elements = output_tensor(data.shape, data.dtype)
+    indices = output_tensor(data.shape, "int32")
+    for i in range(data.shape[0]):
+        indices[argsorted_indices[i]] = inc_scan[i]

Review comment:
   Actually, all loops can be done in parallel; I changed all of them to parallel.
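
   For reference, a minimal sketch of that change in TVM's hybrid script, where `parallel` marks a loop axis as parallelizable (based on the `_calc_unique_sorted` excerpt above; the second statement in the loop body is assumed from context):
   ```python
   @hybrid.script
   def _calc_unique_sorted(data, argsorted_indices, inc_scan):
       unique_elements = output_tensor(data.shape, data.dtype)
       indices = output_tensor(data.shape, "int32")
       # `parallel` replaces `range`, marking the scatter loop as parallel.
       for i in parallel(data.shape[0]):
           indices[argsorted_indices[i]] = inc_scan[i]
           unique_elements[inc_scan[i]] = data[argsorted_indices[i]]
       return unique_elements, indices
   ```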









[GitHub] [tvm] mbrookhart commented on pull request #7468: [CUDA][THRUST] Enforce -libs=thrust to allow thrust offload

2021-02-19 Thread GitBox


mbrookhart commented on pull request #7468:
URL: https://github.com/apache/tvm/pull/7468#issuecomment-782544265


   I have no complaints with this but also very little experience with this 
part of the code, so I'm happy with it if no one objects.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] ymwangg commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


ymwangg commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579581566



##
File path: python/tvm/topi/unique.py
##
@@ -0,0 +1,118 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name
+"""Unique operator"""
+from ..te import hybrid
+from .cumsum import cumsum
+from .sort import sort, argsort
+
+
+@hybrid.script
+def _calc_adjacent_diff(data):
+    output = output_tensor(data.shape, "int32")
+    output[0] = int32(0)
+    for i in range(1, data.shape[0]):

Review comment:
   changed









[GitHub] [tvm] ymwangg commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


ymwangg commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579580281



##
File path: tests/python/relay/test_op_level3.py
##
@@ -1453,5 +1453,53 @@ def verify_scatter_nd_with_stack(data_np, indices_np, shape, ref_res, rtol=1e-5,
     verify_scatter_nd_with_stack(data, indices, shape, out)
 
 
+@tvm.testing.uses_gpu
+def test_unique():
+    def calc_numpy_unique(data, is_sorted=False):
+        uniq, index, inverse, counts = np.unique(
+            data, return_index=True, return_inverse=True, return_counts=True
+        )
+        num_uniq = np.array([len(uniq)]).astype("int32")
+        if not is_sorted:
+            order = np.argsort(index)
+            reverse_order = np.argsort(order)
+            uniq = uniq[order].astype(data.dtype)
+            inverse = np.array([reverse_order[i] for i in inverse]).astype("int32")
+            counts = counts[order].astype("int32")
+        return [uniq.astype(data.dtype), inverse.astype("int32"), counts, num_uniq]
+
+    def verify_unique(n, dtype, is_dyn=False, is_sorted=False):
+        if is_dyn:
+            x = relay.var("x", relay.TensorType([relay.Any()], dtype))
+        else:
+            x = relay.var("x", relay.TensorType([n], dtype))
+        outs = relay.unique(x, is_sorted)
+        outs = outs.astuple()
+        func = relay.Function([x], outs)
+        x_data = np.random.randint(50, size=n).astype(dtype)
+
+        if is_dyn:
+            backends = ["vm", "debug"]
+        else:
+            backends = ["graph", "debug"]
+        for target, ctx in tvm.testing.enabled_targets():

Review comment:
   Thanks, will fix it.









[GitHub] [tvm] masahi commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579575955



##
File path: python/tvm/topi/unique.py
##
@@ -0,0 +1,118 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name
+"""Unique operator"""
+from ..te import hybrid
+from .cumsum import cumsum
+from .sort import sort, argsort
+
+
+@hybrid.script
+def _calc_adjacent_diff(data):
+    output = output_tensor(data.shape, "int32")
+    output[0] = int32(0)
+    for i in range(1, data.shape[0]):

Review comment:
   Parallel









[GitHub] [tvm] masahi commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579575906



##
File path: python/tvm/topi/unique.py
##
@@ -0,0 +1,118 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name
+"""Unique operator"""
+from ..te import hybrid
+from .cumsum import cumsum
+from .sort import sort, argsort
+
+
+@hybrid.script
+def _calc_adjacent_diff(data):
+    output = output_tensor(data.shape, "int32")
+    output[0] = int32(0)
+    for i in range(1, data.shape[0]):
+        output[i] = int32(1) if data[i] != data[i - 1] else int32(0)
+    return output
+
+
+@hybrid.script
+def _calc_num_unique(data):
+    output = output_tensor((1,), "int32")
+    output[0] = data[data.shape[0] - 1] + 1
+    return output
+
+
+@hybrid.script
+def _calc_unique_sorted(data, argsorted_indices, inc_scan):
+    unique_elements = output_tensor(data.shape, data.dtype)
+    indices = output_tensor(data.shape, "int32")
+    for i in range(data.shape[0]):
+        indices[argsorted_indices[i]] = inc_scan[i]

Review comment:
   We can do this loop in parallel









[GitHub] [tvm] domin1985 commented on pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-02-19 Thread GitBox


domin1985 commented on pull request #7347:
URL: https://github.com/apache/tvm/pull/7347#issuecomment-782522148


   Please help review @masahi @jroesch.







[GitHub] [tvm] domin1985 edited a comment on pull request #7347: [RELAY][Parser] Optimize relay parser to restore calls attrs

2021-02-19 Thread GitBox


domin1985 edited a comment on pull request #7347:
URL: https://github.com/apache/tvm/pull/7347#issuecomment-771305246


   Rebased on master.











[GitHub] [tvm] Laurawly edited a comment on pull request #7468: [CUDA][THRUST] Enforce -libs=thrust to allow thrust offload

2021-02-19 Thread GitBox


Laurawly edited a comment on pull request #7468:
URL: https://github.com/apache/tvm/pull/7468#issuecomment-782517490


   LGTM, just a small comment: could you also add a note in the [deploy ssd tutorial](https://tvm.apache.org/docs/tutorials/frontend/deploy_ssd_gluoncv.html) to let people know about this change if they want to use cuda + thrust?
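
   For context, a minimal sketch of what requesting thrust offload looks like after this change (assuming TVM was built with `USE_THRUST=ON`):
   ```python
   import tvm

   # Thrust kernels must now be requested explicitly via -libs:
   target = tvm.target.Target("cuda -libs=thrust")
   ```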









[GitHub] [tvm] masahi commented on a change in pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#discussion_r579566303



##
File path: tests/python/relay/test_op_level3.py
##
@@ -1453,5 +1453,53 @@ def verify_scatter_nd_with_stack(data_np, indices_np, shape, ref_res, rtol=1e-5,
     verify_scatter_nd_with_stack(data, indices, shape, out)
 
 
+@tvm.testing.uses_gpu
+def test_unique():
+    def calc_numpy_unique(data, is_sorted=False):
+        uniq, index, inverse, counts = np.unique(
+            data, return_index=True, return_inverse=True, return_counts=True
+        )
+        num_uniq = np.array([len(uniq)]).astype("int32")
+        if not is_sorted:
+            order = np.argsort(index)
+            reverse_order = np.argsort(order)
+            uniq = uniq[order].astype(data.dtype)
+            inverse = np.array([reverse_order[i] for i in inverse]).astype("int32")
+            counts = counts[order].astype("int32")
+        return [uniq.astype(data.dtype), inverse.astype("int32"), counts, num_uniq]
+
+    def verify_unique(n, dtype, is_dyn=False, is_sorted=False):
+        if is_dyn:
+            x = relay.var("x", relay.TensorType([relay.Any()], dtype))
+        else:
+            x = relay.var("x", relay.TensorType([n], dtype))
+        outs = relay.unique(x, is_sorted)
+        outs = outs.astuple()
+        func = relay.Function([x], outs)
+        x_data = np.random.randint(50, size=n).astype(dtype)
+
+        if is_dyn:
+            backends = ["vm", "debug"]
+        else:
+            backends = ["graph", "debug"]
+        for target, ctx in tvm.testing.enabled_targets():

Review comment:
   This will probably try to run on GPU
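
   One hedged way to keep a CPU-only op from running on GPU targets inside such a loop (`Target.keys` lists a target's device keys; this guard is an illustration, not the fix ultimately used):
   ```python
   for target, ctx in tvm.testing.enabled_targets():
       # Skip device targets that advertise the "gpu" key (e.g. cuda, vulkan).
       if "gpu" in tvm.target.Target(target).keys:
           continue
       # ... run the CPU-only checks here ...
   ```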









[GitHub] [tvm] masahi commented on pull request #7441: [Frontend][Tensorflow] Add unique operator

2021-02-19 Thread GitBox


masahi commented on pull request #7441:
URL: https://github.com/apache/tvm/pull/7441#issuecomment-782513594


   Looks good :+1: GPU is not supported, right?







[GitHub] [tvm] masahi commented on pull request #7468: [CUDA][THRUST] Enforce -libs=thrust to allow thrust offload

2021-02-19 Thread GitBox


masahi commented on pull request #7468:
URL: https://github.com/apache/tvm/pull/7468#issuecomment-782512215


   If there is no comment, I assume everyone is cool with this change, and I'll 
ask someone to merge this.







[GitHub] [tvm] masahi commented on pull request #7483: [TOPI] Fix cuda nms handling of additional per box features

2021-02-19 Thread GitBox


masahi commented on pull request #7483:
URL: https://github.com/apache/tvm/pull/7483#issuecomment-782510416


   Please verify that there is no perf regression for the normal cases (PyTorch, TF). I now understand why MXNet NMS expects weird packed inputs lol







[GitHub] [tvm] codeislife99 commented on pull request #7477: SparseReshape Op

2021-02-19 Thread GitBox


codeislife99 commented on pull request #7477:
URL: https://github.com/apache/tvm/pull/7477#issuecomment-782474714


   @tkonolige @mbrookhart PTAL. 







[GitHub] [tvm] trevor-m opened a new pull request #7483: [TOPI] Fix cuda nms handling of additional per box features

2021-02-19 Thread GitBox


trevor-m opened a new pull request #7483:
URL: https://github.com/apache/tvm/pull/7483


   For NMS, boxes typically have 5 or 6 features: the 4 box coordinates, a per-box score, and sometimes a per-box class. However, boxes are also allowed to carry any number of additional features. We didn't have any unit tests for that situation, so I have added one. After recent changes to the CUDA NMS implementation, those additional features were no longer being copied.
   
   Additional features per box: https://mxnet.incubator.apache.org/versions/1.7.0/api/python/docs/api/symbol/contrib/index.html#mxnet.symbol.contrib.box_nms
   
   > By default, a box is [id, score, xmin, ymin, xmax, ymax, …], additional elements are allowed.
   
   @masahi @mbrookhart @anijain2305 







[GitHub] [tvm] codeislife99 commented on pull request #7125: Sparse reshape op

2021-02-19 Thread GitBox


codeislife99 commented on pull request #7125:
URL: https://github.com/apache/tvm/pull/7125#issuecomment-782473949


   I have a completely new implementation in #7477 that addresses this PR's many issues regarding performance and dynamic shapes. I am closing this PR; please review the new one [here](#7477).







[GitHub] [tvm] codeislife99 closed pull request #7125: Sparse reshape op

2021-02-19 Thread GitBox


codeislife99 closed pull request #7125:
URL: https://github.com/apache/tvm/pull/7125


   







[GitHub] [tvm] areusch opened a new pull request #7482: make test_runtime_rpc use pytest.main()

2021-02-19 Thread GitBox


areusch opened a new pull request #7482:
URL: https://github.com/apache/tvm/pull/7482


   Just a cleanup PR to remove our laundry list of functions at the end. The 
global PackedFunc required me to do this at the top of the file, so I figured 
it was a good PR to send.
   
   @tqchen 
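
   For readers unfamiliar with the pattern, the cleanup boils down to a standard pytest entry point like this (sketch, not the exact diff):
   ```python
   if __name__ == "__main__":
       import sys
       import pytest
       # Run every test in this file instead of listing test_* calls by hand.
       sys.exit(pytest.main([__file__] + sys.argv[1:]))
   ```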







[GitHub] [tvm] rkimball commented on pull request #7480: Do not allow exceptions in destructors

2021-02-19 Thread GitBox


rkimball commented on pull request #7480:
URL: https://github.com/apache/tvm/pull/7480#issuecomment-782464224


   There is a better solution. It is still not correct to raise an exception from a destructor, and we should not rely on bad code for error messages. A proper alternative is to print an error message when we catch an exception. This will give you the exact result you want. In Python, raising an exception in a destructor does not actually raise the exception; it just prints a message saying that you are trying to raise an exception where you are not supposed to.
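
   A minimal self-contained demonstration of that Python behavior (not TVM code):
   ```python
   class Leaky:
       def __del__(self):
           raise RuntimeError("raised from a destructor")

   obj = Leaky()
   del obj  # CPython prints "Exception ignored in: ..." instead of raising
   print("execution continues")  # still reached
   ```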
   







[GitHub] [tvm] areusch commented on pull request #7480: Do not allow exceptions in destructors

2021-02-19 Thread GitBox


areusch commented on pull request #7480:
URL: https://github.com/apache/tvm/pull/7480#issuecomment-782462941


   I know we should not be raising exceptions from `__del__`, but we kinda went down this path, and now exceptions raised in `__del__` can serve as a note to the user that they left objects live before terminating the program. In a server setting, that might be important.
   
   In this case, the underlying problem was that a Node subclass did not call `super().__init__()` straightaway, and then raised an exception. This caused `Object.__del__` to enter a recursive loop because the `self.handle` slot was not populated. #7481 makes `__del__` robust to a missing handle. I prefer we merge that rather than suppress all of these error messages, as it's fairly opinionated of our library to assume that users don't care they are leaking memory.







[GitHub] [tvm] areusch commented on pull request #7481: Fix stack overflow when partially-__init__ Node raises exception.

2021-02-19 Thread GitBox


areusch commented on pull request #7481:
URL: https://github.com/apache/tvm/pull/7481#issuecomment-782460728


   @tqchen @merrymercy @rkimball @jroesch @tkonolige @junrushao1994 







[GitHub] [tvm] areusch opened a new pull request #7481: Fix stack overflow when partially-__init__ Node raises exception.

2021-02-19 Thread GitBox


areusch opened a new pull request #7481:
URL: https://github.com/apache/tvm/pull/7481


* If a Node subclass raises an exception and ctypes is in use before
  __init_handle_by_constructor__ is called (or self.handle is
  otherwise set), a Python stack overflow could result. This is
  because the unset handle slot causes self.handle accesses to
  fall back on the getattr(self, 'handle') method, invoking
  NodeGetAttr.
* I believe this then causes an infinite loop.
* The fix is to make Node.__getattr__ raise AttributeError for all
  attributes in __slots__, then make __del__ tolerant to a missing
  self.handle (see the sketch below).
* I don't believe cython is affected, because it implements a
  descriptor to access its underlying chandle, and that shouldn't be unset.
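
A minimal sketch of the described fix pattern (hypothetical names, not the exact TVM diff):

```python
class NodeBase:
    __slots__ = ["handle"]

    def __getattr__(self, name):
        # __getattr__ fires when normal lookup fails -- including for an
        # unset slot. Raising AttributeError for slot names keeps the
        # fallback FFI attribute lookup from recursing on a missing handle.
        if name in NodeBase.__slots__:
            raise AttributeError(name + " is not set")
        return self._node_get_attr(name)

    def _node_get_attr(self, name):
        # Placeholder for the real FFI NodeGetAttr call.
        raise AttributeError(name)

    def __del__(self):
        # Tolerate a partially-initialized object whose handle was never set.
        handle = getattr(self, "handle", None)
        if handle is not None:
            pass  # release the underlying C handle here
```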
   







[GitHub] [tvm] comaniac commented on pull request #7304: [TVMC] Add composite target passes for compilation and tuning

2021-02-19 Thread GitBox


comaniac commented on pull request #7304:
URL: https://github.com/apache/tvm/pull/7304#issuecomment-782457877


   Merged first to avoid unnecessary blocking. If the codegen registration has to be improved for other backends, we will do that in follow-up PRs.
   
   Thanks @leandron @manupa-arm 







[GitHub] [tvm] comaniac merged pull request #7304: [TVMC] Add composite target passes for compilation and tuning

2021-02-19 Thread GitBox


comaniac merged pull request #7304:
URL: https://github.com/apache/tvm/pull/7304


   







[tvm] branch main updated (256b9cf -> d16f282)

2021-02-19 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 256b9cf  Get tvmc version from tvm (#7478)
 add d16f282  [TVMC] Add composite target passes for compilation and tuning 
(#7304)

No new revisions were added by this update.

Summary of changes:
 python/tvm/driver/tvmc/autotuner.py   |   9 +-
 python/tvm/driver/tvmc/common.py  | 188 +-
 python/tvm/driver/tvmc/compiler.py|  23 ++-
 python/tvm/driver/tvmc/composite_target.py|  68 
 python/tvm/relay/op/contrib/ethosn.py |  35 
 tests/python/driver/tvmc/test_common.py   |  91 +++
 tests/python/driver/tvmc/test_compiler.py |  47 +-
 tests/python/driver/tvmc/test_composite_target.py |  62 +++
 8 files changed, 506 insertions(+), 17 deletions(-)
 create mode 100644 python/tvm/driver/tvmc/composite_target.py
 create mode 100644 tests/python/driver/tvmc/test_composite_target.py



[GitHub] [tvm] comaniac commented on pull request #7478: Get tvmc version from tvm

2021-02-19 Thread GitBox


comaniac commented on pull request #7478:
URL: https://github.com/apache/tvm/pull/7478#issuecomment-782456572


   Thanks @NicolaLancellotti @leandron 







[tvm] branch main updated (e204209 -> 256b9cf)

2021-02-19 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from e204209  [AutoScheduler] Fix the type inference for conv3d (#7475)
 add 256b9cf  Get tvmc version from tvm (#7478)

No new revisions were added by this update.

Summary of changes:
 python/tvm/driver/tvmc/main.py | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)



[GitHub] [tvm] comaniac merged pull request #7478: Get tvmc version from tvm

2021-02-19 Thread GitBox


comaniac merged pull request #7478:
URL: https://github.com/apache/tvm/pull/7478


   







[GitHub] [tvm] junrushao1994 commented on pull request #7480: Do not allow exceptions in destructors

2021-02-19 Thread GitBox


junrushao1994 commented on pull request #7480:
URL: https://github.com/apache/tvm/pull/7480#issuecomment-782389918


   This is exactly a bug that has troubled me for years! Thanks, Bob! CC: @tqchen 







[GitHub] [tvm] rkimball opened a new pull request #7480: Do not allow exceptions in destructors

2021-02-19 Thread GitBox


rkimball opened a new pull request #7480:
URL: https://github.com/apache/tvm/pull/7480


   Exceptions raised in destructors are ignored and a warning message is displayed; this PR catches raised exceptions and prevents them from propagating.
   
   Raising exceptions in the ObjectBase destructor was causing a stack overflow in the unit test `tests/python/unittest/test_auto_scheduler_cost_model.py` when xgboost was not installed, on both Windows and Linux. This PR eliminates the stack overflow in that case.
   







[GitHub] [tvm] masahi opened a new pull request #7479: [Torch] Avoid adding unnecessary slicing

2021-02-19 Thread GitBox


masahi opened a new pull request #7479:
URL: https://github.com/apache/tvm/pull/7479


   







[GitHub] [tvm] trevor-m commented on pull request #7445: [Frontend][Tensorflow] Support explicit_paddings for TF 2.x

2021-02-19 Thread GitBox


trevor-m commented on pull request #7445:
URL: https://github.com/apache/tvm/pull/7445#issuecomment-782329275


   @FrozenGene I've updated the PR with explicit padding support. PTAL.
   
   I noticed that while many ops now have the "explicit_paddings" attribute, the TF Python API only actually exposes explicit padding for a few ops.
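
   For reference, a minimal sketch of what explicit padding looks like in the TF Python API for one of the ops that does expose it (shapes chosen arbitrarily for illustration):
   ```python
   import tensorflow as tf

   x = tf.zeros([1, 8, 8, 3])    # NHWC input
   w = tf.zeros([3, 3, 3, 16])   # HWIO filter
   # Per-dimension (before, after) pads; N and C must be [0, 0]:
   y = tf.nn.conv2d(x, w, strides=1,
                    padding=[[0, 0], [1, 2], [3, 4], [0, 0]])
   ```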







[GitHub] [tvm] areusch commented on pull request #7472: [RUNTIME] Add device specific timers

2021-02-19 Thread GitBox


areusch commented on pull request #7472:
URL: https://github.com/apache/tvm/pull/7472#issuecomment-782250234


   I think in the future I'd like to see us move to a log-based approach where each event of interest and timer used gets a unique id assigned to it at compile time. But I think this is a good change in the meantime, as that system is way more complicated and would need an RFC or two.







[GitHub] [tvm] comaniac commented on pull request #7428: Add pass to annotate ops with on_device for non-BYOC heterogeneous

2021-02-19 Thread GitBox


comaniac commented on pull request #7428:
URL: https://github.com/apache/tvm/pull/7428#issuecomment-782237751


   @rkimball Exactly. We should eliminate "external" and "internal" and just treat every target evenly. Meanwhile, the corresponding changes in the compile engine have to be made to deal with built-in targets, as you pointed out.
   
   While I still feel we should not have two mechanisms doing similar things, I'd suggest having an RFC to improve the BYOC flow so that it could be even more general.
   
   cc @zhiics 







[GitHub] [tvm] leandron commented on pull request #7478: Get tvmc version from tvm

2021-02-19 Thread GitBox


leandron commented on pull request #7478:
URL: https://github.com/apache/tvm/pull/7478#issuecomment-782231452


   cc @comaniac for a review on this one, when possible
   







[GitHub] [tvm] rkimball commented on pull request #7428: Add pass to annotate ops with on_device for non-BYOC heterogeneous

2021-02-19 Thread GitBox


rkimball commented on pull request #7428:
URL: https://github.com/apache/tvm/pull/7428#issuecomment-782219803


   @comaniac and @manupa-arm, to your questions:
   This PR is to provide a simple demo of heterogeneous execution on CPU and Vulkan, which are both TVM internal compilers.
   
   * I started with `compiler_begin` and `compiler_end` and was able to annotate my simple example model. These attributes worked great right up until calling the compilers. In my use case I need to compile code to use the CPU and Vulkan backends, both of which are built into TVM. With the `compiler_begin/end` annotation it really looks like it only works with external compilers, with no way to use TVM's built-in compilers for the second device. For a quick demo I used the older `on_device` annotation, which does work directly with both backends built into TVM.
   * There is a pass to annotate nodes, `AnnotateDevicePlacement`, in `src/relay/transforms/annotate_device_placement.cc`. The pass simply calls a callback for each CallNode and then annotates the nodes with the returned `on_device`. This pass is sufficient for the demo I had put together. What the user specifically wants to do is still under investigation.
   * As a more long-term goal, I agree with you that consolidating the two heterogeneous approaches into a single approach is desired, and I hope to do that. AnnotateTarget looks to be complete. I thought about wrapping CPU and GPU as "external" compilers, but that seems like a roundabout solution to the problem; better would be to unify the "external" and "internal" compilers so that either could be used with the `compiler_begin/end` annotation.
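
   For context, a minimal sketch of the older `on_device`-style annotation mentioned above (based on the early-2021 Relay API; treat exact argument forms as an assumption):
   ```python
   import tvm
   from tvm import relay

   x = relay.var("x", shape=(10,))
   y = relay.add(x, x)
   # Pin this intermediate computation to a specific device:
   y = relay.annotation.on_device(y, tvm.cpu(0))
   z = relay.multiply(y, y)
   func = relay.Function([x], z)
   ```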







[GitHub] [tvm] tkonolige commented on a change in pull request #7435: [TOPI] Sparse Add Op added

2021-02-19 Thread GitBox


tkonolige commented on a change in pull request #7435:
URL: https://github.com/apache/tvm/pull/7435#discussion_r579325284



##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -2148,6 +2148,54 @@ def sparse_transpose(x):
 return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+r"""
+Computes the matrix addition of `dense_mat` and `sparse_mat`, where 
`dense_mat` is
+a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple 
with
+fields `data`, `indices`, and `indptr`.
+
+.. math::
+
+\mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = 
\mbox{add}(\mbox{as_dense}(S), (D))[m, n]
+
+where `as_dense` returns dense equivalent of the given S(sparse matrix)
+while performing addition with given D(dense matrix).
+
+Parameters
+--
+dense_mat : tvm.relay.Expr
+The input dense matrix for the matrix multiplication
+
+sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+The input sparse matrix for the matrix multiplication.
+
+Returns
+---
+result: tvm.relay.Expr
+The computed result.
+
+Examples
+---
+.. code-block:: python
+dense_data = [[ 3.,   4.,   4. ]
+  [ 4.,  2.,  5. ]]

Review comment:
   nit: alignment seems off here.

##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -2148,6 +2148,54 @@ def sparse_transpose(x):
     return expr.TupleWrapper(_make.sparse_transpose(x[0], x[1], x[2]), 3)
 
 
+# pylint: disable=no-else-return,inconsistent-return-statements
+def sparse_add(dense_mat, sparse_mat):
+    r"""
+    Computes the matrix addition of `dense_mat` and `sparse_mat`, where `dense_mat` is
+    a dense matrix and `sparse_mat` is a sparse (either BSR or CSR) namedtuple with
+    fields `data`, `indices`, and `indptr`.
+
+    .. math::
+
+        \mbox{sparse_add}(dense_mat, sparse_mat)[m, n] = \mbox{add}(\mbox{as_dense}(S), (D))[m, n]
+
+    where `as_dense` returns dense equivalent of the given S(sparse matrix)
+    while performing addition with given D(dense matrix).
+
+    Parameters
+    ----------
+    dense_mat : tvm.relay.Expr
+        The input dense matrix for the matrix multiplication
+
+    sparse_mat : Union[namedtuple, Tuple[ndarray, ndarray, ndarray]].
+        The input sparse matrix for the matrix multiplication.
+
+    Returns
+    -------
+    result: tvm.relay.Expr
+        The computed result.
+
+    Examples
+    --------
+    .. code-block:: python
+        dense_data = [[ 3.,   4.,   4. ]
+                      [ 4.,  2.,  5. ]]
+        sparse_data = [4., 8.]
+        sparse_indices =[0, 2]
+        sparse_indptr =[0, 1, 2]
+        dense_shape = [2, 3]

Review comment:
   Is this dense_shape necessary?

##
File path: src/relay/op/nn/sparse.cc
##
@@ -196,5 +196,44 @@ RELAY_REGISTER_OP("nn.sparse_transpose")
     .set_support_level(1)
     .add_type_rel("SparseTranspose", SparseTransposeRel);
 
+// relay.nn.sparse_add
+bool SparseAddRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                  const TypeReporter& reporter) {
+  ICHECK_EQ(types.size(), 5);
+  const auto* dense_data = types[0].as<TensorTypeNode>();
+  const auto* sparse_data = types[1].as<TensorTypeNode>();
+  ICHECK(reporter->Assert(sparse_data->dtype == dense_data->dtype));
+  ICHECK(reporter->Assert(sparse_data->shape.size() == 1));
+  const auto* sparse_indices = types[2].as<TensorTypeNode>();
+  ICHECK(reporter->Assert(sparse_indices->shape.size() == 1));

Review comment:
   Can you add error messages on all the ICHECKs?









[GitHub] [tvm] jtuyls commented on a change in pull request #7350: [BYOC][VitisAI] Fix issue in Vitis AI codegen out tensor names matching & update docs and docker

2021-02-19 Thread GitBox


jtuyls commented on a change in pull request #7350:
URL: https://github.com/apache/tvm/pull/7350#discussion_r579214604



##
File path: docs/deploy/vitis_ai.rst
##
@@ -541,20 +551,55 @@ TVM.
import tvm
import tvm.relay as relay
from tvm.contrib.target import vitis_ai
-   from tvm.contrib import util, graph_runtime
+   from tvm.contrib import utils, graph_runtime
from tvm.relay.build_module import bind_params_by_name
from tvm.relay.op.contrib.vitis_ai import annotation
 
 After importing a convolutional neural network model using the usual
 Relay API's, annotate the Relay expression for the given Vitis-AI DPU
 target and partition the graph.
 
+.. note::
+
+We recommend converting DPU convolutions' data layouts to NHWC and CPU convolutions'
+data layouts to NCHW for best DPU and out of the box CPU performance. You can use the
+ConvertLayout transformation pass two times to achieve this as demonstrated in the code
+block underneath. You can also leave the CPU convolution layouts in NHWC and tune ARM CPU
+performance for this data layout to avoid the layout transformation overheads introduced by
+executing DPU convolutions in NHWC and CPU convolutions in NCHW
+(check out the `AutoScheduling `__
+and `AutoTuning `__
+tutorials for this).
+
 .. code:: python
 
mod["main"] = bind_params_by_name(mod["main"], params)
+   
+   # For edge DPU we recommend converting the convolutions' data layout
+   #to NHWC for best performance. Therefore, we first convert the layouts
+   #of all convolutions to NHWC before partitioning. Afterwards, we can
+   #convert any remaining convolutions (to be executed on CPU) back to NCHW.
+   desired_layouts = {'nn.conv2d': ['NHWC', 'default']}
+   seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
+                                   relay.transform.ConvertLayout(desired_layouts),
+                                   relay.transform.FoldConstant()])
+   with tvm.transform.PassContext(opt_level=3):
+   mod = seq(mod)
+
+   # Annotate and partition the Relay expression for the given target
mod = annotation(mod, params, target)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
+   
+   # After partitioning we recommend transforming the remaining convolutions
+   #(that will be executed on CPU, if any) back to NCHW data layout
+   #for best CPU performance
+   desired_layouts = {'nn.conv2d': ['NCHW', 'default']}
+   seq = tvm.transform.Sequential([relay.transform.RemoveUnusedFunctions(),

Review comment:
   I am not sure. I just took this from the ConvertLayout documentation a while ago. It seems like it's still mentioned there: https://tvm.apache.org/docs/dev/convert_layout.html?highlight=layout_transform. But if it's not necessary, we can remove it here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] NicolaLancellotti opened a new pull request #7478: Get tvmc version from tvm

2021-02-19 Thread GitBox


NicolaLancellotti opened a new pull request #7478:
URL: https://github.com/apache/tvm/pull/7478


   This PR allows tvmc to get the version from tvm.
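   
   Presumably this amounts to something like the following sketch (the actual diff is not shown here), with tvmc re-exporting the version string of the installed tvm package:
   
   ```python3
   # Hypothetical sketch of the idea, not the actual PR diff: derive the
   # tvmc version from the tvm package instead of a separate constant.
   import tvm
   
   __version__ = tvm.__version__  # e.g. "0.8.dev0" on a development build
   ```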



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] FrozenGene edited a comment on pull request #7445: [Frontend][Tensorflow] Ignore some TF 2.x attributes

2021-02-19 Thread GitBox


FrozenGene edited a comment on pull request #7445:
URL: https://github.com/apache/tvm/pull/7445#issuecomment-782072291


   > > I worry that we cannot simply ignore it. According to the doc: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d
   > > > Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
   > > 
   > > 
   > > If `explicit_padding` is not None, we should apply its values. However, @trevor-m, you could double-check.
   > 
   > Thanks for the review! That is true, I will update this PR to properly support explicit padding.
   
   Thanks @trevor-m. Our convolution op also wrongly ignores the `explicit_padding` attribute, not just the pooling op (note: be careful with `conv2d_transpose`: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose, its explicit padding is not the same as `conv2d`'s). You could correct it too. Thanks.
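   
   For illustration, converting TF explicit paddings to the 4-int padding relay's conv2d expects could look roughly like the sketch below (the helper name is illustrative, this is not the PR's code):
   
   ```python3
   # Sketch: map TF explicit_paddings to relay-style conv2d padding,
   # assuming the layout conventions quoted above. Relay's 4-int padding
   # order is (top, left, bottom, right). Illustrative only.
   def explicit_pads_to_conv2d_padding(explicit_paddings, data_format="NHWC"):
       if data_format == "NHWC":
           (top, bottom), (left, right) = explicit_paddings[1], explicit_paddings[2]
       else:  # "NCHW"
           (top, bottom), (left, right) = explicit_paddings[2], explicit_paddings[3]
       return (top, left, bottom, right)
   
   # [[0,0],[1,1],[2,2],[0,0]] in NHWC -> (1, 2, 1, 2)
   print(explicit_pads_to_conv2d_padding([[0, 0], [1, 1], [2, 2], [0, 0]]))
   ```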



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] FrozenGene commented on pull request #7445: [Frontend][Tensorflow] Ignore some TF 2.x attributes

2021-02-19 Thread GitBox


FrozenGene commented on pull request #7445:
URL: https://github.com/apache/tvm/pull/7445#issuecomment-782072291


   > > I worry that we cannot simply ignore it. According to the doc: https://www.tensorflow.org/api_docs/python/tf/nn/conv2d
   > > > Either the string "SAME" or "VALID" indicating the type of padding algorithm to use, or a list indicating the explicit paddings at the start and end of each dimension. When explicit padding is used and data_format is "NHWC", this should be in the form [[0, 0], [pad_top, pad_bottom], [pad_left, pad_right], [0, 0]]. When explicit padding is used and data_format is "NCHW", this should be in the form [[0, 0], [0, 0], [pad_top, pad_bottom], [pad_left, pad_right]].
   > > 
   > > 
   > > If `explicit_padding` is not None, we should apply its values. However, @trevor-m, you could double-check.
   > 
   > Thanks for the review! That is true, I will update this PR to properly support explicit padding.
   
   Thanks @trevor-m. Our convolution op also wrongly ignores the `explicit_padding` attribute, not just the pooling op. You could correct it too. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [AutoScheduler] Fix the type inference for conv3d (#7475)

2021-02-19 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new e204209  [AutoScheduler] Fix the type inference for conv3d (#7475)
e204209 is described below

commit e2042093cddcd2249bf1a7b7659cda6d39046a1c
Author: Lianmin Zheng 
AuthorDate: Fri Feb 19 19:15:31 2021 +0800

[AutoScheduler] Fix the type inference for conv3d (#7475)
---
 src/relay/op/nn/convolution.h | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/src/relay/op/nn/convolution.h b/src/relay/op/nn/convolution.h
index c08d355..5b4850e 100644
--- a/src/relay/op/nn/convolution.h
+++ b/src/relay/op/nn/convolution.h
@@ -24,6 +24,7 @@
 #ifndef TVM_RELAY_OP_NN_CONVOLUTION_H_
 #define TVM_RELAY_OP_NN_CONVOLUTION_H_
 
+#include 
 #include 
 #include 
 
@@ -369,7 +370,18 @@ bool Conv3DRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
   } else {
 // use weight to infer the conv shape.
 if (weight == nullptr) return false;
-auto wshape = trans_kernel_layout.ForwardShape(weight->shape);
+
+Array<IndexExpr> wshape;
+if (param->auto_scheduler_rewritten_layout.size() == 0) {
+  wshape = weight->shape;
+} else {
+  // works for the default kernel layout "DHWIO"
+  ICHECK_EQ(param->kernel_layout, "DHWIO");
+  wshape = auto_scheduler::GetShapeFromRewrittenLayout(param->auto_scheduler_rewritten_layout,
+                                                       {"rd", "rh", "rw", "rc", "cc"});
+}
+
+wshape = trans_kernel_layout.ForwardShape(wshape);
 if (param->kernel_size.defined()) {
   ICHECK_EQ(param->kernel_size.size(), 3);
   // check the size



[GitHub] [tvm] merrymercy merged pull request #7475: [AutoScheduler] Fix the type inference for conv3d

2021-02-19 Thread GitBox


merrymercy merged pull request #7475:
URL: https://github.com/apache/tvm/pull/7475


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7474: Quantization in TVM

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7474:
URL: https://github.com/apache/tvm/pull/7474#discussion_r578913474



##
File path: python/tvm/relay/transform/quantize/_requantizer.py
##
@@ -0,0 +1,312 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Removes extraneous qnn.quantize and qnn.dequantize from calibrated modules, 
and replaces them
+with qnn.requanize ops."""
+import math
+
+import tvm
+from tvm import relay
+from tvm.relay.dataflow_pattern import DFPatternCallback, wildcard, is_op, dominates, rewrite
+
+
+class Requantizer:
+"""Removes extraneous qnn.quantize and qnn.dequantize and replaces
+them with qnn.requantize."""
+
+class RequantizerCallback(DFPatternCallback):
+"""First pass that inserts requantize ops, specifically taking
+qnn.dequantize -> qnn.quantize to qnn.requantize
+and
+qnn.dequantize -> int8_op* -> qnn.quantize to requantize -> int8_op*
+"""
+
+def __init__(self):
+super().__init__()
+
+self.data = wildcard()
+self.dequantize_scale = wildcard()
+self.dequantize_zp = wildcard()
+
+self.quantize_scale = wildcard()
+self.quantize_zp = wildcard()
+
+# Ops that are permitted inbetween quantize and dequantize if we are
+# rewriting to requantize
+self.is_int_8_op = (
+is_op("nn.max_pool2d")(wildcard())
+| is_op("nn.max_pool2d")(wildcard())
+| is_op("nn.max_pool3d")(wildcard())
+| is_op("nn.relu")(wildcard())
+| is_op("transpose")(wildcard())
+| is_op("reshape")(wildcard())
+| is_op("nn.pad")(wildcard())
+| is_op("squeeze")(wildcard())
+| is_op("nn.global_avg_pool2d")
+| is_op("nn.batch_flatten")
+| is_op("copy")
+| is_op("mean")
+| is_op("sqrt")
+)

Review comment:
   This is too ad hoc; it can easily break and leave more dequantize/quantize ops than necessary. And the patterns are not correct.
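   
   One more robust direction (a sketch only, using the pattern language's dominator matching; this is not code from the PR) would be to let a dominator pattern absorb any chain of int8-safe ops instead of enumerating a fixed alternation:
   
   ```python3
   # Sketch: match qnn.dequantize -> (chain of ops) -> qnn.quantize with a
   # dominator pattern, so the path does not have to be spelled out op by
   # op. In practice `path` would be restricted to int8-safe ops.
   from tvm.relay.dataflow_pattern import dominates, is_op, wildcard
   
   dequantize = is_op("qnn.dequantize")(wildcard(), wildcard(), wildcard())
   path = wildcard()  # placeholder; tighten with is_op(...) alternations
   quantize = is_op("qnn.quantize")(wildcard(), wildcard(), wildcard())
   pattern = dominates(dequantize, path, quantize)
   ```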





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7474: Quantization in TVM

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7474:
URL: https://github.com/apache/tvm/pull/7474#discussion_r579077073



##
File path: python/tvm/relay/transform/quantize/_quantizer.py
##
@@ -0,0 +1,155 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Quantizes functions by inserting qnn.quantize and qnn.dequantize ops."""
+from typing import List
+
+import tvm
+from tvm import relay
+from tvm.relay.dataflow_pattern import _DFPatternCallback
+from tvm.relay.transform.quantize import QuantizerPattern
+from tvm.relay.frontend.common import infer_type
+
+from . import _ffi as ffi
+
+
+class Quantizer:

Review comment:
   I think this class is redundant: all it does is some setup in the constructor before the object is immediately passed to `QuantizationCalibrator`. It would be better to do the same initialization directly in the `QuantizationCalibrator` constructor.
   And I'd probably rename `QuantizationCalibrator` to `Quantizer`.






This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7474: Quantization in TVM

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7474:
URL: https://github.com/apache/tvm/pull/7474#discussion_r579072951



##
File path: python/tvm/relay/transform/quantize/_quantizer_patterns.py
##
@@ -0,0 +1,712 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Patterns to quantize and how to quantize them."""
+
+import tvm
+from tvm import relay
+
+from tvm.relay.transform.quantize import CalibrationCallback
+from tvm.relay.dataflow_pattern import (
+is_op,
+wildcard,
+is_constant,
+DFPatternCallback,
+_DFPatternCallback,
+)
+from tvm.relay.dataflow_pattern import ffi as pattern_ffi
+from tvm.relay.frontend.common import infer_type
+from tvm.relay.op.nn.utils import get_pad_tuple2d
+
+
+class QuantizerPattern(DFPatternCallback):
+"""DFPatternCallback to rewrite patterns as quantized. Also contains extra 
information
+used for quantization and calibration.
+
+Parameters
+--
+calibration_callback : CalibrationCallback
+The method we will use to calibrate the nn.conv2d pattern.
+"""
+
+# Counts the number of times we've added a scale and zp for variable naming
+# This needs to be a global variable and not initialized in __init__ because
+# each scale and zero point must be unique, even if they are created by different
+# instances.
+scales_count = 0
+zp_count = 0
+
+def __init__(self, calibration_callback: CalibrationCallback = None):
+super().__init__()
+self.calibration_callback = calibration_callback
+
+def calibrate_pattern(self, calibration_info):
+"""Calculates the scale and zero points for quantizing parts of a 
generic pattern. By
+default, we call the calibrate_pattern method of the 
CalibrationCallback object that is
+passed into QuantizerPattern during initialization. However, if you 
want a pattern specific
+quantization method or a per-channel quantization method, you should 
overwrite the
+QuantizerPattern's calibrate_pattern method.
+
+Parameters
+--
+calibration_info : CalibrationInfo
+The class containing relevant information and utility functions to calibrate one
+instance of a pattern.
+
+Returns
+---
+scale_zp_map : Dictionary
+A map from the names of scales and zero point variables in this pattern to their
+values.
+"""
+return self.calibration_callback.calibrate_pattern(calibration_info)
+
+def callback(self, pre, post, node_map):
+raise NotImplementedError
+
+def scale(self, name):
+"""Helper to create the scale variable for qnn.quantize when rewriting 
our pattern.
+
+Parameters
+--
+name : str
+Identifier at the beginning of the scale variable.
+
+is_weight : bool
+Whether this scale is a weight scale or a data scale. If it is a weight scale, we
+the returned variable has shape (channels,). Only used for per-channel quantization.
+
+Returns
+---
+var : relay.Var
+Relay variable for scale. If the input name is 'conv2d_data', then the name of the
+relay variable might be 'conv2d_data_scale_0'.
+"""
+
+var = relay.var(
+str(name) + "_scale_" + str(QuantizerPattern.scales_count), 
shape=(), dtype="float32"
+)
+QuantizerPattern.scales_count += 1
+return var
+
+def zero_point(self, name):
+"""Helper to create the zero point variable for qnn.quantize when 
rewriting our
+our pattern.
+
+Parameters
+--
+name : str
+Identifier at the beginning of the variable.
+
+Returns
+---
+var : relay.Var
+Relay variable for scale. If the input name is 'conv2d_data', then the name of the
+relay variable might be 'conv2d_data_zero_pt_0'.
+"""
+var = relay.var(
+str(name) + "_zero_pt_" + str(QuantizerPattern.zp_count), 
shape=(), dtype="int32"
+)
+QuantizerPattern.zp_count += 1
+return var
+
+def cre


[GitHub] [tvm] leandron commented on a change in pull request #7462: [Target] Add target host field for target specification

2021-02-19 Thread GitBox


leandron commented on a change in pull request #7462:
URL: https://github.com/apache/tvm/pull/7462#discussion_r579047173



##
File path: python/tvm/target/target.py
##
@@ -46,7 +46,7 @@ class Target(Object):
 - :py:func:`tvm.target.intel_graphics` create Intel Graphics target
 """
 
-def __init__(self, tag_or_str_or_dict):
+def __init__(self, tag_or_str_or_dict, host_tag_or_str_or_dict=None):

Review comment:
   > for the API, I would almost expect `Target(target_host, sub_target_0, sub_target_1, ...)`
   
   In terms of API, that would be very good indeed!
   
   If internally we convert that to the required `composite target`, with a specific dictionary representing all the internal data structures, etc., those are implementation details that _we_ care about, but that the end-user, who is interested in compiling a model, shouldn't have to.
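   
   Based on the constructor signature in this diff, the user-facing call would look something like the sketch below (the semantics of the host argument are exactly what this PR is defining):
   
   ```python3
   # Sketch of the proposed API surface: a device target plus a host target.
   import tvm
   
   tgt = tvm.target.Target("cuda", "llvm")  # device target, host target
   print(tgt)
   ```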





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on pull request #7304: [TVMC] Add composite target passes for compilation and tuning

2021-02-19 Thread GitBox


leandron commented on pull request #7304:
URL: https://github.com/apache/tvm/pull/7304#issuecomment-781944239


   > Overall LGTM. Two comments left:
   > 
   > 1. The function naming.
   
   Updated the names to the ones suggested here.
   
   > 2. Clarify the status of adding TensorRT.
   
   I'd really prefer to poke someone (maybe @trevor-m) to add TensorRT in a separate patch - see my notes about the default values on the partition function in https://github.com/apache/tvm/pull/7304#discussion_r576770049.
   
   Vitis AI also comes to mind; we could poke @jtuyls to see whether there is interest in adding it in a separate patch, once the mechanism here is merged.
   
   @comaniac can you have another look at the patch when you have some time?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7474: Quantization in TVM

2021-02-19 Thread GitBox


masahi commented on pull request #7474:
URL: https://github.com/apache/tvm/pull/7474#issuecomment-781925088


   @electriclilies Can you add an end-to-end runnable example, like importing a PyTorch or ONNX graph and quantizing it?
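   
   For reference, such an example would presumably look something like the sketch below; the class names are taken from the files in this PR, but the constructor arguments are assumptions, not the PR's confirmed API:
   
   ```python3
   # Rough sketch of an end-to-end flow: import a PyTorch model, then
   # quantize and calibrate it. Constructor signatures are guesses
   # (hypothetical); only the class names appear in the PR.
   import torch
   import torchvision
   from tvm import relay
   from tvm.relay.transform.quantize import Quantizer, QuantizationCalibrator
   
   model = torchvision.models.resnet18(pretrained=True).eval()
   inp = torch.randn(1, 3, 224, 224)
   scripted = torch.jit.trace(model, inp)
   mod, params = relay.frontend.from_pytorch(scripted, [("input0", inp.shape)])
   
   quantizer = Quantizer(mod, params)              # hypothetical signature
   calibrator = QuantizationCalibrator(quantizer)  # defaults: target="llvm", ctx=tvm.cpu()
   calibrated_func = calibrator.calibrate()
   ```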



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7474: Quantization in TVM

2021-02-19 Thread GitBox


masahi commented on a change in pull request #7474:
URL: https://github.com/apache/tvm/pull/7474#discussion_r579008341



##
File path: python/tvm/relay/transform/quantize/_calibrator.py
##
@@ -0,0 +1,382 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""API for calibrating a quantized function."""
+import numpy as np
+
+import tvm
+from tvm import relay
+from tvm.contrib import graph_runtime
+import tvm.relay.build_module as build_module
+
+
+class QuantizationCalibrator:
+"""The QuantizationCalibrator picks scales and zero points for all qnn ops 
in the quantized
+module.
+
+Parameters
+--
+quantizer : Quantizer
+Quantizer created with the mod we are calibrating.
+
+target : String, optional
+The target to run the quantized function on during calibration.
+
+ctx : String, optional
+The ctx used for running the quantized function on during calibration.
+
+dataset_manager : DatasetManager, optional
+The dataset manager containing data used to run the graph during
+data-aware calibration.
+"""
+
+def __init__(self, quantizer, target="llvm", ctx=tvm.cpu(), dataset_manager=None,
+ show_scale_zps=False):
+self.quantizer = quantizer
+
+self.calibration_info = CalibrationInfo(
+quantizer.tuple_subgraph_func,
+quantizer.q_tuple_subgraph_func,
+quantizer.partition_infos,
+dataset_manager,
+target,
+ctx,
+)
+
+self.show_scale_zps = show_scale_zps
+
+def calibrate(self):
+"""Picks the scales and zero points for all qnn ops in the quantized 
graph, using the
+calibrate_pattern function from the quantizer.
+
+Returns
+---
+calibrated_func : relay.Function
+The quantized function with the values for scales and zero points substituted into the
+function.
+"""
+# Create a map of DFPatternCallback to QuantizerPattern
+pattern_map = {pattern.pattern: pattern for pattern in self.quantizer.patterns}
+
+for partition_info in self.calibration_info.partition_infos:
+# Set the partition info so we can access it from the callback
+self.calibration_info.set_current_partition_info(partition_info)
+quantizer_pattern = pattern_map[partition_info.pattern]
+
+# Get the values for scales and ZPs in this layer, store
+scale_zps = quantizer_pattern.calibrate_pattern(self.calibration_info)
+if self.show_scale_zps:
+self.report_scale_zps(scale_zps)
+self.calibration_info.update_scale_zp_map(scale_zps)
+
+calibrated_func = build_module.bind_params_by_name(
+self.quantizer.q_tuple_subgraph_func, 
self.calibration_info.scale_zp_value_map
+)
+
+# If num_orig_outputs is -1, original output wasn't a tuple
+params = calibrated_func.params
+if self.quantizer.num_orig_outputs == -1:
+calibrated_func = relay.Function(params, calibrated_func.body.fields[0])
+else:
+new_body = relay.Tuple(calibrated_func.body.fields[0 : self.quantizer.num_orig_outputs])
+calibrated_func = relay.Function(params, new_body)
+
+return calibrated_func
+
+def report_scale_zps(self, scale_zp_map):
+"""Prints the scales and zero points out.
+
+Parameters
+--
+scale_zp_map : dict of str to value
+The map from names of scale and zero point variables to their assigned values.
+"""
+for key, value in scale_zp_map.items():
+print("Set ", key, " variable to ", value)
+
+
+class CalibrationInfo:
+"""Helper class that contains information necessary for picking scales and 
zero points into
+calibrate_pattern. The state of CalibrationInfo is updated by 
QuantizationCalibrator.
+
+Parameters
+--
+tuple_subgraph_func : relay.Function
+A function whose output is a tuple that contains values we will need to access during
+calibration.
+
+q_tuple_subgraph_func : relay.Function
+   

[GitHub] [tvm] zhanghaohit commented on pull request #6126: [VTA][OpenCL] intelfocl

2021-02-19 Thread GitBox


zhanghaohit commented on pull request #6126:
URL: https://github.com/apache/tvm/pull/6126#issuecomment-781915329


   > > @tmoreau89 @liangfu I think there is some jenkins caching issue that makes CI pull an old commit, which no longer exists. Any idea on how to solve this problem?
   > 
   > I'm not quite sure about resolving the caching issue in this case. Meanwhile, do you mind trying to dump all the changes to a single commit, rebase on the latest TVM main branch, and force push to the `4paradigm:feature/intelfocl-pr` branch?
   
   The caching issue was fixed, but the tsim test failed in the same place again, like the PR in the vta repo: https://github.com/apache/tvm-vta/pull/9.
   
   Is it possible that the tsim implementation uses some hard-coded config rather than loading it from hw_spec.h?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org