[GitHub] [incubator-tvm] were closed pull request #4829: [Pact 0] Can you take a look at my glue?

2020-02-05 Thread GitBox
were closed pull request #4829: [Pact 0] Can you take a look at my glue?
URL: https://github.com/apache/incubator-tvm/pull/4829
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] were opened a new pull request #4829: [Pact 0] Can you take a look at my glue?

2020-02-05 Thread GitBox
were opened a new pull request #4829: [Pact 0] Can you take a look at my glue?
URL: https://github.com/apache/incubator-tvm/pull/4829
 
 
   




[GitHub] [incubator-tvm] LiangHao151941 commented on issue #4828: [QNN][TFLite] TFLite rounding mode support

2020-02-05 Thread GitBox
LiangHao151941 commented on issue #4828: [QNN][TFLite] TFLite rounding mode 
support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-582771717
 
 
   Makes sense! I'll follow up on those. @FrozenGene 




[GitHub] [incubator-tvm] soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM 
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r375662346
 
 

 ##
 File path: python/tvm/relay/frontend/onnx.py
 ##
 @@ -32,6 +32,55 @@
 __all__ = ['from_onnx']
 
 
+class onnx_input():
 
 Review comment:
   I really like this design - very sleek. 
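   
   The hunk above only shows the class header. As a rough illustration of the
idea being praised (a hypothetical sketch, not necessarily the PR's actual
implementation), such a wrapper lets optional ONNX inputs be read positionally
and simply come back as `None` when absent:
   
   ```
   class onnx_input(list):
       """Hypothetical sketch: reading a missing optional input returns None
       instead of raising IndexError."""

       def __getitem__(self, index):
           if isinstance(index, int) and index >= len(self):
               return None
           return list.__getitem__(self, index)


   inputs = onnx_input(["X", "W", "R"])
   print(inputs[1])   # 'W'
   print(inputs[5])   # None -> optional input not provided
   ```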




[GitHub] [incubator-tvm] soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM 
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r375661042
 
 

 ##
 File path: python/tvm/relay/frontend/onnx.py
 ##
 @@ -1190,6 +1250,145 @@ def expand_shape(in_shape, shape):
 return _op.broadcast_to(inputs[0], shape=tuple(shape))
 
 
+class LSTM(OnnxOpConverter):
+    """ Operator converter for LSTM.
+    """
+
+    @classmethod
+    def _activation_helper(cls, activation, alpha, beta):
+        convert_map = _get_convert_map(1)
+        attrs = {}
+        if alpha is not None:
+            attrs['alpha'] = alpha
+        if beta is not None:
+            attrs['beta'] = beta
+        return lambda x: convert_map[activation.decode("utf-8")]([x], attrs, {})
+
+    @classmethod
+    def _activation_needs_alpha(cls, activation):
+        needs_alpha = [
+            "Affine",
+            "LeakyRelu",
+            "ThresholdedRelu",
+            "ScaledTanh",
+            "HardSigmoid",
+            "Elu",
+        ]
+        return activation.decode("utf-8") in needs_alpha
+
+    @classmethod
+    def _activation_needs_beta(cls, activation):
+        needs_beta = [
+            "Affine",
+            "ScaledTanh",
+            "HardSigmoid",
+        ]
+        return activation.decode("utf-8") in needs_beta
+
+    @classmethod
+    def _impl_v7(cls, inputs, attr, params):
+        # Unpack inputs, note that if optional and not provided then value will be None.
+        X = inputs[0]
+        W = inputs[1]
 Review comment:
   Is there any case when the weights won’t be constant? If they’re constant, 
we can remove some operations from the graph and compute them here (like 
squeeze). 
   
   By constant, I mean we can call `infer_value` on it.
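   
   To make the suggestion concrete, a rough sketch of the kind of
conversion-time folding meant here (the helper below is hypothetical; only
`infer_value`, `relay.const` and `relay.squeeze` are existing TVM APIs):
   
   ```
   from tvm import relay
   from tvm.relay.frontend.common import infer_value

   def try_fold_weight(weight_expr, params):
       """Illustrative only: if the weight can be evaluated at conversion time,
       squeeze the num_directions axis here instead of emitting a squeeze op."""
       try:
           w_np = infer_value(weight_expr, params).asnumpy()  # constant path
           return relay.const(w_np.squeeze(axis=0))           # folded at import time
       except Exception:
           return relay.squeeze(weight_expr, axis=[0])        # fall back to a graph op
   ```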




[GitHub] [incubator-tvm] soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
soiferj commented on a change in pull request #4825: [Frontend][ONNX] LSTM 
Support
URL: https://github.com/apache/incubator-tvm/pull/4825#discussion_r375661224
 
 

 ##
 File path: tests/python/frontend/onnx/test_forward.py
 ##
 @@ -1962,6 +1962,126 @@ def test_pooling():
auto_pad='SAME_UPPER')
 
 
+def verify_lstm(seq_length,
+                batch_size,
+                input_size,
+                hidden_size,
+                use_bias=False,
+                activations=None,
+                alphas=None,
+                betas=None):
+    x_np = np.random.uniform(size=(seq_length, batch_size, input_size)).astype('float32')
+    w_np = np.random.uniform(size=(1, 4 * hidden_size, input_size)).astype('float32')
+    r_np = np.random.uniform(size=(1, 4 * hidden_size, hidden_size)).astype('float32')
+    input_names = ["X", "W", "R"]
+    input_tensors = [
+        helper.make_tensor_value_info("X", TensorProto.FLOAT, list(x_np.shape)),
+        helper.make_tensor_value_info("W", TensorProto.FLOAT, list(w_np.shape)),
+        helper.make_tensor_value_info("R", TensorProto.FLOAT, list(r_np.shape))
+    ]
+    input_values = [x_np, w_np, r_np]
+    if use_bias:
+        b_np = np.random.uniform(size=(1, 8 * hidden_size)).astype('float32')
+        input_names.append("B")
+        input_tensors.append(
+            helper.make_tensor_value_info("B", TensorProto.FLOAT, [1, 8 * hidden_size]))
+        input_values.append(b_np)
+
+    Y_shape = [seq_length, 1, batch_size, hidden_size]
+    Y_h_shape = [1, batch_size, hidden_size]
+    Y_c_shape = [1, batch_size, hidden_size]
+
+    if activations is None:
+        lstm_node = helper.make_node(
+            'LSTM', inputs=input_names, outputs=["Y", "Y_h", "Y_c"], hidden_size=hidden_size)
+    elif alphas is None:
+        lstm_node = helper.make_node(
+            'LSTM',
+            inputs=input_names,
+            outputs=["Y", "Y_h", "Y_c"],
+            hidden_size=hidden_size,
+            activations=activations)
+    else:
+        lstm_node = helper.make_node(
+            'LSTM',
+            inputs=input_names,
+            outputs=["Y", "Y_h", "Y_c"],
+            hidden_size=hidden_size,
+            activations=activations,
+            activation_alpha=alphas,
+            activation_beta=betas)
+
+    graph = helper.make_graph([lstm_node],
+                              "lstm_test",
+                              inputs=input_tensors,
+                              outputs=[
+                                  helper.make_tensor_value_info("Y", TensorProto.FLOAT,
+                                                                list(Y_shape)),
+                                  helper.make_tensor_value_info("Y_h", TensorProto.FLOAT,
+                                                                list(Y_h_shape)),
+                                  helper.make_tensor_value_info("Y_c", TensorProto.FLOAT,
+                                                                list(Y_c_shape))
+                              ])
+
+    model = helper.make_model(graph, producer_name='lstm_test')
+
+    for target, ctx in ctx_list():
+        onnx_out = get_onnxruntime_output(model, input_values, 'float32')
+        tvm_out = get_tvm_output(
+            model,
+            input_values,
+            target,
+            ctx, [Y_shape, Y_h_shape, Y_c_shape],
+            output_dtype=['float32', 'float32', 'float32'])
+        for o_out, t_out in zip(onnx_out, tvm_out):
+            tvm.testing.assert_allclose(o_out, t_out, rtol=5e-3, atol=5e-3)
+
+
+def test_lstm():
 
 Review comment:
   Can you also add a test where initial c and h states are set to something 
other than 0?
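   
   For reference, a sketch of how that case could be wired into `verify_lstm`,
following the existing optional-input handling for `B` (the `use_initial_state`
flag and the empty-name placeholder for the skipped `sequence_lens` input are
assumptions, not code from this PR):
   
   ```
   if use_initial_state:
       # ONNX LSTM optional inputs are positional: ..., sequence_lens, initial_h, initial_c.
       # An empty input name skips the unused sequence_lens slot.
       init_h_np = np.random.uniform(size=(1, batch_size, hidden_size)).astype('float32')
       init_c_np = np.random.uniform(size=(1, batch_size, hidden_size)).astype('float32')
       input_names.extend(["", "initial_h", "initial_c"])
       input_tensors.extend([
           helper.make_tensor_value_info("initial_h", TensorProto.FLOAT,
                                         list(init_h_np.shape)),
           helper.make_tensor_value_info("initial_c", TensorProto.FLOAT,
                                         list(init_c_np.shape)),
       ])
       input_values.extend([init_h_np, init_c_np])
   ```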





[GitHub] [incubator-tvm] FrozenGene commented on issue #4828: tflite rounding mode support

2020-02-05 Thread GitBox
FrozenGene commented on issue #4828: tflite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828#issuecomment-582757548
 
 
   Thanks @LiangHao151941 ! I haven't reviewed the code yet, but I have some 
high-level comments:
   
   1. If we have TFLite rounding support, the TFLite frontend should use 
TFLite rounding, so that we get bit-exact results compared with TFLite.
   
   2. You should also modify `test_forward.py` for TFLite (the test_qnn* 
related test cases); ideally we shouldn't need `atol=1` any more.
   
   3. You could add a `q_conv2d` unit test case verifying that we get the same 
result as TFLite; we currently lack this unit test.
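   
   As an illustration of point 2, once rounding is bit exact the quantized
TFLite comparisons could be tightened to an exact check, roughly (a sketch
against the existing test helpers; `tflite_output` / `tvm_output` are
placeholder names):
   
   ```
   # Before: quantized outputs were allowed to differ by one ULP.
   # tvm.testing.assert_allclose(tflite_output, tvm_output, atol=1, rtol=1e-5)

   # With TFLite-style rounding: integer outputs should match exactly.
   np.testing.assert_array_equal(tflite_output, tvm_output)
   ```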




[GitHub] [incubator-tvm] FrozenGene commented on issue #4824: Tflite frontend needs to use zero point of input tensor while lowering qnn.conv2d for padding

2020-02-05 Thread GitBox
FrozenGene commented on issue #4824: Tflite frontend needs to use zero point of 
input tensor while lowering qnn.conv2d for padding
URL: https://github.com/apache/incubator-tvm/issues/4824#issuecomment-58275
 
 
   Nice suggestion! We haven't done anything like this before. For critical 
patches, we should port them back to our previous releases (like 0.6 mentioned 
here). I can think of two options:
   
   1. Make a new release named `0.6 SP1 / SP2 / SP3 / ...`, where SP means 
service pack. The inspiration comes from Microsoft's release concept, e.g. 
Visual Studio SP1 / SP2 / SP3.
   
   2. Make a new release named `0.6.1 / 0.6.2 / 0.6.3 / ...`, where .1 / .2 / 
.3 is the patch version. The inspiration comes from LLVM's release concept, 
e.g. LLVM 3.7.0 / 3.7.1 / 3.7.2.
   
   cc @tqchen @zhiics @yzhliu @icemelon9 




[GitHub] [incubator-tvm] LiangHao151941 opened a new pull request #4828: tflite rounding mode support

2020-02-05 Thread GitBox
LiangHao151941 opened a new pull request #4828: tflite rounding mode support
URL: https://github.com/apache/incubator-tvm/pull/4828
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   
   Add TFLite rounding mode support with corresponding test cases. The TFLite 
rounding mode golden results are generated with a testbench using the 
MultiplyByQuantizedMultiplier function here: 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels/internal/common.h#L148
   
   @FrozenGene @anijain2305 
   
   Might help fix the problem described here:
   https://discuss.tvm.ai/t/supporting-bit-exact-tflite-qnn-inference/5528
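   
   For readers unfamiliar with the linked routine: it is essentially a
saturating rounding doubling high multiply followed by a rounding right shift.
A small Python sketch of that behaviour (illustrative only, not the code added
in this PR; saturation of the INT32_MIN corner case is omitted):
   
   ```
   def saturating_rounding_doubling_high_mul(a, b):
       """High 32 bits of 2*a*b, rounded (gemmlowp/TFLite style)."""
       ab = 2 * a * b
       nudge = (1 << 30) if ab >= 0 else 1 - (1 << 30)
       return (ab + nudge) >> 31

   def rounding_divide_by_pot(x, exponent):
       """Rounding right shift; ties round away from zero."""
       if exponent == 0:
           return x
       mask = (1 << exponent) - 1
       remainder = x & mask
       threshold = (mask >> 1) + (1 if x < 0 else 0)
       return (x >> exponent) + (1 if remainder > threshold else 0)

   def multiply_by_quantized_multiplier(x, quantized_multiplier, shift):
       left_shift = shift if shift > 0 else 0
       right_shift = -shift if shift < 0 else 0
       return rounding_divide_by_pot(
           saturating_rounding_doubling_high_mul(x << left_shift, quantized_multiplier),
           right_shift)
   ```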




[GitHub] [incubator-tvm] masahi commented on issue #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
masahi commented on issue #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825#issuecomment-582737938
 
 
   looks great! Will wait for @soiferj's review.




[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-05 Thread GitBox
jwfromm commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r374966045
 
 

 ##
 File path: python/tvm/relay/op/strategy/x86.py
 ##
 @@ -0,0 +1,277 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Definition of x86 operator strategy."""
+# pylint: 
disable=invalid-name,unused-argument,wildcard-import,unused-wildcard-import
+from __future__ import absolute_import
+
+import logging
+
+import topi
+from .generic import *
+from .. import op as _op
+from schedule import SpecializedCondition
+
+logger = logging.getLogger('strategy')
+
+@schedule_injective.register("cpu")
+def schedule_injective_cpu(attrs, outs, target):
+"""schedule injective ops for x86"""
+with target:
+return topi.x86.schedule_injective(outs)
+
+@schedule_reduce.register("cpu")
+def schedule_reduce_cpu(attrs, outs, target):
+"""schedule reduction ops for x86"""
+with target:
+return topi.x86.schedule_reduce(outs)
+
+@schedule_concatenate.register("cpu")
+def schedule_concatenate_cpu(attrs, outs, target):
+"""schedule concatenate op for x86"""
+with target:
+return topi.x86.schedule_concatenate(outs)
+
+@schedule_pool.register("cpu")
+def schedule_pool_cpu(attrs, outs, target):
+"""schedule pooling ops for x86"""
+with target:
+return topi.x86.schedule_pool(outs, attrs.layout)
+
+@schedule_adaptive_pool.register("cpu")
+def schedule_adaptive_pool_cpu(attrs, outs, target):
+"""schedule adaptive pooling ops for x86"""
+with target:
+return topi.x86.schedule_adaptive_pool(outs)
+
+@schedule_softmax.register("cpu")
+def schedule_softmax_cpu(attrs, outs, target):
+"""schedule softmax for x86"""
+with target:
+return topi.x86.schedule_softmax(outs)
+
+@conv2d_strategy.register("cpu")
+def conv2d_strategy_cpu(attrs, inputs, out_type, target):
+"""conv2d x86 strategy"""
+strategy = _op.OpStrategy()
+data, kernel = inputs
+dilation_h, dilation_w = get_const_tuple(attrs.dilation)
+groups = attrs.groups
+layout = attrs.data_layout
+kernel_layout = attrs.kernel_layout
+if dilation_h < 1 or dilation_w < 1:
+raise ValueError("dilation should be positive value")
+
+if groups == 1:
+if layout == "NCHW":
+assert kernel_layout == "OIHW"
+if topi.x86.is_int8_hw_support(data.dtype, kernel.dtype):
+strategy.add_implement(
+wrap_compute_conv2d(topi.x86.conv2d_nchw_int8),
+wrap_topi_schedule(topi.x86.schedule_conv2d_nchw_int8))
+else:
+strategy.add_implement(
+wrap_compute_conv2d(topi.x86.conv2d_nchw),
+wrap_topi_schedule(topi.x86.schedule_conv2d_nchw))
+elif layout == "NHWC":
+assert kernel_layout == "HWIO"
+logger.warning("For x86 target, NCHW layout is recommended for 
conv2d.")
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.conv2d_nhwc),
+wrap_topi_schedule(topi.x86.schedule_conv2d_nhwc))
+elif layout == "HWCN":
+assert kernel_layout == "HWIO"
+logger.warning("For x86 target, NCHW layout is recommended for 
conv2d.")
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.conv2d_hwcn),
+wrap_topi_schedule(topi.generic.schedule_conv2d_hwcn))
+else:
+raise RuntimeError("Unsupported conv2d layout {} for 
cpu".format(layout))
+elif is_depthwise_conv2d(data.shape, layout, kernel.shape, kernel_layout, 
groups):
+if layout == "NCHW":
+assert kernel_layout == "OIHW"
+channel_multiplier = get_const_tuple(inputs[1].shape)[1]
+if channel_multiplier == 1:
+strategy.add_implement(
+wrap_compute_conv2d(topi.x86.depthwise_conv2d_nchw),
+
wrap_topi_schedule(topi.x86.schedule_depthwise_conv2d_nchw))
+else:
+logger.warning("For x86 target, depthwise_conv2d with channel "
+   "multiplier greater than 1 is not optim

[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-05 Thread GitBox
jwfromm commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r374870865
 
 

 ##
 File path: python/tvm/relay/op/strategy/hls.py
 ##
 @@ -0,0 +1,151 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Definition of HLS operator strategy."""
+# pylint: 
disable=invalid-name,unused-argument,wildcard-import,unused-wildcard-import
+from __future__ import absolute_import
+
+import topi
+from .generic import *
+from .. import op as _op
+
+@schedule_injective.register("hls")
+def schedule_injective_hls(attrs, outs, target):
+"""schedule injective ops for hls"""
+with target:
+return topi.hls.schedule_injective(outs)
+
+@schedule_reduce.register("hls")
+def schedule_reduce_hls(attrs, outs, target):
+"""schedule reduction ops for hls"""
+with target:
+return topi.hls.schedule_reduce(outs)
+
+@schedule_concatenate.register("hls")
+def schedule_concatenate_hls(attrs, outs, target):
+"""schedule concatenate for hls"""
+with target:
+return topi.hls.schedule_injective(outs)
+
+@schedule_pool.register("hls")
+def schedule_pool_hls(attrs, outs, target):
+"""schedule pooling ops for hls"""
+with target:
+return topi.hls.schedule_pool(outs, attrs.layout)
+
+@schedule_adaptive_pool.register("hls")
+def schedule_adaptive_pool_hls(attrs, outs, target):
+"""schedule adaptive pooling ops for hls"""
+with target:
+return topi.hls.schedule_adaptive_pool(outs)
+
+@schedule_softmax.register("hls")
+def schedule_softmax_hls(attrs, outs, target):
+"""schedule softmax for hls"""
+with target:
+return topi.hls.schedule_softmax(outs)
+
+@override_native_generic_func("conv2d_strategy")
+def conv2d_strategy_hls(attrs, inputs, out_type, target):
+"""conv2d hls strategy"""
+strategy = _op.OpStrategy()
+data, kernel = inputs
+dilation = get_const_tuple(attrs.dilation)
+groups = attrs.groups
+layout = attrs.data_layout
+kernel_layout = attrs.kernel_layout
+(dilation_h, dilation_w) = dilation
+if dilation_h < 1 or dilation_w < 1:
+raise ValueError("dilation should be positive value")
+
+if groups == 1:
+if layout == "NCHW":
+assert kernel_layout == "OIHW"
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.conv2d_nchw),
+wrap_topi_schedule(topi.hls.schedule_conv2d_nchw))
+elif layout == "NHWC":
+assert kernel_layout == "HWIO"
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.conv2d_nhwc),
+wrap_topi_schedule(topi.hls.schedule_conv2d_nhwc))
+else:
+raise RuntimeError("Unsupported conv2d layout {}".format(layout))
+elif is_depthwise_conv2d(data.shape, layout, kernel.shape, kernel_layout, 
groups):
+if layout == "NCHW":
+assert kernel_layout == "OIHW"
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.depthwise_conv2d_nchw),
+wrap_topi_schedule(topi.hls.schedule_depthwise_conv2d_nchw))
+elif layout == "NHWC":
+assert kernel_layout == "HWOI"
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.depthwise_conv2d_nhwc),
+wrap_topi_schedule(topi.hls.schedule_depthwise_conv2d_nhwc))
+else:
+raise RuntimeError("Unsupported depthwise_conv2d layout 
{}".format(layout))
+else: # group_conv2d
+raise RuntimeError("group_conv2d is not supported for hls")
+return strategy
+
+@override_native_generic_func("conv2d_NCHWc_strategy")
+def conv2d_NCHWc_strategy_hls(attrs, inputs, out_type, target):
+"""conv2d_NCHWc hls strategy"""
+strategy = _op.OpStrategy()
+strategy.add_implement(
+wrap_compute_conv2d(topi.nn.conv2d_NCHWc, True, True),
+wrap_topi_schedule(topi.hls.schedule_conv2d_NCHWc))
+return strategy
+
+@conv2d_transpose_strategy.register("hls")
+def conv2d_transpose_strategy_hls(attrs, inputs, out_type, target):
+"""conv2d_transpose hls strategy"""
+layout = attrs.data_layout
+dilation = get_const_tuple(attrs.dilatio

[GitHub] [incubator-tvm] zhiics commented on issue #4564: [Doc] Introduction to module serialization

2020-02-05 Thread GitBox
zhiics commented on issue #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564#issuecomment-582726010
 
 
   LGTM now. I will wait till tomorrow to see if there are any more comments.




[GitHub] [incubator-tvm] wyc-ruiker commented on a change in pull request #4822: [Frontend][TFLite] Add MIRROR_PAD operator

2020-02-05 Thread GitBox
wyc-ruiker commented on a change in pull request #4822: [Frontend][TFLite] Add 
MIRROR_PAD operator
URL: https://github.com/apache/incubator-tvm/pull/4822#discussion_r375629498
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1422,10 +1423,48 @@ def convert_pad(self, op):
         # convert list of lists to tuple of tuples
         paddings = tuple(tuple(l) for l in pad_list)
 
-        # Use default pad_value 0 because TFLite does not support constant_values parameter
+        # Use default pad_value 0 because TFLite PAD does not support constant_values parameter
         out = _op.nn.pad(in_expr, paddings)
         return out
 
+    def convert_mirror_pad(self, op):
+        """Convert TFLite MIRROR_PAD"""
+        try:
+            from tflite.Operator import Operator
+            from tflite.BuiltinOptions import BuiltinOptions
+            from tflite.MirrorPadOptions import MirrorPadOptions
+        except ImportError:
+            raise ImportError("The tflite package must be installed")
+
+        # the quantized form MirrorPad is not yet implemented in TFLite.
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized MIRROR_PAD operator is not supported yet.')
 
 Review comment:
   I think this exception cannot be triggered as long as quantized MIRROR_PAD 
is not implemented by TFLite itself. If the exception is ever triggered, it 
means the TVM stack doesn't yet support quantized MIRROR_PAD.




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6

2020-02-05 Thread GitBox
masahi commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu 
torch1.4 and onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826#discussion_r375627008
 
 

 ##
 File path: docker/install/ubuntu_install_onnx.sh
 ##
 @@ -21,7 +21,7 @@ set -u
 set -o pipefail
 
 # fix to certain version for now
-pip3 install onnx==1.5.0
+pip3 install onnx==1.6.0
 pip3 install onnxruntime==1.0.0
 
 # torch depends on a number of other packages, but unhelpfully, does
 
 Review comment:
   ok




[GitHub] [incubator-tvm] tqchen commented on issue #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-02-05 Thread GitBox
tqchen commented on issue #4543: [FRONTEND][TFLITE] Add support for 
TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#issuecomment-582721889
 
 
   @mbarrett97 please rebase, @FrozenGene please follow up :)




[GitHub] [incubator-tvm] tqchen commented on issue #4827: [CI][DOCKER] Update ci-gpu to v0.60

2020-02-05 Thread GitBox
tqchen commented on issue #4827: [CI][DOCKER] Update ci-gpu to v0.60
URL: https://github.com/apache/incubator-tvm/pull/4827#issuecomment-582721675
 
 
   ```
   docker/bash.sh tvmai/ci-gpu:v0.60
   
   >>> import torch
   >>> import torchvision
   >>> torch.__version__
   '1.4.0'
   >>> torchvision.__version__
   '0.5.0'
   >>> 
   ```




[GitHub] [incubator-tvm] tqchen commented on issue #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6

2020-02-05 Thread GitBox
tqchen commented on issue #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826#issuecomment-582721513
 
 
   @masahi please take another look




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6

2020-02-05 Thread GitBox
tqchen commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu 
torch1.4 and onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826#discussion_r375626423
 
 

 ##
 File path: docker/install/ubuntu_install_onnx.sh
 ##
 @@ -21,7 +21,7 @@ set -u
 set -o pipefail
 
 # fix to certain version for now
-pip3 install onnx==1.5.0
+pip3 install onnx==1.6.0
 pip3 install onnxruntime==1.0.0
 
 # torch depends on a number of other packages, but unhelpfully, does
 
 Review comment:
   Nice catch, the binary image is correct; somehow I missed the patch.
   ```
   Type "help", "copyright", "credits" or "license" for more information.
   >>> import torch
   >>> import torchvision
   >>> torch.__version__
   '1.4.0'
   >>> torchvision.__version__
   '0.5.0'
   >>> 
   ```




[GitHub] [incubator-tvm] masahi commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6

2020-02-05 Thread GitBox
masahi commented on a change in pull request #4826: [CI][DOCKER] Update ci-gpu 
torch1.4 and onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826#discussion_r375625372
 
 

 ##
 File path: docker/install/ubuntu_install_onnx.sh
 ##
 @@ -21,7 +21,7 @@ set -u
 set -o pipefail
 
 # fix to certain version for now
-pip3 install onnx==1.5.0
+pip3 install onnx==1.6.0
 pip3 install onnxruntime==1.0.0
 
 # torch depends on a number of other packages, but unhelpfully, does
 
 Review comment:
   have you updated torch?




[GitHub] [incubator-tvm] tqchen commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

2020-02-05 Thread GitBox
tqchen commented on issue #4696: [Relay][Frontend][TFlite] Add support for 
quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-582719957
 
 
   @inadob please rebase, @FrozenGene please 
https://docs.tvm.ai/contribute/code_review.html#approve-and-request-changes-explicitly




[GitHub] [incubator-tvm] tqchen commented on issue #4564: [Doc] Introduction to module serialization

2020-02-05 Thread GitBox
tqchen commented on issue #4564: [Doc] Introduction to module serialization
URL: https://github.com/apache/incubator-tvm/pull/4564#issuecomment-582719710
 
 
   ping @zhiics @yzhliu @FrozenGene @yangjunpro please follow up on this 
thread :)




[GitHub] [incubator-tvm] tqchen closed pull request #4270: [Codgen] Thread variable use before define

2020-02-05 Thread GitBox
tqchen closed pull request #4270: [Codgen] Thread variable use before define
URL: https://github.com/apache/incubator-tvm/pull/4270
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #3943: Qnn quantize with min max using Mxnet flavor to support Mxnet prequantized models.

2020-02-05 Thread GitBox
tqchen commented on issue #3943: Qnn quantize with min max using Mxnet flavor 
to support Mxnet prequantized models.
URL: https://github.com/apache/incubator-tvm/pull/3943#issuecomment-582719478
 
 
   closed due to inactive status




[GitHub] [incubator-tvm] tqchen closed pull request #3943: Qnn quantize with min max using Mxnet flavor to support Mxnet prequantized models.

2020-02-05 Thread GitBox
tqchen closed pull request #3943: Qnn quantize with min max using Mxnet flavor 
to support Mxnet prequantized models.
URL: https://github.com/apache/incubator-tvm/pull/3943
 
 
   




[GitHub] [incubator-tvm] tqchen closed pull request #4370: [WIP] Relay visualization: exporter + visualizer

2020-02-05 Thread GitBox
tqchen closed pull request #4370: [WIP] Relay visualization: exporter + 
visualizer
URL: https://github.com/apache/incubator-tvm/pull/4370
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4370: [WIP] Relay visualization: exporter + visualizer

2020-02-05 Thread GitBox
tqchen commented on issue #4370: [WIP] Relay visualization: exporter + 
visualizer
URL: https://github.com/apache/incubator-tvm/pull/4370#issuecomment-582719302
 
 
   closed due to inactive status, @hcho3 feel free to reopen :)




[GitHub] [incubator-tvm] tqchen closed pull request #4456: [Relay][Legalize] Legalize conv2d cuda for NHWC

2020-02-05 Thread GitBox
tqchen closed pull request #4456: [Relay][Legalize] Legalize conv2d cuda for 
NHWC
URL: https://github.com/apache/incubator-tvm/pull/4456
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #3934: [WIP][Runtime] MISRA-C compliant TVM runtime

2020-02-05 Thread GitBox
tqchen commented on issue #3934: [WIP][Runtime] MISRA-C compliant TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/3934#issuecomment-582719122
 
 
   ping @liangfu please see if you are interested in continuing to push this 
thread




[GitHub] [incubator-tvm] tqchen closed pull request #4475: onnx frontend support layout choice depend on hardware target support…

2020-02-05 Thread GitBox
tqchen closed pull request #4475: onnx frontend support layout choice depend on 
hardware target support…
URL: https://github.com/apache/incubator-tvm/pull/4475
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4475: onnx frontend support layout choice depend on hardware target support…

2020-02-05 Thread GitBox
tqchen commented on issue #4475: onnx frontend support layout choice depend on 
hardware target support…
URL: https://github.com/apache/incubator-tvm/pull/4475#issuecomment-582719028
 
 
   Closing this for now, as it is likely superseded by the layout transform 
work by @anijain2305.
   Thanks @Beya2019 




[GitHub] [incubator-tvm] tqchen commented on issue #4750: Fix onnx import bugs

2020-02-05 Thread GitBox
tqchen commented on issue #4750: Fix onnx import bugs
URL: https://github.com/apache/incubator-tvm/pull/4750#issuecomment-582718831
 
 
   ping @kice can you please add a few test cases?




[GitHub] [incubator-tvm] tqchen commented on issue #4815: [TOPI][Relay] Add bitwise ops

2020-02-05 Thread GitBox
tqchen commented on issue #4815: [TOPI][Relay] Add bitwise ops
URL: https://github.com/apache/incubator-tvm/pull/4815#issuecomment-582718578
 
 
   Thanks @jroesch @jwfromm @abergeron !




[incubator-tvm] branch master updated (19d0d15 -> 2bd2f99)

2020-02-05 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 19d0d15  [CONTRIB][CC] Enhance cc.cross_compiler (#4817)
 add 2bd2f99  [TOPI][Relay] Add bitwise ops (#4815)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/_tensor.py   |  7 +++
 python/tvm/relay/op/tensor.py| 70 ++
 src/relay/op/tensor/binary.cc| 18 
 src/relay/op/tensor/unary.cc | 13 +-
 topi/include/topi/broadcast.h| 42 ++
 topi/include/topi/elemwise.h | 17 
 topi/python/topi/broadcast.py| 73 
 topi/src/topi.cc |  8 
 topi/tests/python/test_topi_broadcast.py | 72 +++
 9 files changed, 319 insertions(+), 1 deletion(-)
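
For context, the ops added by #4815 are exposed on the Relay Python side; a
minimal usage sketch (assuming the `relay.bitwise_*` names listed in
python/tvm/relay/op/tensor.py above):

```
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="int32")
y = relay.var("y", shape=(4,), dtype="int32")

# Combine the newly added elementwise bitwise operators.
z = relay.bitwise_or(relay.bitwise_and(x, y), relay.bitwise_not(x))
func = relay.Function([x, y], z)
print(func)
```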



[GitHub] [incubator-tvm] tqchen merged pull request #4815: [TOPI][Relay] Add bitwise ops

2020-02-05 Thread GitBox
tqchen merged pull request #4815: [TOPI][Relay] Add bitwise ops
URL: https://github.com/apache/incubator-tvm/pull/4815
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4

2020-02-05 Thread GitBox
tqchen commented on issue #4756: [Docker] update onnx to 1.6 and torch to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-582718289
 
 
   updates
   - https://github.com/apache/incubator-tvm/pull/4827
   - https://github.com/apache/incubator-tvm/pull/4826




[GitHub] [incubator-tvm] tqchen opened a new pull request #4827: [CI][DOCKER] Update ci-gpu to v0.60

2020-02-05 Thread GitBox
tqchen opened a new pull request #4827: [CI][DOCKER] Update ci-gpu to v0.60
URL: https://github.com/apache/incubator-tvm/pull/4827
 
 
   cc @masahi 




[GitHub] [incubator-tvm] tqchen opened a new pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and onnx1.6

2020-02-05 Thread GitBox
tqchen opened a new pull request #4826: [CI][DOCKER] Update ci-gpu torch1.4 and 
onnx1.6
URL: https://github.com/apache/incubator-tvm/pull/4826
 
 
   cc @masahi 
   
   This is a docker image verified with the test cases passing




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375619314
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375617997
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
FrozenGene commented on a change in pull request #4497: [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375618821
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,

[GitHub] [incubator-tvm] soiferj commented on issue #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
soiferj commented on issue #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825#issuecomment-582713458
 
 
   Awesome!! I was looking at implementing this myself yesterday. I’ll take a 
look as soon as possible. Thanks for sending the PR!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene merged pull request #4817: [CONTRIB][CC] Enhance cc.cross_compiler

2020-02-05 Thread GitBox
FrozenGene merged pull request #4817: [CONTRIB][CC] Enhance cc.cross_compiler
URL: https://github.com/apache/incubator-tvm/pull/4817
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (5ea4f0d -> 19d0d15)

2020-02-05 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 5ea4f0d  [Relay] Conv2D padding representation (#4787)
 add 19d0d15  [CONTRIB][CC] Enhance cc.cross_compiler (#4817)

No new revisions were added by this update.

Summary of changes:
 python/tvm/contrib/cc.py  | 69 +++
 tests/python/unittest/test_module_load.py |  7 ++--
 2 files changed, 46 insertions(+), 30 deletions(-)



[GitHub] [incubator-tvm] FrozenGene commented on issue #4817: [CONTRIB][CC] Enhance cc.cross_compiler

2020-02-05 Thread GitBox
FrozenGene commented on issue #4817: [CONTRIB][CC] Enhance cc.cross_compiler
URL: https://github.com/apache/incubator-tvm/pull/4817#issuecomment-582711329
 
 
   Thanks @tqchen @jroesch It is Merged.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm opened a new pull request #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
jwfromm opened a new pull request #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825
 
 
   This PR adds LSTM support to the relay Onnx frontend. 
   
   Besides adding the LSTM parser itself, we encountered an issue where, for
some Onnx operations (like LSTMs), arguments are optional. The current method
for passing arguments to converters is simply to pack them into a list; however,
because some arguments are optional, the position of each input becomes
inconsistent. Instead, we should use a dictionary mapping input names to their
values. However, changing all inputs to a dictionary would require changing all
the current operators and would cause problems for direct Onnx to Relay
conversions. Our workaround here is to add the `onnx_input` class, which can be
accessed as a list (as we previously did) or with dictionary-style lookup by
input name.
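   
   A minimal sketch of such a hybrid container (purely illustrative; the
constructor signature and helper names below are assumptions, not the exact
implementation in this PR):
   
    class onnx_input(list):
        """A list that can also be looked up by ONNX input name (sketch)."""
        def __init__(self, values, names):
            super().__init__(values)
            # map each declared input name to its position in the argument list
            self._name_to_pos = {n: i for i, n in enumerate(names)}
    
        def __getitem__(self, item):
            if isinstance(item, str):
                pos = self._name_to_pos.get(item)
                # optional inputs may simply be absent; report them as None
                if pos is None or pos >= len(self):
                    return None
                return list.__getitem__(self, pos)
            return list.__getitem__(self, item)
    
    # positional access still works; named access tolerates missing optional inputs
    inputs = onnx_input(["X", "W", "R"], ["X", "W", "R", "B"])
    assert inputs[0] == "X"
    assert inputs["W"] == "W"
    assert inputs["B"] is None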


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm commented on issue #4825: [Frontend][ONNX] LSTM Support

2020-02-05 Thread GitBox
jwfromm commented on issue #4825: [Frontend][ONNX] LSTM Support
URL: https://github.com/apache/incubator-tvm/pull/4825#issuecomment-582692382
 
 
   @masahi, @soiferj, @mbrookhart can you take a look at this PR?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-05 Thread GitBox
zhiics commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r375526504
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'. This is primarily intended to be used alongside the
+ * external codegen infrastructure to support the case where multiple
+ * Relay operators map to a single external operator.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const std::string& pattern_name, const Expr& 
pattern)
+: pattern_name_(pattern_name), pattern_(pattern) {}
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      // if we haven't encountered this var yet, make a new free var and associate
+      // it with the value at 'root'
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return std::move(free_var);
+    } else {
+      // if we have encountered this var already, return the free var that was created
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  /* How does this work?
 
 Review comment:
   Let's document this in the following style
   
   \brief
   
   \param A
   ...
   \param N
   
   \return
   
   \note How does it work? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-05 Thread GitBox
zhiics commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r375530819
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'. This is primarily intended to be used alongside the
+ * external codegen infrastructure to support the case where multiple
+ * Relay operators map to a single external operator.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const std::string& pattern_name, const Expr& 
pattern)
+: pattern_name_(pattern_name), pattern_(pattern) {}
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      // if we haven't encountered this var yet, make a new free var and associate
+      // it with the value at 'root'
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return std::move(free_var);
+    } else {
+      // if we have encountered this var already, return the free var that was created
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  /* How does this work?
+   *
+   * A pattern consists of Relay expression containing only operator call 
nodes, constants
+   * and free variables. The free variables indicate where the pattern can 
'attach' in your
+   * graph. This function takes the final call node of the pattern and the 
call node currently
+   * being traversed in the Relay graph. It traverses through the pattern in 
lockstep with call node
+   * from the graph (referred to as the 'root' node here) to check they're 
identical. If at any point
+   * they differ, an empty expression is returned to signify the extract 
failed. If a free var is
+   * reached in the pattern, the corresponding value in the root is associated 
with the name of the
+   * free var (via the var_map) so that when we construct the composite 
function, the inputs match
+   * up correctly with the rest of the graph. The return value of this 
function when successful is
+   * a new Relay expression ready to be wrapped into a composite function.
+   */
+  Expr ExtractPattern(const Call& pattern, const Call& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    // check to make sure both calls are to operators (not functions)
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return Expr();
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return Expr();
+
+    unsigned int i = 0;
+    Array<Expr> new_args;
+    for (const auto& arg : pattern->args) {
+      Expr new_arg;
+      if (arg->IsInstance<CallNode>()) {
+        // fail if the root argument is not also a call node
+        if (!root->args[i]->IsInstance<CallNode>()) {
+          return Expr();
+        }
+        // if it's a call node, recursively call this function
+        new_arg = ExtractPattern(Downcast<Call>(arg),
+                                 Downcast<Call>(root->args[i]),
+                                 var_map);
+      } else if (arg->IsInstance<VarNode>()) {
+        // if there's a var in the pattern, it must be a free var
+        // so call the function to update the var_map
+        new_arg = ExtractPattern(Downcast<Var>(arg),
+                                 root->args[i],
+                                 var_map);
+      } else if (arg->IsInstance<ConstantNode>()) {
+        // if there's a constant, simply get the corresponding
+        // value of the constant from the root
+        new_arg = ExtractPattern(Downcast<Constant>(arg),
+                                 root->args[i],
+                                 var_map);
+      }
+      if (!new_arg.defined()) {
+        return Expr();
+      }
+   

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-05 Thread GitBox
zhiics commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r375524422
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'. This is primarily intended to be used alongside the
+ * external codegen infrastructure to support the case where multiple
+ * Relay operators map to a single external operator.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
 
 Review comment:
   remove one blank line


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-05 Thread GitBox
zhiics commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r375542736
 
 

 ##
 File path: tests/python/relay/test_pass_merge_composite.py
 ##
 @@ -0,0 +1,439 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Unit tests for merge composite."""
+from tvm import expr
+from tvm import relay
+from tvm.relay.testing import run_opt_pass
+
+"""
+The merge composite pass is designed to merge multiple relay operators, that
+match a given pattern, and combine them into a single relay function.
+
+For example suppose we have the graph:
+
+conv2d
+  |   (merge composite pass)
+   bias_add>   conv2d_bias_relu
+  |(our target)
+ relu
+
+Our Relay IR before the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32], %kernel: Tensor[(256, 512, 
1, 1), float32],
+%bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), 
float32] {
+%0 = nn.conv2d(%data, %kernel, kernel_size=[1, 1])
+/* ty=Tensor[(1, 256, 28, 28), float32] */;
+%1 = nn.bias_add(%0, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+Our Relay IR after the pass:
+fn (%data: Tensor[(1, 512, 28, 28), float32], %kernel: Tensor[(256, 512, 
1, 1), float32],
+%bias: Tensor[(256), float32]) -> Tensor[(1, 256, 28, 28), 
float32] {
+  %2 = fn (%x: Tensor[(1, 512, 28, 28), float32], %y: Tensor[(256, 512, 1, 
1), float32],
+%z: Tensor[(256), float32], Primitive=1, 
Composite="conv2d_bias_relu") ->
+Tensor[(1, 256, 28, 28), float32] {
+%0 = nn.conv2d(%x, %y, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 28, 
28), float32] */;
+%1 = nn.bias_add(%0, %z) /* ty=Tensor[(1, 256, 28, 28), float32] */;
+nn.relu(%1) /* ty=Tensor[(1, 256, 28, 28), float32] */
+  };
+  %2(%data, %kernel, %bias) /* ty=Tensor[(1, 256, 28, 28), float32] */
+}
+
+As you can see in the second relay example, the pattern we specified has been 
wrapped
+in a function. The function is then called, producing the same result as the 
first relay
+example.
+
+One convenient use for this pass is to offload multiple operators to a single 
external
+codegen function.
+"""
+
+
+def make_add_sub_mul_pattern():
+"""Create a pattern to match the following graph.
+
+add  sub
+ \   /
+  \ /
+  mul
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+sub_node = relay.subtract(x, y)
+mul_node = relay.multiply(add_node, sub_node)
+return mul_node
+
+
+def make_add_relu_pattern():
+"""Create a pattern to match the following graph.
+
+add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+add_node = relay.add(x, y)
+r = relay.nn.relu(add_node)
+return r
+
+
+def make_conv_bias_relu_pattern():
+"""Create a pattern to match the following graph.
+
+   conv2d
+ |
+  bias_add
+ |
+   relu
+"""
+x = relay.var('x')
+y = relay.var('y')
+z = relay.var('z')
+conv_node = relay.nn.conv2d(x, y)
+bias_node = relay.nn.bias_add(conv_node, z)
+r = relay.nn.relu(bias_node)
+return r
+
+
+def test_simple_merge():
+"""Test composite function is correctly produced from simple graph.
+
+We could expect the pattern `make_add_relu_pattern` to be merged
+into a single op `add_relu`.
+
+a  b
+\ /   a  b
+add>  \ /
+ | add_relu
+   relu
+
+"""
+pattern_table = [
+("add_relu", make_add_relu_pattern())
+]
+
+def before():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+add_node = relay.add(a, b)
+r = relay.nn.relu(add_node)
+return relay.Function([a, b], r)
+
+def expected():
+a = relay.var('a', shape=(10, 10))
+b = relay.var('b', shape=(10, 10))
+
+# add_relu function
+in_1 = relay.var('in_1', shape=(10, 10))
+in_2 = relay.var('in_2', shape=(

[GitHub] [incubator-tvm] zhiics commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-02-05 Thread GitBox
zhiics commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r375547370
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,205 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'. This is primarily intended to be used alongside the
+ * external codegen infrastructure to support the case where multiple
+ * Relay operators map to a single external operator.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const std::string& pattern_name, const Expr& 
pattern)
+: pattern_name_(pattern_name), pattern_(pattern) {}
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      // if we haven't encountered this var yet, make a new free var and associate
+      // it with the value at 'root'
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return std::move(free_var);
+    } else {
+      // if we have encountered this var already, return the free var that was created
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  /* How does this work?
+   *
+   * A pattern consists of Relay expression containing only operator call 
nodes, constants
+   * and free variables. The free variables indicate where the pattern can 
'attach' in your
+   * graph. This function takes the final call node of the pattern and the 
call node currently
+   * being traversed in the Relay graph. It traverses through the pattern in 
lockstep with call node
+   * from the graph (referred to as the 'root' node here) to check they're 
identical. If at any point
+   * they differ, an empty expression is returned to signify the extract 
failed. If a free var is
+   * reached in the pattern, the corresponding value in the root is associated 
with the name of the
+   * free var (via the var_map) so that when we construct the composite 
function, the inputs match
+   * up correctly with the rest of the graph. The return value of this 
function when successful is
+   * a new Relay expression ready to be wrapped into a composite function.
+   */
+  Expr ExtractPattern(const Call& pattern, const Call& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    // check to make sure both calls are to operators (not functions)
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return Expr();
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return Expr();
+
+    unsigned int i = 0;
+    Array<Expr> new_args;
+    for (const auto& arg : pattern->args) {
+      Expr new_arg;
+      if (arg->IsInstance<CallNode>()) {
+        // fail if the root argument is not also a call node
+        if (!root->args[i]->IsInstance<CallNode>()) {
+          return Expr();
+        }
+        // if it's a call node, recursively call this function
+        new_arg = ExtractPattern(Downcast<Call>(arg),
+                                 Downcast<Call>(root->args[i]),
+                                 var_map);
+      } else if (arg->IsInstance<VarNode>()) {
+        // if there's a var in the pattern, it must be a free var
+        // so call the function to update the var_map
+        new_arg = ExtractPattern(Downcast<Var>(arg),
+                                 root->args[i],
+                                 var_map);
+      } else if (arg->IsInstance<ConstantNode>()) {
+        // if there's a constant, simply get the corresponding
+        // value of the constant from the root
+        new_arg = ExtractPattern(Downcast<Constant>(arg),
+                                 root->args[i],
+                                 var_map);
+      }
+      if (!new_arg.defined()) {
+        return Expr();
+      }
+   

[GitHub] [incubator-tvm] tqchen commented on issue #4821: OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted and OSError: [WinError 1

2020-02-05 Thread GitBox
tqchen commented on issue #4821: OSError: [WinError 10048] Only one usage of 
each socket address (protocol/network address/port) is normally permitted and 
OSError: [WinError 10049] The requested address is not valid in its context
URL: https://github.com/apache/incubator-tvm/issues/4821#issuecomment-582672692
 
 
   Given that we do not have a clearly actionable item at the moment, I would
recommend starting a troubleshooting thread on https://discuss.tvm.ai/ instead.
Feel free to open a new thread once we have something that is actionable.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen closed issue #4821: OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted and OSError: [WinError 10049]

2020-02-05 Thread GitBox
tqchen closed issue #4821: OSError: [WinError 10048] Only one usage of each 
socket address (protocol/network address/port) is normally permitted and 
OSError: [WinError 10049] The requested address is not valid in its context
URL: https://github.com/apache/incubator-tvm/issues/4821
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (79ce87f -> 5ea4f0d)

2020-02-05 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 79ce87f  [Relay][Frontend][TFLite] Add parser support for logical 
operators (#4642)
 add 5ea4f0d  [Relay] Conv2D padding representation (#4787)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/nn/nn.py| 23 --
 python/tvm/relay/op/nn/util.py  | 56 +
 tests/python/relay/test_pass_alter_op_layout.py |  2 +-
 tests/python/unittest/test_graph_tuner_core.py  | 22 +-
 topi/python/topi/cuda/conv2d.py |  3 +-
 5 files changed, 90 insertions(+), 16 deletions(-)
 create mode 100644 python/tvm/relay/op/nn/util.py



[GitHub] [incubator-tvm] kevinthesun commented on issue #4787: [Relay] Conv2D padding representation

2020-02-05 Thread GitBox
kevinthesun commented on issue #4787: [Relay] Conv2D padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#issuecomment-582664192
 
 
   Thanks @zxy844288792 @icemelon9 @comaniac 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4790: Fast exponent

2020-02-05 Thread GitBox
anijain2305 commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r375563628
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,66 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+/*
+ * \brief Fast exponential function implementation
+ * log2(e^x) = x * log2(e) * log2(2) =>
+ * log2(e^x) = log2(2^(x*log2(e))) =>
+ * e^x = 2^(x*log2(e))
+ * Splitting power x*log2(e) into integer and fractional parts:
+ * e^(n+f) = e^n * e^f
+ * n = floor(x*log2(e) + 1/2)
+ * f = x - n * ln(2)
+ * exp(x) = 2^n * exp(y)
+ * Approximation for fractional part:
+ * y = exp(f) = 1 + 2 * P(x**2)/(Q(x**2) - P(x**2))
+ */
+inline Tensor fast_exp(const Tensor& _x,
+   std::string name,
+   std::string tag) {
+  auto x_hi = make_const(DataType::Float(32), 88.3762626647950f);
+  auto x_lo = make_const(DataType::Float(32), -88.3762626647949f);
+  auto log2e = make_const(DataType::Float(32), 1.44269504088896341f);
+  auto ln2 = make_const(DataType::Float(32), 0.6931471805599453f);
+  PrimExpr p[6] = {make_const(DataType::Float(32), 1.9875691500E-4f),
+   make_const(DataType::Float(32), 1.3981999507E-3f),
+   make_const(DataType::Float(32), 8.3334519073E-3f),
+   make_const(DataType::Float(32), 4.1665795894E-2f),
+   make_const(DataType::Float(32), 1.6666665459E-1f),
+   make_const(DataType::Float(32), 5.0000001201E-1f)};
+  auto one = make_const(DataType::Float(32), 1.0f);
+  auto one_half = make_const(DataType::Float(32), 0.5f);
+  auto b = make_const(DataType::Float(32), 127.0f);
+
+  return compute(_x->shape,
+ [&](const Array& i) {
+   // clamp x
+   auto x = ::tvm::max(::tvm::min(_x(i), x_hi), x_lo);
+   // integer part
+   auto n = ::tvm::floor(x * log2e + one_half);
+   // fractional part
+   auto f = x - n * ln2;
+   auto y = (((((p[0] * f + p[1]) * f + p[2]) * f + p[3]) * f + p[4]) * f + p[5]) * f * f + f + one;
+   // Return 2^m * exp(r).
+   auto ef = tvm::reinterpret(DataType::Float(32),
+  ::tvm::cast(DataType::Int(32), n + b) << 23);
+   return ::tvm::max(ef * y, _x(i)); // NOLINT(*)
+ },
+ name, tag);
+}
+
+
+inline Tensor exp(const Tensor& x,
+  std::string name = "T_exp",
+  std::string tag = kElementWise) {
+  if (x->dtype == DataType::Float(32)) {
+return fast_exp(x, name, tag);
 
 Review comment:
   How about having 3 new Relay contrib operators - contrib.fast_exp,
contrib.fast_tanh, and contrib.fast_softmax? We can then add a Relay pass at
opt_level 4 that legalizes these functions to their approximate counterparts.
   
   Edit - Sorry, I should have said why these 3. For softmax, we are essentially
playing with the exp op, and softmax takes substantial time in SSD models, where
the input shape is very large. For tanh, we already have a fast_tanh that is
enabled by default. We should change that.
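   
   For reference, a rough NumPy sketch of the same range-reduction-plus-polynomial
scheme used by the fast_exp in the diff above (the six coefficients are the
standard cephes expf constants; this is only an illustration of the math, not the
TVM kernel itself):
   
    import numpy as np
    
    def fast_exp(x):
        # clamp to the range where the float32 exponent trick stays finite
        x = np.clip(np.float32(x), -88.3762626647949, 88.3762626647950)
        log2e = np.float32(1.44269504088896341)
        ln2 = np.float32(0.6931471805599453)
        p = np.array([1.9875691500e-4, 1.3981999507e-3, 8.3334519073e-3,
                      4.1665795894e-2, 1.6666665459e-1, 5.0000001201e-1],
                     dtype=np.float32)
        n = np.floor(x * log2e + 0.5)   # integer part: exp(x) = 2^n * exp(f)
        f = x - n * ln2                 # fractional remainder
        y = ((((((p[0] * f + p[1]) * f + p[2]) * f + p[3]) * f + p[4]) * f + p[5])
             * f * f + f + 1.0)         # polynomial approximation of exp(f)
        # multiply by 2^n; the TVM kernel builds the float32 exponent bits with a
        # reinterpret, ldexp is the plain NumPy equivalent
        return np.ldexp(y, n.astype(np.int32))
    
    print(fast_exp(1.0), np.exp(1.0))  # should agree to ~1e-7 relative error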


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] kevinthesun merged pull request #4787: [Relay] Conv2D padding representation

2020-02-05 Thread GitBox
kevinthesun merged pull request #4787: [Relay] Conv2D padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] junrushao1994 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-02-05 Thread GitBox
junrushao1994 commented on a change in pull request #4644: [WIP] Relay op 
strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r375566300
 
 

 ##
 File path: include/tvm/relay/op_attr_types.h
 ##
 @@ -207,14 +216,182 @@ enum AnyCodegenStrategy {
   kVariableDimensions
 };
 
-/* \brief A runtime representation of shape. */
+/*! \brief A runtime representation of shape. */
 using Shape = Array<IndexExpr>;
 
 using FShapeFunc = runtime::TypedPackedFunc<
   Array(const Attrs& attrs,
  const Array& inputs,
  const Array& out_ndims)>;
 
+/*!
+ * \brief Operator implementation in TVM.
+ */
+class OpImplementNode : public Object {
+ public:
+  /*! \brief Compute function */
+  FTVMCompute fcompute;
+  /*! \brief Schedule function */
+  FTVMSchedule fschedule;
+  /*! \brief Priority level */
+  Integer plevel;
+
+  void VisitAttrs(tvm::AttrVisitor* v) {
+v->Visit("plevel", &plevel);
+  }
+
+  static constexpr const char* _type_key = "relay.OpImplement";
+  TVM_DECLARE_FINAL_OBJECT_INFO(OpImplementNode, Object);
+};
+
+/*!
+ * \brief Operator implementation class.
+ */
+class OpImplement : public ObjectRef {
+ public:
+  /*! \brief default constructor  */
+  OpImplement() {}
+  /*! \brief constructor from node pointer */
+  explicit OpImplement(ObjectPtr<Object> n) : ObjectRef(n) {}
+  /*!
+   * \brief access the internal node container
+   * \return the pointer to the internal node container
+   */
+  inline const OpImplementNode* operator->() const;
 
 Review comment:
   Shall we use `TVM_DEFINE_OBJECT_REF_METHODS` instead?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375563704
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,
+ 

[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #4790: Fast exponent

2020-02-05 Thread GitBox
anijain2305 commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r375563628
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,66 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+/*
+ * \brief Fast exponential function implementation
+ * log2(e^x) = x * log2(e) * log2(2) =>
+ * log2(e^x) = log2(2^(x*log2(e))) =>
+ * e^x = 2^(x*log2(e))
+ * Splitting power x*log2(e) into integer and fractional parts:
+ * e^(n+f) = e^n * e^f
+ * n = floor(x*log2(e) + 1/2)
+ * f = x - n * ln(2)
+ * exp(x) = 2^n * exp(y)
+ * Approximation for fractional part:
+ * y = exp(f) = 1 + 2 * P(x**2)/(Q(x**2) - P(x**2))
+ */
+inline Tensor fast_exp(const Tensor& _x,
+   std::string name,
+   std::string tag) {
+  auto x_hi = make_const(DataType::Float(32), 88.3762626647950f);
+  auto x_lo = make_const(DataType::Float(32), -88.3762626647949f);
+  auto log2e = make_const(DataType::Float(32), 1.44269504088896341f);
+  auto ln2 = make_const(DataType::Float(32), 0.6931471805599453f);
+  PrimExpr p[6] = {make_const(DataType::Float(32), 1.9875691500E-4f),
+   make_const(DataType::Float(32), 1.3981999507E-3f),
+   make_const(DataType::Float(32), 8.3334519073E-3f),
+   make_const(DataType::Float(32), 4.1665795894E-2f),
+   make_const(DataType::Float(32), 1.6666665459E-1f),
+   make_const(DataType::Float(32), 5.0000001201E-1f)};
+  auto one = make_const(DataType::Float(32), 1.0f);
+  auto one_half = make_const(DataType::Float(32), 0.5f);
+  auto b = make_const(DataType::Float(32), 127.0f);
+
+  return compute(_x->shape,
+ [&](const Array& i) {
+   // clamp x
+   auto x = ::tvm::max(::tvm::min(_x(i), x_hi), x_lo);
+   // integer part
+   auto n = ::tvm::floor(x * log2e + one_half);
+   // fractional part
+   auto f = x - n * ln2;
+   auto y = (((((p[0] * f + p[1]) * f + p[2]) * f + p[3]) * f + p[4]) * f + p[5]) * f * f + f + one;
+   // Return 2^m * exp(r).
+   auto ef = tvm::reinterpret(DataType::Float(32),
+  ::tvm::cast(DataType::Int(32), n + b) << 23);
+   return ::tvm::max(ef * y, _x(i)); // NOLINT(*)
+ },
+ name, tag);
+}
+
+
+inline Tensor exp(const Tensor& x,
+  std::string name = "T_exp",
+  std::string tag = kElementWise) {
+  if (x->dtype == DataType::Float(32)) {
+return fast_exp(x, name, tag);
 
 Review comment:
   How about having 3 new relay contrib operators - contrib.fast_exp, 
contrib.fast_tanh, contrib.fast_softmax. We can then add a Relay pass with 
opt_level 4, that legalizes these functions to their approximate counterparts.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4818: [REFACTOR][PY] Establish tvm.runtime

2020-02-05 Thread GitBox
tqchen commented on a change in pull request #4818: [REFACTOR][PY] Establish 
tvm.runtime
URL: https://github.com/apache/incubator-tvm/pull/4818#discussion_r375561036
 
 

 ##
 File path: python/tvm/runtime/ndarray.py
 ##
 @@ -18,132 +18,36 @@
 """Runtime NDArray api"""
 import ctypes
 import numpy as np
-from .base import _LIB, check_call, c_array, string_types, _FFI_MODE, c_str
-from .runtime_ctypes import TVMType, TVMContext, TVMArray, TVMArrayHandle
-from .runtime_ctypes import TypeCode, tvm_shape_index_t
+import tvm._ffi
+
+from tvm._ffi.base import _LIB, check_call, c_array, string_types, _FFI_MODE
+from tvm._ffi.runtime_ctypes import DataType, TVMContext, TVMArray, 
TVMArrayHandle
+from tvm._ffi.runtime_ctypes import TypeCode, tvm_shape_index_t
 
 try:
 # pylint: disable=wrong-import-position
 if _FFI_MODE == "ctypes":
 raise ImportError()
-from ._cy3.core import _set_class_ndarray, _make_array, _from_dlpack
-from ._cy3.core import NDArrayBase as _NDArrayBase
+from tvm._ffi._cy3.core import _set_class_ndarray, _make_array, 
_from_dlpack
+from tvm._ffi._cy3.core import NDArrayBase
 except (RuntimeError, ImportError):
 # pylint: disable=wrong-import-position
-from ._ctypes.ndarray import _set_class_ndarray, _make_array, _from_dlpack
-from ._ctypes.ndarray import NDArrayBase as _NDArrayBase
-
-
-def context(dev_type, dev_id=0):
-"""Construct a TVM context with given device type and id.
-
-Parameters
---
-dev_type: int or str
-The device type mask or name of the device.
-
-dev_id : int, optional
-The integer device id
-
-Returns
----
-ctx: TVMContext
-The corresponding context.
-
-Examples
-
-Context can be used to create reflection of context by
-string representation of the device type.
-
-.. code-block:: python
-
-  assert tvm.context("cpu", 1) == tvm.cpu(1)
-  assert tvm.context("gpu", 0) == tvm.gpu(0)
-  assert tvm.context("cuda", 0) == tvm.gpu(0)
-"""
-if isinstance(dev_type, string_types):
-if '-device=micro_dev' in dev_type:
-dev_type = 'micro_dev'
-else:
-dev_type = dev_type.split()[0]
-if dev_type not in TVMContext.STR2MASK:
-raise ValueError("Unknown device type %s" % dev_type)
-dev_type = TVMContext.STR2MASK[dev_type]
-return TVMContext(dev_type, dev_id)
-
-
-def numpyasarray(np_data):
-"""Return a TVMArray representation of a numpy array.
-"""
-data = np_data
-assert data.flags['C_CONTIGUOUS']
-arr = TVMArray()
-shape = c_array(tvm_shape_index_t, data.shape)
-arr.data = data.ctypes.data_as(ctypes.c_void_p)
-arr.shape = shape
-arr.strides = None
-arr.dtype = TVMType(np.dtype(data.dtype).name)
-arr.ndim = data.ndim
-# CPU device
-arr.ctx = context(1, 0)
-return arr, shape
-
-
-def empty(shape, dtype="float32", ctx=context(1, 0)):
-"""Create an empty array given shape and device
-
-Parameters
---
-shape : tuple of int
-The shape of the array
-
-dtype : type or str
-The data type of the array.
-
-ctx : TVMContext
-The context of the array
-
-Returns
----
-arr : tvm.nd.NDArray
-The array tvm supported.
-"""
-shape = c_array(tvm_shape_index_t, shape)
-ndim = ctypes.c_int(len(shape))
-handle = TVMArrayHandle()
-dtype = TVMType(dtype)
-check_call(_LIB.TVMArrayAlloc(
-shape, ndim,
-ctypes.c_int(dtype.type_code),
-ctypes.c_int(dtype.bits),
-ctypes.c_int(dtype.lanes),
-ctx.device_type,
-ctx.device_id,
-ctypes.byref(handle)))
-return _make_array(handle, False, False)
+from tvm._ffi._ctypes.ndarray import _set_class_ndarray, _make_array, 
_from_dlpack
+from tvm._ffi._ctypes.ndarray import NDArrayBase
 
 
-def from_dlpack(dltensor):
-"""Produce an array from a DLPack tensor without memory copy.
-Retreives the underlying DLPack tensor's pointer to create an array from 
the
-data. Removes the original DLPack tensor's destructor as now the array is
-responsible for destruction.
+@tvm._ffi.register_object
+class NDArray(NDArrayBase):
+"""Lightweight NDArray class of TVM runtime.
 
 Review comment:
   Feel free to send a followup PR


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jroesch commented on issue #4818: [REFACTOR][PY] Establish tvm.runtime

2020-02-05 Thread GitBox
jroesch commented on issue #4818: [REFACTOR][PY] Establish tvm.runtime
URL: https://github.com/apache/incubator-tvm/pull/4818#issuecomment-582656463
 
 
   cc @robo-corg


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jroesch commented on a change in pull request #4818: [REFACTOR][PY] Establish tvm.runtime

2020-02-05 Thread GitBox
jroesch commented on a change in pull request #4818: [REFACTOR][PY] Establish 
tvm.runtime
URL: https://github.com/apache/incubator-tvm/pull/4818#discussion_r375559169
 
 

 ##
 File path: python/tvm/runtime/ndarray.py
 ##
 @@ -18,132 +18,36 @@
 """Runtime NDArray api"""
 import ctypes
 import numpy as np
-from .base import _LIB, check_call, c_array, string_types, _FFI_MODE, c_str
-from .runtime_ctypes import TVMType, TVMContext, TVMArray, TVMArrayHandle
-from .runtime_ctypes import TypeCode, tvm_shape_index_t
+import tvm._ffi
+
+from tvm._ffi.base import _LIB, check_call, c_array, string_types, _FFI_MODE
+from tvm._ffi.runtime_ctypes import DataType, TVMContext, TVMArray, 
TVMArrayHandle
+from tvm._ffi.runtime_ctypes import TypeCode, tvm_shape_index_t
 
 try:
 # pylint: disable=wrong-import-position
 if _FFI_MODE == "ctypes":
 raise ImportError()
-from ._cy3.core import _set_class_ndarray, _make_array, _from_dlpack
-from ._cy3.core import NDArrayBase as _NDArrayBase
+from tvm._ffi._cy3.core import _set_class_ndarray, _make_array, 
_from_dlpack
+from tvm._ffi._cy3.core import NDArrayBase
 except (RuntimeError, ImportError):
 # pylint: disable=wrong-import-position
-from ._ctypes.ndarray import _set_class_ndarray, _make_array, _from_dlpack
-from ._ctypes.ndarray import NDArrayBase as _NDArrayBase
-
-
-def context(dev_type, dev_id=0):
-"""Construct a TVM context with given device type and id.
-
-Parameters
---
-dev_type: int or str
-The device type mask or name of the device.
-
-dev_id : int, optional
-The integer device id
-
-Returns
----
-ctx: TVMContext
-The corresponding context.
-
-Examples
-
-Context can be used to create reflection of context by
-string representation of the device type.
-
-.. code-block:: python
-
-  assert tvm.context("cpu", 1) == tvm.cpu(1)
-  assert tvm.context("gpu", 0) == tvm.gpu(0)
-  assert tvm.context("cuda", 0) == tvm.gpu(0)
-"""
-if isinstance(dev_type, string_types):
-if '-device=micro_dev' in dev_type:
-dev_type = 'micro_dev'
-else:
-dev_type = dev_type.split()[0]
-if dev_type not in TVMContext.STR2MASK:
-raise ValueError("Unknown device type %s" % dev_type)
-dev_type = TVMContext.STR2MASK[dev_type]
-return TVMContext(dev_type, dev_id)
-
-
-def numpyasarray(np_data):
-"""Return a TVMArray representation of a numpy array.
-"""
-data = np_data
-assert data.flags['C_CONTIGUOUS']
-arr = TVMArray()
-shape = c_array(tvm_shape_index_t, data.shape)
-arr.data = data.ctypes.data_as(ctypes.c_void_p)
-arr.shape = shape
-arr.strides = None
-arr.dtype = TVMType(np.dtype(data.dtype).name)
-arr.ndim = data.ndim
-# CPU device
-arr.ctx = context(1, 0)
-return arr, shape
-
-
-def empty(shape, dtype="float32", ctx=context(1, 0)):
-"""Create an empty array given shape and device
-
-Parameters
---
-shape : tuple of int
-The shape of the array
-
-dtype : type or str
-The data type of the array.
-
-ctx : TVMContext
-The context of the array
-
-Returns
----
-arr : tvm.nd.NDArray
-The array tvm supported.
-"""
-shape = c_array(tvm_shape_index_t, shape)
-ndim = ctypes.c_int(len(shape))
-handle = TVMArrayHandle()
-dtype = TVMType(dtype)
-check_call(_LIB.TVMArrayAlloc(
-shape, ndim,
-ctypes.c_int(dtype.type_code),
-ctypes.c_int(dtype.bits),
-ctypes.c_int(dtype.lanes),
-ctx.device_type,
-ctx.device_id,
-ctypes.byref(handle)))
-return _make_array(handle, False, False)
+from tvm._ffi._ctypes.ndarray import _set_class_ndarray, _make_array, 
_from_dlpack
+from tvm._ffi._ctypes.ndarray import NDArrayBase
 
 
-def from_dlpack(dltensor):
-"""Produce an array from a DLPack tensor without memory copy.
-Retreives the underlying DLPack tensor's pointer to create an array from 
the
-data. Removes the original DLPack tensor's destructor as now the array is
-responsible for destruction.
+@tvm._ffi.register_object
+class NDArray(NDArrayBase):
+"""Lightweight NDArray class of TVM runtime.
 
 Review comment:
   I think the comments here could use some rewording. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r37170
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,
+   

[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375549738
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1023 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ["from_pytorch"]
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], _expr.Expr):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], _expr.Expr):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, _expr.Expr):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+if isinstance(data, _expr.Expr):
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = inferred_shape
+end = list(end)
+else:
+end = data.shape
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype="int32"), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Expr):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl
+
+def _adaptive_max_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_max_pool2d(
+data,
+ 

[GitHub] [incubator-tvm] u99127 opened a new issue #4824: Tflite frontend needs to use zero point of input tensor while lowering qnn.conv2d for padding

2020-02-05 Thread GitBox
u99127 opened a new issue #4824: Tflite frontend needs to use zero point of 
input tensor while lowering qnn.conv2d for padding
URL: https://github.com/apache/incubator-tvm/issues/4824
 
 
   The TFLite frontend ignores the zero point of the input tensor when it creates a 
separate pad operation while lowering quantized convolutions. 
   
   This is fixed by PR #4807, and the tests broken by that change are fixed in #4816. 
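   
   Conceptually the fix amounts to something like the sketch below. The names 
(`input_tensor`, `qnn_params`, `in_expr`, `paddings`) are assumptions about the 
frontend's internals, not the literal diff from PR #4807:
   
       # Pad with the input's zero point so the padded region dequantizes to 0.0,
       # instead of always padding with the raw value 0.
       if input_tensor.qnn_params:  # hypothetical per-tensor quantization info
           zero_point = input_tensor.qnn_params['zero_point']
           pad_value = float(zero_point.data.asnumpy())  # zero point stored as a relay constant
       else:
           pad_value = 0.0
       out = _op.nn.pad(in_expr, paddings, pad_value=pad_value)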


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] anijain2305 commented on issue #4823: Improve tflite testing for quantized conv2d.

2020-02-05 Thread GitBox
anijain2305 commented on issue #4823: Improve tflite testing for quantized 
conv2d.
URL: https://github.com/apache/incubator-tvm/issues/4823#issuecomment-582641958
 
 
   Thanks @u99127 for raising this. I will work in the coming weeks to improve the 
unit test coverage.
   
   @FrozenGene might also be interested in this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] u99127 opened a new issue #4823: Improve tflite testing for quantized conv2d.

2020-02-05 Thread GitBox
u99127 opened a new issue #4823: Improve tflite testing for quantized conv2d.
URL: https://github.com/apache/incubator-tvm/issues/4823
 
 
   @anijain2305 , @inadob 
   
   While we have tests for qnn.conv2d at the relay level, the recent padding issue 
addressed by pull request #4807 has highlighted the need for better unit tests for 
quantized conv2d in the tflite frontend. One reason we hit this issue there is that 
the relay unit tests for qnn.conv2d assume implicit padding, while the tflite 
frontend lowers the padding of a quantized conv2d into a separate relay operation. 
Thus we need a unit test for qnn.conv2d in the tflite frontend. 
   
   Further, it's probably worth a separate audit to make sure we have adequate 
tests for qnn ops in the tflite frontend for those operators whose tests have not 
been added yet. 
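   
   A hedged sketch of the shape such a frontend test could take follows; the helper 
and the .tflite file name below are placeholders rather than existing code, and the 
idea is simply to compare the TFLite interpreter's output with TVM's output for a 
quantized conv2d model whose padding is lowered explicitly:
   
       import numpy as np
       import tensorflow as tf
       import tflite.Model
       import tvm
       from tvm import relay
       from tvm.contrib import graph_runtime
   
       def compare_tflite_with_tvm(model_path, input_name, input_data):
           # Reference output from the TFLite interpreter.
           interpreter = tf.lite.Interpreter(model_path=model_path)
           interpreter.allocate_tensors()
           in_detail = interpreter.get_input_details()[0]
           out_detail = interpreter.get_output_details()[0]
           interpreter.set_tensor(in_detail["index"], input_data)
           interpreter.invoke()
           tflite_out = interpreter.get_tensor(out_detail["index"])
   
           # Same model through the TVM tflite frontend.
           with open(model_path, "rb") as f:
               tflite_model = tflite.Model.Model.GetRootAsModel(f.read(), 0)
           mod, params = relay.frontend.from_tflite(
               tflite_model,
               shape_dict={input_name: input_data.shape},
               dtype_dict={input_name: str(input_data.dtype)})
           graph, lib, params = relay.build(mod, target="llvm", params=params)
           rt = graph_runtime.create(graph, lib, tvm.cpu(0))
           rt.set_input(input_name, input_data, **params)
           rt.run()
           tvm_out = rt.get_output(0).asnumpy()
   
           # uint8 outputs may legitimately differ by one due to rounding.
           np.testing.assert_allclose(tflite_out, tvm_out, atol=1, rtol=0)
   
       # "quant_conv2d_same_pad.tflite" is a hypothetical model that exercises the
       # explicit-pad path; any quantized conv2d with SAME padding would do.
       data = np.random.randint(0, 256, size=(1, 28, 28, 3)).astype("uint8")
       compare_tflite_with_tvm("quant_conv2d_same_pad.tflite", "input", data)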
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375519693
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1093 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = _infer_shape(data)
+end = list(end)
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype='int32'), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Var):
+shape = _infer_shape(inputs[0])
+elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Var):
+shape = _infer_shape(inputs[0])
+elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl

[GitHub] [incubator-tvm] u99127 commented on a change in pull request #4822: [Frontend][TFLite] Add MIRROR_PAD operator

2020-02-05 Thread GitBox
u99127 commented on a change in pull request #4822: [Frontend][TFLite] Add 
MIRROR_PAD operator
URL: https://github.com/apache/incubator-tvm/pull/4822#discussion_r375518286
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1422,10 +1423,48 @@ def convert_pad(self, op):
 # convert list of lists to tuple of tuples
 paddings = tuple(tuple(l) for l in pad_list)
 
-# Use default pad_value 0 because TFLite does not support 
constant_values parameter
+# Use default pad_value 0 because TFLite PAD does not support 
constant_values parameter
 out = _op.nn.pad(in_expr, paddings)
 return out
 
+def convert_mirror_pad(self, op):
+"""Convert TFLite MIRROR_PAD"""
+try:
+from tflite.Operator import Operator
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.MirrorPadOptions import MirrorPadOptions
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+# the quantized form MirrorPad is not yet implemented in TFLite.
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized MIRROR_PAD operator is not supported yet.')
 
 Review comment:
   Pedantry: an OpNotImplemented error suggests to me, as a user, that the support 
exists in the tflite tooling and only the TVM stack has yet to implement it. Since 
the tflite tooling can't yet produce the quantized mirror_pad operation, do we need 
a different notification here, one that asks the user to file an issue requesting 
the feature be added? 
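   
   If a different notification is wanted, one possible sketch (the wording below is 
only a suggestion, not the code in this PR) would keep the guard but point the user 
at the upstream limitation:
   
       if self.is_quantized(op):
           raise tvm.error.OpNotImplemented(
               'Quantized MIRROR_PAD is not yet defined by TFLite itself; '
               'please open a TVM issue if a newer TFLite release adds it.')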


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] u99127 commented on a change in pull request #4822: [Frontend][TFLite] Add MIRROR_PAD operator

2020-02-05 Thread GitBox
u99127 commented on a change in pull request #4822: [Frontend][TFLite] Add 
MIRROR_PAD operator
URL: https://github.com/apache/incubator-tvm/pull/4822#discussion_r375516566
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1422,10 +1423,48 @@ def convert_pad(self, op):
 # convert list of lists to tuple of tuples
 paddings = tuple(tuple(l) for l in pad_list)
 
-# Use default pad_value 0 because TFLite does not support 
constant_values parameter
+# Use default pad_value 0 because TFLite PAD does not support 
constant_values parameter
 out = _op.nn.pad(in_expr, paddings)
 return out
 
+def convert_mirror_pad(self, op):
+"""Convert TFLite MIRROR_PAD"""
+try:
+from tflite.Operator import Operator
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.MirrorPadOptions import MirrorPadOptions
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+# the quantized form MirrorPad is not yet implemented in TFLite.
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
 
 Review comment:
For future reference, 
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/ir/tfl_ops.td#L2803


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong edited a comment on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-582619862
 
 
   > @alexwong please address my comment at 
https://github.com/apache/incubator-tvm/pull/4497/files#r370911013
   > 
   > Add `with torch.no_grad()` and remove `detach()` there. Also calling 
`eval()` once on your torch module is a good idea. I've seen some issues if I 
omit `eval()` and do forward.
   
   Resolved, eval() is called in load_torchvision or the specific single op 
function.
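   
   For reference, a minimal sketch of the tracing pattern being discussed (the 
model and input below are placeholders): put the module in eval mode once and 
trace it under no_grad, instead of calling detach() on the outputs.
   
       import torch
       import torchvision
   
       model = torchvision.models.resnet18(pretrained=True).float().eval()
       input_data = torch.randn(1, 3, 224, 224)
   
       with torch.no_grad():
           traced = torch.jit.trace(model, input_data)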


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-582619862
 
 
   > @alexwong please address my comment at 
https://github.com/apache/incubator-tvm/pull/4497/files#r370911013
   > 
   > Add `with torch.no_grad()` and remove `detach()` there. Also calling 
`eval()` once on your torch module is a good idea. I've seen some issues if I 
omit `eval()` and do forward.
   
   Addressed, eval() is called in load_torchvision or the specific single op 
function.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jroesch commented on issue #4815: [TOPI][Relay] Add bitwise ops

2020-02-05 Thread GitBox
jroesch commented on issue #4815: [TOPI][Relay] Add bitwise ops
URL: https://github.com/apache/incubator-tvm/pull/4815#issuecomment-582618898
 
 
   cc @jwfromm can you review this?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375513751
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1093 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = _infer_shape(data)
+end = list(end)
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype='int32'), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Var):
+shape = _infer_shape(inputs[0])
+elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(1), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _zeros():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Var):
+shape = _infer_shape(inputs[0])
+elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+shape = _infer_shape(inputs[0])
+else:
+shape = inputs[0].shape
+
+return _op.full(_expr.const(0), shape, 
dtype=_convert_data_type(input_types[0]))
+return _impl
+
+def _relu():
+def _impl(inputs, input_types):
+data = inputs[0]
+return _op.nn.relu(data)
+return _impl
+
+def _adaptive_avg_2d():
+def _impl(inputs, input_types):
+data = inputs[0]
+output_size = _infer_shape(inputs[1])
+
+return _op.contrib.contrib.adaptive_avg_pool2d(
+data,
+output_size=output_size)
+return _impl

[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375513313
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1093 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
 
 Review comment:
   Previously was handling constants differently. Moved to just catching any 
expr now.
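   
   Both forms appear in the diffs quoted in this thread; the change amounts to 
roughly the following (a sketch, not the exact hunk):
   
       # earlier revision: enumerate relay expression subclasses explicitly
       if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
           data = [data]
       # current revision: catch any relay expression, including constants
       if isinstance(data, _expr.Expr):
           data = [data]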


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375513469
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1093 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
 
 Review comment:
   Change to use existing inferred shape.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375513644
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1093 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, 
no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, 
unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+def _impl(inputs, input_types):
+# TODO: Figure out a better way to get typing to work for tensor + 
scalar
+type0 = input_types[0]
+if isinstance(inputs[1], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type0 = input_types[1]
+
+type1 = input_types[1]
+if isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+type1 = input_types[0]
+
+data0 = _convert_elemwise_input(inputs[0], type0)
+data1 = _convert_elemwise_input(inputs[1], type1)
+
+return get_relay_op(name)(data0, data1)
+return _impl
+
+def _unsqueeze():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+return _op.transform.expand_dims(data, int(axis), 1)
+return _impl
+
+def _concatenate():
+def _impl(inputs, input_types):
+data = inputs[0]
+axis = inputs[1]
+
+if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+data = [data]
+
+return _op.tensor.concatenate(data, int(axis))
+return _impl
+
+def _slice():
+def _impl(inputs, input_types):
+data = inputs[0]
+strides = []
+
+inferred_shape = _infer_shape(data)
+end = []
+for infer in inferred_shape:
+end.append(int(infer))
+if isinstance(data, _expr.Var):
+end = _infer_shape(data)
+end = list(end)
+
+begin = [0]*len(end)
+dim = int(inputs[1])
+begin[dim] = int(inputs[2])
+
+if isinstance(inputs[3], str) and inputs[3].isdigit():
+end[dim] = min(end[dim], int(inputs[3]))
+else:
+end[dim] = inputs[3]
+
+strides.append(int(inputs[4]))
+return _op.transform.strided_slice(data, begin, end, strides)
+return _impl
+
+def _select():
+def _impl(inputs, input_types):
+data = inputs[0]
+dim = int(inputs[1])
+index = int(inputs[2])
+
+return _op.transform.take(data, _expr.const(index, dtype='int32'), 
axis=dim)
+return _impl
+
+def _ones():
+def _impl(inputs, input_types):
+if isinstance(inputs[0], _expr.Var):
 
 Review comment:
   Moved together.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
alexwong commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375512544
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -0,0 +1,851 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Unit tests for various models and operators"""
+from time import time
+import os
+import sys
+from tempfile import TemporaryDirectory
+from scipy.stats import t as tdistr
+import numpy as np
+import torch
+from torch.nn import Module
+import tvm
+import torchvision
+
+from tvm import relay
+from tvm.contrib import graph_runtime
+from tvm.relay.testing.config import ctx_list
+
+sys.setrecursionlimit(1)
+
+def _vectorize(ten):
+return ten.reshape(-1)
+
+def atol(tru, est):
+def _atol_elt(tru, est):
+return abs(tru - est)
+tru = _vectorize(tru)
+est = _vectorize(est)
+return max([_atol_elt(x, y) for x, y in zip(tru, est)])
+
+def rtol(tru, est):
+def _rtol_elt(tru, est):
+return abs(tru - est) / min(abs(tru), abs(est))
+tru = _vectorize(tru)
+est = _vectorize(est)
+return max([_rtol_elt(x, y) for x, y in zip(tru, est)])
+
+def assert_shapes_match(tru, est):
+if tru.shape != est.shape:
+msg = "Output shapes {} and {} don't match"
+raise AssertionError(msg.format(tru.shape, est.shape))
+
+def load_torchvision(model_name):
+"""Given a model name, returns a Torchvision model in eval mode as well
+as an example input."""
+with torch.no_grad():
+if model_name.startswith('inception'):
+height = width = 299
+mean = [0.5, 0.5, 0.5]
+std = [0.5, 0.5, 0.5]
+else:
+height = width = 224
+mean = [0.485, 0.456, 0.406]
+std = [0.229, 0.224, 0.225]
+input_shape = [1, 3, height, width]
+input_data = torch.randn(input_shape).float()
+for channel in range(3):
+input_data[:, channel] -= mean[channel]
+input_data[:, channel] /= std[channel]
+model = getattr(torchvision.models, model_name)(pretrained=True)
+model = model.float().eval()
+return model, input_data
+
+def load_pretrainedmodels(model_name):
+"""Given a model name, returns a pretrainedmodels.pytorch model in eval
+mode as well as an example input."""
+import pretrainedmodels # 
https://github.com/Cadene/pretrained-models.pytorch
+model = getattr(pretrainedmodels, model_name)().float().eval()
+input_shape = [1, *model.input_size]
+input_data = torch.rand(input_shape).float() * 256
+for channel in range(3):
+input_data[:, channel] -= model.mean[channel]
+input_data[:, channel] /= model.std[channel]
+return model, input_data
+
+def load_model(model_name):
+"""Given a model name, returns a model as well as an example input."""
+if hasattr(torchvision.models, model_name):
+return load_torchvision(model_name)
+try:
+if hasattr(pretrainedmodels, model_name):
+return load_pretrainedmodels(model_name)
+except ModuleNotFoundError:
+raise ModuleNotFoundError('Please install pretrainedmodels.pytorch')
+raise RuntimeError('Model not supported')
+
+
+def confidence_interval(mean, stdev, count, alpha=.01):
+"""Returns the lower and upper bounds of the confidence interval of a 
random
+variable. Confidence is 1 - alpha (default confidence is 99%)."""
+stdval = tdistr.ppf(1 - alpha / 2, count - 1)
+lower, upper = mean + np.array([-1, 1]) * stdval * stdev / np.sqrt(count)
+return lower, upper
+
+def measure_latency(model, input_shapes, output_shapes, thresh, dryruns=40):
+"""Compute the latency of the given model"""
+latencies = []
+count = 0
+while True:
+if isinstance(model, torch.nn.Module):
+input_data = [torch.rand(shape).float() for shape in input_shapes]
+if torch.cuda.is_available():
+input_data = list(map(lambda x: x.cuda(), input_data))
+model = model.cuda()
+t_start = time()
+model(*input_d

[GitHub] [incubator-tvm] masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
masahi commented on issue #4497: [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-582598505
 
 
   @tqchen we should probably finish the docker update in 
https://github.com/apache/incubator-tvm/pull/4756 before we merge this PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to Relay Parser

2020-02-05 Thread GitBox
masahi commented on a change in pull request #4497: [Relay] Add a PyTorch to 
Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r375484453
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward.py
 ##
 @@ -0,0 +1,851 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Unit tests for various models and operators"""
+from time import time
+import os
+import sys
+from tempfile import TemporaryDirectory
+from scipy.stats import t as tdistr
+import numpy as np
+import torch
+from torch.nn import Module
+import tvm
+import torchvision
+
+from tvm import relay
+from tvm.contrib import graph_runtime
+from tvm.relay.testing.config import ctx_list
+
+sys.setrecursionlimit(1)
+
+def _vectorize(ten):
+return ten.reshape(-1)
+
+def atol(tru, est):
+def _atol_elt(tru, est):
+return abs(tru - est)
+tru = _vectorize(tru)
+est = _vectorize(est)
+return max([_atol_elt(x, y) for x, y in zip(tru, est)])
+
+def rtol(tru, est):
+def _rtol_elt(tru, est):
+return abs(tru - est) / min(abs(tru), abs(est))
+tru = _vectorize(tru)
+est = _vectorize(est)
+return max([_rtol_elt(x, y) for x, y in zip(tru, est)])
+
+def assert_shapes_match(tru, est):
+if tru.shape != est.shape:
+msg = "Output shapes {} and {} don't match"
+raise AssertionError(msg.format(tru.shape, est.shape))
+
+def load_torchvision(model_name):
+"""Given a model name, returns a Torchvision model in eval mode as well
+as an example input."""
+with torch.no_grad():
+if model_name.startswith('inception'):
+height = width = 299
+mean = [0.5, 0.5, 0.5]
+std = [0.5, 0.5, 0.5]
+else:
+height = width = 224
+mean = [0.485, 0.456, 0.406]
+std = [0.229, 0.224, 0.225]
+input_shape = [1, 3, height, width]
+input_data = torch.randn(input_shape).float()
+for channel in range(3):
+input_data[:, channel] -= mean[channel]
+input_data[:, channel] /= std[channel]
+model = getattr(torchvision.models, model_name)(pretrained=True)
+model = model.float().eval()
+return model, input_data
+
+def load_pretrainedmodels(model_name):
+"""Given a model name, returns a pretrainedmodels.pytorch model in eval
+mode as well as an example input."""
+import pretrainedmodels # 
https://github.com/Cadene/pretrained-models.pytorch
+model = getattr(pretrainedmodels, model_name)().float().eval()
+input_shape = [1, *model.input_size]
+input_data = torch.rand(input_shape).float() * 256
+for channel in range(3):
+input_data[:, channel] -= model.mean[channel]
+input_data[:, channel] /= model.std[channel]
+return model, input_data
+
+def load_model(model_name):
+"""Given a model name, returns a model as well as an example input."""
+if hasattr(torchvision.models, model_name):
+return load_torchvision(model_name)
+try:
+if hasattr(pretrainedmodels, model_name):
+return load_pretrainedmodels(model_name)
+except ModuleNotFoundError:
+raise ModuleNotFoundError('Please install pretrainedmodels.pytorch')
+raise RuntimeError('Model not supported')
+
+
+def confidence_interval(mean, stdev, count, alpha=.01):
+    """Returns the lower and upper bounds of the confidence interval of a random
+    variable. Confidence is 1 - alpha (default confidence is 99%)."""
+    stdval = tdistr.ppf(1 - alpha / 2, count - 1)
+    lower, upper = mean + np.array([-1, 1]) * stdval * stdev / np.sqrt(count)
+    return lower, upper
+
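Not part of the patch: a worked sketch of the helper above with made-up latency statistics (12.0 ms mean, 0.5 ms standard deviation over 100 runs), which gives a 99% interval of roughly 11.87 to 12.13 ms.

mean, stdev, count = 12.0, 0.5, 100        # hypothetical latency stats, in milliseconds
lower, upper = confidence_interval(mean, stdev, count)
print(lower, upper)                         # approximately 11.87 12.13
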
+def measure_latency(model, input_shapes, output_shapes, thresh, dryruns=40):
+    """Compute the latency of the given model"""
+    latencies = []
+    count = 0
+    while True:
+        if isinstance(model, torch.nn.Module):
+            input_data = [torch.rand(shape).float() for shape in input_shapes]
+            if torch.cuda.is_available():
+                input_data = list(map(lambda x: x.cuda(), input_data))
+                model = model.cuda()
+            t_start = time()
+            model(*input_data)

[GitHub] [incubator-tvm] anijain2305 commented on issue #4807: [Frontend][TFLite] Fix quantized pad value for convolution

2020-02-05 Thread GitBox
anijain2305 commented on issue #4807: [Frontend][TFLite] Fix quantized pad 
value for convolution
URL: https://github.com/apache/incubator-tvm/pull/4807#issuecomment-582591219
 
 
   @inadob You can close this one. #4816 covers this.




[incubator-tvm] branch master updated (23f3988 -> 79ce87f)

2020-02-05 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 23f3988  [QNN] Optimize lowering for requantize and 
FixedPointMultiply. (#4798)
 add 79ce87f  [Relay][Frontend][TFLite] Add parser support for logical 
operators (#4642)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 34 
 tests/python/frontend/tflite/test_forward.py | 31 +
 2 files changed, 65 insertions(+)
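
Not part of the commit itself: a minimal Relay sketch of the kind of expression the new TFLite parser support targets for a logical AND (variable names and shapes are illustrative).

from tvm import relay

lhs = relay.var("lhs", shape=(1, 4), dtype="bool")
rhs = relay.var("rhs", shape=(1, 4), dtype="bool")
out = relay.logical_and(lhs, rhs)          # logical_or / logical_not follow the same pattern
func = relay.Function([lhs, rhs], out)
print(func)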



[GitHub] [incubator-tvm] yzhliu commented on issue #4642: [Relay][Frontend][TFLite] Add parser support for logical operators

2020-02-05 Thread GitBox
yzhliu commented on issue #4642: [Relay][Frontend][TFLite] Add parser support 
for logical operators
URL: https://github.com/apache/incubator-tvm/pull/4642#issuecomment-582590768
 
 
   Thanks @inadob @anijain2305 




[GitHub] [incubator-tvm] yzhliu merged pull request #4642: [Relay][Frontend][TFLite] Add parser support for logical operators

2020-02-05 Thread GitBox
yzhliu merged pull request #4642: [Relay][Frontend][TFLite] Add parser support 
for logical operators
URL: https://github.com/apache/incubator-tvm/pull/4642
 
 
   




[GitHub] [incubator-tvm] anijain2305 commented on issue #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

2020-02-05 Thread GitBox
anijain2305 commented on issue #4696: [Relay][Frontend][TFlite] Add support for 
quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696#issuecomment-582584426
 
 
   @inadob Please rebase




[incubator-tvm] branch master updated (2989d72 -> 23f3988)

2020-02-05 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 2989d72  [Frontend][TFLite] Dynamically calculate input_stats of any 
fake_quant range (#4789)
 add 23f3988  [QNN] Optimize lowering for requantize and 
FixedPointMultiply. (#4798)

No new revisions were added by this update.

Summary of changes:
 src/relay/qnn/op/requantize.cc   | 20 +---
 src/relay/qnn/util.cc| 10 +++---
 tests/python/relay/test_op_qnn_requantize.py | 15 +++
 3 files changed, 39 insertions(+), 6 deletions(-)
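
For context, a hedged sketch (not taken from the patch) of the fixed-point-multiply idea that this requantize lowering relies on: the real-valued scale ratio is approximated by a Q31 integer significand plus a power-of-two exponent, so the lowered code only needs integer multiplies and shifts.

import math

def to_fixed_point(scale_ratio):
    # scale_ratio == significand * 2**exponent, with 0.5 <= significand < 1
    significand, exponent = math.frexp(scale_ratio)
    significand_q31 = int(round(significand * (1 << 31)))
    return significand_q31, exponent

# Illustrative values only: input_scale / output_scale = 0.25 / 0.5
multiplier, shift = to_fixed_point(0.25 / 0.5)
print(multiplier, shift)   # 1073741824 0, i.e. 0.5 represented in Q31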



[GitHub] [incubator-tvm] vinx13 commented on issue #4798: [QNN] Optimize lowering for requantize and FixedPointMultiply.

2020-02-05 Thread GitBox
vinx13 commented on issue #4798: [QNN] Optimize lowering for requantize and 
FixedPointMultiply.
URL: https://github.com/apache/incubator-tvm/pull/4798#issuecomment-582582122
 
 
   Thanks @anijain2305 @jackwish this is merged




[GitHub] [incubator-tvm] vinx13 merged pull request #4798: [QNN] Optimize lowering for requantize and FixedPointMultiply.

2020-02-05 Thread GitBox
vinx13 merged pull request #4798: [QNN] Optimize lowering for requantize and 
FixedPointMultiply.
URL: https://github.com/apache/incubator-tvm/pull/4798
 
 
   




[GitHub] [incubator-tvm] kevinthesun commented on issue #4789: [Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range

2020-02-05 Thread GitBox
kevinthesun commented on issue #4789: [Frontend][TFLite] Dynamically calculate 
input_stats of any fake_quant range
URL: https://github.com/apache/incubator-tvm/pull/4789#issuecomment-582582023
 
 
   Thanks @inadob @wyc-ruiker @anijain2305 




[GitHub] [incubator-tvm] kevinthesun merged pull request #4789: [Frontend][TFLite] Dynamically calculate input_stats of any fake_quant range

2020-02-05 Thread GitBox
kevinthesun merged pull request #4789: [Frontend][TFLite] Dynamically calculate 
input_stats of any fake_quant range
URL: https://github.com/apache/incubator-tvm/pull/4789
 
 
   




[incubator-tvm] branch master updated (019356f -> 2989d72)

2020-02-05 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository.

kevinthesun pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 019356f  Fixed subprocess creation under windows (#4820)
 add 2989d72  [Frontend][TFLite] Dynamically calculate input_stats of any 
fake_quant range (#4789)

No new revisions were added by this update.

Summary of changes:
 tests/python/frontend/tflite/test_forward.py | 43 +---
 1 file changed, 27 insertions(+), 16 deletions(-)



[GitHub] [incubator-tvm] anijain2305 commented on issue #4798: [QNN] Optimize lowering for requantize and FixedPointMultiply.

2020-02-05 Thread GitBox
anijain2305 commented on issue #4798: [QNN] Optimize lowering for requantize 
and FixedPointMultiply.
URL: https://github.com/apache/incubator-tvm/pull/4798#issuecomment-582581737
 
 
   Ping



