[GitHub] [incubator-tvm] icemelon9 commented on issue #4764: [CI] ci-gpu update blockers

2020-01-30 Thread GitBox
icemelon9 commented on issue #4764: [CI] ci-gpu update blockers 
URL: https://github.com/apache/incubator-tvm/issues/4764#issuecomment-580592009
 
 
   One workaround is to use the latest build of mxnet-mkl, which has fixed the 
problem:
   ```
   pip install https://apache-mxnet.s3-us-west-2.amazonaws.com/dist/2020-01-30/dist/mxnet_mkl-1.6.0b20200130-py2.py3-none-manylinux1_x86_64.whl
   ```
   See https://github.com/apache/incubator-mxnet/issues/17479
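
   As a quick sanity check after installing the wheel (a hypothetical snippet, 
not part of the original workaround), the nightly version and MKL-DNN support 
can be confirmed from Python:
   ```python
   # Verify the mxnet-mkl nightly is the build actually being imported.
   import mxnet
   print(mxnet.__version__)         # expect a 1.6.0 nightly build
   print(mxnet.runtime.Features())  # MKLDNN should appear among the enabled features
   ```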


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4790: Fast exponent

2020-01-30 Thread GitBox
FrozenGene commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r37330
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * 
https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   Does MPL2 allow somebody to modify the code without open-sourcing the 
changes? This is critical for companies using TVM, and I am worried about it. 
However, I am not an expert on open-source licenses; maybe @tqchen could give a 
more authoritative answer. My earlier point was that if you understand the 
algorithm and write the code yourself, you could remove this link so that we 
avoid any license problem.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] tqchen commented on issue #4764: [CI] ci-gpu update blockers

2020-01-30 Thread GitBox
tqchen commented on issue #4764: [CI] ci-gpu update blockers 
URL: https://github.com/apache/incubator-tvm/issues/4764#issuecomment-580558384
 
 
   Given that the mkl part poses an accuracy problem, I feel it might be a bad 
idea to rely on it for testing QNN (see also the comment about the Intel 
dependency). It would be great if we could explore generic alternatives for 
testing QNN. For the parser part, I think we can start by directly checking 
alpha equivalence of the graph, as well as potentially comparing against a 
simulated FP32 version.
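
   A minimal sketch of the alpha-equivalence check suggested above, assuming 
the Relay analysis API of this era (`tvm.relay.analysis.alpha_equal`); the 
parsed function and the hand-written reference are placeholders:
   ```python
   from tvm import relay

   def check_parser_output(parsed_func, expected_func):
       """Both arguments are relay.Function objects; alpha_equal compares
       structure and ignores variable naming."""
       assert relay.analysis.alpha_equal(parsed_func, expected_func), \
           "parsed graph does not match the expected Relay graph"
   ```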


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580519028
 
 
   > actually I cannot run torchvision tests in this PR on my 8GB laptop. Maybe 
RAM is the problem?
   
   Yeah, I think it's something along those lines. I'll try to clean up the 
models after every test to see if that fixes it.
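
   A rough sketch of that kind of per-test cleanup (the helper name and 
structure are assumptions, not code from this PR):
   ```python
   import gc
   import torch
   import torchvision

   def run_single_model_test(model_name):
       model = getattr(torchvision.models, model_name)(pretrained=True).eval()
       # ... trace the model, convert to Relay, compare outputs ...
       # Drop the model before the next test so an 8 GB machine can cope.
       del model
       gc.collect()
       if torch.cuda.is_available():
           torch.cuda.empty_cache()
   ```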


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580517710
 
 
   actually I cannot run torchvision tests in this PR on my 8GB laptop. Maybe 
RAM is the problem?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580517249
 
 
   My local test (using the CI container) passes, but it fails here due to an 
out-of-memory issue, so I think the machine running the CI simply doesn't have 
enough memory. I'll try a few things periodically since I can't really 
reproduce it locally. One more thing: I'm not sure we want to keep the specific 
tests for other data types, as they make the code kind of ugly and I don't see 
other frontends with similar tests. Perhaps we should move them to another 
file?
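
   If the per-dtype tests do stay, one option (a hypothetical sketch, not code 
from this PR) is a small parametrized test kept in its own file:
   ```python
   import pytest
   import torch

   DTYPES = [torch.float64, torch.float32, torch.int64, torch.int32,
             torch.int16, torch.int8, torch.uint8]

   @pytest.mark.parametrize("dtype", DTYPES)
   def test_add_dtype(dtype):
       a = torch.ones(2, 2, dtype=dtype)
       b = torch.ones(2, 2, dtype=dtype)
       traced = torch.jit.trace(lambda x, y: x + y, (a, b))
       # The real test would feed this traced graph through the Relay parser
       # and compare outputs; here we only check the trace keeps the dtype.
       assert traced(a, b).dtype == dtype
   ```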


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-30 Thread GitBox
zxy844288792 commented on a change in pull request #4787: [Relay] Conv2D 
padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r373255718
 
 

 ##
 File path: topi/python/topi/nn/conv2d.py
 ##
 @@ -62,6 +61,8 @@ def conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=N
     output : tvm.Tensor
         4-D with shape [batch, out_channel, out_height, out_width]
     """
+    #only accepts 4-way padding
+    assert len(padding) == 4, "only accepts 4-way padding"
 
 Review comment:
   @comaniac Agree


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] gussmith23 commented on issue #4568: [Feature] Support for NonZero ONNX operator

2020-01-30 Thread GitBox
gussmith23 commented on issue #4568: [Feature] Support for NonZero ONNX operator
URL: https://github.com/apache/incubator-tvm/issues/4568#issuecomment-580510543
 
 
   I've been trying to get a GNMT model imported into Relay. The ONNX route 
seems the most pain-free; however, this is a blocking issue!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-30 Thread GitBox
huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format 
support.
URL: https://github.com/apache/incubator-tvm/pull/4791#issuecomment-580493189
 
 
   Hi @Laurawly @Huyuwei, could you help review this? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
alexwong commented on a change in pull request #4497: [WIP] [Relay] Add a 
PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r373217425
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1084 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        data0 = _convert_elemwise_input(inputs[0])
+        data1 = _convert_elemwise_input(inputs[1])
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        inferred_shape = _infer_shape(data)
+        end = []
+        for infer in inferred_shape:
+            end.append(int(infer))
+        if isinstance(data, _expr.Var):
+            end = _infer_shape(data)
+            end = list(end)
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if isinstance(inputs[3], str) and inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+        else:
+            end[dim] = inputs[3]
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        dim = int(inputs[1])
+        index = int(inputs[2])
+
+        return _op.transform.take(data, _expr.const(index, dtype='int32'), axis=dim)
+    return _impl
+
+def _convert_data_type(input_type):
+    if input_type in ['double', 'torch.float64']:
+        return 'float64'
+    elif input_type in ['float', 'torch.float32']:
+        return 'float32'
+    elif input_type in ['half', 'torch.float16']:
+        return 'float16'
+    elif input_type in ['long', 'torch.int64']:
+        return 'int64'
+    elif input_type in ['int', 'torch.int32']:
+        return 'int32'
+    elif input_type in ['short', 'torch.int16']:
+        return 'int16'
+    elif input_type in ['char', 'torch.int8']:
+        return 'int8'
+    elif input_type in ['byte', 'torch.uint8']:
+        return 'uint8'
+    else:
+        return input_type
+
+def _ones():
+    def _impl(inputs, input_types):
+        if isinstance(inputs[0], _expr.Var):
+            shape = _infer_shape(inputs[0])
+        elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+            shape = _infer_shape(inputs[0])
+        else:
+            shape = inputs[0].shape
+
+        return _op.full(_expr.const(1), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _zeros():
+    def _impl(inputs, input_types):
+        if isinstance(inputs[0], _expr.Var):
+            shape = _infer_shape(inputs[0])
+        elif isinstance(inputs[0], (_expr.Call, _expr.TupleGetItem)):
+            shape = _infer_shape(inputs[0])
+        else:
+            shape = inputs[0].shape
+
+        return _op.full(_expr.const(0), shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
+
+def _relu():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        return _op.nn.relu(data)
+ret

[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-30 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r373193490
 
 

 ##
 File path: topi/python/topi/nn/conv2d.py
 ##
 @@ -62,6 +61,8 @@ def conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=N
     output : tvm.Tensor
         4-D with shape [batch, out_channel, out_height, out_width]
     """
+    #only accepts 4-way padding
+    assert len(padding) == 4, "only accepts 4-way padding"
 
 Review comment:
   @zxy844288792 how about we revert this file first and add a note in 
`python/tvm/relay/op/nn/nn.py` to remind us to add it back when #4644 is merged?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 edited a comment on issue #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 edited a comment on issue #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#issuecomment-579944529
 
 
   I've added the strategy for all ops. Since this PR is huge, we can start 
reviewing it now. Could you help review it?
   @tqchen @kevinthesun @comaniac @masahi @MarisaKirisame @hlu1 @yzhliu @zhiics 
@ZihengJiang @merrymercy @vinx13 @FrozenGene @jroesch @jwfromm 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r373190892
 
 

 ##
 File path: topi/python/topi/nn/conv2d.py
 ##
 @@ -62,6 +61,8 @@ def conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=N
     output : tvm.Tensor
         4-D with shape [batch, out_channel, out_height, out_width]
     """
+    #only accepts 4-way padding
+    assert len(padding) == 4, "only accepts 4-way padding"
 
 Review comment:
   Yes, @comaniac is correct. Perhaps contribute this directly to #4644? 😄 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
jwfromm commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580456133
 
 
   @alexwong the new refactored tests look so much better!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
jwfromm commented on a change in pull request #4497: [WIP] [Relay] Add a 
PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r373186300
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward_refactor.py
 ##
 @@ -0,0 +1,813 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Unit tests for various models and operators"""
+from time import time
+import os
+import sys
+from tempfile import TemporaryDirectory
+from scipy.stats import t as tdistr
+import numpy as np
+import torch
+from torch.nn import Module
+import tvm
+import torchvision
+
+from tvm import relay
+from tvm.contrib import graph_runtime
+from tvm.relay.testing.config import ctx_list
+
+sys.setrecursionlimit(1)
+
+def _vectorize(ten):
+    return ten.reshape(-1)
+
+def atol(tru, est):
+    def _atol_elt(tru, est):
+        return abs(tru - est)
+    tru = _vectorize(tru)
+    est = _vectorize(est)
+    return max([_atol_elt(x, y) for x, y in zip(tru, est)])
+
+def rtol(tru, est):
+    def _rtol_elt(tru, est):
+        return abs(tru - est) / min(abs(tru), abs(est))
+    tru = _vectorize(tru)
+    est = _vectorize(est)
+    return max([_rtol_elt(x, y) for x, y in zip(tru, est)])
+
+def assert_shapes_match(tru, est):
+    if tru.shape != est.shape:
+        msg = "Output shapes {} and {} don't match"
+        raise AssertionError(msg.format(tru.shape, est.shape))
+
+def load_torchvision(model_name):
+    """Given a model name, returns a Torchvision model in eval mode as well
+    as an example input."""
+    if model_name.startswith('inception'):
+        height = width = 299
+        mean = [0.5, 0.5, 0.5]
+        std = [0.5, 0.5, 0.5]
+    else:
+        height = width = 224
+        mean = [0.485, 0.456, 0.406]
+        std = [0.229, 0.224, 0.225]
+    input_shape = [1, 3, height, width]
+    input_data = torch.randn(input_shape).float()
+    for channel in range(3):
+        input_data[:, channel] -= mean[channel]
+        input_data[:, channel] /= std[channel]
+    model = getattr(torchvision.models, model_name)(pretrained=True)
+    model = model.float().eval()
+    return model, input_data
+
+def load_pretrainedmodels(model_name):
+    """Given a model name, returns a pretrainedmodels.pytorch model in eval
+    mode as well as an example input."""
+    import pretrainedmodels  # https://github.com/Cadene/pretrained-models.pytorch
+    model = getattr(pretrainedmodels, model_name)().float().eval()
+    input_shape = [1, *model.input_size]
+    input_data = torch.rand(input_shape).float() * 256
+    for channel in range(3):
+        input_data[:, channel] -= model.mean[channel]
+        input_data[:, channel] /= model.std[channel]
+    return model, input_data
+
+def load_model(model_name):
+    """Given a model name, returns a model as well as an example input."""
+    if hasattr(torchvision.models, model_name):
+        return load_torchvision(model_name)
+    try:
+        if hasattr(pretrainedmodels, model_name):
+            return load_pretrainedmodels(model_name)
+    except ModuleNotFoundError:
+        raise ModuleNotFoundError('Please install pretrainedmodels.pytorch')
+    raise RuntimeError('Model not supported')
+
+
+def confidence_interval(mean, stdev, count, alpha=.01):
+    """Returns the lower and upper bounds of the confidence interval of a random
+    variable. Confidence is 1 - alpha (default confidence is 99%)."""
+    stdval = tdistr.ppf(1 - alpha / 2, count - 1)
+    lower, upper = mean + np.array([-1, 1]) * stdval * stdev / np.sqrt(count)
+    return lower, upper
+
+def measure_latency(model, input_shapes, output_shapes, thresh, dryruns=40):
+    """Compute the latency of the given model"""
+    latencies = []
+    count = 0
+    while True:
+        if isinstance(model, torch.nn.Module):
+            input_data = [torch.rand(shape).float() for shape in input_shapes]
+            if torch.cuda.is_available():
+                input_data = list(map(lambda x: x.cuda(), input_data))
+                model = model.cuda()
+            t_start = time()
+            model(*input_data)
+            t_end = time()
+            latencies.append(t_end - t_star

[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
jwfromm commented on a change in pull request #4497: [WIP] [Relay] Add a 
PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r373186048
 
 

 ##
 File path: tests/python/frontend/pytorch/test_forward_refactor.py
 ##
 @@ -0,0 +1,813 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, invalid-name, unused-argument
+"""Unit tests for various models and operators"""
+from time import time
+import os
+import sys
+from tempfile import TemporaryDirectory
+from scipy.stats import t as tdistr
+import numpy as np
+import torch
+from torch.nn import Module
+import tvm
+import torchvision
+
+from tvm import relay
+from tvm.contrib import graph_runtime
+from tvm.relay.testing.config import ctx_list
+
+sys.setrecursionlimit(1)
+
+def _vectorize(ten):
+    return ten.reshape(-1)
+
+def atol(tru, est):
+    def _atol_elt(tru, est):
+        return abs(tru - est)
+    tru = _vectorize(tru)
+    est = _vectorize(est)
+    return max([_atol_elt(x, y) for x, y in zip(tru, est)])
+
+def rtol(tru, est):
+    def _rtol_elt(tru, est):
+        return abs(tru - est) / min(abs(tru), abs(est))
+    tru = _vectorize(tru)
+    est = _vectorize(est)
+    return max([_rtol_elt(x, y) for x, y in zip(tru, est)])
+
+def assert_shapes_match(tru, est):
+    if tru.shape != est.shape:
+        msg = "Output shapes {} and {} don't match"
+        raise AssertionError(msg.format(tru.shape, est.shape))
+
+def load_torchvision(model_name):
+    """Given a model name, returns a Torchvision model in eval mode as well
+    as an example input."""
+    if model_name.startswith('inception'):
+        height = width = 299
+        mean = [0.5, 0.5, 0.5]
+        std = [0.5, 0.5, 0.5]
+    else:
+        height = width = 224
+        mean = [0.485, 0.456, 0.406]
+        std = [0.229, 0.224, 0.225]
+    input_shape = [1, 3, height, width]
+    input_data = torch.randn(input_shape).float()
+    for channel in range(3):
+        input_data[:, channel] -= mean[channel]
+        input_data[:, channel] /= std[channel]
+    model = getattr(torchvision.models, model_name)(pretrained=True)
+    model = model.float().eval()
+    return model, input_data
+
+def load_pretrainedmodels(model_name):
+    """Given a model name, returns a pretrainedmodels.pytorch model in eval
+    mode as well as an example input."""
+    import pretrainedmodels  # https://github.com/Cadene/pretrained-models.pytorch
+    model = getattr(pretrainedmodels, model_name)().float().eval()
+    input_shape = [1, *model.input_size]
+    input_data = torch.rand(input_shape).float() * 256
+    for channel in range(3):
+        input_data[:, channel] -= model.mean[channel]
+        input_data[:, channel] /= model.std[channel]
+    return model, input_data
+
+def load_model(model_name):
+    """Given a model name, returns a model as well as an example input."""
+    if hasattr(torchvision.models, model_name):
+        return load_torchvision(model_name)
+    try:
+        if hasattr(pretrainedmodels, model_name):
+            return load_pretrainedmodels(model_name)
+    except ModuleNotFoundError:
+        raise ModuleNotFoundError('Please install pretrainedmodels.pytorch')
+    raise RuntimeError('Model not supported')
+
+
+def confidence_interval(mean, stdev, count, alpha=.01):
+    """Returns the lower and upper bounds of the confidence interval of a random
+    variable. Confidence is 1 - alpha (default confidence is 99%)."""
+    stdval = tdistr.ppf(1 - alpha / 2, count - 1)
+    lower, upper = mean + np.array([-1, 1]) * stdval * stdev / np.sqrt(count)
+    return lower, upper
+
+def measure_latency(model, input_shapes, output_shapes, thresh, dryruns=40):
+    """Compute the latency of the given model"""
+    latencies = []
+    count = 0
+    while True:
+        if isinstance(model, torch.nn.Module):
+            input_data = [torch.rand(shape).float() for shape in input_shapes]
+            if torch.cuda.is_available():
+                input_data = list(map(lambda x: x.cuda(), input_data))
+                model = model.cuda()
+            t_start = time()
+            model(*input_data)
+            t_end = time()
+            latencies.append(t_end - t_star

[GitHub] [incubator-tvm] alexgl-github commented on a change in pull request #4790: Fast exponent

2020-01-30 Thread GitBox
alexgl-github commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373157363
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * 
https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   @FrozenGene 
   The Eigen license is MPL2: https://www.mozilla.org/en-US/MPL/2.0/
   I wrote this TVM fastexp implementation using Eigen as a reference, and I 
understand the code. The Eigen GitHub link in the comment points to the 
original algorithm.
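
   For readers following along, here is a rough NumPy sketch of the general 
range-reduction-plus-polynomial scheme such fast-exp routines use (my own 
illustration; neither the Eigen code nor the exact coefficients used in this 
PR):
   ```python
   import numpy as np

   def fast_exp(x):
       """Approximate float32 exp via exp(x) = 2**k * exp(r) with |r| <= ln(2)/2."""
       x = np.clip(np.asarray(x, dtype=np.float32), -88.0, 88.0)
       ln2 = np.float32(0.6931472)
       k = np.floor(x / ln2 + 0.5)
       r = x - k * ln2
       # Low-order polynomial for exp(r) on the reduced range.
       poly = 1 + r * (1 + r * (0.5 + r * (1/6 + r * (1/24 + r * (1/120 + r / 720)))))
       return np.ldexp(poly.astype(np.float32), k.astype(np.int32))

   xs = np.linspace(-10, 10, 1001).astype(np.float32)
   print(np.max(np.abs(fast_exp(xs) - np.exp(xs)) / np.exp(xs)))  # ~1e-6 relative error
   ```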


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexgl-github commented on a change in pull request #4790: Fast exponent

2020-01-30 Thread GitBox
alexgl-github commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373157363
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * 
https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   @FrozenGene 
   The Eigen license is MPL2: https://www.mozilla.org/en-US/MPL/2.0/
   I wrote this fastexp implementation and understand the code. The Eigen 
GitHub link in the comment is an algorithm reference.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexgl-github commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-30 Thread GitBox
alexgl-github commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r373155323
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,112 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from tvm import autotvm
+from .. import generic, tag
+from ..nn.conv3d import conv3d, conv3d_ndhwc, conv3d_ncdhw
+from ..generic.nn import schedule_conv3d_ndhwc
+
+@autotvm.register_topi_compute(conv3d, 'cpu', ['direct'])
+def conv3d_x86(cfg, input, filter, strides, padding, dilation, layout='NCDHW', out_dtype=None):
+    if layout == 'NCDHW':
+        return conv3d_ncdhw(input, filter, strides, padding, dilation, out_dtype)
+    elif layout == 'NDHWC':
+        return conv3d_ndhwc(input, filter, strides, padding, dilation, out_dtype)
+
+@autotvm.register_topi_schedule(schedule_conv3d_ndhwc, 'cpu', ['direct'])
+def schedule_conv3d_ndhwc_x86(cfg, outs):
+    """TOPI schedule callback for conv2d
 
 Review comment:
   @vinx13 Fixed, thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373142619
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +83,316 @@ def _get_cache_key(source_func, target):
 return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
 
 Review comment:
   added the comment


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373142390
 
 

 ##
 File path: python/tvm/relay/quantize/_annotate.py
 ##
 @@ -53,11 +53,11 @@ def simulated_quantize_compute(attrs, inputs, out_type, target):
     return [rdata]
 
 
-_reg.register_schedule("relay.op.annotation.simulated_quantize",
-                       _reg.schedule_injective)
+# _reg.register_schedule("relay.op.annotation.simulated_quantize",
 
 Review comment:
   done
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373142456
 
 

 ##
 File path: python/tvm/autotvm/task/task.py
 ##
 @@ -116,43 +149,134 @@ def __repr__(self):
             self.name, self.args, self.kwargs, self.workload
         )
 
-TASK_TABLE = {
-}
+TASK_TABLE2 = {}
 
 Review comment:
   fixed


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
comaniac commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373142087
 
 

 ##
 File path: python/tvm/autotvm/task/dispatcher.py
 ##
 @@ -481,8 +412,12 @@ def _query_inside(self, target, workload):
         """
         if self._counter < len(self._records):
             cfg = self._records[self._counter][0].config
+            wkl = self._records[self._counter][0].task.workload
+            if workload is not None:
+                assert wkl == workload
             self._counter += 1
-            self.update(target, workload, cfg)
+            self.update(target, wkl, cfg)
+            cfg.workload = wkl
 
 Review comment:
   I see. Could we define `self.workload` in `ConfigSpace` and add your comment 
on it, so that we remember to remove it in the future?
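
   A minimal sketch of that suggestion (assuming `ConfigSpace` lives in 
`python/tvm/autotvm/task/space.py`; illustrative only):
   ```python
   class ConfigSpace(object):
       def __init__(self):
           # ... existing fields ...
           # Workload of the corresponding AutoTVM task. Only filled in by
           # ApplyGraphBest, which depends on the query order and so cannot go
           # through select_implement; remove once that dependency is gone.
           self.workload = None
   ```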


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373141667
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +83,316 @@ def _get_cache_key(source_func, target):
 return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
+                    continue
+                flag = False
+                break
+            if flag:
+                for impl in spec.implements:
+                    ret.append(impl)
+        else:
+            for impl in spec.implements:
+                ret.append(impl)
+    return ret
+
+
+def select_implement(op, attrs, inputs, out_type, target, use_autotvm=True):
+    """Select the best implement from the op strategy.
+
+    If use_autotvm is True, it'll first try to find the best implementation
+    based on AutoTVM profile results. If no AutoTVM profile result is found,
+    it'll choose the implementation with highest plevel.
+
+    If use_autotvm is False, it'll directly choose the implementation with
+    highest plevel.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list[tvm.Tensor]
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    use_autotvm : bool
+        Whether query AutoTVM to pick the best.
+
+    Returns
+    -------
+    ret : tuple(relay.op.OpImplement, list[tvm.Tensor])
+        The best op implementation and the corresponding output tensors.
+    """
+    all_impls = get_valid_implements(op, attrs, inputs, out_type, target)
+
+    best_plevel_impl = None
+    for impl in all_impls:
+        if best_plevel_impl is None or int(impl.plevel) > int(best_plevel_impl.plevel):
+            best_plevel_impl = impl
+    if not use_autotvm:
+        outs = best_plevel_impl.compute(attrs, inputs, out_type)
+        return best_plevel_impl, outs
+
+    outputs = {}
+    best_autotvm_impl = None
+    best_cfg = None
+    dispatch_ctx = autotvm.task.DispatchContext.current
+    for impl in all_impls:
+        outs = impl.compute(attrs, inputs, out_type)
+        outputs[impl] = outs
+        workload = autotvm.task.get_workload(outs)
+        if workload is None:
+            continue
+        workload = autotvm.task.args_to_workload(workload)
 
 Review comment:
   Yes, I forgot to remove this line. :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373139677
 
 

 ##
 File path: python/tvm/autotvm/task/dispatcher.py
 ##
 @@ -481,8 +412,12 @@ def _query_inside(self, target, workload):
         """
        if self._counter < len(self._records):
             cfg = self._records[self._counter][0].config
+            wkl = self._records[self._counter][0].task.workload
+            if workload is not None:
+                assert wkl == workload
             self._counter += 1
-            self.update(target, workload, cfg)
+            self.update(target, wkl, cfg)
+            cfg.workload = wkl
 
 Review comment:
   This is specific to `ApplyGraphBest`, and the reason is complicated. Because 
`ApplyGraphBest` relies on the order of queries, we cannot use 
`relay.backend.compile_engine.select_implement` to collect the AutoTVM 
workload, as it may query more than once. Therefore, as a temporary workaround 
we sneak the workload into the returned cfg. We can remove this logic once 
`ApplyGraphBest` no longer relies on the query order.
   @kevinthesun 
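
   In other words, a caller that hits `ApplyGraphBest` can recover the workload 
from the returned config instead of having to pass one in; roughly (an 
illustrative sketch, not code from the PR):
   ```python
   # With ApplyGraphBest as the active dispatch context:
   dispatch_ctx = autotvm.task.DispatchContext.current
   cfg = dispatch_ctx.query(target, None)  # workload unknown to the caller here
   workload = cfg.workload                 # sneaked in by _query_inside above
   ```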


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373139677
 
 

 ##
 File path: python/tvm/autotvm/task/dispatcher.py
 ##
 @@ -481,8 +412,12 @@ def _query_inside(self, target, workload):
         """
         if self._counter < len(self._records):
             cfg = self._records[self._counter][0].config
+            wkl = self._records[self._counter][0].task.workload
+            if workload is not None:
+                assert wkl == workload
             self._counter += 1
-            self.update(target, workload, cfg)
+            self.update(target, wkl, cfg)
+            cfg.workload = wkl
 
 Review comment:
   This is specific to `ApplyGraphBest`. The reason is complicated: because 
`ApplyGraphBest` relies on the order of queries, we cannot use 
`relay.backend.compile_engine.select_implement` to collect the AutoTVM 
workload, as it may query more than once. This is therefore a temporary 
workaround in which we sneak the workload into the returned cfg; we can remove 
this logic once `ApplyGraphBest` no longer relies on the query order.
   @kevinthesun 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (24126b4 -> 10f85d0)

2020-01-30 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 24126b4  Fix parsing of different exception string formats (#4785)
 add 10f85d0  Dedup BindParamByName function in VM compiler (#4793)

No new revisions were added by this update.

Summary of changes:
 src/relay/backend/build_module.cc | 39 +
 src/relay/backend/utils.h | 41 +++
 src/relay/backend/vm/compiler.cc  | 37 ++-
 src/relay/backend/vm/compiler.h   | 10 --
 4 files changed, 44 insertions(+), 83 deletions(-)



[GitHub] [incubator-tvm] masahi merged pull request #4793: [Relay] Remove duplicated BindParamByName function in VM compiler

2020-01-30 Thread GitBox
masahi merged pull request #4793: [Relay] Remove duplicated BindParamByName 
function in VM compiler
URL: https://github.com/apache/incubator-tvm/pull/4793
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy

2020-01-30 Thread GitBox
icemelon9 commented on a change in pull request #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#discussion_r373136444
 
 

 ##
 File path: python/tvm/relay/backend/compile_engine.py
 ##
 @@ -63,6 +83,316 @@ def _get_cache_key(source_func, target):
 return source_func
 
 
+def get_shape(shape):
+    """Convert the shape to correct dtype and vars."""
+    ret = []
+    for dim in shape:
+        if isinstance(dim, tvm.expr.IntImm):
+            val = int(dim)
+            assert val <= np.iinfo(np.int32).max
+            ret.append(tvm.expr.IntImm("int32", val))
+        elif isinstance(dim, tvm.expr.Any):
+            ret.append(tvm.var("any_dim", "int32"))
+        else:
+            ret.append(dim)
+    return ret
+
+
+def get_valid_implements(op, attrs, inputs, out_type, target):
+    """Get all valid implementations from the op strategy.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list of tvm.Tensor
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    Returns
+    -------
+    ret : list of relay.op.OpImplement
+        The list of op implementations.
+    """
+    fstrategy = op.get_attr("FTVMStrategy")
+    assert fstrategy is not None, "%s doesn't have FTVMStrategy registered" % op.name
+    with target:
+        strategy = fstrategy(attrs, inputs, out_type, target)
+    ret = []
+    for spec in strategy.specializations:
+        if spec.condition:
+            flag = True
+            for clause in spec.condition.clauses:
+                clause = tvm.ir_pass.Simplify(clause)
+                if isinstance(clause, tvm.expr.IntImm) and clause.value:
+                    continue
+                flag = False
+                break
+            if flag:
+                for impl in spec.implements:
+                    ret.append(impl)
+        else:
+            for impl in spec.implements:
+                ret.append(impl)
+    return ret
+
+
+def select_implement(op, attrs, inputs, out_type, target, use_autotvm=True):
+    """Select the best implement from the op strategy.
+
+    If use_autotvm is True, it'll first try to find the best implementation
+    based on AutoTVM profile results. If no AutoTVM profile result is found,
+    it'll choose the implementation with highest plevel.
+
+    If use_autotvm is False, it'll directly choose the implementation with
+    highest plevel.
+
+    Note that this function doesn't support op that has symbolic input shapes.
+
+    Parameters
+    ----------
+    op : relay.op.Op
+        Relay operator.
+
+    attrs : object
+        The op attribute.
+
+    inputs : list[tvm.Tensor]
+        Input tensors to the op.
+
+    out_type : relay.Type
+        The output type.
+
+    target : tvm.Target
+        The target to compile the op.
+
+    use_autotvm : bool
+        Whether query AutoTVM to pick the best.
+
+    Returns
+    -------
+    ret : tuple(relay.op.OpImplement, list[tvm.Tensor])
+        The best op implementation and the corresponding output tensors.
+    """
+    all_impls = get_valid_implements(op, attrs, inputs, out_type, target)
+
+    best_plevel_impl = None
+    for impl in all_impls:
+        if best_plevel_impl is None or int(impl.plevel) > int(best_plevel_impl.plevel):
 
 Review comment:
   Because `plevel` is an `IntImm`, directly comparing two of them produces an 
expr rather than a Python bool.
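
   To illustrate the behavior being described (assuming the pre-0.7 `tvm.expr` 
API):
   ```python
   import tvm

   a = tvm.expr.IntImm("int32", 10)
   b = tvm.expr.IntImm("int32", 15)
   print(a > b)             # an expression, not a Python bool
   print(int(a) > int(b))   # False -- casting first gives a plain comparison
   ```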


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580391605
 
 
   > @alexwong the CI is not updated even if you update the docker script in 
this PR (see https://docs.tvm.ai/contribute/pull_request.html#ci-environment). 
To update for v1.4, first we need to wait for #4756 to be merged.
   > 
   > In the mean time, you can use
   > 
   > ```python
   > if torch.__version__ != "1.2.0":
   > torch._C._jit_pass_inline(graph)
   > ```
   > 
   > to unblock your testing.
   
   Ah, makes sense. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] abergeron commented on issue #4758: [relay] PrimitiveInliner doesn't correctly handle match expressions that have more than one use

2020-01-30 Thread GitBox
abergeron commented on issue #4758: [relay] PrimitiveInliner doesn't correctly 
handle match expressions that have more than one use
URL: https://github.com/apache/incubator-tvm/issues/4758#issuecomment-580367317
 
 
   Fixed by #4783 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] abergeron closed issue #4758: [relay] PrimitiveInliner doesn't correctly handle match expressions that have more than one use

2020-01-30 Thread GitBox
abergeron closed issue #4758: [relay] PrimitiveInliner doesn't correctly handle 
match expressions that have more than one use
URL: https://github.com/apache/incubator-tvm/issues/4758
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-30 Thread GitBox
comaniac commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r373085568
 
 

 ##
 File path: topi/python/topi/nn/conv2d.py
 ##
 @@ -62,6 +61,8 @@ def conv2d(input, filter, strides, padding, dilation, layout='NCHW', out_dtype=N
     output : tvm.Tensor
         4-D with shape [batch, out_channel, out_height, out_width]
     """
+    #only accepts 4-way padding
+    assert len(padding) == 4, "only accepts 4-way padding"
 
 Review comment:
   I'm afraid that you have to add this assertion to all conv2d compute 
functions. Specifically, all conv2d functions with 
`@autotvm.register_topi_compute(nn.conv2d, ...)` decorator should have this 
assertion. @icemelon9 could you help confirm?
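
   Concretely, the suggestion is that every registered compute carry the same 
check as the generic `conv2d`; a sketch for one target (the function name and 
body here are illustrative, not the PR's diff):
   ```python
   from tvm import autotvm
   from topi import nn

   @autotvm.register_topi_compute(nn.conv2d, 'cpu', ['direct'])
   def conv2d_x86(cfg, data, kernel, strides, padding, dilation, layout, out_dtype):
       # Relay is expected to have normalized padding to
       # (pad_top, pad_left, pad_bottom, pad_right) before reaching TOPI.
       assert len(padding) == 4, "only accepts 4-way padding"
       ...  # the actual compute follows
   ```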


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene merged pull request #4785: [FFI][Windows] Parse additional exception strings

2020-01-30 Thread GitBox
FrozenGene merged pull request #4785: [FFI][Windows] Parse additional exception 
strings
URL: https://github.com/apache/incubator-tvm/pull/4785
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on issue #4785: [FFI][Windows] Parse additional exception strings

2020-01-30 Thread GitBox
FrozenGene commented on issue #4785: [FFI][Windows] Parse additional exception 
strings
URL: https://github.com/apache/incubator-tvm/pull/4785#issuecomment-580344098
 
 
   Thanks @jmorrill @soiferj This is merged now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated (6914963 -> 24126b4)

2020-01-30 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 6914963  [Relay][Frontend][TFlite] Add add parser support for 
relational ops (#4695)
 add 24126b4  Fix parsing of different exception string formats (#4785)

No new revisions were added by this update.

Summary of changes:
 python/tvm/_ffi/base.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] soiferj commented on issue #4785: [FFI][Windows] Parse additional exception strings

2020-01-30 Thread GitBox
soiferj commented on issue #4785: [FFI][Windows] Parse additional exception 
strings
URL: https://github.com/apache/incubator-tvm/pull/4785#issuecomment-580339270
 
 
   @FrozenGene that’s how I approved the changes yesterday! Above my comment I 
see the message “soiferj approved these changes”. Feel free to check in. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] expectopatronm commented on issue #1027: TVMError: src/runtime/cuda/cuda_module.cc:93: CUDAError: cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUD

2020-01-30 Thread GitBox
expectopatronm commented on issue #1027: TVMError: 
src/runtime/cuda/cuda_module.cc:93: CUDAError: 
cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: 
CUDA_ERROR_INVALID_PTX
URL: https://github.com/apache/incubator-tvm/issues/1027#issuecomment-580311728
 
 
   I get the exact same issue.
   
   jetson@jetson:~/fast-depth/deploy$ python3 tx2_run_tvm.py --input-fp 
data/rgb.npy --output-fp data/pred.npy --model-dir 
../results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/ --cuda True
   => [TVM on TX2] using model files in 
../results/tvm_compiled/tx2_gpu_mobilenet_nnconv5dw_skipadd_pruned/
   => [TVM on TX2] loading model lib and ptx
   => [TVM on TX2] loading model graph and params
   => [TVM on TX2] creating TVM runtime module
   => [TVM on TX2] feeding inputs and params into TVM module
   => [TVM on TX2] running TVM module, saving output
   Traceback (most recent call last):
   
 File "tx2_run_tvm.py", line 91, in 
   main()
   
 File "tx2_run_tvm.py", line 88, in main
   run_model(args.model_dir, args.input_fp, args.output_fp, args.warmup, 
args.run, args.cuda,  try_randin=args.randin)
   
 File "tx2_run_tvm.py", line 36, in run_model
   run() # not gmodule.run()
   
 File "/home/jetson/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in 
__call__
   raise get_last_ffi_error()
   
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 [bt] (3) /home/jetson/tvm/build/libtvm.so(TVMFuncCall+0x70) [0x7fad7ccec0]
 [bt] (2) /home/jetson/tvm/build/libtvm.so(std::_Function_handler(tvm::runtime::CUDAWrappedFunc, 
std::vector > 
const&)::{lambda(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, 
tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xe8) [0x7fad850b08]
 [bt] (1) 
/home/jetson/tvm/build/libtvm.so(tvm::runtime::CUDAWrappedFunc::operator()(tvm::runtime::TVMArgs,
 tvm::runtime::TVMRetValue*, void**) const+0x6cc) [0x7fad85093c]
 [bt] (0) 
/home/jetson/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4c)
 [0x7facfdebac]
 File "/home/jetson/tvm/src/runtime/cuda/cuda_module.cc", line 110
 File "/home/jetson/tvm/src/runtime/library_module.cc", line 91
   CUDAError: Check failed: ret == 0 (-1 vs. 0) : 
cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: 
CUDA_ERROR_INVALID_PTX
   
   Still haven't found a solution to it. I am running it on a Jetson Nano. 
Please help.
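
   Not part of the original report, but CUDA_ERROR_INVALID_PTX at module load 
time usually means the compiled code targets a different GPU architecture than 
the board's. When cross-compiling for a Jetson Nano (compute capability 5.3), 
something along these lines is the commonly suggested fix (a hedged sketch, not 
verified for this exact setup):
   ```python
   from tvm.autotvm.measure.measure_methods import set_cuda_target_arch

   # Make sure the PTX/cubin packed into the deployed library targets sm_53,
   # the Jetson Nano's GPU architecture.
   set_cuda_target_arch('sm_53')
   target = 'cuda'
   target_host = 'llvm -target=aarch64-linux-gnu'
   ```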


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-30 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r372971382
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -1662,6 +1667,112 @@ def convert_transpose_conv(self, op):
 
 return out
 
+    def convert_detection_postprocess(self, op):
+        """Convert TFLite_Detection_PostProcess"""
+        _option_names = [
+            "w_scale",
+            "max_detections",
+            "_output_quantized",
+            "detections_per_class",
+            "x_scale",
+            "nms_score_threshold",
+            "num_classes",
+            "max_classes_per_detection",
+            "use_regular_nms",
+            "y_scale",
+            "h_scale",
+            "_support_output_type_float_in_quantized_op",
+            "nms_iou_threshold"
+        ]
+
+        custom_options = get_custom_options(op, _option_names)
+        if custom_options["use_regular_nms"]:
+            raise tvm.error.OpAttributeUnImplemented(
+                "use_regular_nms=True is not yet supported for operator {}."
+                .format("TFLite_Detection_PostProcess")
+            )
+
+        inputs = self.get_input_tensors(op)
 
 Review comment:
   Does it make sense to add an assert here, e.g. `assert len(inputs) == 3`?
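   
   A minimal sketch of the suggested guard, assuming the custom op always
carries exactly three input tensors (box encodings, class predictions,
anchors):
   ```python
   inputs = self.get_input_tensors(op)
   # TFLite_Detection_PostProcess takes (box encodings, class predictions, anchors)
   assert len(inputs) == 3, "detection_postprocess expects exactly 3 input tensors"
   ```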


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene commented on issue #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-30 Thread GitBox
FrozenGene commented on issue #4695: [Relay][Frontend][TFlite] Add parser 
support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#issuecomment-580271032
 
 
   Thanks @inadob @wyc-ruiker. This is merged now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] FrozenGene merged pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-30 Thread GitBox
FrozenGene merged pull request #4695: [Relay][Frontend][TFlite] Add parser 
support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-tvm] branch master updated: [Relay][Frontend][TFlite] Add parser support for relational ops (#4695)

2020-01-30 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 6914963  [Relay][Frontend][TFlite] Add parser support for relational ops (#4695)
6914963 is described below

commit 6914963545a10c3c031c154f89a51a587e154743
Author: Ina Dobreva <55383260+ina...@users.noreply.github.com>
AuthorDate: Thu Jan 30 14:10:52 2020 +

[Relay][Frontend][TFlite] Add parser support for relational ops (#4695)

Add support for: greater_equal, less, less_equal, equal, not_equal
Add tests for the elemwise relational ops
---
 python/tvm/relay/frontend/tflite.py  | 57 +---
 tests/python/frontend/tflite/test_forward.py | 39 +++
 2 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py 
b/python/tvm/relay/frontend/tflite.py
index 5902b92..791c056 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -89,6 +89,11 @@ class OperatorConverter(object):
 'MAXIMUM': self.convert_maximum,
 'MINIMUM': self.convert_minimum,
 'GREATER': self.convert_greater,
+'GREATER_EQUAL': self.convert_greater_equal,
+'LESS': self.convert_less,
+'LESS_EQUAL': self.convert_less_equal,
+'EQUAL': self.convert_equal,
+'NOT_EQUAL': self.convert_not_equal,
 'ZEROS_LIKE': self.convert_zeros_like,
 'REDUCE_MIN': self._convert_reduce_min,
 'REDUCE_MAX': self._convert_reduce_max,
@@ -690,7 +695,7 @@ class OperatorConverter(object):
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized sub operator is not supported yet.')
+'TFlite quantized SUB operator is not supported yet.')
 return self._convert_elemwise(_op.subtract, op)
 
 def convert_mul(self, op):
@@ -705,38 +710,43 @@ class OperatorConverter(object):
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized div operator is not supported yet.')
+'TFlite quantized DIV operator is not supported yet.')
 return self._convert_elemwise(_op.divide, op)
 
 def convert_pow(self, op):
+"""Convert TFLite POW"""
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized pow operator is not supported yet.')
+'TFlite quantized POW operator is not supported yet.')
 return self._convert_elemwise(_op.power, op)
 
 def convert_maximum(self, op):
+"""Convert TFLite MAXIMUM"""
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized maximum operator is not supported yet.')
+'TFlite quantized MAXIMUM operator is not supported yet.')
 return self._convert_elemwise(_op.maximum, op)
 
 def convert_minimum(self, op):
+"""Convert TFLite MINIMUM"""
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized minimum operator is not supported yet.')
+'TFlite quantized MINIMUM operator is not supported yet.')
 return self._convert_elemwise(_op.minimum, op)
 
 def convert_greater(self, op):
+"""Convert TFLite GREATER"""
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
-'TFlite quantized greater operator is not supported yet.')
+'TFlite quantized GREATER operator is not supported yet.')
 return self._convert_elemwise(_op.greater, op)
 
 def convert_squared_difference(self, op):
+"""Convert TFLite SQUARED DIFFERENCE"""
 # Check if the input tensor is quantized, call QNN op
 if self.is_quantized(op):
 raise tvm.error.OpNotImplemented(
@@ -747,6 +757,41 @@ class OperatorConverter(object):
 out = _op.power(difference, relay.const(2, exp_type))
 return out
 
+def convert_greater_equal(self, op):
+"""Convert TFLite GREATER_EQUAL"""
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized GREATER_EQUAL operator is not supported 
yet.')
+return self._convert_elemwise(_op.greater_equal, op)
+
+def convert_less(self, op):
+""

[GitHub] [incubator-tvm] masahi commented on issue #4756: [Docker] Update torch version to 1.4

2020-01-30 Thread GitBox
masahi commented on issue #4756: [Docker] Update torch version to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-580259814
 
 
   OK, the CI is fixed.
   
   @tqchen since the MXNet update seems to be having issues and we want to
test the [PyTorch frontend](https://github.com/apache/incubator-tvm/pull/4497)
on the latest version, we would like the PyTorch update to be built first.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-30 Thread GitBox
masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580244271
 
 
   @alexwong the CI image is not updated just by changing the docker script in
this PR (see https://docs.tvm.ai/contribute/pull_request.html#ci-environment).
To update to v1.4, we first need to wait for #4756 to be merged.
   
   In the meantime, you can use
   ```Python
   if torch.__version__ != "1.2.0":
       torch._C._jit_pass_inline(graph)
   ```
   to unblock your testing.
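   
   For context, `graph` there is the TorchScript graph of the traced model; a
minimal, hedged sketch of where the guard fits (the model and input shape are
placeholders, not the actual test setup):
   ```Python
   import torch
   import torchvision

   model = torchvision.models.resnet18().eval()
   inp = torch.rand(1, 3, 224, 224)
   traced = torch.jit.trace(model, inp)
   graph = traced.graph
   if torch.__version__ != "1.2.0":
       # newer torch versions need the graph inlined before parsing
       torch._C._jit_pass_inline(graph)
   ```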


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi opened a new pull request #4793: Dedup BindParamByName function in VM compiler

2020-01-30 Thread GitBox
masahi opened a new pull request #4793: Dedup BindParamByName function in VM 
compiler
URL: https://github.com/apache/incubator-tvm/pull/4793
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on issue #4792: [Tutorial, VTA] Fix param name for autotvm function

2020-01-30 Thread GitBox
masahi commented on issue #4792: [Tutorial, VTA] Fix param name for autotvm 
function
URL: https://github.com/apache/incubator-tvm/pull/4792#issuecomment-580222718
 
 
   I realized I'd better fix this in my existing PR #4756, which is blocked by
this bug.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi closed pull request #4792: [Tutorial, VTA] Fix param name for autotvm function

2020-01-30 Thread GitBox
masahi closed pull request #4792: [Tutorial, VTA] Fix param name for autotvm 
function
URL: https://github.com/apache/incubator-tvm/pull/4792
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi opened a new pull request #4792: [Tutorial, VTA] Fix param name for autotvm function

2020-01-30 Thread GitBox
masahi opened a new pull request #4792: [Tutorial, VTA] Fix param name for 
autotvm function
URL: https://github.com/apache/incubator-tvm/pull/4792
 
 
   Thanks for contributing to TVM!   Please refer to guideline 
https://docs.tvm.ai/contribute/ for useful information and tips. After the pull 
request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] inadob commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-30 Thread GitBox
inadob commented on a change in pull request #4695: [Relay][Frontend][TFlite] 
Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r372861913
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -843,6 +843,13 @@ def _test_pow(data):
 """ One iteration of power """
 return _test_elemwise(math_ops.pow, data)
 ###
+# Squared_difference
+# --
+
+def _test_squared_difference(data):
 
 Review comment:
   Done now
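   
   (For reference, the added helper presumably mirrors `_test_pow` in the hunk
above; a minimal sketch, not necessarily the exact code in the PR:)
   ```python
   def _test_squared_difference(data):
       """ One iteration of squared difference """
       return _test_elemwise(math_ops.squared_difference, data)
   ```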


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services