[GitHub] [incubator-tvm] leandron commented on issue #4756: [Docker] Update torch version to 1.4

2020-01-27 Thread GitBox
leandron commented on issue #4756: [Docker] Update torch version to 1.4
URL: https://github.com/apache/incubator-tvm/pull/4756#issuecomment-578681838
 
 
   Sorry @masahi, I don't know how that error is related to the Pillow dependency, and I couldn't find any related issue in Sphinx.




[GitHub] [incubator-tvm] hcho3 commented on issue #4412: Python binding: No module named 'topi'

2020-01-27 Thread GitBox
hcho3 commented on issue #4412: Python binding: No module named 'topi'
URL: https://github.com/apache/incubator-tvm/issues/4412#issuecomment-578689513
 
 
   I've had a similar problem before, and the solution is to run `python setup.py install` from the `topi/python` directory. This command installs the Python package named `topi`.
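
   For example (paths assume a TVM source checkout; this is just an illustration of the command above):
   ```python
   import subprocess

   # Equivalent of `cd topi/python && python setup.py install`.
   subprocess.check_call(["python", "setup.py", "install"], cwd="topi/python")
   # Sanity check in a fresh interpreter:
   subprocess.check_call(["python", "-c", "import topi"])
   ```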




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371170921
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   Apologies for the delayed response. Yes, that's probably the source of the 
error. Normally that could be worked around just by increasing the error 
tolerances. But that doesn't work in this case because of the sorting and 
clipping that occurs.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371193247
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   I think we should resolve the rounding issue in TVM. Would you mind opening an RFC to describe it, so we can discuss and resolve it? This case is a good example of why we need to match TFLite's rounding behavior when we parse a TFLite quantized model.
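
   For context, here is a minimal standalone sketch (plain numpy, not TVM or TFLite code) of how two common tie-breaking conventions diverge, which is enough to change sorted orders and clipped values downstream:
   ```python
   import numpy as np

   x = np.array([0.5, 1.5, 2.5, -0.5, -1.5])
   # Round half to even (numpy's default):
   print(np.round(x))                             # [ 0.  2.  2. -0. -2.]
   # Round half away from zero (the behavior usually attributed to TFLite's
   # quantized kernels -- stated here as an assumption, not a reference):
   print(np.sign(x) * np.floor(np.abs(x) + 0.5))  # [ 1.  2.  3. -1. -2.]
   ```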




[GitHub] [incubator-tvm] inadob commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-27 Thread GitBox
inadob commented on a change in pull request #4695: [Relay][Frontend][TFlite] 
Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r371203978
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,77 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized SQUARED_DIFFERENCE operator is not supported yet.')
+        difference = self._convert_elemwise(_op.subtract, op)
+        # _convert_elemwise has guaranteed only have one output tensor
+        exp_type = self.get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())
+        out = _op.power(difference, relay.const(2, exp_type))
+        return out
+
     def convert_maximum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized maximum operator is not supported yet.')
+                'TFlite quantized MAXIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.maximum, op)
 
     def convert_minimum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized minimum operator is not supported yet.')
+                'TFlite quantized MINIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.minimum, op)
 
     def convert_greater(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized greater operator is not supported yet.')
+                'TFlite quantized GREATER operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
-    def convert_squared_difference(self, op):
-        # Check if the input tensor is quantized, call QNN op
+    def convert_greater_equal(self, op):
 
 Review comment:
   So do you want me to add such a docstring to every elemwise function, or to remove it from 'add', 'sub', 'mul' and 'div'?




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371205476
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   I can do that, but can we continue it as an orthogonal conversation? I'm just clarifying, as I don't think that issue affects the correctness of this operator, which is already tested by `test_detection_postprocess`.




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371269023
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   Alright, we could remove the ssd mobilenet model because of this limitation, but we should still keep the unit test for detection postprocess. After we resolve the limitation, we could add the ssd mobilenet test back. Moreover, we could then remove the `atol=1` from `test_qconv2d` and similar tests, because we would get exactly the same results as TFLite. Does that make sense to you?




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4695: [Relay][Frontend][TFlite] Add parser support for relational ops

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4695: 
[Relay][Frontend][TFlite] Add parser support for relational ops
URL: https://github.com/apache/incubator-tvm/pull/4695#discussion_r371276409
 
 

 ##
 File path: python/tvm/relay/frontend/tflite.py
 ##
 @@ -705,47 +710,77 @@ def convert_div(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized div operator is not supported yet.')
+                'TFlite quantized DIV operator is not supported yet.')
         return self._convert_elemwise(_op.divide, op)
 
     def convert_pow(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized pow operator is not supported yet.')
+                'TFlite quantized POW operator is not supported yet.')
         return self._convert_elemwise(_op.power, op)
 
+    def convert_squared_difference(self, op):
+        # Check if the input tensor is quantized, call QNN op
+        if self.is_quantized(op):
+            raise tvm.error.OpNotImplemented(
+                'TFlite quantized SQUARED_DIFFERENCE operator is not supported yet.')
+        difference = self._convert_elemwise(_op.subtract, op)
+        # _convert_elemwise has guaranteed only have one output tensor
+        exp_type = self.get_tensor_type_str(self.get_output_tensors(op)[0].tensor.Type())
+        out = _op.power(difference, relay.const(2, exp_type))
+        return out
+
     def convert_maximum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized maximum operator is not supported yet.')
+                'TFlite quantized MAXIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.maximum, op)
 
     def convert_minimum(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized minimum operator is not supported yet.')
+                'TFlite quantized MINIMUM operator is not supported yet.')
         return self._convert_elemwise(_op.minimum, op)
 
     def convert_greater(self, op):
         # Check if the input tensor is quantized, call QNN op
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
-                'TFlite quantized greater operator is not supported yet.')
+                'TFlite quantized GREATER operator is not supported yet.')
         return self._convert_elemwise(_op.greater, op)
 
-    def convert_squared_difference(self, op):
-        # Check if the input tensor is quantized, call QNN op
+    def convert_greater_equal(self, op):
 
 Review comment:
   Add such a docstring to every elemwise function.




[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
mbarrett97 commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371276909
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   This test is a bit misleading because it doesn't actually run ssd mobilenet; it just tests the postprocess op. I couldn't find a way to create the op using the TFLite Python API, so what I did instead was take a model that has it and run it through the TFLite converter, but with the converter inputs set to the inputs of the postprocess op rather than the input to the network.
   
   This has the net effect of producing a single postprocess op, so this should already be a unit test (and it passes). I can add the end-to-end tests if/when we resolve the QNN accuracy issue. I'll open an RFC shortly to describe why rounding is particularly significant in the case of this operator.
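
   For reference, a sketch of that conversion flow (TF 1.x API; the path, tensor names, and shapes are illustrative of the SSD export, not taken from this PR):
   ```python
   import tensorflow as tf

   # Feed the converter the postprocess op's inputs instead of the network's
   # image input, so the resulting model contains a single
   # TFLite_Detection_PostProcess op.
   converter = tf.lite.TFLiteConverter.from_frozen_graph(
       "tflite_graph.pb",  # hypothetical path to the exported frozen graph
       input_arrays=["raw_outputs/box_encodings", "raw_outputs/class_predictions"],
       output_arrays=["TFLite_Detection_PostProcess"],
       input_shapes={"raw_outputs/box_encodings": (1, 1917, 4),
                     "raw_outputs/class_predictions": (1, 1917, 91)},
   )
   converter.allow_custom_ops = True  # keep the custom postprocess op as-is
   tflite_model = converter.convert()
   ```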




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add support for TFLite_Detection_PostProcess

2020-01-27 Thread GitBox
FrozenGene commented on a change in pull request #4543: [FRONTEND][TFLITE] Add 
support for TFLite_Detection_PostProcess
URL: https://github.com/apache/incubator-tvm/pull/4543#discussion_r371281677
 
 

 ##
 File path: tests/python/frontend/tflite/test_forward.py
 ##
 @@ -1113,6 +1113,49 @@ def test_forward_fully_connected():
     _test_fully_connected([5, 1, 1, 150], [150, 100], [100])
 
 
+#######################################################################
+# Custom Operators
+# ----------------
+
+def test_detection_postprocess():
+    tf_model_file = tf_testing.get_workload_official(
+        "http://download.tensorflow.org/models/object_detection/"
+        "ssd_mobilenet_v2_quantized_300x300_coco_2019_01_03.tar.gz",
 
 Review comment:
   I think if we look at the TOCO source code, we may be able to find out how to construct detection_postprocess directly. Please refer to our `_test_prelu` comment, where I once wrote down the pattern TFLite uses to produce prelu. However, the current approach is acceptable too, in my opinion.




[GitHub] [incubator-tvm] yongfeng-nv commented on issue #4651: Tensor Expression Debug Display (TEDD)

2020-01-27 Thread GitBox
yongfeng-nv commented on issue #4651: Tensor Expression Debug Display (TEDD)
URL: https://github.com/apache/incubator-tvm/pull/4651#issuecomment-578845333
 
 
   > > Thanks @yongfeng-nv. One thing that I think is worth considering, as in many viz tools, is the separation of the visualization data source specification (in this case perhaps a DOM tree or similar) from the visualization (graphviz).
   > > We can have a tool that extracts the spec into JSON, then have a tool that takes that spec and visualizes it.
   > 
   > @tqchen, I see your point. Let me write up a spec for the data source.
   
   @tqchen, @Hzfengsy, I have posted the proposed DOM tree in the RFC thread: https://discuss.tvm.ai/t/visualize-tensor-expression/5174/6. Please leave your comments and suggestions.




[GitHub] [incubator-tvm] wpan11nv opened a new pull request #4779: [AUTOTVM] Fix a bug in generating the search space

2020-01-27 Thread GitBox
wpan11nv opened a new pull request #4779: [AUTOTVM] Fix a bug in generating the 
search space
URL: https://github.com/apache/incubator-tvm/pull/4779
 
 
   - Do not use numpy.prod, which silently ignores 64-bit integer overflow; this leads to an incorrect number of points in the search space.
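
   For illustration, a small standalone example of the overflow (plain Python, not the patch itself):
   ```python
   import functools
   import operator

   import numpy as np

   vals = [2**16] * 4                              # true product is 2**64
   print(np.prod(vals, dtype=np.int64))            # 0 -- silently wraps on overflow
   print(functools.reduce(operator.mul, vals, 1))  # 18446744073709551616 -- exact
   ```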
   




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4779: [AUTOTVM] Fix a bug in generating the search space

2020-01-27 Thread GitBox
comaniac commented on a change in pull request #4779: [AUTOTVM] Fix a bug in 
generating the search space
URL: https://github.com/apache/incubator-tvm/pull/4779#discussion_r371397521
 
 

 ##
 File path: python/tvm/autotvm/task/space.py
 ##
 @@ -226,7 +226,13 @@ def __init__(self, axes, policy, **kwargs):
     def _generate_space(self, now, tmp_stack, enforce_no_tail=False):
         """Generate space by DFS"""
         if now == self.num_output - 1:
-            prod = np.prod(tmp_stack, dtype=np.int64)
+            prod = 1
 
 Review comment:
   It seems to me that manually implementing a classic array product is not necessary in any case. Since fixed-width types are only enforced in numpy, and Python's integers are arbitrary-precision, it would be more concise to use Python builtins to calculate the product:
   ```python
   import functools
   import operator
   prod = functools.reduce(operator.mul, tmp_stack, 1)
   ```
   
   Note that the length of `tmp_stack` is always small (currently 4 at most, and I don't think it would ever exceed 10), so this won't hurt performance.




[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #4779: [AUTOTVM] Fix a bug in generating the search space

2020-01-27 Thread GitBox
wpan11nv commented on a change in pull request #4779: [AUTOTVM] Fix a bug in 
generating the search space
URL: https://github.com/apache/incubator-tvm/pull/4779#discussion_r371417185
 
 

 ##
 File path: python/tvm/autotvm/task/space.py
 ##
 @@ -226,7 +226,13 @@ def __init__(self, axes, policy, **kwargs):
     def _generate_space(self, now, tmp_stack, enforce_no_tail=False):
         """Generate space by DFS"""
         if now == self.num_output - 1:
-            prod = np.prod(tmp_stack, dtype=np.int64)
+            prod = 1
 
 Review comment:
   It looks more pythonic, and the number of lines will be similar. Here is one debate on this :)
   
   https://stackoverflow.com/questions/9474412/python-alternative-to-reduce




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4779: [AUTOTVM] Fix a bug in generating the search space

2020-01-27 Thread GitBox
comaniac commented on a change in pull request #4779: [AUTOTVM] Fix a bug in 
generating the search space
URL: https://github.com/apache/incubator-tvm/pull/4779#discussion_r371422539
 
 

 ##
 File path: python/tvm/autotvm/task/space.py
 ##
 @@ -226,7 +226,13 @@ def __init__(self, axes, policy, **kwargs):
     def _generate_space(self, now, tmp_stack, enforce_no_tail=False):
         """Generate space by DFS"""
         if now == self.num_output - 1:
-            prod = np.prod(tmp_stack, dtype=np.int64)
+            prod = 1
 
 Review comment:
   Well, I know that some people avoid `reduce` and other patterns like `map` in Python 3, although I personally think that, unlike `map` and `filter`, `reduce` is hard to replace with another pythonic construct. I am fine with it if you do not want to introduce `reduce` to the code; in that case I would suggest making this logic a standalone function in `autotvm/util.py`.




[GitHub] [incubator-tvm] soiferj commented on issue #4776: [Build] Explicitly link to cublasLt if it exists

2020-01-27 Thread GitBox
soiferj commented on issue #4776: [Build] Explicitly link to cublasLt if it 
exists
URL: https://github.com/apache/incubator-tvm/pull/4776#issuecomment-578902685
 
 
   @masahi fixed. Would you mind taking another look?




[GitHub] [incubator-tvm] soiferj opened a new pull request #4780: [FFI][Windows] Extract Python error type from Windows error log

2020-01-27 Thread GitBox
soiferj opened a new pull request #4780: [FFI][Windows] Extract Python error 
type from Windows error log
URL: https://github.com/apache/incubator-tvm/pull/4780
 
 
   As discussed [here](https://discuss.tvm.ai/t/cross-compilation-ubuntu-for-tvm-generation-and-windows-for-executing-in-tvm-runtime/159/5), `hasattr` is currently broken on Windows. When an `Expr` does not have an attribute, a `TVMError` is thrown rather than an `AttributeError`. This is because of how we extract the error type from the logged message.
   
   On Unix, DMLC will log the stack trace when `LOG(FATAL)` is called. However, 
DMLC will not log the stack trace on Windows. This causes the errors to look 
different.
   
   Unix:
   
   ```
   AttributeError: relay.Call object has no attributed name_hint
   Stack trace:
 File "/home/jonso/dev/TVM/src/node/reflection.cc", line 109
 [bt] (0) 
/home/jonso/dev/TVM/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32)
 [0x7f0970787a62]
 [bt] (1) 
/home/jonso/dev/TVM/build/libtvm.so(tvm::ReflectionVTable::GetAttr(tvm::runtime::Object*,
 std::__cxx11::basic_string, std::allocator 
> const&) const+0x2c5) [0x7f0970af27f5]
 [bt] (2) 
/home/jonso/dev/TVM/build/libtvm.so(tvm::NodeGetAttr(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)+0x15c) [0x7f0970af2c4c]
 [bt] (3) /home/jonso/dev/TVM/build/libtvm.so(std::_Function_handler::_M_invoke(std::_Any_data const&, 
tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x14) [0x7f0970af6f14]
 [bt] (4) /home/jonso/dev/TVM/build/libtvm.so(TVMFuncCall+0x61) 
[0x7f0970fb68a1]
   ```
   
   Windows:
   
   ```
   [11:19:42] D:\_work\1\s\TVM\src\node\reflection.cc:109: AttributeError: 
relay.Call object has no attributed name_hint
   ```
   
   This change fixes the error-string parsing logic in base.py so that it finds the error type in the Windows error message.
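
   As a rough illustration of the idea (not the actual `base.py` code; the helper name is made up):
   ```python
   import re

   def find_error_type(line):
       """Pull a Python exception name such as 'AttributeError' out of a
       Windows-style log line (illustrative only)."""
       match = re.search(r"([A-Za-z_]*Error):", line)
       return match.group(1) if match else None

   msg = r"[11:19:42] D:\_work\1\s\TVM\src\node\reflection.cc:109: AttributeError: ..."
   print(find_error_type(msg))  # AttributeError
   ```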
   
   @jmorrill @tqchen would you be able to take a look?




[GitHub] [incubator-tvm] abergeron commented on issue #4758: [relay] ADT match becomes not well formed during VM optimization

2020-01-27 Thread GitBox
abergeron commented on issue #4758: [relay] ADT match becomes not well formed 
during VM optimization
URL: https://github.com/apache/incubator-tvm/issues/4758#issuecomment-578918024
 
 
   After some experimentation, if I comment out InlinePrimitives at line 921 in 
compiler.cc 
(https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/vm/compiler.cc#L921),
 then the compilation works.  It seems this is the line that introduces the 
weird change.
   
   If I add an override like this:
   
   ```
  Expr VisitExpr_(const MatchNode* m) {
    std::vector<Clause> clauses;
    for (const Clause& p : m->clauses) {
      clauses.push_back(VisitClause(p));
    }
    // Visit the clauses but return the original match node unchanged.
    return GetRef<Expr>(m);
  }
   ```
   
   to the PrimitiveInliner class (in 
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/vm/inline_primitives.cc#L54),
 it appears to make the compilation work (at least in my limited example), but 
this may have some other consequences that I am not aware of (I suspect this 
means that no transformations will be applied to the body of match clauses, 
which might be bad).
   
   I'll keep working on that for a bit trying to find an acceptable solution, 
but I will gladly take any help/hints that I can get.




[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #4779: [AUTOTVM] Fix a bug in generating the search space

2020-01-27 Thread GitBox
wpan11nv commented on a change in pull request #4779: [AUTOTVM] Fix a bug in 
generating the search space
URL: https://github.com/apache/incubator-tvm/pull/4779#discussion_r371466338
 
 

 ##
 File path: python/tvm/autotvm/task/space.py
 ##
 @@ -226,7 +226,13 @@ def __init__(self, axes, policy, **kwargs):
     def _generate_space(self, now, tmp_stack, enforce_no_tail=False):
         """Generate space by DFS"""
         if now == self.num_output - 1:
-            prod = np.prod(tmp_stack, dtype=np.int64)
+            prod = 1
 
 Review comment:
   A utility function looks like overkill; a `reduce` call is fine. Updated, thanks!




[GitHub] [incubator-tvm] comaniac opened a new pull request #4781: [AutoTVM] Ignore error when removing tmpdir

2020-01-27 Thread GitBox
comaniac opened a new pull request #4781: [AutoTVM] Ignore error when removing 
tmpdir
URL: https://github.com/apache/incubator-tvm/pull/4781
 
 
   Sometimes the tmpdir disappears for external reasons, for example when /tmp is cleaned up by someone else. In any case, it seems to me that the call to `rmtree` should not raise an error.
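
   A minimal sketch of the idea (assuming the cleanup goes through `shutil.rmtree`; the path is illustrative):
   ```python
   import shutil

   tmp_dir = "/tmp/tvm_tuning_example"  # illustrative path

   # Don't fail the run if the directory has already been removed elsewhere.
   shutil.rmtree(tmp_dir, ignore_errors=True)
   ```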
   
   @merrymercy could you help review? Thanks.




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-01-27 Thread GitBox
comaniac commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r371504533
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const tvm::Map<std::string, Expr>& pattern_map)
+    : pattern_map_(pattern_map) {}
+
+  bool MatchPattern(const Call& pattern, const Call& root) {
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return false;
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return false;
+    if (pattern->args.size() != root->args.size())
+      return false;
+
+    unsigned int i = 0;
+    for (const auto& arg : pattern->args) {
+      if (arg->IsInstance<CallNode>()) {
+        if (!root->args[i]->IsInstance<CallNode>())
+          return false;
+        if (!MatchPattern(Downcast<Call>(arg), Downcast<Call>(root->args[i])))
+          return false;
+      }
+      i++;
+    }
+    return true;
+  }
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return free_var;
+    } else {
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  Expr ExtractPattern(const Call& pattern, const Call& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    Expr expr;
+    Expr empty_expr;
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return empty_expr;
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return empty_expr;
+    if (pattern->args.size() != root->args.size())
+      return empty_expr;
+
+    unsigned int i = 0;
+    Array<Expr> new_args;
+    for (const auto& arg : pattern->args) {
+      if (arg->IsInstance<CallNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Call>(arg),
+                                          Downcast<Call>(root->args[i]),
+                                          var_map));
+      }
+      if (arg->IsInstance<VarNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Var>(arg),
+                                          root->args[i],
+                                          var_map));
+      }
+      if (arg->IsInstance<ConstantNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Constant>(arg),
+                                          root->args[i],
+                                          var_map));
+      }
+      i++;
+    }
+
+    auto new_call = CallNode::make(root->op, new_args, root->attrs);
+    return new_call;
+  }
+
+  Expr VisitExpr_(const CallNode* cn) {
+    Call call = GetRef<Call>(cn);
+    if (call->op->IsInstance<FunctionNode>()) {
+      Function func = Downcast<Function>(call->op);
+      CHECK(func.defined());
+      const auto name_node =
+        FunctionGetAttr(func, attr::kComposite).as<tir::StringImmNode>();
+      if (name_node->value != "") {
+        tvm::Array<Expr> new_args;
+        for (const auto& arg : call->args) {
+          auto new_e = this->Mutate(arg);
+          new_args.push_back(new_e);
+        }
+        return CallNode::make(call->op, new_args, call->attrs);
+      }
+    }
+
+    Expr expr = ExprMutator::VisitExpr_(cn);
+    call = Downcast<Call>(expr);
+    if (!call->op->IsInstance<OpNode>())
+      return call;
+
+    Op op = Downcast<Op>(call->op);
+    CHECK(op.defined());
+    for (const auto& x : pattern_map_) {
+      Call pattern = Downcast<Call>(x.second);
+      if (Downcast<Op>(pattern->op)->name != op->name)
+        continue;
+
+      if (MatchPattern(pattern, call)) {
+        Map<std::string, Array<Expr>> args_map;
+        auto extract = ExtractPattern(pattern, call, &args_map);
+        auto free_vars = FreeVars(extract);
+Function new_func = FunctionNode::mak

[GitHub] [incubator-tvm] mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge Composite pass

2020-01-27 Thread GitBox
mbarrett97 commented on a change in pull request #4771: [Relay] Added Merge 
Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#discussion_r371508067
 
 

 ##
 File path: src/relay/pass/merge_composite.cc
 ##
 @@ -0,0 +1,192 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/pass/merge_composite.cc
+ * \brief Merges expressions matching patterns into functions marked
+ * as 'composite'.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+namespace merge_composite {
+
+
+class MergeCompositeWrapper : public ExprMutator {
+ public:
+  explicit MergeCompositeWrapper(const tvm::Map<std::string, Expr>& pattern_map)
+    : pattern_map_(pattern_map) {}
+
+  bool MatchPattern(const Call& pattern, const Call& root) {
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return false;
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return false;
+    if (pattern->args.size() != root->args.size())
+      return false;
+
+    unsigned int i = 0;
+    for (const auto& arg : pattern->args) {
+      if (arg->IsInstance<CallNode>()) {
+        if (!root->args[i]->IsInstance<CallNode>())
+          return false;
+        if (!MatchPattern(Downcast<Call>(arg), Downcast<Call>(root->args[i])))
+          return false;
+      }
+      i++;
+    }
+    return true;
+  }
+
+  Expr ExtractPattern(const Var& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    if (var_map->find(pattern->name_hint()) == var_map->end()) {
+      auto free_var = VarNode::make(pattern->name_hint(), Type());
+      var_map->Set(pattern->name_hint(), Array<Expr>({free_var, root}));
+      return free_var;
+    } else {
+      return (*var_map)[pattern->name_hint()][0];
+    }
+  }
+
+  Expr ExtractPattern(const Constant& pattern, const Expr& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    return root;
+  }
+
+  Expr ExtractPattern(const Call& pattern, const Call& root,
+                      Map<std::string, Array<Expr>>* var_map) {
+    Expr expr;
+    Expr empty_expr;
+    if (!pattern->op->IsInstance<OpNode>() || !root->op->IsInstance<OpNode>())
+      return empty_expr;
+    if (pattern->op.as<OpNode>()->name != root->op.as<OpNode>()->name)
+      return empty_expr;
+    if (pattern->args.size() != root->args.size())
+      return empty_expr;
+
+    unsigned int i = 0;
+    Array<Expr> new_args;
+    for (const auto& arg : pattern->args) {
+      if (arg->IsInstance<CallNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Call>(arg),
+                                          Downcast<Call>(root->args[i]),
+                                          var_map));
+      }
+      if (arg->IsInstance<VarNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Var>(arg),
+                                          root->args[i],
+                                          var_map));
+      }
+      if (arg->IsInstance<ConstantNode>()) {
+        new_args.push_back(ExtractPattern(Downcast<Constant>(arg),
+                                          root->args[i],
+                                          var_map));
+      }
+      i++;
+    }
+
+    auto new_call = CallNode::make(root->op, new_args, root->attrs);
+    return new_call;
+  }
+
+  Expr VisitExpr_(const CallNode* cn) {
+    Call call = GetRef<Call>(cn);
+    if (call->op->IsInstance<FunctionNode>()) {
+      Function func = Downcast<Function>(call->op);
+      CHECK(func.defined());
+      const auto name_node =
+        FunctionGetAttr(func, attr::kComposite).as<tir::StringImmNode>();
+      if (name_node->value != "") {
+        tvm::Array<Expr> new_args;
+        for (const auto& arg : call->args) {
+          auto new_e = this->Mutate(arg);
+          new_args.push_back(new_e);
+        }
+        return CallNode::make(call->op, new_args, call->attrs);
+      }
+    }
+
+    Expr expr = ExprMutator::VisitExpr_(cn);
+    call = Downcast<Call>(expr);
+    if (!call->op->IsInstance<OpNode>())
+      return call;
+
+    Op op = Downcast<Op>(call->op);
+    CHECK(op.defined());
+    for (const auto& x : pattern_map_) {
+      Call pattern = Downcast<Call>(x.second);
+      if (Downcast<Op>(pattern->op)->name != op->name)
+        continue;
+
+      if (MatchPattern(pattern, call)) {
+        Map<std::string, Array<Expr>> args_map;
+        auto extract = ExtractPattern(pattern, call, &args_map);
+        auto free_vars = FreeVars(extract);
+Function new_func = FunctionNode::m

[GitHub] [incubator-tvm] mbarrett97 closed issue #4150: [RFC] [AutoTVM] Implementing an auto-tuning library/cache

2020-01-27 Thread GitBox
mbarrett97 closed issue #4150: [RFC] [AutoTVM] Implementing an auto-tuning 
library/cache
URL: https://github.com/apache/incubator-tvm/issues/4150
 
 
   




[GitHub] [incubator-tvm] mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass

2020-01-27 Thread GitBox
mbarrett97 commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-578979321
 
 
   It's the intention that this can be called on the entire Relay graph so that 
it can be used to help implement a generic annotation pass (one that is aware 
of composite functions). That way we can define functions similar to the 'Is 
Supported?' mechanism in the original annotation PR (since taken down) where 
you could declare in Python whether an operator was supported. That could be 
extended to say whether a composite function is supported without having to add 
pattern matching code to the annotator.
   
   The problem there is the case where composite functions do not end up in an 
external function after partitioning. My thinking is to have some legalize pass 
after the partitioning that removes the composite functions from sections of 
the graph not marked as external.




[GitHub] [incubator-tvm] masahi merged pull request #4776: [Build] Explicitly link to cublasLt if it exists

2020-01-27 Thread GitBox
masahi merged pull request #4776: [Build] Explicitly link to cublasLt if it 
exists
URL: https://github.com/apache/incubator-tvm/pull/4776
 
 
   




[GitHub] [incubator-tvm] masahi commented on issue #4776: [Build] Explicitly link to cublasLt if it exists

2020-01-27 Thread GitBox
masahi commented on issue #4776: [Build] Explicitly link to cublasLt if it 
exists
URL: https://github.com/apache/incubator-tvm/pull/4776#issuecomment-578985825
 
 
   thanks @soiferj 




[incubator-tvm] branch master updated (9056fc4 -> 00ec7f9)

2020-01-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9056fc4  Update tune_simple_template.py (#4778)
 add 00ec7f9  [Build] Explicitly link to cublasLt if it exists (#4776)

No new revisions were added by this update.

Summary of changes:
 cmake/modules/CUDA.cmake  | 3 +++
 cmake/util/FindCUDA.cmake | 7 +++
 2 files changed, 10 insertions(+)



[GitHub] [incubator-tvm] tqchen merged pull request #4780: [FFI][Windows] Fix hasattr by extracting Python error type from Windows error message

2020-01-27 Thread GitBox
tqchen merged pull request #4780: [FFI][Windows] Fix hasattr by extracting 
Python error type from Windows error message
URL: https://github.com/apache/incubator-tvm/pull/4780
 
 
   




[GitHub] [incubator-tvm] tqchen commented on issue #4780: [FFI][Windows] Fix hasattr by extracting Python error type from Windows error message

2020-01-27 Thread GitBox
tqchen commented on issue #4780: [FFI][Windows] Fix hasattr by extracting 
Python error type from Windows error message
URL: https://github.com/apache/incubator-tvm/pull/4780#issuecomment-578996008
 
 
   Thanks @jonso4 !




[incubator-tvm] branch master updated (00ec7f9 -> f71a10c)

2020-01-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 00ec7f9  [Build] Explicitly link to cublasLt if it exists (#4776)
 add f71a10c  properly extract error type from windows error message (#4780)

No new revisions were added by this update.

Summary of changes:
 python/tvm/_ffi/base.py | 29 +++--
 1 file changed, 23 insertions(+), 6 deletions(-)



[GitHub] [incubator-tvm] jroesch merged pull request #4774: [Relay][Frontend][ONNX] Broadcast condition, x, and y for Where op

2020-01-27 Thread GitBox
jroesch merged pull request #4774: [Relay][Frontend][ONNX] Broadcast condition, 
x, and y for Where op
URL: https://github.com/apache/incubator-tvm/pull/4774
 
 
   




[incubator-tvm] branch master updated (f71a10c -> de919cb)

2020-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from f71a10c  properly extract error type from windows error message (#4780)
 add de919cb  [Relay][Frontend][ONNX] Broadcast condition, x, and y for 
Where op (#4774)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 29 -
 tests/python/frontend/onnx/test_forward.py | 16 
 2 files changed, 40 insertions(+), 5 deletions(-)



[GitHub] [incubator-tvm] comaniac commented on issue #4771: [Relay] Added Merge Composite pass

2020-01-27 Thread GitBox
comaniac commented on issue #4771: [Relay] Added Merge Composite pass
URL: https://github.com/apache/incubator-tvm/pull/4771#issuecomment-579014540
 
 
   So you want the flow to be:
   ```
   CompositeMerge -> Annotation -> Partitioning. 
   ```
   I agree that this would make annotation generic and straightforward, although it seems like we wouldn't need annotation anymore if we specified all patterns, including single ops. While there are lots of ways to approach this, maybe we could accept this solution first and consider the further steps. What do you think? @zhiics
   
   Also, please help clarify the question asked by @masahi and me about 
multiple matching. Thanks.




[GitHub] [incubator-tvm] merrymercy merged pull request #4781: [AutoTVM] Ignore error when removing tmpdir

2020-01-27 Thread GitBox
merrymercy merged pull request #4781: [AutoTVM] Ignore error when removing 
tmpdir
URL: https://github.com/apache/incubator-tvm/pull/4781
 
 
   




[incubator-tvm] branch master updated (de919cb -> d54036a)

2020-01-27 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from de919cb  [Relay][Frontend][ONNX] Broadcast condition, x, and y for 
Where op (#4774)
 add d54036a  Safe remove tmpdir (#4781)

No new revisions were added by this update.

Summary of changes:
 python/tvm/autotvm/measure/measure_methods.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[incubator-tvm] branch pass_callback_via_cx created (now 00ec7f9)

2020-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch pass_callback_via_cx
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at 00ec7f9  [Build] Explicitly link to cublasLt if it exists (#4776)

No new revisions were added by this update.



[incubator-tvm] branch pass_callback_via_cx updated: Implement pass tracing API

2020-01-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch pass_callback_via_cx
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/pass_callback_via_cx by this 
push:
 new 1f38493  Implement pass tracing API
1f38493 is described below

commit 1f3849378884be54c16efcd23c78f168c5c99ecd
Author: Jared Roesch 
AuthorDate: Mon Jan 27 16:35:45 2020 -0800

Implement pass tracing API
---
 include/tvm/ir/transform.h  | 19 +++
 python/tvm/relay/transform.py   | 13 +
 src/ir/transform.cc |  8 
 src/relay/ir/transform.cc   |  3 ++-
 tests/python/relay/test_pass_manager.py | 30 ++
 5 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/include/tvm/ir/transform.h b/include/tvm/ir/transform.h
index c606b34..03aba40 100644
--- a/include/tvm/ir/transform.h
+++ b/include/tvm/ir/transform.h
@@ -65,6 +65,14 @@
 namespace tvm {
 namespace transform {
 
+// Forward declare for TraceFunc.
+class PassInfo;
+
+/*! \brief A callback for tracing passes, useful for debugging and logging.
+ *
+ */
+using TraceFunc =
+  runtime::TypedPackedFunc<void(const IRModule& module, const PassInfo& info, bool is_before)>;
+
 /*!
  * \brief PassContextNode contains the information that a pass can rely on,
  * such as analysis results.
@@ -88,6 +96,8 @@ class PassContextNode : public Object {
   /*! \brief The list of disabled passes. */
   Array disabled_pass;
 
+  TraceFunc trace_func;
+
   PassContextNode() = default;
 
   void VisitAttrs(AttrVisitor* v) {
@@ -101,6 +111,7 @@ class PassContextNode : public Object {
   TVM_DECLARE_FINAL_OBJECT_INFO(PassContextNode, Object);
 };
 
+
 /*!
  * \brief PassContext that is used to configure the pass behavior.
  *
@@ -146,6 +157,14 @@ class PassContext : public ObjectRef {
*/
   TVM_DLL static PassContext Current();
 
+  /*!
+   * \brief Apply the tracing functions of the context to the module, with the info.
+   * \param module The IRModule to trace.
+   * \param info The pass information.
+   * \param is_before Indicates whether the tracing is before or after a pass.
+   */
+  TVM_DLL void Trace(const IRModule& module, const PassInfo& info, bool is_before) const;
+
   // accessor.
   using ContainerType = PassContextNode;
   class Internal;
diff --git a/python/tvm/relay/transform.py b/python/tvm/relay/transform.py
index c4fbde6..26b20e0 100644
--- a/python/tvm/relay/transform.py
+++ b/python/tvm/relay/transform.py
@@ -78,7 +78,8 @@ class PassContext(RelayNode):
                  opt_level=2,
                  fallback_device=_nd.cpu(),
                  required_pass=None,
-                 disabled_pass=None):
+                 disabled_pass=None,
+                 trace=None):
         if isinstance(fallback_device, str):
             fallback_device = _nd.context(fallback_device).device_type
         elif isinstance(fallback_device, TVMContext):
@@ -99,7 +100,7 @@ class PassContext(RelayNode):
 
         self.__init_handle_by_constructor__(_transform.PassContext, opt_level,
                                             fallback_device, required,
-                                            disabled)
+                                            disabled, trace)
 
 def __enter__(self):
 _transform.EnterPassContext(self)
@@ -117,7 +118,8 @@ class PassContext(RelayNode):
 def build_config(opt_level=2,
                  fallback_device=_nd.cpu(),
                  required_pass=None,
-                 disabled_pass=None):
+                 disabled_pass=None,
+                 trace=None):
 """Configure the build behavior by setting config variables.
 
 Parameters
@@ -151,13 +153,16 @@ def build_config(opt_level=2,
     disabled_pass: set of str, optional
         Optimization passes to be disabled during optimization.
 
+    trace: Callable[[IRModule, PassInfo, bool], None]
+        A tracing function for debugging or introspection.
+
     Returns
     -------
     pass_context: PassContext
         The pass context for optimizations.
     """
     return PassContext(opt_level, fallback_device, required_pass,
-                       disabled_pass)
+                       disabled_pass, trace)
 
 
 @register_relay_node
diff --git a/src/ir/transform.cc b/src/ir/transform.cc
index 1da010c..d14a5b4 100644
--- a/src/ir/transform.cc
+++ b/src/ir/transform.cc
@@ -84,6 +84,10 @@ PassContext PassContext::Create() {
   return PassContext(make_object<PassContextNode>());
 }
 
+void PassContext::Trace(const IRModule& module, const PassInfo& info, bool is_before) const {
+  this->operator->()->trace_func(module, info, is_before);
+}
+
 class ModulePass;
 
 /*!
@@ -231,8 +235,10 @@ IRModule ModulePassNode::operator()(const IRModule& mod,
  << " with opt level: "
  << pass_info->opt_level;
   CHECK(mod.defined());
+  pass_ctx.Trace(mod, pass_info, true);
   IRModule updated_mod = pass_func(mod, pass_ctx);
   CHEC

[GitHub] [incubator-tvm] jroesch opened a new pull request #4782: [PassManager] Implement pass manager tracing API

2020-01-27 Thread GitBox
jroesch opened a new pull request #4782: [PassManager] Implement pass manager 
tracing API
URL: https://github.com/apache/incubator-tvm/pull/4782
 
 
   I am working on some pass infrastructure for TVM and needed the previously discussed tracing [feature](https://discuss.tvm.ai/t/rfc-printing-ir-and-parameters-to-pass-pipelines/4239/5).
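
   Based on the diff in the branch update above, a usage sketch (hedged: the callback signature follows that diff, and `info.name` is assumed to hold the pass name):
   ```python
   from tvm import relay

   def trace_pass(module, info, is_before):
       phase = "before" if is_before else "after"
       print("{} pass: {}".format(phase, info.name))

   with relay.build_config(opt_level=3, trace=trace_pass):
       pass  # run relay.build(...) or other pass pipelines here
   ```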




[GitHub] [incubator-tvm] jroesch commented on issue #4782: [PassManager] Implement pass manager tracing API

2020-01-27 Thread GitBox
jroesch commented on issue #4782: [PassManager] Implement pass manager tracing 
API
URL: https://github.com/apache/incubator-tvm/pull/4782#issuecomment-579026027
 
 
   cc @zhiics @tmoreau89 




[GitHub] [incubator-tvm] jroesch commented on issue #4782: [PassManager] Implement pass manager tracing API

2020-01-27 Thread GitBox
jroesch commented on issue #4782: [PassManager] Implement pass manager tracing 
API
URL: https://github.com/apache/incubator-tvm/pull/4782#issuecomment-579028693
 
 
   @tmoreau89 I added an example to the pass docs.




[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-27 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-579028784
 
 
   > @alexwong I had great success with the refactoring: the parser itself (except op conversion) is about 150 lines, and the "main loop" is just:
   > 
   > ```python
   > def get_op_inputs(op_node, outputs, name_map):
   >     inputs = []
   >     for i in op_node.inputs():
   >         inode_name = name_map[i.debugName()]
   >         inputs.append(outputs[inode_name])
   >     return inputs
   > 
   > outputs = list(input_vars.values())
   > node_name_to_nid = dict(zip(input_vars.keys(), range(len(outputs))))
   > 
   > for node_name, op_node in ops.items():
   >     operator = op_node.kind()
   >     if operator == "prim::Constant":
   >         node_name_to_nid[node_name] = len(outputs)
   >         outputs.append(consts[node_name])
   >     elif operator != 'prim::ListConstruct':
   >         node_name_to_nid[node_name] = len(outputs)
   >         inputs = get_op_inputs(op_node, outputs, node_name_to_nid)
   >         call = convert_map[operator](inputs, op_in_types[node_name])
   >         outputs.append(call)
   > 
   > body = outputs[-1]
   > func = tvm.relay.Function(_analysis.free_vars(body), body)
   > param = {k: tvm.nd.array(v) for k, v in param_tensors.items()}
   > ```
   > 
   > My updated version is 
[here](https://gist.github.com/masahi/7704856919563c4b8a74bf085686b519)
   > 
   > Maybe this is too much change for you; I'm happy to send my change as a follow-up after this PR. We can merge this after you fix the CI issue.
   
   It is a lot of feedback, but I think I can manage; I've just been sidetracked 
by other things that keep pulling me away from this. I'm not sure about today, 
but I should be able to work on this tomorrow. I'm all for simpler code, though. 
Would you prefer I pull in the changes above in this PR, or just make all of 
the simpler changes to get this merged first?
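
   For readers following along: a minimal sketch of the `convert_map` that the quoted loop dispatches through, assuming the converter factories from this PR's `pytorch.py` (the `aten::` keys here are illustrative, not the full table):
   
   ```python
   # Hypothetical: map TorchScript operator kinds to converter factories.
   convert_map = {
       'aten::add':       _elemwise('add'),
       'aten::mul':       _elemwise('multiply'),
       'aten::unsqueeze': _unsqueeze(),
       'aten::cat':       _concatenate(),
       'aten::select':    _select(),
   }
   ```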


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-27 Thread GitBox
masahi commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-579038361
 
 
   @alexwong I'm 100% for "make all of the simpler changes to get this merged 
first". I don't want you to spend your time redoing the refactoring work I did. 
Of course, if you want to improve your PR and have time to do so, go ahead and 
do that as you like. In particular, good simplification can be done around the 
GetAttr node parsing logic and the inputs/outputs handling inside the main loop 
(my gist above has my take on this). 
   
   Anyway, the must-do TODOs for this PR before we merge are:
   * Make CI green
   * Make it work with PyTorch 1.4 (just add `torch._C._jit_pass_inline(graph)`; see the sketch after this list)
   * Replace all the string hacks with PyTorch's Python interface for JIT data 
structures
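
   A minimal sketch of the PyTorch 1.4 item above (`model` and `example_input` are placeholders; any traceable module and matching input work):
   
   ```python
   import torch
   
   model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU()).eval()
   example_input = torch.randn(1, 8)
   
   traced = torch.jit.trace(model, example_input)
   # PyTorch 1.4 stopped inlining calls into the traced graph by default,
   # so inline it before the parser walks the nodes.
   torch._C._jit_pass_inline(traced.graph)
   ```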
   
   After we merge this PR, I'll send a refactoring PR (only about a 200-line 
change; all op conversions and test cases will be used as is). I've finished 
the refactoring on my branch and am currently working on quantized model 
support. I'm also looking at dynamic models with control flow nodes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-27 Thread GitBox
masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r369430546
 
 

 ##
 File path: python/tvm/relay/frontend/pytorch.py
 ##
 @@ -0,0 +1,1138 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=import-self, too-many-lines, len-as-condition, no-else-return, unused-variable, too-many-nested-blocks
+# pylint: disable=consider-iterating-dictionary, invalid-name, unused-argument, unused-variable, broad-except
+"""PT: PyTorch frontend."""
+import numpy as np
+
+import tvm
+
+from .. import analysis as _analysis
+from .. import expr as _expr
+from .. import module as _module
+from .. import op as _op
+from .common import get_relay_op
+from .common import infer_shape as _infer_shape
+
+__all__ = ['from_pytorch']
+
+# operator implementation
+def _elemwise(name):
+    def _impl(inputs, input_types):
+        data0 = convert_input(inputs[0])
+        data1 = convert_input(inputs[1])
+
+        if not isinstance(data0, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+            # Make sure the first operand is a Relay expression.
+            data0, data1 = data1, data0
+
+        return get_relay_op(name)(data0, data1)
+    return _impl
+
+def _unsqueeze():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        return _op.transform.expand_dims(data, int(axis), 1)
+    return _impl
+
+def _concatenate():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        axis = inputs[1]
+
+        if isinstance(data, (_expr.Call, _expr.TupleGetItem, _expr.Var)):
+            data = [data]
+
+        return _op.tensor.concatenate(data, int(axis))
+    return _impl
+
+def _slice():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        strides = []
+
+        # Default the slice end to the full inferred shape.
+        end = [int(sh) for sh in _infer_shape(data)]
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        begin[dim] = int(inputs[2])
+
+        if inputs[3].isdigit():
+            end[dim] = min(end[dim], int(inputs[3]))
+
+        strides.append(int(inputs[4]))
+        return _op.transform.strided_slice(data, begin, end, strides)
+    return _impl
+
+def _select():
+    def _impl(inputs, input_types):
+        data = inputs[0]
+        end = [int(sh) for sh in _infer_shape(data)]
+
+        begin = [0]*len(end)
+        dim = int(inputs[1])
+        index = int(inputs[2])
+
+        # Take a length-1 slice along `dim`, then squeeze that axis away.
+        end[dim] = index+1
+        begin[dim] = index
+
+        strides = [1]*len(end)
+
+        sym = _op.transform.strided_slice(data, begin, end, strides)
+        return _op.transform.squeeze(sym, [dim])
+    return _impl
+
+def _convert_data_type(input_type):
+    if input_type in ('double', 'torch.float64'):
+        return 'float64'
+    elif input_type in ('float', 'torch.float32'):
+        return 'float32'
+    elif input_type in ('half', 'torch.float16'):
+        return 'float16'
+    elif input_type in ('long', 'torch.int64'):
+        return 'int64'
+    elif input_type in ('int', 'torch.int32'):
+        return 'int32'
+    elif input_type in ('short', 'torch.int16'):
+        return 'int16'
+    elif input_type in ('char', 'torch.int8'):
+        return 'int8'
+    elif input_type in ('byte', 'torch.uint8'):
+        return 'uint8'
+    else:
+        return input_type
+
+def _ones():
+    def _impl(inputs, input_types):
+        if isinstance(inputs[0], (_expr.Var, _expr.Call, _expr.TupleGetItem)):
+            # These Relay nodes carry no static .shape attribute, so infer it.
+            shape = _infer_shape(inputs[0])
+        else:
+            shape = inputs[0].shape
+
+        fill_value = _get_fill_value(input_types)
+
+        return get_relay_op('full')(fill_value, shape, dtype=_convert_data_type(input_types[0]))
+    return _impl
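
A table-driven variant of `_convert_data_type` above would be shorter; a minimal sketch (a suggestion, not part of the PR):

```python
_TYPE_MAP = {
    'double': 'float64', 'torch.float64': 'float64',
    'float':  'float32', 'torch.float32': 'float32',
    'half':   'float16', 'torch.float16': 'float16',
    'long':   'int64',   'torch.int64':   'int64',
    'int':    'int32',   'torch.int32':   'int32',
    'short':  'int16',   'torch.int16':   'int16',
    'char':   'int8',    'torch.int8':    'int8',
    'byte':   'uint8',   'torch.uint8':   'uint8',
}

def _convert_data_type(input_type):
    # Unknown types fall through unchanged, matching the if/elif version.
    return _TYPE_MAP.get(input_type, input_type)
```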

[GitHub] [incubator-tvm] masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-27 Thread GitBox
masahi commented on a change in pull request #4497: [WIP] [Relay] Add a PyTorch 
to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#discussion_r369428504
 
 
