[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738629079


   Maybe I'm missing something; I'm just wondering why `full_like` at 
https://github.com/apache/tvm/blob/464396706ec075fbada0f04629211e1ae7276234/python/tvm/relay/frontend/pytorch.py#L610
is nonstatic while `linspace` just below it is static? By "all" I just meant 
all op converters; I don't see why we cannot make some of the op converters 
static?
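
   For reference, a minimal sketch of the two styles in question (hypothetical 
converter bodies, not the actual `pytorch.py` code):

       class PyTorchOpConverter:
           def __init__(self, default_dtype):
               self.default_dtype = default_dtype  # per-conversion state

           # nonstatic: can fall back to state stored on self
           def full_like(self, inputs, input_types):
               dtype = input_types[0] or self.default_dtype
               return ("full_like", inputs, dtype)

           # static: everything it needs arrives as arguments
           @staticmethod
           def linspace(inputs, input_types):
               return ("linspace", inputs, input_types)

   A converter can stay static only as long as it never needs per-graph state, 
which is what the question above is probing.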







[GitHub] [tvm] t-vi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


t-vi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738625073


   For 1., I can make all methods nonstatic. I wouldn't know how to reasonably 
make all methods static.
   Part of this is that I don't agree with the singleton.
   With the convert map moved into the state, and later the graph, it's not a 
singleton anymore.
   







[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738625073


   For 1., I can make all methods nonstatic. I wouldn't know how to reasonably 
make all methods static.
   







[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738619723


   1. To avoid confusion when we add a new converter, I think we should make 
everything either a staticmethod or a regular method. Since this class is 
supposed to be a singleton, staticmethod makes sense to me.
   2. Yes, the arguments of each op converter are supposed to be `inputs, 
input_types`. If you add another arg like `name, inputs, input_types`, I'd say 
it is already inconsistent anyway. So I prefer lifting the `name` arg to a 
wrapper function and returning a new impl function (see the sketch after this 
list). I think being able to remove all the `functools.partial(...)` calls is 
a big plus.
   3. OK, we can do that later.
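
   A minimal sketch of the lifting proposed in 2. (hypothetical op and helper 
names, not code from this PR):

       import functools

       # current style: bind the extra `name` arg with functools.partial
       def convert_full_impl(name, inputs, input_types):
           return (name, inputs, input_types)

       convert_map_old = {"aten::full": functools.partial(convert_full_impl, "full")}

       # proposed style: a wrapper lifts `name` and returns an impl whose
       # signature is the uniform (inputs, input_types)
       def make_full_converter(name):
           def impl(inputs, input_types):
               return (name, inputs, input_types)
           return impl

       convert_map_new = {"aten::full": make_full_converter("full")}

       assert (convert_map_old["aten::full"]([1], ["int"])
               == convert_map_new["aten::full"]([1], ["int"]))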







[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738619723


   1. To avoid confusion when we add a new converter, I think we should make 
everything either a staticmethod or a regular method. Since this class is 
supposed to be a singleton, staticmethod makes sense to me.
   2. Yes, the arguments of each op converter are supposed to be `inputs, 
input_types`. If you add another arg like `name, inputs, input_types`, I'd say 
it is already inconsistent. So I prefer lifting the `name` arg to a wrapper 
function and returning a new impl function. I think being able to remove all 
the `functools.partial(...)` calls is a big plus.
   3. OK, we can do that later.







[GitHub] [tvm] t-vi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


t-vi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738607046


   Thank you for looking at this.
   
   1. I used static methods to avoid an unused `self`. I'm not terribly 
attached to that, if you like the consistency of everything being a regular 
method.
   2. This is again a consistency thing. Right now all methods apply the 
operation. Moving some back to returning `impl_` has us doing two different 
things (though I'm not opposed). One alternative I'd consider is to actually 
register a method for each op we handle (`convert_op_aten_mm`) and build the 
conversion dictionary on the fly, as sketched after this list. For the time 
being I'll move the things taking extra arguments back to the impl scheme, as 
you suggest.
   3. Yes, I thought I'd do the conversion in pieces. If you prefer, I can move 
these now, too. I will tentatively add them to this PR.
   4. Good point about `_`, in particular since a leading `_` in classes is 
special in Python...
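
   A minimal sketch of the "register a method per op" alternative from 2. 
(hypothetical naming convention, not code from this PR):

       class PyTorchOpConverter:
           def convert_op_aten_mm(self, inputs, input_types):
               return ("aten::mm", inputs)

           def convert_op_aten_relu(self, inputs, input_types):
               return ("aten::relu", inputs)

           def build_convert_map(self):
               # discover converters by naming convention instead of
               # maintaining a literal dictionary
               prefix = "convert_op_"
               return {
                   name[len(prefix):].replace("_", "::", 1): getattr(self, name)
                   for name in dir(self)
                   if name.startswith(prefix)
               }

       convert_map = PyTorchOpConverter().build_convert_map()
       assert "aten::mm" in convert_map and "aten::relu" in convert_map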
   







[GitHub] [tvm] t-vi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


t-vi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738607046


   Thank you for looking at this.
   
   1. I used static methods to avoid an unused `self`. I'm not terribly 
attached to that, if you like the consistency of everything being a regular 
method.
   2. This is again a consistency thing. Right now all methods apply the 
operation. Moving some back to returning `impl_` has us doing two different 
things (though I'm not opposed). One alternative I'd consider is to actually 
register a method for each op we handle (`convert_op_aten_mm`) and build the 
conversion dictionary on the fly. For the time being I'll move the things 
taking extra arguments back to the impl scheme, as you suggest.
   3. Yes, I thought I'd do the conversion in pieces. If you prefer, I can move 
these now, too.
   4. Good point about `_`, in particular since a leading `_` in classes is 
special in Python...
   







[GitHub] [tvm-vta] dsteger commented on pull request #19: config: small spelling fix

2020-12-03 Thread GitBox


dsteger commented on pull request #19:
URL: https://github.com/apache/tvm-vta/pull/19#issuecomment-738583141


   @tqchen  Do you mind taking a look at this?







[GitHub] [tvm-vta] dsteger commented on pull request #17: vta: Update VTA to use load_pad_2d in compute

2020-12-03 Thread GitBox


dsteger commented on pull request #17:
URL: https://github.com/apache/tvm-vta/pull/17#issuecomment-738582984


   @tqchen Do you mind taking a look at this?







[GitHub] [tvm-vta] dsteger opened a new pull request #19: config: small spelling fix

2020-12-03 Thread GitBox


dsteger opened a new pull request #19:
URL: https://github.com/apache/tvm-vta/pull/19


   Small spelling correction: "offet" → "offset".







[GitHub] [tvm] Meteorix closed pull request #7032: Legalize tensorcore

2020-12-03 Thread GitBox


Meteorix closed pull request #7032:
URL: https://github.com/apache/tvm/pull/7032


   







[GitHub] [tvm] jroesch commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535842427



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,128 +25,179 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
-
-
-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
-
-
-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
-
-
-def use_f(func):
-    f = relay.Var("f")
-    n = relay.Var("n", e.int32)
-    data = relay.Var("data", e.float32)
-    funcbody = relay.If(
-        equal(n, relay.const(0)), data, relay.Call(f, [subtract(n, relay.const(1)), log(data)])
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())
+
+
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())
+
+
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;
+        %c
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())

Review comment:
   I'll fix it in a follow-up.









[GitHub] [tvm] jroesch merged pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch merged pull request #7029:
URL: https://github.com/apache/tvm/pull/7029


   







[tvm] branch main updated (f4c6517 -> 91c905d)

2020-12-03 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from f4c6517  [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi (#7018)
 add 91c905d  [Relay][Pass] Clean up DCE tests in preparation for refactoring. (#7029)

No new revisions were added by this update.

Summary of changes:
 .../relay/test_pass_dead_code_elimination.py   | 267 -
 1 file changed, 159 insertions(+), 108 deletions(-)



[GitHub] [tvm] zhiics commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


zhiics commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738563250


   Thanks @kevinthesun @mbrookhart @icemelon9 







[GitHub] [tvm] zhiics merged pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


zhiics merged pull request #7018:
URL: https://github.com/apache/tvm/pull/7018


   







[tvm] branch main updated (e6c1baf -> f4c6517)

2020-12-03 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from e6c1baf  [AutoScheduler] Misc update to hardware parameter and task scheduler (#7020)
 add f4c6517  [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi (#7018)

No new revisions were added by this update.

Summary of changes:
 include/tvm/topi/detail/constant_utils.h         | 15 +
 include/tvm/topi/nn.h                            |  2 +-
 include/tvm/topi/transform.h                     | 43 +++-
 python/tvm/topi/cuda/sort.py                     | 20 ++-
 src/relay/op/tensor/transform.cc                 | 19 ---
 tests/python/relay/dyn/test_dynamic_op_level6.py |  4 +--
 tests/python/relay/test_any.py                   | 10 ++
 7 files changed, 80 insertions(+), 33 deletions(-)



[GitHub] [tvm] jwfromm commented on a change in pull request #7031: [Relay][Frontend][Onnx] Add support for Size op in Onnx frontend.

2020-12-03 Thread GitBox


jwfromm commented on a change in pull request #7031:
URL: https://github.com/apache/tvm/pull/7031#discussion_r535810837



##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -3888,6 +3888,36 @@ def test_if():
         tvm.testing.assert_allclose(correct_out[i], tvm_out[i], rtol=1e-05, atol=1e-05)


+@tvm.testing.uses_gpu
+def test_size():
+    def verify_size(indata):
+        node = helper.make_node(
+            "Size",
+            inputs=["X"],
+            outputs=["Y"],
+        )
+
+        graph = helper.make_graph(
+            [node],
+            "size_test",
+            inputs=[helper.make_tensor_value_info("X", TensorProto.INT64, list(indata.shape))],
+            outputs=[helper.make_tensor_value_info("Y", TensorProto.INT64, [])],
+        )
+
+        model = helper.make_model(graph, producer_name="size_test")
+
+        for target, _ in tvm.testing.enabled_targets():
+            verify_with_ort_with_inputs(
+                model, [indata], targets=[target], dtype="int64", use_vm=True, opset=11

Review comment:
   Whoops, fixed now. Thanks for pointing that out!









[GitHub] [tvm] Meteorix opened a new pull request #7032: Legalize tensorcore

2020-12-03 Thread GitBox


Meteorix opened a new pull request #7032:
URL: https://github.com/apache/tvm/pull/7032


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [tvm] merrymercy edited a comment on pull request #7009: [TFLite] Bugfix - ensure pad calculation to be in int32

2020-12-03 Thread GitBox


merrymercy edited a comment on pull request #7009:
URL: https://github.com/apache/tvm/pull/7009#issuecomment-738542246


   I also found the same problem and sent #7030 to fix it. Your modification of 
`AsText` is nice.







[GitHub] [tvm] merrymercy edited a comment on pull request #7009: [TFLite] Bugfix - ensure pad calculation to be in int32

2020-12-03 Thread GitBox


merrymercy edited a comment on pull request #7009:
URL: https://github.com/apache/tvm/pull/7009#issuecomment-738542246


   I also found the same problem and sent #7030 to solve it. Your modification 
of `AsText` is nice.







[GitHub] [tvm] merrymercy edited a comment on pull request #7009: [TFLite] Bugfix - ensure pad calculation to be in int32

2020-12-03 Thread GitBox


merrymercy edited a comment on pull request #7009:
URL: https://github.com/apache/tvm/pull/7009#issuecomment-738542246


   #7030 solves the same problem. The modification of `AsText` is nice.







[GitHub] [tvm] merrymercy commented on pull request #7009: [TFLite] Bugfix - ensure pad calculation to be in int32

2020-12-03 Thread GitBox


merrymercy commented on pull request #7009:
URL: https://github.com/apache/tvm/pull/7009#issuecomment-738542246


   #7030 solves the same problem. But the modification of `AsText` is nice.







[GitHub] [tvm] FrozenGene commented on pull request #7030: [Frontend] Prevent tflite frontend from producing int64 shape/parameters

2020-12-03 Thread GitBox


FrozenGene commented on pull request #7030:
URL: https://github.com/apache/tvm/pull/7030#issuecomment-738540812


   It seems this PR should achieve https://github.com/apache/tvm/pull/7009's goal.







[GitHub] [tvm] masahi commented on a change in pull request #7031: [Relay][Frontend][Onnx] Add support for Size op in Onnx frontend.

2020-12-03 Thread GitBox


masahi commented on a change in pull request #7031:
URL: https://github.com/apache/tvm/pull/7031#discussion_r535806394



##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -3888,6 +3888,36 @@ def test_if():
         tvm.testing.assert_allclose(correct_out[i], tvm_out[i], rtol=1e-05, atol=1e-05)


+@tvm.testing.uses_gpu
+def test_size():
+    def verify_size(indata):
+        node = helper.make_node(
+            "Size",
+            inputs=["X"],
+            outputs=["Y"],
+        )
+
+        graph = helper.make_graph(
+            [node],
+            "size_test",
+            inputs=[helper.make_tensor_value_info("X", TensorProto.INT64, list(indata.shape))],
+            outputs=[helper.make_tensor_value_info("Y", TensorProto.INT64, [])],
+        )
+
+        model = helper.make_model(graph, producer_name="size_test")
+
+        for target, _ in tvm.testing.enabled_targets():
+            verify_with_ort_with_inputs(
+                model, [indata], targets=[target], dtype="int64", use_vm=True, opset=11

Review comment:
   Note that you don't have to explicitly pass `targets` if you test 
against all targets. If `targets` is None, `verify_with_ort_with_inputs` will 
test on all targets.
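
   In other words, the loop in the hunk above could collapse to a single call 
(a sketch based on that description; `model` and `indata` are the names from 
the quoted test):

       # `targets=None` (the default) already means "test on all enabled targets"
       verify_with_ort_with_inputs(model, [indata], dtype="int64", use_vm=True, opset=11)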









[GitHub] [tvm] jwfromm commented on pull request #7031: [Relay][Frontend][Onnx] Add support for Size op in Onnx frontend.

2020-12-03 Thread GitBox


jwfromm commented on pull request #7031:
URL: https://github.com/apache/tvm/pull/7031#issuecomment-738531822


   @masahi @mbrookhart can you guys take a look at this PR?







[GitHub] [tvm] jwfromm opened a new pull request #7031: [Relay][Frontend][Onnx] Add support for Size op in Onnx frontend.

2020-12-03 Thread GitBox


jwfromm opened a new pull request #7031:
URL: https://github.com/apache/tvm/pull/7031


   This little PR adds support for the Size operator in the Onnx frontend. It 
addresses one concern brought up in [this 
thread](https://discuss.tvm.apache.org/t/tvm-support-for-conv1d-operator/8569).
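
   For context, ONNX `Size` returns the total element count of its input as a 
scalar int64; a minimal NumPy sketch of the expected semantics (not the Relay 
implementation from this PR):

       import numpy as np

       x = np.zeros((2, 3, 4), dtype="float32")
       size = np.int64(x.size)  # what ONNX Size should produce for x
       assert size == 24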







[GitHub] [tvm] merrymercy opened a new pull request #7030: [Frontend] Prevent tflite frontend from producing int64 shape

2020-12-03 Thread GitBox


merrymercy opened a new pull request #7030:
URL: https://github.com/apache/tvm/pull/7030


   I found that some shapes and parameters (padding) are converted to int64 by 
the tflite frontend.
   It results in a placeholder shape like `[1, (int64) 112, (int64) 112, 
64]`, where some dimensions are int32 and the others are int64.
   This breaks some optimizations in the auto-scheduler.
   
   This is because `ShapeAsNumpy` returns `np.int32`, but it is automatically 
upcast to `np.int64` when doing computation. We convert all `np.int32` values 
to Python's `int` to prevent this auto upcast.
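
   A minimal sketch of the upcast in question (plain NumPy; value-based 
promotion rules as in NumPy 1.x):

       import numpy as np

       dim = np.int32(112)

       # mixing an np.int32 scalar with a wider integer promotes to int64
       padded = dim + np.int64(1)
       assert padded.dtype == np.int64

       # converting to a plain Python int first sidesteps the promotion
       padded = int(dim) + 1
       assert type(padded) is int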
   







[GitHub] [tvm] yongwww commented on issue #4102: [Relay] Support for Tensorflow 2.0

2020-12-03 Thread GitBox


yongwww commented on issue #4102:
URL: https://github.com/apache/tvm/issues/4102#issuecomment-738507269


   We have looked into models like image classification, object detection, 
segmentation, etc. It seems we need to enable support for TensorList, 
control flow, functions, and some missing operations. We would like to work on 
this project from scratch. As we discussed above, we might create a new TF 
frontend for TF 2.x (and I believe the parser for TF 1.x will be deprecated 
eventually, as the TF community is ending support for 1.x).
   
   Please comment here or ping me/us if anyone is interested in working with us 
on this. I have synced with Siva @srkreddy1238 over the past few weeks; we will 
keep him and the community updated as we work together on this.







[GitHub] [tvm] altanh commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535747377



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,128 +25,179 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
-
-
-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
-
-
-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
-
-
-def use_f(func):
-    f = relay.Var("f")
-    n = relay.Var("n", e.int32)
-    data = relay.Var("data", e.float32)
-    funcbody = relay.If(
-        equal(n, relay.const(0)), data, relay.Call(f, [subtract(n, relay.const(1)), log(data)])
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())
+
+
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())
+
+
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;
+        %c
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())

Review comment:
   The old test had a different result because the pass was constructed as 
`DeadCodeElimination(True)`, whereas the default is `False` (for the parameter 
called `inline_once`).
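
   A minimal sketch of the difference (assuming the `inline_once` keyword 
described above; the Relay source is adapted from this PR's tests):

       import tvm
       from tvm.relay import transform

       source = """
       #[version = "0.0.5"]
       def @main(%b: int) {
           let %x = %b;
           %x
       }
       """
       mod = tvm.parser.parse(source)

       # default (inline_once=False): %x is live, so the binding is kept
       print(transform.DeadCodeElimination()(mod))

       # inline_once=True also inlines bindings used exactly once,
       # collapsing the body to just %b
       print(transform.DeadCodeElimination(inline_once=True)(mod))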









[GitHub] [tvm] jroesch commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535742798



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
     return relay.Let(f, value, func(f))


-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
+    """
+
+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
     """
-    Program:
-       let f(n: i32, data: f32) -> f32 = {
-          if (n == 0) {
-              return data;
-          } else {
-              return f(n - 1, log(data));
-          }
-       }
-       f(2, 1);
+
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )
+
+
+def test_dead_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        ()
+    }
     """
-    orig = use_f(lambda f: relay.Call(f, [relay.const(2), relay.const(1.0)]))
-    dced = run_opt_pass(orig, transform.DeadCodeElimination())
-    orig = run_opt_pass(orig, transform.InferType())
-    tvm.ir.assert_structural_equal(dced, orig)

+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        ()
+    }
+    """

-def test_recursion_dead():
-    x = relay.Let(e.a, e.one, e.three)
-    dced_f = lambda f: x
-    dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, e.three)
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )


-def test_op_let():
-    dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        (let %a = 1; 3) + 2
+    }
+    """

+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        3 + 2
+    }
+    """

-def test_tuple_get_item():
-    tt = relay.TupleType([e.float32, e.float32])
-    t = relay.Var("t", tt)
-    a = relay.Var("a")
-    g = relay.TupleGetItem(t, 0)
-    dced = run_opt_pass(g, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(dced), dced), Function(free_vars(g), g))
-    orig = relay.TupleGetItem(relay.Let(a, e.one, t), 0)
-    dced = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(dced), dced), Function(free_vars(g), g))
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )


-@pytest.mark.timeout(timeout=10, method="thread")
-def test_complexity():

Review comment:
   No, these are not good tests: they don't really assert anything except 
that the pass isn't absurdly slow, and running passes against large models 
like this slows CI down; it's part of why CI performance is really bad.









[GitHub] [tvm] altanh commented on pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#issuecomment-738467164


   Sorry for the duplicated comments, I didn't see the other reviews.







[GitHub] [tvm] altanh commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535742377



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+#     def __init__(self):
+#         self.shape = tvm.runtime.convert([1, 2, 3])
+#         self.tt = relay.TensorType(self.shape, "float32")
+#         self.int32 = relay.TensorType([], "int32")
+#         self.float32 = relay.TensorType([], "float32")
+#         self.one = relay.const(1.0)
+#         self.two = relay.const(2.0)
+#         self.three = relay.const(3.0)
+#         self.a = relay.Var("a", self.float32)
+#         self.b = relay.Var("b", self.float32)
+#         self.c = relay.Var("c", self.float32)
+#         self.d = relay.Var("d", self.float32)
+#         self.e = relay.Var("e", self.float32)
+#         self.x = relay.Var("x", self.int32)
+#         self.y = relay.Var("y", self.int32)
+#         self.z = relay.Var("z", self.int32)
+
+
+# e = env()
+
+
+# def run_opt_pass(expr, opt_pass):
+#     assert isinstance(opt_pass, tvm.transform.Pass)
+#     mod = tvm.IRModule.from_expr(expr)
+#     mod = opt_pass(mod)
+#     entry = mod["main"]
+#     return entry if isinstance(expr, relay.Function) else entry.body
+
+
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;

Review comment:
   Not sure if something changed.






[GitHub] [tvm] altanh commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535742349



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+#     def __init__(self):
+#         self.shape = tvm.runtime.convert([1, 2, 3])
+#         self.tt = relay.TensorType(self.shape, "float32")
+#         self.int32 = relay.TensorType([], "int32")
+#         self.float32 = relay.TensorType([], "float32")
+#         self.one = relay.const(1.0)
+#         self.two = relay.const(2.0)
+#         self.three = relay.const(3.0)
+#         self.a = relay.Var("a", self.float32)
+#         self.b = relay.Var("b", self.float32)
+#         self.c = relay.Var("c", self.float32)
+#         self.d = relay.Var("d", self.float32)
+#         self.e = relay.Var("e", self.float32)
+#         self.x = relay.Var("x", self.int32)
+#         self.y = relay.Var("y", self.int32)
+#         self.z = relay.Var("z", self.int32)
+
+
+# e = env()
+
+
+# def run_opt_pass(expr, opt_pass):
+#     assert isinstance(opt_pass, tvm.transform.Pass)
+#     mod = tvm.IRModule.from_expr(expr)
+#     mod = opt_pass(mod)
+#     entry = mod["main"]
+#     return entry if isinstance(expr, relay.Function) else entry.body
+
+
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;

Review comment:
   but yeah I think the old DCE pass behavior would indeed collapse it to 
just `%d`





[GitHub] [tvm] altanh commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535741743



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+#     def __init__(self):
+#         self.shape = tvm.runtime.convert([1, 2, 3])
+#         self.tt = relay.TensorType(self.shape, "float32")
+#         self.int32 = relay.TensorType([], "int32")
+#         self.float32 = relay.TensorType([], "float32")
+#         self.one = relay.const(1.0)
+#         self.two = relay.const(2.0)
+#         self.three = relay.const(3.0)
+#         self.a = relay.Var("a", self.float32)
+#         self.b = relay.Var("b", self.float32)
+#         self.c = relay.Var("c", self.float32)
+#         self.d = relay.Var("d", self.float32)
+#         self.e = relay.Var("e", self.float32)
+#         self.x = relay.Var("x", self.int32)
+#         self.y = relay.Var("y", self.int32)
+#         self.z = relay.Var("z", self.int32)
+
+
+# e = env()
+
+
+# def run_opt_pass(expr, opt_pass):
+#     assert isinstance(opt_pass, tvm.transform.Pass)
+#     mod = tvm.IRModule.from_expr(expr)
+#     mod = opt_pass(mod)
+#     entry = mod["main"]
+#     return entry if isinstance(expr, relay.Function) else entry.body
+
+
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;

Review comment:
   That might need PE (partial evaluation)?






[GitHub] [tvm] jroesch commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535741593



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest


-class env:
-    def __init__(self):
-        self.shape = tvm.runtime.convert([1, 2, 3])
-        self.tt = relay.TensorType(self.shape, "float32")
-        self.int32 = relay.TensorType([], "int32")
-        self.float32 = relay.TensorType([], "float32")
-        self.one = relay.const(1.0)
-        self.two = relay.const(2.0)
-        self.three = relay.const(3.0)
-        self.a = relay.Var("a", self.float32)
-        self.b = relay.Var("b", self.float32)
-        self.c = relay.Var("c", self.float32)
-        self.d = relay.Var("d", self.float32)
-        self.e = relay.Var("e", self.float32)
-        self.x = relay.Var("x", self.int32)
-        self.y = relay.Var("y", self.int32)
-        self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-    assert isinstance(opt_pass, tvm.transform.Pass)
-    mod = tvm.IRModule.from_expr(expr)
-    mod = opt_pass(mod)
-    entry = mod["main"]
-    return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-    orig = relay.Let(e.x, e.y, e.z)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.z], e.z))
-
-
-def test_used_let():
-    orig = relay.Let(e.c, e.one, e.c + e.c)
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    expected = relay.Let(e.c, e.one, e.c + e.c)
-    assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+#     def __init__(self):
+#         self.shape = tvm.runtime.convert([1, 2, 3])
+#         self.tt = relay.TensorType(self.shape, "float32")
+#         self.int32 = relay.TensorType([], "int32")
+#         self.float32 = relay.TensorType([], "float32")
+#         self.one = relay.const(1.0)
+#         self.two = relay.const(2.0)
+#         self.three = relay.const(3.0)
+#         self.a = relay.Var("a", self.float32)
+#         self.b = relay.Var("b", self.float32)
+#         self.c = relay.Var("c", self.float32)
+#         self.d = relay.Var("d", self.float32)
+#         self.e = relay.Var("e", self.float32)
+#         self.x = relay.Var("x", self.int32)
+#         self.y = relay.Var("y", self.int32)
+#         self.z = relay.Var("z", self.int32)
+
+
+# e = env()
+
+
+# def run_opt_pass(expr, opt_pass):
+#     assert isinstance(opt_pass, tvm.transform.Pass)
+#     mod = tvm.IRModule.from_expr(expr)
+#     mod = opt_pass(mod)
+#     entry = mod["main"]
+#     return entry if isinstance(expr, relay.Function) else entry.body
+
+
+def optimize_source(source, passes):
+    if not isinstance(passes, list):
+        passes = [passes]
+
+    optimize = tvm.transform.Sequential(passes)
+    module = tvm.parser.parse(source)
+    return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+    optimize_module = optimize_source(before_source, passes)
+    after_module = tvm.parser.parse(after_source)
+    print(optimize_module)
+    print(after_module)
+    assert tvm.ir.structural_equal(after_module, optimize_module)
+
+
+def test_dead_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %z
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        %z
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_inline():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.c))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination(True))
-    tvm.ir.assert_structural_equal(Function(free_vars(orig), orig), Function([e.d], e.d))
+def test_one_live_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        let %y = 2;
+        %x + %x
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%z: int) {
+        let %x = 1;
+        %x + %x
+    }
+    """
+    optimize_and_check(before_program, after_program, transform.DeadCodeElimination())


-def test_chain_unused_let():
-    orig = relay.Let(e.a, e.b, relay.Let(e.c, e.d, e.e))
-    orig = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(orig), orig), Function([e.e], e.e))
+def test_nested_let():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %a = %b;
+        let %c = %d;
+        %c
+    }
+    """
+    after_program = """
+    #[version = "0.0.5"]
+    def @main(%d: int, %b: int) {
+        let %c = %d;

Review comment:
   It looks like the current liveness analysis keeps the binding around, 
unless the constructed IR is slightly different.





[GitHub] [tvm] mbrookhart commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


mbrookhart commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535741224



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
     return relay.Let(f, value, func(f))


-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
+    """
+
+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
     """
-    Program:
-       let f(n: i32, data: f32) -> f32 = {
-          if (n == 0) {
-              return data;
-          } else {
-              return f(n - 1, log(data));
-          }
-       }
-       f(2, 1);
+
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )
+
+
+def test_dead_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        ()
+    }
     """
-    orig = use_f(lambda f: relay.Call(f, [relay.const(2), relay.const(1.0)]))
-    dced = run_opt_pass(orig, transform.DeadCodeElimination())
-    orig = run_opt_pass(orig, transform.InferType())
-    tvm.ir.assert_structural_equal(dced, orig)

+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        ()
+    }
+    """

-def test_recursion_dead():
-    x = relay.Let(e.a, e.one, e.three)
-    dced_f = lambda f: x
-    dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, e.three)
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )


-def test_op_let():
-    dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        (let %a = 1; 3) + 2
+    }
+    """

+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        3 + 2
+    }
+    """

-def test_tuple_get_item():
-    tt = relay.TupleType([e.float32, e.float32])
-    t = relay.Var("t", tt)
-    a = relay.Var("a")
-    g = relay.TupleGetItem(t, 0)
-    dced = run_opt_pass(g, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(dced), dced), Function(free_vars(g), g))
-    orig = relay.TupleGetItem(relay.Let(a, e.one, t), 0)
-    dced = run_opt_pass(orig, transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(Function(free_vars(dced), dced), Function(free_vars(g), g))
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )


-@pytest.mark.timeout(timeout=10, method="thread")

Review comment:
   Agreed on this one.









[GitHub] [tvm] altanh commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


altanh commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535739423



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
     return relay.Let(f, value, func(f))


-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
+    """
+
+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        %f(2, 1)
+    }
     """
-    Program:
-       let f(n: i32, data: f32) -> f32 = {
-          if (n == 0) {
-              return data;
-          } else {
-              return f(n - 1, log(data));
-          }
-       }
-       f(2, 1);
+
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )
+
+
+def test_dead_recursion():
+    before_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        let %f = fn (%n: int, %data: int) -> int {
+            if (%n == 0) {
+                %data
+            } else {
+                %f(%n - 1, log(%data))
+            }
+        };
+        ()
+    }
     """
-    orig = use_f(lambda f: relay.Call(f, [relay.const(2), relay.const(1.0)]))
-    dced = run_opt_pass(orig, transform.DeadCodeElimination())
-    orig = run_opt_pass(orig, transform.InferType())
-    tvm.ir.assert_structural_equal(dced, orig)

+    after_program = """
+    #[version = "0.0.5"]
+    def @main() {
+        ()
+    }
+    """

-def test_recursion_dead():
-    x = relay.Let(e.a, e.one, e.three)
-    dced_f = lambda f: x
-    dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, e.three)
+    optimize_and_check(
+        before_program, after_program, [transform.DeadCodeElimination(), transform.InferType()]
+    )


-def test_op_let():
-    dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), transform.DeadCodeElimination())
-    assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():

Review comment:
   duplicated, maybe you meant `test_op_let`
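
   (For context: Python silently rebinds a duplicated `def` name, so pytest 
only collects the last definition and the earlier ones never run. A minimal 
illustration, with hypothetical test bodies:)
   
   ```python
   def test_dead_recursion():
       assert 1 + 1 == 2  # never collected: the name is rebound below


   def test_dead_recursion():  # rebinds the name; pytest sees only this one
       assert 2 + 2 == 4
   ```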

##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
 return relay.Let(f, value, func(f))
 
 
-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
+"""
+
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
 """
-Program:
-   let f(n: i32, data: f32) -> f32 = {
-  if (n == 0) {
-  return data;
-  } else {
-  return f(n - 1, log(data));
-  }
-   }
-   f(2, 1);
+
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
+
+
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+()
+}
 """
-orig = use_f(lambda f: relay.Call(f, [relay.const(2), 
relay.const(1.0)]))
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-orig = run_opt_pass(orig, transform.InferType())
-tvm.ir.assert_structural_equal(dced, orig)
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+()
+}
+"""
 
-def test_recursion_dead():
-x = relay.Let(e.a, e.one, e.three)
-dced_f = lambda f: x
-dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, e.three)
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-def test_op_let():
-dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), 

[GitHub] [tvm] jroesch commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535741004



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
 return relay.Let(f, value, func(f))
 
 
-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
+"""
+
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
 """
-Program:
-   let f(n: i32, data: f32) -> f32 = {
-  if (n == 0) {
-  return data;
-  } else {
-  return f(n - 1, log(data));
-  }
-   }
-   f(2, 1);
+
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
+
+
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+()
+}
 """
-orig = use_f(lambda f: relay.Call(f, [relay.const(2), 
relay.const(1.0)]))
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-orig = run_opt_pass(orig, transform.InferType())
-tvm.ir.assert_structural_equal(dced, orig)
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+()
+}
+"""
 
-def test_recursion_dead():
-x = relay.Let(e.a, e.one, e.three)
-dced_f = lambda f: x
-dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, e.three)
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-def test_op_let():
-dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), 
transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+(let %a = 1; 3) + 2
+}
+"""
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+3 + 2
+}
+"""
 
-def test_tuple_get_item():
-tt = relay.TupleType([e.float32, e.float32])
-t = relay.Var("t", tt)
-a = relay.Var("a")
-g = relay.TupleGetItem(t, 0)
-dced = run_opt_pass(g, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
-orig = relay.TupleGetItem(relay.Let(a, e.one, t), 0)
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-@pytest.mark.timeout(timeout=10, method="thread")

Review comment:
   To elaborate, people need to write unit tests for this, not just keep 
running large integration tests. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535740180



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
 return relay.Let(f, value, func(f))
 
 
-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
+"""
+
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
 """
-Program:
-   let f(n: i32, data: f32) -> f32 = {
-  if (n == 0) {
-  return data;
-  } else {
-  return f(n - 1, log(data));
-  }
-   }
-   f(2, 1);
+
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
+
+
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+()
+}
 """
-orig = use_f(lambda f: relay.Call(f, [relay.const(2), 
relay.const(1.0)]))
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-orig = run_opt_pass(orig, transform.InferType())
-tvm.ir.assert_structural_equal(dced, orig)
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+()
+}
+"""
 
-def test_recursion_dead():
-x = relay.Let(e.a, e.one, e.three)
-dced_f = lambda f: x
-dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, e.three)
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-def test_op_let():
-dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), 
transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+(let %a = 1; 3) + 2
+}
+"""
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+3 + 2
+}
+"""
 
-def test_tuple_get_item():
-tt = relay.TupleType([e.float32, e.float32])
-t = relay.Var("t", tt)
-a = relay.Var("a")
-g = relay.TupleGetItem(t, 0)
-dced = run_opt_pass(g, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
-orig = relay.TupleGetItem(relay.Let(a, e.one, t), 0)
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-@pytest.mark.timeout(timeout=10, method="thread")

Review comment:
   No, integration tests are bad; this is why CI takes like 10 hours.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


mbrookhart commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535738565



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest
 
 
-class env:
-def __init__(self):
-self.shape = tvm.runtime.convert([1, 2, 3])
-self.tt = relay.TensorType(self.shape, "float32")
-self.int32 = relay.TensorType([], "int32")
-self.float32 = relay.TensorType([], "float32")
-self.one = relay.const(1.0)
-self.two = relay.const(2.0)
-self.three = relay.const(3.0)
-self.a = relay.Var("a", self.float32)
-self.b = relay.Var("b", self.float32)
-self.c = relay.Var("c", self.float32)
-self.d = relay.Var("d", self.float32)
-self.e = relay.Var("e", self.float32)
-self.x = relay.Var("x", self.int32)
-self.y = relay.Var("y", self.int32)
-self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-assert isinstance(opt_pass, tvm.transform.Pass)
-mod = tvm.IRModule.from_expr(expr)
-mod = opt_pass(mod)
-entry = mod["main"]
-return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-orig = relay.Let(e.x, e.y, e.z)
-orig = run_opt_pass(orig, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(orig), orig), 
Function([e.z], e.z))
-
-
-def test_used_let():
-orig = relay.Let(e.c, e.one, e.c + e.c)
-orig = run_opt_pass(orig, transform.DeadCodeElimination())
-expected = relay.Let(e.c, e.one, e.c + e.c)
-assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+# def __init__(self):
+# self.shape = tvm.runtime.convert([1, 2, 3])
+# self.tt = relay.TensorType(self.shape, "float32")
+# self.int32 = relay.TensorType([], "int32")
+# self.float32 = relay.TensorType([], "float32")
+# self.one = relay.const(1.0)
+# self.two = relay.const(2.0)
+# self.three = relay.const(3.0)
+# self.a = relay.Var("a", self.float32)
+# self.b = relay.Var("b", self.float32)
+# self.c = relay.Var("c", self.float32)
+# self.d = relay.Var("d", self.float32)
+# self.e = relay.Var("e", self.float32)
+# self.x = relay.Var("x", self.int32)
+# self.y = relay.Var("y", self.int32)
+# self.z = relay.Var("z", self.int32)
+
+
+# e = env()
+
+
+# def run_opt_pass(expr, opt_pass):
+# assert isinstance(opt_pass, tvm.transform.Pass)
+# mod = tvm.IRModule.from_expr(expr)
+# mod = opt_pass(mod)
+# entry = mod["main"]
+# return entry if isinstance(expr, relay.Function) else entry.body
+
+
+def optimize_source(source, passes):
+if not isinstance(passes, list):
+passes = [passes]
+
+optimize = tvm.transform.Sequential(passes)
+module = tvm.parser.parse(source)
+return optimize(module)
+
+
+def optimize_and_check(before_source, after_source, passes):
+optimize_module = optimize_source(before_source, passes)
+after_module = tvm.parser.parse(after_source)
+print(optimize_module)
+print(after_module)

Review comment:
   Remove Prints
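
   A possible cleaned-up version of the helper, assuming the intent is to 
assert structural equality between the optimized and expected modules rather 
than print them (`optimize_source` as defined in the diff above):
   
   ```python
   def optimize_and_check(before_source, after_source, passes):
       # Parse and optimize the "before" program, parse the expected "after"
       # program, and require that the two modules match structurally.
       optimized_module = optimize_source(before_source, passes)
       expected_module = tvm.parser.parse(after_source)
       tvm.ir.assert_structural_equal(optimized_module, expected_module)
   ```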

##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest
 
 
-class env:
-def __init__(self):
-self.shape = tvm.runtime.convert([1, 2, 3])
-self.tt = relay.TensorType(self.shape, "float32")
-self.int32 = relay.TensorType([], "int32")
-self.float32 = relay.TensorType([], "float32")
-self.one = relay.const(1.0)
-self.two = relay.const(2.0)
-self.three = relay.const(3.0)
-self.a = relay.Var("a", self.float32)
-self.b = relay.Var("b", self.float32)
-self.c = relay.Var("c", self.float32)
-self.d = relay.Var("d", self.float32)
-self.e = relay.Var("e", self.float32)
-self.x = relay.Var("x", self.int32)
-self.y = relay.Var("y", self.int32)
-self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-assert isinstance(opt_pass, tvm.transform.Pass)
-mod = tvm.IRModule.from_expr(expr)
-mod = opt_pass(mod)
-entry = mod["main"]
-return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-orig = relay.Let(e.x, e.y, e.z)
-orig = run_opt_pass(orig, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(orig), orig), 
Function([e.z], e.z))
-
-
-def test_used_let():
-orig = relay.Let(e.c, e.one, e.c + e.c)
-orig = run_opt_pass(orig, transform.DeadCodeElimination())
-expected = relay.Let(e.c, e.one, e.c + e.c)
-assert tvm.ir.structural_equal(Function([], orig), Function([], expected))
+# class env:
+# def __init__(self):
+# self.shape = tvm.runtime.convert([1, 2, 

[GitHub] [tvm] MarisaKirisame commented on a change in pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


MarisaKirisame commented on a change in pull request #7029:
URL: https://github.com/apache/tvm/pull/7029#discussion_r535737856



##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -91,62 +138,107 @@ def use_f(func):
 return relay.Let(f, value, func(f))
 
 
-# make sure we dont infinite loop
-def test_recursion():
+def test_live_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
+"""
+
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+%f(2, 1)
+}
 """
-Program:
-   let f(n: i32, data: f32) -> f32 = {
-  if (n == 0) {
-  return data;
-  } else {
-  return f(n - 1, log(data));
-  }
-   }
-   f(2, 1);
+
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
+
+
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+let %f = fn (%n: int, %data: int) -> int {
+if (%n == 0) {
+%data
+} else {
+%f(%n - 1, log(%data))
+}
+};
+()
+}
 """
-orig = use_f(lambda f: relay.Call(f, [relay.const(2), 
relay.const(1.0)]))
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-orig = run_opt_pass(orig, transform.InferType())
-tvm.ir.assert_structural_equal(dced, orig)
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+()
+}
+"""
 
-def test_recursion_dead():
-x = relay.Let(e.a, e.one, e.three)
-dced_f = lambda f: x
-dced = run_opt_pass(use_f(dced_f), transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, e.three)
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-def test_op_let():
-dced = run_opt_pass(add(relay.Let(e.a, e.one, e.three), e.two), 
transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(dced, add(e.three, e.two))
+def test_dead_recursion():
+before_program = """
+#[version = "0.0.5"]
+def @main() {
+(let %a = 1; 3) + 2
+}
+"""
 
+after_program = """
+#[version = "0.0.5"]
+def @main() {
+3 + 2
+}
+"""
 
-def test_tuple_get_item():
-tt = relay.TupleType([e.float32, e.float32])
-t = relay.Var("t", tt)
-a = relay.Var("a")
-g = relay.TupleGetItem(t, 0)
-dced = run_opt_pass(g, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
-orig = relay.TupleGetItem(relay.Let(a, e.one, t), 0)
-dced = run_opt_pass(orig, transform.DeadCodeElimination())
-assert tvm.ir.structural_equal(Function(free_vars(dced), dced), 
Function(free_vars(g), g))
+optimize_and_check(
+before_program, after_program, [transform.DeadCodeElimination(), 
transform.InferType()]
+)
 
 
-@pytest.mark.timeout(timeout=10, method="thread")
-def test_complexity():
-g = inception_v3.get_net(1, 1000, (3, 299, 299), "float32")
-run_opt_pass(g, transform.DeadCodeElimination())
+def test_dead_recursion():

Review comment:
   wrong name

##
File path: tests/python/relay/test_pass_dead_code_elimination.py
##
@@ -25,59 +25,106 @@
 import pytest
 
 
-class env:
-def __init__(self):
-self.shape = tvm.runtime.convert([1, 2, 3])
-self.tt = relay.TensorType(self.shape, "float32")
-self.int32 = relay.TensorType([], "int32")
-self.float32 = relay.TensorType([], "float32")
-self.one = relay.const(1.0)
-self.two = relay.const(2.0)
-self.three = relay.const(3.0)
-self.a = relay.Var("a", self.float32)
-self.b = relay.Var("b", self.float32)
-self.c = relay.Var("c", self.float32)
-self.d = relay.Var("d", self.float32)
-self.e = relay.Var("e", self.float32)
-self.x = relay.Var("x", self.int32)
-self.y = relay.Var("y", self.int32)
-self.z = relay.Var("z", self.int32)
-
-
-e = env()
-
-
-def run_opt_pass(expr, opt_pass):
-assert isinstance(opt_pass, tvm.transform.Pass)
-mod = tvm.IRModule.from_expr(expr)
-mod = opt_pass(mod)
-entry = mod["main"]
-return entry if isinstance(expr, relay.Function) else entry.body
-
-
-def test_let():
-orig = relay.Let(e.x, e.y, e.z)
-orig = run_opt_pass(orig, transform.DeadCodeElimination())
-

[GitHub] [tvm] jroesch opened a new pull request #7029: [Relay][Pass] Clean up DCE tests in preparation for refactoring.

2020-12-03 Thread GitBox


jroesch opened a new pull request #7029:
URL: https://github.com/apache/tvm/pull/7029


   This PR just cleans up DCE tests. cc @MarisaKirisame @mbrookhart 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on pull request #7028: [AutoScheduler] Refactor task interface for tuning single operators

2020-12-03 Thread GitBox


merrymercy commented on pull request #7028:
URL: https://github.com/apache/tvm/pull/7028#issuecomment-738441762


   @FrozenGene @jcf94 @comaniac  Please take another look.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (b06b64d -> e6c1baf)

2020-12-03 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from b06b64d  [CI] Hotfix CI (see #7010) (#7025)
 add e6c1baf  [AutoScheduler] Misc update to hardware parameter and task 
scheduler (#7020)

No new revisions were added by this update.

Summary of changes:
 docs/conf.py   |  1 +
 include/tvm/auto_scheduler/search_task.h   | 26 ++-
 python/tvm/auto_scheduler/__init__.py  |  7 ++-
 python/tvm/auto_scheduler/auto_schedule.py | 32 +-
 python/tvm/auto_scheduler/relay_integration.py | 12 +
 python/tvm/auto_scheduler/search_policy.py |  2 +-
 python/tvm/auto_scheduler/task_scheduler.py|  8 +++-
 python/tvm/relay/op/strategy/cuda.py   |  9 ++--
 python/tvm/relay/op/strategy/x86.py| 17 ++--
 src/auto_scheduler/search_task.cc  | 51 --
 .../unittest/test_auto_scheduler_compute_dag.py|  2 +-
 .../python/unittest/test_auto_scheduler_feature.py |  4 +-
 12 files changed, 121 insertions(+), 50 deletions(-)



[GitHub] [tvm] merrymercy merged pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

2020-12-03 Thread GitBox


merrymercy merged pull request #7020:
URL: https://github.com/apache/tvm/pull/7020


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on issue #7010: [TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py

2020-12-03 Thread GitBox


altanh commented on issue #7010:
URL: https://github.com/apache/tvm/issues/7010#issuecomment-738383860


   We should keep this issue but rename it to something like "dependency libomp 
conflict", I think (or open a new one), since it might arise again in the future.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


mbrookhart commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738375832


   :/ OddEvenTransportSort should be stable, but something looks very wrong 
about the threading in this kernel. I'll see if I can edit it to solve these 
problems at some point in the near-ish future. If somehow this sort isn't 
stable, that would easily explain flakiness in argwhere/argsort.
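   
   For reference, a minimal sequential Python sketch of odd-even transposition 
sort (the GPU kernel runs each phase in parallel); with the strict `>` 
comparison below, equal elements are never swapped, which is what makes the 
algorithm stable:
   
   ```python
   def odd_even_transposition_sort(values):
       # n alternating odd/even phases of adjacent compare-and-swap
       # suffice to sort a list of length n.
       values = list(values)
       n = len(values)
       for phase in range(n):
           for i in range(phase % 2, n - 1, 2):
               if values[i] > values[i + 1]:  # strict: stable for equal keys
                   values[i], values[i + 1] = values[i + 1], values[i]
       return values
   ```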



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Thanks for working on this. I have three questions:
   
   1. What is the distinction between op converter methods with the 
`@staticmethod` annotation and the ones without it (the ones which take 
`self` as an argument)?
   2. Can we remove the `functools.partial` stuff? So rather than having `def 
_unary(name, inputs, input_types):` etc., can we have something like `def 
_unary(name): return lambda inputs, input_types: ...`? (See the sketch below.)
   3. Do you intend to move functions such as `convert_operators`, 
`convert_block` etc. into the class later? These are the functions that 
currently require passing around "global" variables such as `outputs, 
convert_map, prelude, default_dtype`.
   
   cc @siju-samuel We will be doing big refactoring of pytorch frontend.
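   
   A minimal sketch of the closure style proposed in question 2, assuming a 
helper like `get_relay_op` that maps an op name to the Relay operator (`impl` 
is a hypothetical name):
   
   ```python
   def _unary(name):
       # Capture `name` in a closure so that every registered converter
       # exposes the uniform (inputs, input_types) signature, with no
       # functools.partial at the registration site.
       def impl(inputs, input_types):
           return get_relay_op(name)(inputs[0])
       return impl


   # Registration would then look like, e.g.:
   #     "aten::sqrt": _unary("sqrt"),
   ```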



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738362461


   Also, now that we are encapsulating each converter inside a class, I think 
it is ok to remove the underscore `_` prefix from each converter method, if you 
prefer (replace `def _` -> `def `, `self._` -> `self.`). I'm fine either way.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (c1f7820 -> b06b64d)

2020-12-03 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from c1f7820  [RPC] Prefer IPv4 between IPv4 and IPv6 (#7013)
 add b06b64d  [CI] Hotfix CI (see #7010) (#7025)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/testing/__init__.py  | 12 ++--
 tests/python/relay/test_op_grad_level2.py | 51 +--
 2 files changed, 11 insertions(+), 52 deletions(-)



[GitHub] [tvm] tqchen merged pull request #7025: [CI] Hotfix CI (see #7010)

2020-12-03 Thread GitBox


tqchen merged pull request #7025:
URL: https://github.com/apache/tvm/pull/7025


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


zhiics commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738355992


   @mbrookhart yeah, argwhere is flaky on large inputs if sort is used



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


mbrookhart commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738347327


   Yeah, the perf of the kernel isn't great, and I see some thread definition 
issues that will cause problems with dynamic shapes. Do we have a flaky test we 
can include? I don't think it's important for this PR, but it might be 
interesting to tackle later.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7028: [AutoScheduler] Refactor task interface for tuning single operators

2020-12-03 Thread GitBox


comaniac commented on a change in pull request #7028:
URL: https://github.com/apache/tvm/pull/7028#discussion_r535668480



##
File path: python/tvm/auto_scheduler/search_task.py
##
@@ -42,18 +153,124 @@ class SearchTask(Object):
 The target host device of this search task.
 hardware_params : Optional[HardwareParams]
 Hardware parameters used in this search task.
+
+Examples
+--------
+.. code-block:: python
+
+  # We support two ways to create a search task
+
+  # Way 1: create a task by a workload generation function.
+  # The `workload_func` is a function decorated by 
@auto_scheduler.register_workload
+  task = SearchTask(func=workload_func, args=args, target=target)
+
+  # Way 2: create a task by a workload_key.
+  # The `workload_key` is a string, which can be either a hash key or a 
json-serialized
+  # tuple(func, args).
+  task = SearchTask(workload_key=workload_key, target=target)
 """
 
-def __init__(self, dag, workload_key, target, target_host=None, 
hardware_params=None):
-self.dag = dag
+def __init__(
+self,
+func=None,
+args=None,
+compute_dag=None,
+workload_key=None,
+target=None,
+target_host=None,
+hardware_params=None,
+):
+assert (
+func is not None or workload_key is not None
+), "Either a workload generation function or a workload key should be 
provided"
+
+if func is not None:
+workload_key = make_workload_key(func, args)
+if compute_dag is None:
+compute_dag = ComputeDAG(workload_key)
+
+assert target is not None, "Must specify a target."
+if isinstance(target, str):
+target = Target(target)
+if isinstance(target_host, str):
+target_host = Target(target_host)
+
+self.dag = compute_dag
 self.workload_key = workload_key
 self.target = target
 self.target_host = target_host
 self.hardware_params = hardware_params
 self.__init_handle_by_constructor__(
-_ffi_api.SearchTask, dag, workload_key, target, target_host, 
hardware_params
+_ffi_api.SearchTask, compute_dag, workload_key, target, 
target_host, hardware_params
 )
 
+def tune(self, tuning_options, search_policy=None):
+"""Run auto scheduling search for a task
+
+Parameters
+----------
+tuning_options : Optional[TuningOptions]
+Tuning and measurement options.
+search_policy : Optional[SearchPolicy]
+The search policy to be used for schedule search.
+"""
+if search_policy is None:
+cost_model = XGBModel()
+search_policy = SketchPolicy(self, cost_model)
+
+_ffi_api.AutoSchedule(search_policy, tuning_options)
+
+def apply_best(self, log_file, layout_rewrite_option=None):
+"""Apply the history best from a log file and return the schedule.
+
+Parameters
+----------
+log_file : str
+   The name of the log file
+layout_rewrite_option : Optional[LayoutRewriteOption]
+   The layout rewrite option
+
+Returns
+-------
+A `te.Schedule` and a list of `te.Tensor` to be used in 
`tvm.lower` or `tvm.build`.
+"""
+inp, res = load_best_record(log_file, self.workload_key)
+
+if layout_rewrite_option is None:
+layout_rewrite_option = LayoutRewriteOption.NO_REWRITE
+if self.target.kind.name == "llvm":
+layout_rewrite_option = 
LayoutRewriteOption.INSERT_TRANSFORM_STAGE
+sch, args = self.compute_dag.apply_steps_from_state(inp.state, 
layout_rewrite_option)
+return sch, args
+
+def print_best(self, log_file, print_mode="schedule"):
+"""Print the best schedule as python schedule API code or CUDA source 
code.
+
+Parameters
+----------
+log_file : str
+   The name of the log file
+print_mode: str
+   if "schedule", print the best schedule as python schedule API code.
+   if "cude", print the best schedule as CUDA source code.

Review comment:
   - s/cude/cuda
   - This looks inconsistent with "schedule". Maybe just name it "code" and 
throw a RuntimeError if the target is not CUDA? A possible shape is sketched 
below.
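   
   A sketch of what that could look like (bodies elided; the attribute names 
follow the diff above):
   
   ```python
   def print_best(self, log_file, print_mode="schedule"):
       if print_mode == "schedule":
           ...  # print the best schedule as python schedule API code
       elif print_mode == "code":
           # Low-level source is only meaningful for a CUDA target here.
           if self.target.kind.name != "cuda":
               raise RuntimeError("print_mode='code' requires a CUDA target")
           ...  # print the best schedule as CUDA source code
       else:
           raise ValueError("Invalid print_mode: " + print_mode)
   ```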
   

##
File path: tutorials/auto_scheduler/tune_matmul_x86.py
##
@@ -147,22 +156,10 @@ def matmul_add(N, L, M, dtype):
 # file "matmul.json". The measurement records can be used to re-apply search 
results,
 # resume the search, and perform other analyses.
 
-##
-# Here is an example where we load the best schedule from a file,
-# print the equivalent python schedule API, and build the binary again.
-
-# Load the measuremnt record for the best schedule
-inp, res = 

[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738346443


   Right now CI is having an issue; please retrigger after 
https://github.com/apache/tvm/pull/7025 is merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Thanks for working on this. I have two questions:
   
   1. What is the distinction between op converter methods with the 
`@staticmethod` annotation and the ones without it (the ones which take 
`self` as an argument)?
   2. Can we remove the `functools.partial` stuff? So rather than having `def 
_unary(name, inputs, input_types):` etc., can we have something like `def 
_unary(name): return lambda inputs, input_types: ...`? 
   
   cc @siju-samuel We will be doing big refactoring of pytorch frontend.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi edited a comment on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi edited a comment on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Ok I have two questions:
   
   1. What is the distinction between op converter methods with the 
`@staticmethod` annotation and the ones without it (the ones which take 
`self` as an argument)?
   2. Can we remove the `functools.partial` stuff? So rather than having `def 
_unary(name, inputs, input_types):` etc., can we have something like `def 
_unary(name): return lambda inputs, input_types: ...`? 
   
   cc @siju-samuel We will be doing big refactoring of pytorch frontend.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #7023: Save PyTorch frontend state in object

2020-12-03 Thread GitBox


masahi commented on pull request #7023:
URL: https://github.com/apache/tvm/pull/7023#issuecomment-738344230


   Ok I have two questions:
   
   1. What is the distinction between op converter methods with the 
`@staticmethod` annotation and the ones without it (the ones which take 
`self` as an argument)?
   2. Can we remove the `functools.partial` stuff? So rather than having `def 
_unary(name, inputs, input_types):` etc., can we have `def _unary(name): 
return lambda inputs, input_types: ...`? Would that be possible?
   
   cc @siju-samuel We will be doing big refactoring of pytorch frontend.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun closed pull request #7024: [CI] Upgrade CI cmake for GPU

2020-12-03 Thread GitBox


kevinthesun closed pull request #7024:
URL: https://github.com/apache/tvm/pull/7024


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


kevinthesun commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738342059


   AFAIK cuda sort has several issues:
   1. Performance is bad for large workloads.
   2. Can't handle dynamic data shapes well.
   3. Can generate flaky results.
   
   There is no clear path to a solution to these problems. For now the best 
way is to let users turn on Thrust when they want to compile sort-related ops 
on NVIDIA GPUs.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun commented on a change in pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


kevinthesun commented on a change in pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#discussion_r535663839



##
File path: python/tvm/topi/cuda/sort.py
##
@@ -561,10 +561,11 @@ def topk_thrust(data, k=1, axis=-1, ret_type="both", 
is_ascend=False, dtype="int
 tag="topk_gpu",
 )
 
-if k > 0:
+if not isinstance(k, int) or k > 0:
 beg = [0] * ndim
-end = data.shape[:-1] + [k]
-out = [strided_slice(o, beg, end) for o in out]
+end = data.shape[:-1] + [k if isinstance(k, int) else 
tvm.te.size_var("dim")]
+strides = [1] * ndim
+out = [strided_slice(o, beg, end, strides) for o in out]

Review comment:
   I modified the CUDA topk so that the topk test in dyn can pass. However, 
the topk case in test_any, in which the data has a dynamic shape, can't pass 
without Thrust. I disabled that test for now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy opened a new pull request #7028: [AutoScheduler] Refactor task interface

2020-12-03 Thread GitBox


merrymercy opened a new pull request #7028:
URL: https://github.com/apache/tvm/pull/7028


   - Refactor the task and tuning interface for tuning a single operator
   - Use `InsertTransformStage` as the default layout rewrite option for CPU 
target.
   - **NOTE: This PR breaks APIs**
   
   ### Before
   ```
   task = auto_scheduler.create_task(matmul, (1024, 1024), "llvm")
   sch, args =  auto_scheduler.auto_schedule(task)
   ```
   
   ### After
   ```
   task = auto_scheduler.SearchTask(func=matmul, args=(1024, 1024), 
target="llvm")
   task.tune(tune_option)
   sch, args = task.apply_best(log_file)
   ```
   
   ### Rationale: 
   1. Cleanly separated APIs for tuning and for applying the best schedule.
   2. Choose a style similar to `tvm.target.Target` and deprecate the 
`create_XXX` style.
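   
   For context, a complete tuning round under the new API might look like the 
sketch below (assuming `matmul` is a function registered with 
`@auto_scheduler.register_workload`, and a log file name of our choosing):
   
   ```python
   import tvm
   from tvm import auto_scheduler
   
   log_file = "matmul.json"
   task = auto_scheduler.SearchTask(func=matmul, args=(1024, 1024), target="llvm")
   
   # Measure candidate schedules and record them to the log file.
   tune_option = auto_scheduler.TuningOptions(
       num_measure_trials=64,
       measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
   )
   task.tune(tune_option)
   
   # Replay the best record and build the final function.
   sch, args = task.apply_best(log_file)
   func = tvm.build(sch, args, target="llvm")
   ```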
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


mbrookhart commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738328610


   I'm not really sure what's wrong with the tir sort; do we have a regression 
test/issue we could track?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


zhiics commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738327979


   I think without thrust, we would then have to fix sort. We can probably 
disable the test for now, come back to work on sorting, and then re-enable the 
test. This would at least unblock downstream users so they can run models 
through thrust. @mbrookhart @icemelon9 @kevinthesun what do you think?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen merged pull request #7013: Change default hostname in rpc_tracker

2020-12-03 Thread GitBox


tqchen merged pull request #7013:
URL: https://github.com/apache/tvm/pull/7013


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (8daa97e -> c1f7820)

2020-12-03 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 8daa97e  [Diagnostics] Add environment variable for controlling 
top-level printing and fix issue with pretty printing/parsing roundtrip. (#6874)
 add c1f7820  [RPC] Prefer IPv4 between IPv4 and IPv6 (#7013)

No new revisions were added by this update.

Summary of changes:
 python/tvm/rpc/base.py | 3 +++
 1 file changed, 3 insertions(+)



[GitHub] [tvm] mbrookhart commented on a change in pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


mbrookhart commented on a change in pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#discussion_r535596794



##
File path: python/tvm/topi/cuda/sort.py
##
@@ -561,10 +561,11 @@ def topk_thrust(data, k=1, axis=-1, ret_type="both", 
is_ascend=False, dtype="int
 tag="topk_gpu",
 )
 
-if k > 0:
+if not isinstance(k, int) or k > 0:
 beg = [0] * ndim
-end = data.shape[:-1] + [k]
-out = [strided_slice(o, beg, end) for o in out]
+end = data.shape[:-1] + [k if isinstance(k, int) else 
tvm.te.size_var("dim")]
+strides = [1] * ndim
+out = [strided_slice(o, beg, end, strides) for o in out]

Review comment:
   @kevinthesun, why don't we just repeat this change in the tir topk 
above? that would fix the unit test, I think.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun edited a comment on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


kevinthesun edited a comment on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738287828


   I think we can raise an exception when dynamic topk is compiled but Thrust 
is not enabled. Building with Thrust usually needs extra effort since it 
requires cmake >=3.13. Users can enable it when necessary. For tvm cuda 
sorting, I'm not sure whether it covers some cases which Thrust doesn't. Maybe 
we can keep it for a while.
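   
   A hypothetical sketch of such a guard; checking for the Thrust packed 
function with `allow_missing=True` is one way to detect whether TVM was built 
with Thrust (`_check_thrust_for_dynamic_topk` is an invented name):
   
   ```python
   import tvm


   def _check_thrust_for_dynamic_topk(k):
       # Static k is handled by the TIR kernels; a dynamic k needs the
       # Thrust-backed sort on CUDA.
       if isinstance(k, int):
           return
       if tvm.get_global_func("tvm.contrib.thrust.sort", allow_missing=True) is None:
           raise RuntimeError(
               "Dynamic topk on CUDA requires Thrust; build TVM with "
               "USE_THRUST=ON (needs cmake >= 3.13)."
           )
   ```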



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


kevinthesun commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738287828


   I think we can raise an exception when dynamic topk is compiled but Thrust 
is not enabled. Building with Thrust usually needs extra effort since it 
requires cmake >=3.13. Users can enable it when necessary. For tvm cuda 
sorting, I'm not sure whether it covers some cases which Thrust doesn't. Maybe 
we can keep it for a while.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] samskalicky opened a new pull request #7027: [GraphRuntime] remove print from GetInputIndex

2020-12-03 Thread GitBox


samskalicky opened a new pull request #7027:
URL: https://github.com/apache/tvm/pull/7027


   Remove the print statement from the `GetInputIndex` API in GraphRuntime. The 
function already returns -1 when the input isn't found, so the print statement 
is unnecessary and degrades performance at runtime. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m commented on pull request #7026: [BYOC][TRT] Support batch norm for all ranks <=5, and all axes

2020-12-03 Thread GitBox


trevor-m commented on pull request #7026:
URL: https://github.com/apache/tvm/pull/7026#issuecomment-738275838


   All tests in test_tensorrt.py passed locally



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #7024: [CI] Upgrade CI cmake for GPU

2020-12-03 Thread GitBox


tqchen commented on pull request #7024:
URL: https://github.com/apache/tvm/pull/7024#issuecomment-738272541


   Unfortunately we cannot simply upgrade the CMake version, as we want to be 
able to keep backward compatibility with the LTS releases we want to support; 
see also https://discuss.tvm.apache.org/t/update-cmake-version/8553



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] anijain2305 commented on a change in pull request #7026: [BYOC][TRT] Support batch norm for all ranks <=5, and all axes

2020-12-03 Thread GitBox


anijain2305 commented on a change in pull request #7026:
URL: https://github.com/apache/tvm/pull/7026#discussion_r535541583



##
File path: python/tvm/relay/op/contrib/tensorrt.py
##
@@ -341,6 +341,11 @@ def batch_norm_annotate_fn(expr):  # pylint: 
disable=unused-variable
 if any([x.checked_type.dtype != "float32" for x in args]):
 logger.info("Only float32 inputs are supported for TensorRT.")
 return False
+if len(args[0].checked_type.shape) == 5 and get_tensorrt_version() < (6, 
0, 1):
+logger.info("nn.batch_norm: TensorRT 6.0.1 or higher is required for 
rank 5 inputs.")

Review comment:
   Missing `return False`?
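   
   For clarity, the guard with the early return would presumably read:
   
   ```python
   if len(args[0].checked_type.shape) == 5 and get_tensorrt_version() < (6, 0, 1):
       logger.info("nn.batch_norm: TensorRT 6.0.1 or higher is required for rank 5 inputs.")
       return False
   ```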





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] trevor-m opened a new pull request #7026: [BYOC][TRT] Support batch norm for all ranks <=5, and all axes

2020-12-03 Thread GitBox


trevor-m opened a new pull request #7026:
URL: https://github.com/apache/tvm/pull/7026


   Previously, batch norm only supported rank-4 inputs with axis 1 or 3. Now we 
support all input ranks up to 5 and all axes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] echuraev removed a comment on pull request #7013: Change default hostname in rpc_tracker

2020-12-03 Thread GitBox


echuraev removed a comment on pull request #7013:
URL: https://github.com/apache/tvm/pull/7013#issuecomment-738243984


   CI is green. @tqchen, could you please take a look at this PR once again?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


mbrookhart commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738243991


   I don't love making thrust a necessary component, unless we automatically 
enable it when we turn on cuda. If we don't support the tir-based sort, should 
we remove it from the codebase?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] echuraev commented on pull request #7013: Change default hostname in rpc_tracker

2020-12-03 Thread GitBox


echuraev commented on pull request #7013:
URL: https://github.com/apache/tvm/pull/7013#issuecomment-738243984


   CI is green. @tqchen, could you please take a look at this PR once again?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun commented on pull request #7018: [Topi] Fix GPU Dynamic Topk by Improving Dynamic Strided Slice in Topi

2020-12-03 Thread GitBox


kevinthesun commented on pull request #7018:
URL: https://github.com/apache/tvm/pull/7018#issuecomment-738240723


   @mbrookhart Generally we need thrust for these dynamic sorting ops. nvptx 
will have issues compiling them.
   @icemelon9 We need to enable thrust for ci gpu. 
https://github.com/apache/tvm/pull/7024



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #7024: [CI] Update CI to use Thrust for GPU

2020-12-03 Thread GitBox


zhiics commented on pull request #7024:
URL: https://github.com/apache/tvm/pull/7024#issuecomment-738236603


   We can just upgrade cmake first and leave turning on thrust to the other PR. 
@tqchen could you please take a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on issue #7010: [TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py

2020-12-03 Thread GitBox


altanh commented on issue #7010:
URL: https://github.com/apache/tvm/issues/7010#issuecomment-738218571


   @tkonolige found that the `pytest-xdist` package supports passing the 
`--forked` argument to `pytest`. This seems to fix the problem when running the 
contrib tests.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

2020-12-03 Thread GitBox


merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535492994



##
File path: python/tvm/auto_scheduler/relay_integration.py
##
@@ -342,3 +343,14 @@ def rewrite_compute_body(compute_tensor, new_layout):
 num = op_node.num_outputs
 outputs = tuple(op_node.output(i) for i in range(num))
 return outputs[0] if num == 1 else outputs
+
+
+def is_auto_scheduler_enabled():
+"""Return whether the auto-scheduler is enabled

Review comment:
   ```suggestion
   """Return whether the auto-scheduler is enabled.
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #6874: [Diagnostics] Add environment variable for controlling top-level printing and fix issue with pretty printing/parsing roundtrip.

2020-12-03 Thread GitBox


mbrookhart commented on pull request #6874:
URL: https://github.com/apache/tvm/pull/6874#issuecomment-738205638


   Thanks @jroesch @tkonolige !



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart merged pull request #6874: [Diagnostics] Add environment variable for controlling top-level printing and fix issue with pretty printing/parsing roundtrip.

2020-12-03 Thread GitBox


mbrookhart merged pull request #6874:
URL: https://github.com/apache/tvm/pull/6874


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [Diagnostics] Add environment variable for controlling top-level printing and fix issue with pretty printing/parsing roundtrip. (#6874)

2020-12-03 Thread mbrookhart
This is an automated email from the ASF dual-hosted git repository.

mbrookhart pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8daa97e  [Diagnostics] Add environment variable for controlling 
top-level printing and fix issue with pretty printing/parsing roundtrip. (#6874)
8daa97e is described below

commit 8daa97ec87118ecdf38453ca878655cb08fba329
Author: Jared Roesch 
AuthorDate: Thu Dec 3 10:32:37 2020 -0800

[Diagnostics] Add environment variable for controlling top-level printing 
and fix issue with pretty printing/parsing roundtrip. (#6874)

* Update Parser in order to handle the NMS code

* Add support for displaying traces optionally

* WIP

* Fix

* Fix error reporting in parser and clean up __init__.py due to CR

* Format

* Quick fix for If

* Fix format

* Fix lint
---
 python/tvm/__init__.py   | 21 +++--
 src/parser/parser.cc | 91 +---
 tests/python/relay/test_ir_parser.py | 14 ++
 3 files changed, 95 insertions(+), 31 deletions(-)

diff --git a/python/tvm/__init__.py b/python/tvm/__init__.py
index 569e8f0..c2b4fdb 100644
--- a/python/tvm/__init__.py
+++ b/python/tvm/__init__.py
@@ -68,15 +68,28 @@ from . import support
 from .contrib import rocm as _rocm, nvcc as _nvcc, sdaccel as _sdaccel
 
 
+def _should_print_backtrace():
+in_pytest = "PYTEST_CURRENT_TEST" in os.environ
+tvm_backtrace = os.environ.get("TVM_BACKTRACE", "0")
+
+try:
+tvm_backtrace = bool(int(tvm_backtrace))
+except ValueError:
+raise ValueError(
+f"invalid value for TVM_BACKTRACE `{tvm_backtrace}`, please set to 
0 or 1."
+)
+
+return in_pytest or tvm_backtrace
+
+
 def tvm_wrap_excepthook(exception_hook):
 """Wrap given excepthook with TVM additional work."""
 
 def wrapper(exctype, value, trbk):
 """Clean subprocesses when TVM is interrupted."""
-in_pytest = "PYTEST_CURRENT_TEST" in os.environ
-
-if exctype is error.DiagnosticError and not in_pytest:
-pass
+if exctype is error.DiagnosticError and not _should_print_backtrace():
+# TODO(@jroesch): consider moving to C++?
+print("note: run with `TVM_BACKTRACE=1` environment variable to 
display a backtrace.")
 else:
 exception_hook(exctype, value, trbk)
 
diff --git a/src/parser/parser.cc b/src/parser/parser.cc
index 987a6e2..afcf707 100644
--- a/src/parser/parser.cc
+++ b/src/parser/parser.cc
@@ -605,30 +605,43 @@ class Parser {
     return ast;
   }
 
+  struct MetaRef {
+    std::string type_key;
+    uint64_t node_index;
+    Span span;
+    MetaRef(std::string type_key, uint64_t node_index, Span span)
+        : type_key(type_key), node_index(node_index), span(span) {}
+  };
+
+  MetaRef MetaRefFromToken(const Token& tok) {
+    Call ref = Downcast<Call>(tok->data);
+    auto attrs = ref->attrs.as<MetaRefAttrs>();
+    auto type_key = attrs->node_type_key;
+    auto index = attrs->node_index;
+    return MetaRef(type_key, index, ref->span);
+  }
+
   /*! \brief Parse a meta reference of the form `meta[type_key][node_index]`.
    * For example `meta[relay.Constant][0]` references the first constant, `meta[relay.Constant][1]`
    * the second, and so on.
    */
   ObjectRef ParseMetaRef() {
-    auto meta_ref = Match(TokenType::kMetaReference);
-    Call ref = Downcast<Call>(meta_ref->data);
-    auto attrs = ref->attrs.as<MetaRefAttrs>();
-    auto type_key = attrs->node_type_key;
-    auto index = attrs->node_index;
-    auto it = this->meta_table.find(type_key);
+    auto meta_ref_tok = Match(TokenType::kMetaReference);
+    auto meta_ref = MetaRefFromToken(meta_ref_tok);
+    auto it = this->meta_table.find(meta_ref.type_key);
     if (it != this->meta_table.end()) {
       auto nodes = (*it).second;
-      if (index < nodes.size()) {
-        return nodes[index];
+      if (meta_ref.node_index < nodes.size()) {
+        return nodes[meta_ref.node_index];
       } else {
-        this->diag_ctx.Emit(Diagnostic::Error(meta_ref->span)
-                            << "the node index `" << index << "` is out of bounds for `" << type_key
-                            << "`");
+        this->diag_ctx.Emit(Diagnostic::Error(meta_ref.span)
+                            << "the node index `" << meta_ref.node_index
+                            << "` is out of bounds for `" << meta_ref.type_key << "`");
         return ObjectRef();
       }
     } else {
-      this->diag_ctx.Emit(Diagnostic::Error(meta_ref->span)
-                          << "no entry in the meta table for `" << type_key << "`");
+      this->diag_ctx.Emit(Diagnostic::Error(meta_ref.span)
+                          << "no entry in the meta table for `" << meta_ref.type_key << "`");
       return ObjectRef();
     }
   }
@@ -922,10 +935,7 
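
The env-var gating this commit adds to python/tvm/__init__.py is self-contained and easy to mirror. A minimal sketch of the same logic, assuming only the standard library; the demo `__main__` block is hypothetical and not part of the commit:

```python
import os


def should_print_backtrace():
    # Mirrors the gating in the commit above: show a full backtrace when
    # running under pytest, or when TVM_BACKTRACE parses to a nonzero int.
    in_pytest = "PYTEST_CURRENT_TEST" in os.environ
    raw = os.environ.get("TVM_BACKTRACE", "0")
    try:
        enabled = bool(int(raw))
    except ValueError:
        raise ValueError(f"invalid value for TVM_BACKTRACE `{raw}`, please set to 0 or 1.")
    return in_pytest or enabled


if __name__ == "__main__":
    # Hypothetical usage: TVM_BACKTRACE=1 python backtrace_demo.py
    print("backtrace enabled:", should_print_backtrace())
```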

[GitHub] [tvm] altanh opened a new pull request #7025: [CI] Hotfix CI (see #7010)

2020-12-03 Thread GitBox


altanh opened a new pull request #7025:
URL: https://github.com/apache/tvm/pull/7025


   Addresses #7010 partially; hopefully this is enough for now while we work 
on a broader solution.
   
   Removes the PyTorch dependency from the conv2d gradient test, which should 
fix the CI failures. We might want to get this merged ASAP so people can 
retrigger CI on their PRs.
   
   cc @tqchen @tkonolige 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kevinthesun opened a new pull request #7024: Update CI to use Thrust for GPU

2020-12-03 Thread GitBox


kevinthesun opened a new pull request #7024:
URL: https://github.com/apache/tvm/pull/7024


   We also need to update the Docker image.
   
   @tqchen @zhiics 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jwfromm commented on a change in pull request #7006: [frontend][keras] Add support for TimeDistributed

2020-12-03 Thread GitBox


jwfromm commented on a change in pull request #7006:
URL: https://github.com/apache/tvm/pull/7006#discussion_r535456535



##
File path: python/tvm/relay/frontend/keras.py
##
@@ -927,6 +1031,66 @@ def _convert_repeat_vector(inexpr, keras_layer, _):
     return out
 
 
+def _convert_time_distributed(inexpr, keras_layer, etab, input_shape=None, data_layout=None):
+    # TimeDistributed: split input tensor along the second dimension (assumed to be time),
+    # apply inner layer to each split individually,
+    # and then combine the results
+    if input_shape is None:
+        input_shape = keras_layer.input_shape
+    if data_layout is None:
+        data_layout = etab.data_layout
+
+    assert len(input_shape) >= 2, "Input to TimeDistributed must have at least two dimensions"
+
+    inner_layer = keras_layer.layer
+    inner_input_shape = [d for (i, d) in enumerate(input_shape) if i != 1]
+
+    # for NDHWC, inner data layout will drop the D
+    inner_data_layout = None
+    if data_layout == "NDHWC":
+        inner_data_layout = "NHWC"
+
+    # some code duplication from keras_op_to_relay
+    # but it's useful to avoid cluttering the etab
+    inner_layer_op_name = type(keras_layer.layer).__name__
+    if inner_layer_op_name not in _convert_map:
+        raise tvm.error.OpNotImplemented(
+            "The inner layer for TimeDistributed {} is not supported for frontend Keras.".format(
+                inner_layer_op_name
+            )
+        )
+
+    conversion_func = lambda expr: _convert_map[inner_layer_op_name](
+        expr, inner_layer, etab, input_shape=inner_input_shape, data_layout=inner_data_layout

Review comment:
   OK, that seems reasonable. I think it's fine to remove `data_layout` from 
etab and use explicit inputs.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
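
For context, a minimal end-to-end use of the converter under review might look like the sketch below, once this PR's TimeDistributed support lands. It assumes the frontend accepts tf.keras models as in the existing Keras frontend tests; the layer sizes and input name are illustrative.

```python
import tensorflow as tf
from tvm import relay

# Toy model: apply one shared Dense layer to each of 10 time steps.
model = tf.keras.Sequential(
    [
        tf.keras.layers.InputLayer(input_shape=(10, 16), name="data"),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(8)),
    ]
)

# Convert to Relay; the shape dict is keyed by the Keras input name.
mod, params = relay.frontend.from_keras(model, {"data": (1, 10, 16)})
print(mod)
```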




[GitHub] [tvm] altanh commented on issue #7010: [TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py

2020-12-03 Thread GitBox


altanh commented on issue #7010:
URL: https://github.com/apache/tvm/issues/7010#issuecomment-738177491


   I agree. I think we should first address #7017 to confirm it's the same 
failure that is happening on CI, and then look into removing the dependencies. 
If we can't remove a dependency (as in the case of `test_onnx.py` and 
`test_dlpack.py`), I propose sandboxing based on dependency, so that files 
with conflicting dependencies are always run in separate pytest processes. If 
a single file uses two conflicting dependencies, I'm not sure how to proceed; 
we may need to build the dependencies with a special libomp configuration on 
the CI machine (at least we can cache this?)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
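
One way to read the sandboxing proposal above is "one pytest process per dependency group", so that conflicting native libraries such as libomp never share an address space. A rough sketch under that assumption; the group assignments and paths are purely illustrative:

```python
import subprocess
import sys

# Hypothetical grouping; a real version would read this from a manifest.
DEP_GROUPS = {
    "pytorch": ["tests/python/frontend/pytorch/"],
    "tensorflow": ["tests/python/frontend/tensorflow/"],
    "onnx": ["tests/python/frontend/onnx/"],
}


def run_sandboxed():
    # Each group runs in its own pytest process, so frameworks that ship
    # their own libomp never get loaded into the same process.
    for group, paths in DEP_GROUPS.items():
        print(f"=== running {group} tests ===")
        result = subprocess.run([sys.executable, "-m", "pytest", *paths])
        if result.returncode != 0:
            sys.exit(result.returncode)


if __name__ == "__main__":
    run_sandboxed()
```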




[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

2020-12-03 Thread GitBox


merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535444853



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
   ```suggestion
                logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```

##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
   ```suggestion
                logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```

##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -117,14 +118,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
             return conv2d_NCHWc_strategy_cpu(attrs, inputs, out_type, target)
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d NHWC layout is not optimized for x86 in autotvm.")

Review comment:
   ```suggestion
                logger.warning("conv2d NHWC layout is not optimized for x86 with autotvm.")
   ```

##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -117,14 +118,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
             return conv2d_NCHWc_strategy_cpu(attrs, inputs, out_type, target)
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d NHWC layout is not optimized for x86 in autotvm.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.conv2d_nhwc, need_auto_scheduler_layout=True),
                 wrap_topi_schedule(topi.x86.schedule_conv2d_nhwc),
                 name="conv2d_nhwc.x86",
             )
         elif layout == "HWCN":
             assert kernel_layout == "HWIO"
-            logger.warning("conv2d HWCN layout is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("conv2d HWCN layout is not optimized for x86 in autotvm.")

Review comment:
   ```suggestion
                logger.warning("conv2d HWCN layout is not optimized for x86 with autotvm.")
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

2020-12-03 Thread GitBox


merrymercy commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535444556



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
   ```suggestion
                logger.warning("group_conv2d is not optimized for x86 with autotvm.")
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7020: [AutoScheduler] Misc update to hardware parameter and task scheduler

2020-12-03 Thread GitBox


comaniac commented on a change in pull request #7020:
URL: https://github.com/apache/tvm/pull/7020#discussion_r535429580



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
   `with autotvm`?

##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -168,15 +175,17 @@ def conv2d_strategy_cpu(attrs, inputs, out_type, target):
     else:  # group_conv2d
         if layout == "NCHW":
             assert kernel_layout == "OIHW"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")
             strategy.add_implementation(
                 wrap_compute_conv2d(topi.nn.group_conv2d_nchw, has_groups=True),
                 wrap_topi_schedule(topi.generic.schedule_group_conv2d_nchw),
                 name="group_conv2d_nchw.generic",
             )
         elif layout == "NHWC":
             assert kernel_layout == "HWIO"
-            logger.warning("group_conv2d is not optimized for x86.")
+            if not is_auto_scheduler_enabled():
+                logger.warning("group_conv2d is not optimized for x86.")

Review comment:
   `with autotvm`?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
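
The `is_auto_scheduler_enabled()` guard used throughout these hunks reads an opt-in flag off the current PassContext. A rough sketch of such a check; the config key follows the `relay.backend.use_auto_scheduler` option used elsewhere in TVM, but treat the exact spelling as an assumption:

```python
import tvm


def is_auto_scheduler_enabled():
    # Sketch: look up the opt-in flag on the innermost PassContext.
    return tvm.transform.PassContext.current().config.get(
        "relay.backend.use_auto_scheduler", False
    )


# The flag is set by entering a PassContext with the config enabled.
with tvm.transform.PassContext(
    opt_level=3, config={"relay.backend.use_auto_scheduler": True}
):
    assert is_auto_scheduler_enabled()
```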




[tvm] branch main updated: [µTVM] Fix paths in the reference VM tutorial and add vbguest recommendation (#7015)

2020-12-03 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 42583d6  [µTVM] Fix paths in the reference VM tutorial and add vbguest recommendation (#7015)
42583d6 is described below

commit 42583d6a722ef117e9ca83460d6a0182dcf44c89
Author: Andrew Reusch 
AuthorDate: Thu Dec 3 08:35:46 2020 -0800

[µTVM] Fix paths in the reference VM tutorial and add vbguest 
recommendation (#7015)

* Add recommendation to install vbguest plugin.

* Update directories to match checked-in.



[GitHub] [tvm] tqchen merged pull request #7015: [µTVM] Fix paths in the reference VM tutorial and add vbguest recommendation

2020-12-03 Thread GitBox


tqchen merged pull request #7015:
URL: https://github.com/apache/tvm/pull/7015


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen edited a comment on issue #7010: [TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py

2020-12-03 Thread GitBox


tqchen edited a comment on issue #7010:
URL: https://github.com/apache/tvm/issues/7010#issuecomment-738105336


   It would be great to propose a fix, given that this flaky error happens 
quite frequently.
   
   Is this related to the fact that we are using PyTorch for gradient testing? 
Ideally we should move that into a separate test suite. By default, we should 
use numerical gradient checking that is independent of other frameworks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
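
A framework-independent check along those lines is straightforward with central differences. A minimal sketch for a scalar-valued function; the names and tolerances are illustrative:

```python
import numpy as np


def numerical_grad(f, x, eps=1e-5):
    """Central-difference estimate of df/dx for scalar-valued f."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f(x)
        x[idx] = orig - eps
        f_minus = f(x)
        x[idx] = orig  # restore the perturbed entry
        grad[idx] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad


# Sanity check: d/dx sum(x**2) = 2x.
x = np.random.randn(3, 4)
est = numerical_grad(lambda a: np.sum(a**2), x)
np.testing.assert_allclose(est, 2 * x, rtol=1e-4, atol=1e-6)
```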




[GitHub] [tvm] tqchen commented on issue #7010: [TEST][FLAKY] test_op_grad_level2.py::test_conv2d_grad.py

2020-12-03 Thread GitBox


tqchen commented on issue #7010:
URL: https://github.com/apache/tvm/issues/7010#issuecomment-738105336


   It would be great to propose a fix. Is this related to the fact that we are 
using PyTorch for gradient testing? Ideally we should move that into a 
separate test suite.
   
   By default, we should use numerical gradient checking that is independent 
of other frameworks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy merged pull request #7022: [auto_scheduler] Part.1 metal default hardware params

2020-12-03 Thread GitBox


merrymercy merged pull request #7022:
URL: https://github.com/apache/tvm/pull/7022


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: [auto_scheduler] metal default hardware params (#7022)

2020-12-03 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 965a67e  [auto_scheduler] metal default hardware params (#7022)
965a67e is described below

commit 965a67e7a04612806a390b50e2cca1c0a7744900
Author: Bing Xu 
AuthorDate: Thu Dec 3 06:39:06 2020 -0800

[auto_scheduler] metal default hardware params (#7022)
---
 src/auto_scheduler/search_task.cc | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/src/auto_scheduler/search_task.cc b/src/auto_scheduler/search_task.cc
index 0b85a03..bd09a70 100755
--- a/src/auto_scheduler/search_task.cc
+++ b/src/auto_scheduler/search_task.cc
@@ -72,6 +72,17 @@ HardwareParams HardwareParamsNode::GetDefaultHardwareParams(const Target& target
     p_hardware_params->max_vthread_extent = p_hardware_params->warp_size / 4;
 
     return hardware_params;
+  } else if (target->kind->device_type == kDLMetal) {
+    // Reference: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf
+    // This setting looks working for Metal GPUs later than A10
+    auto hardware_params = HardwareParams(-1, 16, 64);
+    auto* p_hardware_params = hardware_params.CopyOnWrite();
+    p_hardware_params->max_shared_memory_per_block = 32 * 1024;
+    p_hardware_params->max_registers_per_block = 4 * 1024;
+    p_hardware_params->max_threads_per_block = 1024;
+    p_hardware_params->warp_size = 8;
+    p_hardware_params->max_vthread_extent = p_hardware_params->warp_size / 4;
+    return hardware_params;
   } else {
     LOG(FATAL) << "No default hardware parameters for target: " << target;
   }
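
For reference, the Metal defaults this commit adds can be collected as plain data. The sketch below mirrors the C++ constants and the `HardwareParams(-1, 16, 64)` constructor arguments; the dict itself is only illustrative, not a TVM API:

```python
# Defaults for Metal GPUs later than A10, per the commit above.
METAL_DEFAULT_HARDWARE_PARAMS = {
    "num_cores": -1,                      # HardwareParams(-1, 16, 64)
    "vector_unit_bytes": 16,
    "cache_line_bytes": 64,
    "max_shared_memory_per_block": 32 * 1024,
    "max_registers_per_block": 4 * 1024,
    "max_threads_per_block": 1024,
    "warp_size": 8,
    "max_vthread_extent": 8 // 4,         # warp_size / 4
}
```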



[GitHub] [tvm] merrymercy merged pull request #7019: [AutoScheduler] Add a tutorial on auto-scheduling a network for x86 CPU

2020-12-03 Thread GitBox


merrymercy merged pull request #7019:
URL: https://github.com/apache/tvm/pull/7019


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (22a0877 -> 3afde62)

2020-12-03 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 22a0877  Fix trt Test (#7016)
 add 3afde62  [AutoScheduler] Add a tutorial on auto-scheduling a network for x86 CPU (#7019)

No new revisions were added by this update.

Summary of changes:
 docs/dev/convert_layout.rst                        |   1 +
 python/tvm/auto_scheduler/measure.py               |   4 +
 .../ci_logs/resnet-18-NHWC-B1-cuda.json            |  26 +
 .../auto_scheduler/ci_logs/resnet-18-NHWC-B1.json  |  26 -
 .../ci_logs/resnet-50-NHWC-B1-llvm.json            |  31 ++
 tutorials/auto_scheduler/tune_network_cuda.py      |  21 ++--
 .../{tune_network_cuda.py => tune_network_x86.py}  | 108 ++---
 7 files changed, 127 insertions(+), 90 deletions(-)
 create mode 100644 tutorials/auto_scheduler/ci_logs/resnet-18-NHWC-B1-cuda.json
 delete mode 100644 tutorials/auto_scheduler/ci_logs/resnet-18-NHWC-B1.json
 create mode 100644 tutorials/auto_scheduler/ci_logs/resnet-50-NHWC-B1-llvm.json
 copy tutorials/auto_scheduler/{tune_network_cuda.py => tune_network_x86.py} (76%)
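
The new x86 tutorial follows the same task-extraction flow as the CUDA one. Below is a rough sketch of that flow, assuming the `auto_scheduler` APIs as of this commit; the toy network, trial count, and log-file name are illustrative:

```python
import tvm
from tvm import auto_scheduler, relay

# Toy network so the sketch is self-contained; the tutorial uses ResNet-50.
data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
out = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
params = {}

# Extract tuning tasks from the network, then tune them jointly.
target = tvm.target.Target("llvm -mcpu=core-avx2")
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)

tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=200,  # the tutorial uses far more trials
    runner=auto_scheduler.LocalRunner(repeat=10, enable_cpu_cache_flush=True),
    measure_callbacks=[auto_scheduler.RecordToFile("resnet-50-NHWC-B1-llvm.json")],
)
tuner.tune(tune_option)
```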


