[tvm] branch last-successful updated (961a7c70d7 -> 1d39f2c974)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 961a7c70d7 [ROOFLINE] Add CUDA support to roofline analysis (#12205)
 add 9f16b607c8 [TVMScript] Doc Definition (#12244)
 add 1d39f2c974 [FQ2I] fix unary op output affine type in fq2i (#12224)

No new revisions were added by this update.

Summary of changes:
 .../transform/fake_quantization_to_integer.py  |2 +-
 python/tvm/script/printer/doc_core.py  | 1140 
 .../test_pass_fake_quantization_to_integer.py  |   35 +-
 3 files changed, 1165 insertions(+), 12 deletions(-)
 create mode 100644 python/tvm/script/printer/doc_core.py



[GitHub] [tvm] junrushao1994 commented on pull request #12242: [CPP-RPC] Fix GetPath to use relative file path

2022-07-30 Thread GitBox


junrushao1994 commented on PR #12242:
URL: https://github.com/apache/tvm/pull/12242#issuecomment-1200353429

   CC @FrozenGene who is the original author of CPP RPC


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [tvm] junrushao1994 commented on a diff in pull request #12245: [Fix] Fix some errors in unittests

2022-07-30 Thread GitBox


junrushao1994 commented on code in PR #12245:
URL: https://github.com/apache/tvm/pull/12245#discussion_r933928686


##
tests/python/unittest/test_tir_transform_hoist_expression.py:
##
@@ -448,7 +448,8 @@ class TestHoistLetExpr(BaseBeforeAfter):
 def before(A: T.Buffer[(4, 4), "float32"]):
 for i, j in T.grid(4, 4):
 x = T.var("float32")
-A[i, j] = tir.Let(x, T.cast(i + 1, "float32"), 5.0 * x + T.cast(j, "float32"))
+with T.let(x, T.cast(i + 1, "float32")):
+A[i, j] = 5.0 * x + T.cast(j, "float32")

Review Comment:
   Are we changing the original let expression to TIR's let stmt?
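
   For context, the distinction here is between TIR's let *expression* (`tir.Let`, which binds a variable inside a single `PrimExpr`) and the let *statement* (`tir.LetStmt`, which binds a variable for a body statement). A minimal sketch of the two constructs, assuming a local TVM installation (this is not part of the PR):

```python
from tvm import tir

x = tir.Var("x", "float32")

# Let expression: the binding lives inside one PrimExpr.
let_expr = tir.Let(x, tir.FloatImm("float32", 1.0), x * 5.0 + 2.0)

# Let statement: the binding scopes over a body statement.
let_stmt = tir.LetStmt(x, tir.FloatImm("float32", 1.0),
                       tir.Evaluate(x * 5.0 + 2.0))

print(type(let_expr))  # <class 'tvm.tir.expr.Let'>
print(type(let_stmt))  # <class 'tvm.tir.stmt.LetStmt'>
```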






[GitHub] [tvm] cyx-6 opened a new pull request, #12245: [Fix] Fix some errors in unittests

2022-07-30 Thread GitBox


cyx-6 opened a new pull request, #12245:
URL: https://github.com/apache/tvm/pull/12245

   test_aot_legalize_packed_call.py: `T.preflattened_buffer` returns `void`
   test_tir_intrin.py: `type` here should be `buffer_type`
   test_tir_transform_flatten_buffer.py: `extents` should be `list`
   test_tir_transform_hoist_expression.py: rewrite to avoid direct usage of `tir` in TVMScript
   test_tir_transform_storage_flatten.py: `T.allocate` has no argument named `strides`





[tvm] branch main updated: [MetaSchedule][Test] Add unittests for GMM (#12243)

2022-07-30 Thread xiyou
This is an automated email from the ASF dual-hosted git repository.

xiyou pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 42dd6afa97 [MetaSchedule][Test] Add unittests for GMM (#12243)
42dd6afa97 is described below

commit 42dd6afa970e8948584d5474691673c32e2c3457
Author: Junru Shao 
AuthorDate: Sat Jul 30 21:24:19 2022 -0700

[MetaSchedule][Test] Add unittests for GMM (#12243)
---
 .../unittest/test_meta_schedule_space_cpu.py   | 123 +
 .../unittest/test_meta_schedule_space_cuda.py  |  82 ++
 2 files changed, 205 insertions(+)

diff --git a/tests/python/unittest/test_meta_schedule_space_cpu.py b/tests/python/unittest/test_meta_schedule_space_cpu.py
index 12aa150f57..7d601a7b0b 100644
--- a/tests/python/unittest/test_meta_schedule_space_cpu.py
+++ b/tests/python/unittest/test_meta_schedule_space_cpu.py
@@ -1079,6 +1079,128 @@ def test_cpu_dil():
 )
 
 
+def test_cpu_gmm():
+# fmt: off
+@T.prim_func
+def gmm_0(X: T.Buffer[(1, 128, 128), "float32"], Y: T.Buffer[(1, 128, 128), "float32"], Z: T.Buffer[(1, 128, 128), "float32"]) -> None:
+# function attr dict
+T.func_attr({"global_symbol": "main", "tir.noalias": True})
+# body
+with T.block("root"):
+T.reads()
+T.writes()
+T.block_attr({"meta_schedule.parallel":288, "meta_schedule.unroll_explicit":16, "meta_schedule.vectorize":64})
+Z_global = T.alloc_buffer([1, 128, 128], dtype="float32")
+for i0_0, i1_0, i2_0, i0_1, i1_1, i2_1 in T.grid(1, 4, 2, 1, 1, 8):
+for i3_0, i0_2, i1_2, i2_2, i3_1, i0_3, i1_3, i2_3 in T.grid(128, 1, 16, 1, 1, 1, 2, 8):
+with T.block("Z"):
+b = T.axis.spatial(1, i0_0 + i0_1 + i0_2 + i0_3)
+i = T.axis.spatial(128, i1_0 * 32 + i1_1 * 32 + i1_2 * 2 + i1_3)
+j = T.axis.spatial(128, i2_0 * 64 + i2_1 * 8 + i2_2 * 8 + i2_3)
+k = T.axis.reduce(128, i3_1 + i3_0)
+T.reads(X[b, i, k], Y[b, k, j])
+T.writes(Z_global[b, i, j])
+T.block_attr({"meta_schedule.tiling_structure":"SSRSRS"})
+with T.init():
+Z_global[b, i, j] = T.float32(0)
+Z_global[b, i, j] = Z_global[b, i, j] + X[b, i, k] * Y[b, k, j]
+for ax0, ax1, ax2 in T.grid(1, 32, 8):
+with T.block("Z_global"):
+v0 = T.axis.spatial(1, ax0)
+v1 = T.axis.spatial(128, i1_0 * 32 + ax1)
+v2 = T.axis.spatial(128, i2_0 * 64 + i2_1 * 8 + ax2)
+T.reads(Z_global[v0, v1, v2])
+T.writes(Z[v0, v1, v2])
+Z[v0, v1, v2] = Z_global[v0, v1, v2]
+@T.prim_func
+def gmm_1(X: T.Buffer[(1, 128, 128), "float32"], Y: T.Buffer[(1, 128, 128), "float32"], Z: T.Buffer[(1, 128, 128), "float32"]) -> None:
+# function attr dict
+T.func_attr({"global_symbol": "main", "tir.noalias": True})
+# body
+with T.block("root"):
+T.reads()
+T.writes()
+T.block_attr({"meta_schedule.parallel":288, "meta_schedule.unroll_explicit":16, "meta_schedule.vectorize":64})
+Z_global = T.alloc_buffer([1, 128, 128], dtype="float32")
+for i0_0, i1_0, i2_0 in T.grid(1, 4, 2):
+for i0_1, i1_1, i2_1, i3_0, i0_2, i1_2, i2_2, i3_1, i0_3, i1_3, i2_3 in T.grid(1, 1, 8, 128, 1, 16, 1, 1, 1, 2, 8):
+with T.block("Z"):
+b = T.axis.spatial(1, i0_0 + i0_1 + i0_2 + i0_3)
+i = T.axis.spatial(128, i1_0 * 32 + i1_1 * 32 + i1_2 * 2 + i1_3)
+j = T.axis.spatial(128, i2_0 * 64 + i2_1 * 8 + i2_2 * 8 + i2_3)
+k = T.axis.reduce(128, i3_1 + i3_0)
+T.reads(X[b, i, k], Y[b, k, j])
+T.writes(Z_global[b, i, j])
+T.block_attr({"meta_schedule.tiling_structure":"SSRSRS"})
+with T.init():
+Z_global[b, i, j] = T.float32(0)
+Z_global[b, i, j] = Z_global[b, i, j] + X[b, i, k] * Y[b, k, j]
+for ax0, ax1, ax2 in T.grid(1, 32, 64):
+with T.block("Z_global"):
+v0 = T.axis.spatial(1, ax0)
+v1 = T.axis.spatial(128, i1_0 * 32 + ax1)
+v2 = T.axis.spatial(128, i2_0 * 64 + ax2)
+T.reads(Z_global[v0, v1, v2])
+T.writes(Z[v0, v1, v2])
+Z[v0, v1, v2] = Z_global[v0, v1, v2]
+@T.prim_func
+def gmm_2(X: 
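
For reference, the `gmm` workload exercised above is a batched matrix multiplication; a small NumPy sketch of its semantics (illustrative only, not part of the commit):

```python
import numpy as np

# Shapes follow the prim_func signatures above: X, Y, Z are (1, 128, 128) float32.
X = np.random.rand(1, 128, 128).astype("float32")
Y = np.random.rand(1, 128, 128).astype("float32")

# Z[b, i, j] = sum over k of X[b, i, k] * Y[b, k, j], matching the reduction in block "Z".
Z = np.einsum("bik,bkj->bij", X, Y)
assert Z.shape == (1, 128, 128)
```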

[GitHub] [tvm] zxybazh merged pull request #12243: [MetaSchedule][Test] Add unittests for GMM

2022-07-30 Thread GitBox


zxybazh merged PR #12243:
URL: https://github.com/apache/tvm/pull/12243





[tvm] branch main updated: [FQ2I] fix unary op output affine type in fq2i (#12224)

2022-07-30 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1d39f2c974 [FQ2I] fix unary op output affine type in fq2i (#12224)
1d39f2c974 is described below

commit 1d39f2c974e09e5a767b67e127a5132f0b36c102
Author: Matthew Brookhart 
AuthorDate: Sat Jul 30 21:00:55 2022 -0600

[FQ2I] fix unary op output affine type in fq2i (#12224)

* fix unary op output affine type in fq2i

* better names

* add option to force to positive values for ops that are undefined on 
negative values
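
As a rough illustration of why forcing positive values matters for ops such as `sqrt` and `log`, here is a hedged NumPy sketch (assuming the usual affine mapping x_fp = scale * (x_q - zero_point); this is not code from the commit):

```python
import numpy as np

scale, zero_point = 0.125, -128          # int8 [-128, 127] maps to [0.0, 31.875]
x_q = np.random.randint(-128, 127, size=4, dtype="int8")

x_fp = scale * (x_q.astype("float32") - zero_point)  # always >= 0 with this zero point
y_fp = np.sqrt(x_fp)                                  # well defined because inputs are >= 0
y_q = np.clip(np.round(y_fp / scale) + zero_point, -128, 127).astype("int8")
```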
---
 .../transform/fake_quantization_to_integer.py  |  2 +-
 .../test_pass_fake_quantization_to_integer.py  | 35 +++---
 2 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/python/tvm/relay/transform/fake_quantization_to_integer.py b/python/tvm/relay/transform/fake_quantization_to_integer.py
index 8308298e70..b0464439b0 100644
--- a/python/tvm/relay/transform/fake_quantization_to_integer.py
+++ b/python/tvm/relay/transform/fake_quantization_to_integer.py
@@ -534,7 +534,7 @@ def register_unary_qnn(op_name, op):
 out_t.scale,
 out_t.zero_point,
 )
-return [out, x_t]
+return [out, out_t]
 
 return register_fake_quantization_to_integer(op_name, unary)
 
diff --git a/tests/python/relay/test_pass_fake_quantization_to_integer.py b/tests/python/relay/test_pass_fake_quantization_to_integer.py
index d0c8cca6b7..38520ff2df 100644
--- a/tests/python/relay/test_pass_fake_quantization_to_integer.py
+++ b/tests/python/relay/test_pass_fake_quantization_to_integer.py
@@ -318,23 +318,36 @@ def test_fake_quantize_global_avg_pool():
 
 
 class TestUnaryQNNOp:
-def helper_test_fake_quantize_unary_op(self, fp32_op, scale=0.125):
-x = relay.var("x", shape=[1, 3, 3, 3], dtype="int8")
-mid_point = relay.const(-128)
+def helper_test_fake_quantize_unary_op(self, fp32_op, pos_values=False):
+for dtype in ["int8", "uint8"]:
+x = relay.var("x", shape=[1, 3, 3, 3], dtype=dtype)
 
-x = relay.qnn.op.dequantize(x, relay.const(scale), mid_point)
-op = fp32_op(x)
-op = relay.qnn.op.quantize(op, relay.const(scale), mid_point)
+zero = -128 if dtype == "int8" else 0
+if pos_values:
+# Use a positive range for quanitzed ops that only work on positive values
+input_mid_point = relay.const(zero)
+output_mid_point = relay.const(zero)
+else:
+input_mid_point = relay.const(np.random.randint(0, 255) + zero)
+output_mid_point = relay.const(np.random.randint(0, 255) + zero)
 
-x_np = np.random.randint(-128, 127, size=[1, 3, 3, 3], dtype="int8")
+input_scale = relay.const(np.random.rand())
+output_scale = relay.const(np.random.rand())
 
-compare_fq_to_int(op, [x_np], True)
+x = relay.qnn.op.dequantize(x, input_scale, input_mid_point)
+op = fp32_op(x)
+
+op = relay.qnn.op.quantize(op, output_scale, output_mid_point, out_dtype=dtype)
+
+x_np = np.random.randint(0 + zero, 255 + zero, size=[1, 3, 3, 3], dtype=dtype)
+
+compare_fq_to_int(op, [x_np], True)
 
 def test_sqrt(self):
-self.helper_test_fake_quantize_unary_op(fp32_op=relay.sqrt)
+self.helper_test_fake_quantize_unary_op(fp32_op=relay.sqrt, pos_values=True)
 
 def test_rsqrt(self):
-self.helper_test_fake_quantize_unary_op(fp32_op=relay.rsqrt)
+self.helper_test_fake_quantize_unary_op(fp32_op=relay.rsqrt, pos_values=True)
 
 def test_exp(self):
 self.helper_test_fake_quantize_unary_op(fp32_op=relay.exp)
@@ -349,7 +362,7 @@ class TestUnaryQNNOp:
 self.helper_test_fake_quantize_unary_op(fp32_op=relay.tanh)
 
 def test_log(self):
-self.helper_test_fake_quantize_unary_op(fp32_op=relay.log)
+self.helper_test_fake_quantize_unary_op(fp32_op=relay.log, pos_values=True)
 
 
 def test_fake_quantize_reshape():



[GitHub] [tvm] junrushao1994 merged pull request #12224: [FQ2I] fix unary op output affine type in fq2i

2022-07-30 Thread GitBox


junrushao1994 merged PR #12224:
URL: https://github.com/apache/tvm/pull/12224





[tvm] branch main updated: [TVMScript] Doc Definition (#12244)

2022-07-30 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 9f16b607c8 [TVMScript] Doc Definition (#12244)
9f16b607c8 is described below

commit 9f16b607c8a5fd84b807da665210c9aad31be961
Author: Junru Shao 
AuthorDate: Sat Jul 30 19:58:08 2022 -0700

[TVMScript] Doc Definition (#12244)

This single-file PR is automatically generated by a script that describes 
the Doc AST.
---
 python/tvm/script/printer/doc_core.py | 1140 +
 1 file changed, 1140 insertions(+)

diff --git a/python/tvm/script/printer/doc_core.py b/python/tvm/script/printer/doc_core.py
new file mode 100644
index 00..b88eef9a0e
--- /dev/null
+++ b/python/tvm/script/printer/doc_core.py
@@ -0,0 +1,1140 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=redefined-outer-name,missing-docstring,invalid-name
+# pylint: disable=useless-super-delegation,redefined-builtin
+# pylint: disable=too-few-public-methods,too-many-arguments
+class AST:
+_FIELDS = ["lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__()
+self.lineno = lineno
+self.col_offset = col_offset
+self.end_lineno = end_lineno
+self.end_col_offset = end_col_offset
+
+
+class mod(AST):
+_FIELDS = ["lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+
+
+class Module(mod):
+_FIELDS = ["body", "lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, body, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+self.body = body
+
+
+class Interactive(mod):
+_FIELDS = ["body", "lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, body, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+self.body = body
+
+
+class Expression(mod):
+_FIELDS = ["body", "lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, body, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+self.body = body
+
+
+class stmt(AST):
+_FIELDS = ["lineno", "col_offset", "end_lineno", "end_col_offset"]
+
+def __init__(self, lineno, col_offset, end_lineno, end_col_offset):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+
+
+class FunctionDef(stmt):
+_FIELDS = [
+"name",
+"args",
+"body",
+"decorator_list",
+"returns",
+"lineno",
+"col_offset",
+"end_lineno",
+"end_col_offset",
+]
+
+def __init__(
+self,
+name,
+args,
+body,
+decorator_list,
+returns,
+lineno,
+col_offset,
+end_lineno,
+end_col_offset,
+):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+self.name = name
+self.args = args
+self.body = body
+self.decorator_list = decorator_list
+self.returns = returns
+
+
+class ClassDef(stmt):
+_FIELDS = [
+"name",
+"bases",
+"keywords",
+"body",
+"decorator_list",
+"lineno",
+"col_offset",
+"end_lineno",
+"end_col_offset",
+]
+
+def __init__(
+self,
+name,
+bases,
+keywords,
+body,
+decorator_list,
+lineno,
+col_offset,
+end_lineno,
+end_col_offset,
+):
+super().__init__(lineno, col_offset, end_lineno, end_col_offset)
+self.name = name
+self.bases = bases
+self.keywords = keywords
+self.body = body
+self.decorator_list = decorator_list
+
+
+class 
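
A hedged usage sketch of the generated classes shown above (constructor argument order follows the `_FIELDS` lists; it assumes `tvm.script.printer.doc_core` is importable in your build and is not part of the commit):

```python
from tvm.script.printer.doc_core import Module

# Build an empty module node; positions are plain integers, as in Python's ast module.
mod = Module(body=[], lineno=1, col_offset=0, end_lineno=1, end_col_offset=0)
print(mod._FIELDS)  # ['body', 'lineno', 'col_offset', 'end_lineno', 'end_col_offset']
```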

[GitHub] [tvm] junrushao1994 merged pull request #12244: [TVMScript] Doc Definition

2022-07-30 Thread GitBox


junrushao1994 merged PR #12244:
URL: https://github.com/apache/tvm/pull/12244





[tvm] branch last-successful updated (e756980b41 -> 961a7c70d7)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from e756980b41 [UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)
 add 961a7c70d7 [ROOFLINE] Add CUDA support to roofline analysis (#12205)

No new revisions were added by this update.

Summary of changes:
 python/tvm/utils/__init__.py   |   2 +-
 .../utils/{roofline.py => roofline/__init__.py}| 266 +++--
 python/tvm/utils/roofline/cuda.py  | 236 ++
 python/tvm/utils/roofline/registry.py  |  83 +++
 python/tvm/utils/roofline/x86.py   | 254 
 src/target/source/codegen_cuda.cc  |   2 +
 src/tir/ir/specialize.cc   |   1 +
 src/tir/transforms/tensorcore_infer_fragment.cc|  15 +-
 tests/python/unittest/test_roofline.py | 121 ++
 tests/python/unittest/test_runtime_profiling.py|  98 
 10 files changed, 736 insertions(+), 342 deletions(-)
 rename python/tvm/utils/{roofline.py => roofline/__init__.py} (51%)
 create mode 100644 python/tvm/utils/roofline/cuda.py
 create mode 100644 python/tvm/utils/roofline/registry.py
 create mode 100644 python/tvm/utils/roofline/x86.py
 create mode 100644 tests/python/unittest/test_roofline.py



[tvm] branch last-successful updated (c0a3da84bc -> e756980b41)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from c0a3da84bc [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)
 add e756980b41 [UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)

No new revisions were added by this update.

Summary of changes:
 python/tvm/script/highlight.py | 24 +---
 1 file changed, 17 insertions(+), 7 deletions(-)



[tvm] branch nightly-docker-update updated (03cdd1b4ea -> cd40093d9e)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly-docker-update
in repository https://gitbox.apache.org/repos/asf/tvm.git


 discard 03cdd1b4ea [ci][docker] Nightly Docker image update
 add db4380cf41 [ci][docker] create Dockerfile.ci_riscv (#12230)
 add dff5c975a0 Deploy the Pretrained Model on Jetson Nano  (#11037)
 add fb87c21bf8 remove duplicated cast op when lowering qnn.requantize op in float mode (#12234)
 add 12dcfd70ef [AutoSchedule] Fix misusage of an already-moved object (#12239)
 add c0a3da84bc [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)
 add e756980b41 [UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)
 add 961a7c70d7 [ROOFLINE] Add CUDA support to roofline analysis (#12205)
 add cd40093d9e [ci][docker] Nightly Docker image update

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (03cdd1b4ea)
            \
             N -- N -- N   refs/heads/nightly-docker-update (cd40093d9e)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile|  16 +-
 ci/jenkins/Jenkinsfile.j2  |  16 +-
 docker/{Dockerfile.ci_qemu => Dockerfile.ci_riscv} |  35 +--
 ...oy_model_on_rasp.py => deploy_model_on_nano.py} |  54 +++--
 include/tvm/runtime/vm/vm.h|   2 +-
 python/tvm/script/highlight.py |  24 +-
 python/tvm/utils/__init__.py   |   2 +-
 .../utils/{roofline.py => roofline/__init__.py}| 266 +++--
 python/tvm/utils/roofline/cuda.py  | 236 ++
 python/tvm/utils/roofline/registry.py  |  83 +++
 python/tvm/utils/roofline/x86.py   | 254 
 src/auto_scheduler/search_policy/sketch_policy.cc  |   2 +-
 src/relay/qnn/op/requantize.cc |   5 +-
 src/target/source/codegen_cuda.cc  |   2 +
 src/tir/ir/specialize.cc   |   1 +
 src/tir/transforms/tensorcore_infer_fragment.cc|  15 +-
 tests/python/unittest/test_roofline.py | 121 ++
 tests/python/unittest/test_runtime_profiling.py|  98 
 18 files changed, 809 insertions(+), 423 deletions(-)
 copy docker/{Dockerfile.ci_qemu => Dockerfile.ci_riscv} (75%)
 copy gallery/how_to/deploy_models/{deploy_model_on_rasp.py => deploy_model_on_nano.py} (84%)
 rename python/tvm/utils/{roofline.py => roofline/__init__.py} (51%)
 create mode 100644 python/tvm/utils/roofline/cuda.py
 create mode 100644 python/tvm/utils/roofline/registry.py
 create mode 100644 python/tvm/utils/roofline/x86.py
 create mode 100644 tests/python/unittest/test_roofline.py



[tvm] branch main updated: [ROOFLINE] Add CUDA support to roofline analysis (#12205)

2022-07-30 Thread andrewzhaoluo
This is an automated email from the ASF dual-hosted git repository.

andrewzhaoluo pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 961a7c70d7 [ROOFLINE] Add CUDA support to roofline analysis (#12205)
961a7c70d7 is described below

commit 961a7c70d75c81503c8c1d7c2e0db66bac4a1859
Author: Tristan Konolige 
AuthorDate: Sat Jul 30 16:35:25 2022 -0700

[ROOFLINE] Add CUDA support to roofline analysis (#12205)

* [ROOFLINE] Add CUDA support to roofline analysis

Add functions to estimate peak flops and bandwidth for CUDA. Add a new
registration mechanism to the roofline analysis to support adding any
target. This mechanism uses generic functions with overrides. New
targets only need to add `estimate_peak_bandwidth` and
`estimate_peak_flops` functions.

Also fix cuda codegen and tensorcore_infer_fragment.cc to support
filling matrix_a and matrix_b fragments.

* formatting

* move statement back inside loops

* print out report for debugging

* default to avx2

* review comments
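
The registration mechanism described above can be pictured with a small, purely illustrative sketch; the names below (`PEAK_FLOPS`, `register_peak_flops`, `estimate_peak_flops`) are hypothetical and do not mirror TVM's actual roofline API:

```python
from typing import Callable, Dict

PEAK_FLOPS: Dict[str, Callable[..., float]] = {}

def register_peak_flops(target_kind: str):
    """Register an estimate_peak_flops override for one target kind."""
    def _register(func: Callable[..., float]) -> Callable[..., float]:
        PEAK_FLOPS[target_kind] = func
        return func
    return _register

@register_peak_flops("cuda")
def _cuda_peak_flops(dev) -> float:
    # A real implementation would run a micro-benchmark on `dev`.
    return 1.0e13

def estimate_peak_flops(target_kind: str, dev) -> float:
    if target_kind not in PEAK_FLOPS:
        raise ValueError(f"no peak-flops estimator registered for {target_kind}")
    return PEAK_FLOPS[target_kind](dev)
```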
---
 python/tvm/utils/__init__.py   |   2 +-
 .../utils/{roofline.py => roofline/__init__.py}| 266 +++--
 python/tvm/utils/roofline/cuda.py  | 236 ++
 python/tvm/utils/roofline/registry.py  |  83 +++
 python/tvm/utils/roofline/x86.py   | 254 
 src/target/source/codegen_cuda.cc  |   2 +
 src/tir/ir/specialize.cc   |   1 +
 src/tir/transforms/tensorcore_infer_fragment.cc|  15 +-
 tests/python/unittest/test_roofline.py | 121 ++
 tests/python/unittest/test_runtime_profiling.py|  98 
 10 files changed, 736 insertions(+), 342 deletions(-)

diff --git a/python/tvm/utils/__init__.py b/python/tvm/utils/__init__.py
index 3c1703c244..33abc352b0 100644
--- a/python/tvm/utils/__init__.py
+++ b/python/tvm/utils/__init__.py
@@ -16,4 +16,4 @@
 # under the License.
 """Utilities operating at a graph/model or other "high" level"""
 
-from .roofline import estimate_peak_bandwidth, estimate_peak_fma_flops, roofline_analysis
+from .roofline import roofline_analysis
diff --git a/python/tvm/utils/roofline.py b/python/tvm/utils/roofline/__init__.py
similarity index 51%
rename from python/tvm/utils/roofline.py
rename to python/tvm/utils/roofline/__init__.py
index 7323149193..a54f5ed41d 100644
--- a/python/tvm/utils/roofline.py
+++ b/python/tvm/utils/roofline/__init__.py
@@ -18,15 +18,17 @@
 from typing import Dict, Union, Optional
 import numpy as np
 
-from .. import auto_scheduler, relay, tir, nd, IRModule, build, topi, transform, get_global_func
-from ..target import Target
-from ..runtime import profiler_vm, profiling, Device, num_threads
-from ..script import tir as T
-from ..ir.instrument import pass_instrument
-from ..ir.expr import GlobalVar
-from ..rpc.base import RPC_SESS_MASK
-from ..rpc.client import RPCSession
-from ..contrib import utils
+from ... import auto_scheduler, relay, tir, nd, IRModule, build, topi, transform, get_global_func
+from ...target import Target
+from ...runtime import profiler_vm, profiling, Device, num_threads
+from ...script import tir as T
+from ...ir.instrument import pass_instrument
+from ...ir.expr import GlobalVar
+from ...rpc.base import RPC_SESS_MASK
+from ...rpc.client import RPCSession
+from ...contrib import utils
+
+from . import registry, cuda, x86
 
 
 def _create_args(mod: IRModule, dev: Device, func_name: str = "main", remote=None):
@@ -47,231 +47,6 @@ def _create_args(mod: IRModule, dev: Device, func_name: str = "main", remote=Non
 return args
 
 
-def _detect_vec_width_registers(
-target: Target, vec_width: Optional[int], num_vector_registers: Optional[int]
-):
-"""Get the vector width and number of vector registers for a target.
-
-Parameters
---
-target : Target
-Target to detect vector width and registers for.
-vec_width : Optional[int]
-If None, try and detect vector width from target. Otherwise provided input is used.
-num_vector_registers : Optional[int]
-If None, try and number of vector registers from target. Otherwise provided input is used.
-
-Returns
----
-vec_width: int
-Width of a vector register on `target`.
-num_vector_registers: int
-Number of vector registers on `target`.
-"""
-if vec_width is None:
-# Only implemented for x86 so far...
-if (
-str(target.kind) == "llvm"
-and target.device_name == ""
-and len(target.keys) == 1
-and target.keys[0] == "cpu"
-):
-with target:
-vec_width = topi.x86.utils.get_simd_32bit_lanes()  # in number of float32s
-else:
-raise 

[GitHub] [tvm] AndrewZhaoLuo merged pull request #12205: [ROOFLINE] Add CUDA support to roofline analysis

2022-07-30 Thread GitBox


AndrewZhaoLuo merged PR #12205:
URL: https://github.com/apache/tvm/pull/12205





[GitHub] [tvm] junrushao1994 opened a new pull request, #12243: [MetaSchedule][Test] Add unittests for GMM

2022-07-30 Thread GitBox


junrushao1994 opened a new pull request, #12243:
URL: https://github.com/apache/tvm/pull/12243

   CC: @zxybazh @Hzfengsy 





[tvm] branch main updated: [UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)

2022-07-30 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new e756980b41 [UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)
e756980b41 is described below

commit e756980b41c339ad26ea50315eb29073504b3796
Author: Jiawei Liu 
AuthorDate: Sat Jul 30 16:33:45 2022 -0500

[UX][TVMSciprt] Use HTML formatter in notebook environments (#12240)

Previously we used ANSI color sequences to highlight TVM script. In Jupyter notebook environments, such color sequences are recognized and translated into the corresponding HTML for display.

This works fine for most notebook environments (including Jupyter Notebook and the VS Code plugin). Recently, thanks to @tqchen, we found that Google Colab does not properly support ANSI color sequences for 24-bit colors (`JupyterLight` and `VSCDark`), so all of its displayed colors are unexpectedly black/gray/white. To also bring highlighting to Colab, this PR directly renders the highlighted code with HTML when a notebook environment is detected.
---
 python/tvm/script/highlight.py | 24 +---
 1 file changed, 17 insertions(+), 7 deletions(-)

diff --git a/python/tvm/script/highlight.py b/python/tvm/script/highlight.py
index 03476ba60c..5a9c69a0ff 100644
--- a/python/tvm/script/highlight.py
+++ b/python/tvm/script/highlight.py
@@ -50,7 +50,7 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 import pygments
 from pygments import highlight
 from pygments.lexers.python import Python3Lexer
-from pygments.formatters import Terminal256Formatter
+from pygments.formatters import Terminal256Formatter, HtmlFormatter
 from pygments.style import Style
from pygments.token import Keyword, Name, Comment, String, Number, Operator
 from packaging import version
@@ -72,8 +72,9 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 else:
 
 class JupyterLight(Style):
-"""A Jupyter-Notebook-like Pygments style configuration (aka. "dark")"""
+"""A Jupyter-Notebook-like Pygments style configuration (aka. "light")"""
 
+background_color = ""
 styles = {
 Keyword: "bold #008000",
 Keyword.Type: "nobold #008000",
@@ -90,6 +91,7 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 class VSCDark(Style):
 """A VSCode-Dark-like Pygments style configuration (aka. "dark")"""
 
+background_color = ""
 styles = {
 Keyword: "bold #c586c0",
 Keyword.Type: "#82aaff",
@@ -107,6 +109,7 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 class AnsiTerminalDefault(Style):
 """The default style for terminal display with ANSI colors (aka. "ansi")"""
 
+background_color = ""
 styles = {
 Keyword: "bold ansigreen",
 Keyword.Type: "nobold ansigreen",
@@ -120,12 +123,11 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 Comment: "italic ansibrightblack",
 }
 
+is_in_notebook = "ipykernel" in sys.modules  # in notebook env (support html display).
+
 if style is None:
 # choose style automatically according to the environment:
-if "ipykernel" in sys.modules:  # in notebook env.
-style = JupyterLight
-else:  # in a terminal or something.
-style = AnsiTerminalDefault
+style = JupyterLight if is_in_notebook else AnsiTerminalDefault
 elif style == "light":
 style = JupyterLight
 elif style == "dark":
@@ -133,4 +135,12 @@ def cprint(printable: Union[IRModule, PrimFunc], style: Optional[str] = None) ->
 elif style == "ansi":
 style = AnsiTerminalDefault
 
-print(highlight(printable.script(), Python3Lexer(), Terminal256Formatter(style=style)))
+if is_in_notebook:  # print with HTML display
+from IPython.display import display, HTML  # pylint: disable=import-outside-toplevel
+
+formatter = HtmlFormatter(style=JupyterLight)
+formatter.noclasses = True  # inline styles
+html = highlight(printable.script(), Python3Lexer(), formatter)
+display(HTML(html))
+else:
+print(highlight(printable.script(), Python3Lexer(), Terminal256Formatter(style=style)))



[GitHub] [tvm] junrushao1994 merged pull request #12240: [UX][TVMSciprt] Use HTML formatter in notebook environments

2022-07-30 Thread GitBox


junrushao1994 merged PR #12240:
URL: https://github.com/apache/tvm/pull/12240





[tvm] branch last-successful updated (12dcfd70ef -> c0a3da84bc)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 12dcfd70ef [AutoSchedule] Fix misusage of an already-moved object (#12239)
 add c0a3da84bc [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/vm/vm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[tvm] branch last-successful updated (2b3e1eb3f5 -> 12dcfd70ef)

2022-07-30 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch last-successful
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 2b3e1eb3f5 [ci] Reinstall junintparser after zephyr deps (#12226)
 add db4380cf41 [ci][docker] create Dockerfile.ci_riscv (#12230)
 add dff5c975a0 Deploy the Pretrained Model on Jetson Nano  (#11037)
 add fb87c21bf8 remove duplicated cast op when lowering qnn.requantize op in float mode (#12234)
 add 12dcfd70ef [AutoSchedule] Fix misusage of an already-moved object (#12239)

No new revisions were added by this update.

Summary of changes:
 docker/{Dockerfile.ci_qemu => Dockerfile.ci_riscv} | 35 ++
 ...oy_model_on_rasp.py => deploy_model_on_nano.py} | 54 +-
 src/auto_scheduler/search_policy/sketch_policy.cc  |  2 +-
 src/relay/qnn/op/requantize.cc |  5 +-
 4 files changed, 39 insertions(+), 57 deletions(-)
 copy docker/{Dockerfile.ci_qemu => Dockerfile.ci_riscv} (75%)
 copy gallery/how_to/deploy_models/{deploy_model_on_rasp.py => deploy_model_on_nano.py} (84%)



[tvm] branch main updated: [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)

2022-07-30 Thread kparzysz
This is an automated email from the ASF dual-hosted git repository.

kparzysz pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new c0a3da84bc [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)
c0a3da84bc is described below

commit c0a3da84bcc801e21d8e4dfc68a68665977d8912
Author: Twice 
AuthorDate: Sun Jul 31 01:47:58 2022 +0800

[Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction (#12241)
---
 include/tvm/runtime/vm/vm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/tvm/runtime/vm/vm.h b/include/tvm/runtime/vm/vm.h
index e58fe5eeb3..f58df7d5af 100644
--- a/include/tvm/runtime/vm/vm.h
+++ b/include/tvm/runtime/vm/vm.h
@@ -94,7 +94,7 @@ struct VMFunction {
 instructions(std::move(instructions)),
 register_file_size(register_file_size),
 param_device_indexes(std::move(param_device_indexes)) {
-ICHECK_EQ(params.size(), param_device_indexes.size());
+ICHECK_EQ(this->params.size(), this->param_device_indexes.size());
   }
 
   VMFunction() = default;



[GitHub] [tvm] kparzysz-quic merged pull request #12241: [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction

2022-07-30 Thread GitBox


kparzysz-quic merged PR #12241:
URL: https://github.com/apache/tvm/pull/12241





[tvm] branch main updated: [AutoSchedule] Fix misusage of an already-moved object (#12239)

2022-07-30 Thread kparzysz
This is an automated email from the ASF dual-hosted git repository.

kparzysz pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 12dcfd70ef [AutoSchedule] Fix misusage of an already-moved object (#12239)
12dcfd70ef is described below

commit 12dcfd70ef365a9d5cdccdcc516bf818367e561a
Author: Twice 
AuthorDate: Sun Jul 31 01:32:54 2022 +0800

[AutoSchedule] Fix misusage of an already-moved object (#12239)
---
 src/auto_scheduler/search_policy/sketch_policy.cc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/auto_scheduler/search_policy/sketch_policy.cc b/src/auto_scheduler/search_policy/sketch_policy.cc
index 4a4ab18b5e..8b0faed5b5 100644
--- a/src/auto_scheduler/search_policy/sketch_policy.cc
+++ b/src/auto_scheduler/search_policy/sketch_policy.cc
@@ -150,7 +150,7 @@ SketchPolicy::SketchPolicy(SearchTask task, CostModel program_cost_model,
 node->mutation_rules.push_back(std::make_shared(0.90));
 node->mutation_rules.push_back(std::make_shared(0.10));
   } else {
-LOG(FATAL) << "No default sketch rules for target: " << task->target;
+LOG(FATAL) << "No default sketch rules for target: " << node->search_task->target;
   }
 
   data_ = std::move(node);



[GitHub] [tvm] kparzysz-quic merged pull request #12239: [AutoSchedule] Fix misusage of an already-moved object

2022-07-30 Thread GitBox


kparzysz-quic merged PR #12239:
URL: https://github.com/apache/tvm/pull/12239





[GitHub] [tvm] guberti commented on a diff in pull request #12207: [microTVM] Refactor pytest fixtures

2022-07-30 Thread GitBox


guberti commented on code in PR #12207:
URL: https://github.com/apache/tvm/pull/12207#discussion_r933801203


##
tests/micro/arduino/conftest.py:
##
@@ -15,35 +15,19 @@
 # specific language governing permissions and limitations
 # under the License.
 
-import pytest
+pytest_plugins = [
+"tvm.micro.testing.pytest_plugin",
+]
 
-from test_utils import ARDUINO_BOARDS
+import pytest
 
 
 def pytest_addoption(parser):
-parser.addoption(
-"--arduino-board",
-nargs="+",
-required=True,
-choices=ARDUINO_BOARDS.keys(),
-help="Arduino board for tests.",
-)
 parser.addoption(
 "--arduino-cli-cmd",

Review Comment:
   yea you’re right, build tool is just too abstract.



##
python/tvm/micro/testing/pytest_plugin.py:
##
@@ -0,0 +1,108 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# pylint: disable=invalid-name,redefined-outer-name
+""" microTVM testing fixtures used to deduce testing argument
+values from testing parameters """
+
+import pathlib
+import os
+import datetime
+import pytest
+
+from tvm.contrib.utils import tempdir
+
+from .utils import get_supported_boards
+
+
+def pytest_addoption(parser):
+"""Adds more pytest arguments"""
+parser.addoption(
+"--board",
+required=True,
+choices=list(get_supported_boards("zephyr").keys())
++ list(get_supported_boards("arduino").keys()),
+help=(
+"microTVM boards for tests. Board refers to instances"
+"of microcontrollers/emulators defined in a platform."
+),
+)
+parser.addoption(
+"--test-build-only",
+action="store_true",
+help="Only run tests that don't require physical hardware.",
+)
+parser.addoption(
+"--tvm-debug",
+action="store_true",
+default=False,
+help="If set true, it will keep the project directory for debugging.",
+)
+
+
+@pytest.fixture(scope="session")
+def board(request):
+return request.config.getoption("--board")
+
+
+@pytest.fixture(scope="session")
+def tvm_debug(request):

Review Comment:
   nit: I don't love the name `tvm_debug` if all this flag does is keep the 
project directory - IMO `--keep-project-dir` or `--preserve-project` makes more 
sense. If it does things besides this, we should document them in the `help` 
string.



##
tests/micro/zephyr/test_zephyr.py:
##
@@ -89,7 +89,7 @@ def _make_add_sess(temp_dir, model, zephyr_board, west_cmd, build_config, dtype=
 # The same test code can be executed on both the QEMU simulation and on real hardware.
 @tvm.testing.requires_micro
 @pytest.mark.skip_boards(["mps2_an521"])
-def test_add_uint(temp_dir, board, west_cmd, tvm_debug):
+def test_add_uint(workspace_dir, board, west_cmd, tvm_debug):

Review Comment:
   Huh, I’d forgotten that workspace_dir is kept when debug is passed. That 
sounds super useful - good call with this!



##
python/tvm/micro/testing/evaluation.py:
##
@@ -153,4 +163,4 @@ def evaluate_model_accuracy(session, aot_executor, input_data, true_labels, runs
 num_correct = sum(u == v for u, v in zip(true_labels, predicted_labels))
 average_time = sum(aot_runtimes) / len(aot_runtimes)
 accuracy = num_correct / len(predicted_labels)
-return average_time, accuracy
+return average_time, accuracy, predicted_labels

Review Comment:
   Ah, I'd forgotten we need the labels for some hardware in the loop tests. 
That seems fine - for anomaly detection and other AOC metrics using the same 
"template", we could use confidence values (or even just `None`) in place of 
`predicted_labels`. This LGTM now.






[GitHub] [tvm] yoyo-nb opened a new pull request, #12242: [CPP-RPC] Fix GetPath to use relative file path

2022-07-30 Thread GitBox


yoyo-nb opened a new pull request, #12242:
URL: https://github.com/apache/tvm/pull/12242

   The GetPath function can only use filenames as input, not relative paths.
   
   Assuming the work path is `/data/local/tmp`, when I use `GetPath("dataset/img.jpg")` the expected result is `/data/local/tmp/dataset/img.jpg` instead of `dataset/img.jpg`.
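
   A hedged illustration of the expected behavior, written in Python rather than the C++ RPC code; `work_dir` and `get_path` below are stand-ins, not the server's actual implementation:

```python
from pathlib import Path

work_dir = Path("/data/local/tmp")

def get_path(file_name: str) -> str:
    # A relative path should resolve under the server's work directory,
    # e.g. "dataset/img.jpg" -> "/data/local/tmp/dataset/img.jpg".
    return str(work_dir / file_name)

assert get_path("dataset/img.jpg") == "/data/local/tmp/dataset/img.jpg"
```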





[GitHub] [tvm] juda commented on a diff in pull request #12232: libstdc++ CXX11 ABI Compatibility & boolean tensor support

2022-07-30 Thread GitBox


juda commented on code in PR #12232:
URL: https://github.com/apache/tvm/pull/12232#discussion_r933779749


##
cmake/modules/contrib/PT_TVMDSOOP.cmake:
##
@@ -21,38 +21,55 @@ if(NOT USE_PT_TVMDSOOP STREQUAL "OFF")
  execute_process(COMMAND ${PYTHON_EXECUTABLE} -c "import torch; print(torch.__path__[0].strip())"
 OUTPUT_VARIABLE PT_PATH
 RESULT_VARIABLE PT_STATUS)
-  if (NOT ${PT_STATUS} EQUAL 0)
+
+  if(NOT ${PT_STATUS} EQUAL 0)
 message(FATAL_ERROR "Fail to get pytorch path")
   endif()
 
   string(REGEX REPLACE "\n" "" PT_PATH "${PT_PATH}")
   message(STATUS "PyTorch path: ${PT_PATH}")
 
-  set(PT_COMPILE_FLAGS_STR "-I${PT_PATH}/include -D_GLIBCXX_USE_CXX11_ABI=0")
+  execute_process(COMMAND ${PYTHON_EXECUTABLE} -c "import torch;print(torch.compiled_with_cxx11_abi())"
+OUTPUT_VARIABLE PT_CXX_FLAG
+RESULT_VARIABLE PT_STATUS)
+
+  string(REGEX REPLACE "\n" "" PT_CXX_FLAG "${PT_CXX_FLAG}")
+  message(STATUS "Found TORCH_BUILT_WITH_CXX_ABI=${PT_CXX_FLAG} ")
+
+  if(${PT_CXX_FLAG} STREQUAL "False")
+set(CXX_ABI_ENABLED 0)
+  else()
+set(CXX_ABI_ENABLED 1)
+  endif()
+
+  set_property(
+SOURCE
+${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/tvm_module_wrapper/RuntimeModuleWrapperTorch.cc
+APPEND PROPERTY
+COMPILE_OPTIONS
+"-D_GLIBCXX_USE_CXX11_ABI=${CXX_ABI_ENABLED}"
+"-I${PT_PATH}/include"
+  )
  set(PT_LINK_FLAGS_STR "-L${PT_PATH}/lib -l:libtorch.so -l:libtorch_python.so")
 
   if(NOT USE_CUDA STREQUAL "OFF")
 add_definitions(-DPT_TVMDSOOP_ENABLE_GPU)
   endif()
 
-
  string(REGEX REPLACE "\n" " " PT_FLAGS "${PT_COMPILE_FLAGS} ${PT_LINK_FLAGS}")
-  separate_arguments(PT_COMPILE_FLAGS UNIX_COMMAND ${PT_COMPILE_FLAGS_STR})
+  separate_arguments(PT_COMPILE_FLAGS UNIX_COMMAND)
   separate_arguments(PT_LINK_FLAGS UNIX_COMMAND ${PT_LINK_FLAGS_STR})
 
-
   set(LIBRARY_NAME pt_tvmdsoop)
-  tvm_file_glob(GLOB_RECURSE PTTVM_SRCS ${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/**/*.cc)
+  tvm_file_glob(GLOB_RECURSE PTTVM_SRCS ${CMAKE_CURRENT_SOURCE_DIR}/src/contrib/torch/tvm_module_wrapper/*.cc)

Review Comment:
   Done



##
python/tvm/contrib/torch/pytorch_tvm.py:
##
@@ -183,6 +184,11 @@ def load_tvm(self, export_dir):
 
 def build_pytorch_module(self, num_inputs, num_outputs, input_infos=None):
 """Build pytorch module containing TVM Graph Module"""
+warnings.warn(
+"We suggest users to use `optimized_torch` for tuning Torch modules instead",

Review Comment:
   Done






[GitHub] [tvm] junrushao1994 commented on pull request #12144: [Auto Scheduler] Upgrade autoscheduler xgboost callback

2022-07-30 Thread GitBox


junrushao1994 commented on PR #12144:
URL: https://github.com/apache/tvm/pull/12144#issuecomment-1200121587

   I don't think a unit test is viable specifically for xgboost version compatibility because (IIUC) our CI instance can only install one version of xgboost...
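
   For context, version-dependent callback selection is usually handled at import time. A hedged sketch (not TVM's code; the 1.6.0 cutoff is an assumption for illustration):

```python
import xgboost as xgb
from packaging import version

if version.parse(xgb.__version__) >= version.parse("1.6.0"):
    class _Callback(xgb.callback.TrainingCallback):
        """Newer API: subclass TrainingCallback."""
        def after_iteration(self, model, epoch, evals_log):
            return False  # returning True would stop training early

    custom_callback = _Callback()
else:
    def custom_callback(env):
        """Older API: a plain function receiving a CallbackEnv namedtuple."""
        # env.iteration and env.evaluation_result_list are available here.
        pass
```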





[GitHub] [tvm] Sunny-Island commented on pull request #12144: [Auto Scheduler] Upgrade autoscheduler xgboost callback

2022-07-30 Thread GitBox


Sunny-Island commented on PR #12144:
URL: https://github.com/apache/tvm/pull/12144#issuecomment-1200118875

   > The review has been stale for 4 days without any update. To unblock the 
progress, I would suggest that someone (maybe @Hzfengsy) to validate it locally 
and then get it merged at our earliest convenience
   
   I am still waiting for https://github.com/apache/tvm/pull/12141 to be reviewed so that I can follow its design (it seems that PR is done, so I will finish the unit test soon). But it's fine for this PR to be merged now.





[GitHub] [tvm] junrushao1994 commented on pull request #12144: [Auto Scheduler] Upgrade autoscheduler xgboost callback

2022-07-30 Thread GitBox


junrushao1994 commented on PR #12144:
URL: https://github.com/apache/tvm/pull/12144#issuecomment-1200118357

   The review has been stale for 9 days without any update. To unblock progress, I would suggest that someone (maybe @Hzfengsy) validate it locally and then get it merged at our earliest convenience.





[tvm] branch main updated: remove duplicated cast op when lowering qnn.requantize op in float mode (#12234)

2022-07-30 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new fb87c21bf8 remove duplicated cast op when lowering qnn.requantize op in float mode (#12234)
fb87c21bf8 is described below

commit fb87c21bf8d0fa5edec96a054a57a6d37c11289f
Author: paperplanet 
AuthorDate: Sat Jul 30 16:28:39 2022 +0800

remove duplicated cast op when lowering qnn.requantize op in float mode (#12234)
---
 src/relay/qnn/op/requantize.cc | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/src/relay/qnn/op/requantize.cc b/src/relay/qnn/op/requantize.cc
index 2a6153e810..5bf53a95ed 100644
--- a/src/relay/qnn/op/requantize.cc
+++ b/src/relay/qnn/op/requantize.cc
@@ -303,10 +303,7 @@ Expr RequantizeLowerFP(const Expr& input_tensor, const Expr& input_scale,
   -1,
   }),
   rank, {axis});
-tensor = Subtract(Cast(tensor, DataType::Float(Bits)),
-  Cast(input_zero_broadcast, DataType::Float(Bits)));
-  } else {
-tensor = Cast(tensor, DataType::Float(Bits));
+tensor = Subtract(tensor, Cast(input_zero_broadcast, DataType::Float(Bits)));
   }
 
  // 2) If the input and output scales are same, we can skip the multiplication. Check



[GitHub] [tvm] masahi merged pull request #12234: [QNN]remove duplicated cast op when lowering qnn.requantize op in float mode

2022-07-30 Thread GitBox


masahi merged PR #12234:
URL: https://github.com/apache/tvm/pull/12234





[GitHub] [tvm] PragmaTwice opened a new pull request, #12241: [Relay][VM] Fix an ICHECK which never fails in ctor of VMFunction

2022-07-30 Thread GitBox


PragmaTwice opened a new pull request, #12241:
URL: https://github.com/apache/tvm/pull/12241

   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   
   
https://github.com/apache/tvm/blob/dff5c975a082e6f15b556914a029541b63ff1280/include/tvm/runtime/vm/vm.h#L97
   
   Since the referenced parameters `params` and `param_device_indexes` are moved in the member initializer list,
   the assertion `ICHECK_EQ(params.size(), param_device_indexes.size())` is equivalent to `ICHECK_EQ(0, 0)`, which never fails,
   and that makes the `ICHECK` meaningless.
   
   PTAL @ganler ❤️ 





[GitHub] [tvm] ganler opened a new pull request, #12240: [UX][TVMSciprt] Use HTML formatter in notebook environments

2022-07-30 Thread GitBox


ganler opened a new pull request, #12240:
URL: https://github.com/apache/tvm/pull/12240

   Previously we used ANSI color sequences to highlight TVM script. In Jupyter notebook environments, such color sequences are recognized and translated into the corresponding HTML for display.
   
   This works fine for most notebook environments (including Jupyter Notebook and the VS Code plugin). Recently, thanks to @tqchen, we found that Colab does not properly support ANSI color sequences for 24-bit colors (`JupyterLight` and `VSCDark`). To also bring highlighting to Colab, this PR directly renders the highlighted code with HTML when a notebook environment is detected.
   
   cc: @tqchen @Hzfengsy   
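
   A minimal, self-contained sketch of the approach (assuming `pygments` and, in a notebook, `IPython` are installed; this is not the PR's code):

```python
import sys

from pygments import highlight
from pygments.formatters import HtmlFormatter, Terminal256Formatter
from pygments.lexers.python import Python3Lexer

source = "def add(a, b):\n    return a + b\n"

if "ipykernel" in sys.modules:  # notebook-like environment: render HTML directly
    from IPython.display import HTML, display

    formatter = HtmlFormatter()
    formatter.noclasses = True  # inline styles so no external CSS is needed
    display(HTML(highlight(source, Python3Lexer(), formatter)))
else:  # terminal: fall back to ANSI colors
    print(highlight(source, Python3Lexer(), Terminal256Formatter()))
```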





[tvm] branch main updated (db4380cf41 -> dff5c975a0)

2022-07-30 Thread syfeng
This is an automated email from the ASF dual-hosted git repository.

syfeng pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


from db4380cf41 [ci][docker] create Dockerfile.ci_riscv (#12230)
 add dff5c975a0 Deploy the Pretrained Model on Jetson Nano  (#11037)

No new revisions were added by this update.

Summary of changes:
 ...oy_model_on_rasp.py => deploy_model_on_nano.py} | 54 +-
 1 file changed, 33 insertions(+), 21 deletions(-)
 copy gallery/how_to/deploy_models/{deploy_model_on_rasp.py => deploy_model_on_nano.py} (84%)



[GitHub] [tvm] Hzfengsy merged pull request #11037: Deploy the Pretrained Model on Jetson Nano

2022-07-30 Thread GitBox


Hzfengsy merged PR #11037:
URL: https://github.com/apache/tvm/pull/11037





[GitHub] [tvm] ganler commented on pull request #12239: [AutoSchedule] Fix misusage of an already-moved object

2022-07-30 Thread GitBox


ganler commented on PR #12239:
URL: https://github.com/apache/tvm/pull/12239#issuecomment-1200107272

   Thanks for pointing it out. I see. `task` is moved in
   
   https://github.com/apache/tvm/blob/43b15a8cafdea9378deb5ab879e1c7ac7e5f3336/src/auto_scheduler/search_policy/sketch_policy.cc#L76
   
   so we should not use it after the move.
   
   fwd to: @merrymercy @jcf94 





[GitHub] [tvm] BBuf commented on pull request #11037: Deploy the Pretrained Model on Jetson Nano

2022-07-30 Thread GitBox


BBuf commented on PR #11037:
URL: https://github.com/apache/tvm/pull/11037#issuecomment-1200106689

   This PR has passed CI; please merge it. @Hzfengsy





[GitHub] [tvm] PragmaTwice opened a new pull request, #12239: [AutoSchedule] Fix misusage of an already-moved object

2022-07-30 Thread GitBox


PragmaTwice opened a new pull request, #12239:
URL: https://github.com/apache/tvm/pull/12239

   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   

