[GitHub] [tvm] junrushao1994 commented on pull request #8267: [Bugfix, CuDNN] fix segfault when cudnnDestroy called with destroyed cuda context

2021-06-28 Thread GitBox


junrushao1994 commented on pull request #8267:
URL: https://github.com/apache/tvm/pull/8267#issuecomment-870277530


   Cc: @comaniac 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] melsonlai commented on a change in pull request #8076: [BYOC][NNAPI]: Implement basic structure of Android NNAPI BYOC

2021-06-28 Thread GitBox


melsonlai commented on a change in pull request #8076:
URL: https://github.com/apache/tvm/pull/8076#discussion_r660303669



##
File path: 
python/tvm/contrib/target/android_nnapi/relayir_to_nnapi_converter/_export_object/helper.py
##
@@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Namespace for helper objects/methods that's not part of the JSON
+content. This includes the symbol table, checking methods, ...
+"""
+from .operand import Operand as _Operand
+
+
+class Helper:

Review comment:
   I would make it "JSONAnalyser" then. :)








[GitHub] [tvm] zxybazh commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


zxybazh commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870238391


   OK, in that case, is it possible for you to explicitly clear the host field of 
the given target object and then construct it this way? `fork` seems to 
be a special case; in general, the host should always be consistent.






[GitHub] [tvm] hgt312 commented on a change in pull request #8266: [Bugfix] [tir] do not simplify 'Any() - Any()' to 0

2021-06-28 Thread GitBox


hgt312 commented on a change in pull request #8266:
URL: https://github.com/apache/tvm/pull/8266#discussion_r660268841



##
File path: tests/python/unittest/test_arith_rewrite_simplify.py
##
@@ -275,6 +275,7 @@ def test_add_index_simplify():
 def test_sub_index_simplify():
 ck = RewriteChecker()
 x, y, z = te.var("x"), te.var("y"), te.var("z")
+a, b, c = tvm.tir.Any(), tvm.tir.Any(), tvm.tir.Any()

Review comment:
   Nice catch!

##
File path: src/tir/analysis/deep_equal.cc
##
@@ -59,6 +59,9 @@ bool ExprDeepEqual::operator()(const PrimExpr& lhs, const PrimExpr& rhs) const {
 auto* prhs = rhs.as<IntImmNode>();
 return plhs->dtype == prhs->dtype && plhs->value == prhs->value;
   }
+  if (lhs.as<AnyNode>()) {
+return lhs.same_as(rhs);

Review comment:
   Nice catch!
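The reason two distinct `Any()` nodes must not compare deep-equal can be illustrated with a minimal Python sketch (a stand-in for the C++ `ExprDeepEqual`, not TVM's actual implementation):

```python
class AnyNode:
    """Stand-in for tvm.tir.Any: a dimension whose value is unknown."""


def deep_equal(lhs, rhs):
    # Two distinct Any nodes may stand for different unknown values, so
    # fall back to reference identity (TVM's `same_as`) instead of
    # structural equality for them.
    if isinstance(lhs, AnyNode):
        return lhs is rhs
    return lhs == rhs


a, b = AnyNode(), AnyNode()
```

With this check, `a - a` may still simplify to 0 (same node), while `a - b` must not.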








[GitHub] [tvm] comaniac commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660255574



##
File path: python/tvm/topi/nn/dense.py
##
@@ -51,37 +65,120 @@ def dense(data, weight, bias=None, out_dtype=None, auto_scheduler_rewritten_layout
 assert len(bias.shape) == 1
 if out_dtype is None:
 out_dtype = data.dtype
-batch, in_dim = data.shape
+if data_transposed:
+in_dim, batch = data.shape
+else:
+batch, in_dim = data.shape
 
 if auto_scheduler_rewritten_layout:
 # Infer shape for the rewritten layout
 out_dim, red_dim = auto_scheduler.get_shape_from_rewritten_layout(
-auto_scheduler_rewritten_layout, ["j", "k"]
+auto_scheduler_rewritten_layout, ["j", "k"] if weight_transposed else ["k", "j"]
 )
 auto_scheduler.remove_index_check(weight)
-else:
+elif weight_transposed:
 out_dim, red_dim = weight.shape
+else:
+red_dim, out_dim = weight.shape
 assert in_dim == red_dim
 
 k = te.reduce_axis((0, in_dim), name="k")
-matmul = te.compute(
+if data_transposed:
+if weight_transposed:
+compute_lambda = lambda i, j: te.sum(
+data[k, i].astype(out_dtype) * weight[j, k].astype(out_dtype), axis=k
+)
+compute_name = "T_matmul_TT"
+else:
+compute_lambda = lambda i, j: te.sum(
+data[k, i].astype(out_dtype) * weight[k, j].astype(out_dtype), axis=k
+)
+compute_name = "T_matmul_TN"
+compute_tag = "matmul"
+else:
+if weight_transposed:
+compute_lambda = lambda i, j: te.sum(
+data[i, k].astype(out_dtype) * weight[j, k].astype(out_dtype), axis=k
+)
+compute_name = "T_dense"

Review comment:
   I personally vote for B.








[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660253600



##
File path: python/tvm/relay/op/strategy/x86.py
##
@@ -370,6 +370,55 @@ def conv1d_strategy_cpu(attrs, inputs, out_type, target):
 return strategy
 
 
+@matmul_strategy.register("cpu")
+def matmul_strategy_cpu(attrs, inputs, out_type, target):
+"""matmul x86 strategy"""
+strategy = _op.OpStrategy()
+if is_auto_scheduler_enabled():
+strategy.add_implementation(
+wrap_compute_matmul(topi.nn.matmul, need_auto_scheduler_layout=True),
+naive_schedule,
+name="matmul.generic",
+plevel=11,
+)
+else:
+logger.warning("Matmul other than NT format is not optimized for x86.")
+strategy.add_implementation(
+wrap_compute_matmul(topi.nn.matmul),
+naive_schedule,
+name="matmul.generic",
+)
+
+same_type = inputs[0].dtype == inputs[1].dtype == out_type.dtype
+dtype = inputs[0].dtype
+u8s8s32 = dtype == "uint8" and inputs[1].dtype == "int8" and out_type.dtype == "int32"
+if "cblas" in target.libs:

Review comment:
   I agree, but it seems there is no API for `SpecializedCondition` to 
handle the False path?
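One workaround, sketched below in plain Python (hypothetical names, not TVM's actual strategy API), is to register the generic implementation unconditionally and let the specialized one win through a higher priority level (`plevel`), which avoids needing an else-branch on the condition at all:

```python
implementations = []


def add_implementation(name, plevel, condition=lambda: True):
    """Register an implementation only when its condition holds."""
    if condition():
        implementations.append((plevel, name))


# The generic schedule is always available as the fallback.
add_implementation("matmul.generic", plevel=10)

# The cblas schedule is registered only under its specialized condition,
# but with a higher plevel so it is preferred whenever it is present.
use_cblas = True  # stands in for `"cblas" in target.libs`
add_implementation("matmul.cblas", plevel=13, condition=lambda: use_cblas)

# Pick the registered implementation with the highest priority level.
best = max(implementations)[1]
```

When the condition is false, only the generic entry exists, so selection naturally falls back without an explicit False path.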








[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660250782



##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -1160,21 +1186,46 @@ def batch_flatten_shape_func(attrs, inputs, _):
 
 
 @script
-def _dense_shape_func(data_shape, weight_shape):
+def _matmul_shape_func(data_shape, weight_shape, data_transposed, weight_transposed):

Review comment:
   Updated all `data_transposed` & `weight_transposed` to `transpose_a` & 
`transpose_b`, and also renamed all the `data` & `weight` in matmul to 
`tensor_a` & `tensor_b`. Tensor names in dense remain unchanged.
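The shape logic behind the renamed flags can be sketched in plain Python (illustrative only, not the hybrid-script shape function itself):

```python
def matmul_shape(a_shape, b_shape, transpose_a=False, transpose_b=False):
    """Infer the output shape of matmul(tensor_a, tensor_b) under the
    transpose flags; raises if the reduction dimensions disagree."""
    m = a_shape[1] if transpose_a else a_shape[0]
    k_a = a_shape[0] if transpose_a else a_shape[1]
    k_b = b_shape[1] if transpose_b else b_shape[0]
    n = b_shape[0] if transpose_b else b_shape[1]
    assert k_a == k_b, "reduction dimensions must match"
    return (m, n)
```

The dense (NT) layout corresponds to `transpose_b=True`, where `tensor_b` is stored as (out_dim, red_dim).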








[GitHub] [tvm] ganler commented on pull request #8267: [Bugfix, CuDNN] fix segfault when cudnnDestroy called with destroyed cuda context

2021-06-28 Thread GitBox


ganler commented on pull request #8267:
URL: https://github.com/apache/tvm/pull/8267#issuecomment-870197052


   Sorry for the late response. I have applied the 'let-it-leak' mechanism. 
@junrushao1994 
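The 'let-it-leak' approach can be sketched in plain Python (a mock handle, not the actual cuDNN/CUDA API): the handle is created once and intentionally never destroyed, because at process exit the CUDA context may already be torn down and calling `cudnnDestroy` then would segfault; the OS reclaims the memory anyway.

```python
class MockCudnnHandle:
    """Mock of a cuDNN handle whose destruction is unsafe once the
    CUDA context has been torn down (names are illustrative)."""

    destroy_calls = 0

    def destroy(self):
        # In the real scenario this is the call that may segfault at exit.
        MockCudnnHandle.destroy_calls += 1


_handle = None


def get_handle():
    # Create the process-lifetime handle lazily and never destroy it:
    # leaking it sidesteps the destructor-ordering problem at shutdown.
    global _handle
    if _handle is None:
        _handle = MockCudnnHandle()
    return _handle


h1 = get_handle()
h2 = get_handle()
```

The trade-off is a one-time leak of a process-lifetime resource in exchange for deterministic, crash-free shutdown.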






[GitHub] [tvm] chiwwang edited a comment on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang edited a comment on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-870176117


   @areusch Oops, sorry, I forgot to change the tag name. Fixed and checked with 
task_sphinx_precheck.sh. Hope it works this time.
   
   @tkonolige Thank you for such a detailed review!
   And sorry for my careless grammar mistakes :( 






[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660231706



##
File path: tutorials/dev/use_pass_instrument.py
##
@@ -0,0 +1,378 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=line-too-long
+"""
+.. _tutorial-use-pass-instrument:
+
+How to Use TVM Pass Instrument
+==============================
+**Author**: `Chi-Wei Wang `_
+
+As more and more passes are implemented, it becomes useful to instrument
+pass execution, analyze per-pass effects, and observe various events.
+The pass infrastructure provides an instrument mechanism. One can pass a list of
+instrument instances to :py:class:`tvm.transform.PassContext`.
+A decorator, :py:func:`tvm.instrument.pass_instrument`, is also provided
+to make it easy to implement instrument classes.
+
+This tutorial demonstrates how developers can use ``PassContext`` to instrument
+passes. Please also refer to the :ref:`pass-infra`.
+"""
+import tvm
+import tvm.relay as relay
+from tvm.relay.testing import resnet
+from tvm.contrib.download import download_testdata
+from tvm.relay.build_module import bind_params_by_name
+from tvm.ir.instrument import (
+PassTimingInstrument,
+pass_instrument,
+)
+
+
+###
+# Create An Example Relay Program
+# ---
+# We use the pre-defined resnet-18 network in Relay.
+batch_size = 1
+num_of_image_class = 1000
+image_shape = (3, 224, 224)
+output_shape = (batch_size, num_of_image_class)
+relay_mod, relay_params = resnet.get_workload(num_layers=18, batch_size=1, 
image_shape=image_shape)
+print(relay_mod.astext(show_meta_data=False))
+
+
+###
+# Create PassContext With Instruments
+# ---
+# It is as simple as passing an ``instruments`` argument to the ``PassContext`` constructor.
+# A built-in ``PassTimingInstrument`` is used to profile the execution time of
+# each pass.
+timing_inst = PassTimingInstrument()
+with tvm.transform.PassContext(instruments=[timing_inst]):
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+# before exiting the context, get profile results.
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Use Current PassContext With Instruments
+# 
+# One can also use the current ``PassContext`` and register
+# ``PassInstrument`` instances via the ``override_instruments`` method.
+# Note that ``override_instruments`` executes the ``exit_pass_ctx`` method
+# if any instrument already exists. Then it switches to the new instruments
+# and calls the ``enter_pass_ctx`` method of the new instruments.
+# Refer to the following sections and :py:func:`tvm.instrument.pass_instrument` for these methods.
+cur_pass_ctx = tvm.transform.PassContext.current()
+cur_pass_ctx.override_instruments([timing_inst])
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Register an empty list to clear the instruments.
+#
+# Note that ``exit_pass_ctx`` of ``PassTimingInstrument`` is called.
+# Profiles are cleared so nothing is printed.
+cur_pass_ctx.override_instruments([])
+# Uncomment the call to .render() to see a warning like:
+# Warning: no passes have been profiled, did you enable pass profiling?
+# profiles = timing_inst.render()
+
+
+###
+# Create Customized Instrument Class
+# --
+# A customized instrument class can be easily created with the
+# :py:func:`tvm.instrument.pass_instrument` decorator.
+#
+# Let's create an instrument class which calculates the difference in
+# ``CallNode`` counts per ``op.name`` before and after passes.
+
+# decorate the class
+@pass_instrument
+class RelayCallNodeDiffer:
+def __init__(self):
+ 

[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660233541



##
File path: python/tvm/topi/nn/dense.py
##
@@ -51,37 +65,120 @@ def dense(data, weight, bias=None, out_dtype=None, auto_scheduler_rewritten_layout
 assert len(bias.shape) == 1
 if out_dtype is None:
 out_dtype = data.dtype
-batch, in_dim = data.shape
+if data_transposed:
+in_dim, batch = data.shape
+else:
+batch, in_dim = data.shape
 
 if auto_scheduler_rewritten_layout:
 # Infer shape for the rewritten layout
 out_dim, red_dim = auto_scheduler.get_shape_from_rewritten_layout(
-auto_scheduler_rewritten_layout, ["j", "k"]
+auto_scheduler_rewritten_layout, ["j", "k"] if weight_transposed else ["k", "j"]
 )
 auto_scheduler.remove_index_check(weight)
-else:
+elif weight_transposed:
 out_dim, red_dim = weight.shape
+else:
+red_dim, out_dim = weight.shape
 assert in_dim == red_dim
 
 k = te.reduce_axis((0, in_dim), name="k")
-matmul = te.compute(
+if data_transposed:
+if weight_transposed:
+compute_lambda = lambda i, j: te.sum(
+data[k, i].astype(out_dtype) * weight[j, k].astype(out_dtype), axis=k
+)
+compute_name = "T_matmul_TT"
+else:
+compute_lambda = lambda i, j: te.sum(
+data[k, i].astype(out_dtype) * weight[k, j].astype(out_dtype), axis=k
+)
+compute_name = "T_matmul_TN"
+compute_tag = "matmul"
+else:
+if weight_transposed:
+compute_lambda = lambda i, j: te.sum(
+data[i, k].astype(out_dtype) * weight[j, k].astype(out_dtype), axis=k
+)
+compute_name = "T_dense"

Review comment:
   I think it's fine since it is just an op name. 😄  But the tag `dense` has 
been used in some schedule checks, so I think we'd better keep that.
   
   There are some options I can come up with:
   - A: Use `T_dense` as the name and `dense` as the tag for the NT format; use `T_matmul` as the name and `matmul` as the tag for the other 3 formats.
   - B: Use `T_matmul_NN`, `T_matmul_NT`, `T_matmul_TN`, `T_matmul_TT` as the name for each format; use `dense` as the tag for the NT format and `matmul` as the tag for the others.
   
   What do you think?
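The four formats under discussion can be made concrete with a naive pure-Python matmul (illustrative only; the real op is expressed with `te.compute`): NN multiplies the operands as stored, while a T flag transposes that operand first, NT being the layout `dense` has always used.

```python
def matmul(a, b, transpose_a=False, transpose_b=False):
    """Naive matmul over nested lists, honoring the transpose flags."""
    if transpose_a:
        a = [list(row) for row in zip(*a)]
    if transpose_b:
        b = [list(row) for row in zip(*b)]
    # Standard row-times-column product of the (possibly transposed) inputs.
    return [
        [sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
        for row in a
    ]


data = [[1, 2], [3, 4]]
weight = [[5, 6], [7, 8]]
weight_t = [[5, 7], [6, 8]]  # same weight stored transposed (NT / dense layout)
```

All flag combinations reach the same product when the stored operands are pre-transposed to match.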








[GitHub] [tvm] chiwwang commented on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-870176117


   Thank you @tkonolige for such a detailed review!
   And sorry for my careless grammar mistakes.






[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660232408



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660232211



##
File path: tutorials/dev/use_pass_instrument.py
##
@@ -0,0 +1,378 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=line-too-long
+"""
+.. _tutorial-use-pass-instrument:
+
+How to Use TVM Pass Instrument
+==
+**Author**: `Chi-Wei Wang `_
+
+As more and more passes are implemented, it becomes useful to instrument
+passes execution, analyze per-pass effects and observe various events.
+Pass infrastructure provides instrument mechanism. One can pass a list of
+instrument instances to :py:class:`tvm.transform.PassContext`.
+Also a decorator :py:func:`tvm.instrument.pass_instrument` is provided
+to easily implement instrument classes.
+
+This tutorial demostrates how developers can use ``PassContext`` to instrument
+passes. Please also refer to the :ref:`pass-infra`.
+"""
+import tvm
+import tvm.relay as relay
+from tvm.relay.testing import resnet
+from tvm.contrib.download import download_testdata
+from tvm.relay.build_module import bind_params_by_name
+from tvm.ir.instrument import (
+PassTimingInstrument,
+pass_instrument,
+)
+
+
+###
+# Create An Example Relay Program
+# ---
+# We use pre-defined resnet-18 network in Relay.
+batch_size = 1
+num_of_image_class = 1000
+image_shape = (3, 224, 224)
+output_shape = (batch_size, num_of_image_class)
+relay_mod, relay_params = resnet.get_workload(num_layers=18, batch_size=1, 
image_shape=image_shape)
+print(relay_mod.astext(show_meta_data=False))
+
+
+###
+# Create PassContext With Instruments
+# ---
+# It is as simple as passing ``instruments`` argument to ``PassContext`` 
constructor.
+# A built-in ``PassTimingInstrument`` is used to profile the execution time of
+# each passes.
+timing_inst = PassTimingInstrument()
+with tvm.transform.PassContext(instruments=[timing_inst]):
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+# before exiting the context, get profile results.
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Use Current PassContext With Instruments
+# 
+# One can also use the current ``PassContext`` and register
+# ``PassInstrument`` instances by ``override_instruments`` method.
+# Note that ``override_instruments`` executes ``exit_pass_ctx`` method
+# if any instrument already exists. Then it switches to new instruments
+# and calls ``enter_pass_ctx`` method of new instruments.
+# Refer to following sections and :py:func:`tvm.instrument.pass_instrument` 
for these methods.
+cur_pass_ctx = tvm.transform.PassContext.current()
+cur_pass_ctx.override_instruments([timing_inst])
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Register empty list to clear instruments.
+#
+# Note that ``exit_pass_ctx`` of ``PassTimingInstrument`` is called.
+# Profiles are cleared so nothing is printed.
+cur_pass_ctx.override_instruments([])
+# Uncomment the call to .render() to see a warning like:
+# Warning: no passes have been profiled, did you enable pass profiling?
+# profiles = timing_inst.render()
+
+
+###
+# Create Customized Instrument Class
+# --
+# A customized instrument class can be easily created by
+# :py:func:`tvm.instrument.pass_instrument` decorator.
+#
+# Let's create an instrument class which calculate the difference of 
``CallNode``
+# counting per ``op.name`` before and after passes.
+
+# decorate the class
+@pass_instrument
+class RelayCallNodeDiffer:
+def __init__(self):
+ 

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660232073



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660231929



##
File path: tutorials/dev/use_pass_instrument.py
##
@@ -0,0 +1,378 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=line-too-long
+"""
+.. _tutorial-use-pass-instrument:
+
+How to Use TVM Pass Instrument
+==
+**Author**: `Chi-Wei Wang `_
+
+As more and more passes are implemented, it becomes useful to instrument
+pass execution, analyze per-pass effects, and observe various events.
+The pass infrastructure provides an instrument mechanism: one can pass a list
+of instrument instances to :py:class:`tvm.transform.PassContext`.
+A decorator, :py:func:`tvm.instrument.pass_instrument`, is also provided
+to make implementing instrument classes easy.
+
+This tutorial demonstrates how developers can use ``PassContext`` to instrument
+passes. Please also refer to the :ref:`pass-infra`.
+"""
+import tvm
+import tvm.relay as relay
+from tvm.relay.testing import resnet
+from tvm.contrib.download import download_testdata
+from tvm.relay.build_module import bind_params_by_name
+from tvm.ir.instrument import (
+PassTimingInstrument,
+pass_instrument,
+)
+
+
+###
+# Create An Example Relay Program
+# ---
+# We use the pre-defined resnet-18 network in Relay.
+batch_size = 1
+num_of_image_class = 1000
+image_shape = (3, 224, 224)
+output_shape = (batch_size, num_of_image_class)
+relay_mod, relay_params = resnet.get_workload(num_layers=18, batch_size=1, image_shape=image_shape)
+print(relay_mod.astext(show_meta_data=False))
+
+
+###
+# Create PassContext With Instruments
+# ---
+# It is as simple as passing the ``instruments`` argument to the ``PassContext`` constructor.
+# A built-in ``PassTimingInstrument`` is used to profile the execution time of
+# each pass.
+timing_inst = PassTimingInstrument()
+with tvm.transform.PassContext(instruments=[timing_inst]):
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+# before exiting the context, get profile results.
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Use Current PassContext With Instruments
+# 
+# One can also use the current ``PassContext`` and register
+# ``PassInstrument`` instances via the ``override_instruments`` method.
+# Note that ``override_instruments`` executes the ``exit_pass_ctx`` method
+# if any instruments already exist, then switches to the new instruments
+# and calls their ``enter_pass_ctx`` method.
+# Refer to the following sections and :py:func:`tvm.instrument.pass_instrument` for these methods.
+cur_pass_ctx = tvm.transform.PassContext.current()
+cur_pass_ctx.override_instruments([timing_inst])
+relay_mod = relay.transform.InferType()(relay_mod)
+relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
+profiles = timing_inst.render()
+print(profiles)
+
+
+###
+# Register an empty list to clear the instruments.
+#
+# Note that ``exit_pass_ctx`` of ``PassTimingInstrument`` is called.
+# Profiles are cleared so nothing is printed.
+cur_pass_ctx.override_instruments([])
+# Uncomment the call to .render() to see a warning like:
+# Warning: no passes have been profiled, did you enable pass profiling?
+# profiles = timing_inst.render()
+
+
+###
+# Create Customized Instrument Class
+# --
+# A customized instrument class can easily be created with the
+# :py:func:`tvm.instrument.pass_instrument` decorator.
+#
+# Let's create an instrument class which calculates the difference in ``CallNode``
+# counts per ``op.name`` before and after each pass.
+
+# decorate the class
+@pass_instrument
+class RelayCallNodeDiffer:
+def __init__(self):
+ 
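The quoted class body is cut off by the archive here. As a rough, dependency-free sketch of the counting logic such an instrument needs — in real TVM code the class would be decorated with ``@pass_instrument`` and would walk the ``IRModule`` (e.g. via ``relay.analysis.post_order_visit``); the op-name lists fed to the ``run_before_pass``/``run_after_pass`` hooks below are a plain-Python stand-in for that traversal:

```python
from collections import Counter

class RelayCallNodeDifferSketch:
    """Track how CallNode counts per op name change across a pass.

    A TVM-free stand-in: instead of visiting a real IRModule, the hooks
    receive the list of op names directly.
    """

    def __init__(self):
        self._before = Counter()
        self.diffs = []  # one {op_name: delta} dict per instrumented pass

    def run_before_pass(self, op_names, pass_info=None):
        # Snapshot the per-op CallNode counts before the pass runs.
        self._before = Counter(op_names)

    def run_after_pass(self, op_names, pass_info=None):
        # Compare counts after the pass and record only the ops that changed.
        after = Counter(op_names)
        delta = {
            op: after[op] - self._before[op]
            for op in set(after) | set(self._before)
            if after[op] != self._before[op]
        }
        self.diffs.append(delta)

diff = RelayCallNodeDifferSketch()
diff.run_before_pass(["nn.conv2d", "nn.conv2d", "add"])
diff.run_after_pass(["nn.conv2d", "add", "add"])  # one conv removed, one add introduced
print(sorted(diff.diffs[0].items()))  # [('add', 1), ('nn.conv2d', -1)]
```

The real instrument would keep this diffing logic unchanged and only replace the op-name lists with a visitor over ``mod["main"]``.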

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660231889



##
File path: tutorials/dev/use_pass_instrument.py
##
+# decorate the class

Review comment:
   Removed.





[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660231625



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660230716



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660230570



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660230056



##
File path: tutorials/dev/use_pass_instrument.py
##

[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r660229952



##
File path: tutorials/dev/use_pass_instrument.py
##
+print(relay_mod.astext(show_meta_data=False))

Review comment:
   Done.


[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660229467



##
File path: include/tvm/relay/attrs/nn.h
##
@@ -961,6 +961,32 @@ struct AvgPool3DAttrs : public tvm::AttrsNode<AvgPool3DAttrs> {
   }
 };
 
+/*! \brief Attributes for matmul operator */
+struct MatmulAttrs : public tvm::AttrsNode<MatmulAttrs> {
+  IndexExpr units;
+  DataType out_dtype;
+  bool data_transposed;
+  bool weight_transposed;
+  tvm::String auto_scheduler_rewritten_layout;  // The layout after auto-scheduler's layout rewrite

Review comment:
   You mean `MatmulAttrs`? We're not able to remove all the `nn.dense` at 
this moment. So `nn.dense` and `nn.matmul` should still be two different ops 
now. They need different `Attrs`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660228979



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1204,7 +1208,7 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
 return func, self._params
 
 
-def from_tensorflow(graph, layout="NHWC", shape=None, outputs=None):
+def from_tensorflow(graph, layout="NHWC", shape=None, outputs=None, use_dense_op=True):

Review comment:
   The problem is that we're not able to remove all the `nn.dense` usage at this 
moment, and there are not enough AutoTVM templates for `nn.matmul`.
   
   So the use of `nn.matmul` can only be seen as an experimental feature. We 
should not change the default behavior, in case this may affect those who are 
using `nn.dense`.
   








[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660226675



##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -1471,6 +1471,47 @@ def bias_add(data, bias, axis=1):
 return _make.bias_add(data, bias, axis)
 
 
+def matmul(data, weight, units=None, out_dtype="", data_transposed=False, weight_transposed=False):
+"""Matmul operator.
+Applies a linear transformation. The X & W can be transposed.
+
+.. math::
+
+`Y = X * W`
+
+Parameters
+--
+data : tvm.relay.Expr
+The input data to the operator,
+of shape `(d_1, d_2, ..., d_n, units_in)` or `(d_1, d_2, ..., units_in, d_n)`.
+
+weight : tvm.relay.Expr
+The weight expressions, 2-D matrix,
+of shape `(units_in, units)` or `(units, units_in)`.
+
+units : int, optional
+Number of hidden units of the matmul transformation.

Review comment:
   I think the doc has explained enough: "The hidden units." This is copied 
from the original `nn.dense`.








[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660226315



##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -1160,21 +1186,46 @@ def batch_flatten_shape_func(attrs, inputs, _):
 
 
 @script
-def _dense_shape_func(data_shape, weight_shape):
+def _matmul_shape_func(data_shape, weight_shape, data_transposed, weight_transposed):
 out = output_tensor((data_shape.shape[0],), "int64")
 for i in const_range(out.shape[0] - 1):
 out[i] = data_shape[i]
-out[out.shape[0] - 1] = weight_shape[0]
+if data_transposed:
+out[out.shape[0] - 2] = out[out.shape[0] - 1]
+out[out.shape[0] - 1] = weight_shape[0] if weight_transposed else weight_shape[1]

Review comment:
   Since the data tensor can have more than 2 dimensions, this is the 
simplest implementation.








[GitHub] [tvm] jcf94 commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


jcf94 commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660225637



##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -1471,6 +1471,47 @@ def bias_add(data, bias, axis=1):
 return _make.bias_add(data, bias, axis)
 
 
+def matmul(data, weight, units=None, out_dtype="", data_transposed=False, weight_transposed=False):
+"""Matmul operator.
+Applies a linear transformation. The X & W can be transposed.
+
+.. math::
+
+`Y = X * W`
+
+Parameters
+--
+data : tvm.relay.Expr
+The input data to the operator,
+of shape `(d_1, d_2, ..., d_n, units_in)` or `(d_1, d_2, ..., units_in, d_n)`.

Review comment:
   No, the input of matmul is supposed to be a multi-dimensional tensor (not 
limited to 2-D). This is copied from the original `nn.dense`.
   
   Other frameworks like PyTorch also have such a definition.








[GitHub] [tvm] zackcquic commented on a change in pull request #8352: [Relay][Parser] Support slash in identifier.

2021-06-28 Thread GitBox


zackcquic commented on a change in pull request #8352:
URL: https://github.com/apache/tvm/pull/8352#discussion_r660220453



##
File path: tests/python/relay/test_ir_text_printer.py
##
@@ -284,5 +284,12 @@ def test_optional_info():
 assert txt.count("/* ty=int32 */") == 3
 
 
+def test_slash_in_identifier():
+x = relay.var("base/x")
+y = relay.var("base/y")
+z = x + y
+txt = astext(z)

Review comment:
   Thanks for pointing out. Done








[GitHub] [tvm] zackcquic commented on a change in pull request #8352: [Relay][Parser] Support slash in identifier.

2021-06-28 Thread GitBox


zackcquic commented on a change in pull request #8352:
URL: https://github.com/apache/tvm/pull/8352#discussion_r660220453



##
File path: tests/python/relay/test_ir_text_printer.py
##
@@ -284,5 +284,12 @@ def test_optional_info():
 assert txt.count("/* ty=int32 */") == 3
 
 
+def test_slash_in_identifier():
+x = relay.var("base/x")
+y = relay.var("base/y")
+z = x + y
+txt = astext(z)

Review comment:
   Done








[GitHub] [tvm] zackcquic commented on a change in pull request #8352: [Relay][Parser] Support slash in identifier.

2021-06-28 Thread GitBox


zackcquic commented on a change in pull request #8352:
URL: https://github.com/apache/tvm/pull/8352#discussion_r660212089



##
File path: tests/python/relay/test_ir_text_printer.py
##
@@ -284,5 +284,12 @@ def test_optional_info():
 assert txt.count("/* ty=int32 */") == 3
 
 
+def test_slash_in_identifier():
+x = relay.var("base/x")
+y = relay.var("base/y")
+z = x + y
+txt = astext(z)

Review comment:
   Yes, inside `astext()` it is tested. 
   
[test_ir_text_printer.py#L39](https://github.com/apache/tvm/blob/f82cf36c12d964052bf11830b5180bcee051266c/tests/python/relay/test_ir_text_printer.py#L39)
   I just reuse it here.








[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo edited a comment on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870128615


   Hmm, I'll be honest, I don't quite understand the target/host part of tvm 
very well. I was hoping you could give context on this since you were the last 
person on git to touch the line. Specifically the proper usage of the commented 
out check.
   
   This method appears in a lot of places 
https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L171. And 
the problematic line specifically is this one: 
https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L200
   
   Before, it worked since a majority of tested systems used `fork()` as the 
multiprocessing start method. macOS and Windows by default use a different 
start method (`spawn`), which is causing this error. If I had to guess why this 
is the case, it is because macOS and Windows serialize and deserialize 
arguments to a process, which breaks the pointer-equality assumption in the 
2-arg constructor.






[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo edited a comment on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870128615


   Hmm, I'll be honest, I don't quite understand the target/host part of tvm 
very well. I was hoping you could give context on this since you were the last 
person on git to touch the line. Specifically the proper usage of the commented 
out check.
   
   This method appears in a lot of places 
https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L200. And 
the problematic line specifically is this one: 
https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L200
   
   Before, it worked since a majority of tested systems used `fork()` as the 
multiprocessing start method. macOS and Windows by default use a different 
start method (`spawn`), which is causing this error. If I had to guess why this 
is the case, it is because macOS and Windows serialize and deserialize 
arguments to a process, which breaks the pointer-equality assumption in the 
2-arg constructor.






[GitHub] [tvm] areusch commented on pull request #8072: Add "operator" style to Model Library Format

2021-06-28 Thread GitBox


areusch commented on pull request #8072:
URL: https://github.com/apache/tvm/pull/8072#issuecomment-870137434


   @giuseros please take another look and explicitly approve if you're ok with 
this






[GitHub] [tvm] areusch commented on pull request #8072: Add "operator" style to Model Library Format

2021-06-28 Thread GitBox


areusch commented on pull request #8072:
URL: https://github.com/apache/tvm/pull/8072#issuecomment-870137269


   @manupa-arm please let me know if there's anything else--i believe your 
comments are all forward-looking, but want to understand if there are specific 
changes needed here to merge.






[GitHub] [tvm] areusch commented on a change in pull request #8072: Add "operator" style to Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8072:
URL: https://github.com/apache/tvm/pull/8072#discussion_r660197575



##
File path: src/printer/model_library_format_printer.cc
##
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+#include 
+#include 
+#include 
+
+#include "text_printer.h"
+
+namespace tvm {
+namespace printer {
+
+class ModelLibraryFormatPrinter : public ::tvm::runtime::ModuleNode {
+ public:
+  ModelLibraryFormatPrinter(bool show_meta_data,
+                            const runtime::TypedPackedFunc<std::string(ObjectRef)>& annotate,
+                            bool show_warning)
+      : text_printer_{show_meta_data, annotate, show_warning} {}
+
+  const char* type_key() const override { return "model_library_format_printer"; }
+
+  std::string Print(const ObjectRef& node) {
+    Doc doc;
+    doc << text_printer_.PrintFinal(node);
+    return doc.str();
+  }
+
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr<Object>& sptr_to_self) override {

Review comment:
   i agree with that--however, we do need the lambda function to capture 
`sptr_to_self` (this mimics the Python descriptor `get()` implementation). i 
moved the body into a separate function to align this class for a future world 
where we implemented the auto-generated interface.
   
   cc @jroesch who has a prototype of the auto-generator








[GitHub] [tvm] tkonolige commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


tkonolige commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r660188664



##
File path: include/tvm/relay/attrs/nn.h
##
@@ -961,6 +961,32 @@ struct AvgPool3DAttrs : public tvm::AttrsNode<AvgPool3DAttrs> {
   }
 };
 
+/*! \brief Attributes for matmul operator */
+struct MatmulAttrs : public tvm::AttrsNode<MatmulAttrs> {
+  IndexExpr units;
+  DataType out_dtype;
+  bool data_transposed;
+  bool weight_transposed;
+  tvm::String auto_scheduler_rewritten_layout;  // The layout after auto-scheduler's layout rewrite

Review comment:
   Why is this field necessary?

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -1204,7 +1208,7 @@ def from_tensorflow(self, graph, layout="NHWC", shape=None, outputs=None):
 return func, self._params
 
 
-def from_tensorflow(graph, layout="NHWC", shape=None, outputs=None):
+def from_tensorflow(graph, layout="NHWC", shape=None, outputs=None, use_dense_op=True):

Review comment:
   I don't think we should have a flag here. We should just commit to one 
codepath.

##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -554,6 +554,35 @@ def dense_grad(orig, grad):
 ]
 
 
+@register_gradient("nn.matmul")
+def matmul_grad(orig, grad):
+"""Returns [grad' @ weight, data @ grad']"""
+data, weight = orig.args
+if (orig.attrs["data_transposed"], orig.attrs["weight_transposed"]) == 
(True, True):

Review comment:
   Please refactor this to not if/else on every possible combination of 
transpose.
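   The branch-free structure being asked for can be sketched in plain Python (an illustrative sketch only, not the Relay API; `_matmul` and `_transpose` are hypothetical helpers): compute the gradients with respect to the *effective* (already-transposed) operands, then undo each transpose, so no case enumeration is needed.

   ```python
   def _transpose(m):
       """Transpose a 2-D matrix given as a list of rows."""
       return [list(row) for row in zip(*m)]

   def _matmul(a, b):
       """Naive 2-D matrix multiply."""
       return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
               for row in a]

   def matmul_grad(data, weight, grad, data_transposed, weight_transposed):
       """Gradients of Y = op(data) @ op(weight) without enumerating all
       four transpose combinations."""
       d = _transpose(data) if data_transposed else data        # effective D
       w = _transpose(weight) if weight_transposed else weight  # effective W
       d_data = _matmul(grad, _transpose(w))    # dL/dD = grad @ W^T
       d_weight = _matmul(_transpose(d), grad)  # dL/dW = D^T @ grad
       if data_transposed:                      # undo the data transpose
           d_data = _transpose(d_data)
       if weight_transposed:                    # undo the weight transpose
           d_weight = _transpose(d_weight)
       return d_data, d_weight
   ```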

##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -1471,6 +1471,47 @@ def bias_add(data, bias, axis=1):
 return _make.bias_add(data, bias, axis)
 
 
+def matmul(data, weight, units=None, out_dtype="", data_transposed=False, weight_transposed=False):
+"""Matmul operator.
+Applies a linear transformation. The X & W can be transposed.
+
+.. math::
+
+`Y = X * W`
+
+Parameters
+--
+data : tvm.relay.Expr
+The input data to the operator,
+of shape `(d_1, d_2, ..., d_n, units_in)` or `(d_1, d_2, ..., units_in, d_n)`.
+
+weight : tvm.relay.Expr
+The weight expressions, 2-D matrix,
+of shape `(units_in, units)` or `(units, units_in)`.
+
+units : int, optional
+Number of hidden units of the matmul transformation.

Review comment:
   What is a unit?

##
File path: python/tvm/relay/op/nn/nn.py
##
@@ -1471,6 +1471,47 @@ def bias_add(data, bias, axis=1):
 return _make.bias_add(data, bias, axis)
 
 
+def matmul(data, weight, units=None, out_dtype="", data_transposed=False, weight_transposed=False):
+"""Matmul operator.
+Applies a linear transformation. The X & W can be transposed.
+
+.. math::
+
+`Y = X * W`
+
+Parameters
+--
+data : tvm.relay.Expr
+The input data to the operator,
+of shape `(d_1, d_2, ..., d_n, units_in)` or `(d_1, d_2, ..., units_in, d_n)`.

Review comment:
   Shouldn't both input shapes by dimension 2?

##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -1160,21 +1186,46 @@ def batch_flatten_shape_func(attrs, inputs, _):
 
 
 @script
-def _dense_shape_func(data_shape, weight_shape):
+def _matmul_shape_func(data_shape, weight_shape, data_transposed, weight_transposed):
 out = output_tensor((data_shape.shape[0],), "int64")
 for i in const_range(out.shape[0] - 1):
 out[i] = data_shape[i]
-out[out.shape[0] - 1] = weight_shape[0]
+if data_transposed:
+out[out.shape[0] - 2] = out[out.shape[0] - 1]
+out[out.shape[0] - 1] = weight_shape[0] if weight_transposed else weight_shape[1]

Review comment:
   This seems really complicated. Shouldn't it just be some part of 
data_shape and weight_shape depending on the transposes?
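   For reference, the simplification being suggested can be sketched in plain Python (a hypothetical helper, not the hybrid-script shape function; it assumes a 2-D weight and a data tensor with at least 2 dimensions):

   ```python
   def matmul_out_shape(data_shape, weight_shape, data_transposed, weight_transposed):
       """Output shape of Y = op(data) @ op(weight): leading batch dims come
       from data; the row dim is whichever trailing data axis survives the
       transpose, and the col dim is the non-contracted weight axis."""
       rows = data_shape[-1] if data_transposed else data_shape[-2]
       cols = weight_shape[0] if weight_transposed else weight_shape[1]
       return list(data_shape[:-2]) + [rows, cols]
   ```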

##
File path: python/tvm/topi/nn/dense.py
##
@@ -38,6 +46,12 @@ def dense(data, weight, bias=None, out_dtype=None, 
auto_scheduler_rewritten_layo
 out_dtype : Optional[str]
 The output type. This is used for mixed precision.
 
+data_transposed : Optional[bool]

Review comment:
   Add the default values to this








[GitHub] [tvm] tkonolige commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


tkonolige commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870130246


   @zxybazh The target host and the new host are functionally the same, but not 
the same object.






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870128615


   Hmm, I'll be honest, I don't quite understand the target/host part of tvm 
very well. I was hoping you could give context on this since you were the last 
person on git to touch the line. Specifically the proper usage of the commented 
out check.
   
   This method appears in a lot of places 
https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L200. And 
the problematic line specifically is this one: 
`https://github.com/apache/tvm/blob/main/python/tvm/target/target.py#L200` 
   
   Before, it worked since a majority of tested systems used `fork()` as the 
multiprocessing start method. macOS and Windows by default use a different 
start method (`spawn`), which is causing this error. If I had to guess why this 
is the case, it is because macOS and Windows serialize and deserialize 
arguments to a process, which breaks the pointer-equality assumption in the 
2-arg constructor.
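   The identity-breaking behavior described above is easy to reproduce without TVM: `spawn` pickles every argument across the process boundary, and a pickle round-trip yields an equal but distinct object, so any `is`-based check fails. A minimal sketch (`Target` here is a stand-in class, not the TVM one):

   ```python
   import pickle

   class Target:
       """Stand-in for a target object; not the TVM class."""
       def __init__(self, kind, host=None):
           self.kind = kind
           self.host = host
       def __eq__(self, other):
           return (self.kind, self.host) == (other.kind, other.host)

   host = Target("llvm")
   target = Target("cuda", host=host)

   # What `spawn` effectively does to every argument passed to a worker:
   thawed = pickle.loads(pickle.dumps(target))

   assert thawed == target         # functionally the same ...
   assert thawed.host is not host  # ... but pointer identity is gone
   ```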






[GitHub] [tvm] zxybazh commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


zxybazh commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870125243


   Hi Andrew, just curious about the context, why in this case would we add a 
target host to a target object that already has a different host?






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870122577


   @zxybazh any thoughts on turning off this check?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8270: Rename runtime-config to executor-config and add documentation for Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8270:
URL: https://github.com/apache/tvm/pull/8270#discussion_r660182943



##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+
+
+About Model Library Format
+--
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects
+(e.g. DLLs (Windows) or .so (linux)). Inference can be performed on those 
libraries by loading them

Review comment:
   done








[GitHub] [tvm] areusch commented on a change in pull request #8270: Rename runtime-config to executor-config and add documentation for Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8270:
URL: https://github.com/apache/tvm/pull/8270#discussion_r660181426



##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+
+
+About Model Library Format
+--
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects
+(e.g. DLLs (Windows) or .so (Linux)). Inference can be performed on those libraries by loading them
+into an executable using ``libtvm_runtime.so``. This process is very dependent on services provided
+by a traditional OS.
+
+For deployment to unconventional platforms (e.g. those lacking a traditional OS), the microTVM
+project can be used to export a generated library in pieces. In this case, microTVM provides another
+output format, Model Library Format. Model Library Format is a tarball containing a file for each
+part of the TVM compiler output.
+
+What can be Exported
+
+
+At the time of writing, export is limited to full models built with 
``tvm.relay.build``.
+
+Directory Layout
+
+
+Model Library Format is traditionally contained within a tarball. All paths 
are relative to the root
+of the tarball:
+
+- ``/`` - Root of the tarball
+
+  - ``codegen`` - Root directory for all generated device code
+
+- (see `codegen`_ section)
+
+  - ``executor-config/`` - Configuration for the executor which drives model 
inference
+
+- ``graph/`` - Root directory containing configuration for the 
GraphExecutor
+
+  - ``graph.json`` - GraphExecutor JSON configuration
+
+  -  ``metadata.json`` - Machine-parseable metadata for this model
+
+  - ``parameters/`` - Root directory where simplified parameters are placed
+
+    - ``<model_name>.params`` - Parameters for the model, in ``tvm.relay._save_params`` format
+
+  - ``src/`` - Root directory for all source code consumed by TVM
+
+- ``relay.txt`` - Relay source code for the generated model
+
+Description of Sub-directories
+--
+
+.. _subdir_codegen:
+
+``codegen``
+^^^
+
+All TVM-generated code is placed in this directory. At the time of writing, there is 1 file per
+Module in the generated Module tree, though this restriction may change in the future. Files in
+this directory should have filenames of the form ``<target>/(lib|src)/<unique_name>.<format>``.
+
+These components are described below:
+
+ * <target> - Identifies the TVM target on which the code should run. Currently, only ``host``
+   is supported.
+ * <unique_name> - A unique slug identifying this file. Currently ``lib<n>``, with ``<n>`` an
+   auto-incrementing integer.
+ * <format> - Suffix identifying the filename format. Currently ``c`` or ``o``.
+
+An example directory tree for a CPU-only model is shown below:
+
+- ``codegen/`` - Codegen directory
+
+  - ``host/`` - Generated code for ``target_host``
+
+-  ``lib/`` - Generated binary object files
+
+  - ``lib0.o`` - LLVM module (if ``llvm`` target is used)
+  - ``lib1.o`` - LLVM CRT Metadata Module (if ``llvm`` target is used)
+- ``src/`` - Generated C source
+
+  - ``lib0.c`` - C module (if ``c`` target is used)
+  - ``lib1.c`` - C CRT Metadata module (if ``c`` target is used)
+
+``executor-config``
+^^^
+
+Contains machine-parseable configuration for executors which can drive model 
inference. Currently,
+only the GraphExecutor produces configuration for this directory, in 
``graph/graph.json``. This
+file should be read in and the resulting string supplied to the 
``GraphExecutor()`` constructor for
+parsing.
+
+``parameters``
+^^
+
+Contains machine-parseable parameters. A variety of formats may be provided, but at present, only
+the format produced by ``tvm.relay._save_params`` is supplied. When building with
+``tvm.relay.build``, the ``name`` parameter is considered to be the model name. A single file is
+created in this directory, ``<model_name>.params``.
+
+``src``
+^^^
+
+Contains source code parsed by TVM. Currently, just the Relay source code is 
created in
+``src/relay.txt``.
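+
+Given the layout above, a Model Library Format tarball can be inspected with nothing but the
+standard library. A hedged sketch (the file names assume the layout described in this document,
+and ``read_mlf_metadata`` is a hypothetical helper, not a TVM API):

```python
import json
import tarfile

def read_mlf_metadata(mlf_tar_path):
    """Extract and parse metadata.json from the root of a Model Library
    Format tarball."""
    with tarfile.open(mlf_tar_path) as tar:
        # Paths may be stored as "./metadata.json" or "metadata.json".
        for name in ("./metadata.json", "metadata.json"):
            try:
                member = tar.getmember(name)
            except KeyError:
                continue
            return json.load(tar.extractfile(member))
    raise FileNotFoundError("metadata.json not found in %s" % mlf_tar_path)
```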
+
+Metadata
+
+
+Machine-parseable metadata is placed in a file ``metadata.json`` at the root 

[GitHub] [tvm] areusch commented on a change in pull request #8270: Rename runtime-config to executor-config and add documentation for Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8270:
URL: https://github.com/apache/tvm/pull/8270#discussion_r660181203



##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+

Review comment:
   i haven't used the acronym yet in this doc though. but i agree it's an 
easy shorthand for the format. maybe it would make sense more in tvmc docs, 
where it's a command-line param? wdyt?

##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+
+
+About Model Library Format
+--
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects
+(e.g. DLLs (Windows) or .so (linux)). Inference can be performed on those 
libraries by loading them
+into an executable using ``libtvm_runtime.so``. This process is very dependent 
on services provided
+by traditional OS.
+
+For deployment to unconventional platforms (e.g. those lacking traditional 
OS), the microTVM project
+can be used to export a generated library in pieces. In this case, microTVM 
provides another output
+format, Model Library Format. Model Library Format is a tarball containing a 
file for each part of
+the TVM compiler output.
+
+What can be Exported
+
+
+At the time of writing, export is limited to full models built with 
``tvm.relay.build``.
+
+Directory Layout
+
+
+Model Library Format is traditionally contained within a tarball. All paths 
are relative to the root
+of the tarball:
+
+- ``/`` - Root of the tarball
+
+  - ``codegen`` - Root directory for all generated device code
+
+- (see `codegen`_ section)
+
+  - ``executor-config/`` - Configuration for the executor which drives model 
inference
+
+- ``graph/`` - Root directory containing configuration for the 
GraphExecutor
+
+  - ``graph.json`` - GraphExecutor JSON configuration
+
+  -  ``metadata.json`` - Machine-parseable metadata for this model
+
+  - ``parameters/`` - Root directory where simplified parameters are placed
+
+- ``<model_name>.params`` - Parameters for the model, in ``tvm.relay._save_params`` format
+
+  - ``src/`` - Root directory for all source code consumed by TVM
+
+- ``relay.txt`` - Relay source code for the generated model
+
+Description of Sub-directories
+--
+
+.. _subdir_codegen:
+
+``codegen``
+^^^
+
+All TVM-generated code is placed in this directory. At the time of writing, 
there is 1 file per
+Module in the generated Module tree, though this restriction may change in the 
future. Files in
+this directory should have filenames of the form 
``<target>/(lib|src)/<unique_name>.<format>``.
+
+These components are described below:
+
+ * <target> - Identifies the TVM target on which the code should run. Currently, only ``host``
+   is supported.
+ * <unique_name> - A unique slug identifying this file. Currently ``lib<n>``, with ``<n>`` an
+   autoincrementing integer.

Review comment:
   done

##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+   

[GitHub] [tvm] areusch commented on a change in pull request #8270: Rename runtime-config to executor-config and add documentation for Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8270:
URL: https://github.com/apache/tvm/pull/8270#discussion_r660180740



##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+
+
+About Model Library Format
+--
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects
+(e.g. DLLs (Windows) or .so (linux)). Inference can be performed on those 
libraries by loading them
+into an executable using ``libtvm_runtime.so``. This process is very dependent 
on services provided
+by traditional OS.
+
+For deployment to unconventional platforms (e.g. those lacking traditional 
OS), the microTVM project

Review comment:
   i feel like it's not strictly limited to embedded though




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
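The tarball layout described in the Model Library Format doc under review can be exercised with a short, stdlib-only sketch. Note that the model name ``net``, the file contents, and the metadata keys below are illustrative stand-ins, not the exact schema TVM emits:

```python
import io
import json
import tarfile

# Paths following the Model Library Format layout described above.
# "net" is a hypothetical model name; file contents are placeholders.
MLF_PATHS = {
    "./metadata.json": json.dumps({"model_name": "net", "version": 1}),
    "./codegen/host/lib/lib0.o": "",
    "./executor-config/graph/graph.json": json.dumps({"nodes": []}),
    "./parameters/net.params": "",
    "./src/relay.txt": "def @main() { () }",
}


def write_mlf(path):
    """Write a minimal tarball with the directory layout above."""
    with tarfile.open(path, "w") as tar:
        for name, content in MLF_PATHS.items():
            data = content.encode("utf-8")
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))


def read_metadata(path):
    """Read the machine-parseable metadata from the tarball root."""
    with tarfile.open(path) as tar:
        return json.load(tar.extractfile("./metadata.json"))
```

A consumer would call ``write_mlf("net.tar")`` and then ``read_metadata("net.tar")["model_name"]`` to recover the model name, mirroring how a downstream tool would locate ``metadata.json`` relative to the tarball root.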




[GitHub] [tvm] areusch commented on a change in pull request #8270: Rename runtime-config to executor-config and add documentation for Model Library Format

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8270:
URL: https://github.com/apache/tvm/pull/8270#discussion_r660180632



##
File path: docs/dev/model_library_format.rst
##
@@ -0,0 +1,167 @@
+..  Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+..http://www.apache.org/licenses/LICENSE-2.0
+
+..  Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+
+Model Library Format
+
+
+About Model Library Format
+--
+
+TVM traditionally exports generated libraries as Dynamic Shared Objects
+(e.g. DLLs (Windows) or .so (linux)). Inference can be performed on those 
libraries by loading them

Review comment:
   done








[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870112870


   Hmm, I don't understand a lot of the host target stuff. In this case I 
believe we need exact equality. 
   
   The problem you are describing could be another form of equality?
   
   In any case @tkonolige and I might just turn off the check for now.






[GitHub] [tvm] comaniac commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


comaniac commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870112769


   > Equality check is another big issue. I discussed with @comaniac a while 
ago, but haven’t reached a conclusion yet on when two targets should be 
considered “equal”: if one target has -libs=cudnn and the other doesn’t, are 
they equal to each other?
   
   Exactly. It seems to me that we ultimately need two APIs for target: one 
(i.e., `==`) checks if two targets are exactly the same, and the other 
(`.compatible(self, other)`) checks if target A is compatible to target B. The 
problem is the definition of "compatible" targets. In my own experience, 
`compatible` is much more useful than `==`, as in many cases, people care more 
about whether a model/schedule built with target A can be used in target B. 
Meanwhile, we can still have the equality check first for internal use cases 
like this one I guess.
   


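The ``==`` vs. ``.compatible`` distinction proposed above can be illustrated with plain dictionaries standing in for target attributes. This is a sketch of the proposed semantics only, not TVM's actual ``Target`` API; the ``kind``/``arch``/``libs`` keys are hypothetical:

```python
def targets_equal(a, b):
    """Exact equality: every attribute, including -libs, must match."""
    return a == b


def target_compatible(a, b):
    """Sketch: b is compatible with a if b provides everything a requires.

    Here that means the same kind and arch, and b's libs are a superset
    of a's libs (a schedule built for plain cuda can run where cudnn is
    also available, but not vice versa).
    """
    if a["kind"] != b["kind"] or a.get("arch") != b.get("arch"):
        return False
    return set(a.get("libs", [])) <= set(b.get("libs", []))


cuda = {"kind": "cuda", "arch": "sm_80", "libs": []}
cuda_cudnn = {"kind": "cuda", "arch": "sm_80", "libs": ["cudnn"]}

assert not targets_equal(cuda, cuda_cudnn)      # not exactly equal
assert target_compatible(cuda, cuda_cudnn)      # cudnn target can serve cuda builds
assert not target_compatible(cuda_cudnn, cuda)  # but not the other way around
```

The asymmetry in the last two assertions is exactly why a single ``==`` cannot answer the "can this build run there?" question.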




[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo edited a comment on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870093432


   Still a draft, there still is a problem with `src/target/target.cc` 
construction. 
   
   ```
   Target::Target(Target target, Target host) {
     ObjectPtr<TargetNode> n = make_object<TargetNode>(*target.get());
     CHECK(!n->host.defined() || n->host.same_as(host))
         << "ValueError: Adding a host to a target whose host field has been defined target: "
         << n->host << " host: " << host << " ptr target: " << n.get()
         << " ptr host: " << make_object<TargetNode>(*host.get()).get();
 // add target host into host field
 n->host = std::move(host);
 data_ = std::move(n);
   }
   ```
   Check failed
   Spawn breaks pointer equality which was the assumption. We now need a deep 
equality thing for "tvm::Target" I think.






[GitHub] [tvm] junrushao1994 commented on pull request #8358: [TIR] Tighten up invariance of CopyOnWrite in recursive stmt visitor

2021-06-28 Thread GitBox


junrushao1994 commented on pull request #8358:
URL: https://github.com/apache/tvm/pull/8358#issuecomment-870108582


   The PR fails a flaky test. Please retrigger and let’s get it merged :-)






[GitHub] [tvm] junrushao1994 commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


junrushao1994 commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870106414


   Does it work if we use Target.export for serialization? This method converts 
a Target to a JSON-like dict and should preserve all the information.
   
   Equality check is another big issue. I discussed with @comaniac a while ago, 
but haven’t reached a conclusion yet on when two targets should be considered 
“equal”: if one target has -libs=cudnn and the other doesn’t, are they equal to 
each other?


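The serialization issue discussed here is general to ``spawn``-based multiprocessing: objects are re-created in the child, so identity checks fail even though a JSON export/import roundtrip preserves the value. A pure-Python illustration (``Config`` is a hypothetical stand-in for a Target-like object, not TVM's API):

```python
import json


class Config:
    """Stand-in for a Target-like object with an export()/from_json roundtrip."""

    def __init__(self, kind, attrs):
        self.kind = kind
        self.attrs = attrs

    def export(self):
        # JSON-like dict export, preserving all information
        return json.dumps({"kind": self.kind, "attrs": self.attrs})

    @classmethod
    def from_json(cls, text):
        d = json.loads(text)
        return cls(d["kind"], d["attrs"])

    def __eq__(self, other):
        # deep, value-based equality (what a spawn-safe check must use)
        return isinstance(other, Config) and \
            (self.kind, self.attrs) == (other.kind, other.attrs)


a = Config("llvm", {"mcpu": "apple-m1"})
b = Config.from_json(a.export())

# identity is lost across the roundtrip, value equality survives
assert a is not b
assert a == b
```

This is why a ``same_as``-style pointer check in the Target constructor trips after a spawn: the child holds a value-equal but distinct object.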




[tvm] branch main updated (2915349 -> f82cf36)

2021-06-28 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 2915349  [Onnx] Support Bidirectional RNNs (#8337)
 add f82cf36  bump sphinx-addon version (#8360)

No new revisions were added by this update.

Summary of changes:
 tests/scripts/task_ci_setup.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[GitHub] [tvm] junrushao1994 commented on pull request #8360: [Doc] Fix sphinx doc style for unordered list

2021-06-28 Thread GitBox


junrushao1994 commented on pull request #8360:
URL: https://github.com/apache/tvm/pull/8360#issuecomment-870104299


   Thanks @yzh119 for the contribution!






[GitHub] [tvm] junrushao1994 merged pull request #8360: [Doc] Fix sphinx doc style for unordered list

2021-06-28 Thread GitBox


junrushao1994 merged pull request #8360:
URL: https://github.com/apache/tvm/pull/8360


   






[GitHub] [tvm] jroesch edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


jroesch edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-870097981


   @giuseros I would prefer to also use an array of structs, but my goal is to 
just get us to use the same data structure everywhere since there are multiple 
places where we are storing different versions of the same data without a 
shared data structure. We could move the Array outside and simplify the struct 
but it would be good to move memory planning to also use the same structure. 
   
   I guess one solution could be to merge as is: I can rewrite the MemoryPlanning 
to use the array of structs, and then you can update to use the same structure. 






[GitHub] [tvm] jroesch commented on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


jroesch commented on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-870097981


   @giuseros I would prefer to also use an array of structs, but my goal is to 
just get us to use the same data structure everywhere since there are multiple 
places where we are storing different versions of the same data without a 
shared data structure. We could move the Array outside and simplify the struct 
but it would be good to move memory planning to also use the same structure. 






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8363:
URL: https://github.com/apache/tvm/pull/8363#issuecomment-870093432


   Still a draft, there still is a problem with `src/target/target.cc` 
construction. 
   
   Spawn breaks pointer equality which was the assumption. We now need a deep 
equality thing for "tvm::Target" I think.






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8347: Switch threading model to `fork` on macOS

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8347:
URL: https://github.com/apache/tvm/pull/8347#issuecomment-870092802


   Ok, so the actual tuning system is built with duct tape, but I think the 
minimum changes to make this work is actually easier than initially thought: 
   
   https://github.com/apache/tvm/pull/8363






[GitHub] [tvm] AndrewZhaoLuo opened a new pull request #8363: [Tuning] Allow multiprocessing spawn to work (on macOS llvm at least)

2021-06-28 Thread GitBox


AndrewZhaoLuo opened a new pull request #8363:
URL: https://github.com/apache/tvm/pull/8363


   






[GitHub] [tvm] comaniac commented on a change in pull request #8352: [Relay][Parser] Support slash in identifier.

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8352:
URL: https://github.com/apache/tvm/pull/8352#discussion_r660150398



##
File path: tests/python/relay/test_ir_text_printer.py
##
@@ -284,5 +284,12 @@ def test_optional_info():
 assert txt.count("/* ty=int32 */") == 3
 
 
+def test_slash_in_identifier():
+x = relay.var("base/x")
+y = relay.var("base/y")
+z = x + y
+txt = astext(z)

Review comment:
   You might need to check the output of `astext` to make sure the names 
are preserved.








[GitHub] [tvm] zackcquic commented on pull request #8352: [Relay][Parser] Support slash in identifier.

2021-06-28 Thread GitBox


zackcquic commented on pull request #8352:
URL: https://github.com/apache/tvm/pull/8352#issuecomment-870075050


   cc @areusch @tkonolige @comaniac
   Thanks a lot.






[GitHub] [tvm] schilkunda-amba opened a new pull request #8362: Relay to onnx conversion

2021-06-28 Thread GitBox


schilkunda-amba opened a new pull request #8362:
URL: https://github.com/apache/tvm/pull/8362


   * Added support for following ops: Sigmoid, Copy, Round and Cast
   * Fixed issue in Pool conversion






[GitHub] [tvm] mbrookhart commented on a change in pull request #8313: [Metal] Add pass for splitting kernel with huge number of args

2021-06-28 Thread GitBox


mbrookhart commented on a change in pull request #8313:
URL: https://github.com/apache/tvm/pull/8313#discussion_r660126055



##
File path: src/relay/transforms/pattern_utils.h
##
@@ -700,6 +700,13 @@ Expr StopFusion(Expr data);
 
 Expr CastHint(Expr data, DataType dtype);
 
+inline Expr Concat(Expr x, int axis = 0) {
+  static const Op& op = Op::Get("concatenate");
  auto attrs = make_object<ConcatenateAttrs>();
+  attrs->axis = axis;
+  return Call(op, {x}, Attrs(attrs), {});
+}

Review comment:
   Maybe just use the version here instead of duplicating? 
https://github.com/apache/tvm/blob/2915349458619deb5bd9b03670b33205938a8c02/src/relay/op/make_op.h#L45








[GitHub] [tvm] masahi edited a comment on pull request #8313: [Metal] Add pass for splitting kernel with huge number of args

2021-06-28 Thread GitBox


masahi edited a comment on pull request #8313:
URL: https://github.com/apache/tvm/pull/8313#issuecomment-870050776


   Not exactly, but I've dealt with a similar issue. My mitigation was to limit 
the maximum fusion depth, which breaks large parameter kernels into smaller 
ones. But that is not guaranteed to work and not predictable. I can imagine 
that having a pass like this that allows more fine-grained controls might be 
necessary in some cases.
   
   @echuraev FYI you can cap the fuse depth by 
https://github.com/apache/tvm/blob/720e7b1ebd9b789a1100dee7536d0633c7941dd1/tests/python/relay/test_pass_fuse_ops.py#L755






[GitHub] [tvm] masahi commented on pull request #8313: [Metal] Add pass for splitting kernel with huge number of args

2021-06-28 Thread GitBox


masahi commented on pull request #8313:
URL: https://github.com/apache/tvm/pull/8313#issuecomment-870050776


   Not exactly, but I've dealt with a similar issue. My mitigation was to limit 
the maximum fusion depth, which breaks large parameter kernels into smaller 
ones. But that is not guaranteed to work and not predictable. I can imagine 
that having a pass like this that allows more fine-grained controls might be 
necessary in some cases.






[GitHub] [tvm] MarisaKirisame commented on a change in pull request #8266: [Bugfix] [tir] do not simplify 'Any() - Any()' to 0

2021-06-28 Thread GitBox


MarisaKirisame commented on a change in pull request #8266:
URL: https://github.com/apache/tvm/pull/8266#discussion_r660116421



##
File path: src/tir/analysis/deep_equal.cc
##
@@ -59,6 +59,9 @@ bool ExprDeepEqual::operator()(const PrimExpr& lhs, const 
PrimExpr& rhs) const {
auto* prhs = rhs.as<IntImmNode>();
 return plhs->dtype == prhs->dtype && plhs->value == prhs->value;
   }
+  if (lhs.as<AnyNode>()) {
+return lhs.same_as(rhs);

Review comment:
   ```suggestion
   return false;
   ```
   the code that checks `same_as` already happens before this.




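The rule under review makes deep equality treat `Any` nodes as equal only when they are literally the same node, so `Any() - Any()` can no longer fold to 0. A hypothetical Python sketch of that rule (not TVM's C++ visitor, just the comparison logic):

```python
class Any:
    """Wildcard dimension: two distinct Any() instances are not known equal."""


class IntImm:
    """Integer immediate: compared by value."""

    def __init__(self, value):
        self.value = value


def deep_equal(lhs, rhs):
    if lhs is rhs:
        return True   # same node: trivially equal
    if isinstance(lhs, Any) or isinstance(rhs, Any):
        return False  # distinct Any nodes must never compare equal
    if isinstance(lhs, IntImm) and isinstance(rhs, IntImm):
        return lhs.value == rhs.value
    return False


a = Any()
assert deep_equal(a, a)                     # identity still succeeds
assert not deep_equal(Any(), Any())         # so Any() - Any() cannot fold to 0
assert deep_equal(IntImm(3), IntImm(3))     # value equality unaffected
```

Returning `False` unconditionally for non-identical `Any` nodes (as the suggestion proposes) works because the identity check already ran first.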




[GitHub] [tvm] giuseros commented on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


giuseros commented on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-869972638


   Hi @areusch , I can reuse that in theory, but `TempStorageInfo` is only a 
local structure used to have on-demand memory allocation, i.e.,  I don't need 
to pass the `TempStorageInfo` around. This was the reason you suggested to not 
use struct of arrays here: 
https://github.com/apache/tvm/pull/8096#discussion_r638171243. I personally 
agreed with your comment and would prefer the way it is (i.e., array of 
structs), because using a struct of arrays makes the code less readable. 
However, if you guys feel strongly about reusing the `StorageInfo` struct of 
arrays, I can surely do it






[GitHub] [tvm] tkonolige commented on pull request #7983: [PROFILING] Use PAPI to collect hardware performance counters on CPU and CUDA

2021-06-28 Thread GitBox


tkonolige commented on pull request #7983:
URL: https://github.com/apache/tvm/pull/7983#issuecomment-869949125


   @leandron @tqchen @areusch Can you review?






[GitHub] [tvm] Lunderberg commented on a change in pull request #8343: [Unittests] Added a meta-test for tvm.testing.fixture behavior in case of a broken fixture.

2021-06-28 Thread GitBox


Lunderberg commented on a change in pull request #8343:
URL: https://github.com/apache/tvm/pull/8343#discussion_r660038825



##
File path: tests/python/unittest/test_tvm_testing_features.py
##
@@ -145,5 +145,37 @@ def test_cached_count(self):
 assert self.cached_calls == len(self.param1_vals)
 
 
+class TestBrokenFixture:
+# Tests that use a fixture that throws an exception fail, and are
+# marked as setup failures.  The tests themselves are never run.
+# This behavior should be the same whether or not the fixture
+# results are cached.
+
+num_uses_broken_uncached_fixture = 0
+num_uses_broken_cached_fixture = 0
+
+@tvm.testing.fixture
+def broken_uncached_fixture(self):
+raise RuntimeError("Intentionally broken fixture")
+
+@pytest.mark.xfail(True, reason="Broken fixtures should result in a 
failing setup", strict=True)
+def test_uses_broken_uncached_fixture(self, broken_uncached_fixture):
+type(self).num_uses_broken_uncached_fixture += 1
+
+def test_num_uses_uncached(self):
+assert self.num_uses_broken_uncached_fixture == 0
+
+@tvm.testing.fixture(cache_return_value=True)
+def broken_cached_fixture(self):
+raise RuntimeError("Intentionally broken fixture")
+
+@pytest.mark.xfail(True, reason="Broken fixtures should result in a 
failing setup", strict=True)
+def test_uses_broken_cached_fixture(self, broken_cached_fixture):
+type(self).num_uses_broken_cached_fixture += 1
+
+def test_num_uses_cached(self):

Review comment:
   Currently, it will not.  We'd need to also add a call to 
`pytest_xdist_make_scheduler` ([example stack overflow 
post](https://stackoverflow.com/a/59504228)) in order to force these tests to 
be run in order on a single node.








[GitHub] [tvm] areusch commented on pull request #7742: Contributing the STM32 port

2021-06-28 Thread GitBox


areusch commented on pull request #7742:
URL: https://github.com/apache/tvm/pull/7742#issuecomment-869902846


   @stoa I've taken a look at your `c_backend_api`/`c_runtime_api` 
reorganization. I think it actually makes sense, but it's a bit more extensive 
than I originally imagined (I thought we were just discussing moving 
`TVMAPISetLastError` functionality). I think it would be best to open a 
separate PR for that, as we'll need to loop in some of the core committers for 
that, and this PR's thread is quite long. Would you be up for doing that?






[GitHub] [tvm] areusch commented on pull request #7742: Contributing the STM32 port

2021-06-28 Thread GitBox


areusch commented on pull request #7742:
URL: https://github.com/apache/tvm/pull/7742#issuecomment-869893211


   hey @stoa that look like a flake to me. can you push an empty commit or git 
commit --amend to retrigger?






[GitHub] [tvm] areusch commented on a change in pull request #8343: [Unittests] Added a meta-test for tvm.testing.fixture behavior in case of a broken fixture.

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8343:
URL: https://github.com/apache/tvm/pull/8343#discussion_r659995934



##
File path: tests/python/unittest/test_tvm_testing_features.py
##
@@ -145,5 +145,37 @@ def test_cached_count(self):
 assert self.cached_calls == len(self.param1_vals)
 
 
+class TestBrokenFixture:
+# Tests that use a fixture that throws an exception fail, and are
+# marked as setup failures.  The tests themselves are never run.
+# This behavior should be the same whether or not the fixture
+# results are cached.
+
+num_uses_broken_uncached_fixture = 0
+num_uses_broken_cached_fixture = 0
+
+@tvm.testing.fixture
+def broken_uncached_fixture(self):
+raise RuntimeError("Intentionally broken fixture")
+
+@pytest.mark.xfail(True, reason="Broken fixtures should result in a 
failing setup", strict=True)
+def test_uses_broken_uncached_fixture(self, broken_uncached_fixture):
+type(self).num_uses_broken_uncached_fixture += 1
+
+def test_num_uses_uncached(self):
+assert self.num_uses_broken_uncached_fixture == 0
+
+@tvm.testing.fixture(cache_return_value=True)
+def broken_cached_fixture(self):
+raise RuntimeError("Intentionally broken fixture")
+
+@pytest.mark.xfail(True, reason="Broken fixtures should result in a 
failing setup", strict=True)
+def test_uses_broken_cached_fixture(self, broken_cached_fixture):
+type(self).num_uses_broken_cached_fixture += 1
+
+def test_num_uses_cached(self):

Review comment:
   if we [parallelize 
testing](https://pypi.org/project/pytest-xdist/#parallelization), will this 
inter-test dependency work?








[GitHub] [tvm] comaniac commented on a change in pull request #8234: [Matmul] Add matmul op

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8234:
URL: https://github.com/apache/tvm/pull/8234#discussion_r659979731



##
File path: python/tvm/relay/op/nn/_nn.py
##
@@ -1160,21 +1186,46 @@ def batch_flatten_shape_func(attrs, inputs, _):
 
 
 @script
-def _dense_shape_func(data_shape, weight_shape):
+def _matmul_shape_func(data_shape, weight_shape, data_transposed, weight_transposed):

Review comment:
   nit: the two inputs of matmul are not necessarily `data` and `weight`, although that's almost always the case in DNNs. We may consider calling them `transpose_a` and `transpose_b`, as cuDNN does.

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -44,6 +44,10 @@
 
 __all__ = ["from_tensorflow"]
 
+# By default, TVM converts `tf.matmul` to the `nn.dense` op with the data
+# tensor non-transposed and the weight tensor transposed
+_USE_DENSE_INSTEAD_OF_MATMUL = True

Review comment:
   ```suggestion
   # The default configurations of the Relay TensorFlow frontend.
   TF_DEFAULT_CONFIGS = {
 # By default, TVM converts `tf.matmul` to `transpose(weight) + nn.dense`, which introduces
 # unnecessary overhead in the weight transpose. Change this flag to False to directly convert
 # to `nn.matmul` to get rid of the overhead. However, please note that `nn.matmul` is
 # experimental, so it may have some performance issues.
 "use_dense": True,
   }
   ```
   
   I reviewed the primary entrypoint and feel we may need to make it more general, as we may have other configurations in the future. Also, I used the name "default" to imply that it may be overwritten by user-specified values.
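A hedged sketch of how a defaults dict like the one suggested above might be merged with user-supplied values; `resolve_configs` is illustrative, not TVM API:

```python
# Illustrative only: the dict name follows the suggestion above; the helper is hypothetical.
TF_DEFAULT_CONFIGS = {
    "use_dense": True,
}

def resolve_configs(user_configs=None):
    # Later keys win in a dict merge, so user values overwrite the defaults.
    merged = {**TF_DEFAULT_CONFIGS, **(user_configs or {})}
    unknown = set(merged) - set(TF_DEFAULT_CONFIGS)
    if unknown:
        raise ValueError(f"Unknown frontend config(s): {sorted(unknown)}")
    return merged

print(resolve_configs({"use_dense": False}))  # {'use_dense': False}
```

The merge pattern keeps "default" semantics explicit: anything not overridden by the caller falls back to the frontend's defaults, and unexpected keys fail loudly.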

##
File path: python/tvm/topi/x86/dense.py
##
@@ -281,72 +281,121 @@ def _callback(op):
 return s
 
 
-def dense_blas_common(cfg, data, weight, bias, out_dtype, lib):
-"""Compute dense using a BLAS library"""
+def matmul_blas_common(cfg, data, weight, bias, out_dtype, data_transposed, weight_transposed, lib):
+"""Compute matmul/dense using a BLAS library"""
 M, K = get_const_tuple(data.shape)
 N, _ = get_const_tuple(weight.shape)
 if isinstance(M, int) and isinstance(K, int) and isinstance(N, int):
 cfg.add_flop(M * K * N * 2)
 if data.dtype == "uint8" and weight.dtype == "int8" and out_dtype == "int32":
 if not hasattr(lib, "matmul_u8s8s32"):
 raise NotImplementedError(
-f"Dense with {lib.__name__} for {data.dtype} is not supported "
+f"Matmul/Dense with {lib.__name__} for {data.dtype} is not supported "
 "(matmulu8s8s32 not implemented)"
 )
-C = lib.matmul_u8s8s32(data, weight, False, True, dtype=out_dtype)
+C = lib.matmul_u8s8s32(data, weight, data_transposed, weight_transposed, dtype=out_dtype)
 elif data.dtype == "float32" or data.dtype == "float64":
-C = lib.matmul(data, weight, False, True)
+C = lib.matmul(data, weight, data_transposed, weight_transposed)
 else:
-raise NotImplementedError(f"Dense with {lib.__name__} for {data.dtype} is not supported")
+raise NotImplementedError(
+f"Matmul/Dense with {lib.__name__} for {data.dtype} is not supported"
+)
 
 if bias is not None:
 C = te.compute(C.shape, lambda i, j: C[i, j] + bias[j].astype(out_dtype), tag=tag.BROADCAST)
 return C
 
 
+def schedule_matmul_blas_common(outs):
+"""Default matmul schedule for BLAS library"""
+s = te.create_schedule([x.op for x in outs])
+te.schedule.AutoInlineInjective(s)
+
+for out in outs:
+if "dense" not in out.op.tag and "matmul" not in out.op.tag:
+schedule_injective_from_existing(s, out)
+return s
+
+
 @autotvm.register_topi_compute("dense_cblas.x86")
 def dense_cblas(cfg, data, weight, bias=None, out_dtype=None):
 """Compute dense using a cblas"""

Review comment:
   ditto

##
File path: python/tvm/topi/x86/dense.py
##
@@ -281,72 +281,121 @@ def _callback(op):
 return s
 
 
-def dense_blas_common(cfg, data, weight, bias, out_dtype, lib):
-"""Compute dense using a BLAS library"""
+def matmul_blas_common(cfg, data, weight, bias, out_dtype, data_transposed, weight_transposed, lib):
+"""Compute matmul/dense using a BLAS library"""
 M, K = get_const_tuple(data.shape)
 N, _ = get_const_tuple(weight.shape)
 if isinstance(M, int) and isinstance(K, int) and isinstance(N, int):
 cfg.add_flop(M * K * N * 2)
 if data.dtype == "uint8" and weight.dtype == "int8" and out_dtype == "int32":
 if not hasattr(lib, "matmul_u8s8s32"):
 raise NotImplementedError(
-f"Dense with {lib.__name__} for {data.dtype} is not supported "
+f"Matmul/Dense with {lib.__name__} for {data.dtype} is not supported "
 "(matmulu8s8s32 not implemented)"
 )
-C = lib.matmul_u8s8s32(data, weight, False, True, dtype=out_dtype)
+C

[GitHub] [tvm] areusch commented on a change in pull request #8345: [Graph Debug Executor] Add exception for profile with remote devices

2021-06-28 Thread GitBox


areusch commented on a change in pull request #8345:
URL: https://github.com/apache/tvm/pull/8345#discussion_r659994309



##
File path: python/tvm/contrib/debugger/debug_executor.py
##
@@ -105,6 +106,7 @@ def __init__(self, module, device, graph_json_str, dump_root):
 self._profile = module["profile"]
 graph_executor.GraphModule.__init__(self, module)
 self._create_debug_env(graph_json_str, device)
+self._device = device[0]

Review comment:
   i'm worried that the cpu device might not be first in the list. maybe we should just store a local flag e.g. `self._has_remote_device = any(d.device_type >= base.RPC_SESS_MASK for d in device)`
   
   and use that?
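For context, a hedged sketch of the flag being suggested; in TVM the mask lives at `tvm.rpc.base.RPC_SESS_MASK` and devices are runtime device objects, both faked here:

```python
# Illustrative stand-ins: RPC_SESS_MASK is 128 in TVM's RPC base module;
# FakeDevice mimics a device object exposing `device_type`.
RPC_SESS_MASK = 128

class FakeDevice:
    def __init__(self, device_type):
        self.device_type = device_type

def has_remote_device(devices):
    # True if any device in the list sits behind an RPC session,
    # regardless of where it appears in the list.
    return any(d.device_type >= RPC_SESS_MASK for d in devices)

print(has_remote_device([FakeDevice(1)]))                   # False: local CPU only
print(has_remote_device([FakeDevice(1), FakeDevice(130)]))  # True: remote device present
```

Scanning the whole list avoids the ordering assumption that worries the reviewer above.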








[GitHub] [tvm] areusch edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


areusch edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-869882513


   hey @giuseros, i think @jroesch was hoping with #8297 that you could re-use 
that common StorageInfo struct for your purposes here. is it possible to do 
that? sorry if this was unclear from that PR.






[GitHub] [tvm] icemelon9 commented on a change in pull request #8266: [Bugfix] [tir] do not simplify 'Any() - Any()' to 0

2021-06-28 Thread GitBox


icemelon9 commented on a change in pull request #8266:
URL: https://github.com/apache/tvm/pull/8266#discussion_r659968478



##
File path: tests/python/unittest/test_arith_rewrite_simplify.py
##
@@ -275,6 +275,7 @@ def test_add_index_simplify():
 def test_sub_index_simplify():
 ck = RewriteChecker()
 x, y, z = te.var("x"), te.var("y"), te.var("z")
+a, b, c = tvm.tir.Any(), tvm.tir.Any(), tvm.tir.Any()

Review comment:
   ```suggestion
   a, b = tvm.tir.Any(), tvm.tir.Any()
   ```
   `c` is no longer needed now








[GitHub] [tvm] areusch commented on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


areusch commented on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-869882513


   hey @giuseros, i think @jroesch was hoping with #8297 that you could re-use 
that common StorageInfo struct for your purposes here. is it possible to do 
that?






[GitHub] [tvm] giuseros commented on pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


giuseros commented on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-869880637


   Hi @manupa-arm , @jroesch , @areusch , 
   I just rebased and had to change `StorageInfo` to `TempStorageInfo` because 
of a conflict. Please, let me know if this is OK for you!
   
   Thanks,
   Giuseppe






[GitHub] [tvm] AnastasiaStulova commented on pull request #8361: [Relay] Fix index order in conv2d computation for Arm CPU.

2021-06-28 Thread GitBox


AnastasiaStulova commented on pull request #8361:
URL: https://github.com/apache/tvm/pull/8361#issuecomment-869845657


   @giuseros






[GitHub] [tvm] AnastasiaStulova opened a new pull request #8361: [Relay] Fix index order in conv2d computation for Arm CPU.

2021-06-28 Thread GitBox


AnastasiaStulova opened a new pull request #8361:
URL: https://github.com/apache/tvm/pull/8361


   When dilation is larger than 1 in conv2d with the NHWC
   layout, the ordering of indexes used to access the data
   array in the convolution computation is incorrect.
   
   'data_vec' is defined as
   
   lambda n, oho, owo, kh, kw, ic, ohi, owi:
   
   but accessed as
   
   data_vec[n, oho, owo, kh, kw, ohi, owi, ic]
   
   This patch fixes the order of indexes.
   
   =
   
   **Question:** While the bug is easily observable I wonder if
   there is some way to improve testing of this even if it might
   not be possible to add in CI straight away.
   I used `tests/python/topi/python/test_topi_conv2d_nhwc.py`
   to catch and test the issue. If it makes sense I could prepare
   another change adding `arm_cpu` in the list of test devices
   and/or even running using RPC that can be activated by
   passing a certain flag.
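Not TVM code, but a minimal illustration of the class of bug described above: a buffer laid out with one axis order, read back with the trailing axes swapped, passes shape checks yet reads the wrong elements.

```python
import numpy as np

# Mirror the layout from the lambda above: data_vec[n, oho, owo, kh, kw, ic, ohi, owi].
# The last three dims are made equal so both index orders stay in bounds --
# exactly why this kind of bug can slip past shape checking silently.
shape = (1, 2, 2, 3, 3, 2, 2, 2)
data_vec = np.arange(np.prod(shape)).reshape(shape)

ic_i, ohi_i, owi_i = 1, 0, 1
correct = data_vec[0, 1, 1, 2, 2, ic_i, ohi_i, owi_i]  # indices match the layout
buggy = data_vec[0, 1, 1, 2, 2, ohi_i, owi_i, ic_i]    # trailing axes swapped, as in the bug

print(int(correct), int(buggy))  # 285 283 -- two different elements are read
```

This also suggests why the PR's question about testing matters: with equal trailing dimensions the mismatch produces wrong values rather than an indexing error, so only a numerical comparison catches it.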
   
   






[GitHub] [tvm] comaniac commented on a change in pull request #8076: [BYOC][NNAPI]: Implement basic structure of Android NNAPI BYOC

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8076:
URL: https://github.com/apache/tvm/pull/8076#discussion_r659946074



##
File path: 
python/tvm/contrib/target/android_nnapi/relayir_to_nnapi_converter/_export_object/helper.py
##
@@ -0,0 +1,28 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Namespace for helper objects/methods that's not part of the JSON
+content. This includes the symbol table, checking methods, ...
+"""
+from .operand import Operand as _Operand
+
+
+class Helper:

Review comment:
   analysis.py:XXAnalyzer seems much better to me.








[GitHub] [tvm] comaniac commented on a change in pull request #8076: [BYOC][NNAPI]: Implement basic structure of Android NNAPI BYOC

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8076:
URL: https://github.com/apache/tvm/pull/8076#discussion_r659945572



##
File path: 
python/tvm/contrib/target/android_nnapi/relayir_to_nnapi_converter/export_object.py
##
@@ -0,0 +1,304 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""ExportObject, a dict-like structure providing infrastructure for
+Android NNAPI codegen
+"""
+import struct
+import copy
+from .error import assert_anc_compatibility
+from ._export_object import Helper as _Helper
+
+
+class ExportObject:
+"""A dict-like structure providing infrastructure for Android NNAPI codegen
+
+Parameters
+--
+options: dict
+The converter option dict
+
+"""
+
+_SCALAR_RELAY_NNAPI_TYPE_MAP = {
+"bool": "BOOL",
+"float16": "FLOAT16",
+"float32": "FLOAT32",
+"int32": "INT32",
+"uint32": "UINT32",
+}
+
+_TENSOR_RELAY_NNAPI_TYPE_MAP = {
+"bool": "TENSOR_BOOL",
+"float16": "TENSOR_FLOAT16",
+"float32": "TENSOR_FLOAT32",
+"int32": "TENSOR_INT32",
+"uint32": "TENSOR_UINT32",
+}
+
+def __init__(self, options):
+self.helper = _Helper(self)
+self._json = {
+"constants": [],
+"inputs": [],
+"memories": [],
+"operands": [],
+"operations": [],
+"outputs": [],
+"types": [],
+}
+self._options = options
+
+def __getitem__(self, key):
+return self._json[key]
+
+def __setitem__(self, key, value):
+self._json[key] = value
+
+def asjson(self):
+"""Return the content of ExportObject as a primitive Python dict
+
+Returns
+---
+json: dict
+The content of ExportObject as a primitive Python dict
+
+"""
+return copy.deepcopy(self._json)
+
+def get_type_idx(self, tipe):
+"""Register and lookup type index in export_obj["types"]
+
+Parameters
+--
+tipe: ((int, ...), str)
+type (shape, dtype) to look up
+
+Returns
+---
+index: int
+type index in export object
+"""
+tipe = (tuple(map(int, tipe[0])), str(tipe[1]))  # canonicalize
+shape, dtype = tipe
+assert_anc_compatibility(
+dtype in ["bool", "float16", "float32", "int32", "uint32"],
+f"Unsupported data type {dtype}",
+)
+
+if self.helper.type_to_idx_map.get(tipe, None) is None:  # create new type
+shape, dtype = tipe
+
+if dtype == "bool":
+assert_anc_compatibility(
+self._options["target"]["api_level"] >= 29,
+f"Boolean is not supported for Android API{ self._options['target']['api_level'] }",  # pylint: disable=line-too-long
+)
+
+new_type = {}
+if len(shape) == 0:
+new_type["type"] = self._SCALAR_RELAY_NNAPI_TYPE_MAP[dtype]
+else:
+new_type["shape"] = list(shape)
+new_type["type"] = self._TENSOR_RELAY_NNAPI_TYPE_MAP[dtype]
+
+self["types"].append(new_type)
+self.helper.type_to_idx_map[tipe] = len(self["types"]) - 1
+return self.helper.type_to_idx_map[tipe]
+
+@staticmethod
+def _canonicalize_scalar_constant(dtype, val):
+# skip canonicalizing strings as they may carry specific meanings,
+# e.g. macro-defined values
+if not isinstance(val, str):
+if dtype == "float16":
+if isinstance(val, float):
+val = hex(
+struct.unpack("H", struct.pack("e", val))[0]
+)  # for float16 we use uint16_t in C, hence the conversion
+elif dtype == "float32":
+val = float(val)
+elif dtype == "int32":
+val = int(val)
+elif dtype == "uint32":
+val = int(val)
+elif dtype == "bool":
+val = bool(val)
+else:
+assert False, "Unreachable"
+return val
+
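As a standalone check of the float16 bit-reinterpretation used in `_canonicalize_scalar_constant` above (stdlib only: the `"e"` struct format packs IEEE-754 binary16, `"H"` unpacks an unsigned 16-bit integer):

```python
import struct

def float16_bits(val):
    # Reinterpret a Python float as the 16 raw bits of its IEEE-754
    # half-precision encoding, matching the uint16_t representation used in C.
    return struct.unpack("H", struct.pack("e", val))[0]

print(hex(float16_bits(1.0)))   # 0x3c00
print(hex(float16_bits(-2.0)))  # 0xc000
```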

[GitHub] [tvm] comaniac commented on a change in pull request #8076: [BYOC][NNAPI]: Implement basic structure of Android NNAPI BYOC

2021-06-28 Thread GitBox


comaniac commented on a change in pull request #8076:
URL: https://github.com/apache/tvm/pull/8076#discussion_r659944789



##
File path: 
python/tvm/contrib/target/android_nnapi/relayir_to_nnapi_converter/__init__.py
##
@@ -0,0 +1,60 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Converts Relay IR subgraph to Android NNAPI source code
+"""
+import tvm
+from .converter import Converter
+
+
+def convert_relayir_to_nnapi(func):

Review comment:
   We actually prefer the reverse: implementing most of this in C++ instead of Python for better performance. However, given that would be a huge effort, it should be fine for this specific backend to be all in Python IMHO.








[GitHub] [tvm] mbrookhart commented on pull request #8313: [Metal] Add pass for splitting kernel with huge number of args

2021-06-28 Thread GitBox


mbrookhart commented on pull request #8313:
URL: https://github.com/apache/tvm/pull/8313#issuecomment-869835330


   cc @masahi 
   
   I remember Masa doing something like this for Vulkan at one point, but I'm not sure if that was a branch or if it ever got merged. If it's merged somewhere, maybe we should combine the two and make this a generally available tool?






[GitHub] [tvm] manupa-arm commented on a change in pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


manupa-arm commented on a change in pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#discussion_r659942293



##
File path: src/tir/transforms/legalize_packed_calls.cc
##
@@ -60,30 +60,41 @@ class PackedCallLegalizer : public StmtExprMutator {
 if (call) {
   if (call->op.same_as(builtin::tvm_call_cpacked())) {
 Array<PrimExpr> packed_args{call->args[0]};
+std::vector<tir::Var> tvm_values;
 for (unsigned i = 1; i < call->args.size(); i++) {
   // No need to pack inputs of the prim_func
   if (inputs_[call->args[i]] == true) {
 packed_args.push_back(call->args[i]);
   } else {
 // Pack the argument inside a TVMValue
-auto sid_array = tir::Var("tvm_value", DataType::Handle());
-tir::Stmt set_struct_stmt = tir::Evaluate(
+std::stringstream ss;
+ss << "tvm_value_" << tvm_value_index_++;
+auto sid_array = tir::Var(ss.str(), DataType::Handle());
+tvm_values.push_back(sid_array);
+
+new_stmts.push_back(tir::Evaluate(
 tvm::tir::Call(DataType::Handle(), tvm::tir::builtin::tvm_struct_set(),
-   {sid_array, 0, tir::builtin::kArrData, call->args[i]}));
-new_stmts.push_back(LetStmt(sid_array, StackAlloca("array", 1), set_struct_stmt));
+   {sid_array, 0, tir::builtin::kArrData, call->args[i]})));
 packed_args.push_back(sid_array);
   }
 }
-// Finally, evaluate the packed call and return a sequential statement
+// Evaluate the packed call
 new_stmts.push_back(tir::Evaluate(tir::Call(call->dtype, call->op, packed_args)));
-return tir::SeqStmt(new_stmts);
+tir::Stmt call_stmt = tir::SeqStmt(new_stmts);
+
+// Allocate the TVMValues on the stack and define the variables
+for (auto v : tvm_values) {
+  call_stmt = LetStmt(v, StackAlloca("array", 1), call_stmt);

Review comment:
   yes, that sounds good! (not sure we want a separate pool but I can see them pooled to 'a' workspace buffer)








[tvm] branch main updated (5e75ffa -> 2915349)

2021-06-28 Thread mbrookhart
This is an automated email from the ASF dual-hosted git repository.

mbrookhart pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 5e75ffa  [RPC] Fix android rpc connection to tracker (#8327)
 add 2915349  [Onnx] Support Bidirectional RNNs (#8337)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 331 +++--
 tests/python/frontend/onnx/test_forward.py | 551 +
 2 files changed, 543 insertions(+), 339 deletions(-)


[GitHub] [tvm] mbrookhart commented on pull request #8337: [Onnx] Support Bidirectional RNNs

2021-06-28 Thread GitBox


mbrookhart commented on pull request #8337:
URL: https://github.com/apache/tvm/pull/8337#issuecomment-869832330


   Thanks @AndrewZhaoLuo @jwfromm 






[GitHub] [tvm] mbrookhart merged pull request #8337: [Onnx] Support Bidirectional RNNs

2021-06-28 Thread GitBox


mbrookhart merged pull request #8337:
URL: https://github.com/apache/tvm/pull/8337


   






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8347: Switch threading model to `fork` on macOS

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8347:
URL: https://github.com/apache/tvm/pull/8347#issuecomment-869829718


   I'm going to spend a little bit of time today understanding what needs to
   change to make spawn work. I think just some objects need to be picklable.
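For background on the fork-versus-spawn issue being discussed (macOS defaults to `spawn` since Python 3.8, so any object shipped to a worker must survive pickling), a minimal non-TVM sketch:

```python
import multiprocessing as mp
import pickle

class NotPicklable:
    # Stands in for objects holding raw C handles -- the usual reason
    # pickling fails when worker arguments must cross a process boundary.
    def __reduce__(self):
        raise TypeError("cannot pickle NotPicklable")

def can_pickle(obj):
    try:
        pickle.dumps(obj)
        return True
    except TypeError:
        return False

# Under "fork" the child inherits the parent's memory, so nothing is pickled;
# under "spawn" (the macOS default since Python 3.8) this object could not be
# shipped to a worker at all.
print(can_pickle(NotPicklable()))          # False
print(sorted(mp.get_all_start_methods()))  # platform-dependent
```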
   
   On Mon, Jun 28, 2021, 9:24 AM Andrew Luo ***@***.***> wrote:
   
   > Hmm I personally don't think this is the right move. This has serious
   > side effects on program behavior.
   >
   > For example, if a user is running other scripts besides TVM in the same
   > session it could cause unexpected behavior. I think the best thing is
   > having the users manually change things and we focus on giving the user a
   > proper error message if we detect this error is applicable.
   >
   > On Mon, Jun 28, 2021, 2:44 AM Leandro Nunes ***@***.***>
   > wrote:
   >
   >> ***@***. commented on this pull request.
   >>
   >> I understand this will fix the immediate problem reported. I just wonder
   >> whether this shouldn't be fixed at import tvm level, considering the
   >> usages via the Python API.
   >>
   >> In case there is agreement to fix this here, I'm also happy to take this PR.
   >>
   >> also cc @comaniac 
   >>
   >> —
   >> You are receiving this because you were mentioned.
   >> Reply to this email directly, view it on GitHub
   >> ,
   >> or unsubscribe
   >> 

   >> .
   >>
   >
   






[GitHub] [tvm] jwfromm commented on pull request #8327: [RPC] Fix android rpc connection to tracker

2021-06-28 Thread GitBox


jwfromm commented on pull request #8327:
URL: https://github.com/apache/tvm/pull/8327#issuecomment-869829815


   Thanks @echuraev!






[tvm] branch main updated (36fc525 -> 5e75ffa)

2021-06-28 Thread jwfromm
This is an automated email from the ASF dual-hosted git repository.

jwfromm pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 36fc525  [AOT] Name mangling in AOT (#8014)
 add 5e75ffa  [RPC] Fix android rpc connection to tracker (#8327)

No new revisions were added by this update.

Summary of changes:
 .../main/java/org/apache/tvm/rpc/ConnectTrackerServerProcessor.java| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)


[GitHub] [tvm] jwfromm merged pull request #8327: [RPC] Fix android rpc connection to tracker

2021-06-28 Thread GitBox


jwfromm merged pull request #8327:
URL: https://github.com/apache/tvm/pull/8327


   






[GitHub] [tvm] jwfromm commented on pull request #8313: [Metal] Add pass for splitting kernel with huge number of args

2021-06-28 Thread GitBox


jwfromm commented on pull request #8313:
URL: https://github.com/apache/tvm/pull/8313#issuecomment-869829030


   @mbrookhart can you take a look at this one?






[GitHub] [tvm] hogepodge closed pull request #8347: Switch threading model to `fork` on macOS

2021-06-28 Thread GitBox


hogepodge closed pull request #8347:
URL: https://github.com/apache/tvm/pull/8347


   






[GitHub] [tvm] hogepodge commented on pull request #8347: Switch threading model to `fork` on macOS

2021-06-28 Thread GitBox


hogepodge commented on pull request #8347:
URL: https://github.com/apache/tvm/pull/8347#issuecomment-869827913


   Yes, it's my understanding that the appropriate solution is to switch to 
using methods like POpenWorker, like in this patch. 
https://github.com/apache/tvm/pull/7889. I'm going to close this PR, but we 
should continue the discussion.






[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8347: Switch threading model to `fork` on macOS

2021-06-28 Thread GitBox


AndrewZhaoLuo commented on pull request #8347:
URL: https://github.com/apache/tvm/pull/8347#issuecomment-869825939


   Hmm I personally don't think this is the right move. This has serious
   side effects on program behavior.
   
   For example, if a user is running other scripts besides TVM in the same
   session it could cause unexpected behavior. I think the best thing is
   having the users manually change things and we focus on giving the user a
   proper error message if we detect this error is applicable.
   
   On Mon, Jun 28, 2021, 2:44 AM Leandro Nunes ***@***.***>
   wrote:
   
   > ***@***. commented on this pull request.
   >
   > I understand this will fix the immediate problem reported. I just wonder
   > whether this shouldn't be fixed at import tvm level, considering the
   > usages via the Python API.
   >
   > In case there is agreement to fix this here, I'm also happy to take this PR.
   >
   > also cc @comaniac 
   >
   > —
   > You are receiving this because you were mentioned.
   > Reply to this email directly, view it on GitHub
   > , or
   > unsubscribe
   > 

   > .
   >
   






[GitHub] [tvm] giuseros commented on a change in pull request #8096: Decoupling AOT from graph memory planner

2021-06-28 Thread GitBox


giuseros commented on a change in pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#discussion_r659914709



##
File path: src/tir/transforms/legalize_packed_calls.cc
##
@@ -60,30 +60,41 @@ class PackedCallLegalizer : public StmtExprMutator {
 if (call) {
   if (call->op.same_as(builtin::tvm_call_cpacked())) {
 Array<PrimExpr> packed_args{call->args[0]};
+std::vector<tir::Var> tvm_values;
 for (unsigned i = 1; i < call->args.size(); i++) {
   // No need to pack inputs of the prim_func
   if (inputs_[call->args[i]] == true) {
 packed_args.push_back(call->args[i]);
   } else {
 // Pack the argument inside a TVMValue
-auto sid_array = tir::Var("tvm_value", DataType::Handle());
-tir::Stmt set_struct_stmt = tir::Evaluate(
+std::stringstream ss;
+ss << "tvm_value_" << tvm_value_index_++;
+auto sid_array = tir::Var(ss.str(), DataType::Handle());
+tvm_values.push_back(sid_array);
+
+new_stmts.push_back(tir::Evaluate(
 tvm::tir::Call(DataType::Handle(), tvm::tir::builtin::tvm_struct_set(),
-   {sid_array, 0, tir::builtin::kArrData, call->args[i]}));
-new_stmts.push_back(LetStmt(sid_array, StackAlloca("array", 1), set_struct_stmt));
+   {sid_array, 0, tir::builtin::kArrData, call->args[i]})));
 packed_args.push_back(sid_array);
   }
 }
-// Finally, evaluate the packed call and return a sequential statement
+// Evaluate the packed call
 new_stmts.push_back(tir::Evaluate(tir::Call(call->dtype, call->op, 
packed_args)));
-return tir::SeqStmt(new_stmts);
+tir::Stmt call_stmt = tir::SeqStmt(new_stmts);
+
+// Allocate the TVMValues on the stack and define the variables
+for (auto v : tvm_values) {
+  call_stmt = LetStmt(v, StackAlloca("array", 1), call_stmt);

Review comment:
   Yes, I think it is a very good idea, and I would personally like a 
similar direction. @manupa-arm, what do you think?
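
   The diff above builds the packed-call sequence first and then folds one
   `LetStmt` per freshly named TVMValue around it, so every stack allocation
   scopes over the entire call sequence. A minimal sketch of that folding
   pattern, using plain-Python stand-ins (hypothetical classes, not TVM's
   actual `tvm.tir.LetStmt`/`tvm.tir.SeqStmt` API):

   ```python
   from dataclasses import dataclass
   from typing import List, Union

   # Hypothetical stand-ins for the TIR nodes named in the diff.
   @dataclass
   class Let:
       var: str                   # variable bound by this let
       value: str                 # e.g. the stack-allocation expression
       body: Union["Let", "Seq"]  # statement the binding is visible in

   @dataclass
   class Seq:
       stmts: List[str]           # sequential statements (as opaque strings)

   def legalize(call_stmts: List[str], tvm_values: List[str]) -> Union[Let, Seq]:
       """Build the call sequence first, then wrap one Let per TVMValue
       around it, so each binding scopes over the whole sequence."""
       stmt: Union[Let, Seq] = Seq(call_stmts)
       for v in tvm_values:
           stmt = Let(v, "StackAlloca('array', 1)", stmt)
       return stmt

   nested = legalize(["struct_set(tvm_value_0, arg)", "cpacked_call(...)"],
                     ["tvm_value_0"])
   ```

   With one TVMValue the result is a single `Let` whose body is the `Seq`;
   with several, the value wrapped last ends up outermost, which is harmless
   because the bindings are independent of one another.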








[GitHub] [tvm] areusch commented on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-28 Thread GitBox


areusch commented on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-869798956


   thanks @chiwwang , please address the CI failure:
   
   `/workspace/docs/tutorials/dev/use_pass_infra.rst:307: WARNING: undefined label: pass_instrument_section_tag`






[GitHub] [tvm] yzh119 opened a new pull request #8360: [Doc] Fix sphinx doc style for unordered list

2021-06-28 Thread GitBox


yzh119 opened a new pull request #8360:
URL: https://github.com/apache/tvm/pull/8360


   TVM documentation hides all bullets for unordered lists; https://github.com/tlc-pack/tlcpack-sphinx-addon/pull/3 fixed the issue.
   This PR bumps the version of tlcpack-sphinx-addon from 0.2.0 to 0.2.1.
   
   cc @tqchen @masahi 






[tvm] branch main updated (c586834 -> 36fc525)

2021-06-28 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from c586834  [AutoScheduler]Simplify the code (#8351)
 add 36fc525  [AOT] Name mangling in AOT (#8014)

No new revisions were added by this update.

Summary of changes:
 apps/microtvm/zephyr/aot_demo/src/main.c   |   4 +-
 include/tvm/runtime/module.h   |   2 +-
 python/tvm/micro/model_library_format.py   |  16 +-
 python/tvm/relay/backend/compile_engine.py |   6 +-
 python/tvm/relay/backend/graph_executor_codegen.py |   4 +-
 .../prj.conf => python/tvm/relay/backend/utils.py  |  30 +--
 python/tvm/relay/build_module.py   |  15 +-
 python/tvm/relay/transform/transform.py|   6 +-
 src/relay/backend/aot_executor_codegen.cc  |  30 ++-
 src/relay/backend/build_module.cc  |  15 +-
 src/relay/backend/compile_engine.cc|  19 +-
 src/relay/backend/compile_engine.h |   3 +-
 src/relay/backend/graph_executor_codegen.cc|  12 +-
 src/relay/backend/vm/compiler.cc   |   3 +-
 src/relay/transforms/partition_graph.cc|  85 +++-
 src/runtime/meta_data.h|  13 +-
 src/target/source/codegen_c_host.cc|   6 +-
 src/target/source/codegen_c_host.h |   2 +
 src/target/source/source_module.cc |  21 +-
 tests/cpp/microtvm_runtime_standalone_test.cc  |   2 +-
 tests/cpp/relay_build_module_test.cc   |   2 +-
 .../contrib/test_bnns/test_conv2d_patterns.py  |   6 +-
 tests/python/contrib/test_ethosn/test_networks.py  |   8 +-
 tests/python/contrib/test_tensorrt.py  |   6 +-
 .../contrib/test_vitis_ai/test_vitis_ai_codegen.py |   5 +-
 tests/python/relay/aot/aot_test.mk |   3 +-
 tests/python/relay/aot/aot_test_utils.py   | 235 +++--
 tests/python/relay/aot/test_crt_aot.py |  83 +++-
 tests/python/relay/test_json_runtime.py|  32 +--
 .../test_common.py => relay/test_name_mangling.py} |  27 +--
 tests/python/relay/test_op_fast_math.py|   2 +-
 tests/python/relay/test_pass_partition_graph.py|  62 +++---
 .../unittest/test_micro_model_library_format.py|  12 +-
 33 files changed, 556 insertions(+), 221 deletions(-)
 copy apps/microtvm/zephyr/aot_demo/prj.conf => python/tvm/relay/backend/utils.py (59%)
 copy tests/python/{driver/tvmc/test_common.py => relay/test_name_mangling.py} (59%)


[GitHub] [tvm] areusch merged pull request #8014: [AOT] Name mangling in AOT

2021-06-28 Thread GitBox


areusch merged pull request #8014:
URL: https://github.com/apache/tvm/pull/8014


   





