[GitHub] [tvm] comaniac closed issue #7732: [Bug][Auto-scheduler] generated cuda code

2021-03-24 Thread GitBox


comaniac closed issue #7732:
URL: https://github.com/apache/tvm/issues/7732


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on issue #7732: [Bug][Auto-scheduler] generated cuda code

2021-03-24 Thread GitBox


comaniac commented on issue #7732:
URL: https://github.com/apache/tvm/issues/7732#issuecomment-806393664


   You are not supposed to run the generated kernel on your own, so this is not a limitation to me. Since this is not a bug, I'm going to close this issue. For future questions, please post them to https://discuss.tvm.apache.org/
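
   For reference, the supported path is to apply the tuned schedule and run the result through the TVM runtime rather than extracting and launching the generated CUDA source by hand. A minimal sketch following the tune_conv2d_layer_cuda tutorial (the log file "conv2d.json" is assumed to come from a prior tuning run):

```python
import tvm
from tvm import te, topi, auto_scheduler

@auto_scheduler.register_workload
def conv2d_layer(N, H, W, CO, CI, KH, KW, stride, padding):
    data = te.placeholder((N, CI, H, W), name="data")
    kernel = te.placeholder((CO, CI, KH, KW), name="kernel")
    out = topi.nn.conv2d_nchw(data, kernel, stride, padding, dilation=1, out_dtype="float32")
    return [data, kernel, out]

target = tvm.target.Target("cuda")
task = auto_scheduler.SearchTask(
    func=conv2d_layer, args=(1, 7, 7, 512, 512, 3, 3, 1, 1), target=target
)

# Apply the best schedule found during tuning and build; the module is then
# driven by the TVM runtime, so no manual dim3 / kernel launch is required.
sch, args = task.apply_best("conv2d.json")
func = tvm.build(sch, args, target)
```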






[GitHub] [tvm] Haoran-Solar commented on issue #7732: [Bug][Auto-scheduler] generated cuda code

2021-03-24 Thread GitBox


Haoran-Solar commented on issue #7732:
URL: https://github.com/apache/tvm/issues/7732#issuecomment-806391681


   Thanks for the reply. I just started trying the auto-scheduler; I changed nothing in the tune_conv2d file, I just wanted to run a quick test.
   I used the params from the last layer of ResNet-50, just as the tutorial did.
   And I solved the problem: to run the generated code, I have to keep each dimension of a thread block at 64 threads or fewer.
   Which means:
   ```c++
   dim3 dimBlock(number);  // number <= 64; 64, 32, 16 all work
   ```
   Is that some special limit for auto-scheduler code? Some kernels that I wrote by hand have no such limit.






[GitHub] [tvm] masahi commented on pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


masahi commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-806353561


   thanks @AndrewZhaoLuo @mbrookhart @jwfromm 






[tvm] branch main updated: [Topi, Relay] Add cumprod (#7722)

2021-03-24 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 8e23806  [Topi, Relay] Add cumprod (#7722)
8e23806 is described below

commit 8e23806d2d522b71979d0a2730b38cc5c3bf6185
Author: AndrewZhaoLuo 
AuthorDate: Wed Mar 24 21:25:18 2021 -0700

[Topi, Relay] Add cumprod (#7722)

* make cumbinop, refactor cumsum, add cumprod

* cumsum exclusive test

* Add cumprod + flesh out cumsum tests

add cumprod and tests

reinstate tests

rethink

* add rudimentary scan implementation

* add attributes of cumprod node

* add cumprod strategy

* add cuda strategy

* python relay node construction

* change attrs to be reusable

* add cumprod nodes

* complete tests

* Fix some typos about sum --> prod

typos fix sum -> prod

more typos

more typo fixes

more typos

add doc strings

* Use Bool instead of int to represent exclusive

make exclusive a bool up and down stack

fix x

fix bool err

it is a bool now

fix

fix thing

formatting to pass linter

lint python

cumprod pylint

fix attribute

fix ordering

add exclusivity tests for end to end

fix things

cuda identity_value

* Overall improve formatting, add doc message corrections

simplify construction

clang-format

more tests

undo simpler construction due to function passing stuff

fix docs

more exclusive doc changes

more fixins

* merge cumsum and cumprod to scan, merge tests

fix stuff

* remove other mentions of cumbinop -> scanop

* lint formatting

Co-authored-by: Andrew Zhao Luo 
---
 include/tvm/relay/attrs/transform.h  |  14 +-
 python/tvm/relay/op/_transform.py|  19 ++-
 python/tvm/relay/op/strategy/cuda.py |  19 ++-
 python/tvm/relay/op/strategy/generic.py  |  29 +++-
 python/tvm/relay/op/transform.py |  65 +++-
 python/tvm/topi/__init__.py  |   2 +-
 python/tvm/topi/cuda/scan.py | 196 +++---
 python/tvm/topi/cumsum.py| 121 --
 python/tvm/topi/scan.py  | 236 +++
 python/tvm/topi/unique.py|   2 +-
 src/relay/op/tensor/transform.cc |  34 +++-
 tests/python/relay/test_op_level3.py |  77 ++---
 tests/python/topi/python/test_topi_cumsum.py |  79 -
 tests/python/topi/python/test_topi_scan.py   | 144 
 14 files changed, 758 insertions(+), 279 deletions(-)
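
As a quick, hedged illustration of the new operator added by this commit (it assumes `relay.cumprod` is exposed at the top of the `tvm.relay` namespace in the same way `relay.cumsum` is):

```python
import numpy as np
import tvm
from tvm import relay

# Build a tiny Relay function that takes the cumulative product along axis 0.
x = relay.var("x", shape=(4,), dtype="float32")
func = relay.Function([x], relay.cumprod(x, axis=0, exclusive=False))
mod = tvm.IRModule.from_expr(func)

data = np.array([1.0, 2.0, 3.0, 4.0], dtype="float32")
result = relay.create_executor("graph", mod=mod, target="llvm").evaluate()(data)
print(result.asnumpy())  # expected: [ 1.  2.  6. 24.]
```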

diff --git a/include/tvm/relay/attrs/transform.h b/include/tvm/relay/attrs/transform.h
index ff344f5..a5544c8 100644
--- a/include/tvm/relay/attrs/transform.h
+++ b/include/tvm/relay/attrs/transform.h
@@ -438,17 +438,19 @@ struct MatrixSetDiagAttrs : public tvm::AttrsNode<MatrixSetDiagAttrs> {
   }
 };  // struct MatrixSetDiagAttrs
 
-/*! \brief Attributes used in cumsum operator */
-struct CumsumAttrs : public tvm::AttrsNode<CumsumAttrs> {
+/*! \brief Attributes used in cumsum and cumprod operator */
+struct ScanopAttrs : public tvm::AttrsNode<ScanopAttrs> {
   Integer axis;
   DataType dtype;
-  Integer exclusive;
-  TVM_DECLARE_ATTRS(CumsumAttrs, "relay.attrs.CumsumAttrs") {
-    TVM_ATTR_FIELD(axis).describe("The axis to sum over").set_default(NullValue<Integer>());
+  Bool exclusive = Bool(false);
+  TVM_DECLARE_ATTRS(ScanopAttrs, "relay.attrs.ScanopAttrs") {
+    TVM_ATTR_FIELD(axis).describe("The axis to operate over").set_default(NullValue<Integer>());
     TVM_ATTR_FIELD(dtype).describe("Output data type").set_default(NullValue<DataType>());
+
+    // Default is 0 which is "false"
     TVM_ATTR_FIELD(exclusive)
         .describe("The first element is not included")
-        .set_default(NullValue<Integer>());
+        .set_default(Bool(false));
   }
 };
 
diff --git a/python/tvm/relay/op/_transform.py b/python/tvm/relay/op/_transform.py
index e90263d..1626283 100644
--- a/python/tvm/relay/op/_transform.py
+++ b/python/tvm/relay/op/_transform.py
@@ -19,16 +19,17 @@
 # pylint: disable=too-many-local-variables, too-many-arguments, no-else-return
 
 from __future__ import absolute_import
+
 import tvm
-from tvm import te
-from tvm.te.hybrid import script
+from tvm import te, topi
 from tvm.runtime import convert
-from tvm import topi
+from tvm.te.hybrid import script
 from tvm.topi.utils import get_const_int, get_const_tuple
+
 from . import op as _reg
 from . import strategy
-from .op import OpPattern
 from ._tensor import elemwise_shape_func
+from .op import OpPattern
 
 _reg.register_broadcast_schedule("broadcast_to")
 _reg.register_broadcast_schedule("broadcast_to_like"

[GitHub] [tvm] masahi merged pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


masahi merged pull request #7722:
URL: https://github.com/apache/tvm/pull/7722


   






[GitHub] [tvm] altanh commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806348360


   I ended up having to make ToScalar return an optional value due to a custom datatype test (as far as I can tell, we currently don't have a good way of supporting that conversion at compile time in C++). Let me know if this is fine, as I don't see an alternative within the scope of this PR.
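
   For context, a rough Relay-level sketch of what the ConcretizeLike rewrite in this PR aims to do (illustrative only, with made-up shapes; it is not code from the PR):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(2, 3), dtype="float32")
y = relay.var("y", shape=(2, 3), dtype="float32")

# Before the pass: zeros_like keeps an unnecessary dependency on y.
before = relay.Function([x, y], x + relay.zeros_like(y))

# After ConcretizeLike (conceptually): the static shape is substituted directly,
# so the dependency on y can be dropped and fusion has more room to work.
after = relay.Function([x, y], x + relay.zeros(shape=(2, 3), dtype="float32"))

print(before)
print(after)
```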






[tvm] branch main updated (3ba5868 -> 7130e80)

2021-03-24 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 3ba5868  [TensorIR] Fix parser autocompletion mode (#7737)
 add 7130e80  Better grouped convolution for CPU targets (#6137)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/strategy/arm_cpu.py |   7 +-
 python/tvm/relay/op/strategy/x86.py |   8 +-
 python/tvm/topi/arm_cpu/__init__.py |   1 +
 python/tvm/topi/arm_cpu/group_conv2d.py | 370 +++
 python/tvm/topi/x86/__init__.py |   1 +
 python/tvm/topi/x86/group_conv2d.py | 371 
 6 files changed, 749 insertions(+), 9 deletions(-)
 create mode 100644 python/tvm/topi/arm_cpu/group_conv2d.py
 create mode 100644 python/tvm/topi/x86/group_conv2d.py


[GitHub] [tvm] FrozenGene commented on pull request #6137: Better grouped convolution for CPU targets

2021-03-24 Thread GitBox


FrozenGene commented on pull request #6137:
URL: https://github.com/apache/tvm/pull/6137#issuecomment-806327714


   Thanks @Wheest. Merged now.






[GitHub] [tvm] FrozenGene merged pull request #6137: Better grouped convolution for CPU targets

2021-03-24 Thread GitBox


FrozenGene merged pull request #6137:
URL: https://github.com/apache/tvm/pull/6137


   






[GitHub] [tvm] mvermeulen opened a new pull request #7740: [ROCM] Fix missing header, caused compilation failure.

2021-03-24 Thread GitBox


mvermeulen opened a new pull request #7740:
URL: https://github.com/apache/tvm/pull/7740


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   






[GitHub] [tvm] schell commented on pull request #7503: [Rust] Make TVM Rust bindings installable via Cargo.

2021-03-24 Thread GitBox


schell commented on pull request #7503:
URL: https://github.com/apache/tvm/pull/7503#issuecomment-806240649


   Any luck @jroesch @tqchen ?






[GitHub] [tvm] hogepodge commented on pull request #7641: [docs] Getting Started with TVM: Auto Tuning with Python

2021-03-24 Thread GitBox


hogepodge commented on pull request #7641:
URL: https://github.com/apache/tvm/pull/7641#issuecomment-806237862


   @tqchen @leandron ready for final review






[GitHub] [tvm] hogepodge commented on pull request #7638: [docs] Getting Started: Introduction and Installation

2021-03-24 Thread GitBox


hogepodge commented on pull request #7638:
URL: https://github.com/apache/tvm/pull/7638#issuecomment-806236079


   @tqchen ready for review






[GitHub] [tvm] hogepodge commented on pull request #7643: [docs] Getting Started with TVM: AutoTVM and Matrix Multiply

2021-03-24 Thread GitBox


hogepodge commented on pull request #7643:
URL: https://github.com/apache/tvm/pull/7643#issuecomment-806235620


   @tqchen ready for review






[tvm] branch ci-docker-staging updated (8cbc164 -> 7e48aa8)

2021-03-24 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard 8cbc164  Merge remote-tracking branch 'origin/main' into 
test_mdw_qemu_changes
 discard 87be5dd  fix path
 discard 0bda958  Merge remote-tracking branch 'origin/main' into 
test_mdw_qemu_changes
 discard f0796a6  add ci-qemu-staging
omit 37e0bce  Fix path for test.
omit 65f0b4e  Lint comments.
omit f2b665b  Black formatting.
omit a87f3e8  Fix formatting.
omit 12cdd03  clang-format this file.
omit f672577  Add missing file to check_file_type.py.
omit 0bb5d85  Merge remote-tracking branch 'upstream/main' into 
mdw/demo-runtime
omit 3271923  small fixes
omit c552fa9  Fix linting rule.
omit 599c5fb  Fixup docs.
omit f1260df  Adding missing ONNX file.
omit bcce620  Add new files to check_file_type.py.
omit 687c09c  Fixup
omit 0db2207  Revert dep.
omit 2b14aa1  Remove redundant files.
omit ec2c374  Fix merge conflict.
omit fbdc1cb  Fix merge conflicts.
omit ec5b004  Merge branch 'mdw/demo-runtime' of github.com:mdw-octoml/tvm 
into mdw/demo-runtime
omit 8934c19  Fix tutorial and runtime.
omit b6ae7cf  Fix tutorial.
omit 6645f50  Fix up tutorial.
omit df722b8  Cleanup demo_runtime code.
omit 763017e  Adding data for unit tests.
omit fe2c9a1  Lots of hacking to get ONNX model to run on QEMU and nRF5340.
omit 058985b  Lots of hacking to get ONNX model to run on QEMU and nRF5340.
omit 4b1da9a  Some cleanup.
omit 853f01e  Adding board-specific prj.conf files.
omit b62ab38  Working on QEMU support.
omit 280479a  Cleanup of uTVM tests and demo runtime.
omit 443a429  Adding Zephyr demo runtime.
omit 7a38c7f  Couple of small fixes:
omit d04e04d  Fix up tutorial.
omit d249161  Cleanup demo_runtime code.
omit a9ffc96  Adding data for unit tests.
omit 98f3475  Lots of hacking to get ONNX model to run on QEMU and nRF5340.
omit bc87450  Some cleanup.
omit 123c64c  Adding board-specific prj.conf files.
omit 28b92de  Working on QEMU support.
omit c611d30  Cleanup of uTVM tests and demo runtime.
omit 545a241  Adding Zephyr demo runtime.
omit 82cee46  Couple of small fixes:
omit 6a38dc9  Some docstring fixes.
 add f88c2be  [microTVM] Update nrfjprog on reference virtual machine 
(#7723)
 add 6f0a656  [FIX] Fix temporary allocation size in threefry (#7709)
 add 8131364  [ONNX] Onnx node tests (#7720)
 add 1fe0abc  [TVMC] Python Scripting Init Files (#7698)
 add 63d8e97  [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency) 
(#7728)
 add cfe2e28  [crt] fix heap corruption from bad allocation (#7735)
 new 7e48aa8  test CI with staging containers

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (8cbc164)
\
 N -- N -- N   refs/heads/ci-docker-staging (7e48aa8)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 Jenkinsfile|   6 +-
 apps/microtvm/README.md|  17 +-
 apps/microtvm/reference-vm/base-box-tool.py|  28 ++-
 .../microtvm/reference-vm/zephyr/base-box/setup.sh |  15 +-
 .../reference-vm/zephyr/base-box/test-config.json  |  14 +-
 .../{ => reference-vm/zephyr}/pyproject.toml   |  10 +-
 apps/microtvm/zephyr/README.md |  19 --
 apps/microtvm/zephyr/demo_runtime/README.md|  21 --
 .../boards/nrf5340dk_nrf5340_cpuapp.conf   |  31 ---
 .../zephyr/demo_runtime/boards/nucleo_f746zg.conf  |  30 ---
 .../zephyr/demo_runtime/boards/qemu_x86.conf   |  23 ---
 docs/microtvm/index.rst|   2 +-
 python/tvm/driver/tvmc/__init__.py |   4 +
 python/tvm/micro/contrib/zephyr.py |   5 +-
 python/tvm/relay/frontend/onnx.py  | 133 +---
 python/tvm/relay/op/transform.py   |   7 +-
 python/tvm/runtime/module.py   |   2 +-
 python/tvm/target/target.py|  20 +-
 python/tvm/topi/random/kernel.py   |   2 +-
 src/runtim

[tvm] 01/01: test CI with staging containers

2021-03-24 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 7e48aa8ef6e2ee540bb8c374ed326cd929cc756f
Author: Andrew Reusch 
AuthorDate: Wed Mar 24 15:39:15 2021 -0700

test CI with staging containers
---
 Jenkinsfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 8f11bba..76aa040 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -46,11 +46,11 @@
 // NOTE: these lines are scanned by docker/dev_common.sh. Please update the regex as needed. -->
 ci_lint = "tlcpack/ci-lint:v0.62"
 ci_gpu = "tlcpack/ci-gpu:v0.72"
-ci_cpu = "tlcpack/ci-cpu:v0.72-t0"
+ci_cpu = "areusch1/ci-cpu-staging:v0.73"
 ci_wasm = "tlcpack/ci-wasm:v0.70"
 ci_i386 = "tlcpack/ci-i386:v0.72-t0"
 ci_qemu = "tlcpack/ci-qemu:v0.02"
-ci_arm = "tlcpack/ci-arm:v0.02"
+ci_arm = "areusch1/ci-arm-staging:v0.03"
 // <--- End of regex-scanned config.
 
 // tvm libraries


[GitHub] [tvm] hogepodge commented on pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-24 Thread GitBox


hogepodge commented on pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#issuecomment-806226992


   @comaniac @Hzfengsy changes committed, ready for review






[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600912352



##
File path: src/relay/transforms/simplify_expr.h
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/simplify_expr.h
+ * \brief Utility data structures for simplifying Relay expressions.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+#define TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+
+#include 
+#include 
+
+#include 
+
+namespace tvm {
+namespace relay {
+
+/*! \brief Defines a static function `RewriteType::Get()` that returns a 
statically initialized
+ * instance of RewriteType. */
+#define TVM_DF_PATTERN_REWRITE_GETTER(RewriteType)\
+  static DFPatternRewrite* Get() {\
+static RewriteType rw;\
+return &rw;   \
+  }   \
+  static DFPatternCallback GetCallback() {\
+static DFPatternCallback cb = RewriteType::Get()->MakeCallback(); \
+return cb;\
+  }

Review comment:
   Removed; I made a helper class for composing rewrites, since we need to ensure that the lifetimes of the DFPatternCallbacks do not exceed those of the Rewrite objects.








[GitHub] [tvm] tqchen commented on pull request #7737: [TensorIR] Fix parser autocompletion mode

2021-03-24 Thread GitBox


tqchen commented on pull request #7737:
URL: https://github.com/apache/tvm/pull/7737#issuecomment-806195883


   Thanks @Hzfengsy @MasterJH5574. This PR is merged.






[GitHub] [tvm] comaniac commented on pull request #7739: [TVMC] Externalize load from compile_module (compiler.py line 132)

2021-03-24 Thread GitBox


comaniac commented on pull request #7739:
URL: https://github.com/apache/tvm/pull/7739#issuecomment-806190266


   I'm fine with this change; leaving it to others.






[GitHub] [tvm] CircleSpin commented on pull request #7739: [TVMC] Externalize load from compile_module (compiler.py line 132)

2021-03-24 Thread GitBox


CircleSpin commented on pull request #7739:
URL: https://github.com/apache/tvm/pull/7739#issuecomment-806185373


   @comaniac @mdw-octoml @leandron @jwfromm 






[GitHub] [tvm] CircleSpin opened a new pull request #7739: [TVMC] Externalize load from compile_module (compiler.py line 132)

2021-03-24 Thread GitBox


CircleSpin opened a new pull request #7739:
URL: https://github.com/apache/tvm/pull/7739


   This PR moves the load step from compile_module into drive_compile to make Python scripting cleaner. It prevents "double dipping", i.e. running the frontend load more than once.
   The flow is now:
   1. mod, params = tvmc.load("whatever")  # from frontends.py
   2. lib, etc = tvmc.compile(mod, params)  # from compiler.py
   
   See PR #7698 for import details.
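
   A minimal sketch of the two-step flow above ("my_model.onnx" is a placeholder, and the return values follow the description in this PR rather than any particular TVMC version):

```python
from tvm.driver import tvmc

mod, params = tvmc.load("my_model.onnx")  # step 1, from frontends.py
compiled = tvmc.compile(mod, params)      # step 2, from compiler.py (lib etc., per the flow above)
```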






[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600865480



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+  }
+

[GitHub] [tvm] mbrookhart commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600863750



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+ 

[GitHub] [tvm] mbrookhart commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600863512



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+ 

[GitHub] [tvm] mbrookhart commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600863124



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+ 

[tvm] branch main updated (cfe2e28 -> 3ba5868)

2021-03-24 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from cfe2e28  [crt] fix heap corruption from bad allocation (#7735)
 add 3ba5868  [TensorIR] Fix parser autocompletion mode (#7737)

No new revisions were added by this update.

Summary of changes:
 src/tir/ir/script/script_complete.cc  |  26 +++-
 tests/python/unittest/test_tvmscript_complete.py  | 174 ++
 tests/python/unittest/test_tvmscript_roundtrip.py |  11 +-
 3 files changed, 201 insertions(+), 10 deletions(-)
 create mode 100644 tests/python/unittest/test_tvmscript_complete.py


[GitHub] [tvm] tqchen merged pull request #7737: [TensorIR] Fix parser autocompletion mode

2021-03-24 Thread GitBox


tqchen merged pull request #7737:
URL: https://github.com/apache/tvm/pull/7737


   






[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600860418



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
 The result has the same size as data, and the same shape as data if 
axis is not None.
 If axis is None, the result is a 1-d array.
 """
-    if axis is None:
-        axis = 0
-        data = reshape(data, (prod(data.shape),))
-    axis = get_const_int(axis)
-    if exclusive is not None and exclusive != 0:
-        return exclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
-    return inclusive_scan(data, axis, output_dtype=dtype, binop=tvm.tir.generic.add)
+    return scanop(
+        data=data,
+        binop=tvm.tir.generic.add,
+        identity_value=0,
+        axis=axis,
+        dtype=dtype,
+        exclusive=exclusive,
+    )
+
+
+def cumprod(
+data: tvm.te.Tensor,
+axis: Optional[int] = None,
+dtype: Optional[int] = None,
+exclusive: Optional[bool] = None,
+):
+"""Numpy style cumprod op. Return the cumulative product of the elements 
along a given axis.
+
+Parameters
+--
+data : tvm.te.Tensor
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative product is computed. The default 
(None) is to compute
+the cumproduct over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are multiplied.
+If dtype is not specified, it defaults to the dtype of data.
+
+exclusive : bool, optional
+If True, will return exclusive product in which the first element is 
not
+included. In other terms, if True, the j-th output element would be
+the product of the first (j-1) elements. Otherwise, it would be the 
product of
+the first j elements.

Review comment:
   Well, we might as well keep cumprod's exclusive option since it has very little implementation overhead. The only thing needed to support it is the `identity_value` field, which is pretty simple, so I say we keep it at this point.
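
   A tiny NumPy illustration of the inclusive vs. exclusive semantics discussed here (illustrative only; the exclusive form shifts the running product and seeds it with the identity value 1):

```python
import numpy as np

x = np.array([2, 3, 4])
inclusive = np.cumprod(x)                           # [ 2  6 24]
exclusive = np.concatenate(([1], inclusive[:-1]))   # [1 2 6]: j-th entry is the product of the first j-1 elements
print(inclusive, exclusive)
```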








[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600860158



##
File path: src/relay/transforms/simplify_expr.h
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/simplify_expr.h
+ * \brief Utility data structures for simplifying Relay expressions.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+#define TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+
+#include 
+#include 
+
+#include 
+
+namespace tvm {
+namespace relay {
+
+/*! \brief Defines a static function `RewriteType::Get()` that returns a 
statically initialized
+ * instance of RewriteType. */
+#define TVM_DF_PATTERN_REWRITE_GETTER(RewriteType)\
+  static DFPatternRewrite* Get() {\
+static RewriteType rw;\
+return &rw;   \
+  }   \
+  static DFPatternCallback GetCallback() {\
+static DFPatternCallback cb = RewriteType::Get()->MakeCallback(); \
+return cb;\
+  }

Review comment:
   got it, I'll remove this, thanks for the clarification








[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600858602



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+  }
+

[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600858326



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+  }
+

[GitHub] [tvm] comaniac commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


comaniac commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600852619



##
File path: src/relay/transforms/simplify_expr.h
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/simplify_expr.h
+ * \brief Utility data structures for simplifying Relay expressions.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+#define TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+
+#include 
+#include 
+
+#include 
+
+namespace tvm {
+namespace relay {
+
+/*! \brief Defines a static function `RewriteType::Get()` that returns a 
statically initialized
+ * instance of RewriteType. */
+#define TVM_DF_PATTERN_REWRITE_GETTER(RewriteType)\
+  static DFPatternRewrite* Get() {\
+static RewriteType rw;\
+return &rw;   \
+  }   \
+  static DFPatternCallback GetCallback() {\
+static DFPatternCallback cb = RewriteType::Get()->MakeCallback(); \
+return cb;\
+  }

Review comment:
   The previous PR had a different implementation, and my point was that the
pattern table itself should be static. Given that the current implementation is
based on SimplifyExpr, I agree with @mbrookhart that we don't need to make those
functions static in the pattern class.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600849643



##
File path: python/tvm/relay/op/_transform.py
##
@@ -19,16 +19,17 @@
 # pylint: disable=too-many-local-variables, too-many-arguments, no-else-return
 
 from __future__ import absolute_import
+
 import tvm
-from tvm import te
-from tvm.te.hybrid import script
+from tvm import te, topi
 from tvm.runtime import convert
-from tvm import topi
+from tvm.te.hybrid import script

Review comment:
   It's used elsewhere in this file; for some reason the autoformatter moved
it around in the import list.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


masahi commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600848449



##
File path: python/tvm/relay/op/_transform.py
##
@@ -19,16 +19,17 @@
 # pylint: disable=too-many-local-variables, too-many-arguments, no-else-return
 
 from __future__ import absolute_import
+
 import tvm
-from tvm import te
-from tvm.te.hybrid import script
+from tvm import te, topi
 from tvm.runtime import convert
-from tvm import topi
+from tvm.te.hybrid import script

Review comment:
   unused import?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


masahi commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600847541



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
 The result has the same size as data, and the same shape as data if 
axis is not None.
 If axis is None, the result is a 1-d array.
 """
-if axis is None:
-axis = 0
-data = reshape(data, (prod(data.shape),))
-axis = get_const_int(axis)
-if exclusive is not None and exclusive != 0:
-return exclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
-return inclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
+return scanop(
+data=data,
+binop=tvm.tir.generic.add,
+identity_value=0,
+axis=axis,
+dtype=dtype,
+exclusive=exclusive,
+)
+
+
+def cumprod(
+data: tvm.te.Tensor,
+axis: Optional[int] = None,
+dtype: Optional[int] = None,
+exclusive: Optional[bool] = None,
+):
+"""Numpy style cumprod op. Return the cumulative product of the elements 
along a given axis.
+
+Parameters
+--
+data : tvm.te.Tensor
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative product is computed. The default 
(None) is to compute
+the cumproduct over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are multiplied.
+If dtype is not specified, it defaults to the dtype of data.
+
+exclusive : bool, optional
+If True, will return exclusive product in which the first element is 
not
+included. In other terms, if True, the j-th output element would be
+the product of the first (j-1) elements. Otherwise, it would be the 
product of
+the first j elements.

Review comment:
   Yes, ONNX cumsum has an `exclusive` option. There is a good use case for 
exclusive cumsum but probably not for cumprod. We can always do inclusive scan 
for cumprod.
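
   As a small NumPy sketch of that use case (illustration only, not TVM code): an
   exclusive cumsum turns per-row lengths into row offsets, which the inclusive
   form does not give directly.

   ```python
   import numpy as np

   lengths = np.array([3, 1, 4, 2])

   inclusive = np.cumsum(lengths)                     # [3, 4, 8, 10]
   # Exclusive scan: shift the identity (0) in front and drop the last element.
   exclusive = np.concatenate(([0], inclusive[:-1]))  # [0, 3, 4, 8] -> row offsets
   ```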




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600844682



##
File path: src/relay/transforms/simplify_expr.h
##
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/transforms/simplify_expr.h
+ * \brief Utility data structures for simplifying Relay expressions.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+#define TVM_RELAY_TRANSFORMS_SIMPLIFY_EXPR_H_
+
+#include 
+#include 
+
+#include 
+
+namespace tvm {
+namespace relay {
+
+/*! \brief Defines a static function `RewriteType::Get()` that returns a 
statically initialized
+ * instance of RewriteType. */
+#define TVM_DF_PATTERN_REWRITE_GETTER(RewriteType)\
+  static DFPatternRewrite* Get() {\
+static RewriteType rw;\
+return &rw;   \
+  }   \
+  static DFPatternCallback GetCallback() {\
+static DFPatternCallback cb = RewriteType::Get()->MakeCallback(); \
+return cb;\
+  }

Review comment:
   Indeed, the overhead of initializing and calling is probably negligible
compared to running the pass itself; I did this following comments on my
previous PR. @comaniac, maybe you can comment? In the end I am fine with either
way.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600842607



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
 The result has the same size as data, and the same shape as data if 
axis is not None.
 If axis is None, the result is a 1-d array.
 """
-if axis is None:
-axis = 0
-data = reshape(data, (prod(data.shape),))
-axis = get_const_int(axis)
-if exclusive is not None and exclusive != 0:
-return exclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
-return inclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
+return scanop(
+data=data,
+binop=tvm.tir.generic.add,
+identity_value=0,
+axis=axis,
+dtype=dtype,
+exclusive=exclusive,
+)
+
+
+def cumprod(
+data: tvm.te.Tensor,
+axis: Optional[int] = None,
+dtype: Optional[int] = None,
+exclusive: Optional[bool] = None,
+):
+"""Numpy style cumprod op. Return the cumulative product of the elements 
along a given axis.
+
+Parameters
+--
+data : tvm.te.Tensor
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative product is computed. The default 
(None) is to compute
+the cumproduct over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are multiplied.
+If dtype is not specified, it defaults to the dtype of data.
+
+exclusive : bool, optional
+If True, will return exclusive product in which the first element is 
not
+included. In other terms, if True, the j-th output element would be
+the product of the first (j-1) elements. Otherwise, it would be the 
product of
+the first j elements.

Review comment:
   Hehehe, I tagged the wrong line. I know why cuda scan needs to be 
inclusive or exclusive (it's for other ops), but I'm not sure why cumprod needs 
the option?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600838442



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
 The result has the same size as data, and the same shape as data if 
axis is not None.
 If axis is None, the result is a 1-d array.
 """
-if axis is None:
-axis = 0
-data = reshape(data, (prod(data.shape),))
-axis = get_const_int(axis)
-if exclusive is not None and exclusive != 0:
-return exclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
-return inclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
+return scanop(
+data=data,
+binop=tvm.tir.generic.add,
+identity_value=0,
+axis=axis,
+dtype=dtype,
+exclusive=exclusive,
+)
+
+
+def cumprod(
+data: tvm.te.Tensor,
+axis: Optional[int] = None,
+dtype: Optional[int] = None,
+exclusive: Optional[bool] = None,
+):
+"""Numpy style cumprod op. Return the cumulative product of the elements 
along a given axis.
+
+Parameters
+--
+data : tvm.te.Tensor
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative product is computed. The default 
(None) is to compute
+the cumproduct over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are multiplied.
+If dtype is not specified, it defaults to the dtype of data.
+
+exclusive : bool, optional
+If True, will return exclusive product in which the first element is 
not
+included. In other terms, if True, the j-th output element would be
+the product of the first (j-1) elements. Otherwise, it would be the 
product of
+the first j elements.

Review comment:
   I'll defer to @masahi on this. I do agree that it adds a lot of 
complexity for not much gain. 
   
   For what it's worth, thrust has inclusive and exclusive scan operations 
which is what may have motivated the current interface.
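
   For reference, a minimal usage sketch of the interface quoted above, assuming
   the TOPI entry points land as written in this diff (the `exclusive` flag and
   the `cumprod` name are taken from the patch, not from a released API):

   ```python
   import tvm
   from tvm import te, topi

   data = te.placeholder((4, 5), name="data", dtype="float32")

   incl_sum = topi.cumsum(data, axis=1)                  # inclusive, like np.cumsum
   excl_sum = topi.cumsum(data, axis=1, exclusive=True)  # j-th output sums the first j elements
   prod = topi.cumprod(data, axis=1)                     # inclusive cumulative product
   ```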




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity rewrites for SimplifyExpr

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600836110



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +249,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+ 

[GitHub] [tvm] mbrookhart commented on a change in pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


mbrookhart commented on a change in pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#discussion_r600830424



##
File path: python/tvm/topi/cuda/scan.py
##
@@ -514,10 +616,54 @@ def cumsum(data, axis=None, dtype=None, exclusive=None):
 The result has the same size as data, and the same shape as data if 
axis is not None.
 If axis is None, the result is a 1-d array.
 """
-if axis is None:
-axis = 0
-data = reshape(data, (prod(data.shape),))
-axis = get_const_int(axis)
-if exclusive is not None and exclusive != 0:
-return exclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
-return inclusive_scan(data, axis, output_dtype=dtype, 
binop=tvm.tir.generic.add)
+return scanop(
+data=data,
+binop=tvm.tir.generic.add,
+identity_value=0,
+axis=axis,
+dtype=dtype,
+exclusive=exclusive,
+)
+
+
+def cumprod(
+data: tvm.te.Tensor,
+axis: Optional[int] = None,
+dtype: Optional[int] = None,
+exclusive: Optional[bool] = None,
+):
+"""Numpy style cumprod op. Return the cumulative product of the elements 
along a given axis.
+
+Parameters
+--
+data : tvm.te.Tensor
+The input data to the operator.
+
+axis : int, optional
+Axis along which the cumulative product is computed. The default 
(None) is to compute
+the cumproduct over the flattened array.
+
+dtype : string, optional
+Type of the returned array and of the accumulator in which the 
elements are multiplied.
+If dtype is not specified, it defaults to the dtype of data.
+
+exclusive : bool, optional
+If True, will return exclusive product in which the first element is 
not
+included. In other terms, if True, the j-th output element would be
+the product of the first (j-1) elements. Otherwise, it would be the 
product of
+the first j elements.

Review comment:
   I don't think numpy has an exclusive option? Do we need this? 
https://numpy.org/doc/stable/reference/generated/numpy.cumprod.html




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600826676



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +248,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+  }
+

[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600824893



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +248,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());
+static const Op& op = Op::Get("collapse_sum_to");
+auto attrs = make_object();
+attrs->shape = shape;
+auto cshape =
+MakeConstantTensor(DataType::Int(32), 
{static_cast(shape.size())}, shape);
+return Call(op, {node_map[data_pat_][0], cshape}, Attrs(attrs));
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeCollapseSumLikeRewrite);
+};
+
+class ConcretizeBroadcastToLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeBroadcastToLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("broadcast_to_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeBroadCastTo(node_map[data_pat_][0], shape);
+  }
+

[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600823616



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +248,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());

Review comment:
   while extremely unlikely, in theory it's possible for `shape.size()` 
(which is `size_t`) to be greater than INT_MAX, so I just added a check here. 
(I have to cast the size later to int64_t to pass the dimension to 
`MakeConstantTensor`)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch edited a comment on issue #7590: [CI][FLAKY] Qemu pipeline timeout

2021-03-24 Thread GitBox


areusch edited a comment on issue #7590:
URL: https://github.com/apache/tvm/issues/7590#issuecomment-806095709


   
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-7438/12/pipeline/
   
   `INFO: NODE_NAME=node.aladdin.cudabuild EXECUTOR_NUMBER=1`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on issue #7590: [CI][FLAKY] Qemu pipeline timeout

2021-03-24 Thread GitBox


areusch commented on issue #7590:
URL: https://github.com/apache/tvm/issues/7590#issuecomment-806095709


   
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/PR-7438/12/pipeline/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7696: [GO] Fix go bindings

2021-03-24 Thread GitBox


tqchen commented on a change in pull request #7696:
URL: https://github.com/apache/tvm/pull/7696#discussion_r600805865



##
File path: golang/Makefile
##
@@ -25,7 +25,7 @@ NATIVE_SRC = tvm_runtime_pack.cc
 GOPATH=$(CURDIR)/gopath
 GOPATHDIR=${GOPATH}/src/${TARGET}/
 CGO_CPPFLAGS="-I. -I${TVM_BASE}/ -I${TVM_BASE}/3rdparty/dmlc-core/include 
-I${TVM_BASE}/include -I${TVM_BASE}/3rdparty/dlpack/include/"
-CGO_CXXFLAGS="-std=c++14 -DDMLC_USE_LOGGING_LIBRARY=\"
+CGO_CXXFLAGS="-std=c++14 -DDMLC_USE_LOGGING_LIBRARY= 
-DTVM_BACKTRACE_DISABLED=1"

Review comment:
   -DTVM_USE_LIBBACKTRACE=0




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


comaniac commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600800776



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +248,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-CreateCallback(SimplifyTranspose());
-CreateCallback(FullElementwise());
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
   }
-  template 
-  void CreateCallback(const T& pattern) {
-auto func = [pattern](TVMArgs args, TVMRetValue* rv) {
-  Expr pre = args[0];
-  Expr post = args[1];
-  Map> node_map = args[2];
-  *rv = pattern.callback(pre, post, node_map);
-};
-callbacks_.push_back(DFPatternCallback(pattern.pattern(), 
PackedFunc(func), true));
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type().as()) {
+  return false;
+}
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {
+if (!Check(pre, post, node_map)) {
+  return post;
+}
+
+const TensorTypeNode* like_ty = pre->checked_type().as();
+Array cshape;
+for (const auto& dim : like_ty->shape) {
+  if (const auto* imm = dim.as()) {
+cshape.push_back(Integer(GetRef(imm)));
+  } else {
+// shape is not static, don't concretize
+return post;
+  }
+}
+
+return Concretize(node_map, cshape, like_ty->dtype);
+  }
+
+ protected:
+  DFPattern data_pat_;
+  DFPattern like_pat_;
+};
+
+class ConcretizeZerosLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeZerosLikeRewrite() : ConcretizeLikeRewrite(Op::Get("zeros_like")) 
{}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeZeros(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeZerosLikeRewrite);
+};
+
+class ConcretizeOnesLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeOnesLikeRewrite() : ConcretizeLikeRewrite(Op::Get("ones_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeOnes(shape, dtype);
+  }
+
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeOnesLikeRewrite);
+};
+
+class ConcretizeReshapeLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeReshapeLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("reshape_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+return MakeReshape(node_map[data_pat_][0], shape);
   }
 
-  Expr Simplify(const Expr& expr) { return RewritePatterns(callbacks_, expr, 
mod_); }
+  TVM_DF_PATTERN_REWRITE_GETTER(ConcretizeReshapeLikeRewrite);
+};
+
+class ConcretizeCollapseSumLikeRewrite : public ConcretizeLikeRewrite {
+ public:
+  ConcretizeCollapseSumLikeRewrite() : 
ConcretizeLikeRewrite(Op::Get("collapse_sum_like")) {}
+
+  Expr Concretize(const Map>& node_map, Array 
shape,
+  DataType dtype) const override {
+ICHECK_LE(shape.size(), std::numeric_limits::max());

Review comment:
   Why we need this check?

##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -249,36 +248,214 @@ class FullElementwise : public SimplifyPattern {
 };
 
 /*!
- * \brief ExprSimplifier simplifies the Relay expression.
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
  */
-class ExprSimplifier {
+class ConcretizeLikeRewrite : public DFPatternRewrite {
  public:
-  explicit ExprSimplifier(IRModule mod) : mod_(mod) {
-CreateCallback(SimplifyReshape());
-Cr

[GitHub] [tvm] hogepodge commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-24 Thread GitBox


hogepodge commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600791467



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -302,18 +385,452 @@
 fadd_cl(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
-# Summary
-# ---
-# This tutorial provides a walk through of TVM workflow using
-# a vector add example. The general workflow is
+
+# .. note:: Code Specialization
+#
+#   As you may have noticed, the declarations of A, B and C all take the same
+#   shape argument, n. TVM will take advantage of this to pass only a single
+#   shape argument to the kernel, as you will find in the printed device code.
+#   This is one form of specialization.
+#
+#   On the host side, TVM will automatically generate check code that checks
+#   the constraints in the parameters. So if you pass arrays with different
+#   shapes into fadd, an error will be raised.
+#
+#   We can do more specializations. For example, we can write :code:`n =
+#   tvm.runtime.convert(1024)` instead of :code:`n = te.var("n")`, in the
+#   computation declaration. The generated function will only take vectors with
+#   length 1024.
+
+
+# .. note:: TE Scheduling Primitives
+#
+#   TVM includes a number of different scheduling primitives:
+#
+#   - split: splits a specified axis into two axises by the defined factor.
+#   - tile: tiles will split a computation across two axes by the defined 
factors.
+#   - fuse: fuses two consecutive axises of one computation.
+#   - reorder: can reorder the axises of a computation into a defined order.
+#   - bind: can bind a computation to a specific thread, useful in GPU 
programming.
+#   - compute_at: by default, TVM will compute tensors at the outermost level
+# of the function, or the root, by default. compute_at specifies that one
+# tensor should be computed at the first axis of computation for another
+# operator.
+#   - compute_inline: when marked inline, a computation will be expanded then
+# inserted into the address where the tensor is required.
+#   - compute_root: moves a computation to the outermost layer, or root, of the
+# function. This means that stage of the computation will be fully computed
+# before it moves on to the next stage.
+#
+#   A complete description of these primitives can be found in the
+# [Schedule 
Primitives](https://tvm.apache.org/docs/tutorials/language/schedule_primitives.html)
 docs page.
+
+
+# Example 2: Manually Optimizing Matrix Multiplication with TE
+# 
+#
+# Now we will consider a second, more advanced example, demonstrating how with
+# just 18 lines of python code TVM speeds up a common matrix multiplication 
operation by 18x.
+#
+# **Matrix multiplication is a compute intensive operation. There are two 
important optimizations for good CPU performance:**
+# 1. Increase the cache hit rate of memory access. Both complex numerical
+#computation and hot-spot memory access can be accelerated by a high cache 
hit
+#rate. This requires us to transform the origin memory access pattern to a 
pattern that fits the cache policy.
+# 2. SIMD (Single instruction multi-data), also known as the vector processing
+#unit. On each cycle instead of processing a single value, SIMD can 
process a small batch of data.
+#This requires us to transform the data access pattern in the loop
+#body in uniform pattern so that the LLVM backend can lower it to SIMD.
+#
+# The techniques used in this tutorial are a subset of tricks mentioned in this
+# `repository `_. Some of them
+# have been applied by TVM abstraction automatically, but some of them cannot
+# be automatically applied due to TVM constraints.
+#
+# All the experiment results mentioned below are executed on 2015 15" MacBook
+# equipped with Intel i7-4770HQ CPU. The cache line size should be 64 bytes for
+# all the x86 CPUs.

Review comment:
   Since we will always be chasing the exact specs on this, I just decided 
to remove the line.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600785922



##
File path: src/relay/transforms/concretize_like.cc
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file concretize_like.cc
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
+ */
+
+#include 
+
+#include "pattern_utils.h"
+#include "simplify_expr.h"
+
+namespace tvm {
+namespace relay {
+
+class ConcretizeLikeRewrite : public DFPatternRewrite {
+ public:
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
+require_type_ = true;
+  }
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type_.defined()) {
+  // TODO(@altanh): maybe because of the input being rewritten?

Review comment:
   I ended up removing this; I think the checked type should always be
defined for the `pre` node when `require_type` is true. Previously I was
getting the type from a different node, which was the wrong approach; now this
works.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] hogepodge commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-24 Thread GitBox


hogepodge commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600785887



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -163,52 +145,156 @@
 fadd(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
-# Inspect the Generated Code
-# --
-# You can inspect the generated code in TVM. The result of tvm.build
-# is a TVM Module. fadd is the host module that contains the host wrapper,
-# it also contains a device module for the CUDA (GPU) function.
-#
-# The following code fetches the device module and prints the content code.
-#
-if tgt == "cuda" or tgt == "rocm" or tgt.startswith("opencl"):
-dev_module = fadd.imported_modules[0]
-print("-GPU code-")
-print(dev_module.get_source())
-else:
-print(fadd.get_source())
+
+# Updating the Schedule to Use Parallelism
+# ~~~
+#
+# Now that we've illustrated the fundamentals of TE, let's go deeper into what
+# schedules do, and how they can be used to optimize tensor expressions for
+# different architectures. A schedule is a series of steps that are applied to
+# an expression to transform it in a number of different ways. When a schedule
+# is applied to an expression in TE, the inputs and outputs remain the same,
+# but when compiled the implementation of the expression can change. This
+# tensor addition, in the default schedule, is run serially but is easy to
+# parallelize across all of the processor threads. We can apply the parallel
+# schedule operation to our computation.
 
-##
-# .. note:: Code Specialization
-#
-#   As you may have noticed, the declarations of A, B and C all
-#   take the same shape argument, n. TVM will take advantage of this
-#   to pass only a single shape argument to the kernel, as you will find in
-#   the printed device code. This is one form of specialization.
-#
-#   On the host side, TVM will automatically generate check code
-#   that checks the constraints in the parameters. So if you pass
-#   arrays with different shapes into fadd, an error will be raised.
-#
-#   We can do more specializations. For example, we can write
-#   :code:`n = tvm.runtime.convert(1024)` instead of :code:`n = te.var("n")`,
-#   in the computation declaration. The generated function will
-#   only take vectors with length 1024.
-#
+s[C].parallel(C.op.axis[0])
 
-##
-# Save Compiled Module
-# 
-# Besides runtime compilation, we can save the compiled modules into
-# a file and load them back later. This is called ahead of time compilation.
+
+# The ``tvm.lower`` command will generate the Intermediate Representation (IR)
+# of the TE, with the corresponding schedule. By lowering the expression as we
+# apply different schedule operations, we can see the effect of scheduling on
+# the ordering of the computation.
+
+print(tvm.lower(s, [A, B, C], simple_mode=True))
+
+
+# It's now possible for TVM to run these blocks on independent threads. Let's
+# compile and run this new schedule with the parallel operation applied:
+
+fadd_parallel = tvm.build(s, [A, B, C], tgt, target_host=tgt_host, 
name="myadd_parallel")
+fadd_parallel(a, b, c)
+
+tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
+
+
+# Updating the Schedule to Use Vectorization
+# ~~
+# Modern CPUs also have the ability to perform SIMD operations on floating
+# point values, and we can apply another schedule to our computation expression
+# to take advantage of this. Accomplishing this requires multiple steps: first
+# we have to split the schedule into inner and outer loops using the split
+# scheduling primitive. The inner loops can use vectorization to use SIMD
+# instructions using the vectorize scheduling primitive, then the outer loops
+# can be parallelized using the parallel scheduling primitive. Choose the split
+# factor to be the number of threads on your CPU.
+
+# Recreate the schedule, since we modified it with the parallel operation in 
the previous example
+n = te.var("n")
+A = te.placeholder((n,), name="A")
+B = te.placeholder((n,), name="B")
+C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")
+
+s = te.create_schedule(C.op)
+
+factor = 4
+
+outer, inner = s[C].split(C.op.axis[0], factor=factor)
+s[C].parallel(outer)
+s[C].vectorize(inner)
+
+print(tvm.lower(s, [A, B, C], simple_mode=True)

[GitHub] [tvm] altanh commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806077018


   @comaniac @mbrookhart I've merged them and updated the unit tests


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] csullivan commented on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-24 Thread GitBox


csullivan commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-806073946


   Reasonable and doable for the short term, though the downside is that it only
fixes the problem for one target at a time. We would also need to add broadcast
support to RocBLAS and CBLAS/MKL to avoid the issue for those targets.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on pull request #7722: [Topi, Relay] Add cumprod

2021-03-24 Thread GitBox


AndrewZhaoLuo commented on pull request #7722:
URL: https://github.com/apache/tvm/pull/7722#issuecomment-806073342


   @masahi PTAL. I've implemented all your suggested changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #7557: Clean up uTVM demo runtime, add ONNX model test and tutorial

2021-03-24 Thread GitBox


areusch commented on pull request #7557:
URL: https://github.com/apache/tvm/pull/7557#issuecomment-806070247


   @mdw-octoml please retrigger this against the CI, it should now be possible 
to make it pass as ci-qemu has the onnx package.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-24 Thread GitBox


comaniac commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-806069140


   Another direction I can think of is adding broadcast support to the CuBLAS 
batch_matmul so that we have a unified behavior for the batch_matmul op in 
Relay and don't need to change anything else. Do you think that's reasonable 
and doable?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-24 Thread GitBox


areusch commented on a change in pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#discussion_r600763358



##
File path: 
apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java
##
@@ -224,108 +224,115 @@ protected void onPostExecute(Integer status) {
 }
 
 /*
-Execute prediction for processed decode input bitmap image content on 
TVM graph runtime.
+Execute prediction for processed decode input bitmap image content on 
TVM graph executor.
  */
 private class ModelRunAsyncTask extends AsyncTask {
 ProgressDialog dialog = new ProgressDialog(MainActivity.this);
 
 @Override
 protected Integer doInBackground(Bitmap... bitmaps) {
-if (null != graphRuntimeModule) {
-int count  = bitmaps.length;
-for (int i = 0 ; i < count ; i++) {
-long processingTimeMs = SystemClock.uptimeMillis();
-Log.i(TAG, "Decode JPEG image content");
-
-// extract the jpeg content
-ByteArrayOutputStream stream = new ByteArrayOutputStream();
-bitmaps[i].compress(Bitmap.CompressFormat.JPEG,100,stream);
-byte[] byteArray = stream.toByteArray();
-Bitmap imageBitmap = 
BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
-
-// crop input image at centre to model input size
-// commecial deploy note:: instead of cropying image do 
resize
-// image to model input size so we never lost the image 
content
-Bitmap cropImageBitmap = 
Bitmap.createBitmap(MODEL_INPUT_SIZE, MODEL_INPUT_SIZE, 
Bitmap.Config.ARGB_);
-Matrix frameToCropTransform = 
getTransformationMatrix(imageBitmap.getWidth(), imageBitmap.getHeight(),
-MODEL_INPUT_SIZE, MODEL_INPUT_SIZE, 0, true);
-Canvas canvas = new Canvas(cropImageBitmap);
-canvas.drawBitmap(imageBitmap, frameToCropTransform, null);
-
-// image pixel int values
-int[] pixelValues = new int[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE];
-// image RGB float values
-float[] imgRgbValues = new float[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE * IMG_CHANNEL];
-// image RGB transpose float values
-float[] imgRgbTranValues = new float[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE * IMG_CHANNEL];
-
-// pre-process the image data from 0-255 int to normalized 
float based on the
-// provided parameters.
-cropImageBitmap.getPixels(pixelValues, 0, 
MODEL_INPUT_SIZE, 0, 0, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE);
-for (int j = 0; j < pixelValues.length; ++j) {
-imgRgbValues[j * 3 + 0] = ((pixelValues[j] >> 16) & 
0xFF)/255.0f;
-imgRgbValues[j * 3 + 1] = ((pixelValues[j] >> 8) & 
0xFF)/255.0f;
-imgRgbValues[j * 3 + 2] = (pixelValues[j] & 
0xFF)/255.0f;
-}
-
-// pre-process the image rgb data transpose based on the 
provided parameters.
-for (int k = 0; k < IMG_CHANNEL; ++k) {
-for (int l = 0; l < MODEL_INPUT_SIZE; ++l) {
-for (int m = 0; m < MODEL_INPUT_SIZE; ++m) {
-int dst_index = m + MODEL_INPUT_SIZE*l + 
MODEL_INPUT_SIZE*MODEL_INPUT_SIZE*k;
-int src_index = k + IMG_CHANNEL*m + 
IMG_CHANNEL*MODEL_INPUT_SIZE*l;
-imgRgbTranValues[dst_index] = 
imgRgbValues[src_index];
-}
-}
-}
-
-// get the function from the module(set input data)
-Log.i(TAG, "set input data");
-NDArray inputNdArray = NDArray.empty(new long[]{1, 
IMG_CHANNEL, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE}, new TVMType("float32"));;
-inputNdArray.copyFrom(imgRgbTranValues);
-Function setInputFunc = 
graphRuntimeModule.getFunction("set_input");
-
setInputFunc.pushArg(INPUT_NAME).pushArg(inputNdArray).invoke();
-// release tvm local variables
-inputNdArray.release();
-setInputFunc.release();
-
-// get the function from the module(run it)
-Log.i(TAG, "run function on target");
-Function runFunc = graphRuntimeModule.getFunction("run");
-runFunc.invoke();
-// release tvm local variables
-runFunc.release();
-
-// get the function from the module(get output data)
-  

[GitHub] [tvm] areusch commented on pull request #7733: [crt] fix shift out of type bounds

2021-03-24 Thread GitBox


areusch commented on pull request #7733:
URL: https://github.com/apache/tvm/pull/7733#issuecomment-806059563


   thanks @rafzi, this change looks good to me. can you retrigger the CI by 
adding an empty commit? looks like it got stuck on task_rust.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (63d8e97 -> cfe2e28)

2021-03-24 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 63d8e97  [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency) 
(#7728)
 add cfe2e28  [crt] fix heap corruption from bad allocation (#7735)

No new revisions were added by this update.

Summary of changes:
 src/runtime/crt/graph_runtime/graph_runtime.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


[GitHub] [tvm] areusch merged pull request #7735: [crt] fix heap corruption from bad allocation

2021-03-24 Thread GitBox


areusch merged pull request #7735:
URL: https://github.com/apache/tvm/pull/7735


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] csullivan edited a comment on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-24 Thread GitBox


csullivan edited a comment on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-806052775


   Thanks @comaniac @masahi. Yes the problem is that different targets, and 
target specific topi implementations, can support different optimizations. In 
the case of using the blas libraries supported for a target, implicit broadcast 
is not supported. 
   
   One option that comes to mind is to add a shape legalization pass that adds 
the broadcast if a target has specific attributes (e.g. libs=cublas/rocblas 
etc). However this isn't sufficient; depending on the op strategy priorities or 
the applied tuning configs, it's possible that the blas library implementation 
won't be used. A better option could be to make use of #7518, and do the shape 
legalization after the primitive functions have been lowered to TIR and can be 
inspected.
   
   We could also disable implicit broadcast, but that can increase the memory 
use (from folding the constant broadcasts) which we've seen overflow device 
memory for larger batch sizes.
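   
   For reference, a minimal Relay-level sketch of the explicit-broadcast 
workaround under discussion; the shapes below are made up, and the real fix 
would insert the `broadcast_to` automatically (e.g. in a legalization pass) 
rather than by hand:
   
   ```python
   import tvm
   from tvm import relay
   
   # The second operand has batch 1 and would rely on implicit broadcasting.
   x = relay.var("x", shape=(8, 32, 64), dtype="float32")   # (batch, M, K)
   y = relay.var("y", shape=(1, 16, 64), dtype="float32")   # (1, N, K)
   
   # Make the broadcast explicit so a BLAS-backed batch_matmul that lacks
   # implicit broadcasting still sees matching batch dimensions.
   y_b = relay.broadcast_to(y, (8, 16, 64))
   out = relay.nn.batch_matmul(x, y_b)                      # (8, 32, 16)
   
   print(tvm.IRModule.from_expr(relay.Function([x, y], out)))
   ```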


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] csullivan commented on issue #7730: [Bug] Missing broadcast_to before batch_matmul for CuBLAS

2021-03-24 Thread GitBox


csullivan commented on issue #7730:
URL: https://github.com/apache/tvm/issues/7730#issuecomment-806052775


   Thanks @comaniac @masahi. Yes the problem is that different targets, and 
target specific topi implementations, can support different optimizations. In 
the case of using the blas libraries supported for a target, implicit broadcast 
is not supported. 
   
   One option that comes to mind is to add a shape legalization pass that adds 
the broadcast if a target has specific attributes (e.g. libs=cublas/rocblas 
etc). However this isn't sufficient; depending on the op strategy priorities or 
the applied tuning configs, it's possible that the blas library implementation 
won't be used. A better option could be to make use of #7518, and do the shape 
legalization after the primitive functions have been lowered to TIR and can be 
inspected.
   
   We could also disable implicit broadcast, but that can increase the memory 
use (from constant folding the constant broadcasts) which we've seen overflow 
device memory for larger batch sizes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] michalpiszczek opened a new pull request #7738: [AutoTVM] Catch cuda errors when using LocalRunner

2021-03-24 Thread GitBox


michalpiszczek opened a new pull request #7738:
URL: https://github.com/apache/tvm/pull/7738


   This block in `measure_methods.py` is intended to capture CUDA errors caused 
by invalid kernels generated during AutoTuning:
   
   ```Python
   except TVMError as exc:
       msg = str(exc)
       if "Stack trace returned" in msg:
           msg = msg[: msg.index("Stack trace returned")]
       if "CUDA Source" in msg:
           msg = msg[: msg.index("CUDA Source")]
       costs = (RuntimeError(msg[:1024]),)
       errno = MeasureErrorNo.RUNTIME_DEVICE
   ```
   
   but when using `LocalRunner` the CUDA errors come through here wrapped in 
`RPCError`, not `TVMError`.
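   
   For illustration, a self-contained sketch of the message trimming done in 
the block above (the helper name is hypothetical; a fix along the lines of this 
PR applies the same trimming when the error arrives wrapped in `RPCError` 
rather than `TVMError`):
   
   ```python
   def trim_tuning_error(msg, limit=1024):
       # Drop stack traces and CUDA source dumps before recording the error.
       for marker in ("Stack trace returned", "CUDA Source"):
           if marker in msg:
               msg = msg[: msg.index(marker)]
       return msg[:limit]
   
   print(trim_tuning_error("CUDAError: invalid kernel\nCUDA Source:\n<generated code>"))
   ```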


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


mbrookhart commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806029284


   Yeah, as long as we aren't commonly manifesting full sized arrays of zero or 
one, that should be fine. Given the full/zeros/ones ops and their like 
counterparts, plus auto-broadcasting, I think that's generally a reasonable 
assumption to make.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] cnv1989 commented on a change in pull request #7653: Rename GraphRuntime to GraphExecutor

2021-03-24 Thread GitBox


cnv1989 commented on a change in pull request #7653:
URL: https://github.com/apache/tvm/pull/7653#discussion_r600716290



##
File path: 
apps/android_deploy/app/src/main/java/org/apache/tvm/android/demo/MainActivity.java
##
@@ -224,108 +224,115 @@ protected void onPostExecute(Integer status) {
 }
 
 /*
-Execute prediction for processed decode input bitmap image content on 
TVM graph runtime.
+Execute prediction for processed decode input bitmap image content on 
TVM graph executor.
  */
 private class ModelRunAsyncTask extends AsyncTask {
 ProgressDialog dialog = new ProgressDialog(MainActivity.this);
 
 @Override
 protected Integer doInBackground(Bitmap... bitmaps) {
-if (null != graphRuntimeModule) {
-int count  = bitmaps.length;
-for (int i = 0 ; i < count ; i++) {
-long processingTimeMs = SystemClock.uptimeMillis();
-Log.i(TAG, "Decode JPEG image content");
-
-// extract the jpeg content
-ByteArrayOutputStream stream = new ByteArrayOutputStream();
-bitmaps[i].compress(Bitmap.CompressFormat.JPEG,100,stream);
-byte[] byteArray = stream.toByteArray();
-Bitmap imageBitmap = 
BitmapFactory.decodeByteArray(byteArray, 0, byteArray.length);
-
-// crop input image at centre to model input size
-// commecial deploy note:: instead of cropying image do 
resize
-// image to model input size so we never lost the image 
content
-Bitmap cropImageBitmap = 
Bitmap.createBitmap(MODEL_INPUT_SIZE, MODEL_INPUT_SIZE, 
Bitmap.Config.ARGB_);
-Matrix frameToCropTransform = 
getTransformationMatrix(imageBitmap.getWidth(), imageBitmap.getHeight(),
-MODEL_INPUT_SIZE, MODEL_INPUT_SIZE, 0, true);
-Canvas canvas = new Canvas(cropImageBitmap);
-canvas.drawBitmap(imageBitmap, frameToCropTransform, null);
-
-// image pixel int values
-int[] pixelValues = new int[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE];
-// image RGB float values
-float[] imgRgbValues = new float[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE * IMG_CHANNEL];
-// image RGB transpose float values
-float[] imgRgbTranValues = new float[MODEL_INPUT_SIZE * 
MODEL_INPUT_SIZE * IMG_CHANNEL];
-
-// pre-process the image data from 0-255 int to normalized 
float based on the
-// provided parameters.
-cropImageBitmap.getPixels(pixelValues, 0, 
MODEL_INPUT_SIZE, 0, 0, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE);
-for (int j = 0; j < pixelValues.length; ++j) {
-imgRgbValues[j * 3 + 0] = ((pixelValues[j] >> 16) & 
0xFF)/255.0f;
-imgRgbValues[j * 3 + 1] = ((pixelValues[j] >> 8) & 
0xFF)/255.0f;
-imgRgbValues[j * 3 + 2] = (pixelValues[j] & 
0xFF)/255.0f;
-}
-
-// pre-process the image rgb data transpose based on the 
provided parameters.
-for (int k = 0; k < IMG_CHANNEL; ++k) {
-for (int l = 0; l < MODEL_INPUT_SIZE; ++l) {
-for (int m = 0; m < MODEL_INPUT_SIZE; ++m) {
-int dst_index = m + MODEL_INPUT_SIZE*l + 
MODEL_INPUT_SIZE*MODEL_INPUT_SIZE*k;
-int src_index = k + IMG_CHANNEL*m + 
IMG_CHANNEL*MODEL_INPUT_SIZE*l;
-imgRgbTranValues[dst_index] = 
imgRgbValues[src_index];
-}
-}
-}
-
-// get the function from the module(set input data)
-Log.i(TAG, "set input data");
-NDArray inputNdArray = NDArray.empty(new long[]{1, 
IMG_CHANNEL, MODEL_INPUT_SIZE, MODEL_INPUT_SIZE}, new TVMType("float32"));;
-inputNdArray.copyFrom(imgRgbTranValues);
-Function setInputFunc = 
graphRuntimeModule.getFunction("set_input");
-
setInputFunc.pushArg(INPUT_NAME).pushArg(inputNdArray).invoke();
-// release tvm local variables
-inputNdArray.release();
-setInputFunc.release();
-
-// get the function from the module(run it)
-Log.i(TAG, "run function on target");
-Function runFunc = graphRuntimeModule.getFunction("run");
-runFunc.invoke();
-// release tvm local variables
-runFunc.release();
-
-// get the function from the module(get output data)
-  

[GitHub] [tvm] altanh commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806028276


   Ah, well it looks like this might only make sense for constants that have 
only one element, unless we want to loop over every single element and check 
that it is equal to 0 or 1. But if I understand correctly, the 
`FullElementwise` pass only rewrites to scalar constants, so those will be 
handled correctly; it's just that if the input IR has non-scalar constants, 
they won't be simplified.
   
   Do you guys think this is a reasonable tradeoff?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] hogepodge commented on a change in pull request #7642: [docs] Getting Started With TVM: Tensor Expressions

2021-03-24 Thread GitBox


hogepodge commented on a change in pull request #7642:
URL: https://github.com/apache/tvm/pull/7642#discussion_r600710517



##
File path: tutorials/get_started/tensor_expr_get_started.py
##
@@ -255,41 +340,39 @@
 fadd1(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # Pack Everything into One Library
-# 
-# In the above example, we store the device and host code separately.
-# TVM also supports export everything as one shared library.
-# Under the hood, we pack the device modules into binary blobs and link
-# them together with the host code.
-# Currently we support packing of Metal, OpenCL and CUDA modules.
-#
+# 
+# In the above example, we store the device and host code separately. TVM also
+# supports export everything as one shared library. Under the hood, we pack
+# the device modules into binary blobs and link them together with the host
+# code. Currently we support packing of Metal, OpenCL and CUDA modules.
+
 fadd.export_library(temp.relpath("myadd_pack.so"))
 fadd2 = tvm.runtime.load_module(temp.relpath("myadd_pack.so"))
 fadd2(a, b, c)
 tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy())
 
-##
+
 # .. note:: Runtime API and Thread-Safety
 #
-#   The compiled modules of TVM do not depend on the TVM compiler.
-#   Instead, they only depend on a minimum runtime library.
-#   The TVM runtime library wraps the device drivers and provides
-#   thread-safe and device agnostic calls into the compiled functions.
-#
-#   This means that you can call the compiled TVM functions from any thread,
-#   on any GPUs.
+#   The compiled modules of TVM do not depend on the TVM compiler. Instead,
+#   they only depend on a minimum runtime library. The TVM runtime library
+#   wraps the device drivers and provides thread-safe and device agnostic calls
+#   into the compiled functions.
 #
+#   This means that you can call the compiled TVM functions from any thread, on
+#   any GPUs, provided that you have compiled the code for that GPU.
 
-##
+
 # Generate OpenCL Code

Review comment:
   I agree completely. Once this lands I'll send a new patch splitting 
these sections out and improving the flow.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806019388


   > I think I agree that merging these with SimplifyExpr would be a win in 
terms of our ability to control the order of execution.
   > 
   > On the simplification of things like `x * const(0)`, you can get the 
ndarray out of the const as ConstantNode->data, and then you can pass that to 
[this 
utility](https://github.com/apache/tvm/blob/63d8e97dfbe046e70c91c72cbbf7da8646824217/src/relay/transforms/pattern_utils.h#L385),
 which will return a `long double` version of the value, which shouldn't lose 
precision for any of the 64 bit or smaller datatypes we use. You can then do 
your comparison in a single dtype.
   > 
   > I've been meaning to implement this for like 6 months, and I haven't had a 
strong enough forcing function to bubble it up to the top of my priority list.
   
   This is just what I needed, thanks! I think since I'm also just checking 0 
or 1, there should be no problem casting since the floating point repr should 
all be the same.
   
   I'll fix the Windows issue, it was because I overloaded the same name too 
much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


mbrookhart commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806018953


   It looks like you have a windows build issue?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


mbrookhart commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806017937


   I think I agree that merging these with SimplifyExpr would be a win in terms 
of our ability to control the order of execution.
   
   On the simplification of things like `x * const(0)`, you can get the ndarray 
out of the const as ConstantNode->data, and then you can pass that to [this 
utility](https://github.com/apache/tvm/blob/63d8e97dfbe046e70c91c72cbbf7da8646824217/src/relay/transforms/pattern_utils.h#L385),
 which will return a `long double` version of the value, which shouldn't lose 
precision for any of the 64 bit or smaller datatypes we use. You can then do 
your comparison in a single dtype.
   
   I've been meaning to implement this for like 6 months, and I haven't had a 
strong enough forcing function to bubble it up to the top of my priority list.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


comaniac commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806008203


   If the purpose is just for testing, then I'd prefer to have them in a single 
pass. You can still test the patterns one by one, as SimplifyExpr does now. 
Since the unrelated patterns won't be matched, I don't see a problem with 
testing. In this case, you can also control the order of rewriting patterns, 
i.e., always run `EliminateIdentity` before `FullElementWise` in the 
SimplifyExpr pass. This can also reduce possible confusion for users.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-806004438


   > Overall LGTM. Meanwhile I have some questions about the design:
   > 
   > 1. It seems to me that ConcretizeLike and EliminateIdentity can also 
be merged into SimplifyExpr in terms of implementation and semantics. What's 
the concern of having 3 separate passes?
   > 
   > 2. You mentioned that for a certain reason, EliminateIdentity should 
be run before SimplifyExpr, but I didn't get the point about what would happen 
if we run them in the reverse order. Could you elaborate a bit further?
   
   1. It is definitely possible - I separated them mainly for the ability to test 
them separately, as otherwise the overall semantics of the combined pass might 
be a bit tricky to write test cases for (e.g. we will need to adjust the cases 
where we are adding 0 or multiplying by 1). I can definitely add additional 
test cases that run all of them in sequence (as if it was a single pass), or 
just try to merge them into SimplifyExpr and update the test cases. lmk
   2. Yeah, so SimplifyExpr has a rewrite called `FullElementwise` that takes 
(for example) `x + zeros_like(x)` and rewrites it to `x + const(0)`. I couldn't 
think of a portable way to rewrite `x + const(0)` to `x` in 
`EliminateIdentity`, so it won't reduce this expression. For this reason you 
should run `EliminateIdentity` first- hope this makes sense. That being said, 
if there is a good way to examine constant values for any dtype (e.g. casting?) 
then we could also eliminate this.
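   
   For concreteness, a small self-contained example of the expression shape 
being discussed, run through the existing SimplifyExpr pass (illustration only; 
whether the result keeps `x + const(0)` or reduces all the way to `x` depends 
on which of the rewrites in this PR run, and in what order):
   
   ```python
   import tvm
   from tvm import relay
   
   x = relay.var("x", shape=(4, 4), dtype="float32")
   f = relay.Function([x], x + relay.zeros_like(x))
   mod = tvm.IRModule.from_expr(f)
   mod = relay.transform.InferType()(mod)
   mod = relay.transform.SimplifyExpr()(mod)
   print(mod)
   ```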


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (1fe0abc -> 63d8e97)

2021-03-24 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 1fe0abc  [TVMC] Python Scripting Init Files (#7698)
 add 63d8e97  [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency) 
(#7728)

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile |  2 +-
 docker/Dockerfile.ci_qemu   |  4 ++
 docker/install/ubuntu_install_zephyr.sh | 69 +
 3 files changed, 7 insertions(+), 68 deletions(-)


[GitHub] [tvm] tmoreau89 commented on pull request #7728: [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency)

2021-03-24 Thread GitBox


tmoreau89 commented on pull request #7728:
URL: https://github.com/apache/tvm/pull/7728#issuecomment-806002893


   Thanks @mdw-octoml @areusch the PR has been merged


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tmoreau89 merged pull request #7728: [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency)

2021-03-24 Thread GitBox


tmoreau89 merged pull request #7728:
URL: https://github.com/apache/tvm/pull/7728


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] altanh commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


altanh commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600683869



##
File path: src/relay/transforms/simplify_expr.cc
##
@@ -22,44 +22,37 @@
  * \brief A pass for simplifying the Relay expression.
  */
 
+#include "simplify_expr.h"
+
 #include 
 #include 
 #include 
 #include 
 #include 
 
+#include 
+
 #include "../op/tensor/transform.h"
 #include "pattern_utils.h"
 
 namespace tvm {
 namespace relay {
 
-class SimplifyPattern {
- public:
-  virtual Expr callback(const Expr& pre, const Expr& post,
-const Map>& node_map) const = 0;
-
-  DFPattern pattern() const { return pattern_; }
-
- protected:
-  /*! \brief Pattern for rewriting */
-  DFPattern pattern_;
-};
-
 /*!
  * \brief SimplifyReshape matches the pattern of consecutive reshape or 
reverse_reshape ops,
  *   and merges into one reshape op.
  */
-class SimplifyReshape : public SimplifyPattern {
+class SimplifyReshape : public DFPatternRewrite {
  public:
   SimplifyReshape() {
 x_ = IsWildcard();
 auto reshape1 = IsOp("reshape") || IsOp("contrib_reverse_reshape");
 auto reshape2 = IsOp("reshape") || IsOp("contrib_reverse_reshape");
 pattern_ = reshape1({reshape2({x_})});
+require_type_ = true;

Review comment:
   good point, thanks




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #7728: [µTVM] Rev ci-qemu to 0.02 (Introduce onnx python dependency)

2021-03-24 Thread GitBox


areusch commented on a change in pull request #7728:
URL: https://github.com/apache/tvm/pull/7728#discussion_r600683088



##
File path: docker/Dockerfile.ci_qemu
##
@@ -64,3 +64,7 @@ RUN bash /install/ubuntu_install_qemu.sh
 COPY install/ubuntu_install_zephyr.sh /install/ubuntu_install_zephyr.sh
 RUN bash /install/ubuntu_install_zephyr.sh
 ENV ZEPHYR_BASE=/opt/zephyrproject/zephyr
+
+# Install ONNX

Review comment:
   must reduce variation in ci-* images :)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


comaniac commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-805995681


   Also cc @mbrookhart 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


comaniac commented on a change in pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#discussion_r600673972



##
File path: src/relay/transforms/concretize_like.cc
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file concretize_like.cc
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
+ */
+
+#include 
+
+#include "pattern_utils.h"
+#include "simplify_expr.h"
+
+namespace tvm {
+namespace relay {
+
+class ConcretizeLikeRewrite : public DFPatternRewrite {
+ public:
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
+require_type_ = true;
+  }
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type_.defined()) {
+  // TODO(@altanh): maybe because of the input being rewritten?

Review comment:
   You could manually assign the `checked_type_` of input to the new 
created node in the rewrite callback.

##
File path: src/relay/transforms/concretize_like.cc
##
@@ -0,0 +1,182 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file concretize_like.cc
+ * \brief Converts `*_like` operators to their explicit shape equivalent (e.g. 
`zeros_like(x, y)` to
+ * `zeros(x, y.shape)`), when the target shape is concrete. This removes 
unnecessary dependencies
+ * and can enable more opportunities for operator fusion.
+ */
+
+#include 
+
+#include "pattern_utils.h"
+#include "simplify_expr.h"
+
+namespace tvm {
+namespace relay {
+
+class ConcretizeLikeRewrite : public DFPatternRewrite {
+ public:
+  explicit ConcretizeLikeRewrite(const Op& op) {
+ICHECK(op->num_inputs == 1 || op->num_inputs == 2)
+<< "ConcretizeLike does not handle operators that aren't unary or 
binary, got: " << op;
+like_pat_ = IsWildcard();
+data_pat_ = IsWildcard();
+if (op->num_inputs == 1) {
+  pattern_ = IsExpr(op)({like_pat_});
+} else {
+  pattern_ = IsExpr(op)({data_pat_, like_pat_});
+}
+require_type_ = true;
+  }
+
+  virtual bool Check(const Expr& pre, const Expr& post,
+ const Map>& node_map) const {
+const CallNode* call_node = pre.as();
+ICHECK(call_node);
+
+if (!call_node->checked_type_.defined()) {
+  // TODO(@altanh): maybe because of the input being rewritten?
+  return false;
+}
+
+const TensorTypeNode* like_ty = 
call_node->checked_type().as();
+ICHECK(like_ty) << "got non-Tensor *_like call type " << 
PrettyPrint(call_node->checked_type());
+
+return true;
+  }
+
+  virtual Expr Concretize(const Map>& node_map, 
Array shape,
+  DataType dtype) const = 0;
+
+  Expr Callback(const Expr& pre, const Expr& post,
+const Map>& node_map) const override {

[GitHub] [tvm] tqchen commented on pull request #7731: [Relay][Pass] ConcretizeLike and EliminateIdentity Passes

2021-03-24 Thread GitBox


tqchen commented on pull request #7731:
URL: https://github.com/apache/tvm/pull/7731#issuecomment-805976301


   cc @comaniac @yzhliu please help to review this PR


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on issue #7732: [Bug][Auto-scheduler] generated cuda code

2021-03-24 Thread GitBox


comaniac commented on issue #7732:
URL: https://github.com/apache/tvm/issues/7732#issuecomment-805973967


   How many trials did you run, and how did you apply the schedule from the log 
file? It's possible for the auto-scheduler to generate invalid kernels during 
tuning, but they ultimately shouldn't be picked when compiling the model.
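   
   For reference, a minimal sketch of applying a tuning log when compiling a 
model (the shapes and log file name are made up, and it assumes a CUDA-enabled 
build plus a log produced by a previous tuning run); the tuned schedules are 
applied through the compiler rather than by running the generated kernels by 
hand:
   
   ```python
   import tvm
   from tvm import relay, auto_scheduler
   
   x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
   w = relay.var("w", shape=(16, 3, 3, 3), dtype="float32")
   mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.conv2d(x, w)))
   
   with auto_scheduler.ApplyHistoryBest("conv2d_tuning.json"):
       with tvm.transform.PassContext(
           opt_level=3, config={"relay.backend.use_auto_scheduler": True}
       ):
           lib = relay.build(mod, target="cuda")
   ```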


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jwfromm commented on pull request #7698: [TVMC] Python Scripting Init Files

2021-03-24 Thread GitBox


jwfromm commented on pull request #7698:
URL: https://github.com/apache/tvm/pull/7698#issuecomment-805949425


   Thanks @CircleSpin @leandron and @comaniac. This is now merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Hzfengsy opened a new pull request #7737: [TensorIR] Fix parser autocompletion mode

2021-03-24 Thread GitBox


Hzfengsy opened a new pull request #7737:
URL: https://github.com/apache/tvm/pull/7737


   This PR fixes the Script parser autocompletion mode and adds checks for 
opaque blocks (which are not allowed for now).
   
   Co-authored-by: Ruihang Lai 
   
   cc @tqchen @junrushao1994 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (8131364 -> 1fe0abc)

2021-03-24 Thread jwfromm
This is an automated email from the ASF dual-hosted git repository.

jwfromm pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 8131364  [ONNX] Onnx node tests (#7720)
 add 1fe0abc  [TVMC] Python Scripting Init Files (#7698)

No new revisions were added by this update.

Summary of changes:
 python/tvm/driver/tvmc/__init__.py |  4 
 tests/python/driver/tvmc/test_compiler.py  | 20 +---
 tests/python/driver/tvmc/test_frontends.py | 18 --
 tests/python/driver/tvmc/test_runner.py|  2 +-
 4 files changed, 22 insertions(+), 22 deletions(-)


[GitHub] [tvm] jwfromm merged pull request #7698: [TVMC] Python Scripting Init Files

2021-03-24 Thread GitBox


jwfromm merged pull request #7698:
URL: https://github.com/apache/tvm/pull/7698


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] euntaik opened a new pull request #7736: [frontend][tflite] float16 quant support

2021-03-24 Thread GitBox


euntaik opened a new pull request #7736:
URL: https://github.com/apache/tvm/pull/7736


   add float16 quant support for fc and transpose_conv


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] rafzi opened a new pull request #7735: [crt] fix heap corruption from bad allocation

2021-03-24 Thread GitBox


rafzi opened a new pull request #7735:
URL: https://github.com/apache/tvm/pull/7735


   The type of runtime->storage_pool was changed at some point from TVMNDArray 
to TVMGraphRuntimeStorageEntry. This change was not reflected in the allocation 
call for its buffer, so the buffer ends up smaller than needed. If the 
unclaimed space is allocated to something else, data corruption will happen.
   
   @areusch 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Wheest commented on pull request #6137: Better grouped convolution for CPU targets

2021-03-24 Thread GitBox


Wheest commented on pull request #6137:
URL: https://github.com/apache/tvm/pull/6137#issuecomment-805772836


   > @Wheest Seems you have wrong rebase. please refer this documentation : 
https://tvm.apache.org/docs/contribute/git_howto.html
   
   I believe I have fixed the rebase/commit problems, and will try to avoid 
these in the future for a cleaner PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] majercakdavid opened a new issue #7734: [Web] tvmjs runtime error when loading, missing "runtime.SystemLib"

2021-03-24 Thread GitBox


majercakdavid opened a new issue #7734:
URL: https://github.com/apache/tvm/issues/7734


   I would like to report a problem with the JS runtime. Currently, the 
instantiation of tvm runs smoothly and does not result in any error:
   ```
   try {
   this.tvm = await tvmjs.instantiate(
   wasmSource,
   new EmccWASI()
   );
   } catch(e){
   console.log(e);
   }
   ```
   However, after invoking this line (just like proposed in the 
https://github.com/apache/tvm/blob/main/web/tests/node/test_module_load.js):
   
   ```
   this.sysLib = this.tvm.systemLib();
   ```
   
   The application ended up with the following error:
   
   ```
   tvmjs_runtime.wasi.js:formatted:410 Uncaught (in promise) RuntimeError: 
abort([object Object]). Build with -s ASSERTIONS=1 for more info.
   at abort (https://localhost:4443/tvm/wasm/tvmjs_runtime.wasi.js:3:9079)
   at _proc_exit 
(https://localhost:4443/tvm/wasm/tvmjs_runtime.wasi.js:3:66932)
   at :wasm-function[1680]:0x75969
   at :wasm-function[1683]:0x759d0
   at :wasm-function[188]:0x1eb8b
   at :wasm-function[193]:0x1f194
   at :wasm-function[866]:0x47c0f
   at TVMFuncCall (:wasm-function[78]:0x85e8)
   at packedFunc (https://localhost:4443/tvm/tvmjs.bundle.js:1877:46)
   at Instance.systemLib 
(https://localhost:4443/tvm/tvmjs.bundle.js:1466:22)
   ```
   Furthermore, I have been able to investigate it a little bit more and the 
issue most probably lies in here:
   
   
![image](https://user-images.githubusercontent.com/9350520/112303774-f8aff500-8c9c-11eb-8b16-223b2dc7f776.png)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] wxyhv commented on issue #4118: [RFC] Dynamic Shape Support - Graph Dispatching

2021-03-24 Thread GitBox


wxyhv commented on issue #4118:
URL: https://github.com/apache/tvm/issues/4118#issuecomment-805659240


   Is there any progress on dynamic input shape inference? I am looking forward to it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhuwenxi commented on pull request #7619: [BugFix] Fix the race condition issue of packed func. (#7246).

2021-03-24 Thread GitBox


zhuwenxi commented on pull request #7619:
URL: https://github.com/apache/tvm/pull/7619#issuecomment-805634130


   @tqchen Please take a look at my latest commit if you have time, thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch edited a comment on pull request #7518: [RFC]TECompiler: Staged refactor and removal of compile engine

2021-03-24 Thread GitBox


jroesch edited a comment on pull request #7518:
URL: https://github.com/apache/tvm/pull/7518#issuecomment-805600377


   cc @mbaret 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch commented on pull request #7518: [RFC]TECompiler: Staged refactor and removal of compile engine

2021-03-24 Thread GitBox


jroesch commented on pull request #7518:
URL: https://github.com/apache/tvm/pull/7518#issuecomment-805600377


   cc @MatthewARM 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch commented on pull request #7518: [RFC]TECompiler: Staged refactor and removal of compile engine

2021-03-24 Thread GitBox


jroesch commented on pull request #7518:
URL: https://github.com/apache/tvm/pull/7518#issuecomment-805594912


   Modulo some left over polish work and documentation I think this is ready 
for review @icemelon9 @comaniac @csullivan @tkonolige @rkimball @junrushao1994 
@areusch @mehrdadh 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch commented on pull request #7518: [RFC]TECompiler: Staged refactor and removal of compile engine

2021-03-24 Thread GitBox


jroesch commented on pull request #7518:
URL: https://github.com/apache/tvm/pull/7518#issuecomment-805594042


   Need to port fix from #7703 but otherwise ready for review. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] rafzi opened a new pull request #7733: [crt] fix shift out of type bounds

2021-03-24 Thread GitBox


rafzi opened a new pull request #7733:
URL: https://github.com/apache/tvm/pull/7733


   When running with "-fsanitize=undefined" I got the following error:
   
   tvm/src/runtime/crt/common/crt_runtime_api.c:213:32: runtime error: left 
shift of 32770 by 16 places cannot be represented in type 'int'
   
   The expression "module_index | 0x8000" is of type "int" and needs to be 
cast to unsigned before the shift.
   
   @areusch 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] xiebaiyuan commented on issue #7705: build tvm failed with libbacktrace.a : ld: symbol(s) not found for architecture x86_64

2021-03-24 Thread GitBox


xiebaiyuan commented on issue #7705:
URL: https://github.com/apache/tvm/issues/7705#issuecomment-805570369


   config.log
   
   >
   
   This file contains any messages produced by compilers while
   running configure, to aid debugging if configure makes a mistake.
   
   It was created by package-unused configure version-unused, which was
   generated by GNU Autoconf 2.69.  Invocation command line was
   
 $ 
/Users/xiebaiyuan/Downloads/incubator-tvm/cmake/libs/../../3rdparty/libbacktrace/configure
 --prefix=/Users/xiebaiyuan/Downloads/incubator-tvm/build/libbacktrace 
--with-pic CPP=/Library/Developer/CommandLineTools/usr/bin/cc -E 
CC=/Library/Developer/CommandLineTools/usr/bin/cc AR= RANLIB= 
NM=/Library/Developer/CommandLineTools/usr/bin/nm 
STRIP=/Library/Developer/CommandLineTools/usr/bin/strip CFLAGS=-O2 -Wall -fPIC  
CPPFLAGS=-O2 -Wall -fPIC  LDFLAGS=
   
   ## - ##
   ## Platform. ##
   ## - ##
   
   hostname = xiebaiyuandeMBP
   uname -m = x86_64
   uname -r = 20.3.0
   uname -s = Darwin
   uname -v = Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; 
root:xnu-7195.81.3~1/RELEASE_X86_64
   
   /usr/bin/uname -p = i386
   /bin/uname -X = unknown
   
   /bin/arch  = unknown
   /usr/bin/arch -k   = unknown
   /usr/convex/getsysinfo = unknown
   /usr/bin/hostinfo  = Mach kernel version:
 Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; 
root:xnu-7195.81.3~1/RELEASE_X86_64
   Kernel configured for up to 12 processors.
   6 processors are physically available.
   12 processors are logically available.
   Processor type: x86_64h (Intel x86-64h Haswell)
   Processors active: 0 1 2 3 4 5 6 7 8 9 10 11
   Primary memory available: 32.00 gigabytes
   Default processor set: 740 tasks, 3346 threads, 12 processors
   Load average: 2.17, Mach factor: 9.82
   /bin/machine   = unknown
   /usr/bin/oslevel   = unknown
   /bin/universe  = unknown
   
   PATH: /usr/local/opt/openjdk@8/bin
   PATH: /opt/anaconda3/bin
   PATH: /opt/anaconda3/condabin
   PATH: /Users/xiebaiyuan/android-ndk-r16b
   PATH: /Users/xiebaiyuan/.oh-my-zsh/custom/plugins/git-open
   PATH: /usr/local/bin
   PATH: /usr/bin
   PATH: /bin
   PATH: /usr/sbin
   PATH: /sbin
   PATH: /Library/Apple/usr/bin
   PATH: /Library/Frameworks/Mono.framework/Versions/Current/Commands
   PATH: /Users/xiebaiyuan/baidu/searchbox/mgit
   PATH: /Users/xiebaiyuan/Library/Android/sdk/tools
   PATH: /Users/xiebaiyuan/Library/Android/sdk/platform-tools
   PATH: /Users/xiebaiyuan/android-ndk-r16b
   PATH: /Library/Java/JavaVirtualMachines/openjdk-8.jdk/Contents/Home
   
   
   ## --- ##
   ## Core tests. ##
   ## --- ##
   
   configure:2379: checking build system type
   configure:2393: result: x86_64-apple-darwin20.3.0
   configure:2413: checking host system type
   configure:2426: result: x86_64-apple-darwin20.3.0
   configure:2446: checking target system type
   configure:2459: result: x86_64-apple-darwin20.3.0
   configure:2539: checking for gcc
   configure:2566: result: /Library/Developer/CommandLineTools/usr/bin/cc
   configure:2795: checking for C compiler version
   configure:2804: /Library/Developer/CommandLineTools/usr/bin/cc --version >&5
   Apple clang version 12.0.0 (clang-1200.0.32.29)
   Target: x86_64-apple-darwin20.3.0
   Thread model: posix
   InstalledDir: /Library/Developer/CommandLineTools/usr/bin
   configure:2815: $? = 0
   configure:2804: /Library/Developer/CommandLineTools/usr/bin/cc -v >&5
   Apple clang version 12.0.0 (clang-1200.0.32.29)
   Target: x86_64-apple-darwin20.3.0
   Thread model: posix
   InstalledDir: /Library/Developer/CommandLineTools/usr/bin
   configure:2815: $? = 0
   configure:2804: /Library/Developer/CommandLineTools/usr/bin/cc -V >&5
   clang: error: argument to '-V' is missing (expected 1 value)
   clang: error: no input files
   configure:2815: $? = 1
   configure:2804: /Library/Developer/CommandLineTools/usr/bin/cc -qversion >&5
   clang: error: unknown argument '-qversion'; did you mean '--version'?
   clang: error: no input files
   configure:2815: $? = 1
   configure:2835: checking whether the C compiler works
   configure:2857: /Library/Developer/CommandLineTools/usr/bin/cc -O2 -Wall 
-fPIC  -O2 -Wall -fPIC   conftest.c  >&5
   ld: library not found for -lSystem
   clang: error: linker command failed with exit code 1 (use -v to see 
invocation)
   configure:2861: $? = 1
   configure:2899: result: no
   configure: failed program was:
   | /* confdefs.h */
   | #define PACKAGE_NAME "package-unused"
   | #define PACKAGE_TARNAME "libbacktrace"
   | #define PACKAGE_VERSION "version-unused"
   | #define PACKAGE_STRING "package-unused version-unused"
   | #define PACKAGE_BUGREPORT ""
   | #define PACKAGE_URL ""
   | /* end confdefs.h.  */
   |
   | int
   | main ()
   | {
   |
   |   ;
   |   return 0;
   | }
   configure:2904: error: in 
`/Users/xiebaiyuan/Downloads/incubator-tvm/build/libbacktrace':
  

[GitHub] [tvm] xiebaiyuan commented on issue #7705: build tvm failed with libbacktrace.a : ld: symbol(s) not found for architecture x86_64

2021-03-24 Thread GitBox


xiebaiyuan commented on issue #7705:
URL: https://github.com/apache/tvm/issues/7705#issuecomment-805563728


   @tkonolige 
   https://github.com/tkonolige/incubator-tvm/tree/fix_libbacktrace_macos
   USE_LIBBACKTRACE=On
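
   For reference, USE_LIBBACKTRACE is an ordinary TVM CMake option, so it is presumably switched on either in build/config.cmake or directly on the cmake command line; a minimal sketch of both routes, assuming the usual out-of-tree build directory used in the session below:

```sh
cd incubator-tvm && mkdir -p build && cd build
cp ../cmake/config.cmake .        # then edit the copy: set(USE_LIBBACKTRACE ON)
cmake .. -DUSE_LIBBACKTRACE=ON    # or override the flag on the command line
```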
   
   ➜  build git:(fix_libbacktrace_macos) cmake ..
   -- The C compiler identification is AppleClang 12.0.0.1232
   -- The CXX compiler identification is AppleClang 12.0.0.1232
   -- Detecting C compiler ABI info
   -- Detecting C compiler ABI info - done
   -- Check for working C compiler: /Library/Developer/CommandLineTools/usr/bin/cc - skipped
   -- Detecting C compile features
   -- Detecting C compile features - done
   -- Detecting CXX compiler ABI info
   -- Detecting CXX compiler ABI info - done
   -- Check for working CXX compiler: /Library/Developer/CommandLineTools/usr/bin/c++ - skipped
   -- Detecting CXX compile features
   -- Detecting CXX compile features - done
   -- Build with RPC support...
   -- Build with Graph runtime support...
   -- Build with profiler...
   -- VTA build with VTA_HW_PATH=/Users/xiebaiyuan/Downloads/incubator-tvm/3rdparty/vta-hw
   -- Build VTA runtime with target: sim
   -- Build with contrib.random
   -- Build with contrib.sort
   -- Build with contrib.hybriddump
   -- Git found: /usr/local/bin/git
   -- Found TVM_GIT_COMMIT_HASH=42213e3b74cfc58107c806a372b29a4cd38e8438
   -- Performing Test SUPPORT_CXX14
   -- Performing Test SUPPORT_CXX14 - Success
   -- Building with libbacktrace...
   -- Building with TVM Map...
   -- Build with thread support...
   -- Looking for pthread.h
   -- Looking for pthread.h - found
   -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
   -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
   -- Found Threads: TRUE
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /Users/xiebaiyuan/Downloads/incubator-tvm/build
   
   > 
   build git:(fix_libbacktrace_macos) make -j4
   
   Scanning dependencies of target project_libbacktrace
   Scanning dependencies of target tvm_objs
   [  0%] Creating directories for 'project_libbacktrace'
   [  1%] No download step for 'project_libbacktrace'
   [  1%] No update step for 'project_libbacktrace'
   [  1%] No checkout step for 'project_libbacktrace'
   [  1%] No patch step for 'project_libbacktrace'
   [  1%] Performing configure step for 'project_libbacktrace'
   checking build system type... x86_64-apple-darwin20.3.0
   checking host system type... x86_64-apple-darwin20.3.0
   checking target system type... x86_64-apple-darwin20.3.0
   checking for gcc... /Library/Developer/CommandLineTools/usr/bin/cc
   checking whether the C compiler works... no
   configure: error: in `/Users/xiebaiyuan/Downloads/incubator-tvm/build/libbacktrace':
   configure: error: C compiler cannot create executables
   See `config.log' for more details
   make[2]: *** [libbacktrace/src/project_libbacktrace-stamp/project_libbacktrace-configure] Error 77
   make[1]: *** [CMakeFiles/project_libbacktrace.dir/all] Error 2
   make[1]: *** Waiting for unfinished jobs
   [  1%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/canonical_simplify.cc.o
   [  1%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/const_int_bound.cc.o
   [  1%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/analyzer.cc.o
   [  2%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/bound_deducer.cc.o
   [  3%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/detect_linear_equation.cc.o
   [  3%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/domain_touched.cc.o
   [  3%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/int_constraints.cc.o
   [  3%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/int_set.cc.o
   [  4%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/ir_mutator_with_analyzer.cc.o
   [  4%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/iter_affine_map.cc.o
   [  4%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/modular_set.cc.o
   [  4%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/rewrite_simplify.cc.o
   [  5%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/solve_linear_equation.cc.o
   [  5%] Building CXX object CMakeFiles/tvm_objs.dir/src/arith/solve_linear_inequality.cc.o
   [  5%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/auto_schedule.cc.o
   [  6%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/compute_dag.cc.o
   [  6%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/cost_model.cc.o
   [  6%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/feature.cc.o
   [  6%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/loop_state.cc.o
   [  7%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/measure.cc.o
   [  7%] Building CXX object CMakeFiles/tvm_objs.dir/src/auto_scheduler/measure_record.cc.o
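
   Since the configure failure comes from the bundled libbacktrace sub-build rather than TVM itself, one way to keep the rest of the build moving while the toolchain problem is investigated is to switch the feature back off and reconfigure. This is only a sketch of a temporary workaround, not a fix for the underlying compiler check:

```sh
cd build
rm -f CMakeCache.txt              # drop the cached USE_LIBBACKTRACE=ON value
cmake .. -DUSE_LIBBACKTRACE=OFF   # reconfigure without libbacktrace
make -j4
```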
   
   


