[GitHub] [tvm] manupa-arm commented on issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


manupa-arm commented on issue #9022:
URL: https://github.com/apache/tvm/issues/9022#issuecomment-920580279


   Our proposal is to add a check in that loop for whether the allocation has 'local' storage_scope before we place it on the stack, since that is the solution that works for the wider definition of the CPU, rather than performing a hidden optimization in the codegen that applies only to a subset of CPUs.
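
   A minimal sketch of such a check, assuming the storage scope is recorded on the buffer var's pointer type (as in TIR after the storage-scope refactor); the helper name is illustrative only:

```python
import tvm
from tvm import tir


def allocate_is_local(alloc: tir.Allocate) -> bool:
    """Return True when a tir.Allocate carries the 'local' storage scope."""
    ptr_type = alloc.buffer_var.type_annotation
    return isinstance(ptr_type, tvm.ir.PointerType) and ptr_type.storage_scope == "local"
```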






[GitHub] [tvm] manupa-arm commented on issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


manupa-arm commented on issue #9022:
URL: https://github.com/apache/tvm/issues/9022#issuecomment-920577170


   @tqchen @mbs-octoml ,
   
   This is not specific to the Arm(R) Ethos(TM)-U codegen; it is generally applicable to any microcontroller where we want to avoid allocating memory on the stack and instead service allocations via the platform abstraction, handled through TVMBackendAllocWorkspace --> TVMPlatformAllocate.
   
   This only showed up in the Arm(R) Ethos(TM)-U codegen because we use TVMPlatformAllocate to allocate memory from a buffer placed in memory that is accessible by both the CPU and the NPU; that is what makes this a functional bug.
   However, with this change, current main produces code with much higher stack usage for micros -- which is not desired.
   
   cc : @u99127  @areusch 
   
   > Stack allocation is important for the performance of the CPU code. In the case of TVM, we do not have an explicit concept of registers in most cases. Instead we rely on LLVM's mem2reg pass to transform a set of constant indexing into stack allocation and turn them into registers, so the code can run effectively. So removing this code path can complicate the code generator side optimization by quite a bit and slow down the CPU code.
   
   The correct way to represent this seems to be tir.allocates with storage_scope="local" for device=CPU when they should go on the stack. For targets that need this behavior, there should be an explicit pass that converts allocations to the local scope so they are placed on the stack, rather than assuming this as the default behaviour.
   
   > Of course this can be a target specific thing. LowerTVMBuiltin right now has the assumption to only run on host (CPU) code.
   
   > - Allocate always prefers (native) stack allocation when possible, but also allows other means of opaque allocation (as long as the allocation is fulfilled)
   > - There are, however, cases when stack allocation is not possible:
   >   - When the size of memory requested is too big, stack alloca will explode the stack space (that is why there is a size check in the CPU case, and the use of global opaque was meant as a fallback to avoid stack overflow in models with big intermediate temp space)
   >   - LowerTVMBuiltin was originally designed to run on the host side, which means as soon as the allocation is about device side memory, it will need to call onto a (host side) device API to allocate the memory instead
   
   Clearly, this definition of CPU leaves out micros.
   It feels wrong for allocates with "global" storage_scope to be emitted directly into a CPU PrimFunc as stack allocations; rather, they should be serviced via a TVMBAW call, which moves the responsibility to the runtime/application layer.
   
   > So rationales for the specific CPU side logic:
   > - We want to have stack alloca on the host when possible (to gain the mem2reg optimization)
   > - When the requested size is too large, we fall back to opaque workspace allocation on the heap to allow the code to safely handle big temp memory requests as well as dynamic size allocation requests.
   
   This certainly sounds like we could use an optimization pass that converts the tir.allocate's storage_scope for targets that require it, rather than making that the default behaviour for tir.allocates with "global" storage scope.
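
   A sketch of what such a pass could look like at the Python level; the pass name is hypothetical, and a real pass would consult the target before rewriting anything:

```python
import tvm
from tvm import tir
from tvm.tir import stmt_functor


@tvm.tir.transform.prim_func_pass(opt_level=0)
def make_allocates_local(func, mod, ctx):
    """Rewrite "global"-scoped tir.Allocate nodes to "local" scope."""

    def rewrite(op):
        ptr_type = op.buffer_var.type_annotation
        if ptr_type.storage_scope != "global":
            return None  # leave other scopes untouched
        local_var = tir.Var(op.buffer_var.name,
                            tvm.ir.PointerType(ptr_type.element_type, "local"))
        # Re-point every use of the old buffer var at the rescoped one.
        body = stmt_functor.substitute(op.body, {op.buffer_var: local_var})
        return tir.Allocate(local_var, op.dtype, op.extents, op.condition, body)

    return func.with_body(stmt_functor.ir_transform(func.body, None, rewrite, ["tir.Allocate"]))
```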
   
   cc : @tom-gall @mbaret @Mousius 
   
   
   
   
   






[tvm] branch main updated (89bcc79 -> 4c77bae)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 89bcc79  fix (#9021)
 add 4c77bae  [Onnx] Add momentum (#9000)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 54 ++
 tests/python/frontend/onnx/test_forward.py |  3 --
 2 files changed, 54 insertions(+), 3 deletions(-)


[tvm] branch main updated (ff0868f -> 89bcc79)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from ff0868f  [Community] @AndrewZhaoLuo -> Reviewer (#9020)
 add 89bcc79  fix (#9021)

No new revisions were added by this update.

Summary of changes:
 python/tvm/topi/cuda/pooling.py | 2 +-
 python/tvm/topi/x86/pooling.py  | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)


[GitHub] [tvm] masahi merged pull request #9000: [Onnx] Add momentum

2021-09-15 Thread GitBox


masahi merged pull request #9000:
URL: https://github.com/apache/tvm/pull/9000


   






[tvm] branch main updated (148ddca -> ff0868f)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 148ddca  [Hexagon] Implement model launcher (#8986)
 add ff0868f  [Community] @AndrewZhaoLuo -> Reviewer (#9020)

No new revisions were added by this update.

Summary of changes:
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)


[GitHub] [tvm] masahi merged pull request #9021: [TOPI] Fix more pooling schedule

2021-09-15 Thread GitBox


masahi merged pull request #9021:
URL: https://github.com/apache/tvm/pull/9021


   






[GitHub] [tvm] masahi merged pull request #9020: [Community] @AndrewZhaoLuo -> Reviewer

2021-09-15 Thread GitBox


masahi merged pull request #9020:
URL: https://github.com/apache/tvm/pull/9020


   






[tvm] branch main updated (777ace3 -> 148ddca)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 777ace3  [Relay][Pass] Add ExtractOperators pass (#8996)
 add 148ddca  [Hexagon] Implement model launcher (#8986)

No new revisions were added by this update.

Summary of changes:
 cmake/modules/HexagonSDK.cmake |   5 +
 src/runtime/hexagon/launcher/CMakeLists.txt| 156 ++
 src/runtime/hexagon/launcher/README.md | 175 
 src/runtime/hexagon/launcher/launcher_android.cc   | 164 +++
 src/runtime/hexagon/launcher/launcher_core.cc  | 176 
 src/runtime/hexagon/launcher/launcher_core.h   | 132 
 src/runtime/hexagon/launcher/launcher_hexagon.cc   | 229 +
 src/runtime/hexagon/launcher/launcher_main.cc  | 148 +
 .../runtime/hexagon/launcher/launcher_rpc.idl  |  25 +--
 src/runtime/hexagon/launcher/launcher_util.cc  |  68 ++
 .../runtime/hexagon/launcher/launcher_util.h   |  25 +--
 11 files changed, 1273 insertions(+), 30 deletions(-)
 create mode 100644 src/runtime/hexagon/launcher/CMakeLists.txt
 create mode 100644 src/runtime/hexagon/launcher/README.md
 create mode 100644 src/runtime/hexagon/launcher/launcher_android.cc
 create mode 100644 src/runtime/hexagon/launcher/launcher_core.cc
 create mode 100644 src/runtime/hexagon/launcher/launcher_core.h
 create mode 100644 src/runtime/hexagon/launcher/launcher_hexagon.cc
 create mode 100644 src/runtime/hexagon/launcher/launcher_main.cc
 copy nnvm/src/c_api/c_api_error.cc => 
src/runtime/hexagon/launcher/launcher_rpc.idl (59%)
 create mode 100644 src/runtime/hexagon/launcher/launcher_util.cc
 copy include/tvm/parser/parser.h => 
src/runtime/hexagon/launcher/launcher_util.h (62%)


[GitHub] [tvm] masahi merged pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


masahi merged pull request #8986:
URL: https://github.com/apache/tvm/pull/8986


   






[GitHub] [tvm] junrushao1994 edited a comment on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


junrushao1994 edited a comment on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920494401


   I think I sort of understand the use case here: some indices are not known to be negative until runtime, which forces us to defer the conversion from compile time to runtime.
   
   On the other hand, I am not 100% sure that adding a new argument to the public interface is the best fix, given that in most cases indices are just positive and well in-range.
   
   I was thinking: if the issue comes from an importer, is it possible to add an operator like `normalize_indices` and mark it as `injective`, which makes it fusible, so that there is no architectural change in TE? What do you guys think?
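
   As a sketch of the idea, one possible TE-level form of such an operator; the name `normalize_indices` is hypothetical (not an existing TVM op), and `axis_size` stands for the extent of the dimension being indexed:

```python
import tvm
from tvm import te, tir


def normalize_indices(indices: te.Tensor, axis_size) -> te.Tensor:
    """Map possibly-negative indices into [0, axis_size) at runtime."""
    return te.compute(
        indices.shape,
        lambda *i: tir.Select(indices(*i) < 0, indices(*i) + axis_size, indices(*i)),
        name="normalize_indices",
        tag="injective",  # lets the fusion pass absorb it into consumers
    )
```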






[GitHub] [tvm] huajsj commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


huajsj commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709691953



##
File path: include/tvm/runtime/logging.h
##
@@ -409,6 +416,68 @@ inline bool DebugLoggingEnabled() {
   return state == 1;
 }
 
+/*! \brief Helpers for \p VerboseLoggingEnabled. Exposed for unit testing only. */
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec);
+bool VerboseEnabledInMap(const std::string& filename, int level,
+                         const std::unordered_map<std::string, int>& map);
+
+/*!
+ * \brief Returns true if a VLOG statement in \p filename is enabled by the \p 
TVM_LOG_DEBUG
+ * environment variable for logging at verbosity \p level.
+ *
+ * Filenames are canonicalized to be w.r.t. the src/ dir of the TVM tree. 
(VLOG's should not
+ * appear under include/).
+ *
+ * To enable file \p relay/foo.cc up to level 2 and \p ir/bar.cc for level 0 
only set:
+ * \code
+ * TVM_LOG_DEBUG="1;relay/foo.cc=2;ir/bar.cc=0;"
+ * \endcode
+ *
+ * To enable all files up to level 3 but disable \p ir/bar.cc set:
+ * \code
+ * TVM_LOG_DEBUG="1;*=2;ir/bar.cc=-1;"

Review comment:
   If we agree that this is a problem, it does not make sense not to fix it, and it is better to avoid a temporary solution that in some cases becomes a permanent one. As a user of this VLOG feature, I like the idea of filtering at the file level, but I think this configuration method is unnecessarily complicated and error-prone; if we can, we should find an easier-to-use method.
   About the CI being green: I understand that CI consumes time and getting it green is not easy, but from my point of view we should focus more on the code logic than on the CI.








[GitHub] [tvm] huajsj commented on a change in pull request #9018: [microTVM][autoTVM] Follow up fixes to #9003

2021-09-15 Thread GitBox


huajsj commented on a change in pull request #9018:
URL: https://github.com/apache/tvm/pull/9018#discussion_r709680012



##
File path: python/tvm/micro/testing.py
##
@@ -0,0 +1,36 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Defines testing methods used with microTVM."""
+
+import pathlib
+import json
+from typing import Union
+
+
+def _check_tune_log(log_path: Union[pathlib.Path, str]):
+    """Reads tune log and check each result"""
+    results = []
+    with open(log_path, "r") as f:
+        line = f.readline()
+        while line:
+            results.append(json.loads(line))
+            line = f.readline()
+
+    for item in results:
+        tune_result = item["result"]
+        assert tune_result[0][0] < 10.0

Review comment:
   These two loops look like they could be merged into one.
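
   A sketch of the merged single-pass version, assuming the same log format as the code above:

```python
import json


def _check_tune_log(log_path):
    """Read the tuning log and check each result in a single pass."""
    with open(log_path, "r") as f:
        for line in f:
            tune_result = json.loads(line)["result"]
            assert tune_result[0][0] < 10.0
```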









[GitHub] [tvm] mehrdadh opened a new pull request #9026: [microtvm][Zephyr] Add MAIN_STACK_SIZE option to API server

2021-09-15 Thread GitBox


mehrdadh opened a new pull request #9026:
URL: https://github.com/apache/tvm/pull/9026


   - When using the API server for an external project, we sometimes need to set the main stack size for a Zephyr board if the model is large. This PR adds that option to the Zephyr project API.
   - In addition, this PR moves the Zephyr board properties to a JSON file, creating a single source of board information for testing purposes.
   - Finally, this PR adds a validation check for project options passed to the API server.
   
   cc @areusch @gromero 
   
   waiting for https://github.com/apache/tvm/pull/9018 before merging this.
   






[GitHub] [tvm] masahi merged pull request #8996: [Relay][Pass] Add ExtractOperators pass

2021-09-15 Thread GitBox


masahi merged pull request #8996:
URL: https://github.com/apache/tvm/pull/8996


   






[tvm] branch main updated (6f5b674 -> 777ace3)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 6f5b674  [BYOC][TensorRT] Add TensorRT own int8 calibration support to 
TensorRT BYOC integration (#8808)
 add 777ace3  [Relay][Pass] Add ExtractOperators pass (#8996)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/analysis/analysis.py  | 17 +
 src/relay/analysis/extract_operators.cc| 75 +++
 ...tions.py => test_analysis_extract_operators.py} | 86 +++---
 3 files changed, 136 insertions(+), 42 deletions(-)
 create mode 100644 src/relay/analysis/extract_operators.cc
 copy tests/python/relay/{test_analysis_extract_fused_functions.py => 
test_analysis_extract_operators.py} (51%)


[GitHub] [tvm] tqchen commented on issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


tqchen commented on issue #9022:
URL: https://github.com/apache/tvm/issues/9022#issuecomment-920486224


   Right, this then gets into the target-dependent generation regime, where a TargetKind attribute is indeed the right solution. We should also send a PR to add comments to that code block so we have more context in the future.






[GitHub] [tvm] mbs-octoml edited a comment on issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


mbs-octoml edited a comment on issue #9022:
URL: https://github.com/apache/tvm/issues/9022#issuecomment-920485102


   Thanks so much for the context. I'll try to capture that in a comment.
   This is a 'bug' only in the sense that the heuristic is not working for the EthosU AOT codegen, I think because they are expecting to intercept the workspace ops downstream? But it does suggest a very simple way forward: make kMaxStackAlloca a TargetKind attribute so they can force it to zero.
   @manupa-arm can you chime in here?






[GitHub] [tvm] csullivan commented on a change in pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


csullivan commented on a change in pull request #8986:
URL: https://github.com/apache/tvm/pull/8986#discussion_r709658089



##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68
+   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
+ TVM runtime
+   Make sure to provide the path to launcher's `CMakeLists.txt` directory
+   in `cmake` invocation.
+
+3. Run `make`. This will create `liblauncher_rpc_skel.so`.
+
+### Compilation of the Android part
+
+1. Build TVM runtime for Android. Unlike in the Hexagon case, this should be
+   the dynamic library (which is the default), i.e. `libtvm_runtime.so`.
+
+2. Create a subdirectory for the build files (different from the one used for
+   Hexagon files), and run `cmake` with the following variables set:
+   - `FASTRPC_LIBS=STUB`

Review comment:
   Is it worth adding a comment about using `mini-dm` to inspect issues 
with FastRPC? I'll leave that up to your discretion. 








[GitHub] [tvm] tqchen commented on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


tqchen commented on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920469704


   @yzh119 I believe the intended usage is for the case where the values of the indices are not known at compile time; otherwise the compiler will be able to prove and simplify the conditionals.






[GitHub] [tvm] tqchen edited a comment on issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


tqchen edited a comment on issue #9022:
URL: https://github.com/apache/tvm/issues/9022#issuecomment-920463412


   @mbs-octoml To give a bit of context:
   
   In the context of the CPU, we want to preserve small allocas until the code generation point, and then the codegen will generate the stack alloca in an explicit way. Only when the memory is big enough (bigger than a constant) will we use an opaque allocation instead.
   
   Stack allocation is important for the performance of the CPU code. In the case of TVM, we do not have an explicit concept of registers in most cases. Instead we rely on LLVM's mem2reg pass to transform a set of constant indexing into stack allocation and turn them into registers, so the code can run effectively. So removing this code path can complicate the code generator side optimization by quite a bit and slow down the CPU code.
   
   Of course this can be a target specific thing. LowerTVMBuiltin right now has the assumption to only run on host (CPU) code.
   
   - Allocate always prefers (native) stack allocation when possible, but also allows other means of opaque allocation (as long as the allocation is fulfilled)
   - There are, however, cases when stack allocation is not possible:
     - When the size of memory requested is too big, stack alloca will explode the stack space (that is why there is a size check in the CPU case, and the use of global opaque was meant as a fallback to avoid stack overflow in models with big intermediate temp space)
     - LowerTVMBuiltin was originally designed to run on the host side, which means as soon as the allocation is about device side memory, it will need to call onto a (host side) device API to allocate the memory instead
   
   So the rationales for the specific CPU side logic are:
   - We want to have stack alloca on the host when possible (to gain the mem2reg optimization)
   - When the requested size is too large, we fall back to opaque workspace allocation on the heap to allow the code to safely handle big temp memory requests as well as dynamic size allocation requests.
   
   My guess is we need to look into why the VM cannot work with code that allocates on the stack in the multiple-target case.
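
   For illustration, the heuristic above restated as plain Python; this is not TVM code, and the threshold name and value are hypothetical stand-ins for the C++ constant kMaxStackAlloca:

```python
MAX_STACK_ALLOCA_BYTES = 1024  # hypothetical stand-in for kMaxStackAlloca


def choose_allocation(size_bytes, size_is_constant, is_device_memory):
    """Mirror the host-side decision described above."""
    if is_device_memory:
        return "host-side DeviceAPI call"  # LowerTVMBuiltin runs on the host
    if size_is_constant and size_bytes <= MAX_STACK_ALLOCA_BYTES:
        return "stack alloca (eligible for LLVM mem2reg)"
    return "opaque workspace allocation on the heap"
```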






[GitHub] [tvm] yzh119 commented on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


yzh119 commented on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920465162


   Are the negative indices used inside TE? If so, I wonder whether we can make this a transformation pass rather than runtime behavior.






[GitHub] [tvm] tqchen edited a comment on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


tqchen edited a comment on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920459959


   I think this can be useful for some of the indexing operations. We could try to make it more explicit though, e.g. introduce a new API, ```tensor.LookupWithNegativeIndices(indices)```, and explicitly call into it in these cases.






[GitHub] [tvm] tmoreau89 commented on a change in pull request #8955: [Hexagon] Pytestify Hexagon unit test

2021-09-15 Thread GitBox


tmoreau89 commented on a change in pull request #8955:
URL: https://github.com/apache/tvm/pull/8955#discussion_r709644590



##
File path: tests/python/unittest/test_target_codegen_hexagon.py
##
@@ -26,8 +26,10 @@
 import tvm.contrib.hexagon as hexagon
 
 
-# Register a phony linker, so that we can test codegen without a Hexagon 
toolchain.
-hexagon.register_linker(lambda: "/bin/true")
+@pytest.fixture(scope="session", autouse=True)
+def register_linker():
+    # Register a phony linker, so that we can test codegen without a Hexagon toolchain.
+    hexagon.register_linker(lambda: "/bin/true")

Review comment:
   @kparzysz-quic looks like this is a requested change: let us know if you 
have any questions








[GitHub] [tvm] tqchen commented on a change in pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


tqchen commented on a change in pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#discussion_r709643444



##
File path: src/te/tensor.cc
##
@@ -39,15 +39,26 @@ IterVar reduce_axis(Range dom, std::string name) { return IterVar(dom, Var(name)
 Var var(std::string name_hint, DataType t) { return Var(name_hint, t); }
 
 // Tensor
-PrimExpr Tensor::operator()(Array<Var> indices) const {
+PrimExpr Tensor::operator()(Array<Var> indices, bool support_negative_indices = false) const {
   Array<PrimExpr> arr(indices.begin(), indices.end());
-  return operator()(arr);
+  return operator()(arr, support_negative_indices);
 }
 
-PrimExpr Tensor::operator()(Array<PrimExpr> indices) const {
-  if (ndim() != 0) {
-    ICHECK_EQ(ndim(), indices.size()) << "Tensor dimension mismatch in read "
-                                      << "ndim = " << ndim() << ", indices.size=" << indices.size();
+PrimExpr Tensor::operator()(Array<PrimExpr> indices, bool support_negative_indices = false) const {
+  Array<PrimExpr> shape = (*this)->shape;
+
+  if (shape.size() != 0) {
+    ICHECK_EQ(shape.size(), indices.size())
+        << "Tensor dimension mismatch in read "
+        << "ndim = " << ndim() << ", indices.size=" << indices.size();
+  }
+
+  if (support_negative_indices) {
+    for (size_t i = 0; i < shape.size(); i++) {
+      PrimExpr new_index = if_then_else(indices[i] < make_const(indices[i]->dtype, 0),

Review comment:
   You want to use select here, because it does not involve a memory operation (and thus can evaluate both sides).
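
   For illustration, assuming the standard TVM expression builders (the variable names are made up):

```python
from tvm import te, tir

n = te.var("n")      # axis extent
idx = te.var("idx")  # possibly-negative index

# Select may evaluate both arms -- safe here, since the index arithmetic has no
# memory side effects -- while if_then_else implies a conditional in the lowered code.
selected = tir.Select(idx < 0, idx + n, idx)
branched = tir.if_then_else(idx < 0, idx + n, idx)
```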








[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709643127



##
File path: include/tvm/runtime/logging.h
##
@@ -409,6 +416,68 @@ inline bool DebugLoggingEnabled() {
   return state == 1;
 }
 
+/*! \brief Helpers for \p VerboseLoggingEnabled. Exposed for unit testing only. */
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec);
+bool VerboseEnabledInMap(const std::string& filename, int level,
+                         const std::unordered_map<std::string, int>& map);
+
+/*!
+ * \brief Returns true if a VLOG statement in \p filename is enabled by the \p 
TVM_LOG_DEBUG
+ * environment variable for logging at verbosity \p level.
+ *
+ * Filenames are canonicalized to be w.r.t. the src/ dir of the TVM tree. 
(VLOG's should not
+ * appear under include/).
+ *
+ * To enable file \p relay/foo.cc up to level 2 and \p ir/bar.cc for level 0 
only set:
+ * \code
+ * TVM_LOG_DEBUG="1;relay/foo.cc=2;ir/bar.cc=0;"
+ * \endcode
+ *
+ * To enable all files up to level 3 but disable \p ir/bar.cc set:
+ * \code
+ * TVM_LOG_DEBUG="1;*=2;ir/bar.cc=-1;"

Review comment:
   I definitely agree with you that the ergonomics are horrible! Using an env variable at all is already unfortunate, but it is forced on us by the python-is-the-driver convention for TVM. But I think this is all very temporary and 'for developers only' while we figure out a more structured approach to controlling, capturing and redirecting logging, including both 'for debug' logging as well as logging which may be relevant to downstream customers running TVM in a production environment.
   (+ @jroesch since we were chatting about this today.)
   
   At this point the CI is green -- do you feel strongly enough that we should not proceed as is?








[GitHub] [tvm] mbrookhart commented on issue #8978: [Bug][VTA][OpenCL] If allowed to allocate in stack, VTA multiple target test will fail

2021-09-15 Thread GitBox


mbrookhart commented on issue #8978:
URL: https://github.com/apache/tvm/issues/8978#issuecomment-920445448


   @mbs-octoml opened this issue to track an approach to better distinguish 
multiple targets https://github.com/apache/tvm/issues/9022






[GitHub] [tvm] mbrookhart commented on issue #8978: [Bug][VTA][OpenCL] If allowed to allocate in stack, VTA multiple target test will fail

2021-09-15 Thread GitBox


mbrookhart commented on issue #8978:
URL: https://github.com/apache/tvm/issues/8978#issuecomment-920445094


   FYI - this PR re-enables the test that this change breaks: https://github.com/apache/tvm/pull/9019






[GitHub] [tvm] mbrookhart commented on issue #8977: [Bug][VM] If not allocate on stack, VM runtime cannot work?

2021-09-15 Thread GitBox


mbrookhart commented on issue #8977:
URL: https://github.com/apache/tvm/issues/8977#issuecomment-920444851


   This is the test that fails in the VM without this code: 
https://github.com/apache/tvm/pull/9019






[GitHub] [tvm] mbrookhart commented on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


mbrookhart commented on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920426352


   I like this idea. I've been slowly throwing `relay.where` around the ops and 
the importers to solve this problem when I hit it, but that adds some 
complication to ops, fusion, and the risk of making things slower than they 
need to be. This would enable things to be much simpler at the Relay level in a 
number of places, I'm very curious to hear Tianqi and Junru's thoughts.






[GitHub] [tvm] huajsj commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


huajsj commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709585141



##
File path: include/tvm/runtime/logging.h
##
@@ -409,6 +416,68 @@ inline bool DebugLoggingEnabled() {
   return state == 1;
 }
 
+/*! \brief Helpers for \p VerboseLoggingEnabled. Exposed for unit testing only. */
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec);
+bool VerboseEnabledInMap(const std::string& filename, int level,
+                         const std::unordered_map<std::string, int>& map);
+
+/*!
+ * \brief Returns true if a VLOG statement in \p filename is enabled by the \p 
TVM_LOG_DEBUG
+ * environment variable for logging at verbosity \p level.
+ *
+ * Filenames are canonicalized to be w.r.t. the src/ dir of the TVM tree. 
(VLOG's should not
+ * appear under include/).
+ *
+ * To enable file \p relay/foo.cc up to level 2 and \p ir/bar.cc for level 0 
only set:
+ * \code
+ * TVM_LOG_DEBUG="1;relay/foo.cc=2;ir/bar.cc=0;"
+ * \endcode
+ *
+ * To enable all files up to level 3 but disable \p ir/bar.cc set:
+ * \code
+ * TVM_LOG_DEBUG="1;*=2;ir/bar.cc=-1;"

Review comment:
   According to this logic, TVM_LOG_DEBUG now has a hidden, strict format assumption: settings like "*=2;1;", "*=2;1", or "*=1;" are illegal. I don't see any need to introduce such complexity for the content of an environment variable.
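
   For reference, a usage sketch based only on the examples in the doc comment above (the leading "1;" and trailing ";" follow those examples):

```python
import os

# Set before importing tvm so the runtime sees it when logging initializes.
os.environ["TVM_LOG_DEBUG"] = "1;relay/foo.cc=2;ir/bar.cc=0;"  # per-file levels
# os.environ["TVM_LOG_DEBUG"] = "1;*=2;ir/bar.cc=-1;"          # wildcard, with an opt-out
```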








[GitHub] [tvm] mehrdadh commented on pull request #9018: [microTVM][autoTVM] Follow up fixes to #9003

2021-09-15 Thread GitBox


mehrdadh commented on pull request #9018:
URL: https://github.com/apache/tvm/pull/9018#issuecomment-920391097


   cc @leandron @areusch for possible review.






[GitHub] [tvm] kparzysz-quic opened a new pull request #9025: [Hexagon] Disable `thread_local` on Hexagon

2021-09-15 Thread GitBox


kparzysz-quic opened a new pull request #9025:
URL: https://github.com/apache/tvm/pull/9025


   This is specific to running code on hardware: libc++abi can create TLS keys with destructors inside the libc++abi library, yet the library gets unloaded before the keys are destroyed, leading to a crash. Turning off the use of `thread_local` is a workaround for this.






[GitHub] [tvm] kparzysz-quic opened a new pull request #9024: [Hexagon] Allow undefined symbols in libtvm_runtime.so on Hexagon

2021-09-15 Thread GitBox


kparzysz-quic opened a new pull request #9024:
URL: https://github.com/apache/tvm/pull/9024


   The shared library `libtvm_runtime.so` (or any other shared library built 
for Hexagon) will not contain definitions of symbols from libc. To avoid 
undefined symbol errors, turn that check off when building shared libs for 
Hexagon.






[GitHub] [tvm] masahi edited a comment on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


masahi edited a comment on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920369621


   For context, this PR was split from another PR following the discussion https://github.com/apache/tvm/pull/8971#discussion_r707850237, since it could be a potentially controversial change (changing the semantics of `te::Tensor` indexing).
   
   cc @tqchen @junrushao1994 






[GitHub] [tvm] masahi edited a comment on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


masahi edited a comment on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920369621


   For context, this PR was split from another PR following the discussion 
https://github.com/apache/tvm/pull/8971#discussion_r707850237
   
   cc @tqchen @junrushao1994 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


masahi commented on pull request #9023:
URL: https://github.com/apache/tvm/pull/9023#issuecomment-920369621


   For context, this PR was split from another PR following the discussion 
https://github.com/apache/tvm/pull/8971#discussion_r707850237


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8971: [Onnx] Fix NLL Loss tests

2021-09-15 Thread GitBox


AndrewZhaoLuo commented on pull request #8971:
URL: https://github.com/apache/tvm/pull/8971#issuecomment-920359675


   https://github.com/apache/tvm/pull/9023 <-- discussion about making negative 
indices simpler 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo opened a new pull request #9023: [TE] Support negative indices

2021-09-15 Thread GitBox


AndrewZhaoLuo opened a new pull request #9023:
URL: https://github.com/apache/tvm/pull/9023


   Negative indices are a pretty common feature in most modern programming 
languages and libraries. This proposed PR would add optional support for 
negative indices out of the box for tensors in TE.
   
   A lot of operators from other frontends support negative indices. See ONNX, 
numpy, and PyTorch where their tensors support negative indices out of the box.
   
   One benefit of this change is that it becomes trivial to make operators 
support negative indices, which makes adapting frontends to our operators 
simpler. For example, an operator using this simply sets the 
`support_negative_indices` flag to true for every relevant indexing operation. 
An example of doing this can be found in 
https://github.com/AndrewZhaoLuo/tvm/blob/02f1870d7f2e274cfd7e04678691da019ff201f0/include/tvm/topi/transform.h#L1265.
 Right now we handle negative indices by manually converting them to positive 
indices, which is cumbersome and results in lots of code duplication.
   
   The downside is that it technically adds a little more computation per 
indexing operation. While these operations are probably memory-bound rather 
than compute-bound, I am not 100% confident that setting 
`support_negative_indices=true` would not result in performance regressions.
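   As a hedged sketch of the wrap-around semantics this flag would enable (a 
standalone TE gather written for illustration; only the 
`support_negative_indices` flag itself comes from this PR):
   ```python
   import tvm
   from tvm import te, tir

   n = 8
   data = te.placeholder((n,), dtype="float32", name="data")
   idx = te.placeholder((4,), dtype="int32", name="idx")

   # A negative index i maps to i + n, as in numpy/PyTorch/ONNX.
   out = te.compute(
       (4,),
       lambda i: data[tir.if_then_else(idx[i] < 0, idx[i] + n, idx[i])],
       name="take_neg",
   )
   s = te.create_schedule(out.op)
   f = tvm.build(s, [data, idx, out], target="llvm")
   ```
   The extra `if_then_else` per load is exactly the "little more computation per 
indexing operation" mentioned above.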


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on pull request #8808: [BYOC][TensorRT] Add TensorRT own int8 calibration support to TensorRT BYOC integration

2021-09-15 Thread GitBox


comaniac commented on pull request #8808:
URL: https://github.com/apache/tvm/pull/8808#issuecomment-920348214


   Thanks @tiandiao123 @trevor-m @FrozenGene @jcf94 @vinx13 @Laurawly 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (e44f6c0 -> 6f5b674)

2021-09-15 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from e44f6c0  [ONNX] Add Einsum converter (#8985)
 add 6f5b674  [BYOC][TensorRT] Add TensorRT own int8 calibration support to 
TensorRT BYOC integration (#8808)

No new revisions were added by this update.

Summary of changes:
 src/runtime/contrib/tensorrt/tensorrt_builder.cc   |  19 ++-
 src/runtime/contrib/tensorrt/tensorrt_builder.h|  11 +-
 src/runtime/contrib/tensorrt/tensorrt_calibrator.h | 130 ++
 src/runtime/contrib/tensorrt/tensorrt_runtime.cc   | 108 +--
 tests/python/contrib/test_tensorrt_int8_exp.py | 149 +
 5 files changed, 399 insertions(+), 18 deletions(-)
 create mode 100755 src/runtime/contrib/tensorrt/tensorrt_calibrator.h
 create mode 100644 tests/python/contrib/test_tensorrt_int8_exp.py


[GitHub] [tvm] comaniac merged pull request #8808: [BYOC][TensorRT] Add TensorRT own int8 calibration support to TensorRT BYOC integration

2021-09-15 Thread GitBox


comaniac merged pull request #8808:
URL: https://github.com/apache/tvm/pull/8808


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709543170



##
File path: include/tvm/runtime/logging.h
##
@@ -395,10 +399,13 @@ class LogMessageVoidify {
 inline bool DebugLoggingEnabled() {
   static int state = 0;
   if (state == 0) {
-if (auto var = std::getenv("TVM_LOG_DEBUG")) {
-  if (std::string(var) == "1") {
+if (const char* var = std::getenv("TVM_LOG_DEBUG")) {

Review comment:
   We don't have a consistent style on auto, but generally I spell out the 
types except for `auto x = make_object<T>(...)`, `auto f = [...](...) {...}`, and 
iterators.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on a change in pull request #8808: [BYOC][TensorRT] Add TensorRT own int8 calibration support to TensorRT BYOC integration

2021-09-15 Thread GitBox


comaniac commented on a change in pull request #8808:
URL: https://github.com/apache/tvm/pull/8808#discussion_r709538959



##
File path: src/runtime/contrib/tensorrt/tensorrt_builder.h
##
@@ -153,6 +153,9 @@ class TensorRTBuilder {
   /*! \brief Whether to automatically convert model to 16-bit floating point 
precision. */
   bool use_fp16_;
 
+  /*! \brief whether to automatically convert model to int8 precision */

Review comment:
   IIUC, `use_int8_` is mutually exclusive with `use_fp16_`? If so, we should 
combine them into a single variable like `target_dtype`.
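   For illustration, one shape that suggestion could take; this sketch is mine, 
and the `TensorRTBuilderConfig` holder is hypothetical, not code from the PR:
   ```cpp
   // Sketch: fold the mutually exclusive use_fp16_/use_int8_ flags into one enum.
   enum class TargetDtype { kFloat32, kFloat16, kInt8 };

   struct TensorRTBuilderConfig {
     /*! \brief Precision to build the TensorRT engine with. */
     TargetDtype target_dtype = TargetDtype::kFloat32;
   };
   ```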

##
File path: src/runtime/contrib/tensorrt/tensorrt_runtime.cc
##
@@ -308,6 +370,7 @@ class TensorRTRuntime : public JSONRuntimeBase {
 helper.ReadAllFields();
 const int batch_size = GetBatchSize();
 trt_engine_cache_[std::make_pair(symbol_name_, batch_size)] = 
engine_and_context;
+LOG(INFO) << "finished saving engine and context ... ";

Review comment:
   nit:
   ```suggestion
   LOG(INFO) << "Finished saving engine and context ... ";
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709540519



##
File path: include/tvm/runtime/logging.h
##
@@ -409,6 +416,68 @@ inline bool DebugLoggingEnabled() {
   return state == 1;
 }
 
+/*! \brief Helpers for \p VerboseLoggingEnabled. Exposed for unit testing 
only. */
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec);
+bool VerboseEnabledInMap(const std::string& filename, int level,
+ const std::unordered_map<std::string, int>& map);
+
+/*!
+ * \brief Returns true if a VLOG statement in \p filename is enabled by the \p 
TVM_LOG_DEBUG
+ * environment variable for logging at verbosity \p level.
+ *
+ * Filenames are canonicalized to be w.r.t. the src/ dir of the TVM tree. 
(VLOG's should not
+ * appear under include/).
+ *
+ * To enable file \p relay/foo.cc up to level 2 and \p ir/bar.cc for level 0 
only set:
+ * \code
+ * TVM_LOG_DEBUG="1;relay/foo.cc=2;ir/bar.cc=0;"
+ * \endcode
+ *
+ * To enable all files up to level 3 but disable \p ir/bar.cc set:
+ * \code
+ * TVM_LOG_DEBUG="1;*=2;ir/bar.cc=-1;"

Review comment:
   There are many possible designs. I settled on re-using the existing 
TVM_LOG_DEBUG since
- it is one control surface
- I often toggle modules on and off during debugging, and having one var to 
redefine makes that easy




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709539416



##
File path: include/tvm/runtime/logging.h
##
@@ -395,10 +399,13 @@ class LogMessageVoidify {
 inline bool DebugLoggingEnabled() {
   static int state = 0;
   if (state == 0) {
-if (auto var = std::getenv("TVM_LOG_DEBUG")) {
-  if (std::string(var) == "1") {
+if (const char* var = std::getenv("TVM_LOG_DEBUG")) {
+  std::string var_str(var);
+  if (var_str == "1" || var_str.rfind("1;", 0) == 0) {

Review comment:
   Will address in your comment below.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709538852



##
File path: src/runtime/logging.cc
##
@@ -166,10 +167,127 @@ namespace tvm {
 namespace runtime {
 namespace detail {
 
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec) {
+  // Cache the verbosity level map.
+  std::unordered_map<std::string, int> map;
+  LOG(INFO) << "initializing VLOG map";
+  if (opt_spec == nullptr) {
+LOG(INFO) << "VLOG disabled, no TVM_LOG_DEBUG environment variable";
+return map;
+  }
+  std::string spec(opt_spec);
+  // Check we are enabled overall with at least one VLOG option.
+  if (spec.rfind("1;", 0) != 0) {
+LOG(INFO) << "VLOG disabled, TVM_LOG_DEBUG does not start with '1;'";
+return map;
+  }
+  size_t start = 2UL;
+  while (start < spec.size()) {
+// We are looking for "name=level;" or "*=level;"
+size_t end = start;
+// Scan up to '='.
+while (spec[end] != '=') {
+  ++end;
+  if (end >= spec.size()) {
+LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing '='";
+return map;
+  }
+}
+if (end == start) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, empty name";
+  return map;
+}
+std::string name(spec.substr(start, end - start));
+// Skip '='
+++end;
+if (end >= spec.size()) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing level";
+  return map;
+}
+// Scan up to ';'.
+start = end;
+while (spec[end] != ';') {
+  ++end;
+  if (end >= spec.size()) {
+LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing ';'";
+return map;
+  }
+}
+if (end == start) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, empty level";
+  return map;
+}
+std::string level_str(spec.substr(start, end - start));
+// Skip ';'.
+++end;
+// Parse level, default to 0 if ill-formed which we don't detect.
+char* end_of_level = nullptr;
+int level = static_cast<int>(strtol(level_str.c_str(), &end_of_level, 10));
+if (end_of_level != level_str.c_str() + level_str.size()) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, invalid level";
+}
+LOG(INFO) << "adding VLOG entry for '" << name << "' at level " << level;
+map.emplace(name, level);
+start = end;
+  }
+  return map;
+}
+
+constexpr const char* kSrcPrefix = "/src/";
+constexpr const size_t kSrcPrefixLength = 5;
+
+bool VerboseEnabledInMap(const std::string& filename, int level,
+ const std::unordered_map<std::string, int>& map) {
+  if (level < 0) {
+return false;
+  }
+  // Canonicalize filename.
+  // TODO(mbs): Not Windows friendly.
+
+  size_t last_src = filename.rfind(kSrcPrefix, std::string::npos, 
kSrcPrefixLength);
+  // Strip anything before the /src/ prefix, on the assumption that will yield 
the
+  // TVM project relative filename. If no such prefix fallback to filename 
without
+  // canonicalization.
+  std::string key =
+  last_src == std::string::npos ? filename : filename.substr(last_src + 
kSrcPrefixLength);
+  // Check for exact.
+  auto itr = map.find(key);
+  if (itr != map.end()) {
+return level <= itr->second;
+  }
+  // Check for '*' wildcard.
+  itr = map.find("*");
+  if (itr != map.end()) {
+return level <= itr->second;
+  }
+  return false;
+}
+
+bool VerboseLoggingEnabled(const char* filename, int level) {
+  // Cache the verbosity level map.
+  static const std::unordered_map<std::string, int>* map =
+  new std::unordered_map<std::string, int>(ParseTvmLogDebugSpec(std::getenv("TVM_LOG_DEBUG")));

Review comment:
   This is the idiom for statically initialized aggregate structures. Since 
they can never be freed, they go in 'raw' pointers.
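   A minimal standalone illustration of that idiom (written for this note, not 
taken from the PR): the function-local static pointer is initialized exactly 
once, and because it is deliberately never deleted there is no destruction-order 
hazard at process exit.
   ```cpp
   #include <string>
   #include <unordered_map>

   const std::unordered_map<std::string, int>& VlogLevels() {
     // Initialized on first call; intentionally leaked so it stays valid
     // for any code that still runs during static destruction.
     static const auto* levels = new std::unordered_map<std::string, int>{
         {"relay/foo.cc", 2}, {"ir/bar.cc", 0}};
     return *levels;
   }
   ```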




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8960: [Hexagon] Add contrib tests for blocked conv2d and maxpool2d

2021-09-15 Thread GitBox


areusch commented on a change in pull request #8960:
URL: https://github.com/apache/tvm/pull/8960#discussion_r709528557



##
File path: tests/python/contrib/test_hexagon/test_conv2d_blocked.py
##
@@ -0,0 +1,473 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import sys
+
+import tvm
+from tvm import te
+from tvm import topi
+from tvm.topi import testing
+from .infrastructure import (
+ceildiv,
+build_and_run,
+get_block_shape,
+get_conv2d_nhwc_shape,
+get_packed_filter_layout,
+get_packed_activation_layout,
+)
+
+import numpy as np
+import pytest
+
+
+def conv2d_logical(
+shape_nhwc,
+shape_oihw,
+kernel_size,
+stride,
+padding,
+dtype,
+storage_scope="global",
+):
+"""
+Conv2d TE wherein both input activation and filter tensors
+are defined with their logical NHWC/OIHW shapes, respectively.
+The packed physical layout for the activation and filter are:
+  Activation: nhwc8h8w32c
+  Filter: oihw8i32o4i
+"""
+assert kernel_size == tuple(shape_oihw[2:])
+
+block_shape = get_block_shape()
+block_H, block_W, block_C = block_shape
+shape = get_packed_activation_layout(shape_nhwc, block_shape)
+logical_output_shape = get_conv2d_nhwc_shape(
+shape_nhwc, kernel_size, stride, padding, [1, 1], shape_oihw[0]
+)
+output_shape = get_packed_activation_layout(logical_output_shape, 
block_shape)
+
+N, H, W, C = shape_nhwc
+X = te.placeholder(shape_nhwc, dtype=dtype)
+# Combination of padding required by conv2d operator and padding to evenly 
divisible
+# number of blocks. Note that this padding should be inlined in the 
schedule so
+# as to avoid input copying.
+pad_h = (block_H - ((H + padding[1]) % block_H)) % block_H

Review comment:
   is the second `% block_H` necessary?
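   (For context: the outer `% block_H` matters when `H + padding[1]` is already 
a multiple of `block_H`; the inner expression then yields `block_H`, and without 
the outer modulo `pad_h` would be `block_H` instead of 0. A quick check:)
   ```python
   block_H = 8
   for hp in (14, 16):  # candidate values of H + padding[1]
       inner = block_H - (hp % block_H)
       print(hp, inner, inner % block_H)  # 14 -> 2, 2; 16 -> 8, 0
   ```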




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709537682



##
File path: include/tvm/runtime/logging.h
##
@@ -129,8 +132,9 @@
  *   a = ...
  *   b = ...
  *   // if quit_on_assertion is true, if a==b, continue, otherwise quit.
- *   // if quit_on_assertion is false, if a==b, continue, otherwise 'return 
false' (default
- * behaviour) COND_CHECK_EQ(quit_on_assertion, a, b) << "some error message 
when  quiting"
+ *   // if quit_on_assertion is false, if a==b, continue, otherwise 'return 
false'
+ *   // (default behaviour)
+ *   COND_CHECK_EQ(quit_on_assertion, a, b) << "some error message when  
quiting"

Review comment:
   ok, but that will have to be for another cl unless ci fails :-|




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709537186



##
File path: CMakeLists.txt
##
@@ -575,7 +575,7 @@ endif()
 # Create the `cpptest` target if we can find GTest.  If not, we create dummy
 # targets that give the user an informative error message.
 if(GTEST_INCLUDE_DIR AND GTEST_LIB)
-  file(GLOB TEST_SRCS tests/cpp/*.cc)
+  file(GLOB_RECURSE TEST_SRCS tests/cpp/*.cc)

Review comment:
   Because I'm adding a unit test under runtime/, paving the way for adding 
more C++ unit tests in a directory structure that mirrors src/.
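   Concretely, with an illustrative layout (not the PR's actual file list), 
`GLOB` matches only the top-level directory while `GLOB_RECURSE` also descends:
   ```cmake
   # GLOB:          tests/cpp/foo_test.cc
   # GLOB_RECURSE:  tests/cpp/foo_test.cc, tests/cpp/runtime/bar_test.cc, ...
   file(GLOB_RECURSE TEST_SRCS tests/cpp/*.cc)
   ```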




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on pull request #8985: [ONNX] Add Einsum converter

2021-09-15 Thread GitBox


masahi commented on pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#issuecomment-920338681


   Thanks @anwang2009 @AndrewZhaoLuo @mbrookhart @junrushao1994 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (2aebd33 -> e44f6c0)

2021-09-15 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 2aebd33  Add standalone_crt/ to be part of the wheel package, when 
available. (#9005)
 add e44f6c0  [ONNX] Add Einsum converter (#8985)

No new revisions were added by this update.

Summary of changes:
 include/tvm/relay/attrs/transform.h|  11 +-
 python/tvm/relay/frontend/onnx.py  |  10 ++
 python/tvm/relay/op/__init__.py|   1 +
 .../EthosU.cmake => python/tvm/relay/op/_math.py   |   9 +-
 python/tvm/relay/op/_transform.py  |   1 +
 python/tvm/relay/op/strategy/cuda.py   |  13 +++
 python/tvm/relay/op/strategy/generic.py|  21 
 python/tvm/relay/op/tensor.py  |  23 +
 python/tvm/topi/generic/__init__.py|   1 +
 .../tvm/topi/generic/math.py   |  22 +++-
 src/relay/op/tensor/math.cc| 115 +
 tests/python/frontend/onnx/test_forward.py |   5 -
 12 files changed, 217 insertions(+), 15 deletions(-)
 copy cmake/modules/contrib/EthosU.cmake => python/tvm/relay/op/_math.py (82%)
 copy cmake/modules/contrib/Random.cmake => python/tvm/topi/generic/math.py 
(69%)
 create mode 100644 src/relay/op/tensor/math.cc


[GitHub] [tvm] masahi merged pull request #8985: [ONNX] Add Einsum converter

2021-09-15 Thread GitBox


masahi merged pull request #8985:
URL: https://github.com/apache/tvm/pull/8985


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi commented on a change in pull request #8985: [ONNX] Add Einsum converter

2021-09-15 Thread GitBox


masahi commented on a change in pull request #8985:
URL: https://github.com/apache/tvm/pull/8985#discussion_r709533999



##
File path: python/tvm/relay/op/strategy/cuda.py
##
@@ -1210,3 +1210,16 @@ def invert_permutation_strategy_cuda(attrs, inputs, 
out_type, target):
 name="invert_permutation.cuda",
 )
 return strategy
+
+
+@einsum_strategy.register(["cuda", "gpu"])
+def einsum_strategy_cuda(attrs, inputs, out_type, target):
+"""einsum cuda strategy"""
+strategy = _op.OpStrategy()
+# TODO: Add cuda-specific op implementation for einsum
+strategy.add_implementation(

Review comment:
   Oh interesting. I didn't know that `topi.generic.schedule_extern` 
somehow generates a valid schedule. 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709531817



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
 op = stmt.as<AllocateNode>();
 // Get constant allocation bound.
 int64_t nbytes = GetVectorBytes(op->dtype);
-if (device_type_.defined()) {

Review comment:
   Filed https://github.com/apache/tvm/issues/9022.
   Just checking your use case: You want to see the allocas so you can later 
rewrite them, right? It's a bit confusing because I thought if the final c code 
generator sees these Allocates it rewrites them to use globals which I assumed 
was what you wanted.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] sergey-grovety commented on pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


sergey-grovety commented on pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#issuecomment-920335684


   > for the test image maybe you could reuse this image: 
https://github.com/apache/tvm/blob/main/tests/micro/testdata/mnist/digit-2.jpg
   
   Yes, sure.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


mbs-octoml commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709531817



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
 op = stmt.as<AllocateNode>();
 // Get constant allocation bound.
 int64_t nbytes = GetVectorBytes(op->dtype);
-if (device_type_.defined()) {

Review comment:
   Filed https://github.com/apache/tvm/issues/9022.
   Just checking your use case: You want to see the allocas so you can later 
rewrite them, right? It's a bit confusing because I thought if the final c code 
generator sees these Allocates it rewrites them to use globals which I assumed 
was what you wanted.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] sergey-grovety commented on a change in pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


sergey-grovety commented on a change in pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#discussion_r709531309



##
File path: tests/micro/zephyr/test_zephyr_armv7m.py
##
@@ -0,0 +1,293 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import io
+import logging
+import os
+import pathlib
+import sys
+import logging
+import tarfile
+import tempfile
+
+import pytest
+import numpy as np
+
+import tvm
+import tvm.rpc
+import tvm.micro
+import tvm.testing
+import tvm.relay as relay
+
+from tvm.micro.interface_api import generate_c_interface_header
+
+import conftest
+
+_LOG = logging.getLogger(__name__)
+logging.basicConfig(level=logging.INFO)
+
+PLATFORMS = conftest.PLATFORMS
+
+TEMPLATE_PROJECT_DIR = (
+pathlib.Path(__file__).parent
+/ ".."
+/ ".."
+/ ".."
+/ "apps"
+/ "microtvm"
+/ "zephyr"
+/ "template_project"
+).resolve()
+
+
+def _read_line(fd, timeout_sec: int):
+data = ""
+new_line = False
+while True:
+if new_line:
+break
+new_data = fd.read(1, timeout_sec=timeout_sec)
+logging.debug(f"read data: {new_data}")
+for item in new_data:
+new_c = chr(item)
+data = data + new_c
+if new_c == "\n":
+new_line = True
+break
+return data
+
+
+def _get_message(fd, expr: str, timeout_sec: int):
+while True:
+data = _read_line(fd, timeout_sec)
+logging.debug(f"new line: {data}")
+if expr in data:
+return data
+
+def _build_project(temp_dir, zephyr_board, west_cmd, mod, build_config, 
extra_files_tar=None):
+template_project_dir = (
+pathlib.Path(__file__).parent
+/ ".."
+/ ".."
+/ ".."
+/ "apps"
+/ "microtvm"
+/ "zephyr"
+/ "template_project"
+).resolve()
+project_dir = temp_dir / "project"
+project = tvm.micro.generate_project(
+str(template_project_dir),
+mod,
+project_dir,
+{
+"extra_files_tar": extra_files_tar,
+"project_type": "aot_demo",
+"west_cmd": west_cmd,
+"verbose": bool(build_config.get("debug")),
+"zephyr_board": zephyr_board,
+},
+)
+project.build()
+return project, project_dir
+
+
+def _create_header_file(tensor_name, npy_data, output_path, tar_file):
+"""
+This method generates a header file containing the data contained in the 
numpy array provided.
+It is used to capture the tensor data (for both inputs and expected 
outputs).
+"""
+header_file = io.StringIO()
+header_file.write("#include \n")
+header_file.write("#include \n")
+header_file.write("#include \n")
+header_file.write(f"const size_t {tensor_name}_len = {npy_data.size};\n")
+
+if npy_data.dtype == "int8":
+header_file.write(f"int8_t {tensor_name}[] =")
+elif npy_data.dtype == "int32":
+header_file.write(f"int32_t {tensor_name}[] = ")
+elif npy_data.dtype == "uint8":
+header_file.write(f"uint8_t {tensor_name}[] = ")
+elif npy_data.dtype == "float32":
+header_file.write(f"float {tensor_name}[] = ")
+else:
+raise ValueError("Data type not expected.")
+
+header_file.write("{")
+for i in np.ndindex(npy_data.shape):
+header_file.write(f"{npy_data[i]}, ")
+header_file.write("};\n\n")
+
+header_file_bytes = bytes(header_file.getvalue(), "utf-8")
+raw_path = pathlib.Path(output_path) / f"{tensor_name}.h"
+ti = tarfile.TarInfo(name=str(raw_path))
+ti.size = len(header_file_bytes)
+ti.mode = 0o644
+ti.type = tarfile.REGTYPE
+tar_file.addfile(ti, io.BytesIO(header_file_bytes))
+
+
+
+
+def _open_tflite_model(model_path: str):
+# Import TFLite model
+tflite_model_buf = open(model_path, "rb").read()
+try:
+import tflite
+
+tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
+except AttributeError:
+import tflite.Model
+
+tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
+
+relay_mod, params = relay.frontend.from_tflite(tflite_model)
+
+return relay_mod, params
+

[GitHub] [tvm] mbs-octoml opened a new issue #9022: [Bug] BuiltinLower does not use alloca for storage on kDLCPU target devices

2021-09-15 Thread GitBox


mbs-octoml opened a new issue #9022:
URL: https://github.com/apache/tvm/issues/9022


   
https://github.com/apache/tvm/blob/2aebd3335d89bb32d330b0f851ddaf2d551fc56e/src/tir/transforms/lower_tvm_builtin.cc#L115
   
   This particular code has a complex history but the upshot is we need finer 
grained control for Allocate statements which is not gated by device_type, and 
the storage_scope in ProducerStore stmts fits the bill.
   
   This is a placeholder for figuring that out since I'm unfamiliar with 
storage handling once we enter TIR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] sergey-grovety commented on a change in pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


sergey-grovety commented on a change in pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#discussion_r709527891



##
File path: apps/microtvm/reference-vm/zephyr/base-box/base_box_test.sh
##
@@ -37,3 +37,5 @@ if [ $board == "stm32f746xx" ]; then
 else
 pytest tests/micro/zephyr/test_zephyr_aot.py --zephyr-board=${board}
 fi
+
+pytest tests/micro/zephyr/test_zephyr_armv7m.py --zephyr-board=${board}

Review comment:
   Ok. Will download a subset of the CMSIS headers into a temporary directory 
for this specific test.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8423: Implementation of relay_to_tir target hook

2021-09-15 Thread GitBox


areusch commented on a change in pull request #8423:
URL: https://github.com/apache/tvm/pull/8423#discussion_r709518575



##
File path: cmake/modules/contrib/ExampleTargetHooks.cmake
##
@@ -0,0 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+file(GLOB EXAMPLE_TARGET_HOOKS_SRC 
src/relay/backend/contrib/example_target_hooks/relay_to_tir.cc)

Review comment:
   cc @jroesch @tqchen @junrushao1994 




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8423: Implementation of relay_to_tir target hook

2021-09-15 Thread GitBox


areusch commented on a change in pull request #8423:
URL: https://github.com/apache/tvm/pull/8423#discussion_r709518464



##
File path: cmake/modules/contrib/ExampleTargetHooks.cmake
##
@@ -0,0 +1,19 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+file(GLOB EXAMPLE_TARGET_HOOKS_SRC 
src/relay/backend/contrib/example_target_hooks/relay_to_tir.cc)

Review comment:
   Everything in this PR looks great, except that I'm a little concerned we're 
linking test-only C++ into `libtvm.so` here. Would it be possible to do this 
with TVMScript in Python? Otherwise I feel like we would need to e.g. add 
tests/libtest/src/*.cc plus a separate `cmake` build target to create a `.so` 
for that code that links against `libtvm.so`, then a pytest fixture to load it 
once at the start of testing and provide the module to tests for use. And that 
sounds like a lot of extra ask for this PR :/
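   A rough sketch of that fixture idea, with every name and path here 
hypothetical:
   ```python
   import ctypes
   import pytest

   @pytest.fixture(scope="session")
   def example_target_hooks():
       # Hypothetical: dlopen a test-only shared lib once per test session;
       # on load it registers its hooks with the already-loaded libtvm.so.
       return ctypes.CDLL("build/libtvm_test_target_hooks.so", ctypes.RTLD_GLOBAL)
   ```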




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on pull request #8971: [Onnx] Fix NLL Loss tests

2021-09-15 Thread GitBox


AndrewZhaoLuo commented on pull request #8971:
URL: https://github.com/apache/tvm/pull/8971#issuecomment-920304017


   Ok folks, I've removed the controversial changes and did an alternate 
workaround. PTAL when you have time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] huajsj commented on a change in pull request #9012: [Relay] VLOG for finer grained control of hyper-detailed logging

2021-09-15 Thread GitBox


huajsj commented on a change in pull request #9012:
URL: https://github.com/apache/tvm/pull/9012#discussion_r709444137



##
File path: include/tvm/runtime/logging.h
##
@@ -395,10 +399,13 @@ class LogMessageVoidify {
 inline bool DebugLoggingEnabled() {
   static int state = 0;
   if (state == 0) {
-if (auto var = std::getenv("TVM_LOG_DEBUG")) {
-  if (std::string(var) == "1") {
+if (const char* var = std::getenv("TVM_LOG_DEBUG")) {

Review comment:
   This change seems unnecessary; we should keep `auto var = 
std::getenv("TVM_LOG_DEBUG")` unchanged.

##
File path: include/tvm/runtime/logging.h
##
@@ -129,8 +132,9 @@
  *   a = ...
  *   b = ...
  *   // if quit_on_assertion is true, if a==b, continue, otherwise quit.
- *   // if quit_on_assertion is false, if a==b, continue, otherwise 'return 
false' (default
- * behaviour) COND_CHECK_EQ(quit_on_assertion, a, b) << "some error message 
when  quiting"
+ *   // if quit_on_assertion is false, if a==b, continue, otherwise 'return 
false'
+ *   // (default behaviour)
+ *   COND_CHECK_EQ(quit_on_assertion, a, b) << "some error message when  
quiting"

Review comment:
   Lines 136 and 137 should be on the same line.

##
File path: src/runtime/logging.cc
##
@@ -166,10 +167,127 @@ namespace tvm {
 namespace runtime {
 namespace detail {
 
+std::unordered_map<std::string, int> ParseTvmLogDebugSpec(const char* opt_spec) {
+  // Cache the verbosity level map.
+  std::unordered_map<std::string, int> map;
+  LOG(INFO) << "initializing VLOG map";
+  if (opt_spec == nullptr) {
+LOG(INFO) << "VLOG disabled, no TVM_LOG_DEBUG environment variable";
+return map;
+  }
+  std::string spec(opt_spec);
+  // Check we are enabled overall with at least one VLOG option.
+  if (spec.rfind("1;", 0) != 0) {
+LOG(INFO) << "VLOG disabled, TVM_LOG_DEBUG does not start with '1;'";
+return map;
+  }
+  size_t start = 2UL;
+  while (start < spec.size()) {
+// We are looking for "name=level;" or "*=level;"
+size_t end = start;
+// Scan up to '='.
+while (spec[end] != '=') {
+  ++end;
+  if (end >= spec.size()) {
+LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing '='";
+return map;
+  }
+}
+if (end == start) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, empty name";
+  return map;
+}
+std::string name(spec.substr(start, end - start));
+// Skip '='
+++end;
+if (end >= spec.size()) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing level";
+  return map;
+}
+// Scan up to ';'.
+start = end;
+while (spec[end] != ';') {
+  ++end;
+  if (end >= spec.size()) {
+LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, missing ';'";
+return map;
+  }
+}
+if (end == start) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, empty level";
+  return map;
+}
+std::string level_str(spec.substr(start, end - start));
+// Skip ';'.
+++end;
+// Parse level, default to 0 if ill-formed which we don't detect.
+char* end_of_level = nullptr;
+int level = static_cast<int>(strtol(level_str.c_str(), &end_of_level, 10));
+if (end_of_level != level_str.c_str() + level_str.size()) {
+  LOG(FATAL) << "TVM_LOG_DEBUG ill-formed, invalid level";
+}
+LOG(INFO) << "adding VLOG entry for '" << name << "' at level " << level;
+map.emplace(name, level);
+start = end;
+  }
+  return map;
+}
+
+constexpr const char* kSrcPrefix = "/src/";
+constexpr const size_t kSrcPrefixLength = 5;
+
+bool VerboseEnabledInMap(const std::string& filename, int level,
+ const std::unordered_map<std::string, int>& map) {
+  if (level < 0) {
+return false;
+  }
+  // Canonicalize filename.
+  // TODO(mbs): Not Windows friendly.
+
+  size_t last_src = filename.rfind(kSrcPrefix, std::string::npos, 
kSrcPrefixLength);
+  // Strip anything before the /src/ prefix, on the assumption that will yield 
the
+  // TVM project relative filename. If no such prefix fallback to filename 
without
+  // canonicalization.
+  std::string key =
+  last_src == std::string::npos ? filename : filename.substr(last_src + 
kSrcPrefixLength);
+  // Check for exact.
+  auto itr = map.find(key);
+  if (itr != map.end()) {
+return level <= itr->second;
+  }
+  // Check for '*' wildcard.
+  itr = map.find("*");
+  if (itr != map.end()) {
+return level <= itr->second;
+  }
+  return false;
+}
+
+bool VerboseLoggingEnabled(const char* filename, int level) {
+  // Cache the verbosity level map.
+  static const std::unordered_map<std::string, int>* map =
+  new std::unordered_map<std::string, int>(ParseTvmLogDebugSpec(std::getenv("TVM_LOG_DEBUG")));

Review comment:
   Recommend using a smart pointer.

##
File path: include/tvm/runtime/logging.h
##
@@ -395,10 +399,13 @@ class LogMessageVoidify {
 inline bool DebugLoggingEnabled() {
   static int state = 0;
   if (state == 0) {
-if (auto var = std::getenv("TVM_LOG_DEBUG")) {
-  if (std::string(var) == "1") {
+if 

[GitHub] [tvm] icemelon commented on pull request #9021: [TOPI] Fix more pooling schedule

2021-09-15 Thread GitBox


icemelon commented on pull request #9021:
URL: https://github.com/apache/tvm/pull/9021#issuecomment-920295600


   cc @comaniac @junrushao1994 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] icemelon opened a new pull request #9021: [TOPI] Fix more pooling schedule

2021-09-15 Thread GitBox


icemelon opened a new pull request #9021:
URL: https://github.com/apache/tvm/pull/9021


   It's a follow-up to #8957. In the previous PR, I forgot to update the other 
pooling schedules in cuda and x86.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac opened a new pull request #9020: [Community] @AndrewZhaoLuo -> Reviewer

2021-09-15 Thread GitBox


comaniac opened a new pull request #9020:
URL: https://github.com/apache/tvm/pull/9020


   Please join us to welcome @AndrewZhaoLuo as a new reviewer to TVM. Andrew 
has mainly contributed to automatic mixed precision (AMP) support and ONNX 
frontend improvements.
   
   - [Commits 
History](https://github.com/apache/tvm/commits?author=AndrewZhaoLuo)
   - [Code 
Review](https://github.com/apache/tvm/pulls?utf8=%E2%9C%93=reviewed-by:AndrewZhaoLuo)
   - [Community Forum 
Summary](https://discuss.tvm.apache.org/u/AndrewZhaoLuo/summary)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch comaniac-patch-1 created (now afb3c28)

2021-09-15 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a change to branch comaniac-patch-1
in repository https://gitbox.apache.org/repos/asf/tvm.git.


  at afb3c28  [Community] @AndrewZhaoLuo -> Reviewer

This branch includes the following new commits:

 new afb3c28  [Community] @AndrewZhaoLuo -> Reviewer

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.



[tvm] 01/01: [Community] @AndrewZhaoLuo -> Reviewer

2021-09-15 Thread comaniac
This is an automated email from the ASF dual-hosted git repository.

comaniac pushed a commit to branch comaniac-patch-1
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit afb3c289a6ee1c74d6d850248e168bc6c04c051b
Author: Cody Yu 
AuthorDate: Wed Sep 15 11:42:57 2021 -0700

[Community] @AndrewZhaoLuo -> Reviewer
---
 CONTRIBUTORS.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 2821446..14f8191 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -108,6 +108,7 @@ We do encourage everyone to work anything they are 
interested in.
 - [Yizhi Liu](https://github.com/yzhliu) : @yzhliu
 - [Hao Lu](https://github.com/hlu1): @hlu1
 - [Eric Lunderberg](https://github.com/Lunderberg): @Lunderberg
+- [Andrew Z. Luo](https://github.com/AndrewZhaoLuo): @AndrewZhaoLuo
 - [Steven Lyubomirsky](https://github.com/slyubomirsky): @slyubomirsky
 - [Masahiro Masuda](https://github.com/masahi): @masahi
 - [Sergey Mironov](https://github.com/grwlf): @grwlf


[GitHub] [tvm] manupa-arm commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


manupa-arm commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709445247



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
 op = stmt.as<AllocateNode>();
 // Get constant allocation bound.
 int64_t nbytes = GetVectorBytes(op->dtype);
-if (device_type_.defined()) {

Review comment:
   Yes, if we want the allocates to be placed on stack in CPU PrimFuncs, 
maybe we should make them have storage_scope = 'local' and we should generate 
TVMBAWs for 'global' allocates -- that could be made to work for both cases.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] manupa-arm commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


manupa-arm commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709445247



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
 op = stmt.as<AllocateNode>();
 // Get constant allocation bound.
 int64_t nbytes = GetVectorBytes(op->dtype);
-if (device_type_.defined()) {

Review comment:
   Yes, if we want the allocates to be placed on stack in CPU PrimFuncs, 
maybe we should gate them to have storage_scope = 'local' and we should 
generate TVMBAWs for 'global' allocates -- that could be made to work for both 
cases.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] manupa-arm commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


manupa-arm commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709445247



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
 op = stmt.as<AllocateNode>();
 // Get constant allocation bound.
 int64_t nbytes = GetVectorBytes(op->dtype);
-if (device_type_.defined()) {

Review comment:
   Yes, if we want the allocates to be placed on stack in CPU PrimFuncs, 
maybe we should gate them to have storage_scope = 'local' and we should 
generate TVMBAWs for 'global' allocates -- that could be made to work for both 
cases.
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] manupa-arm commented on issue #8978: [Bug][VTA][OpenCL] If allowed to allocate in stack, VTA multiple target test will fail

2021-09-15 Thread GitBox


manupa-arm commented on issue #8978:
URL: https://github.com/apache/tvm/issues/8978#issuecomment-920268526


   Also, if for some reason we really need a tir.allocate to be translated 
down to a stack placement, we should probably give that tir.allocate 
storage_scope = "local" in PrimFuncs that are placed on the CPU.
   
   Keep the global ones served via TVMBAWs, because TVMBAW could serve as the 
'global' allocator for memory.
   In this way, in micro we could still use TVMBAWs to serve memory from the 
application/platform layer for 'global' allocates.
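   To make the distinction concrete, a hand-written sketch (mine, not actual TVM 
codegen output) of what the two scopes would lower to in generated C:
   ```c
   #include <stdint.h>
   #include <tvm/runtime/c_backend_api.h>

   void example_prim_func(float* out) {
     // storage_scope = "local": plain stack placement.
     float scratch_local[64];

     // storage_scope = "global": serviced by the platform via TVMBAW
     // (TVMBackendAllocWorkspace, which micro routes to TVMPlatformAllocate).
     float* scratch_global = (float*)TVMBackendAllocWorkspace(
         /*device_type=*/1 /*kDLCPU*/, /*device_id=*/0,
         /*nbytes=*/64 * sizeof(float),
         /*dtype_code_hint=*/2, /*dtype_bits_hint=*/32);

     scratch_local[0] = 0.0f;
     scratch_global[0] = 1.0f;
     out[0] = scratch_local[0] + scratch_global[0];

     TVMBackendFreeWorkspace(/*device_type=*/1, /*device_id=*/0, scratch_global);
   }
   ```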


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kparzysz-quic commented on a change in pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


kparzysz-quic commented on a change in pull request #8986:
URL: https://github.com/apache/tvm/pull/8986#discussion_r709459893



##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at //developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68
+   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
+ TVM runtime
+   Make sure to provide the path to launcher's `CMakeLists.txt` directory
+   in `cmake` invocation.
+
+3. Run `make`. This will create `liblauncher_rpc_skel.so`.
+
+### Compilation of the Android part
+
+1. Build TVM runtime for Android. Unlike in the Hexagon case, this should be
+   the dynamic library (which is the default), i.e. `libtvm_runtime.so`.

Review comment:
   Done.

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at //developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.

Review comment:
   Done.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mehrdadh commented on a change in pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


mehrdadh commented on a change in pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#discussion_r709458696



##
File path: tests/micro/zephyr/test_zephyr_armv7m.py
##
@@ -0,0 +1,293 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import io
+import logging
+import os
+import pathlib
+import sys
+import logging
+import tarfile
+import tempfile
+
+import pytest
+import numpy as np
+
+import tvm
+import tvm.rpc
+import tvm.micro
+import tvm.testing
+import tvm.relay as relay
+
+from tvm.micro.interface_api import generate_c_interface_header
+
+import conftest
+
+_LOG = logging.getLogger(__name__)
+logging.basicConfig(level=logging.INFO)
+
+PLATFORMS = conftest.PLATFORMS
+
+TEMPLATE_PROJECT_DIR = (
+    pathlib.Path(__file__).parent
+    / ".."
+    / ".."
+    / ".."
+    / "apps"
+    / "microtvm"
+    / "zephyr"
+    / "template_project"
+).resolve()
+
+
+def _read_line(fd, timeout_sec: int):
+    data = ""
+    new_line = False
+    while True:
+        if new_line:
+            break
+        new_data = fd.read(1, timeout_sec=timeout_sec)
+        logging.debug(f"read data: {new_data}")
+        for item in new_data:
+            new_c = chr(item)
+            data = data + new_c
+            if new_c == "\n":
+                new_line = True
+                break
+    return data
+
+
+def _get_message(fd, expr: str, timeout_sec: int):
+    while True:
+        data = _read_line(fd, timeout_sec)
+        logging.debug(f"new line: {data}")
+        if expr in data:
+            return data
+
+def _build_project(temp_dir, zephyr_board, west_cmd, mod, build_config, extra_files_tar=None):
+    template_project_dir = (
+        pathlib.Path(__file__).parent
+        / ".."
+        / ".."
+        / ".."
+        / "apps"
+        / "microtvm"
+        / "zephyr"
+        / "template_project"
+    ).resolve()
+    project_dir = temp_dir / "project"
+    project = tvm.micro.generate_project(
+        str(template_project_dir),
+        mod,
+        project_dir,
+        {
+            "extra_files_tar": extra_files_tar,
+            "project_type": "aot_demo",
+            "west_cmd": west_cmd,
+            "verbose": bool(build_config.get("debug")),
+            "zephyr_board": zephyr_board,
+        },
+    )
+    project.build()
+    return project, project_dir
+
+
+def _create_header_file(tensor_name, npy_data, output_path, tar_file):
+    """
+    This method generates a header file containing the data from the numpy
+    array provided. It is used to capture the tensor data (for both inputs
+    and expected outputs).
+    """
+    header_file = io.StringIO()
+    header_file.write("#include <stddef.h>\n")
+    header_file.write("#include <stdint.h>\n")
+    header_file.write("#include <dlpack/dlpack.h>\n")
+    header_file.write(f"const size_t {tensor_name}_len = {npy_data.size};\n")
+
+    if npy_data.dtype == "int8":
+        header_file.write(f"int8_t {tensor_name}[] =")
+    elif npy_data.dtype == "int32":
+        header_file.write(f"int32_t {tensor_name}[] = ")
+    elif npy_data.dtype == "uint8":
+        header_file.write(f"uint8_t {tensor_name}[] = ")
+    elif npy_data.dtype == "float32":
+        header_file.write(f"float {tensor_name}[] = ")
+    else:
+        raise ValueError("Data type not expected.")
+
+    header_file.write("{")
+    for i in np.ndindex(npy_data.shape):
+        header_file.write(f"{npy_data[i]}, ")
+    header_file.write("};\n\n")
+
+    header_file_bytes = bytes(header_file.getvalue(), "utf-8")
+    raw_path = pathlib.Path(output_path) / f"{tensor_name}.h"
+    ti = tarfile.TarInfo(name=str(raw_path))
+    ti.size = len(header_file_bytes)
+    ti.mode = 0o644
+    ti.type = tarfile.REGTYPE
+    tar_file.addfile(ti, io.BytesIO(header_file_bytes))
+
+
+def _open_tflite_model(model_path: str):
+    # Import TFLite model
+    tflite_model_buf = open(model_path, "rb").read()
+    try:
+        import tflite
+
+        tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
+    except AttributeError:
+        import tflite.Model
+
+        tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
+
+    relay_mod, params = relay.frontend.from_tflite(tflite_model)
+
+    return relay_mod, params
+
+def 

[GitHub] [tvm] kparzysz-quic commented on a change in pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


kparzysz-quic commented on a change in pull request #8986:
URL: https://github.com/apache/tvm/pull/8986#discussion_r709457724



##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68

Review comment:
   Done.

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK

Review comment:
   Done.

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68
+   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
+ TVM runtime
+   Make sure to provide the path to launcher's `CMakeLists.txt` directory

Review comment:
   Fixed.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kparzysz-quic commented on a change in pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


kparzysz-quic commented on a change in pull request #8986:
URL: https://github.com/apache/tvm/pull/8986#discussion_r709450729



##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68
+   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
+ TVM runtime
+   Make sure to provide the path to launcher's `CMakeLists.txt` directory
+   in `cmake` invocation.
+
+3. Run `make`. This will create `liblauncher_rpc_skel.so`.
+
+### Compilation of the Android part
+
+1. Build TVM runtime for Android. Unlike in the Hexagon case, this should be
+   the dynamic library (which is the default), i.e. `libtvm_runtime.so`.
+
+2. Create a subdirectory for the build files (different from the one used for
+   Hexagon files), and run `cmake` with the following variables set:
+   - `FASTRPC_LIBS=STUB`

Review comment:
   There can be a number of reasons for that.  The diagnostic output from 
`mini-dm` usually contains enough information to help resolve it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] manupa-arm commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


manupa-arm commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709445247



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
     op = stmt.as<AllocateNode>();
     // Get constant allocation bound.
     int64_t nbytes = GetVectorBytes(op->dtype);
-    if (device_type_.defined()) {

Review comment:
   Yes, if we want the allocates to go to the stack in CPU PrimFuncs, maybe we 
should gate them to have storage_scope = 'local', and we should generate TVMBAWs 
for 'global' allocates.
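   As a rough sketch, the gating rule could look like this (illustrative Python, 
not the actual C++ pass; the exact size limit here is an assumption, mirroring 
the runtime's kMaxStackAlloca):
   
   ```python
   # Sketch: only small, constant-size allocates with 'local' storage scope
   # stay on the stack; everything else is lowered to a
   # TVMBackendAllocWorkspace (TVMBAW) call.
   MAX_STACK_ALLOCA_BYTES = 1024  # assumed limit, cf. kMaxStackAlloca
   
   
   def use_stack_alloca(storage_scope: str, const_nbytes: int) -> bool:
       return storage_scope == "local" and 0 < const_nbytes <= MAX_STACK_ALLOCA_BYTES
   ```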
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] csullivan commented on a change in pull request #8986: [Hexagon] Implement model launcher

2021-09-15 Thread GitBox


csullivan commented on a change in pull request #8986:
URL: https://github.com/apache/tvm/pull/8986#discussion_r709380322



##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68
+   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
+ TVM runtime
+   Make sure to provide the path to launcher's `CMakeLists.txt` directory

Review comment:
   You need an extra space here; otherwise this line appears as a 
continuation of the previous bullet.
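   Something like this, for instance (a sketch of the intended layout; the blank 
line keeps the note from being folded into the bullet):
   
   ```
   - `TVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a` _statically_ linked
     TVM runtime
   
   Make sure to provide the path to launcher's `CMakeLists.txt` directory
   ```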

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK

Review comment:
   ```suggestion
  - `USE_HEXAGON_SDK` to the path to the Hexagon SDK
   ```
   nit: would be nice to normalize to the naming convention used for the 
hexagon cmake variables in TVM.

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+
+
+## Compilation
+
+The launcher consists of two parts: part running on Hexagon, and part running
+on Android. They need to be compiled separately. Since some source files are
+shared between these two parts, make sure to delete all object files between
+compilations. Compile the Hexagon code first.
+
+The supported Snapdragon architectures are 855, 865, and 888.
+
+### Prerequisites
+
+1. Android NDK version r19c or later.
+2. Hexagon SDK version 4.0.0 or later.
+
+Android NDK can be downloaded from https://developer.android.com/ndk.
+Hexagon SDK is available at https://developer.qualcomm.com/software/hexagon-dsp-sdk.
+
+### Compilation of the Hexagon part
+
+1. Build the static version of TVM runtime for Hexagon: this step is the same
+   as building the shared version, except at the cmake step, add
+   `-DBUILD_STATIC_RUNTIME=ON`. The compilation step should create
+   `libtvm_runtime.a`.
+
+2. Create a subdirectory for the build files, and run `cmake` with the
+   following variables set:
+   - `FASTRPC_LIBS=SKEL`
+   - `HEXAGON_SDK_ROOT` to the path to the Hexagon SDK
+   - `CMAKE_C_COMPILER=hexagon-clang`
+   - `CMAKE_CXX_COMPILER=hexagon-clang++`
+   - `HEXAGON_ARCH` to one of v65, v66, v68

Review comment:
   ```suggestion
  - `USE_HEXAGON_ARCH` to one of v65, v66, v68
   ```
   nit: would be nice to normalize to the naming convention used for the 
hexagon cmake variables in TVM.
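   Putting the renames together, the Hexagon-side configure step might look 
something like this (a sketch only; paths and the chosen arch are placeholders, 
and it assumes the suggested variable names are adopted):
   
   ```sh
   # Illustrative: configure and build the Hexagon part of the launcher.
   mkdir build_hexagon && cd build_hexagon
   cmake \
     -DFASTRPC_LIBS=SKEL \
     -DUSE_HEXAGON_SDK=/path/to/hexagon-sdk \
     -DCMAKE_C_COMPILER=hexagon-clang \
     -DCMAKE_CXX_COMPILER=hexagon-clang++ \
     -DUSE_HEXAGON_ARCH=v66 \
     -DTVM_RUNTIME_HEXAGON=/path/to/libtvm_runtime.a \
     /path/to/tvm/src/runtime/hexagon/launcher
   make
   ```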

##
File path: src/runtime/hexagon/launcher/README.md
##
@@ -0,0 +1,173 @@
+# Hexagon Graph Launcher
+
+## Compilation
+
+The launcher consists of two parts: part 

[GitHub] [tvm] jroesch commented on a change in pull request #8849: [5/6] Arm(R) Ethos(TM)-U NPU codegen integration

2021-09-15 Thread GitBox


jroesch commented on a change in pull request #8849:
URL: https://github.com/apache/tvm/pull/8849#discussion_r709441749



##
File path: src/tir/transforms/lower_tvm_builtin.cc
##
@@ -113,16 +113,6 @@ class BuiltinLower : public StmtExprMutator {
     op = stmt.as<AllocateNode>();
     // Get constant allocation bound.
     int64_t nbytes = GetVectorBytes(op->dtype);
-    if (device_type_.defined()) {

Review comment:
   cc @areusch @mbrookhart, this is the code we discussed yesterday. CI 
currently passes with this code? That seems to contradict the conversations we 
had. Also, I think these should be stack allocations, for performance on cloud 
CPUs.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (98ecefb -> 2aebd33)

2021-09-15 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 98ecefb  [Relay] Remove memory planing from LowerTEPass  (#8974)
 add 2aebd33  Add standalone_crt/ to be part of the wheel package, when 
available. (#9005)

No new revisions were added by this update.

Summary of changes:
 python/setup.py | 34 --
 1 file changed, 28 insertions(+), 6 deletions(-)


[GitHub] [tvm] areusch commented on a change in pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


areusch commented on a change in pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#discussion_r709435990



##
File path: tests/micro/zephyr/test_zephyr_armv7m.py
##
@@ -0,0 +1,293 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import io
+import logging
+import os
+import pathlib
+import sys
+import logging
+import tarfile
+import tempfile
+
+import pytest
+import numpy as np
+
+import tvm
+import tvm.rpc
+import tvm.micro
+import tvm.testing
+import tvm.relay as relay
+
+from tvm.micro.interface_api import generate_c_interface_header
+
+import conftest
+
+_LOG = logging.getLogger(__name__)
+logging.basicConfig(level=logging.INFO)
+
+PLATFORMS = conftest.PLATFORMS
+
+TEMPLATE_PROJECT_DIR = (
+    pathlib.Path(__file__).parent
+    / ".."
+    / ".."
+    / ".."
+    / "apps"
+    / "microtvm"
+    / "zephyr"
+    / "template_project"
+).resolve()
+
+
+def _read_line(fd, timeout_sec: int):
+    data = ""
+    new_line = False
+    while True:
+        if new_line:
+            break
+        new_data = fd.read(1, timeout_sec=timeout_sec)
+        logging.debug(f"read data: {new_data}")
+        for item in new_data:
+            new_c = chr(item)
+            data = data + new_c
+            if new_c == "\n":
+                new_line = True
+                break
+    return data
+
+
+def _get_message(fd, expr: str, timeout_sec: int):
+    while True:
+        data = _read_line(fd, timeout_sec)
+        logging.debug(f"new line: {data}")
+        if expr in data:
+            return data
+
+def _build_project(temp_dir, zephyr_board, west_cmd, mod, build_config, extra_files_tar=None):
+    template_project_dir = (
+        pathlib.Path(__file__).parent
+        / ".."
+        / ".."
+        / ".."
+        / "apps"
+        / "microtvm"
+        / "zephyr"
+        / "template_project"
+    ).resolve()
+    project_dir = temp_dir / "project"
+    project = tvm.micro.generate_project(
+        str(template_project_dir),
+        mod,
+        project_dir,
+        {
+            "extra_files_tar": extra_files_tar,
+            "project_type": "aot_demo",
+            "west_cmd": west_cmd,
+            "verbose": bool(build_config.get("debug")),
+            "zephyr_board": zephyr_board,
+        },
+    )
+    project.build()
+    return project, project_dir
+
+
+def _create_header_file(tensor_name, npy_data, output_path, tar_file):
+    """
+    This method generates a header file containing the data from the numpy
+    array provided. It is used to capture the tensor data (for both inputs
+    and expected outputs).
+    """
+    header_file = io.StringIO()
+    header_file.write("#include <stddef.h>\n")
+    header_file.write("#include <stdint.h>\n")
+    header_file.write("#include <dlpack/dlpack.h>\n")
+    header_file.write(f"const size_t {tensor_name}_len = {npy_data.size};\n")
+
+    if npy_data.dtype == "int8":
+        header_file.write(f"int8_t {tensor_name}[] =")
+    elif npy_data.dtype == "int32":
+        header_file.write(f"int32_t {tensor_name}[] = ")
+    elif npy_data.dtype == "uint8":
+        header_file.write(f"uint8_t {tensor_name}[] = ")
+    elif npy_data.dtype == "float32":
+        header_file.write(f"float {tensor_name}[] = ")
+    else:
+        raise ValueError("Data type not expected.")
+
+    header_file.write("{")
+    for i in np.ndindex(npy_data.shape):
+        header_file.write(f"{npy_data[i]}, ")
+    header_file.write("};\n\n")
+
+    header_file_bytes = bytes(header_file.getvalue(), "utf-8")
+    raw_path = pathlib.Path(output_path) / f"{tensor_name}.h"
+    ti = tarfile.TarInfo(name=str(raw_path))
+    ti.size = len(header_file_bytes)
+    ti.mode = 0o644
+    ti.type = tarfile.REGTYPE
+    tar_file.addfile(ti, io.BytesIO(header_file_bytes))
+
+
+def _open_tflite_model(model_path: str):
+    # Import TFLite model
+    tflite_model_buf = open(model_path, "rb").read()
+    try:
+        import tflite
+
+        tflite_model = tflite.Model.GetRootAsModel(tflite_model_buf, 0)
+    except AttributeError:
+        import tflite.Model
+
+        tflite_model = tflite.Model.Model.GetRootAsModel(tflite_model_buf, 0)
+
+    relay_mod, params = relay.frontend.from_tflite(tflite_model)
+
+    return relay_mod, params
+
+def 

[GitHub] [tvm] jroesch merged pull request #9005: Add standalone_crt/ to be part of the wheel package, when available.

2021-09-15 Thread GitBox


jroesch merged pull request #9005:
URL: https://github.com/apache/tvm/pull/9005


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jroesch commented on pull request #9005: Add standalone_crt/ to be part of the wheel package, when available.

2021-09-15 Thread GitBox


jroesch commented on pull request #9005:
URL: https://github.com/apache/tvm/pull/9005#issuecomment-920244206


   @manupa-arm I agree we can land this, but it might be worth revisiting: from 
my perspective, a library should only point to `.so`, `.dylib`, and so on. I 
think it would be good, at a minimum, to clarify this in a comment somewhere, if 
not to change the code as Tristan suggested. 
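   To illustrate the distinction (a sketch, not the actual setup.py logic):
   
   ```python
   # Illustrative only: treat only shared objects as "libraries"; assets such
   # as standalone_crt/ would go into a separate data bucket.
   SHARED_SUFFIXES = (".so", ".dylib", ".dll")
   
   
   def split_artifacts(paths):
       libs = [p for p in paths if p.endswith(SHARED_SUFFIXES)]
       data = [p for p in paths if not p.endswith(SHARED_SUFFIXES)]
       return libs, data
   ```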


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8990: [microTVM] Update support for ARMv7m intrinsic

2021-09-15 Thread GitBox


areusch commented on a change in pull request #8990:
URL: https://github.com/apache/tvm/pull/8990#discussion_r709432323



##
File path: apps/microtvm/reference-vm/zephyr/base-box/base_box_test.sh
##
@@ -37,3 +37,5 @@ if [ $board == "stm32f746xx" ]; then
 else
     pytest tests/micro/zephyr/test_zephyr_aot.py --zephyr-board=${board}
 fi
+
+pytest tests/micro/zephyr/test_zephyr_armv7m.py --zephyr-board=${board}

Review comment:
   it's tempting to inline `read_and_pad`, which should get inlined by the 
compiler at codegen time. CMSIS 5 is [under ASF 
2.0](https://github.com/ARM-software/CMSIS_5/blob/develop/LICENSE.txt). the 
tricky thing is that there are quite a lot of header dependencies designed to 
support a variety of toolchains. it would be nice to preserve support for those.
   
   given that, i'm tempted to opt for downloading CMSIS to the temporary 
directory. we will likely need to come up with a solution for this more broadly 
to work with CMSIS-NN as well cc @ashutosh-arm @u99127 @manupa-arm 
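   for illustration, the download approach could look roughly like this in the 
test fixture (the release pin and extraction layout are assumptions, not a 
settled location):
   
   ```python
   import pathlib
   import tarfile
   import urllib.request
   
   # Assumed pin; any CMSIS_5 tag providing read_and_pad's headers would do.
   CMSIS_URL = "https://github.com/ARM-software/CMSIS_5/archive/refs/tags/5.8.0.tar.gz"
   
   
   def fetch_cmsis(temp_dir: pathlib.Path) -> pathlib.Path:
       """Download and unpack CMSIS into temp_dir, returning the source root."""
       tar_path = temp_dir / "cmsis.tar.gz"
       urllib.request.urlretrieve(CMSIS_URL, str(tar_path))
       with tarfile.open(tar_path) as tar:
           tar.extractall(temp_dir)
       return temp_dir / "CMSIS_5-5.8.0"
   ```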




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] vinx13 commented on pull request #8983: [Bugfix] Fix other div zero errors also in rewrite_simplify

2021-09-15 Thread GitBox


vinx13 commented on pull request #8983:
URL: https://github.com/apache/tvm/pull/8983#issuecomment-920236818


   There are some flaky tests on CI; could you push again to retrigger it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #9017: [ONNX] QLinearAveragePool and QLinearGlobalAveragePool contrib op

2021-09-15 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #9017:
URL: https://github.com/apache/tvm/pull/9017#discussion_r709416154



##
File path: python/tvm/relay/frontend/onnx.py
##
@@ -654,6 +676,40 @@ def _impl_v1(cls, inputs, attr, params):
 )
 
 
+class QLinearGlobalAveragePool(OnnxOpConverter):
+    "Operator converter for QLinearGlobalAveragePool from Microsoft onnxruntime contrib opset."
+
+    @classmethod
+    def _impl_v1(cls, inputs, attr, params):
+        rank = len(infer_shape(inputs[0]))
+
+        x_scale = get_scalar(inputs[1], params)
+        x_zero_point = get_scalar(inputs[2], params, dtype="int32")
+        y_scale = fold_constant(get_scalar(inputs[3], params))
+        y_zero_point = get_scalar(inputs[4], params, dtype="int32")
+
+        input_dtype = infer_type(inputs[0]).checked_type.dtype
+
+        # Onnxruntime documentation does not mention that this global avg_pool should follow the

Review comment:
   I'm fine with this for now, but this should be a TODO, since I believe the 
actual implementation does not do dq -> pool -> q:
   
https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/mlas/lib/qlgavgpool.cpp
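   For reference, the dq -> pool -> q pattern the converter emits corresponds 
roughly to this standalone Relay sketch (shapes, scales, and zero points are 
made up for illustration):
   
   ```python
   import tvm
   from tvm import relay
   
   # Dequantize the quantized input, average-pool in float, then requantize.
   x = relay.var("x", shape=(1, 8, 32, 32), dtype="uint8")
   scale = relay.const(0.02, "float32")
   zero_point = relay.const(128, "int32")
   
   x_fp = relay.qnn.op.dequantize(x, scale, zero_point)
   pooled = relay.nn.global_avg_pool2d(x_fp)
   y = relay.qnn.op.quantize(pooled, scale, zero_point, out_dtype="uint8")
   
   mod = tvm.IRModule.from_expr(relay.Function([x], y))
   print(mod)
   ```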

##
File path: python/tvm/relay/frontend/onnx.py
##
@@ -351,6 +366,13 @@ class AveragePool(Pool):
 name = "avg_pool"
 
 
+class QLinearAveragePool(Pool):

Review comment:
   I think composition rather than subclassing would be a cleaner solution. 
Right now, all the code handling the quantized and non-quantized cases is in the 
same place, which makes it a bit harder to read. Please separate it.
   
   You can do something like refactoring the Pool impl into a new class method, 
e.g. _run_calculation(...), and calling it from QLinearAveragePool, as in the 
sketch below.
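   A rough sketch of that shape (OnnxOpConverter is the existing base class; the 
dequantize/requantize helpers here are hypothetical and just mark where the 
quantized handling would live):
   
   ```python
   class Pool(OnnxOpConverter):
       """Float pooling (existing behavior)."""
   
       name = "avg_pool"
   
       @classmethod
       def _run_calculation(cls, inputs, attr, params):
           # Existing float pooling lowering, moved out of _impl_v1 so it
           # can be reused without subclassing.
           ...
   
       @classmethod
       def _impl_v1(cls, inputs, attr, params):
           return cls._run_calculation(inputs, attr, params)
   
   
   class QLinearAveragePool(OnnxOpConverter):
       """Quantized pooling via composition rather than inheritance."""
   
       @classmethod
       def _impl_v1(cls, inputs, attr, params):
           float_inputs = dequantize_inputs(inputs, params)  # hypothetical helper
           pooled = Pool._run_calculation(float_inputs, attr, params)
           return requantize_output(pooled, inputs, params)  # hypothetical helper
   ```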

##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -3056,6 +3056,152 @@ def verify_global_pooling(x_shape, mode):
     verify_global_pooling([4, 1, 2, 6, 4], mode)
 
 
+@tvm.testing.parametrize_targets
+def test_qlinear_average_pool(target, dev):
+    def verify_qlinear_average_pool(
+        x_shape, kernel_shape, strides, pads, out_shape, auto_pad="NOTSET"
+    ):
+        input_nodes = [
+            helper.make_tensor_value_info("X", TensorProto.FLOAT, list(x_shape)),
+        ]
+
+        output_nodes = [
+            helper.make_tensor_value_info("Y", TensorProto.FLOAT, list(out_shape)),
+        ]
+
+        input_names = ["X"]
+
+        node = helper.make_node(
+            "AveragePool",

Review comment:
   Should these be QLinear Nodes?

##
File path: python/tvm/relay/frontend/onnx.py
##
@@ -3794,12 +3850,14 @@ def _get_convert_map(opset):
 "Xor": Renamer("logical_xor"),
 # defs/nn
 "AveragePool": AveragePool.get_converter(opset),
+"QLinearAveragePool": QLinearAveragePool.get_converter(opset),

Review comment:
   There's a quantization section down below, you should move these there

##
File path: tests/python/frontend/onnx/test_forward.py
##
@@ -3056,6 +3056,152 @@ def verify_global_pooling(x_shape, mode):
     verify_global_pooling([4, 1, 2, 6, 4], mode)
 
 
+@tvm.testing.parametrize_targets
+def test_qlinear_average_pool(target, dev):
+    def verify_qlinear_average_pool(
+        x_shape, kernel_shape, strides, pads, out_shape, auto_pad="NOTSET"
+    ):
+        input_nodes = [
+            helper.make_tensor_value_info("X", TensorProto.FLOAT, list(x_shape)),
+        ]
+
+        output_nodes = [
+            helper.make_tensor_value_info("Y", TensorProto.FLOAT, list(out_shape)),
+        ]
+
+        input_names = ["X"]
+
+        node = helper.make_node(
+            "AveragePool",
+            inputs=input_names,
+            outputs=["Y"],
+            kernel_shape=kernel_shape,
+            strides=strides,
+        )
+
+        if pads is None:
+            pad_attr = helper.make_attribute("auto_pad", auto_pad)
+        else:
+            pad_attr = helper.make_attribute("pads", pads)
+        node.attribute.append(pad_attr)
+
+        graph = helper.make_graph(
+            [node],
+            "qlinear_average_pool_test",
+            inputs=input_nodes,
+            outputs=output_nodes,
+        )
+
+        model = helper.make_model(graph, producer_name="qlinear_average_pool_Test")
+        quantize_and_verify_with_ort(model, input_names, [x_shape], target, dev)
+
+    # Pool1D
+    verify_qlinear_average_pool(
+        x_shape=[1, 1, 32],
+        kernel_shape=[3],
+        strides=[1],
+        pads=[1, 1],
+        out_shape=[1, 1, 32],
+    )
+    # Pool2D
+    verify_qlinear_average_pool(
+        x_shape=[1, 1, 32, 32],
+        kernel_shape=[3, 3],
+        strides=[1, 1],
+        pads=[1, 1, 1, 1],
+        out_shape=[1, 1, 32, 32],
+    )
+
+    # Pool1D with stride
+    verify_qlinear_average_pool(
+        x_shape=[1, 1, 32],
+        kernel_shape=[3],
+        strides=[2],
+        pads=[1, 1],
+        out_shape=[1, 1, 16],
+    )

[GitHub] [tvm] manupa-arm commented on pull request #9005: Add standalone_crt/ to be part of the wheel package, when available.

2021-09-15 Thread GitBox


manupa-arm commented on pull request #9005:
URL: https://github.com/apache/tvm/pull/9005#issuecomment-920228506


   @areusch Shall we merge this?
   
   I think we agree that this step is required either way.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (57386a2 -> 98ecefb)

2021-09-15 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 57386a2  [Hexagon] Treat floats as float32 when passing args to 
offloaded kernels (#9010)
 add 98ecefb  [Relay] Remove memory planing from LowerTEPass  (#8974)

No new revisions were added by this update.

Summary of changes:
 src/relay/backend/aot_executor_codegen.cc   | 16 +--
 src/relay/backend/graph_executor_codegen.cc | 17 +--
 src/relay/backend/interpreter.cc            | 28 +--
 src/relay/backend/te_compiler.cc            | 73 +++--
 src/relay/backend/te_compiler.h             | 27 +--
 src/relay/backend/utils.h                   | 11 +
 6 files changed, 89 insertions(+), 83 deletions(-)

