[GitHub] [tvm] FrozenGene commented on a change in pull request #8223: support adb-shell style cpp_rpc

2021-06-14 Thread GitBox


FrozenGene commented on a change in pull request #8223:
URL: https://github.com/apache/tvm/pull/8223#discussion_r651470581



##
File path: apps/cpp_rpc/rpc_env.cc
##
@@ -95,7 +96,16 @@ RPCEnv::RPCEnv(const std::string& wd) {
 auto cmdline = fopen("/proc/self/cmdline", "r");
 fread(cwd, 1, sizeof(cwd), cmdline);
 fclose(cmdline);
-base_ = "/data/data/" + std::string(cwd) + "/cache/rpc";
+std::string android_base_ = "/data/data/" + std::string(cwd) + "/cache";
+struct stat statbuf;
+// Check if application data directory exist. If not exist usually mean 
tvm_rpc run from adb

Review comment:
   Nitty comment. `If not exist, usually means we run tvm_rpc from adb 
shell terminal`

##
File path: apps/cpp_rpc/rpc_env.cc
##
@@ -95,7 +96,16 @@ RPCEnv::RPCEnv(const std::string& wd) {
 auto cmdline = fopen("/proc/self/cmdline", "r");
 fread(cwd, 1, sizeof(cwd), cmdline);
 fclose(cmdline);
-base_ = "/data/data/" + std::string(cwd) + "/cache/rpc";
+std::string android_base_ = "/data/data/" + std::string(cwd) + "/cache";
+struct stat statbuf;
+// Check if application data directory exist. If not exist usually mean 
tvm_rpc run from adb
+// shell terminal.
+    if (stat(android_base_.data(), &statbuf) == -1 || !S_ISDIR(statbuf.st_mode)) {
+  // Tmp directory always writable for 'shell' user.

Review comment:
   `is always writable...`
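   The hunk under review falls back to a tmp directory when the Android app cache directory is absent. A minimal Python sketch of that stat-based check (the fallback path is illustrative, not TVM's actual API):

   ```python
   import os
   import stat

   def pick_base_dir(app_cache_dir: str) -> str:
       # Prefer the app-specific cache dir; fall back to a tmp path when it
       # is missing or not a directory (e.g. tvm_rpc run from an adb shell),
       # since the tmp directory is writable for the 'shell' user.
       try:
           st = os.stat(app_cache_dir)
       except OSError:
           return "/data/local/tmp/rpc"
       if not stat.S_ISDIR(st.st_mode):
           return "/data/local/tmp/rpc"
       return app_cache_dir

   print(pick_base_dir("/"))                   # existing directory is kept
   print(pick_base_dir("/no/such/dir/cache"))  # missing -> adb-shell fallback
   ```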




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies edited a comment on issue #6624: [Relay] Module mutated in-place

2021-06-14 Thread GitBox


electriclilies edited a comment on issue #6624:
URL: https://github.com/apache/tvm/issues/6624#issuecomment-861194896


   @m3at I think #8143 was just a workaround, so we can't close this issue yet. 
#7979 is an issue tracking all the times AlterOpLayout does in place 
modification of modules; so instead of closing this issue, I'll link it to 
#7979 and we can close it when there is an actual solution, not just a 
workaround.






[GitHub] [tvm] electriclilies commented on issue #7979: AlterOpLayout modifies input module inplace (and other issues)

2021-06-14 Thread GitBox


electriclilies commented on issue #7979:
URL: https://github.com/apache/tvm/issues/7979#issuecomment-861194872


   #6624 is an old issue that was worked around by #8143






[GitHub] [tvm] electriclilies commented on issue #6624: [Relay] Module mutated in-place

2021-06-14 Thread GitBox


electriclilies commented on issue #6624:
URL: https://github.com/apache/tvm/issues/6624#issuecomment-861194896


   @m3at I think #8143 was just a workaround, so we can't close this issue yet. 
#7979 is an issue tracking all the times AlterOpLayout does in place 
modification of modules; instead of closing it, I'll link it to the other issue 
and we can close it when there is an actual solution, not just a workaround.






[GitHub] [tvm] zotanika opened a new pull request #8260: [Caffe Frontend] supporting group > 1 cases for Deconv op

2021-06-14 Thread GitBox


zotanika opened a new pull request #8260:
URL: https://github.com/apache/tvm/pull/8260


   - Handling group > 1 cases, assuming group == output channels
   - Simply decomposed into Relay split, conv2d_transposed, and multi-leveled 
concatenate ops
   - Added some test cases
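   The decomposition above (split, per-group conv2d_transpose, concatenate) can be sketched as channel bookkeeping, assuming channels divide evenly by the group count (the helper name is invented for illustration):

   ```python
   def grouped_deconv_slices(in_channels: int, out_channels: int, groups: int):
       # Split the input channel-wise into `groups` slices, run
       # conv2d_transpose on each slice, then concatenate the outputs.
       # Returns the (in, out) channel count each per-group deconv sees.
       assert in_channels % groups == 0 and out_channels % groups == 0
       per_in, per_out = in_channels // groups, out_channels // groups
       return [(per_in, per_out)] * groups

   print(grouped_deconv_slices(8, 8, 4))  # [(2, 2), (2, 2), (2, 2), (2, 2)]
   ```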
   
   Signed-off-by: zotanika 
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   






[GitHub] [tvm] m3at commented on issue #6624: [Relay] Module mutated in-place

2021-06-14 Thread GitBox


m3at commented on issue #6624:
URL: https://github.com/apache/tvm/issues/6624#issuecomment-861173880


   @jwfromm I think this was fixed in #8143 and can now be closed? (Sorry, I can't test it at the moment.)






[tvm] branch main updated: Add check to only cast opaque handles to cl::BufferDescriptor at runtime. (#8256)

2021-06-14 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 75d9b78  Add check to only cast opaque handles to cl::BufferDescriptor 
at runtime. (#8256)
75d9b78 is described below

commit 75d9b78054ca005f95dbbd02dea1395a8c28eac5
Author: Chris Sullivan 
AuthorDate: Mon Jun 14 21:33:02 2021 -0700

Add check to only cast opaque handles to cl::BufferDescriptor at runtime. 
(#8256)
---
 src/runtime/opencl/opencl_module.cc | 9 +++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/src/runtime/opencl/opencl_module.cc 
b/src/runtime/opencl/opencl_module.cc
index 631d404..4040d82 100644
--- a/src/runtime/opencl/opencl_module.cc
+++ b/src/runtime/opencl/opencl_module.cc
@@ -64,8 +64,13 @@ class OpenCLWrappedFunc {
 }
 // setup arguments.
 for (cl_uint i = 0; i < arg_size_.size(); ++i) {
-  auto* arg = static_cast<cl::BufferDescriptor*>(void_args[i]);
-  OPENCL_CALL(clSetKernelArg(kernel, i, arg_size_[i], arg->buffer));
+  void* arg = nullptr;
+  if (args.type_codes[i] == DLDataTypeCode::kDLOpaqueHandle) {
+    arg = static_cast<cl::BufferDescriptor*>(void_args[i])->buffer;
+  } else {
+arg = void_args[i];
+  }
+  OPENCL_CALL(clSetKernelArg(kernel, i, arg_size_[i], arg));
 }
 cl_command_queue queue = w_->GetQueue(t->device);
 ThreadWorkLoad wl = thread_axis_cfg_.Extract(args);
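The committed change dispatches on the argument's type code before unwrapping, so only opaque handles are cast to cl::BufferDescriptor. A hedged Python sketch of that dispatch (a dict stands in for the descriptor; the type-code value follows dlpack's DLDataTypeCode enum):

```python
KDL_OPAQUE_HANDLE = 3  # DLDataTypeCode::kDLOpaqueHandle in dlpack

def resolve_kernel_arg(type_code: int, value):
    # Only opaque handles are treated as buffer descriptors and
    # unwrapped; every other argument kind is passed through unchanged.
    if type_code == KDL_OPAQUE_HANDLE:
        return value["buffer"]
    return value

desc = {"buffer": "cl_mem_0"}
print(resolve_kernel_arg(KDL_OPAQUE_HANDLE, desc))  # cl_mem_0
print(resolve_kernel_arg(2, 3.14))                  # 3.14
```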


[tvm] branch main updated: Update parsed kernel sources check. (#8257)

2021-06-14 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new b85ac0e  Update parsed kernel sources check. (#8257)
b85ac0e is described below

commit b85ac0ef0f21de5528de695eec388eca98152347
Author: Chris Sullivan 
AuthorDate: Mon Jun 14 21:32:54 2021 -0700

Update parsed kernel sources check. (#8257)
---
 src/runtime/opencl/opencl_module.cc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/runtime/opencl/opencl_module.cc 
b/src/runtime/opencl/opencl_module.cc
index 397f57b..631d404 100644
--- a/src/runtime/opencl/opencl_module.cc
+++ b/src/runtime/opencl/opencl_module.cc
@@ -193,8 +193,8 @@ void OpenCLModuleNode::Init() {
   ICHECK(!parsed_kernels_.empty()) << "The OpenCL module expects a kernel 
delimited "
<< "source from code generation, but no 
kernel "
<< "delimiter was found.";
-  ICHECK_EQ(workspace_->num_registered_kernels, parsed_kernels_.size())
-  << "The number of registered kernels does not match number of parsed 
kernel sources";
+  ICHECK_EQ(fmap_.size(), parsed_kernels_.size())
+  << "The number of parsed kernel sources does not match the number of 
kernel functions";
   // zero initialize cl_program pointers for each device kernel
   for (auto& kv : parsed_kernels_) {
 programs_.insert({kv.first, std::vector<cl_program>(workspace_->devices.size(), nullptr)});
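The updated ICHECK compares the parsed kernel sources against the module's function map rather than the workspace's registered kernels. A small Python sketch of the same invariant (names illustrative):

```python
def check_parsed_kernels(parsed_kernels: dict, fmap: dict) -> None:
    # Mirrors the updated check: the number of parsed kernel sources must
    # equal the number of kernel functions known to the module.
    if len(fmap) != len(parsed_kernels):
        raise RuntimeError(
            "The number of parsed kernel sources does not match "
            "the number of kernel functions")

check_parsed_kernels({"add": "__kernel void add(...) {}"}, {"add": object()})
```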


[GitHub] [tvm] masahi merged pull request #8256: [OpenCL] Add check to only cast opaque handles to cl::BufferDescriptor at runtime

2021-06-14 Thread GitBox


masahi merged pull request #8256:
URL: https://github.com/apache/tvm/pull/8256


   






[GitHub] [tvm] masahi merged pull request #8257: [OpenCL] Verify the number of parsed kernel sources against the expected number of kernel functions.

2021-06-14 Thread GitBox


masahi merged pull request #8257:
URL: https://github.com/apache/tvm/pull/8257


   






[GitHub] [tvm] zotanika closed pull request #8125: [Caffe Frontend] supporting group > 1 cases for Deconv op

2021-06-14 Thread GitBox


zotanika closed pull request #8125:
URL: https://github.com/apache/tvm/pull/8125


   






[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r651431025



##
File path: src/relay/transforms/fp32_to_fp16.cc
##
@@ -0,0 +1,337 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file fp32_to_fp16.cc
+ * \brief Rewrite a graph into an fp16 form.
+ */
+#include "fp32_to_fp16.h"
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+    auto h1 = std::hash<T1>()(pair.first);
+    auto h2 = std::hash<T2>()(pair.second);
+
+    // Use boost's combine_hash strategy
+    return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// A function which maps CallNodes to their initial conversion color
+using ColorFunc = std::function<FP16ConversionCategory(const CallNode*)>;
+
+// A function which maps green CallNodes to wanted accumulation and output 
dtypes
+using OutputDtypeFunc = std::function<FP16OpDType(const CallNode*)>;
+
+class AmpGraphCreator : public ExprMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+  const ColorFunc colorer;
+  const OutputDtypeFunc output_dtype_func;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs new_attrs = Attrs(call->attrs);
+if (new_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes (creating new tensors of type dtype)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsDType(attrs, accumulation_dtype);
+  }
+}
+
+return new_attrs;
+  }
+
+  template <typename T>
+  void ModifyAttrsOutputDType(const T* attrs, const DataType& 
accumulation_dtype) const {
+/*
+ Helper template to modify relevant attributes with out_dtype type.
+ These represent accumulation dtypes for some operations e.g.
+ conv2d might take in fp16 and give a fp32 result.
+ Attrs is const because we get it as a const.
+ */
+    T* mutable_attrs = const_cast<T*>(attrs);
+
+    DataType cur_type = (mutable_attrs->out_dtype);
+    if (cur_type.is_float() || cur_type.is_void()) mutable_attrs->out_dtype = accumulation_dtype;
+  }
+
+  template <typename T>
+  void ModifyAttrsDType(const T* attrs, const DataType& accumulation_dtype) 
const {
+/*
+ Helper template to modify relevant attributes with dtype type.
+ This determines the output dtype for some ops. For example
+ zeros creates a tensor of zeros of the specified dtype.
+ Attrs is const 
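The `pair_hash` struct quoted earlier in this diff combines two hashes with boost's `hash_combine` formula. A small Python sketch of that combine step, masked to emulate fixed-width unsigned arithmetic (the 64-bit width is an assumption; the constant matches the C++ above):

```python
MASK = (1 << 64) - 1  # emulate fixed-width unsigned arithmetic

def hash_combine(h1: int, h2: int) -> int:
    # h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2)), as in pair_hash
    return (h1 ^ ((h1 + 0x9E3779B9 + ((h2 << 6) & MASK) + (h2 >> 2)) & MASK)) & MASK

def pair_hash(pair) -> int:
    # Hash a (parent node, wanted dtype)-style pair, as the cast cache does.
    return hash_combine(hash(pair[0]) & MASK, hash(pair[1]) & MASK)

# Equal pairs hash equally within a run; distinct pairs usually differ.
print(pair_hash(("node", "float16")) == pair_hash(("node", "float16")))  # True
```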

[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r651430536



##
File path: src/relay/transforms/fp32_to_fp16.h
##
@@ -0,0 +1,232 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file fp32_to_fp16.h
+ * \brief Utilities and common types used for FP32->FP16 pass.
+ */
+#ifndef TVM_RELAY_TRANSFORMS_FP32_TO_FP16_H_
+#define TVM_RELAY_TRANSFORMS_FP32_TO_FP16_H_
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+struct FP16OpDType {
+  DataType accumulation_dtype;
+  DataType output_dtype;
+};
+
+// GREEN colored ops should always be done in FP16 due to the speed and memory 
savings
+// GRAY colored ops can be done in FP16 but don't have speedups to justify a 
dedicated cast.
+// RED colored ops should not be done in FP16 due to numerical reasons.
+enum FP16ConversionCategory { RED, GRAY, GREEN };

Review comment:
   I've implemented the suggestions listed.
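   The RED/GRAY/GREEN coloring discussed above can be sketched in Python. The enum mirrors the one under review; the default coloring table and helper are hypothetical illustrations, not the pass's actual tables:

   ```python
   from enum import Enum

   class FP16ConversionCategory(Enum):
       # GREEN ops always run in FP16 for speed and memory savings,
       # GRAY ops may but gain little, RED ops stay FP32 for numerical safety.
       RED = 0
       GRAY = 1
       GREEN = 2

   # Hypothetical default coloring table (op names illustrative).
   DEFAULT_COLORS = {
       "nn.conv2d": FP16ConversionCategory.GREEN,
       "nn.softmax": FP16ConversionCategory.RED,
   }

   def color_of(op_name: str) -> FP16ConversionCategory:
       # A ColorFunc in the pass maps a CallNode to its initial category;
       # unknown ops default to GRAY here.
       return DEFAULT_COLORS.get(op_name, FP16ConversionCategory.GRAY)

   print(color_of("nn.conv2d").name)  # GREEN
   print(color_of("add").name)        # GRAY
   ```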








[tvm] branch main updated (6b72dc7 -> 84e94e9)

2021-06-14 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 6b72dc7  [BUG FIX] Add _type_has_method_sequal_reduce to Span and 
SourceNode (#8248)
 add 84e94e9  [Target] Allow 'true' and 'false' strings in conversions to 
integer (#8254)

No new revisions were added by this update.

Summary of changes:
 src/target/target.cc| 13 -
 tests/python/unittest/test_target_target.py | 11 +++
 2 files changed, 23 insertions(+), 1 deletion(-)
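The conversion relaxed in #8254 can be sketched in Python: 'true'/'false' strings are accepted alongside plain integers when a target attribute is converted (a simplified model; TVM's exact parsing may differ):

```python
def target_attr_to_int(s: str) -> int:
    # Accept boolean words as well as integer literals.
    low = s.strip().lower()
    if low == "true":
        return 1
    if low == "false":
        return 0
    return int(low)

print(target_attr_to_int("true"))   # 1
print(target_attr_to_int("false"))  # 0
print(target_attr_to_int("42"))     # 42
```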


[GitHub] [tvm] masahi merged pull request #8254: [Target] Allow 'true' and 'false' strings in conversions to integer

2021-06-14 Thread GitBox


masahi merged pull request #8254:
URL: https://github.com/apache/tvm/pull/8254


   






[tvm] branch main updated (24c2f5c -> 6b72dc7)

2021-06-14 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 24c2f5c  make simplify inference iterative (#8246)
 add 6b72dc7  [BUG FIX] Add _type_has_method_sequal_reduce to Span and 
SourceNode (#8248)

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/span.h | 3 +++
 1 file changed, 3 insertions(+)


[GitHub] [tvm] masahi merged pull request #8248: [BUG FIX] Add _type_has_method_sequal_reduce to Span and SourceNode

2021-06-14 Thread GitBox


masahi merged pull request #8248:
URL: https://github.com/apache/tvm/pull/8248


   






[GitHub] [tvm] fantasyRqg commented on pull request #8223: support adb-shell style cpp_rpc

2021-06-14 Thread GitBox


fantasyRqg commented on pull request #8223:
URL: https://github.com/apache/tvm/pull/8223#issuecomment-861126514


   @FrozenGene All problems solved






[GitHub] [tvm] zhuzilin commented on pull request #8056: [Relay, TOPI] Add negative log likelihood loss (nll_loss) op

2021-06-14 Thread GitBox


zhuzilin commented on pull request #8056:
URL: https://github.com/apache/tvm/pull/8056#issuecomment-861118701


   > One last change and then I think this will be good to go.
   
   @tkonolige Could you point out the change that I need to make? Thank you~






[GitHub] [tvm] Beya2019 commented on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


Beya2019 commented on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-861118213


   > Thanks @Beya2019 , can you add a regression testcase? Right now we do 
support passing in value arguments
   
   Already added the get_valid_counts in test_tvmscript_ops.py
   ```
   @tvm.script.tir
   def get_valid_counts(
   data: ty.handle,
   valid_count: ty.handle,
   out: ty.handle,
   out_indices: ty.handle,
   score_threshold: ty.float32,
   id_index: ty.int32,
   score_index: ty.int32,
   ) -> None:
   ```
   
   In this operator, the data type of score_threshold is ty.float32, which is the newly supported case.






[GitHub] [tvm] Beya2019 removed a comment on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


Beya2019 removed a comment on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-861116899










[GitHub] [tvm] Beya2019 commented on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


Beya2019 commented on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-861117169


   > > Thanks @Beya2019 , can you add a regression testcase? Right now we do 
support passing in value arguments
   
   






[GitHub] [tvm] zackcquic commented on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


zackcquic commented on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-861117078


   > Ah! I saw what confused me. It's the level of "Pass Registration" section.
   > The current _pass_infra.rst_ section-hierachy is
   > 
   > **Pass Infrastructure** (Topmost)
   > 
   > * The Design
   >   **The design of backend and frontend are described here.**
   >   
   >   * C++ Backend
   > 
   > * PassContext
   > * Pass Constructs
   > * Module-Level Passes
   > * Function-Level Passes
   > * Sequential Passes
   >   * Pass Registration   <-This section has the same level with 
Backend/Frontend.
   >   * Python Frontend
   > 
   > * PassContext
   > * Pass Objects
   > 
   > Now I add Pass Instrument as:
   > 
   > **Pass Infrastructure** (Topmost)
   > 
   > * The Design
   >   **The design of backend and frontend are described here.**
   >   
   >   * C++ Backend
   > 
   > * PassContext
   > * Pass Constructs
   > * Module-Level Passes
   > * Function-Level Passes
   > * Sequential Passes
   > * Pass Registration   <- May I fix this to have the same level 
with other sub-sections in C++ backend?
   > * Pass Instruments   <--- Added in this PR.
   > * Built-in Instrument   <--- Added in this PR.
   >   * Python Frontend
   > 
   > * PassContext
   > * Pass Objects
   > * Pass Instrument   <--- Added in this PR.
   > * Override Instruments in Current PassContext   <--- Added in this PR.
   > 
   > This might looks matching with descriptions in "The Design" section.
   > Or, could we isolate Pass Instrument, and have another topmost section as 
**Pass Infrastructure**?
   > May I know your thoughts @zackcquic @areusch ?
   > Thanks a lot!
   
   I just realized this PR puts everything under Pass Infrastructure. I had previously assumed this would be a separate top-level section, like Pass Infrastructure, with the sequence:
   1. Introduce what a pass instrument is
      1. Instrument points
   2. PassContext and multiple-instance details
   3. Examples
   
   But this PR's organization is OK with me. Let's wait for @areusch and @tkonolige to see if they have more comments.






[GitHub] [tvm] Beya2019 commented on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


Beya2019 commented on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-861116899


   > Thanks @Beya2019 , can you add a regression testcase? Right now we do 
support passing in value arguments
   
   






[GitHub] [tvm] Beya2019 closed pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


Beya2019 closed pull request #8235:
URL: https://github.com/apache/tvm/pull/8235


   






[GitHub] [tvm] zackcquic commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


zackcquic commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r651393990



##
File path: docs/dev/pass_infra.rst
##
@@ -526,16 +663,93 @@ decorators and then invoke it. For more examples about 
how to customize your own
 optimization pipeline and debug Relay and tir passes, please refer to the
 `use pass infra`_ tutorial.
 
+
+.. _pass_instrument_py_frontend:
+
+Pass Instrument
+^^^
+
+A customizable framework to instrument passes is provided. ``PassInstrument`` 
classes can be registered while constructing ``PassContext``.
+
+.. code:: python
+
+@tvm._ffi.register_object("transform.PassContext")
+class PassContext(tvm.runtime.Object):
+def __init__(
+self,
+opt_level=2,
+required_pass=None,
+disabled_pass=None,
+instruments=None,
+config=None,
+):
+# ...
+
+One can implement a ``PassInstrument`` by using the ``pass_instrument`` 
decorator(`python/tvm/ir/instrument.py`_) on a class implementing following 
methods:

Review comment:
   Nit: I think it maybe better to emphasize use decorator instead of 
subclassing. 

##
File path: docs/dev/pass_infra.rst
##
@@ -526,16 +663,93 @@ decorators and then invoke it. For more examples about 
how to customize your own
 optimization pipeline and debug Relay and tir passes, please refer to the
 `use pass infra`_ tutorial.
 
+
+.. _pass_instrument_py_frontend:
+
+Pass Instrument
+^^^
+
+A customizable framework to instrument passes is provided. ``PassInstrument`` 
classes can be registered while constructing ``PassContext``.
+
+.. code:: python
+
+@tvm._ffi.register_object("transform.PassContext")
+class PassContext(tvm.runtime.Object):
+def __init__(
+self,
+opt_level=2,
+required_pass=None,
+disabled_pass=None,
+instruments=None,
+config=None,
+):
+# ...
+
+One can implement a ``PassInstrument`` by using the ``pass_instrument`` 
decorator(`python/tvm/ir/instrument.py`_) on a class implementing following 
methods:

Review comment:
   Nit: Maybe it should be emphasized to use decorator, instead of 
overriding/subclassing.

##
File path: docs/dev/pass_infra.rst
##
@@ -389,6 +396,136 @@ To allow other C++ modules to apply this pass, we declare 
a free function in
 
 TVM_DLL Pass FoldConstant();
 
+.. _pass_instrument_cpp_backend:
+
+Pass Instrument
+^^^
+
+Currently we introduce four instrument point in the life-cycle of 
``PassContext``.

Review comment:
   instrument points
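   The instrument points described in the doc under review can be sketched without TVM. Class and method names mirror TVM's Python API (`enter_pass_ctx`, `exit_pass_ctx`, `run_before_pass`, `run_after_pass`); the `run_pass` harness is an invented stand-in for the real pass pipeline:

   ```python
   class PassInstrument:
       def enter_pass_ctx(self): pass
       def exit_pass_ctx(self): pass
       def run_before_pass(self, mod, info):
           return True  # returning False would skip the pass
       def run_after_pass(self, mod, info): pass

   class PassContext:
       def __init__(self, instruments=()):
           self.instruments = list(instruments)

       def __enter__(self):
           for i in self.instruments:
               i.enter_pass_ctx()
           return self

       def __exit__(self, *exc):
           for i in self.instruments:
               i.exit_pass_ctx()

       def run_pass(self, name, fn, mod):
           # Call every instrument around the pass body.
           for i in self.instruments:
               if not i.run_before_pass(mod, name):
                   return mod
           mod = fn(mod)
           for i in self.instruments:
               i.run_after_pass(mod, name)
           return mod

   class Tracer(PassInstrument):
       def __init__(self):
           self.events = []
       def enter_pass_ctx(self):
           self.events.append("enter_ctx")
       def exit_pass_ctx(self):
           self.events.append("exit_ctx")
       def run_before_pass(self, mod, info):
           self.events.append("before:" + info)
           return True
       def run_after_pass(self, mod, info):
           self.events.append("after:" + info)

   tracer = Tracer()
   with PassContext(instruments=[tracer]) as ctx:
       ctx.run_pass("FoldConstant", lambda m: m, None)
   print(tracer.events)
   # ['enter_ctx', 'before:FoldConstant', 'after:FoldConstant', 'exit_ctx']
   ```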








[GitHub] [tvm] gromero commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-14 Thread GitBox


gromero commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651373115



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):

Review comment:
   Just a semantic nit here: it seems it's more a "get" than a "set" 
function? Like `get_pass_config_value` (also taking into account the suggestion 
from @comaniac about using `pass-config` flag instead of `config`, which I 
liked :)

##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -42,6 +42,13 @@ def add_compile_parser(subparsers):
 
 parser = subparsers.add_parser("compile", help="compile a model.")
 parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--config",
+action="append",
+metavar=("name=value"),
+help="configurations to be used at compile time. A subset of options 
provided "
+"by TVM are supported. e.g. 'relay.backend.use_auto_scheduler=0'",

Review comment:
   I'm wondering if it would make sense to enhance the help message a bit 
more so users don't try to do something like:
   
   `--config="tir.disable_vectorize=true,tir.disable_assert=true"` instead of 
   
   `--config=tir.disable_vectorize=true  --config=tir.disable_assert=true"`, 
i.e. know easily that multiple `--config` flags can be used and will be 
appended.
   
   I also see duplicated and even conflicting flags don't generate any error or 
warning. Should we treat them too? Like:
   
   ```
   $ python3 . compile --target="llvm" --config "tir.disable_vectorize=true" 
--config "tir.disable_vectorize=false" --config "tir.disable_assert=true" 
./sine_model.tflite
   One or more operators have not been tuned. Please tune your model for better 
performance. Use DEBUG logging level to see more details.
   $
   ```
   and
   
   ```
   $ python3 . compile --target="llvm" --config "tir.disable_vectorize=true" 
--config "tir.disable_vectorize=true" --config "tir.disable_assert=true" 
./sine_model.tflite
   One or more operators have not been tuned. Please tune your model for better 
performance. Use DEBUG logging level to see more details.
   $
   ```
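   A simplified sketch of the `name=value` parsing discussed in this review, assuming later flags override earlier ones (matching the append semantics of repeated `--config` flags). The real tvmc also validates names against TVM's registered config options; here only the format is checked and values are coerced:

   ```python
   def parse_configs(entries):
       configs = {}
       for entry in entries:
           name, sep, value = entry.partition("=")
           if not sep or not name or not value:
               raise ValueError("expected name=value, got: %r" % entry)
           if value.lower() in ("true", "false"):
               configs[name] = value.lower() == "true"
           elif value.isdigit():
               configs[name] = int(value)
           else:
               configs[name] = value
       return configs

   print(parse_configs(["relay.backend.use_auto_scheduler=true",
                        "tir.detect_global_barrier=10"]))
   ```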

##
File path: tests/python/driver/tvmc/test_tvmc_common.py
##
@@ -306,3 +306,49 @@ def test_parse_quotes_and_separators_on_options():
 
 assert len(targets_double_quote) == 1
 assert "+v1.0x,+value" == targets_double_quote[0]["opts"]["option1"]
+
+
+def test_config_invalid_format():
+with pytest.raises(TVMCException):
+_ = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler.missing.value"])
+
+
+def test_config_missing_from_tvm():
+with pytest.raises(TVMCException):
+_ = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler.missing.value=1234"])
+
+
+def test_config_unsupported_tvmc_config():
+with pytest.raises(TVMCException):
+_ = tvmc.common.parse_configs(["tir.LoopPartition=value"])
+
+
+def test_config_empty():
+with pytest.raises(TVMCException):
+_ = tvmc.common.parse_configs([""])
+
+
+def test_config_valid_config_bool():
+configs = 
tvmc.common.parse_configs(["relay.backend.use_auto_scheduler=true"])
+
+assert len(configs) == 1
+assert "relay.backend.use_auto_scheduler" in configs.keys()
+assert configs["relay.backend.use_auto_scheduler"] == True
+
+
+def test_config_valid_multiple_configs():
+configs = tvmc.common.parse_configs(
+[
+"relay.backend.use_auto_scheduler=false",
+"tir.detect_global_barrier=10",
+"relay.ext.vitis_ai.options.build_dir=mystring",

Review comment:
   CI complains about it. I believe `relay.ext.vitis_ai.options.build_dir` is of 
neither `IntImm` nor `runtime.String` type?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mehrdadh commented on issue #8255: [microTVM] RPCSession Device Type Bug

2021-06-14 Thread GitBox


mehrdadh commented on issue #8255:
URL: https://github.com/apache/tvm/issues/8255#issuecomment-861098268


   Update:
   `device_type` should be converted before being passed to the `DeviceName` 
function. PR #8259 will fix this issue.






[GitHub] [tvm] mehrdadh commented on pull request #8259: [Graph Debug Executor] Fix device_type for profile command

2021-06-14 Thread GitBox


mehrdadh commented on pull request #8259:
URL: https://github.com/apache/tvm/pull/8259#issuecomment-861097892


   https://github.com/apache/tvm/issues/8255






[GitHub] [tvm] mehrdadh opened a new pull request #8259: [Graph Debug Executor] Fix device_type for profile command

2021-06-14 Thread GitBox


mehrdadh opened a new pull request #8259:
URL: https://github.com/apache/tvm/pull/8259


   This PR:
   - Fixes an issue with converting `device_type` to `device name` for the 
profile command
   - Adds a test in `test_crt` for the micro device to catch errors with the 
profile command
   
   cc @areusch 






[GitHub] [tvm] AndrewZhaoLuo edited a comment on pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


AndrewZhaoLuo edited a comment on pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#issuecomment-858045479










[GitHub] [tvm] mehrdadh opened a new issue #8258: [CI QEMU][Docker] pip access to tvm-env

2021-06-14 Thread GitBox


mehrdadh opened a new issue #8258:
URL: https://github.com/apache/tvm/issues/8258


   Some people are using the CI docker image for development purposes, and we 
sometimes need to install extra packages to test something. I'm using 
`tlcpack/ci-qemu:v0.04` and I'm trying to install some packages for a specific 
model that I'm testing. My understanding is that the image is built in a way 
that any user who logs in has access similar to that of the user who built the 
image. However, when I log in to ci_qemu and try `pip3 install` or even `sudo 
pip3 install` I get this error:

   ```
   ERROR: Could not install packages due to an EnvironmentError: [Errno 13] 
Permission denied: '/opt/tvm-venv/lib/python3.6/site-packages/configparser.py'
   Consider using the `--user` option or check the permissions.
   ```
   
   The workaround that works for me is this:
   ```
   sudo su
   source /opt/tvm-venv/bin/activate
   pip3 install package
   exit
   ```
   
   I was wondering if anyone knows what the issue is with our image build or the 
docker/bash.sh script that I use to log in.
   cc @leandron 






[tvm] branch main updated: make simplify inference iterative (#8246)

2021-06-14 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 24c2f5c  make simplify inference iterative (#8246)
24c2f5c is described below

commit 24c2f5c1a893d7b1a42301a7ad671fbe6788fc94
Author: Matthew Brookhart 
AuthorDate: Mon Jun 14 17:28:48 2021 -0600

make simplify inference iterative (#8246)
---
 src/relay/transforms/simplify_inference.cc | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/src/relay/transforms/simplify_inference.cc 
b/src/relay/transforms/simplify_inference.cc
index 7e58766..846bc08 100644
--- a/src/relay/transforms/simplify_inference.cc
+++ b/src/relay/transforms/simplify_inference.cc
@@ -178,7 +178,7 @@ Expr L2NormToInferUnpack(const Attrs attrs, Expr data) {
   return Divide(data, sqrt);
 }
 
-class InferenceSimplifier : public ExprMutator {
+class InferenceSimplifier : public MixedModeMutator {
  public:
   InferenceSimplifier()
   : batch_norm_op_(Op::Get("nn.batch_norm")),
@@ -188,8 +188,7 @@ class InferenceSimplifier : public ExprMutator {
 group_norm_op_(Op::Get("nn.group_norm")),
 l2_norm_op_(Op::Get("nn.l2_normalize")) {}
 
-  Expr VisitExpr_(const TupleGetItemNode* n) final {
-Expr new_e = ExprMutator::VisitExpr_(n);
+  Expr Rewrite_(const TupleGetItemNode* n, const Expr& new_e) final {
const auto* new_n = new_e.as<TupleGetItemNode>();
 if (new_n->index != 0) {
   return new_e;
@@ -205,8 +204,7 @@ class InferenceSimplifier : public ExprMutator {
 return new_e;
   }
 
-  Expr VisitExpr_(const CallNode* n) {
-auto new_n = ExprMutator::VisitExpr_(n);
+  Expr Rewrite_(const CallNode* n, const Expr& new_n) {
 if (n->op == batch_norm_op_) {
ty_map_[new_n.as<CallNode>()->args[0]] = n->args[0]->checked_type();
 } else if (n->op == layer_norm_op_) {


[GitHub] [tvm] zhiics commented on pull request #8246: [Relay] make simplify inference iterative

2021-06-14 Thread GitBox


zhiics commented on pull request #8246:
URL: https://github.com/apache/tvm/pull/8246#issuecomment-861058340


   Thanks @mbrookhart 






[GitHub] [tvm] zhiics merged pull request #8246: [Relay] make simplify inference iterative

2021-06-14 Thread GitBox


zhiics merged pull request #8246:
URL: https://github.com/apache/tvm/pull/8246


   






[GitHub] [tvm] csullivan commented on pull request #8256: [OpenCL] Add check to only cast opaque handles to cl::BufferDescriptor at runtime

2021-06-14 Thread GitBox


csullivan commented on pull request #8256:
URL: https://github.com/apache/tvm/pull/8256#issuecomment-861041718


   cc @masahi 






[GitHub] [tvm] masahi commented on issue #7979: AlterOpLayout modifies input module inplace (and other issues)

2021-06-14 Thread GitBox


masahi commented on issue #7979:
URL: https://github.com/apache/tvm/issues/7979#issuecomment-861042082


   Other related issues
   * https://github.com/apache/tvm/pull/8143
   * 
https://discuss.tvm.apache.org/t/frontend-onnx-fail-to-compile-an-onnx-model-at-opt-level-3/9926






[GitHub] [tvm] csullivan opened a new pull request #8257: [OpenCL] Verify the number of parsed kernel sources against the expected number of kernel functions.

2021-06-14 Thread GitBox


csullivan opened a new pull request #8257:
URL: https://github.com/apache/tvm/pull/8257


   Improve this ICHECK to avoid false positives






[GitHub] [tvm] apeskov commented on pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#issuecomment-861041156


   @areusch I've tried to answer your comments. Could you please continue the 
review?
   
   Some of the code you previously pointed out for improvement was redesigned in 
the latest commits, especially the code that was borrowed from `cpp_rpc`. I 
hope the new version will not raise a lot of questions.






[GitHub] [tvm] csullivan opened a new pull request #8256: [OpenCL] Add check to only cast opaque handles to cl::BufferDescriptor at runtime

2021-06-14 Thread GitBox


csullivan opened a new pull request #8256:
URL: https://github.com/apache/tvm/pull/8256


   Previous casting of void_args was too aggressive and attempted to cast 
non-backend-allocated handles, such as kernel scalar arguments, to 
cl::BufferDescriptors. 






[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651307207



##
File path: apps/ios_rpc/tvmrpc/rpc_server.h
##
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rpc_server.h
+ * \brief RPC Server implementation.
+ */
+#ifndef TVM_APPS_IOS_RPC_SERVER_H_
+#define TVM_APPS_IOS_RPC_SERVER_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include "tvm/runtime/c_runtime_api.h"
+#include "runtime/rpc/rpc_endpoint.h"
+#include "runtime/rpc/rpc_socket_impl.h"
+#include "support/socket.h"
+#include "rpc_tracker_client.h"
+
+namespace tvm {
+namespace runtime {
+
+std::vector<std::string> ListDir(const std::string& dirname) {

Review comment:
   It was a copy-paste from the `cpp_rpc` app. It returns a vector of paths to 
all files placed in the given directory. It was used to clean the temp folder 
without removing the folder itself.
   
   I've decided to remove this part of the functionality from this PR.




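   For reference, the described behavior — list a directory's files and delete 
them while keeping the directory itself — corresponds to this standalone Python 
sketch (the names `list_dir`/`clean_dir` are illustrative, not TVM code):

```python
import os
import tempfile

def list_dir(dirname):
    """Return full paths of the entries in `dirname`.

    os.listdir already omits '.' and '..', unlike raw readdir().
    """
    return [os.path.join(dirname, name) for name in os.listdir(dirname)]

def clean_dir(dirname):
    """Remove every file in `dirname` but keep the directory itself."""
    for path in list_dir(dirname):
        try:
            os.remove(path)
        except OSError:
            print("Remove file %s failed" % path)

# Demonstrate on a throwaway temp directory.
workdir = tempfile.mkdtemp()
for name in ("a.bin", "b.bin"):
    open(os.path.join(workdir, name), "w").close()

before = len(list_dir(workdir))
clean_dir(workdir)
after = len(list_dir(workdir))
```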




[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651305686



##
File path: apps/ios_rpc/tvmrpc/TVMRuntime.mm
##
@@ -71,88 +73,19 @@ void LogMessageImpl(const std::string& file, int lineno, 
const std::string& mess
 namespace tvm {
 namespace runtime {
 
-class NSStreamChannel final : public RPCChannel {
- public:
-  explicit NSStreamChannel(NSOutputStream* stream) : stream_(stream) {}
-
-  size_t Send(const void* data, size_t size) final {
-ssize_t nbytes = [stream_ write:reinterpret_cast<const uint8_t*>(data) 
maxLength:size];
-if (nbytes < 0) {
-  NSLog(@"%@", [stream_ streamError].localizedDescription);
-  throw tvm::Error("Stream error");
-}
-return nbytes;
-  }
-
-  size_t Recv(void* data, size_t size) final {
-LOG(FATAL) << "Do not allow explicit receive for";
-return 0;
-  }
-
- private:
-  NSOutputStream* stream_;
-};
-
-FEventHandler CreateServerEventHandler(NSOutputStream* outputStream, 
std::string name,
-   std::string remote_key) {
-  std::unique_ptr<RPCChannel> ch(new NSStreamChannel(outputStream));
-  std::shared_ptr<RPCEndpoint> sess = RPCEndpoint::Create(std::move(ch), name, 
remote_key);
-  return [sess](const std::string& in_bytes, int flag) {
-return sess->ServerAsyncIOEventHandler(in_bytes, flag);
-  };
-}
-
-// Runtime environment
-struct RPCEnv {
- public:
-  RPCEnv() {
-NSString* path = NSTemporaryDirectory();
-base_ = [path UTF8String];
-if (base_[base_.length() - 1] != '/') {
-  base_ = base_ + '/';
-}
-  }
-  // Get Path.
-  std::string GetPath(const std::string& file_name) { return base_ + 
file_name; }
-
- private:
-  std::string base_;
-};
-
-void LaunchSyncServer() {
-  // only load dylib from frameworks.
-  NSBundle* bundle = [NSBundle mainBundle];
-  NSString* base = [bundle privateFrameworksPath];
-  NSString* path = [base stringByAppendingPathComponent:@"tvm/rpc_config.txt"];
-  std::string name = [path UTF8String];
-  std::ifstream fs(name, std::ios::in);
-  std::string url, key;
-  int port;
-  ICHECK(fs >> url >> port >> key) << "Invalid RPC config file " << name;
-  RPCConnect(url, port, "server:" + key, TVMArgs(nullptr, nullptr, 
0))->ServerLoop();
-}
-
 TVM_REGISTER_GLOBAL("tvm.rpc.server.workpath").set_body([](TVMArgs args, 
TVMRetValue* rv) {
-  static RPCEnv env;
-  *rv = env.GetPath(args[0]);
+  std::string name = args[0];
+  std::string base = [NSTemporaryDirectory() UTF8String];
+  *rv = base + "/" + name;
 });
 
 TVM_REGISTER_GLOBAL("tvm.rpc.server.load_module").set_body([](TVMArgs args, 
TVMRetValue* rv) {
   std::string name = args[0];
-  std::string fmt = GetFileFormat(name, "");

Review comment:
   The previous version of this patch was designed before the patch with the 
custom DSO loader was merged. But the current PR state keeps this line 
unchanged.








[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651304190



##
File path: apps/ios_rpc/tvmrpc/TVMRuntime.mm
##
@@ -22,36 +22,38 @@
  */
 #include "TVMRuntime.h"
 // Runtime API
-#include "../../../src/runtime/c_runtime_api.cc"
-#include "../../../src/runtime/cpu_device_api.cc"
-#include "../../../src/runtime/dso_library.cc"
-#include "../../../src/runtime/file_utils.cc"
-#include "../../../src/runtime/library_module.cc"
-#include "../../../src/runtime/metadata_module.cc"
-#include "../../../src/runtime/module.cc"
-#include "../../../src/runtime/ndarray.cc"
-#include "../../../src/runtime/object.cc"
-#include "../../../src/runtime/registry.cc"
-#include "../../../src/runtime/system_library.cc"
-#include "../../../src/runtime/thread_pool.cc"
-#include "../../../src/runtime/threading_backend.cc"
-#include "../../../src/runtime/workspace_pool.cc"
-
-// RPC server
-#include "../../../src/runtime/rpc/rpc_channel.cc"
-#include "../../../src/runtime/rpc/rpc_endpoint.cc"
-#include "../../../src/runtime/rpc/rpc_local_session.cc"
-#include "../../../src/runtime/rpc/rpc_module.cc"
-#include "../../../src/runtime/rpc/rpc_server_env.cc"
-#include "../../../src/runtime/rpc/rpc_session.cc"
-#include "../../../src/runtime/rpc/rpc_socket_impl.cc"
-// Graph executor
-#include "../../../src/runtime/graph_executor/graph_executor.cc"
-// Metal
-#include "../../../src/runtime/metal/metal_device_api.mm"
-#include "../../../src/runtime/metal/metal_module.mm"
-// CoreML
-#include "../../../src/runtime/contrib/coreml/coreml_runtime.mm"
+//#include "../../../src/runtime/c_runtime_api.cc"

Review comment:
   Yes, the Xcode project uses a prebuilt _tvm_runtime.dylib_ by specifying the 
custom build attribute `TVM_BUILD_DIR` (during a build as a part of CMake it 
will be set automatically). I've tried to explain the reason for this change in 
the PR description.  








[GitHub] [tvm] comaniac commented on a change in pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-14 Thread GitBox


comaniac commented on a change in pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#discussion_r651303466



##
File path: python/tvm/relay/frontend/tensorflow_ops.py
##
@@ -1157,11 +1154,18 @@ def _impl(inputs, attr, params, mod):
 new_shape_y = _op.concatenate(_op.Tuple(new_shape_y), axis=0)
 
 input_x = _op.reshape(input_x, newshape=new_shape_x)
-input_y = _op.reshape(input_y, newshape=new_shape_y)
+
+if np.prod(orig_shape_y) < np.prod(new_shape_y):
+input_y = _op.broadcast_to(input_y, new_shape_y)

Review comment:
   Agree. Please refer to the ONNX and PyTorch frontends to avoid explicit 
broadcasting. Now both the x86 and CUDA implementations of batch_matmul support 
implicit broadcasting, so simply `expand_dims(input_y)` to make it `(1, k, n)` 
would be sufficient.




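   To illustrate the shape reasoning (standalone sketch, not the frontend 
code): a rank-2 `input_y` of shape `(k, n)` only needs a leading unit axis — 
the `expand_dims` step — after which the batch dimensions broadcast implicitly 
under NumPy-style rules:

```python
def broadcast_batch_shape(shape_x, shape_y):
    """Compute the output shape for a batch_matmul-style op.

    shape_x is (b..., m, k) and shape_y is (b..., k, n); a rank-2
    shape_y is first promoted to (1, k, n) (the expand_dims step),
    then the batch dims are broadcast right-aligned.
    """
    if len(shape_y) == 2:
        shape_y = (1,) + tuple(shape_y)
    batch_x, batch_y = shape_x[:-2], shape_y[:-2]
    out = []
    for i in range(max(len(batch_x), len(batch_y))):
        dx = batch_x[-1 - i] if i < len(batch_x) else 1
        dy = batch_y[-1 - i] if i < len(batch_y) else 1
        if dx != dy and dx != 1 and dy != 1:
            raise ValueError("incompatible batch dims: %d vs %d" % (dx, dy))
        out.append(max(dx, dy))
    return tuple(reversed(out)) + (shape_x[-2], shape_y[-1])

# (4, 3, 5) x (5, 2): promote y to (1, 5, 2), broadcast batch dim to 4.
result_shape = broadcast_batch_shape((4, 3, 5), (5, 2))
```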




[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651296531



##
File path: apps/ios_rpc/tvmrpc/TVMRuntime.mm
##
@@ -71,88 +73,19 @@ void LogMessageImpl(const std::string& file, int lineno, 
const std::string& mess
 namespace tvm {
 namespace runtime {
 
-class NSStreamChannel final : public RPCChannel {

Review comment:
   Moved into "RPCServer.mm". It's now wrapped into a `PackedFunction` to meet 
the API requirements of `rpc.CreateEventDrivenServer`.








[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651295319



##
File path: apps/ios_rpc/tvmrpc/rpc_server.h
##
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rpc_server.h
+ * \brief RPC Server implementation.
+ */
+#ifndef TVM_APPS_IOS_RPC_SERVER_H_
+#define TVM_APPS_IOS_RPC_SERVER_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include "tvm/runtime/c_runtime_api.h"
+#include "runtime/rpc/rpc_endpoint.h"
+#include "runtime/rpc/rpc_socket_impl.h"
+#include "support/socket.h"
+#include "rpc_tracker_client.h"
+
+namespace tvm {
+namespace runtime {
+
+std::vector<std::string> ListDir(const std::string& dirname) {
+  std::vector<std::string> vec;
+  DIR* dp = opendir(dirname.c_str());
+  if (dp == nullptr) {
+int errsv = errno;
+LOG(FATAL) << "ListDir " << dirname << " error: " << strerror(errsv);
+  }
+  dirent* d;
+  while ((d = readdir(dp)) != nullptr) {
+std::string filename = d->d_name;
+if (filename != "." && filename != "..") {
+  std::string f = dirname;
+  if (f[f.length() - 1] != '/') {
+f += '/';
+  }
+  f += d->d_name;
+  vec.push_back(f);
+}
+  }
+  closedir(dp);
+  return vec;
+}
+
+/*!
+ * \brief CleanDir Removes the files from the directory
+ * \param dirname The name of the directory
+ */
+void CleanDir(const std::string& dirname) {
+  auto files = ListDir(dirname);
+  for (const auto& filename : files) {
+std::string file_path = dirname + "/";
+file_path += filename;
+const int ret = std::remove(filename.c_str());
+if (ret != 0) {
+  LOG(WARNING) << "Remove file " << filename << " failed";
+}
+  }
+}
+
+// Runtime environment
+struct RPCEnv {
+ public:
+  RPCEnv(const std::string& base) : base_(base) {}
+  // Get Path.
+  std::string GetPath(const std::string& file_name) { return base_ + 
file_name; }
+
+  void CleanUp() const {
+CleanDir(base_);
+  }
+ private:
+  std::string base_;
+};
+
+
+/*!
+ * \brief RPCServer RPC Server class.
+ * \param host The hostname of the server, Default=0.0.0.0
+ * \param port The port of the RPC, Default=9090
+ * \param port_end The end search port of the RPC, Default=9099
+ * \param tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
+ * \param key The key used to identify the device type in tracker. Default=""
+ * \param custom_addr Custom IP Address to Report to RPC Tracker. Default=""
+ */
+class RPCServer {
+ public:
+  /*!
+   * \brief Constructor.
+   */
+  RPCServer(std::string host, int port, int port_end, std::string 
tracker_addr, std::string key,
+std::string custom_addr, std::string work_dir)
+  : host_(std::move(host)),
+port_(port),
+my_port_(0),
+port_end_(port_end),
+tracker_addr_(std::move(tracker_addr)),
+key_(std::move(key)),
+custom_addr_(std::move(custom_addr)),
+work_dir_(std::move(work_dir)),
+tracker_(tracker_addr_, key_, custom_addr_) {}
+
+  /*!
+   * \brief Destructor.
+   */
+  ~RPCServer() {
+try {
+  // Free the resources
+  listen_sock_.Close();
+  tracker_.Close();
+} catch (...) {
+}
+  }
+
+  /*!
+   * \brief Start Creates the RPC listen process and execution.
+   */
+  void Start() {
+listen_sock_.Create();
+my_port_ = listen_sock_.TryBindHost(host_, port_, port_end_);
+LOG(INFO) << "bind to " << host_ << ":" << my_port_;
+listen_sock_.Listen(1);
+continue_processing = true;
+proc_ = std::future<void>(std::async(std::launch::async, 
&RPCServer::ListenLoopProc, this));
+  }
+  
+  void Stop() {
+continue_processing = false;
+tracker_.Close();
+  }
+
+  void setCompletionCallbacks(std::function<void()> callback_start, 
std::function<void()> callback_stop) {
+completion_callback_start_ = callback_start;
+completion_callback_stop_ = callback_stop;
+  }
+
+ private:
+  /*!
+   * \brief ListenLoopProc The listen process.
+   */
+  void ListenLoopProc() {
+
+while (continue_processing) {
+  support::TCPSocket conn;
+  support::SockAddr addr("0.0.0.0", 0);
+  std::string opts;
+  try {
+// step 1: setup tracker and report to 

[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651294848



##
File path: apps/ios_rpc/tvmrpc/rpc_server.h
##
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rpc_server.h
+ * \brief RPC Server implementation.
+ */
+#ifndef TVM_APPS_IOS_RPC_SERVER_H_
+#define TVM_APPS_IOS_RPC_SERVER_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include "tvm/runtime/c_runtime_api.h"
+#include "runtime/rpc/rpc_endpoint.h"
+#include "runtime/rpc/rpc_socket_impl.h"
+#include "support/socket.h"
+#include "rpc_tracker_client.h"
+
+namespace tvm {
+namespace runtime {
+
+std::vector<std::string> ListDir(const std::string& dirname) {
+  std::vector<std::string> vec;
+  DIR* dp = opendir(dirname.c_str());
+  if (dp == nullptr) {
+int errsv = errno;
+LOG(FATAL) << "ListDir " << dirname << " error: " << strerror(errsv);
+  }
+  dirent* d;
+  while ((d = readdir(dp)) != nullptr) {
+std::string filename = d->d_name;
+if (filename != "." && filename != "..") {
+  std::string f = dirname;
+  if (f[f.length() - 1] != '/') {
+f += '/';
+  }
+  f += d->d_name;
+  vec.push_back(f);
+}
+  }
+  closedir(dp);
+  return vec;
+}
+
+/*!
+ * \brief CleanDir Removes the files from the directory
+ * \param dirname The name of the directory
+ */
+void CleanDir(const std::string& dirname) {
+  auto files = ListDir(dirname);
+  for (const auto& filename : files) {
+std::string file_path = dirname + "/";
+file_path += filename;
+const int ret = std::remove(filename.c_str());
+if (ret != 0) {
+  LOG(WARNING) << "Remove file " << filename << " failed";
+}
+  }
+}
+
+// Runtime environment
+struct RPCEnv {
+ public:
+  RPCEnv(const std::string& base) : base_(base) {}
+  // Get Path.
+  std::string GetPath(const std::string& file_name) { return base_ + 
file_name; }
+
+  void CleanUp() const {
+CleanDir(base_);
+  }
+ private:
+  std::string base_;
+};
+
+
+/*!
+ * \brief RPCServer RPC Server class.
+ * \param host The hostname of the server, Default=0.0.0.0
+ * \param port The port of the RPC, Default=9090
+ * \param port_end The end search port of the RPC, Default=9099
+ * \param tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
+ * \param key The key used to identify the device type in tracker. Default=""
+ * \param custom_addr Custom IP Address to Report to RPC Tracker. Default=""
+ */
+class RPCServer {
+ public:
+  /*!
+   * \brief Constructor.
+   */
+  RPCServer(std::string host, int port, int port_end, std::string 
tracker_addr, std::string key,
+std::string custom_addr, std::string work_dir)
+  : host_(std::move(host)),
+port_(port),
+my_port_(0),
+port_end_(port_end),
+tracker_addr_(std::move(tracker_addr)),
+key_(std::move(key)),
+custom_addr_(std::move(custom_addr)),
+work_dir_(std::move(work_dir)),
+tracker_(tracker_addr_, key_, custom_addr_) {}
+
+  /*!
+   * \brief Destructor.
+   */
+  ~RPCServer() {
+try {
+  // Free the resources
+  listen_sock_.Close();
+  tracker_.Close();
+} catch (...) {
+}
+  }
+
+  /*!
+   * \brief Start Creates the RPC listen process and execution.
+   */
+  void Start() {
+listen_sock_.Create();
+my_port_ = listen_sock_.TryBindHost(host_, port_, port_end_);
+LOG(INFO) << "bind to " << host_ << ":" << my_port_;
+listen_sock_.Listen(1);
+continue_processing = true;
+proc_ = std::future<void>(std::async(std::launch::async, 
&RPCServer::ListenLoopProc, this));
+  }
+  
+  void Stop() {
+continue_processing = false;
+tracker_.Close();
+  }
+
+  void setCompletionCallbacks(std::function<void()> callback_start, 
std::function<void()> callback_stop) {
+completion_callback_start_ = callback_start;
+completion_callback_stop_ = callback_stop;
+  }
+
+ private:
+  /*!
+   * \brief ListenLoopProc The listen process.
+   */
+  void ListenLoopProc() {
+
+while (continue_processing) {
+  support::TCPSocket conn;
+  support::SockAddr addr("0.0.0.0", 0);
+  std::string opts;
+  try {
+// step 1: setup tracker and report to 

[GitHub] [tvm] apeskov commented on a change in pull request #7876: [iOS] Add tracker support into ios-rpc application

2021-06-14 Thread GitBox


apeskov commented on a change in pull request #7876:
URL: https://github.com/apache/tvm/pull/7876#discussion_r651293840



##
File path: apps/ios_rpc/tvmrpc/rpc_server.h
##
@@ -0,0 +1,318 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file rpc_server.h
+ * \brief RPC Server implementation.
+ */
+#ifndef TVM_APPS_IOS_RPC_SERVER_H_
+#define TVM_APPS_IOS_RPC_SERVER_H_
+
+#include 
+#include 
+#include 
+#include 
+
+#include "tvm/runtime/c_runtime_api.h"
+#include "runtime/rpc/rpc_endpoint.h"
+#include "runtime/rpc/rpc_socket_impl.h"
+#include "support/socket.h"
+#include "rpc_tracker_client.h"
+
+namespace tvm {
+namespace runtime {
+
+std::vector<std::string> ListDir(const std::string& dirname) {
+  std::vector<std::string> vec;
+  DIR* dp = opendir(dirname.c_str());
+  if (dp == nullptr) {
+int errsv = errno;
+LOG(FATAL) << "ListDir " << dirname << " error: " << strerror(errsv);
+  }
+  dirent* d;
+  while ((d = readdir(dp)) != nullptr) {
+std::string filename = d->d_name;
+if (filename != "." && filename != "..") {
+  std::string f = dirname;
+  if (f[f.length() - 1] != '/') {
+f += '/';
+  }
+  f += d->d_name;
+  vec.push_back(f);
+}
+  }
+  closedir(dp);
+  return vec;
+}
+
+/*!
+ * \brief CleanDir Removes the files from the directory
+ * \param dirname The name of the directory
+ */
+void CleanDir(const std::string& dirname) {
+  auto files = ListDir(dirname);
+  for (const auto& filename : files) {
+std::string file_path = dirname + "/";
+file_path += filename;
+const int ret = std::remove(filename.c_str());
+if (ret != 0) {
+  LOG(WARNING) << "Remove file " << filename << " failed";
+}
+  }
+}
+
+// Runtime environment
+struct RPCEnv {
+ public:
+  RPCEnv(const std::string& base) : base_(base) {}
+  // Get Path.
+  std::string GetPath(const std::string& file_name) { return base_ + 
file_name; }
+
+  void CleanUp() const {
+CleanDir(base_);
+  }
+ private:
+  std::string base_;
+};
+
+
+/*!
+ * \brief RPCServer RPC Server class.
+ * \param host The hostname of the server, Default=0.0.0.0
+ * \param port The port of the RPC, Default=9090
+ * \param port_end The end search port of the RPC, Default=9099
+ * \param tracker The address of RPC tracker in host:port format e.g. 
10.77.1.234:9190 Default=""
+ * \param key The key used to identify the device type in tracker. Default=""
+ * \param custom_addr Custom IP Address to Report to RPC Tracker. Default=""
+ */
+class RPCServer {
+ public:
+  /*!
+   * \brief Constructor.
+   */
+  RPCServer(std::string host, int port, int port_end, std::string 
tracker_addr, std::string key,
+std::string custom_addr, std::string work_dir)
+  : host_(std::move(host)),
+port_(port),
+my_port_(0),
+port_end_(port_end),
+tracker_addr_(std::move(tracker_addr)),
+key_(std::move(key)),
+custom_addr_(std::move(custom_addr)),
+work_dir_(std::move(work_dir)),
+tracker_(tracker_addr_, key_, custom_addr_) {}
+
+  /*!
+   * \brief Destructor.
+   */
+  ~RPCServer() {
+try {
+  // Free the resources
+  listen_sock_.Close();
+  tracker_.Close();
+} catch (...) {
+}
+  }
+
+  /*!
+   * \brief Start Creates the RPC listen process and execution.
+   */
+  void Start() {
+listen_sock_.Create();
+my_port_ = listen_sock_.TryBindHost(host_, port_, port_end_);
+LOG(INFO) << "bind to " << host_ << ":" << my_port_;
+listen_sock_.Listen(1);
+continue_processing = true;
+proc_ = std::future<void>(std::async(std::launch::async, &RPCServer::ListenLoopProc, this));
+  }
+  
+  void Stop() {
+continue_processing = false;
+tracker_.Close();
+  }
+
+  void setCompletionCallbacks(std::function<void()> callback_start, std::function<void()> callback_stop) {
+completion_callback_start_ = callback_start;
+completion_callback_stop_ = callback_stop;
+  }
+
+ private:
+  /*!
+   * \brief ListenLoopProc The listen process.
+   */
+  void ListenLoopProc() {
+
+while (continue_processing) {
+  support::TCPSocket conn;
+  support::SockAddr addr("0.0.0.0", 0);
+  std::string opts;
+  try {
+// step 1: setup tracker and report to 

[GitHub] [tvm] mehrdadh opened a new issue #8255: [microTVM] RPCSession Device Type Bug

2021-06-14 Thread GitBox


mehrdadh opened a new issue #8255:
URL: https://github.com/apache/tvm/issues/8255


   While running the graph executor debugger with a micro target (e.g. qemu_x86), the 
[device](https://github.com/apache/tvm/blob/1c251f50ee616507bdfd8866408e7acf9888cc3f/python/tvm/rpc/client.py#L75)
 function generates an invalid device_type.
   In this case we have:
   ```
   -> return dev
   (Pdb) self._tbl_index
   0
   (Pdb) base.RPC_SESS_MASK
   128
   (Pdb) encode
   128
   (Pdb) dev.device_type
   129
   ``` 
   I think `dev.device_type` is expected to be 1 since this is a `cpu` type. 
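   The encoding at play can be sketched as follows. The constants below mirror the pdb session above (`RPC_SESS_MASK = 128`, `cpu = 1`); this is an illustration of the arithmetic, not the exact `tvm.rpc` implementation:
   
   ```python
   # Sketch of how an RPC session tags device types with a per-session mask.
   RPC_SESS_MASK = 128  # base.RPC_SESS_MASK in the report above
   CPU = 1              # local device_type for "cpu"
   
   def encode_device_type(local_device_type, tbl_index):
       # encode = (tbl_index + 1) * RPC_SESS_MASK, then added to the local type
       encode = (tbl_index + 1) * RPC_SESS_MASK
       return local_device_type + encode
   
   def decode_device_type(remote_device_type):
       # Strip the session mask to recover the underlying local device type
       return remote_device_type % RPC_SESS_MASK
   
   # Reproduces the values in the report: tbl_index 0 gives encode 128,
   # and cpu(1) + 128 yields the observed device_type of 129.
   assert encode_device_type(CPU, 0) == 129
   assert decode_device_type(129) == CPU
   ```
   
   So 129 is what the masking scheme produces for a cpu device behind the first session; the question is whether callers should see the decoded value instead.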
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on a change in pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-14 Thread GitBox


mbrookhart commented on a change in pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#discussion_r651276848



##
File path: python/tvm/relay/frontend/tensorflow_ops.py
##
@@ -1157,11 +1154,18 @@ def _impl(inputs, attr, params, mod):
 new_shape_y = _op.concatenate(_op.Tuple(new_shape_y), axis=0)
 
 input_x = _op.reshape(input_x, newshape=new_shape_x)
-input_y = _op.reshape(input_y, newshape=new_shape_y)
+
+if np.prod(orig_shape_y) < np.prod(new_shape_y):
+input_y = _op.broadcast_to(input_y, new_shape_y)

Review comment:
   Broadcasting here could cause memory sizes to explode; would it be 
better to use implicit broadcasting in the batch_matmul kernel if possible? We 
do that in the ONNX importer, but converting back to ND can be a pain.
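   The memory concern can be illustrated with a small NumPy sketch (illustrative shapes, not taken from the PR; note NumPy's `broadcast_to` only returns a view, so a forced copy is used to mimic Relay's materializing `broadcast_to`):
   
   ```python
   import numpy as np
   
   x = np.ones((64, 128, 128), dtype="float32")  # batched operand
   y = np.ones((128, 128), dtype="float32")      # operand missing the batch dim
   
   # Explicit broadcast: materializes a full per-batch copy of y, as Relay's
   # broadcast_to would; memory for y grows by the batch factor.
   y_big = np.ascontiguousarray(np.broadcast_to(y, (64, 128, 128)))
   explicit = np.matmul(x, y_big)
   
   # Implicit broadcast: matmul's kernel broadcasts y without materializing it.
   implicit = np.matmul(x, y)
   
   assert y_big.nbytes == 64 * y.nbytes          # 64x the memory for y
   assert np.array_equal(explicit, implicit)     # same numerical result
   ```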








[GitHub] [tvm] rohanmukh commented on pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-14 Thread GitBox


rohanmukh commented on pull request #8251:
URL: https://github.com/apache/tvm/pull/8251#issuecomment-860924081


   @trevor-m @yongwww @comaniac @mbrookhart 






[GitHub] [tvm] kparzysz-quic commented on a change in pull request #8254: [Target] Allow 'true' and 'false' strings in conversions to integer

2021-06-14 Thread GitBox


kparzysz-quic commented on a change in pull request #8254:
URL: https://github.com/apache/tvm/pull/8254#discussion_r651194215



##
File path: src/target/target.cc
##
@@ -210,7 +210,14 @@ ObjectRef TargetInternal::ParseType(const std::string& str,
 // Parsing integer
 int v;
 if (!(is >> v)) {
-  throw Error(": Cannot parse into type \"Integer\" from string: " + str);
+  // Bool is a subclass of IntImm, so allow textual boolean values.

Review comment:
   Done.








[GitHub] [tvm] comaniac commented on a change in pull request #8254: [Target] Allow 'true' and 'false' strings in conversions to integer

2021-06-14 Thread GitBox


comaniac commented on a change in pull request #8254:
URL: https://github.com/apache/tvm/pull/8254#discussion_r651186137



##
File path: src/target/target.cc
##
@@ -210,7 +210,14 @@ ObjectRef TargetInternal::ParseType(const std::string& str,
 // Parsing integer
 int v;
 if (!(is >> v)) {
-  throw Error(": Cannot parse into type \"Integer\" from string: " + str);
+  // Bool is a subclass of IntImm, so allow textual boolean values.

Review comment:
   It would be better to use `to_lower` to canonicalize the string.








[GitHub] [tvm] AndrewZhaoLuo commented on a change in pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


AndrewZhaoLuo commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r651148846



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed precision for relay graphs. i.e. turn a graph into 
fp16 form.
+ */
+#include "to_mixed_precision.h"
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+auto h1 = std::hash<T1>()(pair.first);
+auto h2 = std::hash<T2>()(pair.second);
+
+// Use boost's combine_hash strategy
+return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// A function which maps CallNodes to their initial conversion color
+using ColorFunc = std::function;
+
+// A function which maps MIXED_PRECISION_ALWAYS CallNodes to wanted 
accumulation and output dtypes
+using OutputDtypeFunc = std::function;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+  const ColorFunc colorer;
+  const OutputDtypeFunc output_dtype_func;
+  const DataType mixed_precision_type;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs new_attrs = Attrs(call->attrs);
+if (new_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes (creating new tensors of type dtype)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsDType(attrs, accumulation_dtype);
+  }
+}
+
+return new_attrs;
+  }
+
+  template <typename T>
+  void ModifyAttrsOutputDType(const T* attrs, const DataType& 
accumulation_dtype) const {
+/*
+ Helper template to modify relevant attributes with out_dtype type.
+ These represent accumulation dtypes for some operations e.g.
+ conv2d might take in fp16 and give a fp32 result.
+ Attrs is const because we get it as a const.
+ */
+T* mutable_attrs = const_cast<T*>(attrs);
+
+DataType cur_type = (mutable_attrs->out_dtype);
+if (cur_type.is_float() || cur_type.is_void()) mutable_attrs->out_dtype = 
accumulation_dtype;
+  }
+
+  template <typename T>
+  void ModifyAttrsDType(const T* attrs, const DataType& accumulation_dtype) 
const {
+/*
+ Helper template to modify relevant attributes with dtype type.
+ This determines 

[GitHub] [tvm] gromero edited a comment on pull request #8055: apps: microtvm: Disable `CONFIG_FPU ` for Zephyr runtime

2021-06-14 Thread GitBox


gromero edited a comment on pull request #8055:
URL: https://github.com/apache/tvm/pull/8055#issuecomment-860863569


   @microbuilder @mehrdadh Hi folks, sorry for the delay on reviewing it.
   
   @mehrdadh thanks for the additional checks!
   
   I figured out why the link error pasted above happened on my local 
environment but not on the CI.
   
   It happens that the CI is using Zephyr SDK 0.12 whilst I was using Zephyr 
SDK 0.11 and the following Zephyr SDK fix is only in 0.12:
   
   ```
   commit 2c1077298b169c39a1badb4d4a4236a2f2eaf769
   Author: Daniel Leung 
   Date:   Thu Dec 10 12:27:08 2020 -0800
   
   x86_64: fix soft float for x86 32-bit
   
   This fixes soft-float build for x86 32-bit (-m32 -msoft-float)
   under x86_64-zephyr-elf multilib build. This now actually
   includes the soft float functions.
   
   Signed-off-by: Daniel Leung 
   ```
   
   hence the soft-float functions were not present in my environment, causing 
the linking errors when the patch in question is applied.
   
   That, on the other hand, exposed the fact that the `qemu_x86` .elf image was not 
really using `CONFIG_FPU=y`, which can be confirmed by looking for soft-float 
specific functions present (or absent) in the final .elf image when it's built 
with Zephyr SDK 0.12 (fixed version). Hence no soft-float functions are present 
in the `qemu_x86` image without the patch applied:
   
   ```
   $ objdump -t ./zephyr.elf | fgrep float
   $ 
   ```
   
   but are included (and used) in the .elf image when the patch is applied (because 
`CONFIG_FPU=y` is not kicking in):
   
   ```
   $ objdump -t ./zephyr.elf | fgrep float
    ldf *ABS*    floatunsidf.c
    ldf *ABS*    floatsidf.c
    ldf *ABS*    soft_float_stubs.c
   00103020 g F text007f .hidden __floatsidf
   00101dd0 g F text0084 .hidden __floatunsidf
   $
   $ objdump -t ./zephyr.elf | fgrep __subdf3
   001012b0 g F text0b1e .hidden __subdf3
   ```
   
   This seems expected because, although `CPU_HAS_FPU` is selected in Zephyr's 
`boards/x86/qemu_x86/Kconfig.board`, `CONFIG_FPU=y` is not set by default in 
Zephyr for that board. `CONFIG_FPU` depends on `CPU_HAS_FPU` being set, but 
setting `CPU_HAS_FPU` doesn't imply `CONFIG_FPU=y`, in my understanding.
   
   Thus one might think it was just a matter of enabling `CONFIG_FPU` per 
board, like:
   
   ```
   diff --git a/apps/microtvm/zephyr/host_driven/boards/qemu_x86.conf 
b/apps/microtvm/zephyr/host_driven/boards/qemu_x86.conf
   index f314f59a5..12c67367f 100644
   --- a/apps/microtvm/zephyr/host_driven/boards/qemu_x86.conf
   +++ b/apps/microtvm/zephyr/host_driven/boards/qemu_x86.conf
   @@ -23,3 +23,5 @@ CONFIG_TIMER_RANDOM_GENERATOR=y

# Default stack size is 1k, this is required for debug mode. 
CONFIG_MAIN_STACK_SIZE=1536
   +
   +CONFIG_FPU=y
   ```
   
   However, although the build will finish OK, the following error will be caught 
by the CI when testing against models that rely on floating-point operations:
   
   ```
    
test_byoc_utvm[host] 

   
   platform = 'host', west_cmd = 'west', skip_build = False, tvm_debug = False
   
   def test_byoc_utvm(platform, west_cmd, skip_build, tvm_debug):
   """This is a simple test case to check BYOC capabilities of uTVM"""
   model, zephyr_board = PLATFORMS[platform]
   build_config = {"skip_build": skip_build, "debug": tvm_debug}
   x = relay.var("x", shape=(10, 10))
   w0 = relay.var("w0", shape=(10, 10))
   w1 = relay.var("w1", shape=(10, 10))
   w2 = relay.var("w2", shape=(10, 10))
   w3 = relay.var("w3", shape=(10, 10))
   w4 = relay.var("w4", shape=(10, 10))
   w5 = relay.var("w5", shape=(10, 10))
   w6 = relay.var("w6", shape=(10, 10))
   w7 = relay.var("w7", shape=(10, 10))
   
   # C compiler
   z0 = relay.add(x, w0)
   p0 = relay.subtract(z0, w1)
   q0 = relay.multiply(p0, w2)
   
   z1 = relay.add(x, w3)
   p1 = relay.subtract(z1, w4)
   q1 = relay.multiply(p1, w5)
   
   # Other parts on TVM
   z2 = relay.add(x, w6)
   q2 = relay.subtract(z2, w7)
   
   r = relay.concatenate((q0, q1, q2), axis=0)
   f = relay.Function([x, w0, w1, w2, w3, w4, w5, w6, w7], r)
   mod = tvm.IRModule()
   ann = CcompilerAnnotator()
   mod["main"] = ann.visit(f)
   mod = tvm.relay.transform.PartitionGraph()(mod)
   mod = tvm.relay.transform.InferType()(mod)
   
   x_data = np.random.rand(10, 10).astype("float32")
   w_data = []
   for _ in range(8):
   w_data.append(np.random.rand(10, 

[GitHub] [tvm] kparzysz-quic opened a new pull request #8254: [Target] Allow 'true' and 'false' strings in conversions to integer

2021-06-14 Thread GitBox


kparzysz-quic opened a new pull request #8254:
URL: https://github.com/apache/tvm/pull/8254


   This will allow `Bool` attributes to take `true`/`false` values instead of 0 
and 1 only.







[tvm] branch main updated: Fix build break in android_rpc (#8252)

2021-06-14 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 1c251f5  Fix build break in android_rpc (#8252)
1c251f5 is described below

commit 1c251f50ee616507bdfd8866408e7acf9888cc3f
Author: Euntaik 
AuthorDate: Tue Jun 15 02:13:40 2021 +0900

Fix build break in android_rpc (#8252)
---
 apps/android_rpc/app/src/main/jni/tvm_runtime.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/apps/android_rpc/app/src/main/jni/tvm_runtime.h 
b/apps/android_rpc/app/src/main/jni/tvm_runtime.h
index c0bd707..1331e1a 100644
--- a/apps/android_rpc/app/src/main/jni/tvm_runtime.h
+++ b/apps/android_rpc/app/src/main/jni/tvm_runtime.h
@@ -62,6 +62,7 @@
 #ifdef TVM_OPENCL_RUNTIME
 #include "../src/runtime/opencl/opencl_device_api.cc"
 #include "../src/runtime/opencl/opencl_module.cc"
+#include "../src/runtime/opencl/texture_pool.cc"
 #include "../src/runtime/source_utils.cc"
 #endif
 


[GitHub] [tvm] tqchen commented on pull request #8252: Fix build break in android_rpc

2021-06-14 Thread GitBox


tqchen commented on pull request #8252:
URL: https://github.com/apache/tvm/pull/8252#issuecomment-860850573


   Thanks @euntaik 






[GitHub] [tvm] tqchen merged pull request #8252: Fix build break in android_rpc

2021-06-14 Thread GitBox


tqchen merged pull request #8252:
URL: https://github.com/apache/tvm/pull/8252


   






[GitHub] [tvm] comaniac commented on a change in pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-14 Thread GitBox


comaniac commented on a change in pull request #8253:
URL: https://github.com/apache/tvm/pull/8253#discussion_r651120543



##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value
+
+return parsed_value
+
+
+def parse_configs(input_configs):
+"""Parse configuration values set via command line.
+
+Parameters
+--
+input_configs: list of str
+list of configurations provided via command line.
+
+Returns
+---
+pass_context_configs: dict
+a dict containing key-value configs to be used in the PassContext.
+"""
+all_configs = tvm.ir.transform.PassContext.list_configs()
+supported_config_types = ("IntImm", "runtime.String")
+supported_configs = [
+name for name in all_configs.keys() if all_configs[name]["type"] in 
supported_config_types
+]
+pass_context_configs = {}
+
+if not input_configs:
+return {}

Review comment:
   Move this to the beginning of this function so that you don't need to 
process all available configs if users don't specify any.
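   For reference, a minimal standalone sketch of the "name=value" parsing flow under discussion (a hypothetical helper, not the tvmc code itself), with the empty-input early return moved to the top as suggested:
   
   ```python
   # Hypothetical stand-alone version of the parsing flow; unknown strings
   # fall back to string values rather than raising, unlike the PR code.
   BOOL_VALUES = {"true": True, "false": False}  # case-insensitive booleans
   
   def parse_configs(input_configs):
       if not input_configs:                 # early return before any other work
           return {}
       parsed = {}
       for item in input_configs:
           name, sep, value = item.partition("=")
           if not sep or not name or not value:
               raise ValueError(f"invalid configuration: {item!r}")
           if value.isdigit():
               parsed[name] = int(value)     # plain integer ("IntImm")
           elif value.lower() in BOOL_VALUES:
               parsed[name] = BOOL_VALUES[value.lower()]
           else:
               parsed[name] = value          # treat as a string value
       return parsed
   ```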

##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value = mapping_values.get(value.lower(), None)
+
+if parsed_value is None:
+raise TVMCException(f"Invalid value '{value}' for configuration 
'{name}'. ")
+
+if config_type == "runtime.String":
+parsed_value = value
+
+return parsed_value
+
+
+def parse_configs(input_configs):
+"""Parse configuration values set via command line.
+
+Parameters
+--
+input_configs: list of str
+list of configurations provided via command line.
+
+Returns
+---
+pass_context_configs: dict
+a dict containing key-value configs to be used in the PassContext.
+"""
+all_configs = tvm.ir.transform.PassContext.list_configs()
+supported_config_types = ("IntImm", "runtime.String")
+supported_configs = [
+name for name in all_configs.keys() if all_configs[name]["type"] in 
supported_config_types
+]
+pass_context_configs = {}
+
+if not input_configs:
+return {}
+
+for config in input_configs:
+if len(config) == 0:

Review comment:
   ```suggestion
   if not config:
   ```

##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -42,6 +42,13 @@ def add_compile_parser(subparsers):
 
 parser = subparsers.add_parser("compile", help="compile a model.")
 parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--config",

Review comment:
   Would `build-config` or `pass-config` be more intuitive?

##
File path: python/tvm/driver/tvmc/common.py
##
@@ -415,3 +415,86 @@ def parse_shape_string(inputs_string):
 shape_dict[name] = shape
 
 return shape_dict
+
+
+def set_config_value(name, value, config_type):
+"""Set a PassContext configuration value according to its value"""
+
+if config_type == "IntImm":
+# "Bool" configurations in the PassContext are recognized as
+# IntImm, so deal with this case here
+mapping_values = {
+"false": False,
+"true": True,
+}
+
+if value.isdigit():
+parsed_value = int(value)
+else:
+# if not an int, accept only values on the mapping table, case 
insensitive
+parsed_value 

[GitHub] [tvm] electriclilies commented on pull request #8110: Unify Python and C++ TIR lower API

2021-06-14 Thread GitBox


electriclilies commented on pull request #8110:
URL: https://github.com/apache/tvm/pull/8110#issuecomment-860838359


   Thanks @tqchen!






[GitHub] [tvm] comaniac commented on a change in pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


comaniac commented on a change in pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#discussion_r650300650



##
File path: src/relay/transforms/to_mixed_precision.cc
##
@@ -0,0 +1,356 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *
+ * \file to_mixed_precision.cc
+ * \brief Automatic mixed precision for relay graphs. i.e. turn a graph into 
fp16 form.
+ */
+#include "to_mixed_precision.h"
+
+#include 
+#include 
+#include 
+#include 
+
+#include 
+
+#include "pattern_utils.h"
+
+namespace tvm {
+namespace relay {
+
+// A callable which hashes std::pair
+struct pair_hash {
+  template <class T1, class T2>
+  std::size_t operator()(const std::pair<T1, T2>& pair) const {
+auto h1 = std::hash<T1>()(pair.first);
+auto h2 = std::hash<T2>()(pair.second);
+
+// Use boost's combine_hash strategy
+return h1 ^ (h1 + 0x9e3779b9 + (h2 << 6) + (h2 >> 2));
+  }
+};
+
+// A map of a parent node and a wanted dtype to existing nodes casted to the 
wanted dtype
+using CachedCastNodes = std::unordered_map<std::pair<const ExprNode*, DataType>, Expr, pair_hash>;
+
+// A function which maps CallNodes to their initial conversion color
+using ColorFunc = std::function;
+
+// A function which maps MIXED_PRECISION_ALWAYS CallNodes to wanted 
accumulation and output dtypes
+using OutputDtypeFunc = std::function;
+
+class MixedPrecisionPass : public MixedModeMutator {
+ private:
+  CachedCastNodes cast_nodes_cache;
+  const ColorFunc colorer;
+  const OutputDtypeFunc output_dtype_func;
+  const DataType mixed_precision_type;
+
+  Attrs GetNewAttrs(const CallNode* call, const DataType& accumulation_dtype) 
const {
+/* If the accumulation dtype is in the attributes make a copy and mutate 
the field. */
+Attrs new_attrs = Attrs(call->attrs);
+if (new_attrs.get() != nullptr) {
+  // TODO(AndrewZhaoLuo): Figure out a better way to do this
+  // modify output_dtype attributes (accumulation dtypes for ops)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = 
new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  } else if (auto attrs = new_attrs.as()) {
+ModifyAttrsOutputDType(attrs, accumulation_dtype);
+  }
+
+  // modify dtype attributes (creating new tensors of type dtype)
+  if (auto attrs = new_attrs.as()) {
+ModifyAttrsDType(attrs, accumulation_dtype);
+  }
+}
+
+return new_attrs;
+  }
+
+  template <class T>
+  void ModifyAttrsOutputDType(const T* attrs, const DataType& 
accumulation_dtype) const {
+/*
+ Helper template to modify relevant attributes with out_dtype type.
+ These represent accumulation dtypes for some operations e.g.
+ conv2d might take in fp16 and give a fp32 result.
+ Attrs is const because we get it as a const.
+ */
+T* mutable_attrs = const_cast<T*>(attrs);
+
+DataType cur_type = (mutable_attrs->out_dtype);
+if (cur_type.is_float() || cur_type.is_void()) mutable_attrs->out_dtype = 
accumulation_dtype;
+  }
+
+  template <class T>
+  void ModifyAttrsDType(const T* attrs, const DataType& accumulation_dtype) 
const {
+/*
+ Helper template to modify relevant attributes with dtype type.
+ This determines the 

[GitHub] [tvm] leandron opened a new pull request #8253: [tvmc] Add a --config option to `tvmc compile`

2021-06-14 Thread GitBox


leandron opened a new pull request #8253:
URL: https://github.com/apache/tvm/pull/8253


   [tvmc] Add a `--config` option to `tvmc compile`:
* Allow sending some configurations to the `PassContext` via the command line
* Add various validations to the new option with appropriate error messages
* Add unit testing
   
   cc @gromero @comaniac @manupa-arm 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on pull request #8235: [TVMSCRIPT] add more type support in script function parameter

2021-06-14 Thread GitBox


tkonolige commented on pull request #8235:
URL: https://github.com/apache/tvm/pull/8235#issuecomment-860778473


   Can you also add `int16`, `int64`, and `float64`?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbrookhart commented on pull request #8081: [Pass] Simplify consecutive casts in Relay

2021-06-14 Thread GitBox


mbrookhart commented on pull request #8081:
URL: https://github.com/apache/tvm/pull/8081#issuecomment-860777102


   @icemelon9 can you rebase? I'm not sure if this hit a flakey test or if 
there's an issue with mobilenet.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbaret commented on pull request #7925: Add a 'rolling_buffer' scheduling primitive

2021-06-14 Thread GitBox


mbaret commented on pull request #7925:
URL: https://github.com/apache/tvm/pull/7925#issuecomment-860773309


   ping @manupa-arm, could you please take another look at this patch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] chiwwang edited a comment on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


chiwwang edited a comment on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-860725987


   Ah! I saw what confused me. It's the level of "Pass Registration" section.
   The current _pass_infra.rst_ section hierarchy is
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 * Pass Registration   <-This section has the same level with 
Backend/Frontend.
 * Python Frontend
 - PassContext
 - Pass Objects
   
   Now I add Pass Instrument as:
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 - Pass Registration   <- May I fix this to have the same level 
with other sub-sections in C++ backend?
 - Pass Instruments   <--- Added in this PR.
 - Built-in Instrument   <--- Added in this PR.
 * Python Frontend
 - PassContext
 - Pass Objects
 - Pass Instrument   <--- Added in this PR.
 - Override Instruments in Current PassContext   <--- Added in this PR.
   
   This might look consistent with the descriptions in "The Design" section.
   Or, could we isolate Pass Instrument, and have another topmost section as 
**Pass Infrastructure**?
   May I know your thoughts @zackcquic @areusch ?
   Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] chiwwang edited a comment on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


chiwwang edited a comment on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-860725987


   Ah! I saw what confused me. It's the level of "Pass Registration" section.
   The current _pass_infra.rst_ section hierarchy is
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 * Pass Registration   <-This section has the same level with 
Backend/Frontend.
 * Python Frontend
 - PassContext
 - Pass Objects
   
   Now I add Pass Instrument as:
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 - Pass Registration   <- May I fix this to have the same level 
with other sub-sections in C++ backend?
 - Pass Instruments   <--- Added in this PR.
 - Built-in Instrument   <--- Added in this PR.
 * Python Frontend
 - PassContext
 - Pass Objects
 - Pass Instrument   <--- Added in this PR.
 - Override Instruments in Current PassContext   <--- Added in this PR.
   
   This might look consistent with the descriptions in "The Design" section.
   Or, could we isolate Pass Instrument and have another topmost section as 
**Pass Infrastructure**?
   May I know your thoughts @zackcquic @areusch ?
   Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] chiwwang commented on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


chiwwang commented on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-860725987


   Ah! I saw what confused me. It's the level of "Pass Registration" section.
   The current _pass_infra.rst_ section hierarchy is
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 * Pass Registration   <-This section has the same level with 
Backend/Frontend.
 * Python Frontend
 - PassContext
 - Pass Objects
   
   Now I add Pass Instrument as:
   
   **Pass Infrastructure** (Topmost)
   - The Design
   **The design of backend and frontend are described here.**
 *  C++ Backend
 - PassContext
 - Pass Constructs
 - Module-Level Passes
 - Function-Level Passes
 - Sequential Passes
 - Pass Registration   <- May I fix this to have the same level 
with other sub-sections in C++ backend?
 - Pass Instruments   <--- Added in this PR.
 - Built-in Instrument   <--- Added in this PR.
 * Python Frontend
 - PassContext
 - Pass Objects
 - Pass Instrument   <--- Added in this PR.
 - Override Instruments in Current PassContext   <--- Added in this PR.
   
   This might look consistent with the descriptions in "The Design" section.
   Or, could we isolate Pass Instrument and have another topmost section as 
**Pass Infrastructure**?
   May I know your thoughts @zackcquic @areusch ?
   Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] giuseros edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-14 Thread GitBox


giuseros edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-860715052


   Hi @areusch , 
   After some thinking we came up with a different solution. The way we are 
doing things now is the following:
   pass-a) Compose the `main_func` with `tvm_set_struct`, i.e., codegen ready
   pass-b) Storage rewrite modified to take care of the structs
   pass-c) `tvm::build`
   
   A possible alternative can be the following:
   pass-a2) Compose the `main_func` without `tvm_set_struct`. This means that 
the packed calls will receive raw pointers. This in turn means that the 
`main_func` in TIR is not ready to be code generated
   pass-b2) Storage rewrite without any change. This is possible now since we 
are not using `tvm_set_struct` yet
   pass-c2) transform the packed calls inputs by using `tvm_set_struct`. We can 
actually avoid this pass if we are using unpacked signatures. 
   pass-d2) `tvm::build`
   
   With pipeline-2 we can leave `StorageRewrite` unchanged and still address 
the issues that `tvm_set_struct` gives us. What do you think?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] giuseros edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-14 Thread GitBox


giuseros edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-860715052


   Hi @areusch , 
   After some thinking we came up with a different solution. The way we are 
doing things now is the following:
   pass-a) Compose the `main_func` with `tvm_set_struct`, i.e., codegen ready
   pass-b) Storage rewrite modified to take care of the structs
   pass-c) `tvm::build`
   
   A possible alternative can be the following:
   pass-a2) Compose the `main_func` without `tvm_set_struct`. This means that 
the packed calls will receive raw pointers. This in turn means that the 
`main_func` in TIR is not ready to be code generated
   pass-b2) Storage rewrite without any change. This is possible now since we 
are not using `tvm_set_struct` yet
   pass-c2) transform the packed calls inputs by using `tvm_set_struct`. We can 
actually avoid this pass if we are using unpacked signatures. 
   pass-d2) `tvm::build`
   
   With pipeline-2 we can leave `StorageRewrite` unchanged and still avoid 
the issues that `tvm_set_struct` gives us. What do you think?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] giuseros edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-14 Thread GitBox


giuseros edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-860715052


   Hi @areusch , 
   After some thinking we came up with a different solution. The way we are 
doing things now is the following:
   pass-a) Compose the `main_func` with `tvm_set_struct`, i.e., codegen ready
   pass-b) Storage rewrite modified to take care of the structs
   pass-c) `tvm::build`
   
   A possible alternative can be the following:
   pass-a2) Compose the `main_func` without `tvm_set_struct`. This means that 
the packed calls will receive raw pointers. This in turn means that the 
`main_func` in TIR is not ready to be code generated
   pass-b2) Storage rewrite without any change. This is possible now since we 
are not using `tvm_set_struct` yet
   pass-c2) transform the packed calls inputs by using `tvm_set_struct`. We can 
actually avoid this pass if we are using unpacked signatures. 
   pass-d2) `tvm::build`
   
   With pipeline-2 we can leave `StorageRewrite` unchanged and still avoid 
the issues that `tvm_set_struct` gives us. What do you think?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] giuseros edited a comment on pull request #8096: Decoupling AOT from graph memory planner

2021-06-14 Thread GitBox


giuseros edited a comment on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-860715052


   Hi @areusch , 
   After some thinking we came up with a different solution. The way we are 
doing things now is the following:
   pass-a) Compose the `main_func` with `tvm_set_struct`, i.e., codegen ready
   pass-b) Storage rewrite modified to take care of the structs
   pass-c) tvm::build
   
   A possible alternative can be the following:
   pass-a2) Compose the `main_func` without `tvm_set_struct`. This means that 
the packed calls will receive raw pointers. This in turn means that the 
`main_func` in TIR is not ready to be code generated
   pass-b2) Storage rewrite without any change. This is possible now since we 
are not using `tvm_set_struct` yet
   pass-c2) transform the packed calls inputs by using `tvm_set_struct`. We can 
actually avoid this pass if we are using unpacked signatures. 
   
   With pipeline-2 we can leave `StorageRewrite` unchanged and still avoid 
the issues that `tvm_set_struct` gives us. What do you think?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] giuseros commented on pull request #8096: Decoupling AOT from graph memory planner

2021-06-14 Thread GitBox


giuseros commented on pull request #8096:
URL: https://github.com/apache/tvm/pull/8096#issuecomment-860715052


   Hi @areusch , 
   After some thinking we came up with a different solution. The way we are 
doing things now is the following:
   pass-a) Compose the `main_func` with `tvm_set_struct`, i.e., codegen ready
   pass-b) Storage rewrite modified to take care of the structs
   pass-c) tvm::build
   A possible alternative can be the following:
   pass-a2) Compose the `main_func` without `tvm_set_struct`. This means that 
the packed calls will receive raw pointers. This in turn means that the 
`main_func` in TIR is not ready to be code generated
   pass-b2) Storage rewrite without any change. This is possible now since we 
are not using `tvm_set_struct` yet
   pass-c2) transform the packed calls inputs by using `tvm_set_struct`. We can 
actually avoid this pass if we are using unpacked signatures. 
   
   With pipeline-2 we can leave `StorageRewrite` unchanged and still avoid 
the issues that `tvm_set_struct` gives us. What do you think?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] chiwwang commented on a change in pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


chiwwang commented on a change in pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#discussion_r650802362



##
File path: docs/dev/pass_infra.rst
##
@@ -389,6 +397,51 @@ To allow other C++ modules to apply this pass, we declare 
a free function in
 
 TVM_DLL Pass FoldConstant();
 
+Pass Instrument
+~~~
+
+To instrument passes, four methods are introduced to ``PassContext``.
+
+.. code:: c++
+
+TVM_DLL void InstrumentEnterPassContext();
+TVM_DLL void InstrumentExitPassContext();
+TVM_DLL bool InstrumentBeforePass(const IRModule& mod, const PassInfo& 
info) const;
+TVM_DLL void InstrumentAfterPass(const IRModule& mod, const PassInfo& 
info) const;
+
+The first two methods are called on entering/exiting the context scope,
+respectively. The latter two are called while a pass is being applied
+(`src/ir/transform.cc`_).
+
+Note that ``InstrumentBeforePass()`` returns a boolean indicating whether this
+pass should be run.
+
+``PassInstrument`` provides callbacks run by these methods. Multiple
+``PassInstrument`` instances can be registered into a single ``PassContext``.
+They are called sequentially in the order of the ``instruments`` member.
+

Review comment:
   It's hard to introduce the Python frontend first because pass_infra.rst puts 
a big section called "Python Frontend" toward the end.
   All Python things are put there. If I don't follow this, pass_infra.rst 
becomes fragmented.
   (Although I agree it is easier to introduce call sequences and concepts 
with Python code...)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] d-smirnov edited a comment on pull request #8169: [BYOC] [ACL] Migrated to v21.05

2021-06-14 Thread GitBox


d-smirnov edited a comment on pull request #8169:
URL: https://github.com/apache/tvm/pull/8169#issuecomment-860551450


   Closed as superseded with https://github.com/apache/tvm/pull/8245


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] d-smirnov commented on pull request #8169: [BYOC] [ACL] Migrated to v21.05

2021-06-14 Thread GitBox


d-smirnov commented on pull request #8169:
URL: https://github.com/apache/tvm/pull/8169#issuecomment-860551450


   https://github.com/apache/tvm/pull/8245


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] d-smirnov closed pull request #8169: [BYOC] [ACL] Migrated to v21.05

2021-06-14 Thread GitBox


d-smirnov closed pull request #8169:
URL: https://github.com/apache/tvm/pull/8169


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] leandron commented on pull request #8245: [CI] [ComputeLibrary] Use pre-built binaries instead of compiled

2021-06-14 Thread GitBox


leandron commented on pull request #8245:
URL: https://github.com/apache/tvm/pull/8245#issuecomment-860531571


   This is merged now, thanks @d-smirnov!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated (f4b95ab -> 90fb626)

2021-06-14 Thread leandron
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from f4b95ab  Move Micro TVM top level page (#8249)
 add 90fb626  [CI] [ComputeLibrary] Use pre-built binaries instead of 
compiled (#8245)

No new revisions were added by this update.

Summary of changes:
 docker/Dockerfile.ci_cpu   |  6 +-
 ...=> ubuntu_download_arm_compute_lib_binaries.sh} | 74 +-
 2 files changed, 32 insertions(+), 48 deletions(-)
 rename docker/install/{ubuntu_install_arm_compute_lib.sh => 
ubuntu_download_arm_compute_lib_binaries.sh} (53%)


[GitHub] [tvm] leandron merged pull request #8245: [CI] [ComputeLibrary] Use pre-built binaries instead of compiled

2021-06-14 Thread GitBox


leandron merged pull request #8245:
URL: https://github.com/apache/tvm/pull/8245


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] rafzi commented on pull request #8134: memory planning: add offset to planning output and respect it in graph executor

2021-06-14 Thread GitBox


rafzi commented on pull request #8134:
URL: https://github.com/apache/tvm/pull/8134#issuecomment-860110757


   It seems like the long-term plans of TVM conflict with this approach, 
in that memory planning should happen in TIR.
   
   Is this something that is useful to TVM right now? Should I continue work on 
this or drop it in favor of the upcoming approach?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] echuraev commented on pull request #8069: [Relay] [Pass] Add FP16 model conversion pass

2021-06-14 Thread GitBox


echuraev commented on pull request #8069:
URL: https://github.com/apache/tvm/pull/8069#issuecomment-860009739


   > So you've tested only on LLVM? Does this work on `metal` target? Not sure 
if our metal backend supports fp16 or if M1 GPU is good at fp16 in general 
@echuraev
   
   The Metal backend supports fp16. And as far as I know, @elvin-n has run fp16 
models with our Metal backend and collected some performance metrics. I think 
he'll add some information about it. 
   
   As for M1, we haven't tried running fp16 models on Metal on M1 yet. 
Theoretically, it should work, but we should check it.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi merged pull request #8247: doc: fixes to dataflow_pattern

2021-06-14 Thread GitBox


masahi merged pull request #8247:
URL: https://github.com/apache/tvm/pull/8247


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] d-smirnov commented on pull request #8245: [CI] [ComputeLibrary] Use pre-built binaries instead of compiled

2021-06-14 Thread GitBox


d-smirnov commented on pull request #8245:
URL: https://github.com/apache/tvm/pull/8245#issuecomment-860165489


   @leandron 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] zhiics commented on pull request #8246: [Relay] make simplify inference iterative

2021-06-14 Thread GitBox


zhiics commented on pull request #8246:
URL: https://github.com/apache/tvm/pull/8246#issuecomment-860010195


   plz fix ci


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #8249: Move Micro TVM top level page

2021-06-14 Thread GitBox


tqchen commented on pull request #8249:
URL: https://github.com/apache/tvm/pull/8249#issuecomment-860057021


   Given uTVM is an important domain that could warrant different kinds of 
treatment, I think it is a good idea to keep a landing page at the 
top level (perhaps under a Get Started section). Let us consider reverting to 
the previous state and then think about a new overall structure.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch merged pull request #8249: Move Micro TVM top level page

2021-06-14 Thread GitBox


areusch merged pull request #8249:
URL: https://github.com/apache/tvm/pull/8249


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on pull request #8249: Move Micro TVM top level page

2021-06-14 Thread GitBox


merrymercy commented on pull request #8249:
URL: https://github.com/apache/tvm/pull/8249#issuecomment-860001883


   I moved it to MISC because I think microTVM is similar to VTA, which is 
placed under the MISC section.
   microTVM does not fit into the "How to" section very well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #8134: memory planning: add offset to planning output and respect it in graph executor

2021-06-14 Thread GitBox


areusch commented on pull request #8134:
URL: https://github.com/apache/tvm/pull/8134#issuecomment-860312504


   @rafzi apologies for the delay in reviewing this one. i'm not sure there is 
broad alignment yet on the way we intend to do full-graph memory planning in 
TVM. and, even when we do come to agreement on a model for memory (which I 
think may look similar to the one you're working towards here), we still need 
to implement support for it in both Graph and AOT executors. Also, the Graph 
executor is invoking TIR PrimFunc, so it's likely something similar to this PR 
will be useful. My thinking is that what you have here is fairly close and 
we'll just need to rename fields or add additional e.g. `pool_id` to give more 
context to the offset.
   
   So I'm not convinced we should drop this PR; however, before proceeding, I'd 
like to get everyone aligned around a single memory planning proposal. There 
are a couple of theoretically orthogonal pieces of such a proposal as well: a) 
the interface between the TVM graph and the memory planner; b) the algorithm(s) 
used in planning; c) the interface between TVM and the executors. At present 
there are two suggestions for (a) a TIR-level interface and a Relay-level 
planner. I think the TIR-based planner offers more flexibility but the Relay 
one is easier to implement to (e.g. it's nearly complete in the tree today).
   
   Would you be interested in reviewing the TIR-level interface proposed in the 
[USMP](https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099)
 RFC? It would be great to get your thoughts whether it's possible to implement 
the algorithms you've proposed using that interface as well.
   
   Given there is some interest from the community in doing whole-program TIR 
optimization, plus the AOT top-level function is in TIR, it may be slightly 
more impactful to adopt that interface. However, I'd like to understand whether 
that precludes including the algorithms you've proposed 
[here](https://discuss.tvm.apache.org/t/discussion-alignment-memory-planning/9730).
 Finally, this PR could serve as a basis to implement the Graph executor 
changes required to support (c).
   
   Let me know your thoughts!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen merged pull request #8110: Unify Python and C++ TIR lower API

2021-06-14 Thread GitBox


tqchen merged pull request #8110:
URL: https://github.com/apache/tvm/pull/8110


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi merged pull request #8250: [TOPI][batch_matmul] Allow cblas batch_matmul implicit batch_size broadcast

2021-06-14 Thread GitBox


masahi merged pull request #8250:
URL: https://github.com/apache/tvm/pull/8250


   






[GitHub] [tvm] euntaik opened a new pull request #8252: Fix build break in android_rpc

2021-06-14 Thread GitBox


euntaik opened a new pull request #8252:
URL: https://github.com/apache/tvm/pull/8252


   Fix build break in android_rpc






[GitHub] [tvm] areusch commented on pull request #8249: Move Micro TVM top level page

2021-06-14 Thread GitBox


areusch commented on pull request #8249:
URL: https://github.com/apache/tvm/pull/8249#issuecomment-860068189


   Originally my intent was for the microTVM page to serve as an 
organizational landing page for topics relating to microTVM. I agree VTA has a 
similar structure. I don't think Misc is the right place for such pages, but I 
also agree that Getting Started doesn't entirely make sense either, given the 
way we structure the rest of the repo. I do think we could use more top-level 
pages in the documentation site to help organize the documentation, but we can 
contemplate that in a broader PR/RFC.






[GitHub] [tvm] tqchen commented on pull request #8200: [RUNTIME] ShapeTuple Container

2021-06-14 Thread GitBox


tqchen commented on pull request #8200:
URL: https://github.com/apache/tvm/pull/8200#issuecomment-860211675


   @ZihengJiang please follow up






[GitHub] [tvm] rohanmukh opened a new pull request #8251: [Frontend, Tensorflow] Support for broadcasting in batch_matmul when shapes differ

2021-06-14 Thread GitBox


rohanmukh opened a new pull request #8251:
URL: https://github.com/apache/tvm/pull/8251


   The current implementation of `batch_matmul` in the TF frontend is not able 
to handle cases where the shape of the second input differs from the first and 
a broadcast is needed to complete the operation. Also, the current logic 
always assumes that the shape of the second input `shape_y` has rank of at 
least 3. This is not the case in some TF2 models like 
[efficientdet](https://tfhub.dev/tensorflow/efficientdet/d0/1). This PR 
handles these use cases.
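   As a rough illustration of the shape logic described above (plain Python, 
not the actual frontend code; the function name and error handling are 
invented for the example):

```python
# Illustrative sketch (not the actual TF-frontend code) of the shape
# logic the PR describes: broadcast the batch dimensions of the two
# batch_matmul inputs, allowing the second input to be rank 2.

def batch_matmul_out_shape(shape_x, shape_y):
    """shape_x: (..., M, K); shape_y: (..., K, N) or just (K, N)."""
    if len(shape_y) == 2:
        # Promote a rank-2 second input to a single broadcastable batch.
        shape_y = (1,) + tuple(shape_y)
    bx, by = tuple(shape_x[:-2]), tuple(shape_y[:-2])
    # Right-align the batch dims and broadcast them, NumPy-style.
    n = max(len(bx), len(by))
    bx = (1,) * (n - len(bx)) + bx
    by = (1,) * (n - len(by)) + by
    batch = []
    for dx, dy in zip(bx, by):
        if dx != dy and 1 not in (dx, dy):
            raise ValueError(f"cannot broadcast batch dims {dx} vs {dy}")
        batch.append(max(dx, dy))
    if shape_x[-1] != shape_y[-2]:
        raise ValueError("reduction dimensions must match")
    return tuple(batch) + (shape_x[-2], shape_y[-1])

# Rank-2 second input, the case the previous logic could not handle.
print(batch_matmul_out_shape((8, 12, 64, 32), (32, 16)))  # -> (8, 12, 64, 16)
```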
   






[GitHub] [tvm] tqchen commented on pull request #8110: Unify Python and C++ TIR lower API

2021-06-14 Thread GitBox


tqchen commented on pull request #8110:
URL: https://github.com/apache/tvm/pull/8110#issuecomment-860057276


   Thanks @CircleSpin and @electriclilies ! Thanks @Hzfengsy @tkonolige 
@csullivan @YuchenJin @manupa-arm for reviewing






[GitHub] [tvm] mbaret commented on pull request #7858: [ETHOSN] Removed support for 20.08 version of the driver stack.

2021-06-14 Thread GitBox


mbaret commented on pull request #7858:
URL: https://github.com/apache/tvm/pull/7858#issuecomment-859527557


   This is now merged. Thanks @tristan-arm






[GitHub] [tvm] chiwwang edited a comment on pull request #8220: [DOCS] Add docs for Pass Instrument

2021-06-14 Thread GitBox


chiwwang edited a comment on pull request #8220:
URL: https://github.com/apache/tvm/pull/8220#issuecomment-859617614


   Thanks for the prompt feedback @zackcquic @tkonolige!
   
   Here are some comments on Zack's questions:
   1. What happens when exceptions occur at different instrumentation points?
   Added in pass_infra.txt, but it is a little long. You might want to take a 
look again.
   
   2. Standard Instrument section: PassTimingInstrument, PrintBefore (TODO), 
PrintAfter (TODO), ...
   I think it might be better to maintain these in the docstrings of the 
related Python classes/functions, so I added an example to instrument.py. It 
will be shown in the Python API reference.
   
   3. Global PassContext and override_instrument examples
   Done. Sorry for not being aware of this approach.
   
   4. use_pass_infra.py's comments should be updated (sorry, I forgot to 
update them).
   Done.
   
   5. conf.py should be updated.
   Done. But it actually seems to automatically append unlisted tutorials to 
the end. What do you think about the current order of the tutorials?
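   Regarding point 1, the control flow around instrumentation points can be 
illustrated with a plain-Python sketch. This is not TVM's actual 
implementation; the class and method names only loosely mirror the pass-infra 
docs, and the skip-pass behaviour is simplified.

```python
# Plain-Python sketch of an instrumented pass pipeline (illustrative,
# not TVM's implementation).  It shows the exception behaviour the docs
# discuss: every context actually entered is exited again, in reverse
# order, even if a pass or a hook raises.

class Instrument:
    def enter_pass_ctx(self): pass
    def exit_pass_ctx(self): pass
    def run_before_pass(self, name): return True   # False would skip the pass
    def run_after_pass(self, name): pass

def run_passes(passes, instruments):
    entered = []
    try:
        for inst in instruments:
            inst.enter_pass_ctx()
            entered.append(inst)
        results = []
        for name, fn in passes:
            if all(i.run_before_pass(name) for i in instruments):
                results.append(fn())
                for i in instruments:
                    i.run_after_pass(name)
        return results
    finally:
        # Tear down in reverse order, only for contexts actually entered.
        for inst in reversed(entered):
            inst.exit_pass_ctx()

class Recorder(Instrument):
    def __init__(self):
        self.seen = []
    def run_after_pass(self, name):
        self.seen.append(name)

rec = Recorder()
out = run_passes([("double", lambda: 2), ("triple", lambda: 3)], [rec])
```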





