[GitHub] [tvm] junrushao1994 commented on pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#issuecomment-900037937


   The PR looks good to me otherwise


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r690080401



##
File path: src/tir/transforms/unify_thread_binding.cc
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file unify_thread_binding.cc
+ */
+
+#include 
+#include 
+#include 
+
+#include "ir_utils.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ * \brief A mutator which searches AttrStmts of thread bindings and changes 
the `node` field IterVar
+ * of the AttrStmts, so that for one kind of thread binding (except 
"vthread"), all such thread
+ * bindings use the same IterVar
+ */
+class ThreadBindingUnifier : public StmtExprMutator {
+ public:
+  static Stmt Unify(const PrimFunc& f) { return ThreadBindingUnifier().VisitStmt(f->body); }

Review comment:
   ```suggestion
  static Stmt Unify(Stmt stmt) { return ThreadBindingUnifier().VisitStmt(std::move(stmt)); }
   ```








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r690080222



##
File path: src/tir/transforms/unify_thread_binding.cc
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file unify_thread_binding.cc
+ */
+
+#include 
+#include 
+#include 
+
+#include "ir_utils.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ * \brief A mutator which searches AttrStmts of thread bindings and changes 
the `node` field IterVar
+ * of the AttrStmts, so that for one kind of thread binding (except 
"vthread"), all such thread
+ * bindings use the same IterVar
+ */
+class ThreadBindingUnifier : public StmtExprMutator {
+ public:
+  static Stmt Unify(const PrimFunc& f) { return ThreadBindingUnifier().VisitStmt(f->body); }
+
+ private:
+  Stmt VisitStmt_(const AttrStmtNode* attr) final {
+    // If this AttrStmt is not thread binding attribute, return as usual.
+    if (attr->attr_key != attr::thread_extent && attr->attr_key != attr::virtual_thread) {
+      return StmtMutator::VisitStmt_(attr);
+    }
+
+    // Step 1. Fetch the old IterVar.
+    IterVar old_iter_var = Downcast<IterVar>(attr->node);
+    IterVar new_iter_var;

Review comment:
   nit
   
   ```suggestion
   IterVar new_iter_var{nullptr};
   ```








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r690080073



##
File path: src/tir/transforms/unify_thread_binding.cc
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file unify_thread_binding.cc
+ */
+
+#include 
+#include 
+#include 
+
+#include "ir_utils.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ * \brief A mutator which searches AttrStmts of thread bindings and changes the `node` field
+ * IterVar of the AttrStmts, so that for one kind of thread binding (except "vthread"), all such
+ * thread bindings use the same IterVar
+ */
+class ThreadBindingUnifier : public StmtExprMutator {
+ public:
+  static Stmt Unify(const PrimFunc& f) { return ThreadBindingUnifier().VisitStmt(f->body); }
+
+ private:
+  Stmt VisitStmt_(const AttrStmtNode* attr) final {
+    // If this AttrStmt is not thread binding attribute, return as usual.
+    if (attr->attr_key != attr::thread_extent && attr->attr_key != attr::virtual_thread) {
+      return StmtMutator::VisitStmt_(attr);
+    }
+
+    // Step 1. Fetch the old IterVar.
+    IterVar old_iter_var = Downcast<IterVar>(attr->node);
+    IterVar new_iter_var;
+
+    // Step 2. See if an IterVar for this kind of thread binding was created before. If so, we use
+    // the created IterVar. Otherwise, we create a new IterVar for this thread binding and store
+    // the IterVar in mapping `thread_tag2iter_var_map_`.
+    Map<String, IterVar>::iterator it = thread_tag2iter_var_map_.find(old_iter_var->thread_tag);
+    if (it != thread_tag2iter_var_map_.end()) {
+      new_iter_var = (*it).second;
+      CHECK(ExprDeepEqual()(old_iter_var->dom->extent, (*it).second->dom->extent))
+          << "ValueError: All loops that are bound to `" << old_iter_var->thread_tag
+          << "` should have the same extent. However, there are two loops with extent "
+          << (*it).second->dom->extent << " and " << old_iter_var->dom->extent
+          << ", which are not equal";
+    } else {
+      ObjectPtr<IterVarNode> p_new_iter_var = make_object<IterVarNode>(*old_iter_var.get());
+      p_new_iter_var->var = Var(old_iter_var->thread_tag);
+      new_iter_var = IterVar(p_new_iter_var);
+      // We don't unify thread bindings of "vthread".
+      if (old_iter_var->thread_tag != "vthread") {
+        thread_tag2iter_var_map_.Set(old_iter_var->thread_tag, new_iter_var);
+      }
+    }
+
+    // Step 3. We will substitute the occurrences of the old variable in the old IterVar with the
+    // new variable in further mutation. Thus, we store the mapping entry.
+    var_substitution_map_.Set(old_iter_var->var, new_iter_var->var);
+
+    // Step 4. Mutate recursively, and update the AttrStmt with the new IterVar.
+    AttrStmt new_attr = Downcast<AttrStmt>(StmtMutator::VisitStmt_(attr));
+    ObjectPtr<AttrStmtNode> p_new_attr = CopyOnWrite(new_attr.get());
+    p_new_attr->node = new_iter_var;
+    return Stmt(p_new_attr);
+  }
+
+  PrimExpr VisitExpr_(const VarNode* var) final {
+    // If this variable appears as a key in `var_substitution_map_`, we substitute it with its
+    // corresponding value in the mapping.
+    Map<Var, Var>::iterator it = var_substitution_map_.find(GetRef<Var>(var));
+    return it != var_substitution_map_.end() ? (*it).second : GetRef<Var>(var);
+  }
+
+  /*!
+   * \brief A mapping from a thread tag to its corresponding IterVar that is shared by all
+   * occurrences of the thread tag
+   * */
+  Map<String, IterVar> thread_tag2iter_var_map_;
+  /*! \brief A mapping from old variables to new variables, which is used for substitution */
+  Map<Var, Var> var_substitution_map_;
+};
+
+PrimFunc UnifyThreadBinding(PrimFunc f) {
+  // Only apply this pass to TIR that is not from TE schedules
+  if (!IsFromLegacyTESchedule(f)) {
+    PrimFuncNode* fptr = f.CopyOnWrite();
+    fptr->body = ThreadBindingUnifier::Unify(f);

Review comment:
   Let's also tweak the signature of `ThreadBindingUnifier::Unify` here
   
   ```suggestion
   fptr->body = ThreadBindingUnifier::Unify(std::move(f->body));
   ```
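
   Put together with the earlier signature suggestion, the helper would read roughly like the sketch below. This is only a combined reading of the two review suggestions, with includes and `IsFromLegacyTESchedule` assumed from the file above; it is not necessarily the merged code:

   ```cpp
   // Sketch only: combines the two review suggestions on this file.
   class ThreadBindingUnifier : public StmtExprMutator {
    public:
     // Take the body by value so the caller can move it in instead of copying.
     static Stmt Unify(Stmt stmt) { return ThreadBindingUnifier().VisitStmt(std::move(stmt)); }
     // ... visitor overrides as in the diff above ...
   };

   PrimFunc UnifyThreadBinding(PrimFunc f) {
     // Only apply this pass to TIR that is not from TE schedules.
     if (!IsFromLegacyTESchedule(f)) {
       PrimFuncNode* fptr = f.CopyOnWrite();
       // Moving through the CopyOnWrite pointer keeps the body handle mutable.
       fptr->body = ThreadBindingUnifier::Unify(std::move(fptr->body));
     }
     return f;
   }
   ```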





[GitHub] [tvm] MasterJH5574 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r690078417



##
File path: src/tir/transforms/unify_thread_binding.cc
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file unify_thread_binding.cc
+ */
+
+#include 
+#include 
+#include 
+
+#include "ir_utils.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ * \brief A mutator which searches AttrStmts of thread bindings and changes 
the `node` field IterVar
+ * of the AttrStmts, so that for one kind of thread binding (except 
"vthread"), all such thread
+ * bindings use the same IterVar
+ */
+class ThreadBindingUnifier : public StmtExprMutator {
+ public:
+  static Stmt Unify(const PrimFunc& f) { return ThreadBindingUnifier().VisitStmt(f->body); }
+
+ private:
+  Stmt VisitStmt_(const AttrStmtNode* attr) final {
+    // If this AttrStmt is not thread binding attribute, return as usual.
+    if (attr->attr_key != attr::thread_extent && attr->attr_key != attr::virtual_thread) {
+      return StmtMutator::VisitStmt_(attr);
+    }
+
+    // Step 1. Fetch the old IterVar.
+    IterVar old_iter_var = Downcast<IterVar>(attr->node);
+    IterVar new_iter_var;
+
+    // Step 2. See if an IterVar for this kind of thread binding was created before. If so, we use
+    // the created IterVar. Otherwise, we create a new IterVar for this thread binding and store
+    // the IterVar in mapping `thread_tag2iter_var_map_`.
+    Map<String, IterVar>::iterator it = thread_tag2iter_var_map_.find(old_iter_var->thread_tag);
+    if (it != thread_tag2iter_var_map_.end()) {
+      new_iter_var = (*it).second;
+      CHECK(ExprDeepEqual()(old_iter_var->dom->extent, (*it).second->dom->extent))

Review comment:
   Okay. But I'm curious about the difference between them :thinking:.
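
   For reference, a minimal sketch of the difference being discussed (an illustration, not code from this PR): `ExprDeepEqual` only checks structural (AST) equality, while `arith::Analyzer::CanProveEqual` also accepts extents that merely simplify to the same value.

   ```cpp
   // Sketch assuming the usual TVM C++ headers and linking against libtvm.
   #include <tvm/arith/analyzer.h>
   #include <tvm/tir/analysis.h>
   #include <tvm/tir/op.h>

   #include <iostream>

   void CompareEqualityChecks() {
     using namespace tvm;
     tir::Var n("n");
     PrimExpr a = n * 2;
     PrimExpr b = n + n;
     // Structural comparison: `n * 2` and `n + n` are different ASTs, so this is false.
     bool deep_equal = tir::ExprDeepEqual()(a, b);
     // Arithmetic comparison: the analyzer simplifies both sides and can prove them equal.
     arith::Analyzer analyzer;
     bool provably_equal = analyzer.CanProveEqual(a, b);
     std::cout << deep_equal << " vs " << provably_equal << "\n";
   }
   ```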








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r690077572



##
File path: src/tir/transforms/unify_thread_binding.cc
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file unify_thread_binding.cc
+ */
+
+#include 
+#include 
+#include 
+
+#include "ir_utils.h"
+
+namespace tvm {
+namespace tir {
+
+/*!
+ * \brief A mutator which searches AttrStmts of thread bindings and changes 
the `node` field IterVar
+ * of the AttrStmts, so that for one kind of thread binding (except 
"vthread"), all such thread
+ * bindings use the same IterVar
+ */
+class ThreadBindingUnifier : public StmtExprMutator {
+ public:
+  static Stmt Unify(const PrimFunc& f) { return ThreadBindingUnifier().VisitStmt(f->body); }
+
+ private:
+  Stmt VisitStmt_(const AttrStmtNode* attr) final {
+    // If this AttrStmt is not thread binding attribute, return as usual.
+    if (attr->attr_key != attr::thread_extent && attr->attr_key != attr::virtual_thread) {
+      return StmtMutator::VisitStmt_(attr);
+    }
+
+    // Step 1. Fetch the old IterVar.
+    IterVar old_iter_var = Downcast<IterVar>(attr->node);
+    IterVar new_iter_var;
+
+    // Step 2. See if an IterVar for this kind of thread binding was created before. If so, we use
+    // the created IterVar. Otherwise, we create a new IterVar for this thread binding and store
+    // the IterVar in mapping `thread_tag2iter_var_map_`.
+    Map<String, IterVar>::iterator it = thread_tag2iter_var_map_.find(old_iter_var->thread_tag);
+    if (it != thread_tag2iter_var_map_.end()) {
+      new_iter_var = (*it).second;
+      CHECK(ExprDeepEqual()(old_iter_var->dom->extent, (*it).second->dom->extent))

Review comment:
   Please use `analyzer::CanProveEqual` instead of ExprDeepEqual








[GitHub] [tvm] jinhongyii opened a new pull request #8767: [TensorIR][M2a] Reorder

2021-08-16 Thread GitBox


jinhongyii opened a new pull request #8767:
URL: https://github.com/apache/tvm/pull/8767


   This PR is part of the TensorIR upstreaming effort (#7527), which adds a 
schedule primitive: reorder.
   CC: @junrushao1994 @MasterJH5574 @Hzfengsy @tqchen @comaniac @jcf94 
   
   Co-authored-by: Siyuan Feng 
   Co-authored-by: Bohan Hou <32121147+spectrometer...@users.noreply.github.com>
   Co-authored-by: Ruihang Lai 
   Co-authored-by: Wuwei Lin 
   Co-authored-by: Junru Shao 






[GitHub] [tvm] junrushao1994 commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-900010239


   Thanks @electriclilies @mbs-octoml @tqchen!






[tvm] branch main updated: Add DictAttrs to IRModule and refactor DictAttrs utility functions (#8750)

2021-08-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new e02ea74  Add DictAttrs to IRModule and refactor DictAttrs utility 
functions (#8750)
e02ea74 is described below

commit e02ea7430589fa345ab4472f02511ae8d6c08dea
Author: Lily Orth-Smith 
AuthorDate: Mon Aug 16 22:44:59 2021 -0700

Add DictAttrs to IRModule and refactor DictAttrs utility functions (#8750)

* Add DictAttrs to IRModuleNode

Move GetAttrs to be a member of DictAttrs

Generalize WithAttrs to work with IRModule and move to attrs.h

Change func->GetAttr to func->attrs.GetAttr

* lint

* Fix documentation

* fix typo

* Another typo!

* Revert GetAttrs to ->attrs.GetAttrs change

* Didn't mean to revert these

* Revert a few more things

* Add GetAttrs to IRModuleNode
---
 include/tvm/ir/attrs.h| 108 ++
 include/tvm/ir/function.h |  57 ++--
 include/tvm/ir/module.h   |  54 +++
 3 files changed, 165 insertions(+), 54 deletions(-)

diff --git a/include/tvm/ir/attrs.h b/include/tvm/ir/attrs.h
index da7bc12..fa18610 100644
--- a/include/tvm/ir/attrs.h
+++ b/include/tvm/ir/attrs.h
@@ -214,6 +214,7 @@ class DictAttrsNode : public BaseAttrsNode {
   void VisitNonDefaultAttrs(AttrVisitor* v) final;
   void InitByPackedArgs(const runtime::TVMArgs& args, bool allow_unknown) final;
   Array<AttrFieldInfo> ListFieldInfo() const final;
+
   // type info
   static constexpr const char* _type_key = "DictAttrs";
   TVM_DECLARE_FINAL_OBJECT_INFO(DictAttrsNode, BaseAttrsNode);
@@ -232,6 +233,72 @@ class DictAttrs : public Attrs {
*/
  TVM_DLL explicit DictAttrs(Map<String, ObjectRef> dict);
 
+  // Utils for accessing attributes
+  // This needs to be on DictAttrs, not DictAttrsNode because we return the default
+  // value if DictAttrsNode is not defined.
+  /*!
+   * \brief Get a function attribute.
+   *
+   * \param attr_key The attribute key.
+   * \param default_value The default value if the key does not exist, defaults to nullptr.
+   *
+   * \return The result
+   *
+   * \tparam TObjectRef the expected object type.
+   * \throw Error if the key exists but the value does not match TObjectRef
+   *
+   * \code
+   *
+   *  void GetAttrExample(const BaseFunc& f) {
+   *    auto value = f->attrs.GetAttr<Integer>("AttrKey", 0);
+   *  }
+   *
+   * \endcode
+   */
+  template <typename TObjectRef>
+  Optional<TObjectRef> GetAttr(
+      const std::string& attr_key,
+      Optional<TObjectRef> default_value = Optional<TObjectRef>(nullptr)) const {
+    static_assert(std::is_base_of<ObjectRef, TObjectRef>::value,
+                  "Can only call GetAttr with ObjectRef types.");
+    if (!defined()) return default_value;
+    const DictAttrsNode* node = this->as<DictAttrsNode>();
+
+    auto it = node->dict.find(attr_key);
+    if (it != node->dict.end()) {
+      return Downcast<Optional<TObjectRef>>((*it).second);
+    } else {
+      return default_value;
+    }
+  }
+  // variant that uses TObjectRef to enable implicit conversion to default value.
+  template <typename TObjectRef>
+  Optional<TObjectRef> GetAttr(const std::string& attr_key, TObjectRef default_value) const {
+    return GetAttr<TObjectRef>(attr_key, Optional<TObjectRef>(default_value));
+  }
+  /*!
+   * \brief Check whether the function has a non-zero integer attr.
+   *
+   * This function can be used to check whether an optional
+   * attribute mark(e.g. inline) exists.
+   *
+   * \param attr_key The key to the attribute.
+   * \return The check result.
+   *
+   * \code
+   *
+   *  void HasNonzeroAttrExample(const BaseFunc& f) {
+   *    if (f->HasNonzeroAttr(attr::kInline)) {
+   *      // inline the function.
+   *    }
+   *  }
+   *
+   * \endcode
+   */
+  bool HasNonzeroAttr(const std::string& attr_key) const {
+    return GetAttr<Integer>(attr_key, 0) != 0;
+  }
+
   TVM_DEFINE_OBJECT_REF_METHODS(DictAttrs, Attrs, DictAttrsNode);
   TVM_DEFINE_OBJECT_REF_COW_METHOD(DictAttrsNode);
 };
@@ -249,6 +316,47 @@ inline TAttrs AttrsWithDefaultValues() {
   return TAttrs(n);
 }
 
+/*!
+ * \brief Copy the function or module, but overrides
+ *the attribute value key with the value.
+ *
+ * \param input The thing to annotate (BaseFunc or IRModule)
+ * \param attr_key The attribute key.
+ * \param attr_value The value attribute value.
+ *
+ * \tparam TFunc The corresponding function or module type.
+ *
+ * \returns The new function or module with updated attributes.
+ *
+ * \note This function performs copy on write optimization for func and module.
+ *   If we move a uniquely referenced func or module into WithAttr,
+ *   then no additional copy will be performed.
+ *
+ *   This is also why we make it as a function instead of a member function
+ *   and why we pass by value in the first argument.
+ *
+ * \code
+ *
+ *  // Recommended way to trigger copy on write
+ *  func = WithAttr(std::move(func), "

[GitHub] [tvm] junrushao1994 merged pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


junrushao1994 merged pull request #8750:
URL: https://github.com/apache/tvm/pull/8750


   






[GitHub] [tvm] junrushao1994 commented on pull request #8763: Make from_tensorflow.py more GPU memory friendly.

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8763:
URL: https://github.com/apache/tvm/pull/8763#issuecomment-900010012


   Thanks @mbs-octoml 






[tvm] branch main updated: Make from_tensorflow.py more GPU memory friendly. (#8763)

2021-08-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new cfa498c  Make from_tensorflow.py more GPU memory friendly. (#8763)
cfa498c is described below

commit cfa498c0376622afe4e0f7344f0104dc97d7e876
Author: Mark Shields <87091372+mbs-oct...@users.noreply.github.com>
AuthorDate: Mon Aug 16 22:44:30 2021 -0700

Make from_tensorflow.py more GPU memory friendly. (#8763)

* Make from_tensorflow.py more GPU memory friendly.

Sphinx-gallery runs everything in a single process. There
doesn't appear to be any easy way to force Tensorflow to
return memory other than terminating the process. This at
least gives us a little more wiggle room.

* Also deploy_sparse.py. Should probably also be done to tensorflow.rst.
---
 tutorials/frontend/deploy_sparse.py   | 14 ++
 tutorials/frontend/from_tensorflow.py | 15 +++
 2 files changed, 29 insertions(+)

diff --git a/tutorials/frontend/deploy_sparse.py 
b/tutorials/frontend/deploy_sparse.py
index d3375c4..f0af12b 100644
--- a/tutorials/frontend/deploy_sparse.py
+++ b/tutorials/frontend/deploy_sparse.py
@@ -90,6 +90,20 @@ from tensorflow.python.framework.convert_to_constants import 
(
 import scipy.sparse as sp
 
 
+# Ask tensorflow to limit its GPU memory to what's actually needed
+# instead of gobbling everything that's available.
+# https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
+# This way this tutorial is a little more friendly to sphinx-gallery.
+gpus = tf.config.list_physical_devices("GPU")
+if gpus:
+    try:
+        for gpu in gpus:
+            tf.config.experimental.set_memory_growth(gpu, True)
+        print("tensorflow will use experimental.set_memory_growth(True)")
+    except RuntimeError as e:
+        print("experimental.set_memory_growth option is not available: {}".format(e))
+
+
 ###
 # Configure Settings
 # --
diff --git a/tutorials/frontend/from_tensorflow.py 
b/tutorials/frontend/from_tensorflow.py
index fc87c07..4563e24 100644
--- a/tutorials/frontend/from_tensorflow.py
+++ b/tutorials/frontend/from_tensorflow.py
@@ -36,6 +36,21 @@ import os.path
 # Tensorflow imports
 import tensorflow as tf
 
+
+# Ask tensorflow to limit its GPU memory to what's actually needed
+# instead of gobbling everything that's available.
+# https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth
+# This way this tutorial is a little more friendly to sphinx-gallery.
+gpus = tf.config.list_physical_devices("GPU")
+if gpus:
+    try:
+        for gpu in gpus:
+            tf.config.experimental.set_memory_growth(gpu, True)
+        print("tensorflow will use experimental.set_memory_growth(True)")
+    except RuntimeError as e:
+        print("experimental.set_memory_growth option is not available: {}".format(e))
+
+
 try:
     tf_compat_v1 = tf.compat.v1
 except ImportError:


[GitHub] [tvm] junrushao1994 merged pull request #8763: Make from_tensorflow.py more GPU memory friendly.

2021-08-16 Thread GitBox


junrushao1994 merged pull request #8763:
URL: https://github.com/apache/tvm/pull/8763


   






[GitHub] [tvm-rfcs] huajsj commented on a change in pull request #14: [RFC]Pipeline Compute Executor.

2021-08-16 Thread GitBox


huajsj commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r690031181



##
File path: rfcs/0012-pipeline-executor.md
##
@@ -0,0 +1,214 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+- Feature Name: Pipeline Executor
+- Start Date: 2021-07-30
+- RFC PR: [apache/tvm-rfcs#0014](https://github.com/apache/tvm-rfcs/pull/0014)
+- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)
+
+## 1. Summary
+
+
+This proposal introduces Pipeline Executor: a runtime executor that schedules
+split subgraphs of a Relay graph in a pipeline to achieve task-level parallelism and
+improve compute throughput.
+
+## 2. Motivation
+
+
+
+Currently more and more edge device inference deployments happen on SOC 
devices.
+Since SOC devices have heterogeneous chipset like GPU, FPGA, CPU, DSP, etc. To 
reach the best
+performance, there is a requirement to run an ML network in these 
heterogeneous chipsets.
+However, currently graph executor does not have parallelism logic, and the 
existing data parallelism
+solution only supports parallel on homogeneous chipset(device). Then, the only 
way to do batch processing
+on heterogeneous devices with TVM is to treat a whole ML network as a schedule 
unit and run it on
+different heterogeneous devices, but that would cause latency issue (low speed 
chipset becomes the
+latency bottleneck for single data processing).
+
+Therefore, we need a runtime executor that can provide parallel scheduling 
functionality
+with a finer-grained schedule unit like subgraph (a group of operator with 
dependency relation)
+to be more efficient to use SOC heterogeneous hardware resource to achieve a 
better performance.
+
+
+### Benefits of Pipeline Executor
+
+There are three benefits for Pipeline Executor
+
+Pipeline Executor provides:
+* Compute a single network on multiple backends in parallel to improve 
performance.
+
+* Use RPC to perform distributed computation cross multiple remote devices.
+
+* Pipeline executor provides the capability to integrate non-DNN model functions.
+
+## 3. Guide-level explanation
+Pipeline Executor is a runtime executor which implements pipeline execution 
logic for multiple
+subgraphs and relies on graph_executor for operator storage and execution.
+
+This section introduces the use case for Pipeline Executor.
+
+* 1. Using Automatic Graph Split feature to construct pipeline subgraph and 
configuration.
+* 2. Use pipeline_executor to build a pipeline module with the subgraphs and 
configuration.
+* 3. Use pipeline_executor to load the pipeline module to run network in 
pipeline parallelism mode.
+
+### 3.1. Using Automatic Graph Split feature to construct pipeline subgraph 
and configuration.
+
+This feature is not in the scope of this RFC; the logic is as follows.
+
+The solution includes 3 steps: 1. operator auto-tuning, 2. graph dependency tree build and balance,
+3. graph auto-tuning. More details follow.
+
+#### 3.1.1 Operator Auto Tune
+
+* a. In the operator auto-tuning step, the user tunes every operator with the existing tuning
+logic, but the tuning happens separately and serially for every target involved in the
+pipeline executor.
+
+* b. After operator tuning is done, we get performance data, for example: conv2d_0's best
+performance is 3ms on GPU and 2ms on VTA. This performance data is used later in the graph
+dependency tree build and balance step.
+
+#### 3.1.2 Graph dependency tree build and balance
+
+* a. Initialize a DAG whose nodes are subgraphs. Initially, for an N-node DAG, the first
+[1, N-1] nodes map to layers [1, N-1] (compute-dense operators and others) of the original
+compute graph, and node N maps to layers [N, M], where M is the number of layers in the
+original compute graph.
+
+* b. Using the performance data generated in 3.1.1.b, every dependency tree node gets a
+time-consumption value. At the beginning these values differ across nodes, so the DAG is not
+balanced in node weight. By adjusting each node's (subgraph's) scope (how many operators the
+node contains), we make every node of the DAG have the same, or nearly the same, weight
+(time consumption); such a DAG is then one graph-split solution. The DAG records the
+parent/child relation (a child can only run after its parent has run), and scope adjustment
+can only happen between a parent and a child.
+
+#### 3.1.3 Graph Auto Tune
+* a. Step 3.1.2 can generate more than one subgraph-split solution (DAG). In this step, graph
+auto-tuning tries these multiple solutions to find the best configuration.
+
+After steps 1-3, we obtain an automatic graph-split configuration; a toy sketch of the
+balancing idea from 3.1.2 follows.
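
(Illustration only, not part of the RFC text: the balancing step in 3.1.2 amounts to splitting the per-layer latencies from 3.1.1.b into contiguous stages of roughly equal total time. The greedy strategy and every name below are assumptions made purely for this sketch.)

```cpp
#include <iostream>
#include <vector>

// Toy sketch: greedily split per-layer latencies into `num_stages` contiguous
// pipeline stages whose total times are close to equal.
std::vector<std::vector<int>> SplitLayersIntoStages(const std::vector<double>& layer_ms,
                                                    int num_stages) {
  double total = 0;
  for (double t : layer_ms) total += t;
  double target = total / num_stages;  // ideal per-stage time
  std::vector<std::vector<int>> stages;
  std::vector<int> current;
  double current_ms = 0;
  for (int i = 0; i < static_cast<int>(layer_ms.size()); ++i) {
    current.push_back(i);
    current_ms += layer_ms[i];
    int remaining_stages = num_stages - static_cast<int>(stages.size()) - 1;
    int remaining_layers = static_cast<int>(layer_ms.size()) - i - 1;
    // Close a stage once it reaches the target, keeping enough layers for the rest.
    if (current_ms >= target && remaining_stages > 0 && remaining_layers >= remaining_stages) {
      stages.push_back(current);
      current.clear();
      current_ms = 0;
    }
  }
  stages.push_back(current);
  return stages;
}

int main() {
  // Example in the spirit of 3.1.1.b: per-layer best latencies in ms, split into 3 stages.
  for (const auto& stage : SplitLayersIntoStages({3, 2, 4, 1, 2, 3}, 3)) {
    for (int layer : stage) std::cout << layer << ' ';
    std::cout << '\n';
  }
  return 0;
}
```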
+
+### 3.2. Use pipeline_executor to build pipeline module with the said subgraph 
and configuration.
+
+The pipeline executor provides a build function to compile and save the compiled output to disk,

Review comment:
   fixed.





[GitHub] [tvm-rfcs] huajsj commented on a change in pull request #14: [RFC]Pipeline Compute Executor.

2021-08-16 Thread GitBox


huajsj commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r690031140



##
File path: rfcs/0012-pipeline-executor.md
##
@@ -0,0 +1,214 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+- Feature Name: Pipeline Executor
+- Start Date: 2021-07-30
+- RFC PR: [apache/tvm-rfcs#0014](https://github.com/apache/tvm-rfcs/pull/0014)
+- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)
+
+## 1. Summary
+
+
+This proposal introduces Pipeline Executor: a runtime executor that schedules
+split subgraphs of a Relay graph in a pipeline to achieve task-level parallelism and
+improve compute throughput.
+
+## 2. Motivation
+
+
+
+Currently more and more edge device inference deployments happen on SOC 
devices.
+Since SOC devices have heterogeneous chipset like GPU, FPGA, CPU, DSP, etc. To 
reach the best
+performance, there is a requirement to run an ML network in these 
heterogeneous chipsets.
+However, currently graph executor does not have parallelism logic, and the 
existing data parallelism
+solution only supports parallel on homogeneous chipset(device). Then, the only 
way to do batch processing
+on heterogeneous devices with TVM is to treat a whole ML network as a schedule 
unit and run it on
+different heterogeneous devices, but that would cause latency issue (low speed 
chipset becomes the
+latency bottleneck for single data processing).
+
+Therefore, we need a runtime executor that can provide parallel scheduling 
functionality
+with a finer-grained schedule unit like subgraph (a group of operator with 
dependency relation)
+to be more efficient to use SOC heterogeneous hardware resource to achieve a 
better performance.
+
+
+### Benefits of Pipeline Executor
+
+There are three benefits for Pipeline Executor
+
+Pipeline Executor provides:
+* Compute a single network on multiple backends in parallel to improve 
performance.
+
+* Use RPC to perform distributed computation cross multiple remote devices.
+
+* Pipeline executor provides the capability to integrate non-DNN model functions.
+
+## 3. Guide-level explanation
+Pipeline Executor is a runtime executor which implements pipeline execution 
logic for multiple
+subgraphs and relies on graph_executor for operator storage and execution.
+
+This section introduces the use case for Pipeline Executor.
+
+* 1. Using Automatic Graph Split feature to construct pipeline subgraph and 
configuration.
+* 2. Use pipeline_executor to build a pipeline module with the subgraphs and 
configuration.
+* 3. Use pipeline_executor to load the pipeline module to run network in 
pipeline parallelism mode.
+
+### 3.1. Using Automatic Graph Split feature to construct pipeline subgraph 
and configuration.

Review comment:
   fixed.








[GitHub] [tvm] MasterJH5574 commented on pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 commented on pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#issuecomment-899961433


   @junrushao1994 This PR has been updated. Please take a look again, thanks! πŸ˜‰






[GitHub] [tvm] gromero commented on pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


gromero commented on pull request #8761:
URL: https://github.com/apache/tvm/pull/8761#issuecomment-899951536


   > regardless i think we should still ensure that write() conforms to the API properly.
   
   @areusch yep, I agree.
   
   
   > can you let me know which test case you're running?
   It's pretty much the sine model in tutorials/micro_tflite.py. I have 
prepared a branch which reproduces it (after several runs, you know): 
https://github.com/gromero/tvm/commits/8728
   
   Just clone && cd 8728 && `./run.sh`.  The MLF sine.tar is the same model as 
in micro_tflite.py but already compiled. The run must crash with:
   
   ```
   [...]
   INFO:__main__:b'/tmp/x20/qemu-hack/qemu-system-arm -cpu cortex-m33 -machine 
mps2-an521 -nographic -m 16 -vga none -net none -pidfile qemu.pid -chardev 
pipe,id=con,mux=on,path=/tmp/tmpq4aigpet/fifo -serial chardev:con -mon 
chardev=con,mode=readline -icount shift=7,align=off,sleep=off -rtc clock=vm 
-kernel /tmp/x20/build/zephyr/zephyr.elf\n'
   qemu-system-arm: warning: nic lan9118.0 has no peer
   [02:19:49] /home/gromero/git/tvm/src/runtime/micro/micro_session.cc:367: 
remote: microTVM Zephyr runtime - running
   B
   INFO:__main__:b"make[3]: Leaving directory '/tmp/x20/build'\n"
   INFO:__main__:b'[100%] Built target run\n'
   INFO:__main__:b"make[2]: Leaving directory '/tmp/x20/build'\n"
   INFO:__main__:b'/usr/bin/cmake -E cmake_progress_start 
/tmp/x20/build/CMakeFiles 0\n'
   INFO:__main__:b"make[1]: Leaving directory '/tmp/x20/build'\n"
   Traceback (most recent call last):
 File "/home/gromero/git/tvm/8728/./an.py", line 38, in 
   syslib = session.get_system_lib()
 File "/home/gromero/git/tvm/python/tvm/micro/session.py", line 88, in 
get_system_lib
   return self._rpc.get_function("runtime.SystemLib")()
 File "/home/gromero/git/tvm/python/tvm/rpc/client.py", line 73, in 
get_function
   return self._sess.get_function(name)
 File "/home/gromero/git/tvm/python/tvm/runtime/module.py", line 85, in 
get_function
   check_call(
 File "/home/gromero/git/tvm/python/tvm/_ffi/base.py", line 348, in 
check_call
   raise get_last_ffi_error()
   tvm._ffi.base.TVMError: MicroSessionTimeoutError: failed to read reply 
message after timeout 5s
   FAIL
   ```
   Which points  to the timeout happening at 
https://github.com/gromero/tvm/blob/8728/8728/an.py#L38 , i.e. just after the 
handshake as you mentioned I believe.  So I don't believe it's related to the 
model itself. I'm also with that hypothesis in mind that you raised about 
something fishy in mps2_an521 drivers ... But can't make sense of it yet.






[GitHub] [tvm] guberti opened a new pull request #8766: [microTVM] Fix ci-qemu Arduino install dir

2021-08-16 Thread GitBox


guberti opened a new pull request #8766:
URL: https://github.com/apache/tvm/pull/8766


   Not ready for merging, I need to test the entire CI Arduino workflow 
tomorrow.






[GitHub] [tvm] Yuan-Chuan-YUE commented on pull request #8756: [CODEGEN][OpenCL]: fix tir.erf codegen to opencl directly

2021-08-16 Thread GitBox


Yuan-Chuan-YUE commented on pull request #8756:
URL: https://github.com/apache/tvm/pull/8756#issuecomment-899941744


   Thanks for the reply @junrushao1994 !
   Thanks for the comment @csullivan !
   I will add a unit test for `erf` in the OpenCL codegen. Should I also add a unit 
test for CUDA? To the best of my knowledge, the CUDA unit tests provide 
vectorize-intrinsic tests and a list of math intrinsics to check, but the list does 
not include `erf`. Also, `numpy` doesn't have `erf`, so checking the result may need 
another math library, `math.erf` for example. Sorry for my limited experience; if there 
is any misunderstanding, please don't hesitate to offer your kind advice! Thanks!






[tvm] branch main updated (d02e50c -> 2008d62)

2021-08-16 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from d02e50c  [AutoScheduler][FIX] Fix exception handling in measure.py 
(#8754)
 add 2008d62  add support for half_pixel_centers in resize (#8689)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  |  3 ++
 tests/python/frontend/tflite/test_forward.py | 46 
 2 files changed, 43 insertions(+), 6 deletions(-)


[GitHub] [tvm] FrozenGene commented on pull request #8689: [TFLite] add support for half_pixel_centers in resize

2021-08-16 Thread GitBox


FrozenGene commented on pull request #8689:
URL: https://github.com/apache/tvm/pull/8689#issuecomment-899932811


   Thanks @euntaik 






[GitHub] [tvm] FrozenGene merged pull request #8689: [TFLite] add support for half_pixel_centers in resize

2021-08-16 Thread GitBox


FrozenGene merged pull request #8689:
URL: https://github.com/apache/tvm/pull/8689


   






[GitHub] [tvm] gromero commented on pull request #8765: [Community] @gromero -> Reviewer

2021-08-16 Thread GitBox


gromero commented on pull request #8765:
URL: https://github.com/apache/tvm/pull/8765#issuecomment-899931921


   @tmoreau89 @jcf94 @tqchen Thank you :)






[GitHub] [tvm] junrushao1994 commented on pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#issuecomment-899929050


   @MasterJH5574 overall it looks good!






[GitHub] [tvm] MasterJH5574 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r689964652



##
File path: src/tir/transforms/inject_virtual_thread.cc
##
@@ -476,11 +476,6 @@ class VirtualThreadInjector : public StmtMutator {
   }
 };
 
-Stmt InjectVirtualThread(Stmt stmt) {

Review comment:
   Because it's never used :-) Looks like someone forgot to remove it.








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r689961892



##
File path: src/tir/transforms/inject_virtual_thread.cc
##
@@ -476,11 +476,6 @@ class VirtualThreadInjector : public StmtMutator {
   }
 };
 
-Stmt InjectVirtualThread(Stmt stmt) {

Review comment:
   why are we removing this?








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r689961081



##
File path: src/driver/driver_api.cc
##
@@ -355,6 +356,7 @@ IRModule LowerSchedule(te::Schedule sch, const 
Array& args, const std
   IRModule mod = ScheduleToModule(std::move(sch), args, name, binds);
   // Get the legacy TE pass list
   Array pass_list = CreatePassList(simple_mode);
+  LOG(INFO) << "mod =\n" << mod;

Review comment:
   remove this








[GitHub] [tvm] junrushao1994 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


junrushao1994 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r689960316



##
File path: python/tvm/tir/schedule/schedule.py
##
@@ -444,6 +444,233 @@ def after_split(a: ty.handle, b: ty.handle) -> None:
 
 ## Schedule: Manipulate ForKind ##
 
+def parallel(self, loop: LoopRV) -> None:
+"""Parallelize the input loop. It requires:
+1) The scope block that the loop is in should have stage-pipeline 
property
+2) All the blocks under the loop are complete blocks or reduction 
blocks, and have affine
+bindings
+3) For each block under the loop, the loop can only be contained in 
data-parallel block
+iters' bindings
+
+Parameters
+--
+loop : LoopRV
+The loop to be parallelized
+
+Examples
+
+
+Before parallel, in TensorIR, the IR is:
+
+.. code-block:: python
+
+@tvm.script.tir
+def before_parallel(a: ty.handle, b: ty.handle) -> None:
+A = tir.match_buffer(a, (128, 128))
+B = tir.match_buffer(b, (128, 128))
+for i, j in tir.grid(128, 128):
+with tir.block([128, 128], "B") as [vi, vj]:
+tir.bind(vi, i)
+tir.bind(vj, j)
+B[vi, vj] = A[vi, vj] * 2.0
+
+Create the schedule and do parallel:
+
+.. code-block:: python
+
+sch = tir.Schedule(before_parallel)
+i, j = sch.get_loops(sch.get_block("B"))
+sch.parallel(i)
+
+After applying parallel, the IR becomes:
+
+.. code-block:: python
+
+@tvm.script.tir
+def after_parallel(a: ty.handle, b: ty.handle) -> None:
+A = tir.match_buffer(a, (128, 128))
+B = tir.match_buffer(b, (128, 128))
+for i in tir.parallel(0, 128):
+for j in tir.serial(0, 128):
+with tir.block([128, 128], "B") as [vi, vj]:
+tir.bind(vi, i)
+tir.bind(vj, j)
+B[vi, vj] = A[vi, vj] * 2.0
+
+"""
+_ffi_api.ScheduleParallel(self, loop)  # type: ignore # pylint: 
disable=no-member
+
+def vectorize(self, loop: LoopRV) -> None:
+"""Vectorize the input loop. It requires:
+1) The scope block that the loop is in should have stage-pipeline 
property
+2) All the blocks under the loop are complete blocks or reduction 
blocks, and have affine
+bindings
+3) For each block under the loop, the loop can only be contained in 
data-parallel block
+iters' bindings
+
+Parameters
+--
+loop : LoopRV
+The loop to be vectorized
+
+Examples
+
+
+Before vectorize, in TensorIR, the IR is:
+
+.. code-block:: python
+
+@tvm.script.tir
+def before_vectorize(a: ty.handle, b: ty.handle) -> None:
+A = tir.match_buffer(a, (128, 128))
+B = tir.match_buffer(b, (128, 128))
+for i, j in tir.grid(128, 128):
+with tir.block([128, 128], "B") as [vi, vj]:
+tir.bind(vi, i)
+tir.bind(vj, j)
+B[vi, vj] = A[vi, vj] * 2.0
+
+Create the schedule and do vectorize:
+
+.. code-block:: python
+
+sch = tir.Schedule(before_vectorize)
+i, j = sch.get_loops(sch.get_block("B"))
+sch.vectorize(j)
+
+After applying vectorize, the IR becomes:
+
+.. code-block:: python
+
+@tvm.script.tir
+def after_vectorize(a: ty.handle, b: ty.handle) -> None:
+A = tir.match_buffer(a, (128, 128))
+B = tir.match_buffer(b, (128, 128))
+for i in tir.serial(0, 128):
+for j in tir.vectorized(0, 128):
+with tir.block([128, 128], "B") as [vi, vj]:
+tir.bind(vi, i)
+tir.bind(vj, j)
+B[vi, vj] = A[vi, vj] * 2.0
+
+"""
+_ffi_api.ScheduleVectorize(self, loop)  # type: ignore # pylint: 
disable=no-member
+
+def bind(self, loop: LoopRV, thread_axis: str) -> None:
+"""Bind the input loop to the given thread axis. It requires:
+1) The scope block that the loop is in should have stage-pipeline 
property
+2) All the blocks under the loop are complete blocks or reduction 
blocks, and have affine
+bindings
+3) For each block under the loop, if the thread axis starts with 
"threadIdx`, the loop can
+only be contained in data-parallel block iter and reduction block 
iters' bindings. Otherwise
+the loop can only b

[GitHub] [tvm] tmoreau89 opened a new pull request #8765: [Community] @gromero -> Reviewer

2021-08-16 Thread GitBox


tmoreau89 opened a new pull request #8765:
URL: https://github.com/apache/tvm/pull/8765


   Please join us to welcome @gromero as a new reviewer to TVM. Gustavo has 
made many additions to the uTVM project by extending support to broader 
Zephyr-based microcontroller boards.
   
   - [Commits History](https://github.com/apache/tvm/commits?author=gromero)
   - [Code 
Review](https://github.com/apache/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by:gromero)
   - [Community Forum Summary](https://discuss.tvm.apache.org/u/gromero/summary)
   






[tvm] branch main updated (e334942 -> d02e50c)

2021-08-16 Thread junrushao
This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from e334942  Fix builtin_fp16.h path according to: 
https://discuss.tvm.apache.org/… (#8705)
 add d02e50c  [AutoScheduler][FIX] Fix exception handling in measure.py 
(#8754)

No new revisions were added by this update.

Summary of changes:
 python/tvm/auto_scheduler/measure.py | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)


[GitHub] [tvm] junrushao1994 merged pull request #8754: [AutoScheduler][FIX] Fix exception handling in measure.py

2021-08-16 Thread GitBox


junrushao1994 merged pull request #8754:
URL: https://github.com/apache/tvm/pull/8754


   






[GitHub] [tvm] tmoreau89 commented on pull request #8764: [Community] @Mousius -> Reviewer

2021-08-16 Thread GitBox


tmoreau89 commented on pull request #8764:
URL: https://github.com/apache/tvm/pull/8764#issuecomment-899912645


   As an aside, I've updated Siva's name.






[GitHub] [tvm] tmoreau89 opened a new pull request #8764: [Community] @Mousius -> Reviewer

2021-08-16 Thread GitBox


tmoreau89 opened a new pull request #8764:
URL: https://github.com/apache/tvm/pull/8764


   Please join us to welcome @Mousius as a new reviewer to TVM. Christopher has 
been spearheading work to generate an embedded-friendly API for uTVM users, 
introducing user-friendly interfaces to TVM-generated models. He’s been an 
active participant in discussions around AOT and formalizing APIs for the C 
runtime.
   
   - [Commits History](https://github.com/apache/tvm/commits?author=mousius)
   - [Code Review](https://github.com/apache/tvm/pulls?q=reviewed-by%3Amousius+)
   - [Community Forum Summary](https://discuss.tvm.apache.org/u/mousius/summary)
   
   






[GitHub] [tvm] areusch commented on pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


areusch commented on pull request #8761:
URL: https://github.com/apache/tvm/pull/8761#issuecomment-899911317


   regardless i think we should still ensure that write() conforms to the API 
properly. can you let me know which test case you're running?






[GitHub] [tvm] areusch commented on pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


areusch commented on pull request #8761:
URL: https://github.com/apache/tvm/pull/8761#issuecomment-899911162


   @gromero i also see this but only on 
`tests/micro/zephyr/test_zephyr.py::test_rpc_large_array[mps2_an521-(16*1024)]` 
now






[GitHub] [tvm] gromero commented on pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


gromero commented on pull request #8761:
URL: https://github.com/apache/tvm/pull/8761#issuecomment-899884709


   @areusch Hi. For the records, I still see the `tvm._ffi.base.TVMError: 
MicroSessionTimeoutError: failed to read reply message after timeout 5s`  
exception even with this fix (https://github.com/apache/tvm/pull/8761). I tried 
against the sine model. I'll continue to investigate more tomorrow.






[GitHub] [tvm] junrushao1994 commented on pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8761:
URL: https://github.com/apache/tvm/pull/8761#issuecomment-899883668


   The flaky test in auto scheduler is fixed by #8754


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml opened a new pull request #8763: Make from_tensorflow.py more GPU memory friendly.

2021-08-16 Thread GitBox


mbs-octoml opened a new pull request #8763:
URL: https://github.com/apache/tvm/pull/8763


   Sphinx-gallery runs everything in a single process. There
   doesn't appear to be any easy way to force Tensorflow to
   return memory other than terminating the process. This at
   least gives us a little more wiggle room.
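   
   For context, one common mitigation in this situation (a hedged note: not 
necessarily what this PR does) is to ask TensorFlow to allocate GPU memory on 
demand instead of reserving it all at import time:
   
   ```python
   import tensorflow as tf
   
   # Allocate GPU memory as needed rather than grabbing it all up front.
   for gpu in tf.config.list_physical_devices("GPU"):
       tf.config.experimental.set_memory_growth(gpu, True)
   ```
   
   Even with this, TensorFlow generally keeps whatever it has already allocated 
until the process exits, which is the constraint described above.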


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899848131


   @mbs-octoml Thanks for the discussion!
   
   Yes, I do like the fact that `DictAttrs` is a subclass of `Attrs` and can 
be handled in a unified way, like other `Attrs` on operators, etc. It provides 
other advantages such as easier serialization.
   
   `WithAttrs` is definitely a good point, and it's not actually restricted to 
`DictAttrs`. If we use `Map<String, ObjectRef>` as the attributes' type, the 
same method is still valid with proper template programming.
   
   What about this? If an object has an `attrs` field whose type is 
`DictAttrs`, then we require the object to be consistent with a C++ concept 
that:
   - works with `WithAttr`
   - has the method `GetAttr` (with proper type signature)
   - has the method `HasNonzeroAttr` (with proper type signature)
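   
   For concreteness, a rough Python sketch of the shape this contract implies 
(purely illustrative pseudo-code, not the actual C++ API; the names simply 
mirror the bullets above):
   
   ```python
   # Illustrative only: any object exposing a DictAttrs-like `attrs` field is
   # expected to support these three operations.
   class WithDictAttrs:
       def __init__(self, attrs=None):
           self.attrs = dict(attrs or {})
   
       def get_attr(self, key, default=None):
           # Mirrors GetAttr: look the key up in the attrs dictionary.
           return self.attrs.get(key, default)
   
       def has_nonzero_attr(self, key):
           # Mirrors HasNonzeroAttr: key is present and truthy.
           return bool(self.attrs.get(key, 0))
   
       def with_attr(self, key, value):
           # Mirrors WithAttr: functional update that returns a new object
           # and leaves the original untouched (copy-on-write semantics).
           updated = dict(self.attrs)
           updated[key] = value
           return type(self)(updated)
   
   
   f = WithDictAttrs().with_attr("Primitive", 1)
   assert f.has_nonzero_attr("Primitive")
   assert f.get_attr("Inline") is None
   ```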
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8586: [Tutorial][Executor] Fix the usage of executors in tutorials

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8586:
URL: https://github.com/apache/tvm/pull/8586#issuecomment-899838397


   CC @comaniac would you like to take a look at this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


mbs-octoml commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899829390


   LGTM.
   
   Returning to Junru's q: "would love to discuss a little bit if this design 
is the direction we want to move forward or move away from: I am not so sure 
about DictAttrs itself." I like that CallNode uses Attrs which can be 
subclassed to be the exact structure we care about (eg OnDeviceAttrs). Since 
transformations need to be loosely coupled we can't really ask IRModule to be 
similarly statically typed. So yeah, it sure does look like a plain old 
Map<...>. But I do like the WithAttrs pattern, and we can get used to that 
being the usual way of extending attributes.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] gromero commented on pull request #8762: [microTVM] Fix platform name for qemu_x86 in Zephyr AOT tests

2021-08-16 Thread GitBox


gromero commented on pull request #8762:
URL: https://github.com/apache/tvm/pull/8762#issuecomment-899824505


   cc @areusch @mehrdadh 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] gromero opened a new pull request #8762: [microTVM] Fix platform name for qemu_x86 in Zephyr AOT tests

2021-08-16 Thread GitBox


gromero opened a new pull request #8762:
URL: https://github.com/apache/tvm/pull/8762


   Currently, two Zephyr AOT tests (test_tflite and test_qemu_make_fail) do
   not run when the qemu_x86 platform is selected, because the platform name
   is wrongly listed as 'host' in the match list that decides whether to skip
   these tests. This commit fixes that.
   
   Signed-off-by: Gustavo Romero 
   
   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #8757: [CI] Modify Jenkinfile to always display junit report, fix for #8674

2021-08-16 Thread GitBox


areusch commented on pull request #8757:
URL: https://github.com/apache/tvm/pull/8757#issuecomment-899818902


   hmm 
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/ci-docker-staging/144/tests
   
   don't see any tests


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] comaniac commented on pull request #6: [RFC] [Relay] Automatic Mixed Precision Pass

2021-08-16 Thread GitBox


comaniac commented on pull request #6:
URL: https://github.com/apache/tvm-rfcs/pull/6#issuecomment-899816190


   Took a quick pass to the updated RFC. I think it's almost ready to merge as 
long as the last 3 comments are resolved.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8733: [TIR] Change Integer Implicit Conversion Rule to C Standard Way

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8733:
URL: https://github.com/apache/tvm/pull/8733#issuecomment-899811851


   Thanks @Johnson9009 for the investigation and @tkonolige for the 
confirmation!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


electriclilies commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899807585


   @tqchen does this look good to you?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm-rfcs] comaniac commented on a change in pull request #14: [RFC]Pipeline Compute Executor.

2021-08-16 Thread GitBox


comaniac commented on a change in pull request #14:
URL: https://github.com/apache/tvm-rfcs/pull/14#discussion_r689834837



##
File path: rfcs/0012-pipeline-executor.md
##
@@ -0,0 +1,214 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+- Feature Name: Pipeline Executor
+- Start Date: 2021-07-30
+- RFC PR: [apache/tvm-rfcs#0014](https://github.com/apache/tvm-rfcs/pull/0014)
+- GitHub Issue: [apache/tvm#8596](https://github.com/apache/tvm/issues/8596)
+
+## 1. Summary
+
+
+This proposal introduces Pipeline Executor: a runtime executor that schedules 
split subgraphs of a Relay graph in a pipeline to implement task-level 
parallelism and improve compute throughput.
+
+## 2. Motivation
+
+
+
+More and more edge inference deployments now happen on SoC devices. Since SoCs 
have heterogeneous chipsets like GPU, FPGA, CPU, and DSP, reaching the best 
performance requires running an ML network across these heterogeneous chipsets.
+However, the current graph executor has no parallelism logic, and the existing 
data-parallelism solution only supports parallelism across homogeneous chipsets 
(devices). As a result, the only way to do batch processing on heterogeneous 
devices with TVM is to treat the whole ML network as one schedule unit and run it 
on the different heterogeneous devices, but that causes a latency issue (the 
slowest chipset becomes the latency bottleneck for processing a single input).
+
+Therefore, we need a runtime executor that provides parallel scheduling with a 
finer-grained schedule unit such as a subgraph (a group of operators with 
dependency relations), so that SoC heterogeneous hardware resources can be used 
more efficiently and better performance achieved.
+
+
+### Benefits of Pipeline Executor
+
+Pipeline Executor provides three benefits:
+
+* Computing a single network on multiple backends in parallel to improve 
performance.
+
+* Using RPC to perform distributed computation across multiple remote devices.
+
+* The capability to integrate non-DNN model functions.
+
+## 3. Guide-level explanation
+Pipeline Executor is a runtime executor which implements pipeline execution 
logic for multiple
+subgraphs and relies on graph_executor for operator storage and execution.
+
+This section introduces the use case for Pipeline Executor.
+
+* 1. Use the Automatic Graph Split feature to construct the pipeline subgraphs 
and configuration.
+* 2. Use pipeline_executor to build a pipeline module with the subgraphs and 
configuration.
+* 3. Use pipeline_executor to load the pipeline module and run the network in 
pipeline-parallel mode.
+
+### 3.1. Using Automatic Graph Split feature to construct pipeline subgraph 
and configuration.
+
+This feature is not in the scope of this RFC; its logic is as follows.
+
+The solution consists of three steps: 1. operator auto-tuning, 2. graph 
dependency tree build and balance, and 3. graph auto-tuning. More details follow.
+
+ 3.1.1 Operator Auto Tune :
+
+* a. In the operator auto-tuning step, the user uses the existing tuning logic 
to tune every operator, but the tuning happens separately and serially on every 
target involved in the pipeline executor.
+
+* b. After operator tuning is done, we have performance data; for example, 
conv2d_0's best latency is 3 ms on GPU and 2 ms on VTA. This data is used in the 
later graph dependency tree build-and-balance step.
+
+ 3.1.2. Graph dependency tree build balance
+
+* a. Initialize a DAG whose nodes are subgraphs. Initially, for an N-node DAG, 
the first [1, N-1] nodes map to layers [1, N-1] (compute-dense operators and 
others) of the original compute graph, and node N maps to layers [N, M], where M 
is the number of layers in the original compute graph.
+
+* b. Using the performance data generated in 3.1.1.b, every dependency tree node 
gets a time-cost value. At the beginning these values differ across nodes, so we 
say the DAG is not balanced in node weight. By adjusting each node's (subgraph's) 
scope (how many operators it contains), we make every node of the DAG have the 
same, or a close, weight (time cost); such a DAG is then one graph split 
solution.
+The DAG records the parent/child relation (a child can only run after its 
parent has run), and the scope adjustment can only happen between a parent and a 
child.
+
+### 3.1.3 Graph Auto Tune.
+* a. Step 3.1.2 can generate more than one subgraph split solution (DAG). In 
this step, Graph Auto Tune tries these multiple solutions to find the best 
configuration.
+
+After steps 1, 2, and 3, we obtain an automatic graph split configuration.
+
+### 3.2. Use pipeline_executor to build pipeline module with the said subgraph 
and configuration.
+
+Pipeline executor provides a build function to compile and save the compiled 
output to disk; the following is an example.
+
+```python
+with autotvm.get_pipeline_model_best(mod_file) as mod_config: # this is 
future featu

[GitHub] [tvm] shingjan commented on a change in pull request #8754: [AutoScheduler][FIX] Fix exception handling in measure.py

2021-08-16 Thread GitBox


shingjan commented on a change in pull request #8754:
URL: https://github.com/apache/tvm/pull/8754#discussion_r689831593



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -718,11 +709,18 @@ def local_builder_build(inputs, timeout, n_parallel, 
build_func="default", verbo
 for res in tuple_res:
 if res.status == StatusKind.COMPLETE:
 results.append(BuildResult(*res.value))
-else:
-assert res.status == StatusKind.TIMEOUT
+elif res.status == StatusKind.TIMEOUT:
 if verbose >= 1:
 print(".T", end="", flush=True)  # Build timeout
 results.append(BuildResult(None, [], MeasureErrorNo.BUILD_TIMEOUT, 
None, timeout))
+elif res.status == StatusKind.EXCEPTION:
+if verbose >= 1:
+print(".E", end="", flush=True)  # Build error
+results.append(
+BuildResult(None, [], MeasureErrorNo.COMPILE_HOST, 
make_traceback_info(), timeout)

Review comment:
   done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] comaniac commented on issue #8683: Error when using `partition_for_vitis_ai` with YOLOX ONNX model

2021-08-16 Thread GitBox


comaniac commented on issue #8683:
URL: https://github.com/apache/tvm/issues/8683#issuecomment-899778369


   cc @jtuyls 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch edited a comment on pull request #8423: Implementation of relay_to_tir target hook

2021-08-16 Thread GitBox


areusch edited a comment on pull request #8423:
URL: https://github.com/apache/tvm/pull/8423#issuecomment-899761713


   marking this as blocked on https://github.com/apache/tvm-rfcs/pull/10


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen commented on pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#issuecomment-899761997


   closing for now as it is more complicated than I initially thought


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen closed pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen closed pull request #8760:
URL: https://github.com/apache/tvm/pull/8760


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #8423: Implementation of relay_to_tir target hook

2021-08-16 Thread GitBox


areusch commented on pull request #8423:
URL: https://github.com/apache/tvm/pull/8423#issuecomment-899761713


   marking this as blocked on https://github.com/apache/tvm-rfcs/pull/9


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen edited a comment on pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen edited a comment on pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#issuecomment-899760555


   Good catch @areusch  Seems indeed this would need some special support from 
sphinx gallery side.
   
   Perhaps we could reuse the same strategy here
   
https://github.com/sphinx-gallery/sphinx-gallery/blob/b41e328230f016b2089464b8f834fd76a6395eac/sphinx_gallery/scrapers.py#L550


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen commented on pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#issuecomment-899760555


   Good catch. Seems indeed this would need some special support from sphinx 
gallery side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] jlamperez commented on issue #8683: Error when using `partition_for_vitis_ai` with YOLOX ONNX model

2021-08-16 Thread GitBox


jlamperez commented on issue #8683:
URL: https://github.com/apache/tvm/issues/8683#issuecomment-899759896


   I've been able to get some log traces:
   
   
   ```
   **
   * RELAY IR TO PYXIR
   **
   DEBUG:pyxir:free_var %inputs: Tensor[(1, 3, 416, 416), float32];
   %0 = dyn.strided_slice(%inputs, meta[relay.Constant][0] /* ty=Tensor[(4), 
int64] */, meta[relay.Constant][1] /* ty=Tensor[(4), int64] */, 
meta[relay.Constant][2] /* ty=Tensor[(4), int64] */, begin=None, end=None, 
strides=None, axes=None) /* ty=Tensor[(?, ?, ?, ?), float32] */;
   %1 = shape_of(%0, dtype="int32") /* ty=Tensor[(4), int32] */;
   %2 = cast_like(%1, meta[relay.Constant][4] /* ty=Tensor[(4), int64] */) /* 
ty=Tensor[(4), int64] */;
   %3 = slice_like(%2, meta[relay.Constant][4] /* ty=Tensor[(4), int64] */, 
axes=None) /* ty=Tensor[(4), int64] */;
   %4 = add(meta[relay.Constant][4] /* ty=Tensor[(4), int64] */, %3) /* 
ty=Tensor[(4), int64] */;
   %5 = where(meta[relay.Constant][3] /* ty=Tensor[(4), bool] */, %4, 
meta[relay.Constant][4] /* ty=Tensor[(4), int64] */) /* ty=Tensor[(4), int64] 
*/;
   %6 = greater_equal(%5, %3) /* ty=Tensor[(4), bool] */;
   %7 = shape_of(%0, dtype="int64") /* ty=Tensor[(4), int64] */;
   %8 = where(%6, %3, %5) /* ty=Tensor[(4), int64] */;
   %9 = scatter(%7, meta[relay.Constant][5] /* ty=Tensor[(1), int64] */, 
meta[relay.Constant][6] /* ty=Tensor[(1), int64] */, 
meta[relay.attrs.ScatterAttrs][0]) /* ty=Tensor[(4), int64] */;
   %10 = dyn.strided_slice(%inputs, meta[relay.Constant][8] /* ty=Tensor[(4), 
int64] */, meta[relay.Constant][9] /* ty=Tensor[(4), int64] */, 
meta[relay.Constant][10] /* ty=Tensor[(4), int64] */, begin=None, end=None, 
strides=None, axes=None) /* ty=Tensor[(?, ?, ?, ?), float32] */;
   %11 = shape_of(%10, dtype="int32") /* ty=Tensor[(4), int32] */;
   %12 = cast_like(%11, meta[relay.Constant][12] /* ty=Tensor[(4), int64] */) 
/* ty=Tensor[(4), int64] */;
   %13 = slice_like(%12, meta[relay.Constant][12] /* ty=Tensor[(4), int64] */, 
axes=None) /* ty=Tensor[(4), int64] */;
   %14 = add(meta[relay.Constant][12] /* ty=Tensor[(4), int64] */, %13) /* 
ty=Tensor[(4), int64] */;
   %15 = where(meta[relay.Constant][11] /* ty=Tensor[(4), bool] */, %14, 
meta[relay.Constant][12] /* ty=Tensor[(4), int64] */) /* ty=Tensor[(4), int64] 
*/;
   %16 = greater_equal(%15, %13) /* ty=Tensor[(4), bool] */;
   %17 = shape_of(%10, dtype="int64") /* ty=Tensor[(4), int64] */;
   %18 = where(%16, %13, %15) /* ty=Tensor[(4), int64] */;
   %19 = scatter(%17, meta[relay.Constant][5] /* ty=Tensor[(1), int64] */, 
meta[relay.Constant][6] /* ty=Tensor[(1), int64] */, 
meta[relay.attrs.ScatterAttrs][1]) /* ty=Tensor[(4), int64] */;
   %20 = dyn.strided_slice(%inputs, meta[relay.Constant][14] /* ty=Tensor[(4), 
int64] */, meta[relay.Constant][15] /* ty=Tensor[(4), int64] */, 
meta[relay.Constant][16] /* ty=Tensor[(4), int64] */, begin=None, end=None, 
strides=None, axes=None) /* ty=Tensor[(?, ?, ?, ?), float32] */;
   %21 = shape_of(%20, dtype="int32") /* ty=Tensor[(4), int32] */;
   %22 = cast_like(%21, meta[relay.Constant][18] /* ty=Tensor[(4), int64] */) 
/* ty=Tensor[(4), int64] */;
   %23 = slice_like(%22, meta[relay.Constant][18] /* ty=Tensor[(4), int64] */, 
axes=None) /* ty=Tensor[(4), int64] */;
   %24 = add(meta[relay.Constant][18] /* ty=Tensor[(4), int64] */, %23) /* 
ty=Tensor[(4), int64] */;
   %25 = where(meta[relay.Constant][17] /* ty=Tensor[(4), bool] */, %24, 
meta[relay.Constant][18] /* ty=Tensor[(4), int64] */) /* ty=Tensor[(4), int64] 
*/;
   %26 = greater_equal(%25, %23) /* ty=Tensor[(4), bool] */;
   %27 = shape_of(%20, dtype="int64") /* ty=Tensor[(4), int64] */;
   %28 = where(%26, %23, %25) /* ty=Tensor[(4), int64] */;
   %29 = scatter(%27, meta[relay.Constant][5] /* ty=Tensor[(1), int64] */, 
meta[relay.Constant][6] /* ty=Tensor[(1), int64] */, 
meta[relay.attrs.ScatterAttrs][2]) /* ty=Tensor[(4), int64] */;
   %30 = dyn.strided_slice(%inputs, meta[relay.Constant][20] /* ty=Tensor[(4), 
int64] */, meta[relay.Constant][21] /* ty=Tensor[(4), int64] */, 
meta[relay.Constant][22] /* ty=Tensor[(4), int64] */, begin=None, end=None, 
strides=None, axes=None) /* ty=Tensor[(?, ?, ?, ?), float32] */;
   %31 = shape_of(%30, dtype="int32") /* ty=Tensor[(4), int32] */;
   %32 = cast_like(%31, meta[relay.Constant][24] /* ty=Tensor[(4), int64] */) 
/* ty=Tensor[(4), int64] */;
   %33 = slice_like(%32, meta[relay.Constant][24] /* ty=Tensor[(4), int64] */, 
axes=None) /* ty=Tensor[(4), int64] */;
   %34 = add(meta[relay.Constant][24] /* ty=Tensor[(4), int64] */, %33) /* 
ty=Tensor[(4), int64] */;
   %35 = where(meta[relay.Constant][23] /* ty=Tensor[(4), bool] */, %34, 
meta[relay.Constant][24] /* ty=Tensor[(4), int64] */) /* ty=Tensor[(4), int64] 
*/;
   %36 = greater_equal(%35, %33) /* ty=Tensor[(4), bool] */;
   %37 = shape_of(%30, dtype="int64") /* ty=Tensor[(4), int64] */;
   %38 = where(%36, %3

[GitHub] [tvm] areusch commented on a change in pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


areusch commented on a change in pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#discussion_r689790562



##
File path: docs/conf.py
##
@@ -341,7 +339,7 @@ def force_gc(gallery_cong, fname):
 "download_all_examples": False,
 "min_reported_time": 60,
 "expected_failing_examples": [],
-"reset_modules": (force_gc, "matplotlib", "seaborn"),
+"reset_modules": (force_gc, "matplotlib", "tensorflow", "torch", "onnx"),

Review comment:
   i don't see torch or onnx here: 
https://github.com/sphinx-gallery/sphinx-gallery/blob/b41e328230f016b2089464b8f834fd76a6395eac/sphinx_gallery/scrapers.py#L561
   
   what am i missing?
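   
   (For reference: the strings in `reset_modules` are looked up against 
sphinx-gallery's small set of built-in resetters, which is why names like 
"tensorflow" may not do anything. A callable with the `(gallery_conf, fname)` 
signature could be supplied instead. A hedged sketch only, with no guarantee it 
actually releases GPU memory:)
   
   ```python
   def reset_tensorflow(gallery_conf, fname):
       """Best-effort TensorFlow cleanup between gallery examples."""
       import gc
       try:
           import tensorflow as tf
           tf.keras.backend.clear_session()  # drop cached graph/session state
       except ImportError:
           pass
       gc.collect()
   
   sphinx_gallery_conf = {
       # ... other options ...
       "reset_modules": (reset_tensorflow, "matplotlib"),
   }
   ```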




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch opened a new pull request #8761: Update QemuTransport#write() to match new write API contract.

2021-08-16 Thread GitBox


areusch opened a new pull request #8761:
URL: https://github.com/apache/tvm/pull/8761


* suspect this should fix #8278
* forgot to add a loop to write all the data
   
   cc @gromero @junrushao1994 
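   
   For reference, the missing piece is essentially the standard "write-all" loop. 
A hedged sketch (illustrative only, assuming the underlying channel write returns 
the number of bytes it accepted, which may be fewer than requested):
   
   ```python
   def write_all(channel_write, data, timeout_sec=None):
       """Keep calling the low-level write until every byte of `data` is sent."""
       view = memoryview(data)
       while len(view) > 0:
           n = channel_write(view, timeout_sec)  # bytes accepted this call
           if n is None or n <= 0:
               raise IOError("channel made no progress or was closed")
           view = view[n:]  # retry with the remaining bytes
   ```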


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on a change in pull request #8708: [microTVM] Project API Arduino support

2021-08-16 Thread GitBox


areusch commented on a change in pull request #8708:
URL: https://github.com/apache/tvm/pull/8708#discussion_r689776861



##
File path: 
apps/microtvm/arduino/template_project/tests/test_arduino_microtvm_api_server.py
##
@@ -76,37 +78,31 @@ def test_find_modified_include_path(self, 
mock_pathlib_path):
 "utf-8",
 )
 
-@mock.patch("subprocess.check_output")
-def test_auto_detect_port(self, mock_subprocess_check_output):
+@mock.patch("subprocess.run")
+def test_auto_detect_port(self, mock_subprocess_run):
 process_mock = mock.Mock()
 handler = microtvm_api_server.Handler()
 
 # Test it returns the correct port when a board is connected
-mock_subprocess_check_output.return_value = self.BOARD_CONNECTED_OUTPUT
+mock_subprocess_run.return_value.stdout = self.BOARD_CONNECTED_OUTPUT
 detected_port = handler._auto_detect_port(self.DEFAULT_OPTIONS)
 assert detected_port == "/dev/ttyACM0"

Review comment:
   can you add a test-case for when "arduino_board": "nano33"?

##
File path: 
apps/microtvm/arduino/template_project/tests/test_arduino_microtvm_api_server.py
##
@@ -64,6 +65,7 @@ def test_find_modified_include_path(self, mock_pathlib_path):
 
 BOARD_CONNECTED_OUTPUT = bytes(
 "Port Type  Board Name  FQBN   
 Core \n"
+"/dev/ttyACM1 Serial Port (USB) Wrong Arduino   
arduino:mbed_nano:nano33arduino:mbed_nano\n"

Review comment:
   can you move this down one line, so that if the logic was naive and just 
looking for `"nano33" in FQBN`, it would find /dev/ttyACM0 first?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] areusch commented on pull request #8757: [CI] Modify Jenkinfile to always display junit report, fix for #8674

2021-08-16 Thread GitBox


areusch commented on pull request #8757:
URL: https://github.com/apache/tvm/pull/8757#issuecomment-899734766


   thanks @mikepapadim , pushed to 
https://ci.tlcpack.ai/blue/organizations/jenkins/tvm/detail/ci-docker-staging/144/pipeline
 with some injected test failures to see how they look!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] 01/01: inject some failures to see how it looks

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit d7a4a72ed4d7faa99b977d3813ffc337159c7870
Author: Andrew Reusch 
AuthorDate: Mon Aug 16 11:41:25 2021 -0700

inject some failures to see how it looks
---
 tests/micro/zephyr/test_zephyr.py | 1 +
 tests/python/unittest/test_crt.py | 1 +
 2 files changed, 2 insertions(+)

diff --git a/tests/micro/zephyr/test_zephyr.py 
b/tests/micro/zephyr/test_zephyr.py
index d33033d..a92f606 100644
--- a/tests/micro/zephyr/test_zephyr.py
+++ b/tests/micro/zephyr/test_zephyr.py
@@ -86,6 +86,7 @@ def _make_session(temp_dir, zephyr_board, west_cmd, mod, 
build_config):
 )
 project.build()
 project.flash()
+assert False, "Injecting expected failure, hope to see stdout :)"
 return tvm.micro.Session(project.transport())
 
 
diff --git a/tests/python/unittest/test_crt.py 
b/tests/python/unittest/test_crt.py
index 586e9fb..7b471cb 100644
--- a/tests/python/unittest/test_crt.py
+++ b/tests/python/unittest/test_crt.py
@@ -100,6 +100,7 @@ def test_compile_runtime():
 @tvm.testing.requires_micro
 def test_compile_runtime_llvm():
 """Test targeting the on-device runtime with the llvm backend."""
+assert False, "Injecting expected failure"
 global TARGET
 old_target = TARGET
 try:


[tvm] branch ci-docker-staging updated (7018e8c -> d7a4a72)

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard 7018e8c  Enable custom images to be set in TVM Jenkinsfile
 add e9380e4  Refactor AOT Test Utils parameters into object (#8650)
 add 76a7fa9  Convert AOT to TECompiler (#8697)
 add 5e20ef9  Remove qemu installation from Zephyr RVM (#8701)
 add 66ac470  [Relay] Dense alter layout fixed for packed input (#8669)
 add 4dd7f68  [TIR] Use PopenPool instead of multiprocessing.pool (#8492)
 add 3e37bb5  [CI] Add Arm Compute Library to Arm CI unit test pipeline 
(#8734)
 add 8843153  [UnitTest] Updated tolerances to avoid flaky unit test. 
(#8723)
 add 7cf7adf  [Torch] chunk and unsafe chunk (#8718)
 add 395b308  enhance tir signed-unsigned cast (#8706)
 add ccc09fa  [TVMC] Switch profile flag to use new profiler (#8710)
 add a06863a  [TensorIR][M2a] Storage Align (#8693)
 add f5661f4  [Docs] Moved the generated tutorials folders into a _staging 
folder. (#8735)
 add 170add2  Add parameter to allow caller to supply a Runner (#8747)
 add 901dee5  [Vulkan] Check at codegen if the shader is within shared 
memory limits. (#8746)
 add 3ebd353  [VTA] Make vta graph_pack compatible with latest TVM, and 
bring back object detection tutorials. (#8731)
 add e12ddca  [FRONTEND][PYTORCH] Support fo nn.SiLU added (#8753)
 add 994a151  update docs (#8736)
 add 49224cb  Fix use of fallback AutoTVM knobs in default scheduling 
(#8707)
 add 1a95f9b  [TF] Support TensorFlow < 1.13 for test_sparse_add (#8647)
 add c4c31de  Install rust in ci-lint so cargo fmt can move to lint stage. 
(#8727)
 add 2e24782  [Onnx Operators] Celu (#8741)
 add cddd348  [Fix][TOPI] remove wrong fix in x86's dense_nopack operator 
(#8687)
 add 1d08792  [microTVM] Fix warnings on Zephyr tests (#8740)
 add 3e0c461  Allow Linker script files to be committed (#8745)
 add 7bceaff  [CI] Modify Jenkinfile to always display junit report
 new d7a4a72  inject some failures to see how it looks

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (7018e8c)
\
 N -- N -- N   refs/heads/ci-docker-staging (d7a4a72)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .gitignore |   2 +-
 Jenkinsfile| 122 
 apps/microtvm/reference-vm/base-box-tool.py|   5 +-
 .../zephyr/base-box/base_box_provision.sh  |   3 -
 .../reference-vm/zephyr/provision_setup.sh |   1 +
 docker/Dockerfile.ci_cpu   |   4 +-
 docker/Dockerfile.ci_lint  |   9 +-
 docs/Makefile  | 142 ++
 docs/conf.py   |  57 ++--
 include/tvm/relay/attrs/nn.h   |  19 ++
 include/tvm/tir/schedule/schedule.h|  15 +
 python/tvm/auto_scheduler/measure.py   | 226 ---
 python/tvm/auto_scheduler/utils.py |  43 +--
 python/tvm/autotvm/graph_tuner/base_graph_tuner.py |   6 +-
 python/tvm/autotvm/record.py   |   4 +-
 python/tvm/autotvm/task/space.py   |   5 +-
 python/tvm/autotvm/utils.py|   4 +-
 python/tvm/contrib/popen_pool.py   |   9 +-
 python/tvm/driver/tvmc/model.py|   2 +-
 python/tvm/driver/tvmc/runner.py   |   4 +-
 python/tvm/relay/frontend/onnx.py  |  14 +
 python/tvm/relay/frontend/pytorch.py   |  31 +--
 python/tvm/relay/op/nn/_nn.py  |   6 +-
 python/tvm/relay/op/nn/nn.py   |  14 +-
 python/tvm/testing/__init__.py |  34 +++
 python/tvm/{arith => testing}/_ffi_api.py  |   4 +-
 .../tvm/testing/auto_scheduler.py  |   2 +-
 python/tvm/{testing.py => testing/utils.py}|   3 -
 python/tvm/tir/schedule/schedule.py|  73 +
 python/tvm/topi/x86/dense.py   |   3 +-
 python/tvm/topi/x86/d

[GitHub] [tvm] CircleSpin commented on pull request #8741: [Onnx Operators] Celu

2021-08-16 Thread GitBox


CircleSpin commented on pull request #8741:
URL: https://github.com/apache/tvm/pull/8741#issuecomment-899733460


   Thank you Junru for the rebase :) 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tkonolige commented on pull request #8733: [TIR] Change Integer Implicit Conversion Rule to C Standard Way

2021-08-16 Thread GitBox


tkonolige commented on pull request #8733:
URL: https://github.com/apache/tvm/pull/8733#issuecomment-899712497


   @Johnson9009 The correct implementation should be `uint64 - uint64`. In 
general, all computation in the PRNG kernels should be done with uint64. I'm 
running a couple randomness test suites to make sure things look good. Right 
now main fails the tests, but your branch passes. I think you could redo the 
golden data from the failing tests.
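   
   (A small illustration of why staying in uint64 matters; toy values only, not 
the actual kernel code. Unsigned subtraction wraps modulo 2**64, which 
counter-based PRNG arithmetic relies on, while a signed intermediate gives a 
different value:)
   
   ```python
   import numpy as np
   
   a = np.array([1], dtype=np.uint64)
   b = np.array([2], dtype=np.uint64)
   print(a - b)  # [18446744073709551615], i.e. wraps to 2**64 - 1
   
   c = np.array([1], dtype=np.int64)
   d = np.array([2], dtype=np.int64)
   print(c - d)  # [-1]
   ```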


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mbs-octoml commented on pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


mbs-octoml commented on pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#issuecomment-899709362


   I didn't have any luck either with using just the module name or with 
explicitly removing all tensorflow* modules as is done for 'seaborn' (the latter 
triggers an internal tensorflow error). I've also tried launching the tensorflow 
tutorial as a sub-process, but that didn't seem to actually separate out the 
module loads.
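   
   (For the record, the sub-process attempt presumably looked something like the 
sketch below; the path is illustrative. The catch is that sphinx-gallery needs to 
execute the example in-process to capture its outputs, so memory freed when the 
child exits does not help the gallery build itself.)
   
   ```python
   import subprocess
   import sys
   
   # Run the memory-hungry tutorial in its own interpreter so any GPU memory it
   # grabs is returned to the system when that process exits.
   subprocess.run(
       [sys.executable, "path/to/from_tensorflow.py"],  # illustrative path
       check=True,
   )
   ```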


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen edited a comment on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


tqchen edited a comment on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899680793


   Thanks @electriclilies "One possibility is to add GetAttrback as a method on 
BaseFunc and IRModule, but have that method just call DictAttr's GetAttr 
method." is exactly what I meant by "keep a common impl as well and redirect in 
the function and module case".
   
   I get your take on the need of a standalone common method, which in some 
sense can be seen as a tradeoff in the C0 listed above.  The addiitional 
considetations  "C1: API consitency: " and C2 puts a bit more weight on 
explicit functions on the explicit API on the function/module side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


electriclilies commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899681781


   @tqchen OK, thanks for clarifying, I thought you meant keep `GetAttrs` as a 
separate, standalone util and then have IRModule and BaseFunc call that 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies edited a comment on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


electriclilies edited a comment on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899679386


   @junrushao1994 @tqchen Thanks for the feedback.
   
   To me, it makes more sense that `GetAttr` is a method on `DictAttrs`. When 
it's a standalone method on functions and modules, it's not clear what the 
attributes that GetAttr is accessing actually are. 
   
   Also, it'll still show up as a method on `DictAttrs` within the 
FunctionNode, so people will still be able to see it. 
   
   One possibility is to add `GetAttr` back as a method on `BaseFunc` and 
`IRModule`, but have that method just call `DictAttr`'s `GetAttr` method. What 
do you think about that?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


tqchen commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899680793


   Thanks @electriclilies "One possibility is to add GetAttrback as a method on 
BaseFunc and IRModule, but have that method just call DictAttr's GetAttr 
method." is exactly what I meant by "keep a common impl as well and redirect in 
the function and module case"


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies commented on pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


electriclilies commented on pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#issuecomment-899679386


   @junrushao1994 @tqchen Thanks for the feedback.
   
   To me, it makes more sense that `GetAttr` is a method on `DictAttrs`. When 
it's a standalone method on functions and modules, it's not clear what the 
attributes that GetAttr is accessing actually are. 
   
   Also, it'll still show up as a method on `DictAttrs` within the 
FunctionNode, so people will still be able to see it. 
   
   One possibility is to add `GetAttr` back as a method on `BaseFunc` and 
`IRModule`, but have that method just call `DictAttr`'s `GetAttr` method.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] electriclilies commented on a change in pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


electriclilies commented on a change in pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#discussion_r689708778



##
File path: include/tvm/target/target.h
##
@@ -54,7 +54,7 @@ class TargetNode : public Object {
   /*! \brief Keys for this target */
   Array<String> keys;
   /*! \brief Collection of attributes */
-  Map<String, ObjectRef> attrs;
+  Map<String, ObjectRef> attrs;  // TODO(@electriclilies): Unify with 
DictAttrs on IRModule

Review comment:
   I'm not sure if this is needed, I added it as part of my initial read 
through of this section of the codebase. I'll remove it and if it's needed we 
can discuss in a later PR




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] 01/01: Enable custom images to be set in TVM Jenkinsfile

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 7018e8c7bbba7572f74bd52fab2502953ed8091b
Author: Leandro Nunes 
AuthorDate: Wed Aug 11 14:10:16 2021 +0100

Enable custom images to be set in TVM Jenkinsfile

 * This work is needed to enable automatic testing of our
   newly built Docker images as part of CI

 * The default value is set by variables in the same
   Jenkinsfile and are used when no custom values are
   provided
---
 Jenkinsfile | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/Jenkinsfile b/Jenkinsfile
index 13ab9e0..359f9a7 100755
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -53,6 +53,21 @@ ci_qemu = "tlcpack/ci-qemu:v0.07"
 ci_arm = "tlcpack/ci-arm:v0.06"
 // <--- End of regex-scanned config.
 
+// Parameters to allow overriding (in Jenkins UI), the images
+// to be used by a given build. When provided, they take precedence
+// over default values above.
+properties([
+  parameters([
+string(name: 'ci_lint_param', defaultValue: ""),
+string(name: 'ci_cpu_param',  defaultValue: ""),
+string(name: 'ci_gpu_param',  defaultValue: ""),
+string(name: 'ci_wasm_param', defaultValue: ""),
+string(name: 'ci_i386_param', defaultValue: ""),
+string(name: 'ci_qemu_param', defaultValue: ""),
+string(name: 'ci_arm_param',  defaultValue: "")
+  ])
+])
+
 // tvm libraries
 tvm_runtime = "build/libtvm_runtime.so, build/config.cmake"
 tvm_lib = "build/libtvm.so, " + tvm_runtime
@@ -107,6 +122,30 @@ def cancel_previous_build() {
 
 cancel_previous_build()
 
+stage('Prepare') {
+  node('CPU') {
+// When something is provided in ci_*_param, use it, otherwise default 
with ci_*
+ci_lint = ci_lint_param ?: ci_lint
+ci_cpu = ci_cpu_param ?: ci_cpu
+ci_gpu = ci_gpu_param ?: ci_gpu
+ci_wasm = ci_wasm_param ?: ci_wasm
+ci_i386 = ci_i386_param ?: ci_i386
+ci_qemu = ci_qemu_param ?: ci_qemu
+ci_arm = ci_arm_param ?: ci_arm
+
+sh """
+  echo "Docker images being used in this build:"
+  echo " ci_lint = ${ci_lint}"
+  echo " ci_cpu  = ${ci_cpu}"
+  echo " ci_gpu  = ${ci_gpu}"
+  echo " ci_wasm = ${ci_wasm}"
+  echo " ci_i386 = ${ci_i386}"
+  echo " ci_qemu = ${ci_qemu}"
+  echo " ci_arm  = ${ci_arm}"
+"""
+  }
+}
+
 stage("Sanity Check") {
   timeout(time: max_time, unit: 'MINUTES') {
 node('CPU') {


[tvm] branch ci-docker-staging updated (c071c3d -> 7018e8c)

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard c071c3d  Enable custom images to be set in TVM Jenkinsfile
 new 7018e8c  Enable custom images to be set in TVM Jenkinsfile

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c071c3d)
\
 N -- N -- N   refs/heads/ci-docker-staging (7018e8c)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:


[GitHub] [tvm] tqchen commented on pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen commented on pull request #8760:
URL: https://github.com/apache/tvm/pull/8760#issuecomment-899659483


   background https://github.com/sphinx-gallery/sphinx-gallery/issues/853


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen opened a new pull request #8760: [DOCS] Reload the library that could retain global gpu resources

2021-08-16 Thread GitBox


tqchen opened a new pull request #8760:
URL: https://github.com/apache/tvm/pull/8760


   cc @areusch @mbs-octoml 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8705: Fix builtin_fp16.h path according to: https://discuss.tvm.apache.org/…

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8705:
URL: https://github.com/apache/tvm/pull/8705#issuecomment-899656015


   okay TQ has an explanation here: https://github.com/apache/tvm/pull/8719


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8755: Expose FTVMInferCorrectLayout Python interface

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8755:
URL: https://github.com/apache/tvm/pull/8755#issuecomment-899652765


   CC @yzhliu please review when you got time :-)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] junrushao1994 commented on pull request #8719: fixing build for Android RPC app

2021-08-16 Thread GitBox


junrushao1994 commented on pull request #8719:
URL: https://github.com/apache/tvm/pull/8719#issuecomment-899647869


   Please resolve the merge conflicts and let’s get it merged


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] lazycal opened a new issue #8759: [Bug] AlterLayout doesn't correctly wrap strided_slice with layout_transforms

2021-08-16 Thread GitBox


lazycal opened a new issue #8759:
URL: https://github.com/apache/tvm/issues/8759


   When running the AlterLayout pass on a conv followed by a strided_slice, 
altering the layout from `NCHW4c` to `NCHW`, the compiler does nothing to the 
strided_slice, while (I think) the only correct behavior is to wrap it with two 
layout_transforms. This leads to an incorrect numerical result or a crash at 
InferType, depending on the concrete input shape,
as shown in the following code snippet:
   ```python
   import tvm
   from tvm import relay
   from tvm.relay import transform
   from tvm.relay.testing.temp_op_attr import TempOpAttr
   import numpy as np
   
   def test1(x_shape, w_shape):
       def before():
           x = relay.var("x", shape=x_shape)
           weight = relay.var("weight", shape=w_shape)
           y = relay.nn.conv2d(
               x,
               weight,
               kernel_size=(3, 3),
               padding=(1, 1),
               data_layout="NCHW4c",
               kernel_layout="OIHW4i4o",
           )
           y = relay.strided_slice(y, begin=[0, 0], end=[1, -1], strides=[1, 8])
           y = relay.Function([x, weight], y)
           return tvm.IRModule.from_expr(y)
   
       def alter_conv2d(attrs, inputs, tinfos, out_type):
           data, weight = inputs
           new_attrs = dict(attrs)
           new_attrs["data_layout"] = "NCHW"
           new_attrs["kernel_layout"] = "OIHW"
           return relay.nn.conv2d(data, weight, **new_attrs)
   
       with TempOpAttr("nn.conv2d", "FTVMAlterOpLayout", alter_conv2d):
           be = transform.InferType()(before())
           print('=' * 40, 'before', '=' * 40)
           print(be)
           af = transform.AlterOpLayout()(be)
           print('=' * 40, 'after', '=' * 40)
           print(af)
       xnp = np.random.rand(*x_shape).astype(np.float32)
       wnp = np.random.rand(*w_shape).astype(np.float32)
       be_res = relay.create_executor("debug", be).evaluate()(xnp, wnp).numpy()
       af_res = relay.create_executor("debug", af).evaluate()(xnp, wnp).numpy()
       tvm.testing.assert_allclose(be_res, af_res, rtol=1e-3, atol=1e-3)
   
   test1(x_shape=(1, 1, 1, 1, 4), w_shape=(9, 1, 3, 3, 4, 4))  # incorrect numerical result
   # test1(x_shape=(1, 1, 1, 1, 4), w_shape=(11, 1, 3, 3, 4, 4))  # crash at InferType
   ```
   The module before:
   ```cpp
   def @main(%x: Tensor[(1, 1, 1, 1, 4), float32], %weight: Tensor[(9, 1, 3, 3, 
4, 4), float32]) -> Tensor[(1, 1, 1, 1, 4), float32] {
 %0 = nn.conv2d(%x, %weight, padding=[1, 1, 1, 1], kernel_size=[3, 3], 
data_layout="NCHW4c", kernel_layout="OIHW4i4o") /* ty=Tensor[(1, 9, 1, 1, 4), 
float32] */;
 strided_slice(%0, begin=[0, 0], end=[1, -1], strides=[1, 8], axes=None) /* 
ty=Tensor[(1, 1, 1, 1, 4), float32] */
   }
   ```
   and after:
   ```cpp
   def @main(%x: Tensor[(1, 1, 1, 1, 4), float32], %weight: Tensor[(9, 1, 3, 3, 
4, 4), float32]) -> Tensor[(1, 1, 1, 1, 4), float32] {
 %0 = layout_transform(%x, src_layout="NCHW4c", dst_layout="NCHW") /* 
ty=Tensor[(1, 4, 1, 1), float32] */;
 %1 = layout_transform(%weight, src_layout="OIHW4i4o", dst_layout="OIHW") 
/* ty=Tensor[(36, 4, 3, 3), float32] */;
 %2 = nn.conv2d(%0, %1, padding=[1, 1, 1, 1], kernel_size=[3, 3]) /* 
ty=Tensor[(1, 36, 1, 1), float32] */;
 %3 = strided_slice(%2, begin=[0, 0], end=[1, -1], strides=[1, 8], 
axes=None) /* ty=Tensor[(1, 5, 1, 1), float32] */;
 layout_transform(%3, src_layout="NCHW", dst_layout="NCHW4c") /* 
ty=Tensor[(1, 1, 1, 1, 4), float32] */
   }
   ```
   Specifically, I am doing `conv_NCHW4c_out[:,::8,...]` (an 8-stride slice on 
the primal `C` dimension of `NCHW4c`). After altering the layout to `NCHW`, the 
compiler does not wrap strided_slice with any layout_transforms nor adjust 
its attributes, so the semantics change to `conv_NCHW_out[:,::8,...]`, which 
means picking 1 channel every 8, while what we actually need is to pick 4 
channels every 4*8=32 channels of `conv_NCHW_out`.
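   
   A tiny NumPy illustration of the mismatch (toy numbers, unrelated to the 
actual conv output; each channel's value is its channel index):
   
   ```python
   import numpy as np
   
   nchw = np.arange(36).reshape(1, 36, 1, 1)                      # NCHW, C = 36
   nchw4c = nchw.reshape(1, 9, 4, 1, 1).transpose(0, 1, 3, 4, 2)  # NCHW4c, C = 9, c = 4
   
   # Slicing the packed layout with stride 8 keeps packed groups 0 and 8,
   # i.e. original channels 0-3 and 32-35 (4 channels out of every 32).
   print(sorted(nchw4c[:, ::8].flatten().tolist()))  # [0, 1, 2, 3, 32, 33, 34, 35]
   
   # Naively reusing the same slice on plain NCHW keeps channels 0, 8, 16, 24, 32.
   print(nchw[:, ::8].flatten().tolist())            # [0, 8, 16, 24, 32]
   ```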
   
   It seems that `StridedSliceInferCorrectLayout` is responsible for this.
   
   BTW, the layout_transform seems weird in the latter IR:
   ```cpp
   %3 = strided_slice(%2, begin=[0, 0], end=[1, -1], strides=[1, 8], axes=None) 
/* ty=Tensor[(1, 5, 1, 1), float32] */;
   layout_transform(%3, src_layout="NCHW", dst_layout="NCHW4c") /* 
ty=Tensor[(1, 1, 1, 1, 4), float32] */
   ```
   The resulting tensor has a smaller shape `(1,1,1,1,4)` than the pre-transform 
one `(1,5,1,1)`. I think the reason is that `(1,5,1,1)` is not a valid input to 
be converted to the `NCHW4c` layout, and I thought layout_transform should be 
able to detect and reject that?
   
   ## Environment
   - TVM: commit e334942db002019979438971440d33ece16585a3
   - CUDA version: 10.0
   - System: Ubuntu 16.04
   - GCC 5.4
   - Build options: -DUSE_RELAY_DEBUG=ON -DUSE_CUBLAS=ON -DUSE_LLVM=ON 
-DUSE_CUDA=ON -DUSE_CUDNN=ON
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the messag

[tvm] 01/01: Enable custom images to be set in TVM Jenkinsfile

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit c071c3d2b28738dc7e6ea06065031bf42ca09c9d
Author: Leandro Nunes 
AuthorDate: Wed Aug 11 14:10:16 2021 +0100

Enable custom images to be set in TVM Jenkinsfile

 * This work is needed to enable automatic testing of our
   newly built Docker images as part of CI

 * The default value is set by variables in the same
   Jenkinsfile and are used when no custom values are
   provided
---
 Jenkinsfile | 39 +++
 1 file changed, 39 insertions(+)

diff --git a/Jenkinsfile b/Jenkinsfile
index 13ab9e0..359f9a7 100755
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -53,6 +53,21 @@ ci_qemu = "tlcpack/ci-qemu:v0.07"
 ci_arm = "tlcpack/ci-arm:v0.06"
 // <--- End of regex-scanned config.
 
+// Parameters to allow overriding (in Jenkins UI), the images
+// to be used by a given build. When provided, they take precedence
+// over default values above.
+properties([
+  parameters([
+string(name: 'ci_lint_param', defaultValue: ""),
+string(name: 'ci_cpu_param',  defaultValue: ""),
+string(name: 'ci_gpu_param',  defaultValue: ""),
+string(name: 'ci_wasm_param', defaultValue: ""),
+string(name: 'ci_i386_param', defaultValue: ""),
+string(name: 'ci_qemu_param', defaultValue: ""),
+string(name: 'ci_arm_param',  defaultValue: "")
+  ])
+])
+
 // tvm libraries
 tvm_runtime = "build/libtvm_runtime.so, build/config.cmake"
 tvm_lib = "build/libtvm.so, " + tvm_runtime
@@ -107,6 +122,30 @@ def cancel_previous_build() {
 
 cancel_previous_build()
 
+stage('Prepare') {
+  node('CPU') {
+// When something is provided in ci_*_param, use it, otherwise default with ci_*
+ci_lint = ci_lint_param ?: ci_lint
+ci_cpu = ci_cpu_param ?: ci_cpu
+ci_gpu = ci_gpu_param ?: ci_gpu
+ci_wasm = ci_wasm_param ?: ci_wasm
+ci_i386 = ci_i386_param ?: ci_i386
+ci_qemu = ci_qemu_param ?: ci_qemu
+ci_arm = ci_arm_param ?: ci_arm
+
+sh """
+  echo "Docker images being used in this build:"
+  echo " ci_lint = ${ci_lint}"
+  echo " ci_cpu  = ${ci_cpu}"
+  echo " ci_gpu  = ${ci_gpu}"
+  echo " ci_wasm = ${ci_wasm}"
+  echo " ci_i386 = ${ci_i386}"
+  echo " ci_qemu = ${ci_qemu}"
+  echo " ci_arm  = ${ci_arm}"
+"""
+  }
+}
+
 stage("Sanity Check") {
   timeout(time: max_time, unit: 'MINUTES') {
 node('CPU') {


[tvm] branch ci-docker-staging updated (a3f60a3 -> c071c3d)

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


omit a3f60a3  Enable custom images to be set in TVM Jenkinsfile
 new c071c3d  Enable custom images to be set in TVM Jenkinsfile

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (a3f60a3)
\
 N -- N -- N   refs/heads/ci-docker-staging (c071c3d)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:


[GitHub] [tvm] Mousius opened a new pull request #8758: Remove old AOT Executor code

2021-08-16 Thread GitBox


Mousius opened a new pull request #8758:
URL: https://github.com/apache/tvm/pull/8758


   This removes the old AOT execution functions that relied on the model descriptor, which was removed in https://github.com/apache/tvm/pull/8280.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] MasterJH5574 edited a comment on pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 edited a comment on pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#issuecomment-899579152


   @junrushao1994 I think the documentation for `UnifyThreadBinding` may need further polish. It would be great if you could help give some comments on the document 🀐


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] MasterJH5574 commented on pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 commented on pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#issuecomment-899579152


   @junrushao1994 I think the documents for `UnifyThreadBinding` may need 
further polish. Would be great if you can help give some comments on the 
document 🀐


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch ci-docker-staging updated (c2939b4 -> 61b02c3)

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.


 discard c2939b4  i wish we could have nice things
 discard 9241dbb  display all env vars
omit d4b95b1  is prod Jenkins too old to set CI=???
omit 8f1d4cd  fix again
omit 90a1ec3  fix empty string case
omit 2b69c30  clean up task_build
omit e80ec7f  clean up dockerfile
omit 4ae07da  remove -j flag from Jenkinsfile since it is useless now
omit 1b424f1  uncomment cmake
omit efb2935  hardcode build -j
omit 2f6eb9f  actually use --cpuset-cpus...
omit 2f2472e  Use all available ARM cpus
omit 58c58f9  EXECUTOR_NUMBER is indeed 0-based
omit 3f5e543  black format
omit 3840d86  commit num cpus hook
omit 247d14a  rename scheduler
omit f07829c  Fix using nvcc from xdist and also whenever stdin is closed :|
omit 4867fa8  why is it running so many tests?
omit 8bb559a  fix typo
omit 4217cdc  fix unbound local and only run --parallel for build and CPU 
integration
omit bde8859  serialize test_tvm_testing_features
omit c89a0a9  Try pytest-xdist (constrained to 2 CPU max for CI)
 add 2e63568  [Docs][UnitTest] Updated target parametrization documentation 
(#8724)
 add 9586ee2  increase atol for float32 (#8712)
 add 61b02c3  Enable custom images to be set in TVM Jenkinsfile

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (c2939b4)
\
 N -- N -- N   refs/heads/ci-docker-staging (61b02c3)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile|  55 +++--
 conftest.py|  16 ---
 docker/bash.sh |  83 +++---
 docs/dev/pytest_target_parametrization.rst | 127 -
 python/tvm/contrib/nvcc.py |   5 +-
 tests/python/topi/python/test_topi_conv2d_nchw.py  |   2 +-
 .../unittest/test_auto_scheduler_search_policy.py  |   9 +-
 tests/scripts/setup-pytest-env.sh  |  29 -
 tests/scripts/task_build.sh|  15 +--
 tests/scripts/task_python_frontend.sh  |   2 +-
 tests/scripts/task_python_integration.sh   |  21 ++--
 tests/scripts/task_python_integration_gpuonly.sh   |   1 -
 tests/scripts/task_python_unittest.sh  |   9 +-
 tests/scripts/task_python_vta_fsim.sh  |   4 +-
 tests/scripts/task_python_vta_tsim.sh  |   4 +-
 15 files changed, 164 insertions(+), 218 deletions(-)


[tvm] branch ci-docker-stagin created (now 61b02c3)

2021-08-16 Thread areusch
This is an automated email from the ASF dual-hosted git repository.

areusch pushed a change to branch ci-docker-stagin
in repository https://gitbox.apache.org/repos/asf/tvm.git.


  at 61b02c3  Enable custom images to be set in TVM Jenkinsfile

No new revisions were added by this update.


[GitHub] [tvm] areusch commented on pull request #8721: [CI] Enable custom images to be set in TVM Jenkinsfile

2021-08-16 Thread GitBox


areusch commented on pull request #8721:
URL: https://github.com/apache/tvm/pull/8721#issuecomment-899569347


   OK, just pushing this to staging again long enough that we can see it's launching containers correctly.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] kueitang commented on pull request #8755: Expose FTVMInferCorrectLayout Python interface

2021-08-16 Thread GitBox


kueitang commented on pull request #8755:
URL: https://github.com/apache/tvm/pull/8755#issuecomment-899535102


   Hi @junrushao1994, the CI error is fixed now! Please take a look. Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Mousius commented on pull request #8744: Run AOT tests against reference system

2021-08-16 Thread GitBox


Mousius commented on pull request #8744:
URL: https://github.com/apache/tvm/pull/8744#issuecomment-899531041


   @areusch / @leandron I think this is pending a Docker update to add the files from https://github.com/apache/tvm/pull/8514?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Lunderberg commented on pull request #8451: [UnitTests] Require cached fixtures to be copy-able, with opt-in.

2021-08-16 Thread GitBox


Lunderberg commented on pull request #8451:
URL: https://github.com/apache/tvm/pull/8451#issuecomment-899491756


   After the post-flaky-test CI restart, CI has passed.  Any last changes, 
@areusch ?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] MasterJH5574 commented on a change in pull request #8716: [TensorIR][M2a] Parallel, Vectorize, Bind & Unroll

2021-08-16 Thread GitBox


MasterJH5574 commented on a change in pull request #8716:
URL: https://github.com/apache/tvm/pull/8716#discussion_r689502097



##
File path: src/tir/transforms/flatten_buffer.cc
##
@@ -140,7 +140,10 @@ class BufferFlattener : public StmtExprMutator {
  /*var=*/std::move(var),
  /*iter_type=*/IterVarType::kThreadIndex,
  /*thread_tag=*/thread_tag);
-String attr_key = thread_tag == "vthread" ? attr::virtual_thread : 
attr::thread_extent;
+String attr_key = (thread_tag == "vthread" || thread_tag == "vthread.x" ||
+   thread_tag == "vthread.y" || thread_tag == "vthread.z")
+  ? attr::virtual_thread
+  : attr::thread_extent;

Review comment:
   It's implemented in the latest two commits :-)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mikepapadim commented on issue #8674: [CI] Report pytest --junitxml results when tests fail

2021-08-16 Thread GitBox


mikepapadim commented on issue #8674:
URL: https://github.com/apache/tvm/issues/8674#issuecomment-899476639


   Ongoing PR: #8757


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] mikepapadim opened a new pull request #8757: [CI] Modify Jenkinfile to always display junit report, fix for #8674

2021-08-16 Thread GitBox


mikepapadim opened a new pull request #8757:
URL: https://github.com/apache/tvm/pull/8757


   This is a fix for #8674.
   
   It ensures that `junit` reports are always displayed, even when tests fail.
   
   
   @areusch 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


tqchen commented on a change in pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#discussion_r689492820



##
File path: include/tvm/ir/attrs.h
##
@@ -232,6 +233,72 @@ class DictAttrs : public Attrs {
*/
   TVM_DLL explicit DictAttrs(Map dict);
 
+  // Utils for accessing attributes
+  // This needs to be on DictAttrs, not DictAttrsNode because we return the 
default
+  // value if DictAttrsNode is not defined.
+  /*!
+   * \brief Get a function attribute.
+   *
+   * \param attr_key The attribute key.
+   * \param default_value The default value if the key does not exist, 
defaults to nullptr.
+   *
+   * \return The result
+   *
+   * \tparam TOBjectRef the expected object type.
+   * \throw Error if the key exists but the value does not match TObjectRef
+   *
+   * \code
+   *
+   *  void GetAttrExample(const BaseFunc& f) {
+   *auto value = f->attrs.GetAttr("AttrKey", 0);

Review comment:
   It would be great to keep the GetAttr and HasNonZeroAttr functions in BaseFunc and IRModule, based on these considerations:
   - C0: It removes one level of indirection (f->GetAttr vs f->attrs.GetAttr) and gives clearer documentation (since developers usually look up docs on the Function or IRModule themselves).
   - C1: API consistency: WithAttr directly operates on the function and module, and functions with related functionality should ideally be made consistent with this usage.
   - C2: If a future refactor changes DictAttr => Map, the API can be kept consistent in a backward-compatible way.
   
   We can of course keep a common impl as well and redirect in the function and module case.
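   
   For concreteness, a small Python sketch of the redirection pattern being suggested (illustrative only; the names here are not the actual TVM C++ API):
   ```python
   # Sketch of "keep a common impl and redirect": one attribute-lookup
   # implementation lives on the attrs container, and the function/module
   # wrappers forward to it so callers avoid the extra indirection.
   class DictAttrs:
       def __init__(self, attrs=None):
           self._dict = dict(attrs or {})

       def get_attr(self, key, default=None):
           return self._dict.get(key, default)


   class BaseFunc:
       def __init__(self, attrs=None):
           self.attrs = DictAttrs(attrs)

       # Redirect: f.get_attr(...) forwards to the common impl on f.attrs.
       def get_attr(self, key, default=None):
           return self.attrs.get_attr(key, default)


   f = BaseFunc({"AttrKey": 0})
   assert f.get_attr("AttrKey") == 0        # preferred, one less indirection
   assert f.attrs.get_attr("AttrKey") == 0  # common impl still accessible
   ```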
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


tqchen commented on a change in pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#discussion_r689492820



##
File path: include/tvm/ir/attrs.h
##
@@ -232,6 +233,72 @@ class DictAttrs : public Attrs {
*/
   TVM_DLL explicit DictAttrs(Map dict);
 
+  // Utils for accessing attributes
+  // This needs to be on DictAttrs, not DictAttrsNode because we return the 
default
+  // value if DictAttrsNode is not defined.
+  /*!
+   * \brief Get a function attribute.
+   *
+   * \param attr_key The attribute key.
+   * \param default_value The default value if the key does not exist, 
defaults to nullptr.
+   *
+   * \return The result
+   *
+   * \tparam TOBjectRef the expected object type.
+   * \throw Error if the key exists but the value does not match TObjectRef
+   *
+   * \code
+   *
+   *  void GetAttrExample(const BaseFunc& f) {
+   *auto value = f->attrs.GetAttr("AttrKey", 0);

Review comment:
   It would be great to keep the GetAttr and HasNonZeroAttr functions in BaseFunc and IRModule, based on these considerations:
   - C0: It removes one level of indirection (f->GetAttr vs f->attrs.GetAttr) and gives clearer documentation (since developers usually look up docs on the Function or IRModule themselves).
   - C1: API consistency: WithAttr directly operates on the function and module, and functions with related functionality should ideally be made consistent with this usage.
   - C2: If a future refactor changes DictAttr => Map, the API can be kept consistent in a backward-compatible way.
   
   We can of course keep the common impl as well and redirect in the function and module case.
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[tvm] branch main updated: Fix builtin_fp16.h path according to: https://discuss.tvm.apache.org/… (#8705)

2021-08-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new e334942  Fix builtin_fp16.h path according to: 
https://discuss.tvm.apache.org/… (#8705)
e334942 is described below

commit e334942db002019979438971440d33ece16585a3
Author: Natan Kaminsky 
<88275124+lightricksnatankamin...@users.noreply.github.com>
AuthorDate: Mon Aug 16 15:35:03 2021 +0300

Fix builtin_fp16.h path according to: https://discuss.tvm.apache.org/… 
(#8705)
---
 src/runtime/contrib/sort/sort.cc | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/runtime/contrib/sort/sort.cc b/src/runtime/contrib/sort/sort.cc
index 4aa8c92..cee723c 100644
--- a/src/runtime/contrib/sort/sort.cc
+++ b/src/runtime/contrib/sort/sort.cc
@@ -21,13 +21,14 @@
  * \file Use standard C library call.
  */
 
-#include 
 #include 
 #include 
 
 #include 
 #include 
 
+#include "../../../../3rdparty/compiler-rt/builtin_fp16.h"
+
 namespace tvm {
 namespace contrib {
 


[GitHub] [tvm] tqchen merged pull request #8705: Fix builtin_fp16.h path according to: https://discuss.tvm.apache.org/…

2021-08-16 Thread GitBox


tqchen merged pull request #8705:
URL: https://github.com/apache/tvm/pull/8705


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #8750: Add DictAttrs to IRModule and refactor DictAttrs utility functions

2021-08-16 Thread GitBox


tqchen commented on a change in pull request #8750:
URL: https://github.com/apache/tvm/pull/8750#discussion_r689494759



##
File path: include/tvm/target/target.h
##
@@ -54,7 +54,7 @@ class TargetNode : public Object {
   /*! \brief Keys for this target */
   Array keys;
   /*! \brief Collection of attributes */
-  Map attrs;
+  Map attrs;  // TODO(@electriclilies): Unify with 
DictAttrs on IRModule

Review comment:
   Target attrs are different from the IRModule attrs: they contain attributes about a target (e.g. cuda) that can be shared across IRModules or functions.
   
   Please double-check whether we really want to remove target.attrs (my guess is that it is unlikely); it would be great to confirm with the original authors of the target (e.g. @zxybazh).

##
File path: include/tvm/ir/attrs.h
##
@@ -232,6 +233,72 @@ class DictAttrs : public Attrs {
*/
   TVM_DLL explicit DictAttrs(Map dict);
 
+  // Utils for accessing attributes
+  // This needs to be on DictAttrs, not DictAttrsNode because we return the 
default
+  // value if DictAttrsNode is not defined.
+  /*!
+   * \brief Get a function attribute.
+   *
+   * \param attr_key The attribute key.
+   * \param default_value The default value if the key does not exist, 
defaults to nullptr.
+   *
+   * \return The result
+   *
+   * \tparam TOBjectRef the expected object type.
+   * \throw Error if the key exists but the value does not match TObjectRef
+   *
+   * \code
+   *
+   *  void GetAttrExample(const BaseFunc& f) {
+   *auto value = f->attrs.GetAttr("AttrKey", 0);

Review comment:
   It would be great to keep the GetAttr and HasNonZeroAttr functions in BaseFunc and IRModule, for the following considerations:
   - C0: It removes one level of indirection (f->GetAttr vs f->attrs.GetAttr) and gives clearer documentation.
   - C1: API consistency: WithAttr directly operates on the function and module, and functions with related functionality should ideally be made consistent with this usage.
   - C2: If a future refactor changes DictAttr => Map, the API can be kept consistent in a backward-compatible way.
   
   We can of course keep the common impl as well and redirect in the function and module case.
   
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Mousius commented on pull request #8744: Run AOT tests against reference system

2021-08-16 Thread GitBox


Mousius commented on pull request #8744:
URL: https://github.com/apache/tvm/pull/8744#issuecomment-899472483


   Hi @areusch, we could use Zephyr to wrap this up, but I think it makes sense in this case to stay close to the existing AOT test utils rather than introduce a dependency on Zephyr and generate a Zephyr project to drive via `west`. The other issue is that the Zephyr infrastructure isn't present in `ci_cpu`, whereas the files introduced alongside the FVP in https://github.com/apache/tvm/pull/8514 are.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



