[GitHub] [incubator-tvm] ashutoshparkhi commented on issue #6340: Faster RCNN: Unable to compile for CPU because topk fails

2020-08-27 Thread GitBox


ashutoshparkhi commented on issue #6340:
URL: https://github.com/apache/incubator-tvm/issues/6340#issuecomment-682355679


   While trying to compile with commit a64feed, compilation goes through.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhanghaohit commented on pull request #6125: [VTA][OpenCL] add device_annot support in graphpack

2020-08-27 Thread GitBox


zhanghaohit commented on pull request #6125:
URL: https://github.com/apache/incubator-tvm/pull/6125#issuecomment-682351681


   > I am wondering whether VTA's graph annotation can be unified into the 
relay's heterogeneous execution feature: #4178
   
   I think #4178 is for VM, while graph annotation is for graph runtime?







[GitHub] [incubator-tvm] zhanghaohit closed pull request #6125: [VTA][OpenCL] add device_annot support in graphpack

2020-08-27 Thread GitBox


zhanghaohit closed pull request #6125:
URL: https://github.com/apache/incubator-tvm/pull/6125


   







[GitHub] [incubator-tvm] zhanghaohit commented on pull request #6125: [VTA][OpenCL] add device_annot support in graphpack

2020-08-27 Thread GitBox


zhanghaohit commented on pull request #6125:
URL: https://github.com/apache/incubator-tvm/pull/6125#issuecomment-682350954


   > #4178
   
   







[GitHub] [incubator-tvm] zhanghaohit removed a comment on pull request #6125: [VTA][OpenCL] add device_annot support in graphpack

2020-08-27 Thread GitBox


zhanghaohit removed a comment on pull request #6125:
URL: https://github.com/apache/incubator-tvm/pull/6125#issuecomment-682350954


   > #4178
   
   







[GitHub] [incubator-tvm] junrushao1994 removed a comment on pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-27 Thread GitBox


junrushao1994 removed a comment on pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-682166368


   I don't know why the CI was retriggered...







[incubator-tvm] branch master updated: typo (#6352)

2020-08-27 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 02b643b  typo (#6352)
02b643b is described below

commit 02b643be282adc57f00ddd30fba7d35a2be91dbd
Author: Andrew Liu 
AuthorDate: Thu Aug 27 21:27:38 2020 -0700

typo (#6352)
---
 docs/dev/virtual_machine.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/dev/virtual_machine.rst b/docs/dev/virtual_machine.rst
index 059878f..ae6cac2 100644
--- a/docs/dev/virtual_machine.rst
+++ b/docs/dev/virtual_machine.rst
@@ -110,7 +110,7 @@ InvokePacked
 Invoke the packed function denoted by ``packed_index``. The ``arity``
 and ``output_size`` are used to inform the VM how many inputs and
 outputs to expect. ``packed_args`` stores the list of argument registers. Note 
``Index``
-is an alais of ``int64_t``, and it will be used in other instructions as well.
+is an alias of ``int64_t``, and it will be used in other instructions as well.
 
 AllocTensor
 ^^^



[GitHub] [incubator-tvm] zhiics commented on pull request #6352: [Maintenance] Fix typo

2020-08-27 Thread GitBox


zhiics commented on pull request #6352:
URL: https://github.com/apache/incubator-tvm/pull/6352#issuecomment-682317858


   Thanks @hypercubestart @MarisaKirisame 







[GitHub] [incubator-tvm] zhiics merged pull request #6352: [Maintenance] Fix typo

2020-08-27 Thread GitBox


zhiics merged pull request #6352:
URL: https://github.com/apache/incubator-tvm/pull/6352


   







[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478822568



##
File path: include/tvm/runtime/vm/bytecode.h
##
@@ -204,6 +207,13 @@ struct Instruction {
   RegName tensor;
   RegName newshape;
 } reshape_tensor;
+struct /* DeviceCopy Operands */ {
+  RegName src;
+  /*! \brief The source device type. */
+  Index src_device_type;

Review comment:
   As per offline discussion, we can keep device type for now since this is not quite a typical case for inference. We can have a separate PR to use device id in the VM if it is needed in the future.









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478821758



##
File path: src/relay/analysis/context_analysis.cc
##
@@ -0,0 +1,697 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/analysis/context_analysis.cc
+ * \brief A pass for analyzing device attribute of each IR node.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+using PackedAnalysisResultMap = Map>;
+using AnalysisResultMap =
+std::unordered_map;
+
+namespace analysis {
+
+// Cache ops
+static const Op& device_copy_op = Op::Get("device_copy");
+static const Op& alloc_storage_op = Op::Get("memory.alloc_storage");
+static const Op& alloc_tensor_op = Op::Get("memory.alloc_tensor");
+static const Op& shape_of_op = Op::Get("vm.shape_of");
+static const Op& invoke_tvm_op = Op::Get("vm.invoke_tvm_op");
+static const Op& shape_func_of = Op::Get("vm.shape_func");
+static const Op& reshape_tensor_op = Op::Get("vm.reshape_tensor");
+
+class DeviceDomain;
+using DeviceDomainPtr = std::shared_ptr;
+
+/*
+ * \brief A class to represent the device of a domain, i.e. a segment of relay 
program.
+ */
+class DeviceDomain {
+ public:
+  // Construct an empty domain.
+  DeviceDomain() {
+ctx_.device_type = static_cast(-1);
+ctx_.device_id = -1;
+  }
+
+  // Construct a domain based on a given context.
+  explicit DeviceDomain(const TVMContext& ctx) : ctx_(ctx) {}
+
+  // Check if the current domain is empty.
+  bool IsEmptyDomain() const {
+return static_cast(ctx_.device_type) == -1 && ctx_.device_id == -1;
+  }
+
+  // Check if the current domain equals the other one.
+  bool operator==(const DeviceDomain& other) const {
+return ctx_.device_type == other.ctx_.device_type && ctx_.device_id == 
other.ctx_.device_id;
+  }
+
+  bool operator!=(const DeviceDomain& other) const { return !(*this == other); 
}
+
+ private:
+  // Create a hash for a domain.
+  struct Hash {
+size_t operator()(const DeviceDomainPtr& domain) const {
+  if (domain->IsEmptyDomain()) {
+return (size_t)(domain.get());
+  } else {
+size_t const 
h1(std::hash()(static_cast(domain->ctx_.device_type)));
+size_t const h2(std::hash()(domain->ctx_.device_id));
+return h1 ^ (h2 << 1);
+  }
+}
+  };
+
+  // Create an equality for domains.
+  struct Equal {
+   public:
+bool operator()(const DeviceDomainPtr& lhs, const DeviceDomainPtr& rhs) 
const {
+  // We compare the pointer for empty domains.
+  if (lhs->IsEmptyDomain() && rhs->IsEmptyDomain()) return lhs.get() == 
rhs.get();
+
+  // Otherwise device type and id are used to check equality.
+  return (*lhs.get() == *rhs.get());
+}
+  };
+
+  /* \brief The device to be assigned to the current domain. */
+  TVMContext ctx_;
+
+  friend DeviceDomainPtr Join(const DeviceDomainPtr& lhs, const 
DeviceDomainPtr& rhs);
+  friend class ContextAnalyzer;
+};
+
+// Join two domains.
+DeviceDomainPtr Join(const DeviceDomainPtr& lhs, const DeviceDomainPtr& rhs) {
+  if (lhs->IsEmptyDomain() && rhs->IsEmptyDomain()) {
+return lhs;
+  } else if (lhs->IsEmptyDomain()) {
+return rhs;
+  } else if (rhs->IsEmptyDomain()) {
+return lhs;
+  } else {
+CHECK(*lhs.get() == *rhs.get()) << "All expressions must have a singular 
device to unify";
+return lhs;
+  }
+}
+
+/*
+ * \brief Compute on which device each sub-expression will execute. A union 
find
+ * algorithm is used to assign and merge the context domains.
+ */
+class ContextAnalyzer : public ExprVisitor {

Review comment:
   Yeah, I have iteratively visited let nodes in the pass. 
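   The `Join` logic in the quoted diff (an empty domain adopts the other side; two concrete domains must agree on device type and id) can be sketched in Python. The class and function names below are illustrative only, not TVM's actual API:

```python
class DeviceDomain:
    """A device assignment for a program segment; None fields mean 'unconstrained'."""

    def __init__(self, device_type=None, device_id=None):
        self.device_type = device_type
        self.device_id = device_id

    def is_empty(self):
        # Mirrors IsEmptyDomain(): no device has been assigned yet.
        return self.device_type is None and self.device_id is None


def join(lhs, rhs):
    """Unify two domains: an empty domain adopts the other side; two
    concrete domains must agree, mirroring the CHECK in the C++ diff."""
    if lhs.is_empty():
        return rhs
    if rhs.is_empty():
        return lhs
    assert (lhs.device_type, lhs.device_id) == (rhs.device_type, rhs.device_id), \
        "All expressions must have a singular device to unify"
    return lhs


cpu = DeviceDomain("cpu", 0)
merged = join(DeviceDomain(), cpu)  # an empty domain unifies to the concrete one
```

   In the union-find scheme described in the diff, `join` is the merge step: conflicting concrete assignments are an analysis error, which is why the C++ code raises a `CHECK` failure.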









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478821957



##
File path: python/tvm/relay/transform/memory_alloc.py
##
@@ -66,7 +85,7 @@ def is_reshape_only(func):
 class ManifestAllocPass(ExprMutator):

Review comment:
   A TODO has been added.









[GitHub] [incubator-tvm] zhiics commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


zhiics commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478821823



##
File path: src/relay/analysis/context_analysis.cc
##

Review comment:
   added









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6348: [Ansor][AutoTVM v2.0] Phase 2: Update heavy operations with parallel_for

2020-08-27 Thread GitBox


jcf94 commented on a change in pull request #6348:
URL: https://github.com/apache/incubator-tvm/pull/6348#discussion_r478806015



##
File path: src/auto_scheduler/search_policy/sketch_policy.cc
##
@@ -322,32 +323,40 @@ Array SketchPolicyNode::GenerateSketches() {
 }
 
 Array SketchPolicyNode::SampleInitPopulation(const Array& 
sketches, int out_size) {
-  int fail_ct = 0;
+  std::atomic fail_ct(0);
+  std::mutex m;
   Array out_states;
   auto tic_begin = std::chrono::high_resolution_clock::now();
 
-  // TODO(jcf94, merrymercy): Use parallel_for to run this loop in parallel
-  while (static_cast(out_states.size()) < out_size && fail_ct < 
static_cast(out_size)) {
-// Random choose a starting sketch
-// TODO(jcf94, merrymercy): Maybe choose sketches in different possibility 
for they may have
-// different potential on generating state with better performance
-State tmp_s = sketches[(rand_gen)() % sketches.size()];
-
-// Derivation rule based enumeration
-bool valid = true;
-for (const auto& rule : init_rules) {
-  if (rule->Apply(this, &tmp_s) == 
InitPopulationRule::ResultKind::kInvalid) {
-valid = false;
-break;
-  }
-}
+  support::parallel_for(
+  0, out_size, [this, &out_size, &sketches, &out_states, &fail_ct, &m](int 
i) {
+if (fail_ct >= out_size) {
+  return;
+}
 
-if (valid) {
-  out_states.push_back(std::move(tmp_s));
-} else {
-  fail_ct++;
-}
-  }
+// Random choose a starting sketch
+// TODO(jcf94, merrymercy): Maybe choose sketches in different 
possibility for they may have
+// different potential on generating state with better performance
+State tmp_s = sketches[(rand_gen)() % sketches.size()];
+// Derivation rule based enumeration
+bool valid = true;
+for (const auto& rule : init_rules) {
+  // Some rules use the random generator of SketchPolicyNode, so this 
part has to be
+  // protected

Review comment:
   You're right... After some tests there seems to be no improvement, so I've reverted this to the simple while-loop implementation.
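   The concurrency concern in the quoted diff — rule applications running under `support::parallel_for` while sharing `SketchPolicyNode`'s random generator — can be illustrated with a minimal Python sketch that guards a shared RNG and a shared output list with locks. The thread pool and names are illustrative; this is not TVM code:

```python
import random
import threading
from concurrent.futures import ThreadPoolExecutor

rng = random.Random(0)       # shared generator, analogous to rand_gen
rng_lock = threading.Lock()  # a Random instance is not guaranteed safe under concurrent use
out_states = []
out_lock = threading.Lock()

def sample_one(sketches):
    # Randomly choose a starting sketch; access to the shared RNG is protected.
    with rng_lock:
        choice = sketches[rng.randrange(len(sketches))]
    # The shared output list also needs protection.
    with out_lock:
        out_states.append(choice)

sketches = ["sketch_a", "sketch_b", "sketch_c"]
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(8):
        pool.submit(sample_one, sketches)
# The pool joins on exit, so out_states now holds all eight samples.
```

   Locking like this can serialize the hot path, which is consistent with the observation above that the parallel version brought no measurable improvement.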









[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6348: [Ansor][AutoTVM v2.0] Phase 2: Update heavy operations with parallel_for

2020-08-27 Thread GitBox


jcf94 commented on a change in pull request #6348:
URL: https://github.com/apache/incubator-tvm/pull/6348#discussion_r478806015



##
File path: src/auto_scheduler/search_policy/sketch_policy.cc
##

Review comment:
   You're right... There seems to be no improvement, so I've reverted this to the simple while-loop implementation.









[GitHub] [incubator-tvm] electriclilies opened a new pull request #6353: [RELAY][DYN] Dynamic UpSampling3D Op

2020-08-27 Thread GitBox


electriclilies opened a new pull request #6353:
URL: https://github.com/apache/incubator-tvm/pull/6353


   This PR implements the dynamic version of the UpSampling3D relay op. After this is merged, we will be able to remove the final infer_value calls from the ONNX importer, allowing us to import truly dynamic ONNX graphs into relay. I also cleaned up some documentation and tests for upsampling and relay.
   
   It is very similar to the dynamic UpSampling op (see #6273).
   
   @mbrookhart @zhiics @icemelon9 please take a look







[GitHub] [incubator-tvm] tkonolige commented on pull request #6331: [TESTS] Refactor tests to run on either the GPU or CPU

2020-08-27 Thread GitBox


tkonolige commented on pull request #6331:
URL: https://github.com/apache/incubator-tvm/pull/6331#issuecomment-682256268


   @tqchen I'm getting a couple of errors with CUDA initialization failing. I'm not really sure of the cause, but it seems like it might have to do with forking.







[GitHub] [incubator-tvm] hypercubestart opened a new pull request #6352: [Maintenance] Fix typo

2020-08-27 Thread GitBox


hypercubestart opened a new pull request #6352:
URL: https://github.com/apache/incubator-tvm/pull/6352


   







[incubator-tvm-site] branch asf-site updated: Build at Thu Aug 27 16:37:14 PDT 2020

2020-08-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 5c3e530  Build at Thu Aug 27 16:37:14 PDT 2020
5c3e530 is described below

commit 5c3e5309a3662e99cfd976d6b1740676f56e5295
Author: tqchen 
AuthorDate: Thu Aug 27 16:37:15 2020 -0700

Build at Thu Aug 27 16:37:14 PDT 2020
---
 atom.xml   | 2 +-
 community.html | 6 +++---
 rss.xml| 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/atom.xml b/atom.xml
index b3196b9..8a638c4 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@
  TVM
  https://tvm.apache.org"; rel="self"/>
  https://tvm.apache.org"/>
- 2020-08-25T10:45:05-07:00
+ 2020-08-27T16:37:12-07:00
  https://tvm.apache.org
  

diff --git a/community.html b/community.html
index 73e53f6..fac7bb9 100644
--- a/community.html
+++ b/community.html
@@ -161,11 +161,11 @@
 Here are the relavant mail-lists:
 
 
-  https://lists.apache.org/list.html?d...@tvm.apache.org";>d...@apache.tvm.org
 development related activities
-  https://lists.apache.org/list.html?discuss-arch...@tvm.apache.org";>discuss-arch...@apache.tvm.org
 archive of discuss forum.
+  https://lists.apache.org/list.html?d...@tvm.apache.org";>d...@tvm.apache.org
 development related activities
+  https://lists.apache.org/list.html?discuss-arch...@tvm.apache.org";>discuss-arch...@tvm.apache.org
 archive of discuss forum.
 
 
-To subscribe, send an email to dev-subscr...@apache.tvm.org.
+To subscribe, send an email to dev-subscr...@tvm.apache.org.
 All discuss forum thread and github issues with RFC COMMUNITY tags are 
automatically forwarded to dev@
 
 
diff --git a/rss.xml b/rss.xml
index 433e296..de3f83f 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@
 TVM - 
 https://tvm.apache.org
 https://tvm.apache.org"; rel="self" 
type="application/rss+xml" />
-Tue, 25 Aug 2020 10:45:05 -0700
-Tue, 25 Aug 2020 10:45:05 -0700
+Thu, 27 Aug 2020 16:37:12 -0700
+Thu, 27 Aug 2020 16:37:12 -0700
 60
 
 



[incubator-tvm-site] branch master updated: change

2020-08-27 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git


The following commit(s) were added to refs/heads/master by this push:
 new 42186fc  change
42186fc is described below

commit 42186fcd34170c5a7fc3531c2f0be7ec99f53718
Author: tqchen 
AuthorDate: Thu Aug 27 16:36:51 2020 -0700

change
---
 community.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/community.md b/community.md
index 12ad004..ed38404 100644
--- a/community.md
+++ b/community.md
@@ -20,10 +20,10 @@ Welcome to the TVM community. Here are several ways that 
you can stay involved.
 As per Apache tradition, everything happens in the community(also) happens in 
the mail-list.
 Here are the relavant mail-lists:
 
-- 
[d...@apache.tvm.org](https://lists.apache.org/list.html?d...@tvm.apache.org) 
development related activities
-- 
[discuss-arch...@apache.tvm.org](https://lists.apache.org/list.html?discuss-arch...@tvm.apache.org)
 archive of discuss forum.
+- 
[d...@tvm.apache.org](https://lists.apache.org/list.html?d...@tvm.apache.org) 
development related activities
+- 
[discuss-arch...@tvm.apache.org](https://lists.apache.org/list.html?discuss-arch...@tvm.apache.org)
 archive of discuss forum.
 
-To subscribe, send an email to dev-subscr...@apache.tvm.org.
+To subscribe, send an email to dev-subscr...@tvm.apache.org.
 All discuss forum thread and github issues with RFC COMMUNITY tags are 
automatically forwarded to dev@
 
 



[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


icemelon9 commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478733360



##
File path: include/tvm/runtime/vm/bytecode.h
##
@@ -204,6 +207,13 @@ struct Instruction {
   RegName tensor;
   RegName newshape;
 } reshape_tensor;
+struct /* DeviceCopy Operands */ {
+  RegName src;
+  /*! \brief The source device type. */
+  Index src_device_type;

Review comment:
   But it limits the use of two devices with the same type, e.g., two GPUs.
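   A minimal sketch of the limitation pointed out here: if placement is keyed by device type alone, two devices of the same type (e.g. two GPUs) cannot be told apart, whereas a `(device_type, device_id)` key can. The dictionaries and values below are illustrative only:

```python
# Keying placement by device type alone: two GPUs collide.
by_type = {}
by_type["gpu"] = "tensor_on_gpu0"
by_type["gpu"] = "tensor_on_gpu1"   # overwrites: only one GPU is representable

# Keying by (device_type, device_id) keeps both devices distinct.
by_device = {}
by_device[("gpu", 0)] = "tensor_on_gpu0"
by_device[("gpu", 1)] = "tensor_on_gpu1"
```

   This is why the thread above defers a device-id-based scheme to a future PR rather than ruling it out.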









[incubator-tvm] branch master updated (30cd230 -> 1899ad8)

2020-08-27 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository.

lmzheng pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 30cd230  [BYOC][ETHOSN] Add support for quantized convolution (#6335)
 add 1899ad8  [Ansor][AutoTVM v2.0] Phase 2: Evolutionary Search (#6310)

No new revisions were added by this update.

Summary of changes:
 python/tvm/auto_scheduler/search_policy.py |  20 ++
 src/auto_scheduler/search_policy/sketch_policy.cc  | 166 +++-
 src/auto_scheduler/search_policy/sketch_policy.h   |  24 +-
 .../search_policy/sketch_policy_rules.cc   | 279 -
 .../search_policy/sketch_policy_rules.h|  57 -
 src/auto_scheduler/search_policy/utils.cc  |  65 -
 src/auto_scheduler/search_policy/utils.h   |  16 +-
 .../test_auto_scheduler_evolutionary_search.py |  75 ++
 8 files changed, 674 insertions(+), 28 deletions(-)
 create mode 100644 
tests/python/unittest/test_auto_scheduler_evolutionary_search.py



[GitHub] [incubator-tvm] mbrookhart commented on pull request #6351: Dynamic ONNX Importer

2020-08-27 Thread GitBox


mbrookhart commented on pull request #6351:
URL: https://github.com/apache/incubator-tvm/pull/6351#issuecomment-682212270


   cc @zhiics @icemelon9 







[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-27 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r478711025



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)
+class ChangeDatatype(ExprMutator):
+"""Mutator for changing the datatype of Relay programs.
+
+Example:
+
+.. code-block:: python
+
+from tvm.relay.testing.inception_v3 import get_workload
+expr, params = get_workload()
+
+def change_dtype(src, dst, expr, params):
+cdtype = ChangeDatatype(src, dst)
+expr = cdtype.visit(expr)
+expr = relay.ir_pass.infer_type(expr)
+params = dict((p, tvm.nd.array(params[p].asnumpy().astype(dst))) for p in params)
+return expr, params
+"""
+def __init__(self, src, dst):
+self.src = src
+self.dst = dst
+super().__init__()
+
+def transform_function(self, func, mod, ctx):
+return self.visit(func)
+
+def visit_constant(self, const):
+if const.data.dtype == self.src:
+return const.astype(self.dst)
+# TODO(hypercubestart): should we raise an error in this case, or return const?
+return const

Review comment:
   I'm having trouble thinking of a case where const.data.dtype != src. In 
our tests, the only test that uses relay.ConstantNode is test_batch_norm where 
there is an epsilon constant and a 1f const, but the type of these constants is 
always the same as the type of src (maybe due to type inference?)
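   As a stdlib-only illustration of the branch in question (all names here are hypothetical, not the Relay API), a constant whose dtype differs from `src`, such as an int32 index constant in a float32 model, would be returned unchanged:

```python
# Hypothetical sketch of the visit_constant branch discussed above.
# FakeConst stands in for relay.Constant; only the dtype logic matters.
class FakeConst:
    def __init__(self, dtype):
        self.dtype = dtype

    def astype(self, dst):
        # mimic casting a constant to a new datatype
        return FakeConst(dst)

def visit_constant(const, src, dst):
    if const.dtype == src:
        return const.astype(dst)
    return const  # e.g. an int32 index constant in a float32 model

assert visit_constant(FakeConst("float32"), "float32", "posites2").dtype == "posites2"
assert visit_constant(FakeConst("int32"), "float32", "posites2").dtype == "int32"
```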





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart opened a new pull request #6351: Dynamic ONNX Importer

2020-08-27 Thread GitBox


mbrookhart opened a new pull request #6351:
URL: https://github.com/apache/incubator-tvm/pull/6351


   Hello Friends,
   
   Over the last couple of months, @electriclilies and I have been working to 
add more dynamic support to relay ops, to separate the dynamic implementations 
into a dyn namespace, and to provide a pass for converting ops back to static 
forms when possible.
   
   The culmination of that work is this PR, which refactors the ONNX importer 
to directly create dynamic relay graphs instead of using infer_value to make 
them static in the importer.  Longer term, this will allow us to import dynamic 
models that we can't currently use.
   
   We don't want to cause regressions for anyone, so this PR enables the 
dynamic_to_static pass by default in the graph runtime. We tested the PR 
against the ONNX model zoo (https://github.com/onnx/models) and fixed a number 
of issues in ops that apparently hadn't been tested with dynamic shapes to date.
   
   An added benefit of this PR is that it removes a severe bottleneck in the 
infer_value calls. Models with lots of dynamic ops will import and compile much 
faster than before; Bert Squad from the ONNX model zoo imports and compiles in 
~170s on master vs ~15s with this change.
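   The dynamic-to-static rewrite described above can be sketched in plain 
Python (the op names and tuple encoding are illustrative, not the actual 
Relay pass):

```python
# Illustrative sketch: if a dynamic op's shape argument turns out to be a
# compile-time constant, rewrite the op into its static form; otherwise
# keep the dynamic op for runtime shape resolution.
def dynamic_to_static(op):
    name, args = op
    if name == "dyn.reshape" and isinstance(args.get("newshape"), tuple):
        # the shape is a constant tuple: fold it into a static reshape
        return ("reshape", args)
    return op  # shape only known at runtime: keep the dynamic form

assert dynamic_to_static(("dyn.reshape", {"data": "x", "newshape": (2, 2)}))[0] == "reshape"
assert dynamic_to_static(("dyn.reshape", {"data": "x", "newshape": "rt"}))[0] == "dyn.reshape"
```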
   
   This PR is not yet complete, we're working on adding dynamic upsampling3d 
and strided slice (#6316) to remove the last two infer value calls.
   
   Since we don't want to introduce regressions for anyone, I'd appreciate it 
if you could test any models you are currently running against this branch and 
let us know if you run into issues.
   
   Thanks!
   
   cc @masahi @jwfromm @soiferj @siju-samuel Please tag anyone else you think 
might be interested



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-27 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r478613686



##
File path: python/tvm/relay/frontend/change_datatype.py
##
@@ -0,0 +1,88 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=unused-argument
+"""Change Datatype Pass"""
+from ..function import Function
+from ..expr_functor import ExprMutator
+from ..transform.transform import function_pass
+from ..expr import var, bind
+
+# TODO(@gussmith23) what's the right opt level here?
+@function_pass(opt_level=0)

Review comment:
   oops, turns out we need the opt_level or else we can't do `ChangeDatatype(src, dst)(mod)`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jwfromm commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


jwfromm commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478697839



##
File path: python/tvm/relay/transform/memory_alloc.py
##
@@ -66,7 +85,7 @@ def is_reshape_only(func):
 class ManifestAllocPass(ExprMutator):

Review comment:
   would be great to add a TODO so this point doesn't get forgotten.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #6335: [BYOC][ETHOSN] Add support for quantized convolution

2020-08-27 Thread GitBox


masahi commented on pull request #6335:
URL: https://github.com/apache/incubator-tvm/pull/6335#issuecomment-682189911


   Thanks @mbaret @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (e35b7fc -> 30cd230)

2020-08-27 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e35b7fc  [Relay][Training] Make AutoDiff thread through global 
function. (#6336)
 add 30cd230  [BYOC][ETHOSN] Add support for quantized convolution (#6335)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/contrib/ethosn.py  |  26 +++
 src/relay/backend/contrib/ethosn/codegen.cc|  43 -
 src/relay/backend/contrib/ethosn/codegen_ethosn.h  |   1 +
 src/relay/backend/contrib/ethosn/ethosn_api.cc | 190 +++
 src/relay/backend/contrib/ethosn/ethosn_api.h  |  22 +++
 tests/python/contrib/test_ethosn/infrastructure.py |   2 +
 tests/python/contrib/test_ethosn/test_conv2d.py| 204 +
 7 files changed, 486 insertions(+), 2 deletions(-)
 create mode 100644 tests/python/contrib/test_ethosn/test_conv2d.py



[GitHub] [incubator-tvm] masahi merged pull request #6335: [BYOC][ETHOSN] Add support for quantized convolution

2020-08-27 Thread GitBox


masahi merged pull request #6335:
URL: https://github.com/apache/incubator-tvm/pull/6335


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] junrushao1994 commented on pull request #6347: [Target][Codegen] Use target class in all codegens

2020-08-27 Thread GitBox


junrushao1994 commented on pull request #6347:
URL: https://github.com/apache/incubator-tvm/pull/6347#issuecomment-682166368


   idk why the CI is retriggered...



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] areusch commented on pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-27 Thread GitBox


areusch commented on pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333#issuecomment-682164848


   @zhiics @leandron please take a look when you have a minute and explicitly 
approve if you're good w/ this change



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] areusch commented on a change in pull request #6333: Add docker/lint.sh, for running dockerized lint scripts locally

2020-08-27 Thread GitBox


areusch commented on a change in pull request #6333:
URL: https://github.com/apache/incubator-tvm/pull/6333#discussion_r478666133



##
File path: tests/lint/clang_format.sh
##
@@ -0,0 +1,23 @@
+#!/bin/bash -e
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+
+# check latest change, for squash merge into master
+./tests/lint/git-clang-format.sh HEAD~1

Review comment:
   yeah I don't want to change the actual content of the lint steps in this 
PR. but I agree with that idea.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6337: [RELAY][VM] Enable heterogeneous execution for Relay VM

2020-08-27 Thread GitBox


mbrookhart commented on a change in pull request #6337:
URL: https://github.com/apache/incubator-tvm/pull/6337#discussion_r478661229



##
File path: src/relay/analysis/context_analysis.cc
##
@@ -0,0 +1,697 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/analysis/context_analysis.cc
+ * \brief A pass for analyzing device attribute of each IR node.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+namespace tvm {
+namespace relay {
+
+using PackedAnalysisResultMap = Map<Expr, Array<Integer>>;
+using AnalysisResultMap =
+    std::unordered_map<Expr, TVMContext, runtime::ObjectPtrHash, runtime::ObjectPtrEqual>;
+
+namespace analysis {
+
+// Cache ops
+static const Op& device_copy_op = Op::Get("device_copy");
+static const Op& alloc_storage_op = Op::Get("memory.alloc_storage");
+static const Op& alloc_tensor_op = Op::Get("memory.alloc_tensor");
+static const Op& shape_of_op = Op::Get("vm.shape_of");
+static const Op& invoke_tvm_op = Op::Get("vm.invoke_tvm_op");
+static const Op& shape_func_of = Op::Get("vm.shape_func");
+static const Op& reshape_tensor_op = Op::Get("vm.reshape_tensor");
+
+class DeviceDomain;
+using DeviceDomainPtr = std::shared_ptr<DeviceDomain>;
+
+/*
+ * \brief A class to represent the device of a domain, i.e. a segment of relay program.
+ */
+class DeviceDomain {
+ public:
+  // Construct an empty domain.
+  DeviceDomain() {
+ctx_.device_type = static_cast<DLDeviceType>(-1);
+ctx_.device_id = -1;
+  }
+
+  // Construct a domain based on a given context.
+  explicit DeviceDomain(const TVMContext& ctx) : ctx_(ctx) {}
+
+  // Check if the current domain is empty.
+  bool IsEmptyDomain() const {
+return static_cast<int>(ctx_.device_type) == -1 && ctx_.device_id == -1;
+  }
+
+  // Check if the current domain equals the other one.
+  bool operator==(const DeviceDomain& other) const {
+return ctx_.device_type == other.ctx_.device_type && ctx_.device_id == other.ctx_.device_id;
+  }
+
+  bool operator!=(const DeviceDomain& other) const { return !(*this == other); }
+
+ private:
+  // Create a hash for a domain.
+  struct Hash {
+size_t operator()(const DeviceDomainPtr& domain) const {
+  if (domain->IsEmptyDomain()) {
+return (size_t)(domain.get());
+  } else {
+size_t const h1(std::hash<int>()(static_cast<int>(domain->ctx_.device_type)));
+size_t const h2(std::hash<int>()(domain->ctx_.device_id));
+return h1 ^ (h2 << 1);
+  }
+}
+  };
+
+  // Create an equality for domains.
+  struct Equal {
+   public:
+bool operator()(const DeviceDomainPtr& lhs, const DeviceDomainPtr& rhs) const {
+  // We compare the pointer for empty domains.
+  if (lhs->IsEmptyDomain() && rhs->IsEmptyDomain()) return lhs.get() == rhs.get();
+
+  // Otherwise device type and id are used to check equality.
+  return (*lhs.get() == *rhs.get());
+}
+  };
+
+  /* \brief The device to be assigned to the current domain. */
+  TVMContext ctx_;
+
+  friend DeviceDomainPtr Join(const DeviceDomainPtr& lhs, const DeviceDomainPtr& rhs);
+  friend class ContextAnalyzer;
+};
+
+// Join two domains.
+DeviceDomainPtr Join(const DeviceDomainPtr& lhs, const DeviceDomainPtr& rhs) {
+  if (lhs->IsEmptyDomain() && rhs->IsEmptyDomain()) {
+return lhs;
+  } else if (lhs->IsEmptyDomain()) {
+return rhs;
+  } else if (rhs->IsEmptyDomain()) {
+return lhs;
+  } else {
+CHECK(*lhs.get() == *rhs.get()) << "All expressions must have a singular device to unify";
+return lhs;
+  }
+}
+
+/*
+ * \brief Compute on which device each sub-expression will execute. A union find
+ * algorithm is used to assign and merge the context domains.
+ */
+class ContextAnalyzer : public ExprVisitor {

Review comment:
   Does this need to be recursive? If not, can we use the MixedModeVisitor 
to prevent future issues with recursive stack overflow?
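   A stdlib-only sketch of the stack-overflow concern behind this question 
(the dict-based "expressions" are purely illustrative): a recursive visitor 
is bounded by the call stack, while an explicit worklist is not.

```python
# Illustrative comparison of recursive vs. worklist traversal.
def visit_recursive(node):
    # recursive visitor: depth is limited by the interpreter's call stack
    if node is None:
        return 0
    return 1 + visit_recursive(node["child"])

def visit_iterative(node):
    # worklist visitor: no recursion, so arbitrarily deep chains are fine
    count, stack = 0, [node]
    while stack:
        n = stack.pop()
        if n is not None:
            count += 1
            stack.append(n["child"])
    return count

deep = None
for _ in range(50_000):
    deep = {"child": deep}

assert visit_iterative(deep) == 50_000  # visit_recursive(deep) would overflow
shallow = {"child": {"child": None}}
assert visit_recursive(shallow) == visit_iterative(shallow) == 2
```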

##
File path: python/tvm/relay/backend/vm.py
##
@@ -261,12 +260,6 @@ def _make_executor(self, expr=None):
 
 def _vm_wrapper(*args, **kwargs):
 args = self._convert_args(main, args, kwargs)
-ret_type = self.mod["main"].checked_type.ret_type
-if is_dynamic(ret_type) and "llvm" not in 

[GitHub] [incubator-tvm] jroesch merged pull request #6336: [Relay][Training] Make AutoDiff thread through global function.

2020-08-27 Thread GitBox


jroesch merged pull request #6336:
URL: https://github.com/apache/incubator-tvm/pull/6336


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [Relay][Training] Make AutoDiff thread through global function. (#6336)

2020-08-27 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new e35b7fc  [Relay][Training] Make AutoDiff thread through global 
function. (#6336)
e35b7fc is described below

commit e35b7fc4bcdcfe008c5dfea60c2297b93dbff99e
Author: 雾雨魔理沙 
AuthorDate: Thu Aug 27 11:32:40 2020 -0700

[Relay][Training] Make AutoDiff thread through global function. (#6336)

* save

* lint

* lint

* fix warning

* fix test

* save
---
 src/printer/doc.cc   |   2 +-
 src/relay/transforms/gradient.cc | 106 ---
 tests/python/relay/test_pass_gradient.py |  41 +++-
 3 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/src/printer/doc.cc b/src/printer/doc.cc
index d487e3e..ab1eddb 100644
--- a/src/printer/doc.cc
+++ b/src/printer/doc.cc
@@ -129,7 +129,7 @@ Doc Doc::Indent(int indent, Doc doc) {
 }
 
 Doc Doc::StrLiteral(const std::string& value, std::string quote) {
-  // TODO(M.K.): add escape.
+  // TODO(@M.K.): add escape.
   Doc doc;
   return doc << quote << value << quote;
 }
diff --git a/src/relay/transforms/gradient.cc b/src/relay/transforms/gradient.cc
index 7894c34..9c47254 100644
--- a/src/relay/transforms/gradient.cc
+++ b/src/relay/transforms/gradient.cc
@@ -72,7 +72,7 @@ Type WithGradientType(const Type&);
 Expr FirstOrderGradient(const Expr& e, const Optional& mod);
 
 Type WithGradientType(const Type& t) {
-  // TODO(M.K.): stricter checking
+  // TODO(@M.K.): stricter checking
  auto ty = t.as<FuncTypeNode>();
   CHECK(ty) << "input should be a function";
   return FuncType(ty->arg_types, TupleType({ty->ret_type, 
TupleType(ty->arg_types)}), {}, {});
@@ -85,7 +85,7 @@ Expr DeGlobal(const Optional& mod, const Expr& e) {
   if (mod.defined() && x) {
 BaseFunc base_func = mod.value()->Lookup(GetRef(x));
 if (auto* n = base_func.as()) {
-  return n->body;
+  return GetRef(n);
 } else {
   return e;
 }
@@ -338,11 +338,22 @@ Expr FirstOrderGradient(const Expr& re, const 
Optional& mod) {
 
 
TVM_REGISTER_GLOBAL("relay._transform.first_order_gradient").set_body_typed(FirstOrderGradient);
 
+Type bpt = RelayRefType(FuncType({}, TupleType(Array()), {}, {}));
+
 struct ReverseADType : TypeMutator {
   Type VisitType_(const TensorTypeNode* ttn) final {
 Type t = GetRef(ttn);
 return TupleType({t, RelayRefType(t)});
   }
+
+  Type VisitType_(const FuncTypeNode* ftn) final {
+std::vector arg_types;
+for (const auto& t : ftn->arg_types) {
+  arg_types.push_back(VisitType(t));
+}
+arg_types.push_back(bpt);
+return FuncType(arg_types, ftn->ret_type, ftn->type_params, 
ftn->type_constraints);
+  }
 };
 
 Type ReverseType(const Type& t) { return ReverseADType()(t); }
@@ -438,12 +449,18 @@ Expr BPEmpty() {
 
 struct ReverseAD : ExprMutator {
   using ADVarMap = std::unordered_map;
-
+  using ADGlobalVarMap = std::unordered_map;
+  Optional mod;
+  // TODO(@M.K.) refactor AD to always use mod.
   Var bp;
   std::shared_ptr ad_vars;
+  std::shared_ptr ad_gvars;
   const OpAttrMap rev_map = 
Op::GetAttrMap("FPrimalGradient");
 
-  explicit ReverseAD(const Var& bp, std::shared_ptr ad_vars) : 
bp(bp), ad_vars(ad_vars) {}
+  explicit ReverseAD(const Optional& mod, const Var& bp,
+ const std::shared_ptr& ad_vars,
+ const std::shared_ptr& ad_gvars)
+  : mod(mod), bp(bp), ad_vars(ad_vars), ad_gvars(ad_gvars) {}
 
   Expr VisitExpr_(const OpNode* op) final {
 LOG(FATAL) << "op should only be inside call";
@@ -481,9 +498,8 @@ struct ReverseAD : ExprMutator {
   Expr nbp = Function({}, LetList::With([&](LetList* ll) {
 // we need a new ReverseAD visitor to avoid 
clobbering the bp local var
 auto dup_bp = ll->Push(BPEmpty());
-ReverseAD dup_diff(dup_bp, ad_vars);
-auto dup_ad = 
ll->Push(dup_diff.VisitExpr(DeDup(x)));
-
+auto dup_ad =
+ll->Push(ReverseAD(mod, dup_bp, ad_vars, 
ad_gvars)(DeDup(x)));
 TransferGrads(call->checked_type(), ret, dup_ad, 
ll);
 ll->Push(Call(RefRead(dup_bp), {}));
 return Call(bpv, {});
@@ -518,22 +534,29 @@ struct ReverseAD : ExprMutator {
 orig_var->checked_type_ = call->checked_type();
 auto ret = ll->Push(GetRev(call->checked_type(), orig_var, ll));
 auto bpv = ll->Push(RefRead(bp));
-Expr nbp = Function({}, LetList::With([&](LetList* ll) {
-  tvm::Array rev =
-  rev_map[op_ref](orig, 
GetGrad(call->checked_type(), ret, ll));
-  CHECK

[GitHub] [incubator-tvm] hypercubestart commented on a change in pull request #5812: Bring Your Own Datatypes

2020-08-27 Thread GitBox


hypercubestart commented on a change in pull request #5812:
URL: https://github.com/apache/incubator-tvm/pull/5812#discussion_r478615669



##
File path: python/tvm/target/datatype.py
##
@@ -14,73 +14,153 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-"""Custom datatype functionality"""
-import tvm._ffi
+"""Bring Your Own Datatypes custom datatype framework
 
-import tvm.runtime._ffi_api
-from tvm.runtime import DataType
-import tvm.tir
-from tvm.tir.expr import Cast as _Cast, FloatImm as _FloatImm
+TODO(@gussmith23 @hypercubestart) link to BYODT docs when they exist"""
+import tvm
+from tvm.runtime import convert, DataType
+from tvm.tir.expr import (Call as _Call, Cast as _Cast,
+  FloatImm as _FloatImm, BinaryOpExpr as _BinaryOpExpr)
+from tvm.tir.op import call_pure_extern
+from tvm._ffi import register_func as _register_func
+from tvm.tir import call_intrin
 
 
 def register(type_name, type_code):
 """Register a custom datatype with the given type name and type code
-Currently, the type code is manually allocated by the user, and the
-user must ensure that no two custom types share the same code.
-Generally, this should be straightforward, as the user will be
-manually registering all of their custom types.
+
+Currently, the type code is manually allocated by the user, and the user
+must ensure that no two custom types share the same code. Generally, this
+should be straightforward, as the user will be manually registering all of
+their custom types.
+
+Example:
+
+.. code-block:: python
+
+# Register a dtype named 'posites2' under type code 130.
+tvm.datatype.register('posites2', 130)
+
 
 Parameters
 --
 type_name : str
-The name of the custom datatype
+The name of the custom datatype.
 
 type_code : int
-The type's code, which should be >= kCustomBegin
+The type's code, which should be >= kCustomBegin. See
+include/tvm/runtime/data_type.h.
 """
 tvm.runtime._ffi_api._datatype_register(type_name, type_code)
 
 
 def get_type_name(type_code):
-"""Get the type name from the type code
+"""Get the type name of a custom datatype from the type code.
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_name(130) == 'posites2'
 
 Parameters
 --
 type_code : int
-The type code
+The type code of the custom datatype.
+
+Returns
+---
+type_name : String
+The name of the custom datatype.
+
 """
 return tvm.runtime._ffi_api._datatype_get_type_name(type_code)
 
 
 def get_type_code(type_name):
-"""Get the type code from the type name
+"""Get the type code of a custom datatype from its type name
+
+Note that this only works for custom datatypes registered with
+tvm.datatype.register(). It does not work for TVM-native types.
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_code('posites2') == 130
 
 Parameters
 --
 type_name : str
 The type name
+
+Returns
+---
+type_code : int
+The type code of the custom datatype.
 """
 return tvm.runtime._ffi_api._datatype_get_type_code(type_name)
 
 
 def get_type_registered(type_code):
-"""Get a boolean representing whether the type is registered
+"""Returns true if a custom datatype is registered under the given type code
+
+Example:
+
+.. code-block:: python
+
+tvm.datatype.register('posites2', 130)
+assert tvm.datatype.get_type_registered(130)
 
 Parameters
 --
 type_code: int
 The type code
+
+Returns
+---
+type_registered : bool
+True if a custom datatype is registered under this type code, and false
+otherwise.
 """
 return tvm.runtime._ffi_api._datatype_get_type_registered(type_code)
 
 
-def register_op(lower_func, op_name, target, type_name, src_type_name=None):
-"""Register an external function which computes the given op.
+def register_op(lower_func,
+op_name,
+target,
+src_type_name,
+dest_type_name=None,
+intrinsic_name=None):
+"""Register a lowering function for a specific operator of a custom datatype
+
+At build time, Relay must lower operators over custom datatypes into
+operators it understands how to compile. For each custom datatype operator
+which Relay finds while lowering custom datatypes, Relay expects to find a
+user-defined lowering fu





[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6335: [BYOC][ETHOSN] Add support for quantized convolution

2020-08-27 Thread GitBox


comaniac commented on a change in pull request #6335:
URL: https://github.com/apache/incubator-tvm/pull/6335#discussion_r478610698



##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -50,6 +50,16 @@ bool IsEthosnOp(const Call& call, const std::string& 
op_name) {
   }
 }
 
+bool IsEthosnFunc(const Call& call, const std::string& op_name) {

Review comment:
   No idea... Let's leave them here for now.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6342: [CI][Contrib] Add Vitis-AI docker installation

2020-08-27 Thread GitBox


comaniac commented on a change in pull request #6342:
URL: https://github.com/apache/incubator-tvm/pull/6342#discussion_r478609685



##
File path: docker/Dockerfile.ci_cpu
##
@@ -83,3 +83,7 @@ RUN bash /install/ubuntu_install_caffe.sh
 # Github Arm(R) Ethos(TM)-N NPU driver
 COPY install/ubuntu_install_ethosn_driver_stack.sh 
/install/ubuntu_install_ethosn_driver_stack.sh
 RUN bash /install/ubuntu_install_ethosn_driver_stack.sh
+
+# Vitis-AI PyXIR CI deps
+COPY install/ubuntu_install_vai_packages.sh 
/install/ubuntu_install_vai_packages.sh
+RUN bash /install/ubuntu_install_vai_packages.sh

Review comment:
   I basically agree with Leandro. In TVM we have two types of docker 
files: CI and Demo. Demo is used to provide an environment for users to quickly 
try out basic TVM functions. However, TVM does not provide docker files for 
development other than the CI environment.
   
   If you would like to provide an environment for TVM users who want to use 
Vitis-AI, the current solution is either to provide installation instructions 
in a tutorial or document, as ACL does, to help users set up the environment, 
or to make it a Demo image and point users there.
   
   Of course, ideally it would be better for TVM to have a set of docker files 
for development, but that is hard to achieve for some backends due to 
licensing issues.
   
   cc @tqchen for more comments.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #6310: [Ansor][AutoTVM v2.0] Phase 2: Evolutionary Search

2020-08-27 Thread GitBox


comaniac commented on pull request #6310:
URL: https://github.com/apache/incubator-tvm/pull/6310#issuecomment-682109549


   @merrymercy comment addressed. PTAL.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6348: [Ansor][AutoTVM v2.0] Phase 2: Update heavy operations with parallel_for

2020-08-27 Thread GitBox


comaniac commented on a change in pull request #6348:
URL: https://github.com/apache/incubator-tvm/pull/6348#discussion_r478604334



##
File path: src/auto_scheduler/search_policy/sketch_policy.cc
##
@@ -322,32 +323,40 @@ Array SketchPolicyNode::GenerateSketches() {
 }
 
Array<State> SketchPolicyNode::SampleInitPopulation(const Array<State>& sketches, int out_size) {
-  int fail_ct = 0;
+  std::atomic<int> fail_ct(0);
+  std::mutex m;
   Array out_states;
   auto tic_begin = std::chrono::high_resolution_clock::now();
 
-  // TODO(jcf94, merrymercy): Use parallel_for to run this loop in parallel
-  while (static_cast<int>(out_states.size()) < out_size && fail_ct < static_cast<int>(out_size)) {
-// Random choose a starting sketch
-// TODO(jcf94, merrymercy): Maybe choose sketches in different possibility for they may have
-// different potential on generating state with better performance
-State tmp_s = sketches[(rand_gen)() % sketches.size()];
-
-// Derivation rule based enumeration
-bool valid = true;
-for (const auto& rule : init_rules) {
-  if (rule->Apply(this, &tmp_s) == InitPopulationRule::ResultKind::kInvalid) {
-valid = false;
-break;
-  }
-}
+  support::parallel_for(
+  0, out_size, [this, &out_size, &sketches, &out_states, &fail_ct, &m](int i) {
+if (fail_ct >= out_size) {
+  return;
+}
 
-if (valid) {
-  out_states.push_back(std::move(tmp_s));
-} else {
-  fail_ct++;
-}
-  }
+// Random choose a starting sketch
+// TODO(jcf94, merrymercy): Maybe choose sketches in different 
possibility for they may have
+// different potential on generating state with better performance
+State tmp_s = sketches[(rand_gen)() % sketches.size()];
+// Derivation rule based enumeration
+bool valid = true;
+for (const auto& rule : init_rules) {
+  // Some rules use the random generator of SketchPolicyNode, so this 
part has to be
+  // protected

Review comment:
   With this limitation, how beneficial is the `parallel_for` for this 
operation?
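
   To make the trade-off in the question above concrete, here is a minimal
   Python sketch (illustrative only — not TVM's actual C++
   `support::parallel_for`) of a parallel sampling loop with the same
   structure as `SampleInitPopulation` in the diff: a shared failure counter,
   an early-exit check, and a lock-protected derivation step. The
   lock-protected section is what caps the achievable speedup.

   ```python
   import threading
   from concurrent.futures import ThreadPoolExecutor

   def sample_population(sketches, out_size, derive):
       # Hypothetical mimic of the loop: names and the deterministic sketch
       # choice are illustrative; the real code picks sketches at random.
       lock = threading.Lock()
       out_states = []
       fail_ct = [0]  # plays the role of std::atomic<int>, guarded by the lock

       def trial(i):
           if fail_ct[0] >= out_size:
               return  # early exit, like the fail_ct check in the lambda
           candidate = sketches[i % len(sketches)]
           with lock:  # serialized section, like the mutex around init_rules
               state = derive(candidate)
           with lock:
               if state is not None:
                   out_states.append(state)
               else:
                   fail_ct[0] += 1

       with ThreadPoolExecutor(max_workers=4) as pool:
           list(pool.map(trial, range(out_size)))
       return out_states

   # even-indexed trials pick sketch "a" (succeeds), odd ones pick "b" (fails)
   result = sample_population(["a", "b"], 8, lambda s: s if s == "a" else None)
   print(len(result))  # 4
   ```

   If `derive` dominates each iteration, the lock serializes almost all the
   work and the parallel loop degrades to roughly sequential throughput —
   which is exactly the concern raised in the review comment.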









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r478604162



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]

Review comment:
   This is basically the same Target discussion. Let's keep it here: 
https://github.com/apache/incubator-tvm/pull/6302#discussion_r476606729

##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):

Review comment:
   Target discussion: 
https://github.com/apache/incubator-tvm/pull/6302#discussion_r476606729
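
   For readers following the target discussion: the `parse_target` helper
   quoted in the diff is a standard argparse pattern — a custom `type=`
   callable that raises `argparse.ArgumentTypeError` for invalid entries. A
   self-contained sketch of the pattern (the alias table and target list are
   the illustrative values from the diff, not a final design):

   ```python
   import argparse

   TARGET_ALIASES = {
       "aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
   }
   VALID_TARGETS = ["aarch64", "llvm"]

   def parse_target(targets_str):
       """argparse 'type' callable: validate a comma-separated target list."""
       targets = targets_str.split(",")
       for target in targets:
           if target not in VALID_TARGETS:
               # argparse reports this as "error: argument --target: ..."
               raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
       return targets

   parser = argparse.ArgumentParser()
   parser.add_argument("--target", type=parse_target, action="append",
                       metavar="TARGET[,TARGET]...")
   args = parser.parse_args(["--target", "aarch64,llvm"])
   print(args.target)  # [['aarch64', 'llvm']] -- action="append" wraps each list

   # aliases would then be expanded into full target strings
   expanded = [TARGET_ALIASES.get(t, t) for t in args.target[0]]
   ```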









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r478598548



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,
+dest="sanitize_diagnostics",
+help="enable diagnostic sanitization",
+)
+parser.add_argument(
+"--no-sanitize-diagnostics",
+action="store_false",
+dest="sanitize_diagnostics",
+help="disable diagnostic sanitization",
+)
+parser.add_argument(
+"--target",
+type=parse_target,
+action="append",
+metavar="TARGET[,TARGET]...",
+help=f"compilation target(s): {', '.join(VALID_TARGETS)}, default 
llvm",
+)
+parser.add_argument("--tuner-file", default="", help="tuner file")

Review comment:
   I think it is more aligned with the underlying API. Will update this 
soon.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-27 Thread GitBox


comaniac commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r478591988



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,
+dest="sanitize_diagnostics",
+help="enable diagnostic sanitization",
+)
+parser.add_argument(
+"--no-sanitize-diagnostics",
+action="store_false",
+dest="sanitize_diagnostics",
+help="disable diagnostic sanitization",
+)
+parser.add_argument(
+"--target",
+type=parse_target,
+action="append",
+metavar="TARGET[,TARGET]...",
+help=f"compilation target(s): {', '.join(VALID_TARGETS)}, default 
llvm",
+)
+parser.add_argument("--tuner-file", default="", help="tuner file")

Review comment:
   `--tuning-records` or `--tuning-log` are good to me as long as the help 
description is clear enough.
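
   Independent of the flag-naming question, the diff's
   `--sanitize-diagnostics` / `--no-sanitize-diagnostics` pair illustrates a
   common argparse idiom: an enable/disable pair of options sharing one
   `dest`, using `store_true` / `store_false` actions. A minimal sketch:

   ```python
   import argparse

   parser = argparse.ArgumentParser()
   # Both options write to the same dest; the default comes from the first
   # action registered for that dest, and the last flag on the command line wins.
   parser.add_argument("--sanitize-diagnostics", action="store_true",
                       default=True, dest="sanitize_diagnostics")
   parser.add_argument("--no-sanitize-diagnostics", action="store_false",
                       dest="sanitize_diagnostics")

   on = parser.parse_args([])
   off = parser.parse_args(["--no-sanitize-diagnostics"])
   print(on.sanitize_diagnostics, off.sanitize_diagnostics)  # True False
   ```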









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r478586988



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"
+)
+parser.add_argument(
+"--language",
+choices=frontends.get_frontends(),
+help="specify input language",
+)
+parser.add_argument(
+"--input-shape",
+type=common.parse_input_shapes,
+metavar="INPUT_SHAPE,[INPUT_SHAPE]...",
+help="for pytorch, e.g. '(1,3,224,224)'",
+)
+parser.add_argument(
+"-o",
+"--output",
+default="a.tar",
+help="output the compiled module to an archive",
+)
+parser.add_argument(
+"--sanitize-diagnostics",
+action="store_true",
+default=True,
+dest="sanitize_diagnostics",
+help="enable diagnostic sanitization",
+)
+parser.add_argument(
+"--no-sanitize-diagnostics",
+action="store_false",
+dest="sanitize_diagnostics",
+help="disable diagnostic sanitization",
+)
+parser.add_argument(
+"--target",
+type=parse_target,
+action="append",
+metavar="TARGET[,TARGET]...",
+help=f"compilation target(s): {', '.join(VALID_TARGETS)}, default 
llvm",
+)
+parser.add_argument("--tuner-file", default="", help="tuner file")

Review comment:
   Would it make more sense if we call it `--tuning-records`?









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6302: [tvmc] command line driver 'compile' (part 2/4)

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302#discussion_r478580880



##
File path: python/tvm/driver/tvmc/compiler.py
##
@@ -0,0 +1,407 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""
+Provides support to compile networks both AOT and JIT.
+"""
+import argparse
+import logging
+import tarfile
+from pathlib import Path
+
+import tvm
+from tvm import autotvm
+from tvm import relay
+from tvm._ffi.runtime_ctypes import TVMContext
+from tvm.contrib import cc
+from tvm.contrib import util
+from tvm.relay.op.contrib import get_pattern_table
+
+from . import common, frontends
+from .main import register_parser
+
+# A dictionary of target aliases to simplify the command lines provided by end 
users
+TARGET_ALIASES = {
+"aarch64": "llvm -device=arm_cpu -mtriple=aarch64-linux-gnu -mattr=+neon"
+}
+
+#  A list of valid targets (including aliases) to be used in "--target"
+VALID_TARGETS = ["aarch64", "llvm"]
+
+DEFAULT_TARGET = "llvm"
+DUMP_FORMATS = ["relay", "ll", "asm"]
+
+
+def parse_target(targets_str):
+""" Parsing function for comma separated target syntax. """
+targets = targets_str.split(",")
+for target in targets:
+if target not in VALID_TARGETS:
+raise argparse.ArgumentTypeError(f"unrecognized target: {target}")
+return targets
+
+
+@register_parser
+def add_compile_parser(subparsers):
+""" Include parser for 'compile' subcommand """
+
+parser = subparsers.add_parser("compile", help="compile a model")
+parser.set_defaults(func=drive_compile)
+parser.add_argument(
+"--cross-compiler",
+default="",
+help="the cross compiler to use to generate target libraries",
+)
+parser.add_argument(
+"--dump-codegen", default="", choices=DUMP_FORMATS, help="dump 
generated code"

Review comment:
   I was going to open an issue, but in the end I think we can open this 
discussion as an RFC to get a common understanding.
   
   
https://discuss.tvm.ai/t/rfc-savetofile-file-name-format-expected-behavior/7741
   
   I will put a `TODO` in the sources and just connect it to the current APIs, 
so that we can move forward.









[GitHub] [incubator-tvm] tqchen edited a comment on issue #6332: [VOTE] Apache TVM Graduation

2020-08-27 Thread GitBox


tqchen edited a comment on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-682068380


   Thanks everyone who voted, the results are now in 
https://github.com/apache/incubator-tvm/issues/6350







[GitHub] [incubator-tvm] tqchen edited a comment on issue #6332: [VOTE] Apache TVM Graduation

2020-08-27 Thread GitBox


tqchen edited a comment on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-682068380


   Thanks everyone who voted, the results are now in 
https://lists.apache.org/thread.html/rc8cdb81de0dc0e1f96194afc1b911365ba8fd4b155ccc0914e70cdcd%40%3Cdev.tvm.apache.org%3E







[GitHub] [incubator-tvm] tqchen opened a new issue #6350: [RESULT][VOTE] Apache Graduation

2020-08-27 Thread GitBox


tqchen opened a new issue #6350:
URL: https://github.com/apache/incubator-tvm/issues/6350


   Thanks everyone who voted. 
   
   Voting thread: 
https://lists.apache.org/thread.html/rd5b8eefe49af09a2d0913758a5e5737b3fdb9072bc0becf4a2b2c7ee%40%3Cdev.tvm.apache.org%3E
   
   
   The results are:
   
   +1
   Markus
   Junru
   ziheng
   Thierry
   Henry
   Lily
   Jared
   Haichen
   Masa
   Morita
   Wuwei
   Siju
   Gon
   Timothy
   Chenfan 
   Siva 
   Furkan
   
   +0 None
   -1 None
   
   The vote has passed. We will now proceed with a followup vote in IPMC.
   







[GitHub] [incubator-tvm] tqchen commented on issue #6332: [VOTE] Apache TVM Graduation

2020-08-27 Thread GitBox


tqchen commented on issue #6332:
URL: https://github.com/apache/incubator-tvm/issues/6332#issuecomment-682068380


   Thanks everyone. The results are:
   
   +1
   Markus
   Junru
   ziheng
   Thierry
   Henry
   Lily
   Jared
   Haichen
   Masa
   Morita
   Wuwei
   Siju
   Gon
   Timothy
   Chenfan 
   Siva 
   Furkan
   
   +0 None
   -1 None
   
   The vote has passed. We will now proceed with a followup vote in IPMC.
   
   
   
   







[GitHub] [incubator-tvm] FrozenGene commented on pull request #5913: [random] support random fill

2020-08-27 Thread GitBox


FrozenGene commented on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-681915191


   Thanks for reminding me, @merrymercy. My agenda is completely full tomorrow and over the weekend. I could do this next week.







[GitHub] [incubator-tvm] merrymercy commented on pull request #5913: [random] support random fill

2020-08-27 Thread GitBox


merrymercy commented on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-681906084


   @FrozenGene  Can you send the follow up PRs to enable this in ansor and 
autotvm?







[GitHub] [incubator-tvm] merrymercy edited a comment on pull request #5913: [random] support random fill

2020-08-27 Thread GitBox


merrymercy edited a comment on pull request #5913:
URL: https://github.com/apache/incubator-tvm/pull/5913#issuecomment-681906084


   @FrozenGene  Can you send the follow-up PRs to enable this in ansor and 
autotvm?







[GitHub] [incubator-tvm] kongroo opened a new pull request #6349: [CODEGEN][CUDA]: fix cuda half math function is undefined: herf

2020-08-27 Thread GitBox


kongroo opened a new pull request #6349:
URL: https://github.com/apache/incubator-tvm/pull/6349


   I got an "identifier herf undefined" error when converting a model into TVM, 
which can be reproduced by the following code
   ```
   import torch
   import numpy as np
   import tvm
   from tvm import relay
   
   class ErfTest(torch.nn.Module):
       def forward(self, data):
           return torch.erf(data)
   
   def run(data):
       # convert pytorch to tvm
       traced = torch.jit.trace(ErfTest(), torch.from_numpy(data).cuda())
       mod, params = relay.frontend.from_pytorch(traced, [('data', data.shape)])
   
       # compile
       with tvm.transform.PassContext(opt_level=3):
           relay_graph, relay_lib, relay_params = tvm.relay.build(mod, target='cuda', params=params)
       relay_model = tvm.contrib.graph_runtime.create(relay_graph, relay_lib, tvm.context('gpu', 0))
   
       # run
       relay_model.set_input('data', data)
       relay_model.run()
       return relay_model.get_output(0)
   
   data = np.random.rand(3, 4).astype(np.float32)
   print(torch.erf(torch.Tensor(data)))
   print(run(data))
   print(run(data.astype(np.float16)))
   ```
   
![image](https://user-images.githubusercontent.com/16698151/91425664-04197a00-e88e-11ea-8212-5e08ec9b4650.png)
   
   I fixed this in the same way as #6225 and the results seem ok
   
   ![result](https://user-images.githubusercontent.com/16698151/91427276-3deb8000-e890-11ea-8fa0-2fb0c3585cae.png)
   
   cc @vinx13 Could you help to review this?
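
   As a quick numerical sanity check (independent of TVM and a GPU), the
   sketch below emulates what a half-precision `erf` can deliver: compute in
   full precision on float16-rounded inputs, round the result to float16, and
   compare against a float64 reference. The input values are illustrative,
   not taken from the PR.

   ```python
   import math
   import numpy as np

   # inputs chosen to be exactly representable in float16
   x32 = np.linspace(-2.0, 2.0, 9, dtype=np.float32)
   ref = np.array([math.erf(float(v)) for v in x32])  # float64 reference

   # emulate a half-precision erf: full-precision compute, result rounded to fp16
   out16 = np.array([math.erf(float(v)) for v in x32.astype(np.float16)],
                    dtype=np.float16)

   max_err = float(np.max(np.abs(ref - out16.astype(np.float64))))
   print(max_err < 1e-3)  # True: within half-precision rounding (~5e-4 near 1.0)
   ```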







[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6342: [CI][Contrib] Add Vitis-AI docker installation

2020-08-27 Thread GitBox


jtuyls commented on a change in pull request #6342:
URL: https://github.com/apache/incubator-tvm/pull/6342#discussion_r478304888



##
File path: docker/Dockerfile.ci_cpu
##
@@ -83,3 +83,7 @@ RUN bash /install/ubuntu_install_caffe.sh
 # Github Arm(R) Ethos(TM)-N NPU driver
 COPY install/ubuntu_install_ethosn_driver_stack.sh 
/install/ubuntu_install_ethosn_driver_stack.sh
 RUN bash /install/ubuntu_install_ethosn_driver_stack.sh
+
+# Vitis-AI PyXIR CI deps
+COPY install/ubuntu_install_vai_packages.sh 
/install/ubuntu_install_vai_packages.sh
+RUN bash /install/ubuntu_install_vai_packages.sh

Review comment:
   @leandron I am not sure about the clarity of the demo prefix either. 
Maybe we can just add it as Dockerfile.vitisai? @tqchen ?
   
   And ok, we will move all environment changes to this PR.









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6342: [CI][Contrib] Add Vitis-AI docker installation

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6342:
URL: https://github.com/apache/incubator-tvm/pull/6342#discussion_r478288536



##
File path: docker/Dockerfile.ci_cpu
##
@@ -83,3 +83,7 @@ RUN bash /install/ubuntu_install_caffe.sh
 # Github Arm(R) Ethos(TM)-N NPU driver
 COPY install/ubuntu_install_ethosn_driver_stack.sh 
/install/ubuntu_install_ethosn_driver_stack.sh
 RUN bash /install/ubuntu_install_ethosn_driver_stack.sh
+
+# Vitis-AI PyXIR CI deps
+COPY install/ubuntu_install_vai_packages.sh 
/install/ubuntu_install_vai_packages.sh
+RUN bash /install/ubuntu_install_vai_packages.sh

Review comment:
   > Should we rename the ci_vai to make it more clear that it's our main 
docker environment and it's not really for CI at the moment?
   
   I think it would be good to name it in a way that is obvious to people on 
what that Dockerfile is intended for. Looking at existing Dockerfiles 
(https://github.com/apache/incubator-tvm/tree/master/docker) I see there are 
three prefixes: `ci_*`, `conda_*` and `demo_`. Maybe from the repository point 
of view it fits more for a `demo` than `ci`? I'm not sure what the right 
answer is here; maybe @tqchen will know better.
   
   > Should we add it to this PR even though it's meant to be used for CI?
   
   Even not being strictly related to CI, as @comaniac pointed out, it is 
related to environment changes (for example the change/fix on the 
`ubuntu_install_python.sh` that might affect other Dockerfiles). So I think it 
would be good to have them all in one PR.









[GitHub] [incubator-tvm] jtuyls commented on a change in pull request #6342: [CI][Contrib] Add Vitis-AI docker installation

2020-08-27 Thread GitBox


jtuyls commented on a change in pull request #6342:
URL: https://github.com/apache/incubator-tvm/pull/6342#discussion_r478283171



##
File path: docker/Dockerfile.ci_cpu
##
@@ -83,3 +83,7 @@ RUN bash /install/ubuntu_install_caffe.sh
 # Github Arm(R) Ethos(TM)-N NPU driver
 COPY install/ubuntu_install_ethosn_driver_stack.sh 
/install/ubuntu_install_ethosn_driver_stack.sh
 RUN bash /install/ubuntu_install_ethosn_driver_stack.sh
+
+# Vitis-AI PyXIR CI deps
+COPY install/ubuntu_install_vai_packages.sh 
/install/ubuntu_install_vai_packages.sh
+RUN bash /install/ubuntu_install_vai_packages.sh

Review comment:
   @leandron The ci_vai docker we added in the other PR is actually not 
meant to be used for CI at the moment. It's the main docker we use in our flow 
as you can see in our documentation: 
https://github.com/apache/incubator-tvm/blob/8903b1a3251370ee1013fc2f9f3ef6004fa0e4b2/docs/deploy/vitis_ai.rst
 and builds on top of the general Vitis-AI docker 
(https://github.com/Xilinx/Vitis-AI) containing the necessary Vitis-AI tools 
for quantization, compilation, etc. 
   
   Two questions: 
   - Should we rename the ci_vai to make it more clear that it's our main 
docker environment and it's not really for CI at the moment?
   - Should we add it to this PR even though it's not meant to be used for CI?









[GitHub] [incubator-tvm] leandron commented on a change in pull request #6342: [CI][Contrib] Add Vitis-AI docker installation

2020-08-27 Thread GitBox


leandron commented on a change in pull request #6342:
URL: https://github.com/apache/incubator-tvm/pull/6342#discussion_r478273962



##
File path: docker/Dockerfile.ci_cpu
##
@@ -83,3 +83,7 @@ RUN bash /install/ubuntu_install_caffe.sh
 # Github Arm(R) Ethos(TM)-N NPU driver
 COPY install/ubuntu_install_ethosn_driver_stack.sh /install/ubuntu_install_ethosn_driver_stack.sh
 RUN bash /install/ubuntu_install_ethosn_driver_stack.sh
+
+# Vitis-AI PyXIR CI deps
+COPY install/ubuntu_install_vai_packages.sh /install/ubuntu_install_vai_packages.sh
+RUN bash /install/ubuntu_install_vai_packages.sh

Review comment:
   > Also, as this script is only used for CI I think `ubuntu_install_vitis_ai_packages_ci.sh` would be better. What do you think?
   
   I had a look at your other PR now, and I understand what you said about the other script. I think it would be beneficial to have all the CI changes in one patch, if possible.










[GitHub] [incubator-tvm] mbaret commented on a change in pull request #6335: [BYOC][ETHOSN] Add support for quantized convolution

2020-08-27 Thread GitBox


mbaret commented on a change in pull request #6335:
URL: https://github.com/apache/incubator-tvm/pull/6335#discussion_r478256059



##
File path: src/relay/backend/contrib/ethosn/codegen.cc
##
@@ -50,6 +50,16 @@ bool IsEthosnOp(const Call& call, const std::string& op_name) {
   }
 }
 
+bool IsEthosnFunc(const Call& call, const std::string& op_name) {

Review comment:
   Do you have a suggestion of a good common location? We could maybe 
handle this in a follow-up.









[GitHub] [incubator-tvm] jcf94 opened a new pull request #6348: [Ansor][AutoTVM v2.0] Phase 2: Update heavy operation with parallel_for

2020-08-27 Thread GitBox


jcf94 opened a new pull request #6348:
URL: https://github.com/apache/incubator-tvm/pull/6348


   For the full upstream plan, see [Ansor 
RFC](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/21).
   
   This PR contains some small fixes:
   - Use parallel_for to speed up some heavy operations
   - Add a SketchPolicy + XGBModel UT for a complete workflow test


