[incubator-tvm] branch master updated (bfe83eb -> 27f00ef)

2020-07-13 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from bfe83eb  Refactor to expose MakeOp functions to C++ (#6047)
 add 27f00ef  Fix pytorch frontend prim::Constant issue (#6051)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py | 2 ++
 1 file changed, 2 insertions(+)



[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jcf94 commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454140951



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   To avoid extra review effort, I have removed ThreadPool from the current code base. cc @tqchen 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #6051: Fix pytorch frontend prim::Constant issue

2020-07-13 Thread GitBox


masahi commented on pull request #6051:
URL: https://github.com/apache/incubator-tvm/pull/6051#issuecomment-658004243


   Thanks @jxx123 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi closed issue #6050: [BUG] Pytorch frontend error when the value of prim::Constant is a tensor in cuda

2020-07-13 Thread GitBox


masahi closed issue #6050:
URL: https://github.com/apache/incubator-tvm/issues/6050


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi merged pull request #6051: Fix pytorch frontend prim::Constant issue

2020-07-13 Thread GitBox


masahi merged pull request #6051:
URL: https://github.com/apache/incubator-tvm/pull/6051


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


merrymercy commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454140148



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   Yes, we should just remove the thread pool.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] merrymercy commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


merrymercy commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454137351



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   I would like to leave it to follow-up PRs.

##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   I would like to leave it to follow-up PRs.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 commented on pull request #6052: [Relay][Pass] Merge two consecutive reshape ops

2020-07-13 Thread GitBox


icemelon9 commented on pull request #6052:
URL: https://github.com/apache/incubator-tvm/pull/6052#issuecomment-657994257


   cc @zhiics @comaniac @jroesch 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#issuecomment-657994138


   > > @comaniac I have updated the doc and changed the name from `f_prepare` to `f_preproc`. Could you help to review it again?
   > 
   > Didn't see the update on my side. Will check it later again.
   
   I can see it now on my side. It should work on your side too?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] icemelon9 opened a new pull request #6052: [Relay][Pass] Merge two consecutive reshape ops

2020-07-13 Thread GitBox


icemelon9 opened a new pull request #6052:
URL: https://github.com/apache/incubator-tvm/pull/6052


   Use the pattern matching rewriter to merge two consecutive reshape ops.
   
   @mbrookhart I also added an InferType pass after rewriting each pattern. I think this change makes the pattern rewriter more useful; at least, I need this feature in my pass.
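   A minimal sketch of the idea, using the public relay dataflow pattern API; the class and variable names here are illustrative and not taken from the PR itself:

```python
from tvm import relay
from tvm.relay.dataflow_pattern import DFPatternCallback, is_op, wildcard, rewrite


class MergeConsecutiveReshape(DFPatternCallback):
    """Rewrite reshape(reshape(x, s1), s2) into a single reshape(x, s2)."""

    def __init__(self):
        super().__init__()
        self.data = wildcard()
        inner = is_op("reshape")(self.data)
        self.pattern = is_op("reshape")(inner)

    def callback(self, pre, post, node_map):
        data = node_map[self.data][0]
        # `pre` is the matched outer reshape call; its target shape is the one that survives.
        return relay.reshape(data, newshape=[int(s) for s in pre.attrs.newshape])


x = relay.var("x", shape=(1, 4, 4), dtype="float32")
y = relay.reshape(relay.reshape(x, newshape=(2, 8)), newshape=(16,))
merged = rewrite(MergeConsecutiveReshape(), y)  # a single reshape on x
```
   Running InferType after each rewrite, as the PR does, lets later patterns rely on checked types of the rewritten expression.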



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zchuang11 commented on issue #5393: Tensorflow OP: 'DenseToDenseSetOperation', 'NonMaxSuppressionV3', 'SparseToDense', 'Unique'

2020-07-13 Thread GitBox


zchuang11 commented on issue #5393:
URL: https://github.com/apache/incubator-tvm/issues/5393#issuecomment-657992191


   > 'NonMaxSuppressionV3' has been added to the tensorflow frontend.
   
   Thanks. Have 'DenseToDenseSetOperation', 'SparseToDense', and 'Unique' been supported as well?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-07-13 Thread GitBox


comaniac commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r454123642



##
File path: src/runtime/contrib/arm_compute_lib/acl_runtime.cc
##
@@ -0,0 +1,399 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/runtime/contrib/arm_compute_lib/acl_runtime.cc
+ * \brief A simple JSON runtime for Arm Compute Library.
+ */
+
+#include 
+#include 
+
+#include "../../file_util.h"
+#include "../json/json_node.h"
+#include "../json/json_runtime.h"
+
+#ifdef TVM_GRAPH_RUNTIME_ARM_COMPUTE_LIB
+#include 
+#include 
+#include 
+#include 
+
+#include "acl_allocator.h"
+#include "acl_utils.h"
+#endif
+
+namespace tvm {
+namespace runtime {
+namespace contrib {
+
+using namespace tvm::runtime::json;
+
+#ifdef TVM_GRAPH_RUNTIME_ARM_COMPUTE_LIB
+using namespace arm_compute_lib;
+
+/*!
+ * \brief ACL objects we cache in order to avoid needing to construct
+ * a new layer each time.
+ */
+struct CachedLayer {
+  std::shared_ptr function;
+  std::vector inputs;
+  std::vector const_inputs;
+  std::vector outputs;
+};
+#endif
+
+class ACLRuntime : public JSONRuntimeBase {
+ public:
+  /*!
+   * \brief The ACL runtime module. Deserialize the provided functions
+   * on creation and store in the layer cache.
+   *
+   * \param symbol_name The name of the function.
+   * \param graph_json serialized JSON representation of a sub-graph.
+   * \param const_names The names of each constant in the sub-graph.
+   * \params consts An array of constants pre-transposed to the correct layout expected by ACL.
+   */
+  explicit ACLRuntime(const std::string& symbol_name, const std::string& graph_json,
+                      const Array& const_names, const Array& consts)
+      : JSONRuntimeBase(symbol_name, graph_json, const_names) {
+    this->constants_ = consts;
+  }
+
+  /*!
+   * \brief Get a packed function.
+   *
+   * \param name The name/symbol of the function.
+   * \param sptr_to_self The pointer to the module node.
+   * \return The packed function.
+   */
+  PackedFunc GetFunction(const std::string& name, const ObjectPtr& sptr_to_self) override {

Review comment:
   The reason for having this is the same as for Init: it processes the constants by itself. If we can resolve the constant tensor issue, then we don't need to override this function either.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


comaniac commented on pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#issuecomment-657985296


   > @comaniac I have updated the doc and changed the name from `f_prepare` to `f_preproc`. Could you help to review it again?
   
   Didn't see the update on my side. Will check it later again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#issuecomment-657976125


   @comaniac I have updated the doc and changed the name from `f_prepare` to `f_preproc`. Could you help to review it again?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] maheshambule commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-07-13 Thread GitBox


maheshambule commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r454097165



##
File path: python/tvm/contrib/target/__init__.py
##
@@ -14,5 +14,3 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-"""Codegen and runtime APIs for targets.

Review comment:
   Removed this as per the review comment from @zhiics 
https://github.com/apache/incubator-tvm/pull/5052#discussion_r439504954





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi edited a comment on pull request #6049: [Pytorch] add operator copy_ support

2020-07-13 Thread GitBox


masahi edited a comment on pull request #6049:
URL: https://github.com/apache/incubator-tvm/pull/6049#issuecomment-657950702


   @huajsj please add a test. Also, the `_` suffix in `copy_` likely means it is an in-place op, so it's better to add support for the non in-place version `aten::copy` as well (the same conversion as `copy_`).
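   For context, a small PyTorch-only illustration of the in-place semantics (independent of the TVM frontend code):

```python
import torch

x = torch.zeros(2, 2)
y = torch.ones(2, 2)

x.copy_(y)   # in-place: overwrites x with y's values
print(x)     # tensor of ones

# Tracing a module that calls copy_ records an aten::copy_ node in the graph,
# which is what the frontend converter needs to handle.
```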



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] masahi commented on pull request #6049: [Pytorch] add operator copy_ support

2020-07-13 Thread GitBox


masahi commented on pull request #6049:
URL: https://github.com/apache/incubator-tvm/pull/6049#issuecomment-657950702


   @huajsj please add a test. Also, the `_` suffix in `copy_` likely means it is an in-place op, so it's better to add support for `aten::copy` as well (the same conversion as `copy_`).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jcf94 commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454078975



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   @tqchen Ok, I understand that (the stride argument has been set to 1 by default in utils.h), and it's fine to further clean up this code.
   I was just confused about the "does not have to change now" above. :)
   And actually the ThreadPool is never used in the current code base...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #6007: [RELAY][DYN] Dynamic broadcast_to, zeros, ones

2020-07-13 Thread GitBox


zhiics commented on pull request #6007:
URL: https://github.com/apache/incubator-tvm/pull/6007#issuecomment-657949021


   @electriclilies Could you modify the code according to the changes in #6047



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics merged pull request #6047: Refactor to expose MakeOp functions to C++

2020-07-13 Thread GitBox


zhiics merged pull request #6047:
URL: https://github.com/apache/incubator-tvm/pull/6047


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #6047: Refactor to expose MakeOp functions to C++

2020-07-13 Thread GitBox


zhiics commented on pull request #6047:
URL: https://github.com/apache/incubator-tvm/pull/6047#issuecomment-657948710


   Thanks @mbrookhart 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (e4a0aa5 -> bfe83eb)

2020-07-13 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e4a0aa5  [IR] Fix a primitive check error (#5991)
 add bfe83eb  Refactor to expose MakeOp functions to C++ (#6047)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/algorithm.py  |   4 +-
 src/relay/op/make_op.h|  83 +
 src/relay/op/nn/convolution.cc| 107 +
 src/relay/op/nn/convolution_make.h| 149 ++
 src/relay/op/nn/nn.cc |   1 +
 src/relay/op/nn/pad.cc|   1 +
 src/relay/op/nn/pooling.cc|  30 +-
 src/relay/op/nn/pooling.h |  65 +
 src/relay/op/tensor/reduce.cc |  17 ++--
 src/relay/op/tensor/transform.cc  |   1 +
 src/relay/op/tensor/transform.h   |   4 +-
 src/relay/op/tensor/unary.cc  |   7 +-
 src/relay/transforms/dynamic_to_static.cc |  38 ++--
 src/relay/transforms/pattern_util.h   | 111 +++---
 14 files changed, 347 insertions(+), 271 deletions(-)
 create mode 100644 src/relay/op/make_op.h
 create mode 100644 src/relay/op/nn/convolution_make.h
 create mode 100644 src/relay/op/nn/pooling.h




[GitHub] [incubator-tvm] tqchen commented on pull request #5892: Add TVM application extension with WASM runtime

2020-07-13 Thread GitBox


tqchen commented on pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#issuecomment-657946685


   cc @kazum 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5892: Add TVM application extension with WASM runtime

2020-07-13 Thread GitBox


tqchen commented on pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#issuecomment-657946635


   @leonwanghui it seems that we might want to include .cargo/config. Can you modify `https://github.com/apache/incubator-tvm/blob/master/tests/lint/check_file_type.py#L110` to enable that?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lsy643 commented on a change in pull request #5955: Register Shape Func for Some Operators to Handle Dynamic Shapes

2020-07-13 Thread GitBox


lsy643 commented on a change in pull request #5955:
URL: https://github.com/apache/incubator-tvm/pull/5955#discussion_r454077558



##
File path: python/tvm/relay/op/image/_image.py
##
@@ -64,6 +67,22 @@ def compute_crop_and_resize(attrs, inputs, out_type):
 
 reg.register_injective_schedule("image.crop_and_resize")
 
+@script
+def _crop_and_resize_func(image_shape, boxes_shape, crop_size):
+    out = output_tensor((4,), "int64")
+    out[0] = boxes_shape[0]
+    out[1] = int64(crop_size[0])
+    out[2] = int64(crop_size[1])
+    out[3] = image_shape[3]
+
+    return out
+
+@reg.register_shape_func("image.crop_and_resize", False)
+def crop_and_resize_func(attrs, inputs, _):
+    crop_size = get_const_tuple(attrs.crop_size)

Review comment:
   Image layout has been considered in my latest update





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


tqchen commented on pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#issuecomment-657945828


   cc @zhiics  please 
https://tvm.apache.org/docs/contribute/code_review.html#approve-and-request-changes-explicitly
   
   @jwfromm can you also take a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


tqchen commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454077315



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+    pool->Abort();
+    delete pool;
+    pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+    pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int stride) {

Review comment:
   Thanks @jcf94, let me try to elaborate further. To simplify the abstraction, we should:
   
   - Add src/support/parallel_for.h
     - Move the threadpool to be an implementation detail of parallel_for.cc, and remove thread_pool from utils.h
     - It is unclear whether a threadpool is needed to implement parallel_for; it is very possible that we can just launch n std::threads, because std::thread is quite lightweight in C++ (a rough sketch follows below)
   - Use parallel_for for all necessary use cases of the threadpool.
   
   Also consider removing the stride argument, or making it optional, since stride is not used.
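   A rough sketch of what such a parallel_for could look like if it just launches std::thread workers directly, with no persistent thread pool (illustrative only; the final TVM implementation may differ):

```cpp
#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

// Run f(i) for every i in [begin, end), spread over hardware_concurrency() threads.
void parallel_for(int begin, int end, const std::function<void(int)>& f) {
  unsigned num_threads = std::max(1u, std::thread::hardware_concurrency());
  std::vector<std::thread> workers;
  workers.reserve(num_threads);
  for (unsigned t = 0; t < num_threads; ++t) {
    workers.emplace_back([&, t]() {
      // Interleaved partitioning: worker t handles begin+t, begin+t+num_threads, ...
      for (int i = begin + static_cast<int>(t); i < end; i += static_cast<int>(num_threads)) {
        f(i);
      }
    });
  }
  for (std::thread& w : workers) w.join();
}

// Example: parallel_for(0, 100, [](int i) { /* evaluate candidate schedule i */ });
```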
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454076532



##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-    for (Module& m : self->imports_) {
-      pf = m->GetFunction(name, m.data_);
-      if (pf != nullptr) return pf;

Review comment:
   I didn't realize we have an overloaded `operator->`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5235: [RELAY][Fix] i64 indices

2020-07-13 Thread GitBox


tqchen commented on pull request #5235:
URL: https://github.com/apache/incubator-tvm/pull/5235#issuecomment-657944392


   cc @mbrookhart @jroesch @junrushao1994 @ZihengJiang @antinucleon 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454075454



##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-    for (Module& m : self->imports_) {
-      pf = m->GetFunction(name, m.data_);
-      if (pf != nullptr) return pf;

Review comment:
   `m->` is shorthand for `m.operator->()`, so it should work. We are not calling `m.GetFunction` here; GetFunction is overloaded for ModuleNode.
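   A tiny standalone illustration of the point, with hypothetical names rather than the actual TVM classes: a handle type that overloads operator-> forwards `m->GetFunction(...)` to the wrapped node.

```cpp
#include <iostream>
#include <string>

// Stand-in for ModuleNode: owns the real GetFunction implementation.
struct NodeImpl {
  void GetFunction(const std::string& name) { std::cout << "node handles " << name << "\n"; }
};

// Stand-in for Module: a thin handle whose operator-> exposes the node.
struct Handle {
  NodeImpl* node;
  NodeImpl* operator->() { return node; }  // `h->X()` means `h.operator->()->X()`
};

int main() {
  NodeImpl n;
  Handle h{&n};
  h->GetFunction("add");  // dispatches to NodeImpl::GetFunction via operator->
  return 0;
}
```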





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5052: [TARGET] ONNX codegen

2020-07-13 Thread GitBox


tqchen commented on pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#issuecomment-657942495


   @srkreddy1238 please also update. @kazum feel free to make a call to dismiss the outstanding reviews.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen merged pull request #5991: [IR] Fix a primitive check error

2020-07-13 Thread GitBox


tqchen merged pull request #5991:
URL: https://github.com/apache/incubator-tvm/pull/5991


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #5991: [IR] Fix a primitive check error

2020-07-13 Thread GitBox


tqchen commented on pull request #5991:
URL: https://github.com/apache/incubator-tvm/pull/5991#issuecomment-657942237


   Thanks @liangfu , @junrushao1994 @zhiics 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (67ed6d0 -> e4a0aa5)

2020-07-13 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 67ed6d0  Fix conv2_gemm after target structure update (#6037)
 add e4a0aa5  [IR] Fix a primitive check error (#5991)

No new revisions were added by this update.

Summary of changes:
 include/tvm/ir/op.h | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-tvm] tqchen merged pull request #6037: Fix conv2_gemm after target structure update

2020-07-13 Thread GitBox


tqchen merged pull request #6037:
URL: https://github.com/apache/incubator-tvm/pull/6037


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on pull request #6037: Fix conv2_gemm after target structure update

2020-07-13 Thread GitBox


tqchen commented on pull request #6037:
URL: https://github.com/apache/incubator-tvm/pull/6037#issuecomment-657941954


   Thanks @giuseros ! CC @junrushao1994 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (99c52f3 -> 67ed6d0)

2020-07-13 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 99c52f3  [Frontend][TFLite] Fix fully_connected converter when batch 
size is not 1 (#6038)
 add 67ed6d0  Fix conv2_gemm after target structure update (#6037)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/qnn/op/legalizations.py   |  2 +-
 topi/python/topi/arm_cpu/conv2d_gemm.py|  2 +-
 topi/python/topi/arm_cpu/tensor_intrin.py  |  2 +-
 topi/tests/python/test_topi_conv2d_int8.py | 64 ++
 4 files changed, 67 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] jxx123 commented on issue #6050: [BUG] Pytorch frontend error when the value of prim::Constant is a tensor in cuda

2020-07-13 Thread GitBox


jxx123 commented on issue #6050:
URL: https://github.com/apache/incubator-tvm/issues/6050#issuecomment-657940204


   PR link: https://github.com/apache/incubator-tvm/pull/6051



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454071520



##
File path: python/tvm/relay/backend/graph_runtime_factory.py
##
@@ -0,0 +1,113 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Graph runtime factory."""
+import warnings
+from tvm._ffi.base import string_types
+from tvm._ffi.registry import get_global_func
+from tvm.runtime import ndarray
+
+
+def create(graph_json_str, libmod, libmod_name, params):
+    """Create a runtime executor module.
+    Parameters
+    ----------
+    graph_json_str : str or graph class

Review comment:
   I checked the related docs and code. `graph_json_str._tvm_graph_json()` was only used for NNVM (the graph class) before. So I will remove it and add a check, `assert isinstance(graph_json_str, string_types)`. Does that make sense to you?
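   A minimal sketch of the proposed guard (illustrative; the exact assert message in the PR may differ):

```python
from tvm._ffi.base import string_types  # tuple of accepted string types


def create(graph_json_str, libmod, libmod_name, params):
    # The NNVM-era graph-class path (`_tvm_graph_json()`) is removed, so only a
    # serialized JSON string is accepted here.
    assert isinstance(graph_json_str, string_types), \
        "graph_json_str must be the serialized JSON graph as a string"
    # ... build and return the graph runtime factory module ...
```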





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jcf94 commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454071250



##
File path: src/auto_schedule/utils.cc
##
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file auto_schedule/utils.cc
+ * \brief Common utilities.
+ */
+
+#include "utils.h"
+
+namespace tvm {
+namespace auto_schedule {
+
+NullStream& NullStream::Global() {
+  static NullStream stream;
+  return stream;
+}
+
+ThreadPool& ThreadPool::Global() {
+  static ThreadPool* pool = new ThreadPool();
+  static int ct = 0;
+
+  ct = (ct + 1) % ThreadPool::REFRESH_EVERY;
+
+  if (ct == 0) {
+pool->Abort();
+delete pool;
+pool = new ThreadPool();
+  }
+
+  if (pool->NumWorkers() == 0) {
+pool->Launch(std::thread::hardware_concurrency());
+  }
+
+  return *pool;
+}
+
+void parallel_for(int start, int end, std::function f, int 
stride) {

Review comment:
   @tqchen Added a temporary implementation of parallel_for here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on a change in pull request #6044: [RUNTIME][CRT] init TVMPackedFunc's name

2020-07-13 Thread GitBox


liangfu commented on a change in pull request #6044:
URL: https://github.com/apache/incubator-tvm/pull/6044#discussion_r454063901



##
File path: src/runtime/crt/common/packed_func.c
##
@@ -85,6 +86,7 @@ int TVMPackedFunc_InitGlobalFunc(TVMPackedFunc* pf, const 
char* name, const TVMA
 return status;
   }
 
+  snprintf(pf->name,  sizeof(pf->name), "%s", name);

Review comment:
   looks like the extra space has to be removed to pass clang-format.
   ```diff
   -  snprintf(pf->name,  sizeof(pf->name), "%s", name);
   +  snprintf(pf->name, sizeof(pf->name), "%s", name);
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on a change in pull request #6044: [RUNTIME][CRT] init TVMPackedFunc's name

2020-07-13 Thread GitBox


liangfu commented on a change in pull request #6044:
URL: https://github.com/apache/incubator-tvm/pull/6044#discussion_r454063901



##
File path: src/runtime/crt/common/packed_func.c
##
@@ -85,6 +86,7 @@ int TVMPackedFunc_InitGlobalFunc(TVMPackedFunc* pf, const 
char* name, const TVMA
 return status;
   }
 
+  snprintf(pf->name,  sizeof(pf->name), "%s", name);

Review comment:
   looks like the extra space has to be removed to pass clang-format.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454061696



##
File path: python/tvm/relay/backend/graph_runtime_factory.py
##
@@ -0,0 +1,113 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Graph runtime factory."""
+import warnings
+from tvm._ffi.base import string_types
+from tvm._ffi.registry import get_global_func
+from tvm.runtime import ndarray
+
+
+def create(graph_json_str, libmod, libmod_name, params):
+"""Create a runtime executor module.
+Parameters
+--
+graph_json_str : str or graph class
+The graph to be deployed in json format output by nnvm graph.
+The graph can only contain one operator(tvm_op) that
+points to the name of PackedFunc in the libmod.
+libmod : tvm.Module
+The module of the corresponding function

Review comment:
   Yes. We could do it. Thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jcf94 commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454060475



##
File path: src/auto_schedule/utils.h
##
@@ -184,22 +184,36 @@ inline void PrintTitle(const std::string& title, int 
verbose) {
 }
 
 /*!
- * \brief A simple thread pool.
+ * \brief A simple thread pool to perform parallel for.
  * TODO(merrymercy): Move this to `src/support/parallel_for`
  */
-class ThreadPool {
+class ParallelFor {

Review comment:
   OK, I get it. However, after checking the current code, I found out that 
we have actually removed all uses of ThreadPool in this minimum system. I didn't 
realize that before.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454059023



##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, 
bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-for (Module& m : self->imports_) {
-  pf = m->GetFunction(name, m.data_);
-  if (pf != nullptr) return pf;

Review comment:
   Unluckily we cannot do this, because `m` is a `Module`, which only has 
`GetFunction(const std::string& name, const ObjectPtr& sptr_to_self)`. 
This is what we said before: if we want to add `query_imports`, we will have to modify 
the basic core class `Module`.
   
   But you prompted me to realize that we could do 
`pf = m.operator->()->GetFunction(name, query_imports);`, i.e. get the `ModuleNode` and 
call the function. Does it make sense to you?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


tqchen commented on a change in pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#discussion_r454058968



##
File path: src/auto_schedule/utils.h
##
@@ -184,22 +184,36 @@ inline void PrintTitle(const std::string& title, int 
verbose) {
 }
 
 /*!
- * \brief A simple thread pool.
+ * \brief A simple thread pool to perform parallel for.
  * TODO(merrymercy): Move this to `src/support/parallel_for`
  */
-class ThreadPool {
+class ParallelFor {

Review comment:
   @jcf94 Sorry, I didn't mean that we should rename ThreadPool to 
ParallelFor; instead we should hide the use of the thread pool behind a parallel_for 
API, in a similar style to 
https://docs.microsoft.com/en-us/cpp/parallel/concrt/parallel-algorithms?view=vs-2019#parallel_for





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454059023



##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, 
bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-for (Module& m : self->imports_) {
-  pf = m->GetFunction(name, m.data_);
-  if (pf != nullptr) return pf;

Review comment:
   Unluckily we cannot do this, because `m` is a `Module`, which only has 
`GetFunction(const std::string& name, const ObjectPtr& sptr_to_self)`. 
This is what we said before: if we want to add `query_imports`, we will have to modify 
the basic core class `Module`.
   
   But you prompted me to realize that we could do 
`pf = m.operator->()->GetFunction(name, query_imports);`, i.e. get the `ModuleNode` and 
call the function. Does it make sense to you?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] windclarion commented on a change in pull request #6044: [RUNTIME][CRT] init TVMPackedFunc's name

2020-07-13 Thread GitBox


windclarion commented on a change in pull request #6044:
URL: https://github.com/apache/incubator-tvm/pull/6044#discussion_r454058824



##
File path: src/runtime/crt/common/packed_func.c
##
@@ -85,6 +85,7 @@ int TVMPackedFunc_InitGlobalFunc(TVMPackedFunc* pf, const 
char* name, const TVMA
 return status;
   }
 
+  strncpy(pf->name, name, sizeof(pf->name));

Review comment:
   done!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jcf94 commented on pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jcf94 commented on pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#issuecomment-657924465


   > Does not have to change now, but let us change the use of ThreadPool to 
parallel_for abstraction.
   
   Does that mean just modifying ThreadPool to ParallelFor for now? The class has been 
renamed and some comments added on the member functions.
   
   > @jcf94 and @merrymercy thanks for all the hard work! Can I request that we 
put another unresolved issue? In my opinion the written English parts i.e 
comments, explanations, etc could still use some improvement with both content 
and grammar and I would propose in general that we do some at least 1 or 2 
rounds of full documentation polish (comments, examples, tests, tutorials, etc) 
before we officially release a feature (in this case when all of Ansor is 
landed in master). We tried to do this with Relay but I think we should 
continue to strive to do a better job with new features like this.
   
   Thanks! That would be of great help since I'm not a native speaker. The 
documentation does need to be polished.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454054106



##
File path: python/tvm/relay/backend/graph_runtime_factory.py
##
@@ -0,0 +1,113 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Graph runtime factory."""
+import warnings
+from tvm._ffi.base import string_types
+from tvm._ffi.registry import get_global_func
+from tvm.runtime import ndarray
+
+
+def create(graph_json_str, libmod, libmod_name, params):
+"""Create a runtime executor module.
+Parameters
+--
+graph_json_str : str or graph class
+The graph to be deployed in json format output by nnvm graph.
+The graph can only contain one operator(tvm_op) that
+points to the name of PackedFunc in the libmod.
+libmod : tvm.Module
+The module of the corresponding function

Review comment:
   Can we simply merge `create` into the constructor of GraphRuntimeFactoryModule?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-07-13 Thread GitBox


zhiics commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r454043140



##
File path: src/relay/backend/contrib/arm_compute_lib/codegen_acl.h
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/arm_compute_lib/codegen_acl.h
+ * \brief The Relay -> ACL JSON schema compiler.
+ */
+
+#ifndef TVM_RELAY_BACKEND_CONTRIB_ARM_COMPUTE_LIB_CODEGEN_ACL_H_
+#define TVM_RELAY_BACKEND_CONTRIB_ARM_COMPUTE_LIB_CODEGEN_ACL_H_
+
+#include 
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../runtime/contrib/json/json_node.h"
+#include "../codegen_json/codegen_json.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace arm_compute_lib {
+
+/*!
+ * \brief Generates an ACLModule from a relay expression. This "compilation"
+ * does not require ACL since the actual conversion using ACL APIs is
+ * deferred until creation of the runtime. This step simply serializes the
+ * relay program into a JSON string.
+ */
+class ACLJSONSerializer : public backend::contrib::JSONSerializer {
+  using JSONGraphNode = tvm::runtime::json::JSONGraphNode;
+  using JSONGraphNodeEntry = tvm::runtime::json::JSONGraphNodeEntry;
+
+ public:
+  ACLJSONSerializer(const std::string& symbol, const Expr& expr) : 
JSONSerializer(symbol, expr) {}
+
+  std::vector VisitExpr_(const CallNode* cn) override;
+  std::vector VisitExpr_(const ConstantNode* cn) override;
+
+  /*!
+   * \brief Get the constant data transposed when pre-processing the
+   * input function.
+   *
+   * \return An array of constants
+   */
+  Array GetParamsData();
+
+ private:
+  /*!
+   * \brief Create a JSON representation of an operator.
+   *
+   * \param call The call to be represented.
+   * \return A JSON representation of a specific operator.
+   */
+  std::shared_ptr CreateOp(const CallNode* cn);
+  std::shared_ptr CreateCompositeConvolution(const CallNode* 
cn);
+
+  /* \brief Transposed constant tensors to serialize. Arm Compute Library 
expects constant tensors
+   * in OHWI format. */
+  Array constants_;
+};
+
+/*!
+ * \brief Pre-process a module containing functions ready for ACL codegen.
+ *
+ * For now we enforce OHWI kernel layout and fold the transforms away.
+ *
+ * \param mod The module to be pre-processed.
+ * \return The processed module.
+ */
+IRModule PreProcessModule(const IRModule& mod);
+
+/*!
+ * \brief Create a runtime module for ACL.
+ *
+ * This consists of a series of "serialized functions" which each represent a
+ * sub-graph to be computed by ACL and will each be executed independently from
+ * one another. Each function consists of serialized JSON describing the 
sub-graph
+ * and serialized constant tensors.
+ *
+ * \note The ACL runtime module only currently supports a single operator per
+ * sub-graph currently.
+ *
+ * \param ref The ext_func Relay expression/module to be executed using extern 
ops.
+ * \return A runtime module.
+ */
+runtime::Module ACLCompiler(const ObjectRef& ref);
+
+/*!
+ * \brief Get the external symbol of the Relay function name.
+ *
+ * \param func The provided function.
+ *
+ * \return An external symbol.
+ */
+std::string GetExtSymbol(const Function& func) {
+  const auto name_node = func->GetAttr(tvm::attr::kGlobalSymbol);
+  CHECK(name_node.defined()) << "Fail to retrieve external symbol.";
+  return std::string(name_node.value());
+}
+
+TVM_REGISTER_GLOBAL("relay.ext.arm_compute_lib").set_body_typed(ACLCompiler);

Review comment:
   put this in the cc file

##
File path: src/relay/backend/contrib/arm_compute_lib/codegen_acl.h
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applic

[GitHub] [incubator-tvm] tqchen commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


tqchen commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454053862



##
File path: src/runtime/module.cc
##
@@ -66,9 +66,19 @@ PackedFunc ModuleNode::GetFunction(const std::string& name, 
bool query_imports)
   PackedFunc pf = self->GetFunction(name, GetObjectPtr(this));
   if (pf != nullptr) return pf;
   if (query_imports) {
-for (Module& m : self->imports_) {
-  pf = m->GetFunction(name, m.data_);
-  if (pf != nullptr) return pf;

Review comment:
   After looking at it again, I think we can just do 
   
   ```c++
   for (Module& m : self->imports_) {
     pf = m->GetFunction(name, m.data_, query_imports);
     if (pf != nullptr) return pf;
   }
   ...
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on a change in pull request #6044: [RUNTIME][CRT] init TVMPackedFunc's name

2020-07-13 Thread GitBox


liangfu commented on a change in pull request #6044:
URL: https://github.com/apache/incubator-tvm/pull/6044#discussion_r454051962



##
File path: src/runtime/crt/common/packed_func.c
##
@@ -85,6 +85,7 @@ int TVMPackedFunc_InitGlobalFunc(TVMPackedFunc* pf, const 
char* name, const TVMA
 return status;
   }
 
+  strncpy(pf->name, name, sizeof(pf->name));

Review comment:
   Please use `snprintf` instead, since `strncpy` is unsafe. see 
https://stackoverflow.com/questions/12275381/strncpy-vs-sprintf





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jxx123 opened a new pull request #6051: Fix pytorch frontend prim::Constant issue

2020-07-13 Thread GitBox


jxx123 opened a new pull request #6051:
URL: https://github.com/apache/incubator-tvm/pull/6051


   This PR is to fix the issue described here: 
https://github.com/apache/incubator-tvm/issues/6050



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] liangfu commented on pull request #6049: [Pytorch] add operator copy_ support

2020-07-13 Thread GitBox


liangfu commented on pull request #6049:
URL: https://github.com/apache/incubator-tvm/pull/6049#issuecomment-657919266


   @masahi would you please take a look?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jxx123 opened a new issue #6050: [BUG] Pytorch frontend error when the value of prim::Constant is a tensor in cuda

2020-07-13 Thread GitBox


jxx123 opened a new issue #6050:
URL: https://github.com/apache/incubator-tvm/issues/6050


   Script to reproduce the issue:
   ```python
   import torch
   from tvm import relay
   
   
   class Foo(torch.nn.Module):
   def __init__(self, bar):
   super(Foo, self).__init__()
   self.bar = bar
   
   def forward(self, x):
   return self.bar + x
   
   foo = Foo(torch.cuda.FloatTensor([[1.0, 2.0], [1.0, 2.0]]))
   x = torch.cuda.FloatTensor([[1.0, 2.0], [1.0, 2.0]])
   traced_foo = torch.jit.trace_module(foo, {'forward': x})
   shape_list = [('input0', (2, 2))]
   module, params = relay.frontend.from_pytorch(traced_foo, shape_list)
   ```
   
   The fix is quite straightforward: just convert the tensor to CPU when 
the tensor is on CUDA, at line 
https://github.com/jxx123/incubator-tvm/blob/master/python/tvm/relay/frontend/pytorch.py#L2174.
 
   
   Will submit a PR soon.
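
   A minimal sketch of that conversion, assuming `tensor` is the value pulled out of 
the `prim::Constant` node at that point (variable names are illustrative, not the exact patch):

   ```python
   # Constants traced on GPU come back as CUDA tensors; move them to the host
   # first, otherwise tensor.numpy() raises a TypeError.
   if tensor.is_cuda:
       tensor = tensor.cpu()
   np_value = tensor.numpy()
   ```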
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


zhiics commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454046393



##
File path: src/runtime/graph/graph_runtime_factory.cc
##
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file graph_runtime_factory.cc
+ * \brief Graph runtime factory implementations
+ */
+
+#include "./graph_runtime_factory.h"
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+
+GraphRuntimeFactory::GraphRuntimeFactory(
+const std::string& graph_json,
+const std::unordered_map& params,
+const std::string& module_name) {
+  graph_json_ = graph_json;
+  params_ = params;
+  module_name_ = module_name;
+}
+
+PackedFunc GraphRuntimeFactory::GetFunction(
+const std::string& name, const 
tvm::runtime::ObjectPtr& sptr_to_self) {
+  if (name == module_name_) {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  std::vector contexts;
+  for (int i = 0; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->RuntimeCreate(contexts);
+});
+  } else if (name == "debug_create") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  CHECK_GE(args.size(), 2);
+  std::string module_name = args[0].operator String();
+  CHECK(module_name == module_name_) << "Currently we only support single 
model for now.";
+  std::vector contexts;
+  for (int i = 1; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->DebugRuntimeCreate(contexts);
+});
+  } else if (name == "remove_params") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  std::unordered_map empty_params{};
+  auto exec =
+  make_object(this->graph_json_, empty_params, 
this->module_name_);
+  exec->Import(this->imports_[0]);
+  *rv = Module(exec);
+});
+  } else {
+return PackedFunc();
+  }
+}
+
+void GraphRuntimeFactory::SaveToBinary(dmlc::Stream* stream) {
+  stream->Write(graph_json_);
+  std::vector names;
+  std::vector arrays;
+  for (const auto& v : params_) {
+names.emplace_back(v.first);
+arrays.emplace_back(const_cast(v.second.operator->()));
+  }
+  uint64_t sz = arrays.size();

Review comment:
   It was introduced in #5770. We should not do it in this pr. This just 
makes you aware of it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leonwanghui commented on pull request #5892: Add TVM application extension with WASM runtime

2020-07-13 Thread GitBox


leonwanghui commented on pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#issuecomment-657915075


   @tqchen It seems that the `./cargo/config` file is not allowed to be 
included in the repo; the CI error is as follows:
   ```shell
   + docker/bash.sh tvmai/ci-lint:v0.61 ./tests/scripts/task_lint.sh
   
   WORKSPACE: /scratch/jenkins-tvm/cudabuild/workspace/exec_3/tvm/sanity
   
   DOCKER CONTAINER NAME: tvmai/ci-lint:v0.61
   
   
   
   Running './tests/scripts/task_lint.sh' inside tvmai/ci-lint:v0.61...
   
   mesg: ttyname failed: Inappropriate ioctl for device
   
   Adding group `tvm' (GID 1000) ...
   
   Done.
   
   Check file types...
   
   --File type check report
   
   apps/wasm-standalone/wasm-graph/.cargo/config
   
   Found 1 files that are now allowed
   
   We do not check in binary files into the repo.
   
   If necessary, please discuss with committers and modify 
tests/lint/check_file_type.py to enable the file you need.
   
   script returned exit code 255
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454044403



##
File path: src/runtime/graph/graph_runtime_factory.cc
##
@@ -0,0 +1,175 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file graph_runtime_factory.cc
+ * \brief Graph runtime factory implementations
+ */
+
+#include "./graph_runtime_factory.h"
+
+#include 
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+namespace runtime {
+
+GraphRuntimeFactory::GraphRuntimeFactory(
+const std::string& graph_json,
+const std::unordered_map& params,
+const std::string& module_name) {
+  graph_json_ = graph_json;
+  params_ = params;
+  module_name_ = module_name;
+}
+
+PackedFunc GraphRuntimeFactory::GetFunction(
+const std::string& name, const 
tvm::runtime::ObjectPtr& sptr_to_self) {
+  if (name == module_name_) {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  std::vector contexts;
+  for (int i = 0; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->RuntimeCreate(contexts);
+});
+  } else if (name == "debug_create") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  CHECK_GE(args.size(), 2);
+  std::string module_name = args[0].operator String();
+  CHECK(module_name == module_name_) << "Currently we only support single 
model for now.";
+  std::vector contexts;
+  for (int i = 1; i < args.num_args; ++i) {
+contexts.emplace_back(args[i].operator TVMContext());
+  }
+  *rv = this->DebugRuntimeCreate(contexts);
+});
+  } else if (name == "remove_params") {
+return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+  std::unordered_map empty_params{};
+  auto exec =
+  make_object(this->graph_json_, empty_params, 
this->module_name_);
+  exec->Import(this->imports_[0]);
+  *rv = Module(exec);
+});
+  } else {
+return PackedFunc();
+  }
+}
+
+void GraphRuntimeFactory::SaveToBinary(dmlc::Stream* stream) {
+  stream->Write(graph_json_);
+  std::vector names;
+  std::vector arrays;
+  for (const auto& v : params_) {
+names.emplace_back(v.first);
+arrays.emplace_back(const_cast(v.second.operator->()));
+  }
+  uint64_t sz = arrays.size();

Review comment:
   Could you give me a link about this MetadataModule?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


comaniac commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454043821



##
File path: python/tvm/autotvm/measure/measure_methods.py
##
@@ -473,8 +482,10 @@ def run_through_rpc(measure_input, build_result,
 remote.upload(build_result.filename)
 func = remote.load_module(os.path.split(build_result.filename)[1])
 ctx = remote.context(str(measure_input.target), 0)
+f_prepare = 'cache_flush_cpu_non_first_arg' if enable_cpu_cache_flush 
else ''

Review comment:
   I'm fine with it. Thanks





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5753: Support module based interface runtime

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5753:
URL: https://github.com/apache/incubator-tvm/pull/5753#discussion_r454043676



##
File path: python/tvm/relay/backend/graph_runtime_factory.py
##
@@ -0,0 +1,113 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+"""Graph runtime factory."""
+import warnings
+from tvm._ffi.base import string_types
+from tvm._ffi.registry import get_global_func
+from tvm.runtime import ndarray
+
+
+def create(graph_json_str, libmod, libmod_name, params):
+"""Create a runtime executor module.
+Parameters
+--
+graph_json_str : str or graph class

Review comment:
   Ah sorry, I just copied the doc comment from graph_runtime.py. I will 
check this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5915: [BYOC][Contrib] Arm Compute Library integration

2020-07-13 Thread GitBox


comaniac commented on a change in pull request #5915:
URL: https://github.com/apache/incubator-tvm/pull/5915#discussion_r454026302



##
File path: src/relay/backend/contrib/arm_compute_lib/codegen.cc
##
@@ -0,0 +1,188 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file src/relay/backend/contrib/arm_compute_lib/codegen_acl.cc
+ * \brief Implementation of the Relay -> ACL JSON serializer.
+ */
+#include 
+#include 
+#include 
+
+#include "../../utils.h"
+#include "codegen_acl.h"
+
+namespace tvm {
+namespace relay {
+namespace contrib {
+namespace arm_compute_lib {
+
+using JSONGraphNode = tvm::runtime::json::JSONGraphNode;
+using JSONGraphNodeEntry = tvm::runtime::json::JSONGraphNodeEntry;
+
+std::vector ACLJSONSerializer::VisitExpr_(const CallNode* 
cn) {
+  Expr expr = GetRef(cn);
+  std::string name;
+  std::shared_ptr json_node;
+
+  if (cn->op.as()) {
+json_node = CreateOp(cn);
+  } else if (const auto* fn = cn->op.as()) {
+auto comp = fn->GetAttr(attr::kComposite);
+CHECK(comp.defined()) << "Arm Compute Library JSON runtime only supports 
composite functions.";
+name = comp.value();
+if (name == "arm_compute_lib.conv2d") {
+  json_node = CreateCompositeConvolution(cn);
+} else {
+  LOG(FATAL) << "Unrecognized Arm Compute Library pattern: " << name;
+}
+  } else {
+LOG(FATAL) << "Arm Compute Library JSON runtime does not support calls to "
+   << cn->op->GetTypeKey();
+  }
+
+  return AddNode(json_node, GetRef(cn));
+}
+
+std::vector ACLJSONSerializer::VisitExpr_(const 
ConstantNode* cn) {
+  this->constants_.push_back(cn->data);
+  return JSONSerializer::VisitExpr_(cn);
+}
+
+std::shared_ptr ACLJSONSerializer::CreateOp(const CallNode* cn) 
{
+  const auto* op = cn->op.as();
+  CHECK(op);
+  const std::string name = op->name;
+  // Collect inputs
+  std::vector inputs;
+  for (const auto& arg : cn->args) {
+auto res = VisitExpr(arg);
+inputs.insert(inputs.end(), res.begin(), res.end());
+  }
+  // Create JSON op
+  auto json_node = std::make_shared(name, "kernel", inputs, 1);
+  SetCallNodeAttribute(json_node, cn);
+  return json_node;
+}
+
+std::shared_ptr 
ACLJSONSerializer::CreateCompositeConvolution(const CallNode* cn) {

Review comment:
   Ditto. s/CreateCompositeConvolution/CreateCompositeConvJSONNode/ ?

##
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##
@@ -0,0 +1,119 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument
+"""ACL library supported operators."""
+import tvm
+from tvm.relay import transform
+from tvm.relay.build_module import bind_params_by_name
+
+from ...dataflow_pattern import wildcard, is_op, is_constant
+from .register import register_pattern_table
+
+
+def is_arm_compute_runtime_present():

Review comment:
   `is_arm_compute_runtime_enabled` seems better to me.

##
File path: python/tvm/relay/op/contrib/arm_compute_lib.py
##
@@ -0,0 +1,119 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.o

[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454039357



##
File path: python/tvm/autotvm/measure/measure_methods.py
##
@@ -473,8 +482,10 @@ def run_through_rpc(measure_input, build_result,
 remote.upload(build_result.filename)
 func = remote.load_module(os.path.split(build_result.filename)[1])
 ctx = remote.context(str(measure_input.target), 0)
+f_prepare = 'cache_flush_cpu_non_first_arg' if enable_cpu_cache_flush 
else ''

Review comment:
   I could add it at that place in measure_methods.py. Does that 
place work for you?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #6038: [Frontend][TFLite] Fix fully_connected converter when batch size is not 1

2020-07-13 Thread GitBox


FrozenGene commented on pull request #6038:
URL: https://github.com/apache/incubator-tvm/pull/6038#issuecomment-657909009


   @trevor-m @anijain2305 Thank you. This pr is merged now.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene merged pull request #6038: [Frontend][TFLite] Fix fully_connected converter when batch size is not 1

2020-07-13 Thread GitBox


FrozenGene merged pull request #6038:
URL: https://github.com/apache/incubator-tvm/pull/6038


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated (712c82f -> 99c52f3)

2020-07-13 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 712c82f  Add support for tflite arg_min and arg_max (#5992)
 add 99c52f3  [Frontend][TFLite] Fix fully_connected converter when batch 
size is not 1 (#6038)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 10 +-
 tests/python/frontend/tflite/test_forward.py | 21 +++--
 2 files changed, 20 insertions(+), 11 deletions(-)



[GitHub] [incubator-tvm] leonwanghui commented on a change in pull request #5052: [TARGET] ONNX codegen

2020-07-13 Thread GitBox


leonwanghui commented on a change in pull request #5052:
URL: https://github.com/apache/incubator-tvm/pull/5052#discussion_r454034918



##
File path: python/tvm/contrib/target/__init__.py
##
@@ -14,5 +14,3 @@
 # KIND, either express or implied.  See the License for the
 # specific language governing permissions and limitations
 # under the License.
-"""Codegen and runtime APIs for targets.

Review comment:
   Just confused why the annotation has been removed : )





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] lsy643 commented on pull request #6033: Create an Invert permutation Operator

2020-07-13 Thread GitBox


lsy643 commented on pull request #6033:
URL: https://github.com/apache/incubator-tvm/pull/6033#issuecomment-657904946


   @anijain2305 This is the link to 
[invert_permutation](https://www.tensorflow.org/api_docs/python/tf/math/invert_permutation)
 from TensorFlow.
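
   For reference, the semantics are easy to state in NumPy; this toy version mirrors the 
example from the TensorFlow docs, `invert_permutation([3, 4, 0, 2, 1]) == [2, 4, 3, 0, 1]` 
(it is only a sketch of the math, not the TVM operator implementation):

   ```python
   import numpy as np

   def invert_permutation(perm):
       # out[perm[i]] = i for every i
       perm = np.asarray(perm)
       out = np.empty_like(perm)
       out[perm] = np.arange(perm.size)
       return out

   print(invert_permutation([3, 4, 0, 2, 1]))  # -> [2 4 3 0 1]
   ```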



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


comaniac commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454034185



##
File path: python/tvm/autotvm/measure/measure_methods.py
##
@@ -473,8 +482,10 @@ def run_through_rpc(measure_input, build_result,
 remote.upload(build_result.filename)
 func = remote.load_module(os.path.split(build_result.filename)[1])
 ctx = remote.context(str(measure_input.target), 0)
+f_prepare = 'cache_flush_cpu_non_first_arg' if enable_cpu_cache_flush 
else ''

Review comment:
   Ah, sorry for missing the previous comment. Then we could leave it for 
now. Meanwhile, do you guys think it would be better to add a comment about 
this restriction?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] leonwanghui commented on pull request #5892: Add TVM application extension with WASM runtime

2020-07-13 Thread GitBox


leonwanghui commented on pull request #5892:
URL: https://github.com/apache/incubator-tvm/pull/5892#issuecomment-657903409


   cc @tqchen Please review it again, thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] huajsj opened a new pull request #6049: [Pytorch] add operator copy_ support

2020-07-13 Thread GitBox


huajsj opened a new pull request #6049:
URL: https://github.com/apache/incubator-tvm/pull/6049


   Issue:
   When using TVM to compile a PyTorch network, TVM failed because the copy_ 
operator is not supported.
   
   Solution:
   Add PyTorch copy_ operator support to TVM.
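
   For context, a converter for this op in the PyTorch frontend could be as small as the 
sketch below; the mapping and helper names are illustrative and not necessarily what this 
PR does. Since a traced `aten::copy_(self, src)` writes `src` into `self`, returning the 
source value is one simple approximation (it ignores broadcasting corner cases):

   ```python
   def _copy():
       def _impl(inputs, input_types):
           # inputs[0] is the destination tensor, inputs[1] the source;
           # as a simplification, the result of copy_ is taken to be the source value.
           return inputs[1]
       return _impl

   # hypothetical registration in the frontend's convert map:
   # "aten::copy_": _copy(),
   ```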



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454024307



##
File path: tutorials/autotvm/tune_relay_x86.py
##
@@ -122,8 +122,9 @@ def get_network(name, batch_size):
 
 'measure_option': autotvm.measure_option(
 builder=autotvm.LocalBuilder(),
-runner=autotvm.LocalRunner(number=10, repeat=1,
-   min_repeat_ms=1000),
+runner=autotvm.LocalRunner(number=1, repeat=10,
+   min_repeat_ms=0,
+   enable_cpu_cache_flush=True),

Review comment:
   I will update it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454024398



##
File path: python/tvm/runtime/module.py
##
@@ -163,7 +163,7 @@ def save(self, file_name, fmt=""):
 """
 _ffi_api.ModuleSaveToFile(self, file_name, fmt)
 
-def time_evaluator(self, func_name, ctx, number=10, repeat=1, 
min_repeat_ms=0):
+def time_evaluator(self, func_name, ctx, number=10, repeat=1, 
min_repeat_ms=0, f_prepare=''):

Review comment:
   I agree with you.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454023755



##
File path: python/tvm/autotvm/measure/measure_methods.py
##
@@ -309,7 +313,8 @@ class LocalRunner(RPCRunner):
 Whether check correctness after measurement. This will use llvm cpu 
target to
 call your template and get the reference output.
 This can work for TOPI templates, but may not work for your custom 
template.
-
+enable_cpu_cache_flush: bool
+Whether to enable cpu cache flush

Review comment:
   Thanks, I will update the doc





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on a change in pull request #5914: [clflush] Enable x86 cpu cache flush

2020-07-13 Thread GitBox


FrozenGene commented on a change in pull request #5914:
URL: https://github.com/apache/incubator-tvm/pull/5914#discussion_r454023177



##
File path: python/tvm/autotvm/measure/measure_methods.py
##
@@ -473,8 +482,10 @@ def run_through_rpc(measure_input, build_result,
 remote.upload(build_result.filename)
 func = remote.load_module(os.path.split(build_result.filename)[1])
 ctx = remote.context(str(measure_input.target), 0)
+f_prepare = 'cache_flush_cpu_non_first_arg' if enable_cpu_cache_flush 
else ''

Review comment:
   In the previous design I also wanted to pass the PackedFunc here directly, but it 
has a problem with RPC, as mentioned here: 
https://github.com/apache/incubator-tvm/pull/5914#issuecomment-654560411 So 
currently we can only pass the string to work around the RPC restriction.
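
   For reference, with the string-based workaround the call site looks roughly like the 
sketch below; `func`, `ctx`, `args` and the other names follow the surrounding autotvm 
code and are illustrative here, while the `f_prepare` parameter and the 
`'cache_flush_cpu_non_first_arg'` key come from the diff above:

   ```python
   f_prepare = 'cache_flush_cpu_non_first_arg' if enable_cpu_cache_flush else ''
   time_f = func.time_evaluator(func.entry_name, ctx, number=number,
                                repeat=repeat, min_repeat_ms=min_repeat_ms,
                                f_prepare=f_prepare)
   costs = time_f(*args).results
   ```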





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 edited a comment on pull request #6033: Create an Invert permutation Operator

2020-07-13 Thread GitBox


anijain2305 edited a comment on pull request #6033:
URL: https://github.com/apache/incubator-tvm/pull/6033#issuecomment-657888723


   Thanks for the contribution. Wondering if it is possible to add some context 
or links here?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on pull request #6033: Create an Invert permutation Operator

2020-07-13 Thread GitBox


anijain2305 commented on pull request #6033:
URL: https://github.com/apache/incubator-tvm/pull/6033#issuecomment-657888723


   Wondering if it is possible to add some context or links here?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5980: Fixed point multiplication improvements for AArch64

2020-07-13 Thread GitBox


anijain2305 commented on a change in pull request #5980:
URL: https://github.com/apache/incubator-tvm/pull/5980#discussion_r454000570



##
File path: include/tvm/tir/builtin.h
##
@@ -92,6 +92,14 @@ TVM_DLL const Op& shift_right();
  */
 TVM_DLL const Op& large_uint_imm();
 
+/*!
+ * \brief Execute a multiplication between two Q-numbers x and y
+ * followed by a right shift s
+ * The default rounding rule is to the nearest value, rounding half up
+ * (i.e., round(x.1) = x and round (x.5) = x+1)
+ */
+TVM_DLL const Op& qmuls();

Review comment:
   We should come up with a better name. Currently, `qmuls` seems vague.
   It is not clear what `q` and `s` stand for to a person not familiar with Q numbers.
   
   Why not use the same `fixed_point_multiply` name and keep the Q-numbers description 
in the doc string?
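
   As a side note, the operation itself is easy to illustrate with plain integers; a toy 
model of `out = round(x*y*2^-s)` with the round-half-up rule described above (for 
non-negative products, and ignoring the Q-number scaling of the real intrinsic):

   ```python
   def toy_multiply_shift(x, y, s):
       # round(x * y / 2**s), rounding half up, valid for non-negative x * y
       prod = x * y
       return (prod + (1 << (s - 1))) >> s

   print(toy_multiply_shift(3, 5, 2))  # round(15 / 4) = round(3.75) -> 4
   print(toy_multiply_shift(6, 1, 2))  # round(6 / 4)  = round(1.5)  -> 2 (half up)
   ```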





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] FrozenGene commented on pull request #6042: [TIR] Add an option to limit the maximum extent of explicitly unrolled loops

2020-07-13 Thread GitBox


FrozenGene commented on pull request #6042:
URL: https://github.com/apache/incubator-tvm/pull/6042#issuecomment-657886398


   I understand this PR's purpose. I ran into hangs with an unroll extent of 32 for quite 
a long time on ARMv7 CPU (the worse part is that the compile time of a single convolution 
op is OK, but it is not when compiling the whole network). The previous fix was to limit 
max_unroll to 16, and everything works well. 
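
   As a reference for that kind of workaround, capping explicit unrolling inside a 
schedule usually looks like the sketch below; the compute, stage and axis names are 
illustrative and not taken from the actual ARM templates:

   ```python
   import tvm
   from tvm import te

   A = te.placeholder((32, 8), name="A")
   B = te.compute((32, 8), lambda i, j: A[i, j] * 2, name="B")
   s = te.create_schedule(B.op)

   max_unroll = 16
   i, j = s[B].op.axis
   # Only unroll an axis whose constant extent is small enough to keep
   # compile time under control.
   if j.dom.extent.value <= max_unroll:
       s[B].unroll(j)

   print(tvm.lower(s, [A, B], simple_mode=True))
   ```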



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] jroesch commented on pull request #5962: [Ansor][AutoTVM v2.0] Part 0: Ansor minimum system for auto schedule generating

2020-07-13 Thread GitBox


jroesch commented on pull request #5962:
URL: https://github.com/apache/incubator-tvm/pull/5962#issuecomment-657883231


   @jcf94 and @merrymercy thanks for all the hard work! Can I request that we 
add another unresolved issue? In my opinion the written English parts, i.e. 
comments, explanations, etc., could still use some improvement in both content 
and grammar, and I would propose in general that we do at least 1 or 2 
rounds of full documentation polish (comments, examples, tests, tutorials, etc.) 
before we officially release a feature (in this case when all of Ansor is 
landed in master). We tried to do this with Relay, but I think we should 
continue to strive to do a better job with new features like this. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5980: Fixed point multiplication improvements for AArch64

2020-07-13 Thread GitBox


anijain2305 commented on a change in pull request #5980:
URL: https://github.com/apache/incubator-tvm/pull/5980#discussion_r454011323



##
File path: topi/python/topi/arm_cpu/tensor_intrin.py
##
@@ -451,3 +451,55 @@ def _instr(index):
 return te.decl_tensor_intrin(
 C.op, _intrin_func, binds={data:a_buffer, kernel:b_buffer},
 default_buffer_params=buffer_params)
+
+def _qmuls_arm(op):
+"""
+Implementation of qmuls through arm intrinsics sqrdmulh and srshl
+when q == 31.
+
+Please note that this is introducing a small round-up error for
+some corner cases. This is because we are rounding twice instead
+than only once. I.e.:
+
+* original qmuls: round(x*y*2^-s)
+* arm qmuls: round(round(x*y)*2^-s)
+"""
+x = op.args[0]
+y = op.args[1]
+q = op.args[2]
+s = op.args[3]
+
+# Don't use this intrinsic if we don't have a int32x4 vector
+# and if we are not multiplying q31 numbers
+if x.dtype != "int32x4" and q == 31:

Review comment:
   Can you please double check the condition? 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5980: Fixed point multiplication improvements for AArch64

2020-07-13 Thread GitBox


anijain2305 commented on a change in pull request #5980:
URL: https://github.com/apache/incubator-tvm/pull/5980#discussion_r454011323



##
File path: topi/python/topi/arm_cpu/tensor_intrin.py
##
@@ -451,3 +451,55 @@ def _instr(index):
 return te.decl_tensor_intrin(
 C.op, _intrin_func, binds={data:a_buffer, kernel:b_buffer},
 default_buffer_params=buffer_params)
+
+def _qmuls_arm(op):
+"""
+Implementation of qmuls through arm intrinsics sqrdmulh and srshl
+when q == 31.
+
+Please note that this is introducing a small round-up error for
+some corner cases. This is because we are rounding twice instead
+of only once. I.e.:
+
+* original qmuls: round(x*y*2^-s)
+* arm qmuls: round(round(x*y)*2^-s)
+"""
+x = op.args[0]
+y = op.args[1]
+q = op.args[2]
+s = op.args[3]
+
+# Don't use this intrinsic if we don't have a int32x4 vector
+# and if we are not multiplying q31 numbers
+if x.dtype != "int32x4" and q == 31:

Review comment:
   Can you please double check the condition? Should there be `or` here?
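
For reference, a hedged sketch of what the guard presumably intends 
(illustration only; `default_lowering` is a placeholder name, not code from the PR):

```python
# Fall back to the generic lowering unless we have an int32x4 vector AND q == 31,
# i.e. skip the intrinsic whenever either requirement is not met.
if x.dtype != "int32x4" or q != 31:
    return default_lowering(op)  # hypothetical fallback helper
```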





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 commented on a change in pull request #5980: Fixed point multiplication improvements for AArch64

2020-07-13 Thread GitBox


anijain2305 commented on a change in pull request #5980:
URL: https://github.com/apache/incubator-tvm/pull/5980#discussion_r453998267



##
File path: include/tvm/relay/attrs/transform.h
##
@@ -298,6 +298,17 @@ struct ClipAttrs : public tvm::AttrsNode<ClipAttrs> {
   }
 };
 
+/*! \brief Attributes for FixedPointMultiply operator */
+struct FixedPointMultiplyAttrs : public tvm::AttrsNode<FixedPointMultiplyAttrs> {
+  int32_t multiplier;
+  int32_t shift;
+
+  TVM_DECLARE_ATTRS(FixedPointMultiplyAttrs, 
"relay.attrs.FixedPointMultiplyAttrs") {
+TVM_ATTR_FIELD(multiplier).describe("Integer multiplier.");

Review comment:
   Nit, but let's remove the period at the end to be consistent with the others.
   It might also be good to briefly describe the multiplier and shift.

##
File path: include/tvm/tir/op.h
##
@@ -552,6 +552,24 @@ TVM_DLL PrimExpr trunc(PrimExpr x);
  */
 TVM_DLL PrimExpr LargeUIntImm(DataType dtype, int64_t low, int64_t high);
 
+/*!
+ * \brief Execute a multiplication between two Q-numbers x and y
+ * followed by a right shift s. The mathematical expression is:
+ *
+ *out = round(x*y*2^-s)
+ *
+ * More about Q-numbers here: https://en.wikipedia.org/wiki/Q_(number_format)
+ *
+ * The rounding rule is to the nearest value, rounding half up
+ * (i.e., round(x.1) = x and round (x.5) = x+1)
+ * \param x first Q-number
+ * \param y second Q-number
+ * \param q Q-ness of x and y

Review comment:
   Agreed, "number of fractional bits" is a better description.

##
File path: src/relay/op/tensor/unary.cc
##
@@ -274,6 +274,20 @@ 
TVM_REGISTER_GLOBAL("relay.op._make.clip").set_body_typed([](Expr a, double a_mi
   return Call(op, {a}, Attrs(attrs), {});
 });
 
+// relay.fixed_point_multiply
+TVM_REGISTER_NODE_TYPE(FixedPointMultiplyAttrs);
+
+RELAY_REGISTER_OP("fixed_point_multiply")
+.describe(R"code( fixed point multiplication )code" TVM_ADD_FILELINE)
+.set_num_inputs(1)
+.add_argument("data", "Tensor", "The input tensor.")
+.add_type_rel("Identity", IdentityRel)
+.set_attr<TOpPattern>("TOpPattern", kElemWise)
+.set_attr<TOpIsStateful>("TOpIsStateful", false)
+.set_attr<FInferCorrectLayout>("FInferCorrectLayout", ElemwiseArbitraryLayout)
+.set_attrs_type<FixedPointMultiplyAttrs>()
+.set_support_level(3);

Review comment:
   I think level 10 is better here
   
   @tqchen any suggestions here?

##
File path: topi/python/topi/arm_cpu/injective.py
##
@@ -62,9 +62,13 @@ def schedule_injective(outs):
 outs = [outs] if isinstance(outs, te.tensor.Tensor) else outs
 s = te.create_schedule([x.op for x in outs])
 x = outs[0]
+ins = x.op.input_tensors
+dtype = ins[0].dtype if len(ins) > 0 else x.dtype
+max_vlen = 4 if dtype == 'int32' else 8

Review comment:
   Seems like 4 should be better for float32 as well. If it is, then maybe 
we should always use 4 instead of 8.

##
File path: python/tvm/tir/op.py
##
@@ -965,6 +965,34 @@ def popcount(x):
 """
 return call_intrin(x.dtype, "tir.popcount", x)
 
+def qmuls(x, y, q, s):
+"""Execute a multiplication between two Q-numbers x and y
+followed by a right shift s. The mathematical expression is:
+
+   out = round(x*y*2^-s)

Review comment:
   Maybe we should add a line to explain why there is a multiplication 
factor of 2 (perhaps rounding)
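
As a rough scalar illustration of the `out = round(x*y*2^-s)` formula (plain 
Python with exact arithmetic; not the TVM lowering or the PR's code), using the 
round-half-up rule described in the docstring:

```python
import math
from fractions import Fraction

def round_half_up(v):
    # round(x.1) = x and round(x.5) = x + 1, as stated in the docstring
    return math.floor(v + Fraction(1, 2))

def qmuls_reference(x, y, s):
    # Literal reading of out = round(x * y * 2^-s)
    return round_half_up(Fraction(x * y, 2 ** s))

# Example: qmuls_reference(3, 5, 3) == round(15 / 8) == 2
```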

##
File path: include/tvm/tir/op.h
##
@@ -552,6 +552,24 @@ TVM_DLL PrimExpr trunc(PrimExpr x);
  */
 TVM_DLL PrimExpr LargeUIntImm(DataType dtype, int64_t low, int64_t high);
 
+/*!
+ * \brief Execute a multiplication between two Q-numbers x and y
+ * followed by a right shift s. The mathematical expression is:
+ *
+ *out = round(x*y*2^-s)
+ *
+ * More about Q-numbers here: https://en.wikipedia.org/wiki/Q_(number_format)
+ *
+ * The rounding rule is to the nearest value, rounding half up
+ * (i.e., round(x.1) = x and round (x.5) = x+1)
+ * \param x first Q-number
+ * \param y second Q-number
+ * \param q Q-ness of x and y

Review comment:
   Is the number of fractional bits the same for x and y, and is that why we 
need only one input? Let's make the description clearer.
   IIUC, one could think of using this op to perform something like Q1.31 * 
Q2.30, but I think this op is more restrictive than that. If it is, let's 
mention it.

##
File path: include/tvm/tir/builtin.h
##
@@ -92,6 +92,14 @@ TVM_DLL const Op& shift_right();
  */
 TVM_DLL const Op& large_uint_imm();
 
+/*!
+ * \brief Execute a multiplication between two Q-numbers x and y
+ * followed by a right shift s
+ * The default rounding rule is to the nearest value, rounding half up
+ * (i.e., round(x.1) = x and round (x.5) = x+1)
+ */
+TVM_DLL const Op& qmuls();

Review comment:
   We should come up with a better name. Currently, `qmuls` seems vague.
   It is not clear what `q` and `s` stand for to a person not familiar with Q numbers.
   
   Why not use the same name, `fixed_point_multiply`?

##
File path: src/target/intrin_rule.cc
##
@@ -115,6 +115,51 @@ TVM_REGISTER_GLOBAL("

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5992: Add support for tflite arg_min and arg_max

2020-07-13 Thread GitBox


anijain2305 commented on pull request #5992:
URL: https://github.com/apache/incubator-tvm/pull/5992#issuecomment-657857651


   Thanks @d-smirnov @siju-samuel @MarisaKirisame This is merged!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] anijain2305 merged pull request #5992: Add support for tflite arg_min and arg_max

2020-07-13 Thread GitBox


anijain2305 merged pull request #5992:
URL: https://github.com/apache/incubator-tvm/pull/5992


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: Add support for tflite arg_min and arg_max (#5992)

2020-07-13 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository.

anijain2305 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 712c82f  Add support for tflite arg_min and arg_max (#5992)
712c82f is described below

commit 712c82fb38ec2beea5a72662fb00899ab9bc0a08
Author: Dmitriy Smirnov 
AuthorDate: Tue Jul 14 00:08:56 2020 +0100

Add support for tflite arg_min and arg_max (#5992)

* [Relay][Frontend][TFLite] Add parser support for arg_min_max

* this implementation supports only the case when the axis is a scalar
* tflite 1.13 removes all dims of size 1, Relay doesn't do this
* WARNING: every newer version of tflite > 1.13 needs keepdims=TRUE

* Migrated to tflite 2.1.0

keepdims set to False and added some checks

Note the unit tests emitted the following warning:
/workspace/src/te/schedule/bound.cc:119: not in feed graph consumer = 
compute(T_multiply_red_temp, 0x53f5050)

* linter

* Removed quantized argmin

Removed quantized argmin due to the inability to provide a proper test case

* added negative ranges

* re-trigger CI

Co-authored-by: Ina_Dobreva 
---
 python/tvm/relay/frontend/tflite.py  | 50 
 tests/python/frontend/tflite/test_forward.py | 34 +++
 2 files changed, 84 insertions(+)

diff --git a/python/tvm/relay/frontend/tflite.py 
b/python/tvm/relay/frontend/tflite.py
index 36221b7..1ec8237 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -67,6 +67,8 @@ class OperatorConverter(object):
 'ABS': self.convert_abs,
 'ADD': self.convert_add,
 'ADD_N': self.convert_add_n,
+'ARG_MAX': self.convert_arg_max,
+'ARG_MIN': self.convert_arg_min,
 'AVERAGE_POOL_2D': self.convert_average_pool2d,
 'BATCH_TO_SPACE_ND': self.convert_batch_to_space_nd,
 'CAST': self.convert_cast,
@@ -1634,6 +1636,54 @@ class OperatorConverter(object):
 def convert_reduce_any(self, op):
 return self._convert_reduce(_op.reduce.any, op)
 
+def _convert_arg_min_max(self, relay_op, op):
+"""Generic method converting TFLite arg_min_max"""
+try:
+from tflite.BuiltinOptions import BuiltinOptions
+from tflite.ArgMinOptions import ArgMinOptions
+from tflite.ArgMaxOptions import ArgMaxOptions
+except ImportError:
+raise ImportError("The tflite package must be installed")
+
+input_tensors = self.get_input_tensors(op)
+assert len(input_tensors) == 2, "two input tensor arguments expected"
+
+output_tensors = self.get_output_tensors(op)
+assert len(output_tensors) == 1, "one output tensor expected"
+
+input_tensor = input_tensors[0]
+in_expr = self.get_expr(input_tensor.tensor_idx)
+axis_tensor = input_tensors[1]
+# In Tensorflow, `axis` argument is a Tensor, not attribute. We
+# support the case where it inputs from a scalar constant.
+axis_value = self.get_tensor_value(axis_tensor)
+assert axis_value.size == 1
+axis_value = axis_value.item()
+
+if op.BuiltinOptionsType() == BuiltinOptions.ArgMinOptions:
+arg_min_max_options = ArgMinOptions()
+elif op.BuiltinOptionsType() == BuiltinOptions.ArgMaxOptions:
+arg_min_max_options = ArgMaxOptions()
+op_options = op.BuiltinOptions()
+arg_min_max_options.Init(op_options.Bytes, op_options.Pos)
+
+# set keepdims to True since tflite 1.13 removes all dims of size 1
+# WARNING: all other versions of tflite > 1.13 need keepdims=False
+out = relay_op(in_expr, axis=axis_value, keepdims=False, exclude=False)
+
+return out
+
+def convert_arg_min(self, op):
+"""Convert TFLite ARG_MIN"""
+if self.is_quantized(op):
+raise tvm.error.OpNotImplemented(
+'TFlite quantized ARG_MIN operator is not supported yet.')
+return self._convert_arg_min_max(_op.argmin, op)
+
+def convert_arg_max(self, op):
+"""Convert TFLite ARG_MAX"""
+return self._convert_arg_min_max(_op.argmax, op)
+
 def convert_fully_connected(self, op):
 """Convert TFLite fully connected"""
 try:
diff --git a/tests/python/frontend/tflite/test_forward.py 
b/tests/python/frontend/tflite/test_forward.py
index 52491b2..5118467 100644
--- a/tests/python/frontend/tflite/test_forward.py
+++ b/tests/python/frontend/tflite/test_forward.py
@@ -1755,6 +1755,39 @@ def test_all_reduce():
 if package_version.parse(tf.VERSION) >= package_version.parse('1.15.0'):
 _test_forward_reduce(_test_reduce_any, dtype="bool")
 
+
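
As a quick numeric aside on the keepdims choice discussed in the commit message 
(plain numpy, not the TFLite converter path itself):

```python
import numpy as np

x = np.random.rand(2, 3, 4).astype("float32")

# keepdims=False, as the converter passes after the 2.1.0 migration:
# the reduced axis is dropped.
print(np.argmax(x, axis=1).shape)  # (2, 4)

# What keepdims=True would look like: the reduced axis kept with size 1.
print(np.expand_dims(np.argmax(x, axis=1), axis=1).shape)  # (2, 1, 4)
```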

[GitHub] [incubator-tvm] ymwangg opened a new pull request #6048: [AutoTVM][BugFix] Fix variable name conflict with OpenCL keyword

2020-07-13 Thread GitBox


ymwangg opened a new pull request #6048:
URL: https://github.com/apache/incubator-tvm/pull/6048


   Currently AutoTVM cannot tune the operator `conv2d_nchw_spatial_pack.mali` 
due to a variable name conflict with an OpenCL keyword; this PR fixes that.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: [Relay] Add pass for getting calibration data from a relay module (#5997)

2020-07-13 Thread zhic
This is an automated email from the ASF dual-hosted git repository.

zhic pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 96fe315  [Relay] Add pass for getting calibration data from a relay 
module (#5997)
96fe315 is described below

commit 96fe315984d500f802cee627615bc46c76b82f2e
Author: Yi-Hsiang (Sean) Lai 
AuthorDate: Mon Jul 13 18:39:10 2020 -0400

[Relay] Add pass for getting calibration data from a relay module (#5997)

* add simple pass to extract outputs

* complete pass that collects all function inputs/outputs

* add analysis pass for collecting outputs

* reorganize the files

* add the first test

* update test with tuples

* clean up Python code

* merge with upstream

* clean up transform.py

* add comments for cpp files

* fix lint issues

* update submodules

* modify files according to the review

* fix style and typo

* fix lint error

* add checks for repeated function calls

* fix lint error

* merge review comments

* small simplification

* revise the code according to the review comments

* add username in TODO

* use IRModule directly

* use better APIs according to the review

* apply comments from the reviewer

* retrigger ci
---
 include/tvm/relay/analysis.h   |  18 ++
 python/tvm/relay/analysis/analysis.py  |  48 +
 src/relay/analysis/get_calibration_data.cc | 202 +
 .../relay/test_analysis_get_calibration_data.py| 105 +++
 4 files changed, 373 insertions(+)

diff --git a/include/tvm/relay/analysis.h b/include/tvm/relay/analysis.h
index b4b1b9d..8eda7dd 100644
--- a/include/tvm/relay/analysis.h
+++ b/include/tvm/relay/analysis.h
@@ -236,6 +236,24 @@ TVM_DLL Array<Pattern> UnmatchedCases(const Match& match, const IRModule& mod);
  */
 TVM_DLL std::unordered_map<const Object*, size_t> GetExprRefCount(const Expr& body);
 
+/*!
+ * \brief Get the updated module for collecting calibration data.
+ *
+ * \param mod The module to be updated.
+ *
+ * \return The updated module.
+ */
+TVM_DLL IRModule GetCalibrateModule(IRModule mod);
+
+/*!
+ * \brief Get the output map between subgraphs and their inputs/outputs.
+ *
+ * \param mod The module for running calibration.
+ *
+ * \return The mapping between a subgraph name and its position in the output tuple.
+ */
+TVM_DLL Map<GlobalVar, Array<Integer>> GetCalibrateOutputMap(const IRModule& mod);
+
 }  // namespace relay
 }  // namespace tvm
 
diff --git a/python/tvm/relay/analysis/analysis.py 
b/python/tvm/relay/analysis/analysis.py
index c237859..632af46 100644
--- a/python/tvm/relay/analysis/analysis.py
+++ b/python/tvm/relay/analysis/analysis.py
@@ -21,6 +21,8 @@ This file contains the set of passes for Relay, which exposes 
an interface for
 configuring the passes and scripting them in Python.
 """
 from tvm.ir import IRModule
+from tvm.relay import transform, build_module
+from tvm.runtime.ndarray import cpu
 
 from . import _ffi_api
 from .feature import Feature
@@ -351,3 +353,49 @@ def search_fc_transpose(expr):
 """
 ret = _ffi_api.search_fc_transpose(expr)
 return ret
+
+
+def get_calibration_data(mod, data):
+"""Get the calibration data of a given relay graph
+
+This pass uses the graph runtime to get the calibration data of a module, 
which
+includes the input and output values of each function. The returned data 
uses
+the GlobalVar of each function as a key. Users can further access the 
inputs and
+outputs by using `inputs` or  `outputs` as the key.
+
+Following are some limitations:
+1. The input module (graph) cannot have control flows.
+2. The input arguments of each function cannot be tuples (outputs can be 
tuples).
+3. We only handle top-level functions (i.e., nested function is not 
handled).
+4. We only handle functions with `Compiler` attribute being set.
+
+Parameters
+--
+mod : tvm.IRModule
+The input module for collecting the calibration data
+
+data : Dict[str, NDArray]
+The input data for running the module
+
+Returns
+---
+data : Dict[tvm.relay.GlobalVar, Dict[str, NDArray]]
+"""
+output_map = _ffi_api.get_calibrate_output_map(mod)
+
+mod = _ffi_api.get_calibrate_module(mod)
+mod = transform.Inline()(mod)
+
+ref_ex = build_module.create_executor("graph", mod=mod, ctx=cpu(0))
+ref_res = ref_ex.evaluate()(**data)
+
+calib_data = {}
+for gvar, indices in output_map.items():
+offset = int(indices[0])
+in_len = int(indices[1])
+out_len = int(indices[2])
+value = {"inputs": ref_res[offset:offset + in_len],
+ "outputs": ref_res[offset + in_len:offset + in_len + out_len]}
+ca
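
A minimal usage sketch of the helper added here (the partitioned module `mod`, 
whose external functions carry a `Compiler` attribute, is assumed to exist and 
is not constructed in this snippet):

```python
import numpy as np
from tvm.relay.analysis import get_calibration_data

# `mod` is assumed: a partitioned tvm.IRModule that satisfies the limitations
# listed in the docstring above (no control flow, no tuple inputs, top-level
# functions with the "Compiler" attribute set).
data = {"x": np.random.rand(1, 3, 14, 14).astype("float32")}
calib = get_calibration_data(mod, data)

for gvar, record in calib.items():
    print(gvar.name_hint, len(record["inputs"]), len(record["outputs"]))
```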

[GitHub] [incubator-tvm] zhiics merged pull request #5997: [Relay] Add pass for getting calibration data from a relay module

2020-07-13 Thread GitBox


zhiics merged pull request #5997:
URL: https://github.com/apache/incubator-tvm/pull/5997


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] zhiics commented on pull request #5997: [Relay] Add pass for getting calibration data from a relay module

2020-07-13 Thread GitBox


zhiics commented on pull request #5997:
URL: https://github.com/apache/incubator-tvm/pull/5997#issuecomment-657830023


   Thanks @seanlatias @comaniac 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



