[GitHub] [incubator-tvm] jcf94 commented on a change in pull request #6686: [AutoSchedule] Support multiple cache read and fix bugs

2020-10-15 Thread GitBox


jcf94 commented on a change in pull request #6686:
URL: https://github.com/apache/incubator-tvm/pull/6686#discussion_r505237035



##
File path: src/auto_scheduler/compute_dag.cc
##
@@ -970,8 +1005,21 @@ void ComputeDAG::RewriteLayout(const Array<Step>& transform_steps) {
     }  // end for placeholder
   }  // end for stage
   p_dag->access_analyzer = AccessAnalyzer(p_dag->tensors);
-  p_dag->ops = p_dag->access_analyzer->ops_topo_order;
+
+  Array<te::Operation> out_ops;
+  for (const auto& op : p_dag->access_analyzer->ops_topo_order) {
+    if (p_dag->access_analyzer.IsOutput(op)) {
+      out_ops.push_back(op);
+    }
+  }
+
+  p_dag->ops.clear();
+  te::Schedule sch = te::create_schedule(out_ops);
+  for (auto stage : sch->stages) {
+    p_dag->ops.push_back(stage->op);
+  }
   p_dag->flop_ct = FlopEstimator().EstimateFlop(p_dag->ops);
+  p_dag->init_state = State(p_dag->ops);

Review comment:
   We can delete Line 987 since it's added here.
   Anyway, this doesn't matter much. I'm doing some updates to layout_write and have also modified some code in this part; I'll refine the code after this PR is merged. :)

##
File path: src/te/schedule/schedule_dataflow_rewrite.cc
##
@@ -138,6 +138,15 @@ Tensor Schedule::cache_read(const Tensor& tensor, const std::string& scope,
   }
   os << "." << scope;
 
+  // when a schedule has multiple cache_read on the same tensor,
+  // we make sure their op names are unique. e.g., w.shared, w.shared.d, w.shared.d.d
+  for (auto pair : (*this)->stage_map) {
+    auto stage = pair.second;
+    if (stage->op->name == os.str()) {
+      os << ".d";

Review comment:
   Can we add a global map here and mark these names as "w.shared.0", "w.shared.1"?
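   The numbered-name idea above could be sketched with a small counter map (a hypothetical standalone sketch, not TVM code; `unique_name` and its dict are illustrative):

   ```python
   from collections import defaultdict

   # Hypothetical sketch: a global map from base name to the next free index,
   # so repeated cache reads of the same tensor become "w.shared.0",
   # "w.shared.1", ... instead of "w.shared.d", "w.shared.d.d".
   name_counters = defaultdict(int)

   def unique_name(base):
       """Return the base name plus a monotonically increasing per-base suffix."""
       idx = name_counters[base]
       name_counters[base] += 1
       return f"{base}.{idx}"
   ```

   Compared with appending `.d` repeatedly, a counter keeps names short and makes the creation order explicit.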





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6686: [AutoSchedule] Support multiple cache read and fix bugs

2020-10-15 Thread GitBox


comaniac commented on a change in pull request #6686:
URL: https://github.com/apache/incubator-tvm/pull/6686#discussion_r505272389



##
File path: src/te/schedule/schedule_dataflow_rewrite.cc
##
@@ -138,6 +138,15 @@ Tensor Schedule::cache_read(const Tensor& tensor, const std::string& scope,
   }
   os << "." << scope;
 
+  // when a schedule has multiple cache_read on the same tensor,
+  // we make sure their op names are unique. e.g., w.shared, w.shared.d, w.shared.d.d
+  for (auto pair : (*this)->stage_map) {
+    auto stage = pair.second;
+    if (stage->op->name == os.str()) {
+      os << ".d";

Review comment:
   I was thinking about this too, but that way we would need to maintain another set in `Schedule`. Considering that this case doesn't happen frequently for now, the current solution should be sufficient. We can definitely adopt the approach you suggested in the future if naming conflicts become more common.
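   The current scheme discussed here can be illustrated with a small order-independent sketch — not the actual C++ code, just the naming rule from the diff above:

   ```python
   def disambiguate(candidate, existing_names):
       """Append '.d' until the candidate no longer collides with an
       existing op name: w.shared, w.shared.d, w.shared.d.d, ..."""
       while candidate in existing_names:
           candidate += ".d"
       return candidate
   ```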









[GitHub] [incubator-tvm] i24361 opened a new pull request #6688: [TUTORIAL][FIX] Fix VTA autotuning from tutorial fails with one P…

2020-10-15 Thread GitBox


i24361 opened a new pull request #6688:
URL: https://github.com/apache/incubator-tvm/pull/6688


   [TUTORIAL][FIX] Fix VTA autotuning from tutorial fails with one PYNQ, but succeeds with two PYNQs
   
https://discuss.tvm.apache.org/t/vta-workaround-for-autotuning-with-one-pynq-z1-board/8091
   Autotuning can work without `remote`. I used this approach to tune ResNet18 on a PYNQ Z1 and Faster-RCNN on a PYNQ ZCU104. Sometimes only one board is available, and this workaround works.







[GitHub] [incubator-tvm] liangfu commented on a change in pull request #6603: Add µTVM Zephyr support + QEMU regression test

2020-10-15 Thread GitBox


liangfu commented on a change in pull request #6603:
URL: https://github.com/apache/incubator-tvm/pull/6603#discussion_r505293470



##
File path: tests/python/unittest/test_micro_artifact.py
##
@@ -0,0 +1,137 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+"""Unit tests for the artifact module."""
+
+import json
+import os
+import shutil
+
+from tvm.contrib import util
+from tvm.micro import artifact
+
+
+FILE_LIST = ["label1", "label2", "label12", "unlabelled"]
+
+
+TEST_METADATA = {"foo": "bar"}
+
+
+TEST_LABELS = {"label1": ["label1", "label12"], "label2": ["label2", "label12"]}
+
+
+def build_artifact(artifact_path, immobile=False):
+    os.mkdir(artifact_path)
+
+    for f in FILE_LIST:
+        with open(os.path.join(artifact_path, f), "w") as lib_f:
+            lib_f.write(f"{f}\n")
+
+    sub_dir = os.path.join(artifact_path, "sub_dir")
+    os.mkdir(sub_dir)
+    os.symlink("label1", os.path.join(artifact_path, "rel_symlink"))
+    os.symlink(os.path.join(artifact_path, "label2"), os.path.join(artifact_path, "abs_symlink"))
+    os.symlink(
+        os.path.join(artifact_path, "sub_dir"), os.path.join(artifact_path, "abs_dir_symlink")
+    )
+
+    art = artifact.Artifact(artifact_path, TEST_LABELS, TEST_METADATA, immobile=immobile)
+
+    return art
+
+
+def test_basic_functionality():
+    temp_dir = util.tempdir()
+    artifact_path = temp_dir.relpath("foo")
+    art = build_artifact(artifact_path)
+
+    assert art.abspath("bar") == os.path.join(artifact_path, "bar")
+
+    for label, paths in TEST_LABELS.items():
+        assert art.label(label) == paths
+        assert art.label_abspath(label) == [os.path.join(artifact_path, p) for p in paths]
+
+
+def test_archive():
+    temp_dir = util.tempdir()
+    art = build_artifact(temp_dir.relpath("foo"))
+
+    # Create archive
+    archive_path = art.archive(temp_dir.temp_dir)
+    assert archive_path == temp_dir.relpath("foo.tar")
+
+    # Inspect created archive
+    unpack_dir = temp_dir.relpath("unpack")
+    os.mkdir(unpack_dir)
+    shutil.unpack_archive(archive_path, unpack_dir)
+
+    for path in FILE_LIST:
+        with open(os.path.join(unpack_dir, "foo", path)) as f:
+            assert f.read() == f"{path}\n"
+
+    with open(os.path.join(unpack_dir, "foo", "metadata.json")) as metadata_f:
+        metadata = json.load(metadata_f)
+
+    assert metadata["version"] == 2
+    assert metadata["labelled_files"] == TEST_LABELS
+    assert metadata["metadata"] == TEST_METADATA
+
+    # Unarchive and verify basic functionality
+    unarchive_base_dir = temp_dir.relpath("unarchive")
+    unarch = artifact.Artifact.unarchive(archive_path, unarchive_base_dir)
+
+    assert unarch.metadata == TEST_METADATA
+    assert unarch.labelled_files == TEST_LABELS
+    for f in FILE_LIST:
+        assert os.path.exists(os.path.join(unarchive_base_dir, f))
+
+
+def test_metadata_only():
+    temp_dir = util.tempdir()
+    base_dir = temp_dir.relpath("foo")
+    art = build_artifact(base_dir)
+
+    artifact_path = art.archive(temp_dir.relpath("foo.artifact"), metadata_only=True)
+    unarch_base_dir = temp_dir.relpath("bar")
+    unarch = artifact.Artifact.unarchive(artifact_path, unarch_base_dir)
+    assert unarch.base_dir == base_dir
+
+    for p in unarch.label_abspath("label1") + unarch.label_abspath("label2"):
+        assert os.path.exists(p)
+
+    os.unlink(art.abspath("label1"))
+    with open(art.abspath("label2"), "w+") as f:
+        f.write("changed line\n")
+
+    try:
+        artifact.Artifact.unarchive(artifact_path, os.path.join(temp_dir.temp_dir, "bar2"))
+        assert False, "unarchive should raise error"
+    except artifact.ArchiveModifiedError as err:
+        assert str(err) == (
+            "Files in metadata-only archive have been modified:\n"
+            " * label1: original file not found\n"
+            " * label2: sha256 mismatch: expected "
+            "6aa3c5668c8794c791400e19ecd7123949ded1616eafb0395acdd2d896354e83, got "
+            "ed87db21670a81819d65eccde87c5ae0243b2b61783bf77e9b27993be9a3eca0"
+        )

Review comment:
   Just curious, why do we need to hard-code these hashes here?
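   For background on the hashes being questioned: a metadata-only archive stores one SHA-256 digest per labelled file, and unarchiving recomputes and compares them, so any content change is detected. A minimal illustration of the digest computation (the general mechanism, not TVM's exact code):

   ```python
   import hashlib

   def file_sha256(data: bytes) -> str:
       """Hex digest of file contents; comparing a stored digest against a
       recomputed one is how modification of an archived file is detected."""
       return hashlib.sha256(data).hexdigest()

   original = file_sha256(b"label2\n")        # content written by the test
   modified = file_sha256(b"changed line\n")  # content after the test edits it
   ```

   Any single-byte change produces a completely different 64-character digest, which is why the test can assert on exact values.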






[incubator-tvm] branch main updated (9564925 -> c7ff885)

2020-10-15 Thread liangfu
This is an automated email from the ASF dual-hosted git repository.

liangfu pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 9564925  [Relay][Frontend][Onnx] Allow A to B broadcasting of batch_matmul and reverse strided slice (#6681)
 add c7ff885  Add µTVM Zephyr support + QEMU regression test (#6603)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/crt/error_codes.h  |   1 +
 include/tvm/runtime/crt/utvm_rpc_server.h  |  24 +-
 python/tvm/exec/rpc_server.py  |  69 ---
 python/tvm/micro/__init__.py   |   2 +-
 python/tvm/micro/artifact.py   | 108 +++-
 .../micro_kernel => micro/contrib}/__init__.py |   0
 python/tvm/micro/contrib/base.py   |  67 +++
 python/tvm/micro/contrib/zephyr.py | 621 +
 python/tvm/micro/debugger.py   |  25 +-
 python/tvm/micro/micro_binary.py   |  15 +-
 python/tvm/micro/micro_library.py  |  13 +-
 python/tvm/micro/session.py|  50 +-
 python/tvm/micro/transport.py  | 238 
 .../graph_tuner => micro/transport}/__init__.py|  15 +-
 python/tvm/micro/transport/base.py | 299 ++
 python/tvm/micro/transport/debug.py|  63 +++
 python/tvm/micro/transport/file_descriptor.py  | 105 
 python/tvm/micro/transport/subprocess.py   |  67 +++
 python/tvm/micro/transport/wakeup.py   |  74 +++
 src/runtime/crt/host/main.cc   |  19 +-
 src/runtime/crt/utvm_rpc_server/rpc_server.cc  |  50 +-
 src/runtime/micro/micro_session.cc | 136 -
 tests/lint/check_file_type.py  |   3 +
 tests/micro/qemu/.gitignore|   2 +
 tests/micro/qemu/test_zephyr.py| 143 +
 tests/micro/qemu/zephyr-runtime/.gitignore |   3 +
 tests/micro/qemu/zephyr-runtime/CMakeLists.txt |  27 +
 .../micro/qemu/zephyr-runtime/crt/crt_config.h |  22 +-
 .../qemu/zephyr-runtime/prj.conf}  |  21 +-
 .../zephyr-runtime/qemu-hack/qemu-system-i386} |  26 +-
 .../micro/qemu/zephyr-runtime/sample.yaml  |  12 +-
 tests/micro/qemu/zephyr-runtime/src/main.c | 238 
 tests/python/unittest/test_crt.py  |   3 +-
 tests/python/unittest/test_micro_artifact.py   | 137 +
 tests/scripts/task_python_microtvm.sh  |   9 +
 35 files changed, 2244 insertions(+), 463 deletions(-)
 copy python/tvm/{topi/arm_cpu/cortex_m7/micro_kernel => micro/contrib}/__init__.py (100%)
 create mode 100644 python/tvm/micro/contrib/base.py
 create mode 100644 python/tvm/micro/contrib/zephyr.py
 delete mode 100644 python/tvm/micro/transport.py
 copy python/tvm/{autotvm/graph_tuner => micro/transport}/__init__.py (69%)
 create mode 100644 python/tvm/micro/transport/base.py
 create mode 100644 python/tvm/micro/transport/debug.py
 create mode 100644 python/tvm/micro/transport/file_descriptor.py
 create mode 100644 python/tvm/micro/transport/subprocess.py
 create mode 100644 python/tvm/micro/transport/wakeup.py
 create mode 100644 tests/micro/qemu/.gitignore
 create mode 100644 tests/micro/qemu/test_zephyr.py
 create mode 100644 tests/micro/qemu/zephyr-runtime/.gitignore
 create mode 100644 tests/micro/qemu/zephyr-runtime/CMakeLists.txt
 copy src/runtime/crt/crt_config-template.h => tests/micro/qemu/zephyr-runtime/crt/crt_config.h (77%)
 copy tests/{scripts/task_python_ethosn_tests.sh => micro/qemu/zephyr-runtime/prj.conf} (74%)
 mode change 100755 => 100644
 copy tests/{lint/cppdocs.sh => micro/qemu/zephyr-runtime/qemu-hack/qemu-system-i386} (64%)
 copy conda/conda_build_config.yaml => tests/micro/qemu/zephyr-runtime/sample.yaml (88%)
 create mode 100644 tests/micro/qemu/zephyr-runtime/src/main.c
 create mode 100644 tests/python/unittest/test_micro_artifact.py



[GitHub] [incubator-tvm] liangfu merged pull request #6603: Add µTVM Zephyr support + QEMU regression test

2020-10-15 Thread GitBox


liangfu merged pull request #6603:
URL: https://github.com/apache/incubator-tvm/pull/6603


   







[GitHub] [incubator-tvm] liangfu commented on pull request #6603: Add µTVM Zephyr support + QEMU regression test

2020-10-15 Thread GitBox


liangfu commented on pull request #6603:
URL: https://github.com/apache/incubator-tvm/pull/6603#issuecomment-708971948


   Thanks @areusch @tqchen . This is now merged.







[GitHub] [incubator-tvm-vta] liangfu edited a comment on pull request #9: [Hardware][OpenCL] Intelfocl support

2020-10-15 Thread GitBox


liangfu edited a comment on pull request #9:
URL: https://github.com/apache/incubator-tvm-vta/pull/9#issuecomment-706899033


   The CI test that failed on tsim seems to be unrelated; @zhanghaohit do you mind retriggering CI to see if it succeeds? (It was [successful previously](https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-vta/detail/PR-12/1/pipeline).)







[GitHub] [incubator-tvm-vta] liangfu edited a comment on pull request #9: [Hardware][OpenCL] Intelfocl support

2020-10-15 Thread GitBox


liangfu edited a comment on pull request #9:
URL: https://github.com/apache/incubator-tvm-vta/pull/9#issuecomment-706899033


   The CI test that failed on tsim seems to be unrelated; @zhanghaohit do you mind retriggering CI to see if it passes? (It was [successful previously](https://ci.tlcpack.ai/blue/organizations/jenkins/tvm-vta/detail/PR-12/1/pipeline).)







[incubator-tvm] branch ci-docker-staging updated: Completely disable ResNet

2020-10-15 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/ci-docker-staging by this push:
 new 5691c3c  Completely disable ResNet
5691c3c is described below

commit 5691c3c0e00eb6d0e4e4290cfb3e6b52a3ab106e
Author: Jared Roesch 
AuthorDate: Thu Oct 15 01:47:08 2020 -0700

Completely disable ResNet
---
 rust/Cargo.toml | 1 -
 1 file changed, 1 deletion(-)

diff --git a/rust/Cargo.toml b/rust/Cargo.toml
index 28312a5..9935ce7 100644
--- a/rust/Cargo.toml
+++ b/rust/Cargo.toml
@@ -23,7 +23,6 @@ members = [
"tvm",
"tvm/tests/basics",
"tvm/tests/callback",
-   "tvm/examples/resnet",
"tvm-graph-rt",
"tvm-graph-rt/tests/test_tvm_basic",
"tvm-graph-rt/tests/test_tvm_dso",



[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

2020-10-15 Thread GitBox


ANSHUMAN87 commented on pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#issuecomment-709001815


   Looks like the TensorFlow version in CI is older; maybe we need to bump it!







[GitHub] [incubator-tvm] thilinikb commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2020-10-15 Thread GitBox


thilinikb commented on issue #4272:
URL: https://github.com/apache/incubator-tvm/issues/4272#issuecomment-709274690


   @tmoreau89 @flip1995 Could you please point me to this document if it is available?







[GitHub] [incubator-tvm] tqchen merged pull request #6687: int32 pooling with int64 shapes

2020-10-15 Thread GitBox


tqchen merged pull request #6687:
URL: https://github.com/apache/incubator-tvm/pull/6687


   







[incubator-tvm] branch main updated (c7ff885 -> b121278)

2020-10-15 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c7ff885  Add µTVM Zephyr support + QEMU regression test (#6603)
 add b121278  int32 pooling with int64 shapes (#6687)

No new revisions were added by this update.

Summary of changes:
 include/tvm/topi/nn/pooling.h | 42 ++-
 tests/python/relay/test_op_grad_level2.py | 75 +++---
 tests/python/relay/test_op_level10.py | 22 
 tests/python/relay/test_op_level2.py  | 87 +--
 4 files changed, 133 insertions(+), 93 deletions(-)



[GitHub] [incubator-tvm] tqchen commented on issue #6417: [FLAKY] relay/test_op_level1.py::test_binary_op failing intermittently

2020-10-15 Thread GitBox


tqchen commented on issue #6417:
URL: https://github.com/apache/incubator-tvm/issues/6417#issuecomment-709314955


   ping @leandron :)







[GitHub] [incubator-tvm] tqchen opened a new issue #6689: [TEST][FLAKY] frontend/pytorch/test_forward.py::test_forward_nms

2020-10-15 Thread GitBox


tqchen opened a new issue #6689:
URL: https://github.com/apache/incubator-tvm/issues/6689


   https://ci.tlcpack.ai/job/tvm/job/main/19/execution/node/380/log/
   
   Could be due to too-close ties; perhaps we should construct the NMS test cases differently so that they have no clear ties.







[GitHub] [incubator-tvm] tqchen commented on issue #6689: [TEST][FLAKY] frontend/pytorch/test_forward.py::test_forward_nms

2020-10-15 Thread GitBox


tqchen commented on issue #6689:
URL: https://github.com/apache/incubator-tvm/issues/6689#issuecomment-709316506


   cc @masahi @Laurawly 







[GitHub] [incubator-tvm] flip1995 commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2020-10-15 Thread GitBox


flip1995 commented on issue #4272:
URL: https://github.com/apache/incubator-tvm/issues/4272#issuecomment-709335331


   Oh damn, I totally forgot about it. Basically you have to add these lines to the Python script that runs your model on the PYNQ:
   
   ```python
   import ctypes
   
   dll_path = "/home/xilinx/tvm/build/libvta.so"
   ctypes.CDLL(dll_path, ctypes.RTLD_GLOBAL)
   ```
   
   before the `graph_runtime.create(..)` command.
   
   I hope this still works. I haven't used TVM/VTA for a long time now.







[GitHub] [incubator-tvm] flip1995 commented on issue #4272: [VTA] Tutorial on how to deploy and execute model on device without RPC

2020-10-15 Thread GitBox


flip1995 commented on issue #4272:
URL: https://github.com/apache/incubator-tvm/issues/4272#issuecomment-709342481


   Or to be more precise, you have to load your model like this:
   
   ```python
   import ctypes
   
   import tvm
   from tvm.contrib import graph_runtime as runtime
   
   libvta_path = "/home/xilinx/tvm/build/libvta.so"
   ctypes.CDLL(libvta_path, ctypes.RTLD_GLOBAL)
   
   # load compiled model
   with open("graph.json", "r") as graph_file:
       graph = graph_file.read()
   with open("params.params", "rb") as params_file:
       params = bytearray(params_file.read())
   lib = tvm.module.load("./lib.tar")
   
   ctx = tvm.ext_dev(0)
   
   module = runtime.create(graph, lib, ctx)
   module.load_params(params)
   ```
   
   After that, `module.run(**)` should work.







[GitHub] [incubator-tvm] masahi commented on issue #6689: [TEST][FLAKY] frontend/pytorch/test_forward.py::test_forward_nms

2020-10-15 Thread GitBox


masahi commented on issue #6689:
URL: https://github.com/apache/incubator-tvm/issues/6689#issuecomment-709392455


   cc @yongwww 







[GitHub] [incubator-tvm] comaniac commented on pull request #6686: [AutoSchedule] Support multiple cache read and fix bugs

2020-10-15 Thread GitBox


comaniac commented on pull request #6686:
URL: https://github.com/apache/incubator-tvm/pull/6686#issuecomment-709412464


   The failed case in CI seems related to #6417 







[GitHub] [incubator-tvm] anijain2305 commented on pull request #6670: [TFLite] Fix detection of crop in convert_batch_to_space_nd

2020-10-15 Thread GitBox


anijain2305 commented on pull request #6670:
URL: https://github.com/apache/incubator-tvm/pull/6670#issuecomment-709502035


   Saw this issue a couple of months ago. This looks good. Thanks Trevor!







[GitHub] [incubator-tvm] anijain2305 commented on pull request #6670: [TFLite] Fix detection of crop in convert_batch_to_space_nd

2020-10-15 Thread GitBox


anijain2305 commented on pull request #6670:
URL: https://github.com/apache/incubator-tvm/pull/6670#issuecomment-709502703


   @siju-samuel Can you PTAL
   







[GitHub] [incubator-tvm] trevor-m commented on pull request #6143: [Relay] support i64 indices

2020-10-15 Thread GitBox


trevor-m commented on pull request #6143:
URL: https://github.com/apache/incubator-tvm/pull/6143#issuecomment-709515877


   FYI, I found that the Keras MobileNetV2 model experiences a heavy perf regression with i64 indices enabled.
   With ON: 66.55801918395055 FPS
   With OFF: 435.48951121558594 FPS
   
   This is on an AWS m5.12xlarge instance.







[GitHub] [incubator-tvm] trevor-m edited a comment on pull request #6143: [Relay] support i64 indices

2020-10-15 Thread GitBox


trevor-m edited a comment on pull request #6143:
URL: https://github.com/apache/incubator-tvm/pull/6143#issuecomment-709515877


   @hzfan FYI, I found that the Keras MobileNetV2 model experiences a heavy perf regression with i64 indices enabled.
   With ON: 66.55801918395055 FPS
   With OFF: 435.48951121558594 FPS
   
   This is on an AWS m5.12xlarge instance.







[GitHub] [incubator-tvm] jroesch opened a new pull request #6690: [Docker] Update CI CPU and GPU images based on new Docker build files.

2020-10-15 Thread GitBox


jroesch opened a new pull request #6690:
URL: https://github.com/apache/incubator-tvm/pull/6690


   







[GitHub] [incubator-tvm] jroesch commented on pull request #6690: [Docker] Update CI CPU and GPU images based on new Docker build files.

2020-10-15 Thread GitBox


jroesch commented on pull request #6690:
URL: https://github.com/apache/incubator-tvm/pull/6690#issuecomment-709535200


   cc @tmoreau89 @tqchen @u99127 this should include all the newest changes. 







[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6686: [AutoSchedule] Support multiple cache read and fix bugs

2020-10-15 Thread GitBox


comaniac commented on a change in pull request #6686:
URL: https://github.com/apache/incubator-tvm/pull/6686#discussion_r505784167



##
File path: src/te/schedule/schedule_dataflow_rewrite.cc
##
@@ -138,6 +138,15 @@ Tensor Schedule::cache_read(const Tensor& tensor, const std::string& scope,
   }
   os << "." << scope;
 
+  // when a schedule has multiple cache_read on the same tensor,
+  // we make sure their op names are unique. e.g., w.shared, w.shared.d, w.shared.d.d
+  for (auto pair : (*this)->stage_map) {
+    auto stage = pair.second;
+    if (stage->op->name == os.str()) {
+      os << ".d";

Review comment:
   I changed the naming to `w.shared`, `w_d.shared`, because we use `StrEndsWith(op->name, ".shared")` in thread binding.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6686: [AutoSchedule] Support multiple cache read and fix bugs

2020-10-15 Thread GitBox


comaniac commented on a change in pull request #6686:
URL: https://github.com/apache/incubator-tvm/pull/6686#discussion_r505784167



##
File path: src/te/schedule/schedule_dataflow_rewrite.cc
##
@@ -138,6 +138,15 @@ Tensor Schedule::cache_read(const Tensor& tensor, const std::string& scope,
   }
   os << "." << scope;
 
+  // when a schedule has multiple cache_read on the same tensor,
+  // we make sure their op names are unique. e.g., w.shared, w.shared.d, w.shared.d.d
+  for (auto pair : (*this)->stage_map) {
+    auto stage = pair.second;
+    if (stage->op->name == os.str()) {
+      os << ".d";

Review comment:
   I changed the naming to `w.shared`, `w.d.shared`, because we use `StrEndsWith(op->name, ".shared")` in thread binding.
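   Putting the disambiguator before the scope suffix keeps suffix checks such as `StrEndsWith(op->name, ".shared")` working. A rough sketch of that rule (hypothetical helper, not the actual implementation):

   ```python
   def unique_scoped_name(base, scope, existing_names):
       """Insert '.d' before the scope suffix, so every generated name
       still ends with the scope: w.shared, w.d.shared, w.d.d.shared, ..."""
       name = f"{base}.{scope}"
       while name in existing_names:
           base += ".d"
           name = f"{base}.{scope}"
       return name
   ```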









[GitHub] [incubator-tvm] trevor-m opened a new issue #6691: [Performance] Large performance regression with int64 indices INDEX_DEFAULT_I64=ON (PR #6143)

2020-10-15 Thread GitBox


trevor-m opened a new issue #6691:
URL: https://github.com/apache/incubator-tvm/issues/6691


   I've started noticing a large performance regression affecting Keras MobileNetV2, caused by `INDEX_DEFAULT_I64=ON` (PR #6143). This is on an AWS m5.12xlarge instance.
   
   INDEX_DEFAULT_I64 | Frames per second
   --- | ---
   ON | 66.56
   OFF | 435.49
   
   I profiled the ops and found where the slowdown comes from:
   ## Profile with `INDEX_DEFAULT_I64=OFF` (fast)
   
   ```
   Node Name                                            Ops                                                 Time(us)  Time(%)  Shape                 Inputs  Outputs
   ---------                                            ---                                                 --------  -------  -----                 ------  -------
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_7   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_7  64.704    3.571    (1, 9, 56, 56, 16)    3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_6   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_6  53.362    2.945    (1, 2, 112, 112, 16)  3       1
   fused_nn_pad_3                                       fused_nn_pad_3                                      50.582    2.791    (1, 6, 113, 113, 16)  1       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_5   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_5  47.874    2.642    (1, 6, 56, 56, 16)    3       1
   fused_nn_contrib_conv2d_NCHWc_add_clip_6             fused_nn_contrib_conv2d_NCHWc_add_clip_6            46.828    2.584    (1, 6, 112, 112, 16)  3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_8   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_8  42.364    2.338    (1, 12, 28, 28, 16)   3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_91  fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_9  39.554    2.183    (1, 36, 14, 14, 16)   3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_81  fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_8  39.418    2.175    (1, 12, 28, 28, 16)   3       1
   fused_nn_contrib_conv2d_NCHWc_add_add_4              fused_nn_contrib_conv2d_NCHWc_add_add_4             38.871    2.145    (1, 2, 56, 56, 12)    4       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_9   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_9  37.926    2.093    (1, 36, 14, 14, 16)   3       1
   fused_nn_contrib_conv2d_NCHWc_add_clip_5             fused_nn_contrib_conv2d_NCHWc_add_clip_5            37.407    2.064    (1, 9, 56, 56, 16)    3       1
   fused_nn_contrib_conv2d_NCHWc_add_clip_51            fused_nn_contrib_conv2d_NCHWc_add_clip_5            35.349    1.951    (1, 9, 56, 56, 16)    3       1
   fused_nn_contrib_conv2d_NCHWc_add_clip               fused_nn_contrib_conv2d_NCHWc_add_clip              34.692    1.915    (1, 80, 7, 7, 16)     3       1
   fused_nn_contrib_conv2d_NCHWc_add_6                  fused_nn_contrib_conv2d_NCHWc_add_6                 34.052    1.879    (1, 1, 112, 112, 16)  3       1
   fused_nn_contrib_conv2d_NCHWc_add                    fused_nn_contrib_conv2d_NCHWc_add                   33.58     1.853    (1, 20, 7, 7, 16)     3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_21  fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_2  33.298    1.838    (1, 24, 14, 14, 16)   3       1
   fused_nn_pad_2                                       fused_nn_pad_2                                      33.201    1.832    (1, 9, 57, 57, 16)    1       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_22  fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_2  33.057    1.824    (1, 24, 14, 14, 16)   3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_2   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_2  33.027    1.823    (1, 24, 14, 14, 16)   3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_23  fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip_2  32.787    1.809    (1, 24, 14, 14, 16)   3       1
   fused_nn_contrib_conv2d_NCHWc_add_5                  fused_nn_contrib_conv2d_NCHWc_add_5                 32.332    1.784    (1, 2, 56, 56, 12)    3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip     fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip    32.156    1.775    (1, 60, 7, 7, 16)     3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip1    fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip    31.68     1.748    (1, 60, 7, 7, 16)     3       1
   fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip2    fused_nn_contrib_depthwise_conv2d_NCHWc_add_clip    30.832    1.701    (1, 60, 7, 7, 16)     3       1
   fused_nn_contrib_conv2d_NCHWc_add_clip_7             fused_nn_contrib_conv2d_NCHWc_add_clip_7            (remainder truncated in the archive)
   ```
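
   A per-op breakdown like the table above can be collected with TVM's debug graph runtime. A minimal sketch on a toy Relay module (TVM ~v0.7 APIs assumed; MobileNetV2 itself is not reproduced here):

   ```python
   import numpy as np
   import tvm
   from tvm import relay
   from tvm.contrib.debugger import debug_runtime

   # Toy Relay module standing in for the real network.
   x = relay.var("x", shape=(1, 16), dtype="float32")
   func = relay.Function([x], relay.nn.relu(x + relay.const(1.0, "float32")))
   mod = tvm.IRModule.from_expr(func)

   with tvm.transform.PassContext(opt_level=3):
       graph, lib, params = relay.build(mod, target="llvm")

   # The debug runtime prints a Node Name / Time(us) / Time(%) row per fused op.
   m = debug_runtime.create(graph, lib, tvm.cpu())
   m.set_input("x", np.zeros((1, 16), dtype="float32"))
   m.run()
   ```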

[GitHub] [incubator-tvm] trevor-m commented on issue #6691: [Performance] Large performance regression with int64 indices INDEX_DEFAULT_I64=ON (PR #6143)

2020-10-15 Thread GitBox


trevor-m commented on issue #6691:
URL: https://github.com/apache/incubator-tvm/issues/6691#issuecomment-709569953


   FYI @kevinthesun @hzfan @zhiics @tqchen



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] rkimball opened a new pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-15 Thread GitBox


rkimball opened a new pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692


   There were circular dependencies in object.h which made use of ICHECK in 
Object problematic.







[GitHub] [incubator-tvm] tqchen commented on issue #6691: [Performance] Performance regression with int64 indices INDEX_DEFAULT_I64=ON (PR #6143)

2020-10-15 Thread GitBox


tqchen commented on issue #6691:
URL: https://github.com/apache/incubator-tvm/issues/6691#issuecomment-709583351


   Thanks @trevor-m cc @hzfan. Given that those are constant shapes, we should 
expect NarrowDataType to narrow the dtypes to i32. It would be great to look into 
the IR of those kernels and see why NarrowDataType does not produce the right 
optimization 
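
   As a starting point for that investigation, the narrowing behavior can be checked on a toy kernel. A hedged sketch (TVM ~v0.7 APIs assumed) that declares a constant extent as i64 and prints the lowered TIR, where the loop variable's dtype shows whether `NarrowDataType` fired:

   ```python
   import tvm
   from tvm import te

   # Constant extent deliberately declared as int64, mimicking INDEX_DEFAULT_I64=ON.
   n = tvm.tir.const(1024, "int64")
   A = te.placeholder((n,), name="A", dtype="float32")
   B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
   s = te.create_schedule(B.op)

   # tvm.lower runs the standard lowering pipeline (including NarrowDataType);
   # inspect the printed loop variable's dtype (i32 vs i64).
   print(tvm.lower(s, [A, B], simple_mode=True))
   ```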







[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-15 Thread GitBox


jroesch commented on a change in pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#discussion_r505858000



##
File path: include/tvm/ir/diagnostic_context.h
##
@@ -0,0 +1,222 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_CONTEXT_H_
+#define TVM_IR_DIAGNOSTIC_CONTEXT_H_
+
+#include 
+#include 
+
+#include 
+#include 
+
+namespace tvm {
+
+using tvm::parser::SourceMap;
+using tvm::runtime::TypedPackedFunc;
+
+extern const char* kTVM_INTERNAL_ERROR_MESSAGE;
+
+class DiagnosticBuilder;
+
+/*! \brief A compiler diagnostic. */
+class Diagnostic;
+
+/*! \brief A compiler diagnostic message. */
+class DiagnosticNode : public Object {
+ public:
+  /*! \brief The level. */

Review comment:
   We should improve this comment. 









[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

2020-10-15 Thread GitBox


masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709602965


   @siju-samuel @anijain2305 can you merge this? It is a prereq for upgrading 
our CI to the latest pytorch version







[GitHub] [incubator-tvm] tkonolige opened a new pull request #6693: [FIX,MICROTVM] Skip microtvm tests if microtvm is not built

2020-10-15 Thread GitBox


tkonolige opened a new pull request #6693:
URL: https://github.com/apache/incubator-tvm/pull/6693


   This is a simple PR that adds `@tvm.testing.requires_micro` for microtvm 
tests. Previously, if microtvm was not built, no tests would be run.
   
   @areusch 
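
   Decorators of this kind are typically thin wrappers over `pytest.mark.skipif`. A generic, hypothetical sketch (`requires_feature` is illustrative, not the actual TVM helper):

   ```python
   import pytest

   def requires_feature(enabled):
       """Skip the decorated test when the corresponding feature was not built (sketch)."""
       def decorator(func):
           return pytest.mark.skipif(
               not enabled, reason="feature not enabled in this build"
           )(func)
       return decorator

   @requires_feature(enabled=False)
   def test_uses_feature():
       assert True  # collected but skipped when the feature is disabled
   ```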







[GitHub] [incubator-tvm] tqchen commented on pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-15 Thread GitBox


tqchen commented on pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#issuecomment-709642068


   In this particular case, ICHECK perhaps should not end up in ir/diagnostic, 
but instead in `support/logging.h`; moving the macro defs there would be the 
simplest solution







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-15 Thread GitBox


tqchen edited a comment on pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#issuecomment-709642068


   In this particular case, ICHECK perhaps should not end up in ir/diagnostic, 
but instead in `support/logging.h`; moving the macro defs there would be the 
simplest solution. It would also avoid the dependency from runtime to ir







[GitHub] [incubator-tvm] jroesch merged pull request #6690: [Docker] Update CI CPU and GPU images based on new Docker build files.

2020-10-15 Thread GitBox


jroesch merged pull request #6690:
URL: https://github.com/apache/incubator-tvm/pull/6690


   







[incubator-tvm] branch main updated (b121278 -> 3e8ba2a)

2020-10-15 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from b121278  int32 pooling with int64 shapes (#6687)
 add 3e8ba2a  [Docker] Update CI CPU and GPU images based on new Docker 
build files. (#6690)

No new revisions were added by this update.

Summary of changes:
 Jenkinsfile  | 4 ++--
 docker/Dockerfile.ci_cpu | 4 
 docker/Dockerfile.ci_gpu | 7 +++
 docker/install/ubuntu_install_darknet.sh | 7 ++-
 docker/install/ubuntu_install_dgl.sh | 0
 docker/install/ubuntu_install_sphinx.sh  | 2 +-
 rust/Cargo.toml  | 1 -
 7 files changed, 16 insertions(+), 9 deletions(-)
 mode change 100644 => 100755 docker/install/ubuntu_install_dgl.sh



[GitHub] [incubator-tvm] jroesch opened a new pull request #6694: [Docker] Fix tutorial broken by Docker build

2020-10-15 Thread GitBox


jroesch opened a new pull request #6694:
URL: https://github.com/apache/incubator-tvm/pull/6694


   cc @tqchen @tmoreau89 this change is required for CI to pass on the new 
images. 







[incubator-tvm] branch main updated (3e8ba2a -> 8243145)

2020-10-15 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 3e8ba2a  [Docker] Update CI CPU and GPU images based on new Docker 
build files. (#6690)
 add 8243145  [FIX,MICROTVM] Skip microtvm tests if microtvm is not built 
(#6693)

No new revisions were added by this update.

Summary of changes:
 python/tvm/testing.py | 17 +
 tests/python/unittest/test_crt.py | 14 +-
 2 files changed, 30 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] tqchen merged pull request #6693: [FIX,MICROTVM] Skip microtvm tests if microtvm is not built

2020-10-15 Thread GitBox


tqchen merged pull request #6693:
URL: https://github.com/apache/incubator-tvm/pull/6693


   







[GitHub] [incubator-tvm] jroesch commented on pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

2020-10-15 Thread GitBox


jroesch commented on pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#issuecomment-709673851


   cc @tkonolige 







[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

2020-10-15 Thread GitBox


siju-samuel commented on a change in pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#discussion_r505991645



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -890,6 +890,44 @@ def _impl(inputs, attr, params, mod):
 return _impl
 
 
+def _sparse_tensor_dense_matmul():
+# Sparse utility from Numpy
+from scipy import sparse

Review comment:
   use `from scipy.sparse import csr_matrix`
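
   For reference, the COO-to-CSR conversion this frontend code performs can be exercised with SciPy alone; a small sketch with illustrative indices and values:

   ```python
   import numpy as np
   from scipy.sparse import csr_matrix

   # Illustrative TF-style sparse tensor: (row, col) index pairs plus values.
   indices = np.array([[0, 0], [1, 2]])
   values = np.array([4.0, 8.0], dtype="float32")
   dense_shape = (2, 3)

   rows, cols = indices[:, 0], indices[:, 1]

   # Build CSR and transpose it, mirroring what the converter does before
   # handing the weight to relay's sparse_dense.
   w = csr_matrix((values, (rows, cols)), shape=dense_shape)
   w_t = csr_matrix(w.transpose())

   print(w_t.data)     # [4. 8.]
   print(w_t.indices)  # [0 1]
   print(w_t.indptr)   # [0 1 1 2]
   ```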
   









[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

2020-10-15 Thread GitBox


siju-samuel commented on a change in pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#discussion_r505991920



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -890,6 +890,44 @@ def _impl(inputs, attr, params, mod):
 return _impl
 
 
+def _sparse_tensor_dense_matmul():
+# Sparse utility from Numpy

Review comment:
   Numpy > Scipy
   

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -890,6 +890,44 @@ def _impl(inputs, attr, params, mod):
 return _impl
 
 
+def _sparse_tensor_dense_matmul():
+    # Sparse utility from Numpy
+    from scipy import sparse
+
+    def _impl(inputs, attr, params, mod):
+        assert len(inputs) == 4, "There should be 4 input tensors"
+
+        indices_tensor = _infer_value(inputs[0], params, mod).asnumpy()
+        values_tensor = _infer_value(inputs[1], params, mod).asnumpy()
+        dense_shape_tensor = _infer_value(inputs[2], params, mod).asnumpy()
+
+        data = inputs[3]
+
+        rows = [x[0] for x in indices_tensor]
+        cols = [x[1] for x in indices_tensor]
+
+        # Create Numpy sparse Tensor(CSR)
+        weight_sp = sparse.csr_matrix(
+            (values_tensor, (rows, cols)), shape=tuple(dense_shape_tensor.tolist())
+        )
+        weight_sp = sparse.csr_matrix(weight_sp.transpose())
+
+        weight_data = _expr.const(weight_sp.data, weight_sp.data.dtype)
+        weight_indptrs = _expr.const(weight_sp.indptr, weight_sp.indptr.dtype)
+        weight_indices = _expr.const(weight_sp.indices, weight_sp.indices.dtype)
+
+        ret = _op.nn.sparse_dense(data, [weight_data, weight_indices, weight_indptrs])
+
+        # If both are true means First input was dense and second was sparse
+        # TODO: Support other adjoint option too
+        if attr.get("adjoint_a") and attr.get("adjoint_b"):

Review comment:
   return not supported error for other adjoint options

##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -1750,6 +1750,64 @@ def test_forward_batch_matmul():
 _test_batch_matmul((2, 3, 4, 2, 3, 4, 5, 6), (2, 3, 4, 2, 3, 4, 5, 6), "float32", False, True)
 
 
+###
+# SparseTensorDenseMatMul
+# --
+
+
+def _test_sparse_dense_matmul(indices, values, A_shape, B_shape, dtype, flip=False):
+    """ One iteration of sparse_dense_matmul """
+
+    # TODO: Support adjoint options too
+    for adjoint_a in [False]:
+        for adjoint_b in [False]:
+            with tf.Graph().as_default():
+                A_sp = tf.sparse.SparseTensor(
+                    indices=[[0, 0], [1, 2]], values=[4.0, 8.0], dense_shape=A_shape
+                )
+                B = tf.placeholder(shape=B_shape, dtype=dtype, name="B")
+
+                if flip:
+                    result = tf.sparse.sparse_dense_matmul(
+                        B, A_sp, adjoint_a=adjoint_a, adjoint_b=adjoint_b
+                    )
+                else:
+                    result = tf.sparse.sparse_dense_matmul(
+                        A_sp, B, adjoint_a=adjoint_a, adjoint_b=adjoint_b
+                    )
+
+                B_np = np.random.uniform(high=5.0, size=B_shape).astype(dtype)
+
+                # TODO: There is an issue in cuda scheduling for csr, work in progress
+                compare_tf_with_tvm([B_np], [B.name], result.name, no_gpu=True)

Review comment:
   Need a follow-up PR to solve the cuda scheduling issue for csr

[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

2020-10-15 Thread GitBox


siju-samuel commented on a change in pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#discussion_r505998308



##
File path: python/tvm/relay/frontend/qnn_torch.py
##
@@ -26,6 +26,14 @@
 from tvm.relay import op as _op
 from tvm.relay.frontend.common import infer_shape
 
+from packaging import version

Review comment:
   move inside `_is_newer_than_1_5`

##
File path: python/tvm/relay/frontend/qnn_torch.py
##
@@ -46,59 +54,95 @@ def __init__(self, weight, bias, scale, zero_point, 
param_key):
 self.zero_point = _expr.const(zero_point, dtype="int32")
 
 
-def _unpack_quant_params(param_name, packed_params, unpack_func):
-# Torch stores quantized params in a custom packed format,
-# need to unpack and retrieve them as numpy arrays
-qweight, bias = unpack_func(packed_params)
-weight_np = qweight.dequantize().numpy()
+class ConvPackedParam(QNNParam):
+"""A placeholder for quantized conv2d op attributs

Review comment:
   attributs > attributes

##
File path: python/tvm/relay/frontend/qnn_torch.py
##
@@ -458,24 +513,40 @@ def _impl(inputs, _):
 # inputs[7]: output_zero_point
 # inputs[8]: input_scale (added manually by frontend)
 # inputs[9]: input_zero_point (added manually by frontend)
-weight = inputs[1][0]
-weight_scale = inputs[1][1]
-weight_zero_point = inputs[1][2]
-
-output_scale = _expr.const(inputs[6])
-output_zero_point = _expr.const(inputs[7])
+conv_params = inputs[1]
+weight = conv_params[0]
+weight_scale = conv_params[1]
+weight_zero_point = conv_params[2]
+bias = conv_params[3]
+
+if len(conv_params) > 4:
+# Torch 1.6 or newer case
+strides = conv_params[4]
+padding = conv_params[5]
+dilation = conv_params[6]
+groups = conv_params[7]
+
+output_scale = _expr.const(inputs[2])
+output_zero_point = _expr.const(inputs[3])
+
+assert len(inputs) == 6, "Input quant params not found in op 
inputs"
+
+# These are manually added by add_input_quant_params_to_op_inputs 
above
+# In torch, they are retrieved from QTensor data structure at runt

Review comment:
   runtime?









[GitHub] [incubator-tvm] siju-samuel merged pull request #6670: [TFLite] Fix detection of crop in convert_batch_to_space_nd

2020-10-15 Thread GitBox


siju-samuel merged pull request #6670:
URL: https://github.com/apache/incubator-tvm/pull/6670


   







[incubator-tvm] branch main updated (8243145 -> 08b08b1)

2020-10-15 Thread sijusamuel
This is an automated email from the ASF dual-hosted git repository.

sijusamuel pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 8243145  [FIX,MICROTVM] Skip microtvm tests if microtvm is not built 
(#6693)
 add 08b08b1  [TFLite] Fix detection of crop in convert_batch_to_space_nd 
(#6670)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/tflite.py  | 2 +-
 tests/python/frontend/tflite/test_forward.py | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] siju-samuel commented on pull request #6670: [TFLite] Fix detection of crop in convert_batch_to_space_nd

2020-10-15 Thread GitBox


siju-samuel commented on pull request #6670:
URL: https://github.com/apache/incubator-tvm/pull/6670#issuecomment-709695528


   Thanks @trevor-m @anijain2305. This PR is merged.







[GitHub] [incubator-tvm] lixiaoquan opened a new pull request #6695: [Relay] Change some passes to mix mode

2020-10-15 Thread GitBox


lixiaoquan opened a new pull request #6695:
URL: https://github.com/apache/incubator-tvm/pull/6695


   @mbrookhart  Could you please take a look?
   







[GitHub] [incubator-tvm] Beya2019 commented on pull request #6516: [RELAY][OP] roi_pool operator alter layout

2020-10-15 Thread GitBox


Beya2019 commented on pull request #6516:
URL: https://github.com/apache/incubator-tvm/pull/6516#issuecomment-709700238


   Hi @yzhliu, can you help to have a look at this submit? Thanks very much.







[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

2020-10-15 Thread GitBox


masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709702663


   thanks @siju-samuel 
   I added `pytorch_utils.py`  and moved the version check function there







[GitHub] [incubator-tvm] tmoreau89 merged pull request #6694: [Docker] Fix tutorial broken by Docker build

2020-10-15 Thread GitBox


tmoreau89 merged pull request #6694:
URL: https://github.com/apache/incubator-tvm/pull/6694


   







[incubator-tvm] branch main updated: Fix tutorial broken by Docker build (#6694)

2020-10-15 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 4c4d3dc  Fix tutorial broken by Docker build (#6694)
4c4d3dc is described below

commit 4c4d3dc2d8723a5b78d34f1bb0540ef66ee185a3
Author: Jared Roesch 
AuthorDate: Thu Oct 15 22:27:11 2020 -0700

Fix tutorial broken by Docker build (#6694)
---
 tutorials/frontend/build_gcn.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tutorials/frontend/build_gcn.py b/tutorials/frontend/build_gcn.py
index 5c571ef..b832d18 100644
--- a/tutorials/frontend/build_gcn.py
+++ b/tutorials/frontend/build_gcn.py
@@ -242,7 +242,9 @@ import networkx as nx
 
 def prepare_params(g, data):
 params = {}
-params["infeats"] = data.features.astype("float32")  # Only support 
float32 as feature for now
+params["infeats"] = data.features.numpy().astype(
+"float32"
+)  # Only support float32 as feature for now
 
 # Generate adjacency matrix
 adjacency = nx.to_scipy_sparse_matrix(g)
@@ -350,5 +352,7 @@ test_mask = data.test_mask
 acc = evaluate(data, logits_tvm)
 print("Test accuracy of TVM results: {:.2%}".format(acc))
 
+import tvm.testing
+
 # Verify the results with the DGL model
 tvm.testing.assert_allclose(logits_torch, logits_tvm, atol=1e-3)



[incubator-tvm] branch main updated (08b08b1 -> 4c4d3dc)

2020-10-15 Thread moreau
This is an automated email from the ASF dual-hosted git repository.

moreau pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 08b08b1  [TFLite] Fix detection of crop in convert_batch_to_space_nd 
(#6670)
 add 4c4d3dc  Fix tutorial broken by Docker build (#6694)

No new revisions were added by this update.

Summary of changes:
 tutorials/frontend/build_gcn.py | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)



[GitHub] [incubator-tvm] sxjscience opened a new pull request #6696: [Frontent][Relay][WIP] Fix MXNet frontend to support BERT in GluonNLP V1

2020-10-15 Thread GitBox


sxjscience opened a new pull request #6696:
URL: https://github.com/apache/incubator-tvm/pull/6696


   Fix the MXNet 2.0 integration in Relay. Tested the BERT and ALBERT models in 
the new GluonNLP v1; both pass the test. Will later add unit tests in 
GluonNLP to ensure that most backbones can be run with the graph runtime.
   
   ```python
   import mxnet as mx
   import numpy as np
   import gluonnlp
   from gluonnlp.models import get_backbone
   import numpy.testing as npt

   mx.npx.set_np()

   model_cls, cfg, tokenizer, backbone_param_path, _ = get_backbone('google_albert_base_v2')

   model = model_cls.from_cfg(cfg)
   model.load_parameters(backbone_param_path)
   model.hybridize()


   batch_size = 1
   seq_length = 128
   token_ids = mx.np.random.randint(0, cfg.MODEL.vocab_size, (batch_size, seq_length), dtype=np.int32)
   token_types = mx.np.random.randint(0, 2, (batch_size, seq_length), dtype=np.int32)
   valid_length = mx.np.random.randint(seq_length // 2, seq_length, (batch_size,), dtype=np.int32)
   mx_out = model(token_ids, token_types, valid_length)

   import tvm
   from tvm import relay
   import tvm.contrib.graph_runtime as runtime

   shape_dict = {
       'data0': (batch_size, seq_length),
       'data1': (batch_size, seq_length),
       'data2': (batch_size,)
   }

   dtype_dict = {
       'data0': 'int32',
       'data1': 'int32',
       'data2': 'int32'
   }

   sym = model._cached_graph[1]

   params = {}
   for k, v in model.collect_params().items():
       params[v._var_name] = tvm.nd.array(v.data().asnumpy())
   mod, params = relay.frontend.from_mxnet(sym, shape=shape_dict, dtype=dtype_dict, arg_params=params)
   print(mod)
   # G4
   target = "cuda -model=t4"

   with relay.build_config(opt_level=3, required_pass=["FastMath"]):
       graph, lib, cparams = relay.build(mod, target, params=params)

   ctx = tvm.gpu()
   rt = runtime.create(graph, lib, ctx)
   rt.set_input(**cparams)
   rt.set_input(data0=token_ids, data1=token_types, data2=valid_length)
   rt.run()
   for i in range(rt.get_num_outputs()):
       out = rt.get_output(i)
       print(out.asnumpy())  # verify the correctness
       npt.assert_allclose(out.asnumpy(), mx_out[i].asnumpy(), rtol=1e-3, atol=1e-2)
   ```


