[tvm] branch main updated (4f9e614 -> a0f4917)
This is an automated email from the ASF dual-hosted git repository. jcf94 pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 4f9e614 fix first-order AD tuple/projection expr duplication (#8318) add a0f4917 [tvmc] Fix inconsistent usage of host_name -> hostname (#8324) No new revisions were added by this update. Summary of changes: python/tvm/driver/tvmc/autotuner.py | 6 +++--- tests/python/driver/tvmc/test_autotuner.py | 21 + 2 files changed, 24 insertions(+), 3 deletions(-)
[GitHub] [tvm] jcf94 merged pull request #8324: [tvmc] Fix inconsistent usage of host_name -> hostname
jcf94 merged pull request #8324: URL: https://github.com/apache/tvm/pull/8324 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] zhuzilin edited a comment on pull request #8056: [Relay, TOPI] Add negative log likelihood loss (nll_loss) op
zhuzilin edited a comment on pull request #8056: URL: https://github.com/apache/tvm/pull/8056#issuecomment-868179165

> in this case we need to change tag to `kInjective` as the reduction op is not broadcast

@vinx13 Changing the tag to `kInjective` will fail the tag check in `traverse_after_reduce` in https://github.com/apache/tvm/blob/d0791d3db971a111826d96201bd1e4c9c0d531da/python/tvm/topi/x86/reduction.py#L94 whereas the empty tag was triggering `traverse_before_reduce`. It's worth noticing that the tag in question is the one in:

```c++
auto T = tvm::te::compute(
    targets->shape,
    ...
    name, tag);
```

which is an element-wise operation on `targets`. When `reduction="mean"` or `reduction="sum"`, `T` will be reduced, whereas when `reduction="none"`, `T` will be returned as the result of `nll_loss`. Therefore, as `nll_loss` is registered to use the reduce schedule, when `reduction="mean"` or `"sum"`, `T` will go through the checks in `traverse_before_reduce`, while when `reduction="none"`, `T` will go through the checks in `traverse_after_reduce`. Right now, `kBroadcast` happens to be the only tag that satisfies both checks and passes the CI. To really solve this problem, I'm afraid we may need to define a different tag and different scheduling for different op attrs...
[GitHub] [tvm] zhuzilin commented on pull request #8056: [Relay, TOPI] Add negative log likelihood loss (nll_loss) op
zhuzilin commented on pull request #8056: URL: https://github.com/apache/tvm/pull/8056#issuecomment-868179165

> in this case we need to change tag to `kInjective` as the reduction op is not broadcast

@vinx13 Changing the tag to `kInjective` will fail the tag check in `traverse_after_reduce` in https://github.com/apache/tvm/blob/d0791d3db971a111826d96201bd1e4c9c0d531da/python/tvm/topi/x86/reduction.py#L94 whereas the empty tag was triggering `traverse_before_reduce`. It's worth noticing that the tag in question is the one in:

```c++
auto T = tvm::te::compute(
    targets->shape,
    ...
    name, tag);
```

which is an element-wise operation on `targets`. When `reduction="mean"` or `reduction="sum"`, `T` will be reduced, whereas when `reduction="none"`, `T` will be returned as the result of `nll_loss`. Therefore, as `nll_loss` is registered to use the reduce schedule, when `reduction="mean"` or `"sum"`, `T` will go through the checks in `traverse_before_reduce`, while when `reduction="none"`, `T` will go through the checks in `traverse_after_reduce`. Right now, `kBroadcast` happens to be the only tag that satisfies both checks and passes the CI. To really solve this problem, I'm afraid we may need to define a different tag for different op attrs...
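The interaction zhuzilin describes can be condensed into a toy model. This is a sketch only: the acceptance sets below are paraphrased from the discussion, not copied from `python/tvm/topi/x86/reduction.py`, and the tag names are simplified stand-ins for TOPI's `kBroadcast`/`kInjective` constants.

```python
# Toy model of the tag checks discussed above (illustrative only).
def traverse_before_reduce(tag):
    # Path taken when reduction="mean"/"sum": empty and broadcast tags pass.
    return tag in ("", "broadcast")

def traverse_after_reduce(tag):
    # Path taken when reduction="none": injective fails here, broadcast passes.
    return tag in ("broadcast", "comm_reduce")

tags = ["", "broadcast", "injective", "comm_reduce"]
both = [t for t in tags if traverse_before_reduce(t) and traverse_after_reduce(t)]
print(both)  # ['broadcast'] -- the only tag that satisfies both code paths
```

This makes the dilemma concrete: a single tag must sit in the intersection of both acceptance sets, and in this model that intersection contains only the broadcast tag.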
[GitHub] [tvm] gromero commented on pull request #8055: apps: microtvm: Disable `CONFIG_FPU ` for Zephyr runtime
gromero commented on pull request #8055: URL: https://github.com/apache/tvm/pull/8055#issuecomment-868153637 @microbuilder friendly ping :)
[GitHub] [tvm] jcf94 commented on pull request #8328: [COMMUNITY] Reviewer: wyc-ruiker
jcf94 commented on pull request #8328: URL: https://github.com/apache/tvm/pull/8328#issuecomment-868131139 Congratulations!
[GitHub] [tvm] masahi commented on a change in pull request #8309: [Relay] Remove in-place modification of attributes in layout transform
masahi commented on a change in pull request #8309: URL: https://github.com/apache/tvm/pull/8309#discussion_r658383787 ## File path: src/relay/transforms/infer_layout_utils.h ## @@ -85,6 +85,30 @@ inline Layout AdjustSubordinateFactors(const Layout& src_layout, const Layout& o return Layout(new_layout); } +/* + * \brief An output structure to hold results from FInferCorrectLayout calls. + * \tparam inferred_layout An array of two elements, inferred input layouts and + * inferred output layouts. + * \tparam new_attrs Updated attributes consistent with inferred layouts + */ +class InferCorrectLayoutOutputNode : public Object { + public: + Array<Array<Layout>> inferred_layout; Review comment: Thanks for the good suggestion! Updated.
[GitHub] [tvm] AndrewZhaoLuo opened a new pull request #8337: [Onnx] Support Bidirectional RNNs
AndrewZhaoLuo opened a new pull request #8337: URL: https://github.com/apache/tvm/pull/8337
[GitHub] [tvm] hogepodge closed pull request #8334: Update the tvmc tutorial with additional requirements
hogepodge closed pull request #8334: URL: https://github.com/apache/tvm/pull/8334
[GitHub] [tvm] hogepodge closed pull request #8180: Fix install link
hogepodge closed pull request #8180: URL: https://github.com/apache/tvm/pull/8180
[GitHub] [tvm] Lunderberg commented on pull request #8332: [Vulkan] Improved error message for extern calls passed to SPIR-V codegen
Lunderberg commented on pull request #8332: URL: https://github.com/apache/tvm/pull/8332#issuecomment-868018715 Potential reviewer: @masahi
[GitHub] [tvm] Lunderberg commented on pull request #8333: [Vulkan] Added debug saving of Vulkan shaders, environment variable documentation
Lunderberg commented on pull request #8333: URL: https://github.com/apache/tvm/pull/8333#issuecomment-868018700 Potential reviewer: @masahi
[GitHub] [tvm] Lunderberg opened a new pull request #8336: [Topi][Unittests] Parametrized tests in `test_topi_dense.py`, split out gpu-independent implementations
Lunderberg opened a new pull request #8336: URL: https://github.com/apache/tvm/pull/8336 [Topi][UnitTests] Parametrized tests in test_topi_dense.py. Tests now run for multiple data types and can be extended with additional datatypes. [Topi] Separated generic-gpu nn.dense implementations into topi.gpu.dense. As a follow-up to the renaming of "gpu" to "cuda", this separates implementations that require CUDA (e.g. dense_cublas.cuda) from implementations that require any GPU, but not necessarily a CUDA GPU (e.g. dense_small_batch.gpu). My intent is to pair this migration with the extension of unit tests to cover additional GPU runtimes, migrating only implementations that run correctly on non-CUDA GPU devices.
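The parametrization style mentioned in the PR can be sketched with plain pytest; the test name, shapes, and dtype list below are illustrative, not the actual contents of `test_topi_dense.py`.

```python
import numpy as np
import pytest

# One test body, automatically run once per dtype (hypothetical example).
@pytest.mark.parametrize("dtype", ["float16", "float32", "int8"])
def test_dense_dtype_preserved(dtype):
    a = np.ones((4, 4), dtype=dtype)
    # Stand-in numeric check: matmul of same-dtype inputs keeps the dtype.
    assert (a @ a).dtype == np.dtype(dtype)
```

Extending coverage to another datatype is then a one-line change to the parameter list, which is the point of the refactor.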
[GitHub] [tvm] masahi opened a new pull request #8335: [TEST] Make sure there is no tie in scores in TF combined NMS test
masahi opened a new pull request #8335: URL: https://github.com/apache/tvm/pull/8335 https://github.com/apache/tvm/issues/8140
[GitHub] [tvm] gromero commented on a change in pull request #8331: Allow tvmc to compile models with AOT executor in MLF
gromero commented on a change in pull request #8331: URL: https://github.com/apache/tvm/pull/8331#discussion_r658302581 ## File path: tests/python/driver/tvmc/conftest.py ## @@ -167,40 +148,17 @@ def onnx_mnist(): return model_file -@pytest.fixture(scope="session") -def tflite_compiled_model(tmpdir_factory): - -# Not all CI environments will have TFLite installed -# so we need to safely skip this fixture that will -# crash the tests that rely on it. -# As this is a pytest.fixture, we cannot take advantage -# of pytest.importorskip. Using the block below instead. -try: -import tflite -except ImportError: -print("Cannot import tflite, which is required by tflite_compiled_module_as_tarfile.") -return "" - -target_dir = tmpdir_factory.mktemp("data") -return get_sample_compiled_module(target_dir, "mock.tar") - - -@pytest.fixture(scope="session") -def tflite_compiled_model_mlf(tmpdir_factory): +@pytest.fixture +def tflite_tvmc_compiler(tmpdir_factory): Review comment: How about renaming the `tflite_tvmc_compiler` fixture to `tflite_compile_model`, just because I don't think that `tvmc` in there is informative. But no strong feelings about it. Feel free also to get more input from Leandro before sending a v2. ## File path: python/tvm/driver/tvmc/model.py ## @@ -332,8 +333,13 @@ def import_package(self, package_path: str): # Model Library Format (MLF) self.lib_name = None self.lib_path = None +with open(temp.relpath("metadata.json")) as metadata_json: +metadata = json.load(metadata_json) -graph = temp.relpath("runtime-config/graph/graph.json") +if "graph" in metadata["runtimes"]: +graph = temp.relpath("runtime-config/graph/graph.json") +else: +graph = None Review comment: Nice. I just think a hint about AOT should exist here. How about adding a comment above `graph = None`, like "AOT runtime"? 
## File path: tests/python/driver/tvmc/test_mlf.py ## @@ -82,9 +85,13 @@ assert str(exp.value) == expected_reason, on_error -def test_tvmc_import_package_mlf(tflite_compiled_model_mlf): +def test_tvmc_import_package_mlf(tflite_mobilenet_v1_1_quant, tflite_tvmc_compiler): Review comment: I think we can rename `test_tvmc_import_package_mlf` to `test_tvmc_import_package_mlf_graph`. ## File path: tests/python/driver/tvmc/test_mlf.py ## @@ -97,3 +104,27 @@ def test_tvmc_import_package_mlf(tflite_compiled_model_mlf): assert tvmc_package.graph is not None, ".graph must be set in the MLF archive." Review comment: The error message must be adapted here, adding "[...] if not AOT". Maybe: `".graph must be set in the MLF archive if not AOT runtime"`.
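The `metadata.json` logic under review can be exercised in isolation. The directory layout follows the diff above, but the helper function itself is hypothetical, not tvmc's API, and the metadata payload is a minimal stand-in.

```python
import json
import os
import tempfile

def resolve_graph_config(extracted_dir):
    # Read the MLF metadata to decide which executor the archive targets.
    with open(os.path.join(extracted_dir, "metadata.json")) as metadata_json:
        metadata = json.load(metadata_json)
    if "graph" in metadata["runtimes"]:
        return os.path.join(extracted_dir, "runtime-config", "graph", "graph.json")
    return None  # AOT runtime: the archive carries no graph JSON

# Demo: an AOT archive yields no graph config.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "metadata.json"), "w") as f:
        json.dump({"runtimes": ["aot"]}, f)
    print(resolve_graph_config(d))  # None
```

This mirrors why the test's error message needs the "if not AOT" qualifier: `.graph` is legitimately `None` for AOT archives.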
[GitHub] [tvm] schilkunda-amba commented on a change in pull request #8323: [Relay to Onnx][LRN]
schilkunda-amba commented on a change in pull request #8323: URL: https://github.com/apache/tvm/pull/8323#discussion_r658286396 ## File path: python/tvm/contrib/target/onnx.py ## @@ -617,6 +617,20 @@ def convert_attributes(cls, attrs): return {"value": 1} +class LRN(OpConverter): +"""Operator converter for LRN.""" + +@classmethod +def convert_attributes(cls, attrs): +return { +"alpha": attrs.alpha, +"beta": attrs.beta, +"bias": attrs.bias, +"size": attrs.size +# axis? Review comment: Have added the axis check.
[GitHub] [tvm] hogepodge commented on pull request #8334: Update the tvmc tutorial with additional requirements
hogepodge commented on pull request #8334: URL: https://github.com/apache/tvm/pull/8334#issuecomment-867953794 @leandron
[GitHub] [tvm] hogepodge opened a new pull request #8334: Update the tvmc tutorial with additional requirements
hogepodge opened a new pull request #8334: URL: https://github.com/apache/tvm/pull/8334 The pre- and post-processing scripts supplied in this tutorial require pillow to be installed, and this tutorial also requires that onnx be installed. This patch indicates those are requirements for successful completion of this tutorial.
[GitHub] [tvm] Lunderberg opened a new pull request #8332: [Vulkan] Improved error message for extern calls passed to SPIR-V codegen
Lunderberg opened a new pull request #8332: URL: https://github.com/apache/tvm/pull/8332 Previously, the codegen only indicated that an extern call was present. Now it also indicates which extern call it is, to aid in debugging.
[GitHub] [tvm] Lunderberg opened a new pull request #8333: [Vulkan] Added debug saving of Vulkan shaders, environment variable documentation
Lunderberg opened a new pull request #8333: URL: https://github.com/apache/tvm/pull/8333 Frequently, looking at the shaders generated by the Vulkan codegen is useful for debugging. While this can be done by checking `mod.imported_modules[0].get_source()`, that requires the shader to first pass validation.
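The debug-saving pattern this PR describes can be sketched as an environment-variable-gated dump. The variable name and helper below are illustrative assumptions, not TVM's actual interface.

```python
import os

def maybe_save_shader(name, spirv_text, env_var="TVM_VULKAN_DEBUG_SHADER_PATH"):
    """Write generated shader text to disk when the debug env var is set."""
    path = os.environ.get(env_var)
    if not path:
        return None  # debugging disabled: nothing written
    os.makedirs(path, exist_ok=True)
    out = os.path.join(path, name + ".spvasm")
    with open(out, "w") as f:
        f.write(spirv_text)
    return out

print(maybe_save_shader("main_kernel", "; SPIR-V text"))  # None unless the env var is set
```

The value of dumping at generation time rather than via `get_source()` is that a malformed shader can still be inspected even when it fails validation.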
[GitHub] [tvm] Lunderberg commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
Lunderberg commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658275627 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuration. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake ..
&& $(MAKE) runtime +# Set up a default config.cmake inside the build directory. # filter-out used to avoid circular dependency. %/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: That definitely makes sense; this is primarily me looking to add more convenience where possible.
[GitHub] [tvm] jroesch commented on a change in pull request #8309: [Relay] Remove in-place modification of attributes in layout transform
jroesch commented on a change in pull request #8309: URL: https://github.com/apache/tvm/pull/8309#discussion_r658272825 ## File path: src/relay/transforms/infer_layout_utils.h ## @@ -85,6 +85,30 @@ inline Layout AdjustSubordinateFactors(const Layout& src_layout, const Layout& o return Layout(new_layout); } +/* + * \brief An output structure to hold results from FInferCorrectLayout calls. + * \tparam inferred_layout An array of two elements, inferred input layouts and + * inferred output layouts. + * \tparam new_attrs Updated attributes consistent with inferred layouts + */ +class InferCorrectLayoutOutputNode : public Object { + public: + Array<Array<Layout>> inferred_layout; Review comment: Can we change the inner thing to actually be a data structure instead of using `Array`s to represent structures/pairs?
[tvm] branch main updated (3e28716 -> 4f9e614)
This is an automated email from the ASF dual-hosted git repository. jroesch pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 3e28716 [Vulkan] Implement sync for SyncThread("warp") (#8320) add 4f9e614 fix first-order AD tuple/projection expr duplication (#8318) No new revisions were added by this update. Summary of changes: src/relay/transforms/first_order_gradient.cc | 35 +++- tests/python/relay/test_pass_gradient.py | 17 ++ 2 files changed, 41 insertions(+), 11 deletions(-)
[GitHub] [tvm] jroesch merged pull request #8318: [Relay][Training] fix first-order AD tuple/projection expr duplication
jroesch merged pull request #8318: URL: https://github.com/apache/tvm/pull/8318
[GitHub] [tvm] masahi commented on pull request #8320: [Vulkan] Implement sync for SyncThread("warp")
masahi commented on pull request #8320: URL: https://github.com/apache/tvm/pull/8320#issuecomment-867891950 Thanks @Lunderberg
[tvm] branch main updated (07701f2 -> 3e28716)
This is an automated email from the ASF dual-hosted git repository. masahi pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 07701f2 [UnitTests] Automatic parametrization over targets, with explicit opt-out (#8010) add 3e28716 [Vulkan] Implement sync for SyncThread("warp") (#8320) No new revisions were added by this update. Summary of changes: src/target/spirv/build_vulkan.cc | 21 ++-- src/target/spirv/codegen_spirv.cc| 29 +--- src/target/spirv/spirv_support.cc| 4 src/target/spirv/spirv_support.h | 14 ++ src/tir/transforms/lower_thread_allreduce.cc | 2 +- 5 files changed, 56 insertions(+), 14 deletions(-)
[GitHub] [tvm] masahi merged pull request #8320: [Vulkan] Implement sync for SyncThread("warp")
masahi merged pull request #8320: URL: https://github.com/apache/tvm/pull/8320
[tvm-site] branch asf-site updated: Build at Thu Jun 24 14:55:40 EDT 2021
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/tvm-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 5bb31ca Build at Thu Jun 24 14:55:40 EDT 2021 5bb31ca is described below commit 5bb31ca047774d238eadd0ef6b1f70f393a72402 Author: tqchen AuthorDate: Thu Jun 24 14:55:41 2021 -0400 Build at Thu Jun 24 14:55:40 EDT 2021 --- atom.xml | 2 +- community.html | 6 ++ feed.xml | 2 +- rss.xml | 4 ++-- 4 files changed, 10 insertions(+), 4 deletions(-) [generated-site diff: refreshes the atom/feed/rss timestamps and adds a Discord card to community.html ("Connect directly with TVM community members in the TVM Discord server.", linking https://discord.gg/77Hh4jVhbM); the diff's HTML markup did not survive extraction]
[GitHub] [tvm] schilkunda-amba closed pull request #8322: [Relay to Onnx Conversion] Fixed relay.var initialization
schilkunda-amba closed pull request #8322: URL: https://github.com/apache/tvm/pull/8322
[GitHub] [tvm] vinx13 commented on pull request #8056: [Relay, TOPI] Add negative log likelihood loss (nll_loss) op
vinx13 commented on pull request #8056: URL: https://github.com/apache/tvm/pull/8056#issuecomment-867848988 > @vinx13 @altanh Thank you for your help! > > > tag here is topi-level, sometimes we use it to identify a specific compute operation during schedule, otherwise we can leave it empty > > If I change the value of `tag` to an empty string, it will fail the check in `schedule_reduce`, which is: > > https://github.com/apache/tvm/blob/d0791d3db971a111826d96201bd1e4c9c0d531da/python/tvm/topi/x86/reduction.py#L84 > > I'm not sure if I need to adjust somewhere else... in this case we need to change tag to `kInjective` as the reduction op is not broadcast
[tvm-site] branch main updated: Add Discord server invitation (#29)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm-site.git The following commit(s) were added to refs/heads/main by this push: new 708ea1d Add Discord server invitation (#29) 708ea1d is described below commit 708ea1d4840fee47a0230aed249f807eef04babc Author: Chris Hoge AuthorDate: Thu Jun 24 12:05:40 2021 -0600 Add Discord server invitation (#29) --- _data/community.yml | 4 1 file changed, 4 insertions(+) diff --git a/_data/community.yml b/_data/community.yml index 3aa469c..59bc2dd 100644 --- a/_data/community.yml +++ b/_data/community.yml @@ -27,3 +27,7 @@ des: The TVM Community conducts a number of public events, including monthly general meetings, project sub-meetings, and the annual TVM Conference. You can subscribe to the public events calendar here. buttonname: Calendar link: https://calendar.google.com/calendar/embed?src=071aaettatchrj779v0k8jsmcc%40group.calendar.google.com +- cardname: Discord + des: Connect directly with TVM community members in the TVM Discord server. + buttonname: Discord + link: https://discord.gg/77Hh4jVhbM
[GitHub] [tvm] tmoreau89 commented on pull request #8010: [UnitTests] Automatic parametrization over targets, with explicit opt-out
tmoreau89 commented on pull request #8010: URL: https://github.com/apache/tvm/pull/8010#issuecomment-867833092 Thank you @tkonolige @jwfromm @Lunderberg, the PR is now merged!
[tvm] branch main updated (6b7b966 -> 07701f2)
This is an automated email from the ASF dual-hosted git repository. moreau pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 6b7b966 [Relay][Frontend][Onnx] Enable group_conv1d import through conv2d conversion. (#8321) add 07701f2 [UnitTests] Automatic parametrization over targets, with explicit opt-out (#8010) No new revisions were added by this update. Summary of changes: conftest.py| 20 +- python/tvm/testing.py | 600 +++-- tests/python/topi/python/test_topi_relu.py | 77 ++- tests/python/unittest/test_tvm_testing_features.py | 149 + 4 files changed, 760 insertions(+), 86 deletions(-) create mode 100644 tests/python/unittest/test_tvm_testing_features.py
[GitHub] [tvm] tmoreau89 merged pull request #8010: [UnitTests] Automatic parametrization over targets, with explicit opt-out
tmoreau89 merged pull request #8010: URL: https://github.com/apache/tvm/pull/8010
[GitHub] [tvm] trevor-m commented on a change in pull request #8323: [Relay to Onnx][LRN]
trevor-m commented on a change in pull request #8323: URL: https://github.com/apache/tvm/pull/8323#discussion_r658113381 ## File path: python/tvm/contrib/target/onnx.py ## @@ -617,6 +617,20 @@ def convert_attributes(cls, attrs): return {"value": 1} +class LRN(OpConverter): +"""Operator converter for LRN.""" + +@classmethod +def convert_attributes(cls, attrs): +return { +"alpha": attrs.alpha, +"beta": attrs.beta, +"bias": attrs.bias, +"size": attrs.size +# axis? Review comment: Remove comment. It looks like the ONNX LRN op doesn't have an axis parameter, as it always applies to axis 1. We should probably add an assert somewhere to check that the axis is also 1 in the Relay operator that is being converted.
[GitHub] [tvm] mbrookhart merged pull request #8321: [Relay][Frontend][Onnx] Enable group_conv1d import through conv2d conversion.
mbrookhart merged pull request #8321: URL: https://github.com/apache/tvm/pull/8321
[tvm] branch main updated (d9fe672 -> 6b7b966)
This is an automated email from the ASF dual-hosted git repository. mbrookhart pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from d9fe672 [Docs] Prevented docs/1 file from being generated. (#8029) add 6b7b966 [Relay][Frontend][Onnx] Enable group_conv1d import through conv2d conversion. (#8321) No new revisions were added by this update. Summary of changes: python/tvm/relay/frontend/onnx.py | 28 ++-- tests/python/frontend/onnx/test_forward.py | 20 ++-- 2 files changed, 44 insertions(+), 4 deletions(-)
[GitHub] [tvm] trevor-m commented on a change in pull request #8323: [Relay to Onnx][LRN]
trevor-m commented on a change in pull request #8323: URL: https://github.com/apache/tvm/pull/8323#discussion_r658113381 ## File path: python/tvm/contrib/target/onnx.py ## @@ -617,6 +617,20 @@ def convert_attributes(cls, attrs): return {"value": 1} +class LRN(OpConverter): +"""Operator converter for LRN.""" + +@classmethod +def convert_attributes(cls, attrs): +return { +"alpha": attrs.alpha, +"beta": attrs.beta, +"bias": attrs.bias, +"size": attrs.size +# axis? Review comment: Remove comment. It looks like the ONNX LRN op doesn't have an axis parameter, as it always applies to axis 1. Should we add an assert somewhere to check that the axis is also 1 in the Relay operator that is being converted? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
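A minimal sketch of what the suggested check might look like. This is illustrative only: `OpConverter` from `python/tvm/contrib/target/onnx.py` is replaced by a plain class, and `attrs` by a stand-in namespace, so only the attribute conversion and the proposed axis assertion are shown.

```python
from types import SimpleNamespace

class LRN:
    """Hypothetical stand-in for the LRN OpConverter subclass,
    showing the axis check suggested in the review."""

    @classmethod
    def convert_attributes(cls, attrs):
        # ONNX LRN has no axis attribute and always normalizes over axis 1
        # (the channel axis), so reject any other Relay configuration.
        assert int(attrs.axis) == 1, "ONNX LRN only supports axis=1"
        return {
            "alpha": attrs.alpha,
            "beta": attrs.beta,
            "bias": attrs.bias,
            "size": attrs.size,
        }

# Example Relay-style attributes (values are illustrative only).
attrs = SimpleNamespace(alpha=1e-4, beta=0.75, bias=1.0, size=5, axis=1)
print(LRN.convert_attributes(attrs))
```

With `axis=1` the attributes convert cleanly; any other axis fails the assertion instead of silently producing an ONNX model with different semantics.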
[GitHub] [tvm] trevor-m commented on issue #8140: [TEST][FLAKY] tests/python/frontend/tensorflow/test_forward.py::test_forward_combined_nms
trevor-m commented on issue #8140: URL: https://github.com/apache/tvm/issues/8140#issuecomment-867786119 I'm fine with disabling the test for now, sorry I haven't had a chance to look into the flakiness yet. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] stoa commented on pull request #7742: Contributing the STM32 port
stoa commented on pull request #7742: URL: https://github.com/apache/tvm/pull/7742#issuecomment-867774013 @areusch Hello, Andrew. I do not understand the build failure. Can someone explain what the problem is? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
tqchen commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658089050 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) runtime +# Set up a default config.cmake inside the build directory. 
+# filter-out used to avoid circular dependency. %/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: Right now the Makefile mainly serves as a convenience tool and we are quite cmake-centric, so we can keep things simple in the Makefile. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] mbrookhart commented on issue #8140: [TEST][FLAKY] tests/python/frontend/tensorflow/test_forward.py::test_forward_combined_nms
mbrookhart commented on issue #8140: URL: https://github.com/apache/tvm/issues/8140#issuecomment-867743767 I'm seeing flakiness in this test in about 1/3 of CI jobs, and it's becoming a real problem for getting other PRs merged. Should we think about disabling this test until we can resolve the flakiness? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
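One way the test could be temporarily disabled, as discussed above, is a pytest skip marker referencing the tracking issue. This is a hypothetical sketch; the real test body in tests/python/frontend/tensorflow/test_forward.py is elided.

```python
import pytest

# Hypothetical sketch: mark the flaky test as skipped, pointing back at the
# tracking issue, until the underlying nondeterminism is resolved.
@pytest.mark.skip(reason="flaky, see https://github.com/apache/tvm/issues/8140")
def test_forward_combined_nms():
    pass  # the original TensorFlow combined_nms checks would run here

# pytest records the marker on the function object itself.
print(test_forward_combined_nms.pytestmark[0].name)
```

A skip keeps the test visible in reports (unlike deleting it), and the reason string makes it easy to find and re-enable once the flakiness is fixed.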
[GitHub] [tvm] Mousius commented on pull request #8331: Allow tvmc to compile models with AOT executor in MLF
Mousius commented on pull request #8331: URL: https://github.com/apache/tvm/pull/8331#issuecomment-867743835 @leandron @gromero -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] Mousius opened a new pull request #8331: Allow tvmc to compile models with AOT executor in MLF
Mousius opened a new pull request #8331: URL: https://github.com/apache/tvm/pull/8331 The tflite_compiled_model fixture was getting duplicated a few times, so I've added a parameterized fixture, tflite_tvmc_compiler, which combines the tmpdir_factory setup with compile_model. Nested target params broke a basic string split, so in cases where we use nested target params I replaced the string split with a shlex split. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
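The string-split problem mentioned above can be illustrated in isolation. The command line below is a hypothetical example, not taken from the PR, but it shows why a quoted (nested) target string defeats a plain `str.split` while `shlex.split` handles it.

```python
import shlex

# Hypothetical tvmc-style command line with a nested (quoted) target string.
args = 'compile --target "c -link-params -mcpu=cortex-m55" model.tflite'

# A plain str.split tears the quoted group apart:
print(args.split())
# ['compile', '--target', '"c', '-link-params', '-mcpu=cortex-m55"', 'model.tflite']

# shlex.split respects shell-style quoting and keeps the group together:
print(shlex.split(args))
# ['compile', '--target', 'c -link-params -mcpu=cortex-m55', 'model.tflite']
```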
[GitHub] [tvm] Lunderberg commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
Lunderberg commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658060824 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. 
&& $(MAKE) runtime +# Set up a default config.cmake inside the build directory. +# filter-out used to avoid circular dependency. +%/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: True, and testing without the symlink shows that it works correctly. I tend to be hesitant on makefile rules that neither create their target nor are marked as `.PHONY`, but it does have the desired behavior here. We may need the symlink at some point in the future, if we want to avoid the call to `cmake` when possible, but that can be done at a later point. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
tqchen commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658051013 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) runtime +# Set up a default config.cmake inside the build directory. 
+# filter-out used to avoid circular dependency. +%/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: symlink is likely not necessary, because the cmake is already made to use that behavior https://github.com/apache/tvm/blob/main/CMakeLists.txt#L16, as long as no config.cmake is copied it is fine -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] Lunderberg commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
Lunderberg commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658043926 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. 
&& $(MAKE) runtime +# Set up a default config.cmake inside the build directory. +# filter-out used to avoid circular dependency. +%/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: The latest commit on this PR now preserves the behavior of using the root-directory's `config.cmake` using symlinks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] giuseros commented on pull request #8014: [AOT] Name mangling in AOT
giuseros commented on pull request #8014: URL: https://github.com/apache/tvm/pull/8014#issuecomment-867705273 A friendly ping @manupa-arm , @jroesch , @areusch ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] giuseros commented on pull request #8096: Decoupling AOT from graph memory planner
giuseros commented on pull request #8096: URL: https://github.com/apache/tvm/pull/8096#issuecomment-867704948 A friendly ping @manupa-arm , @jroesch , @areusch ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] Lunderberg commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
Lunderberg commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r658022356 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. 
&& $(MAKE) runtime +# Set up a default config.cmake inside the build directory. +# filter-out used to avoid circular dependency. +%/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: Good point, I had been going off of the "Install from Source" documentation, and hadn't realized there could be a root directory config.cmake instead. I'd like to have the `%/config.cmake` build rule in place so that it can later be specialized to set up local build directories to match the CI. (e.g. The rule for `build_docker/ci_gpu/config.cmake` would copy `config.cmake` to that location, then run `tests/scripts/task_config_build_gpu.sh` on it.) I think the best way to maintain the current behavior while allowing for this later expansion would be to symlink to a root directory `config.cmake` if it exists, falling back to copying the `config.cmake` if it doesn't. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen opened a new pull request #8330: [DOCKER] Update lint to reflect the latest state
tqchen opened a new pull request #8330: URL: https://github.com/apache/tvm/pull/8330 Image PR #8329 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[tvm] branch main updated (b9d2899 -> d9fe672)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from b9d2899 [Relay][Training] Additional gradients (#8307) add d9fe672 [Docs] Prevented docs/1 file from being generated. (#8029) No new revisions were added by this update. Summary of changes: docs/api/python/index.rst| 1 + docs/api/python/relay/image.rst | 1 + docs/api/python/relay/index.rst | 1 + docs/api/python/tir.rst | 1 + docs/api/python/topi.rst | 1 + docs/dev/device_target_interactions.rst | 1 + docs/dev/index.rst | 11 ++ python/tvm/auto_scheduler/compute_dag.py | 2 +- python/tvm/driver/build_module.py| 12 +-- python/tvm/ir/op.py | 18 ++-- python/tvm/micro/build.py| 11 +- python/tvm/relay/op/transform.py | 16 ++ python/tvm/relay/transform/transform.py | 26 --- python/tvm/runtime/ndarray.py| 1 + python/tvm/runtime/profiling.py | 2 +- python/tvm/te/hybrid/__init__.py | 2 +- python/tvm/te/operation.py | 10 - python/tvm/te/tensor_intrin.py | 8 +++ python/tvm/tir/buffer.py | 2 +- python/tvm/tir/schedule/block_scope.py | 17 +-- python/tvm/tir/schedule/schedule.py | 27 python/tvm/tir/stmt.py | 2 +- python/tvm/tir/stmt_functor.py | 8 +++ python/tvm/tir/transform/function_pass.py| 2 +- python/tvm/tir/transform/transform.py| 20 -- python/tvm/topi/nn/sparse.py | 18 +--- python/tvm/topi/sparse_reshape.py| 10 ++--- python/tvm/topi/transform.py | 1 + python/tvm/topi/unique.py| 14 ++-- tests/scripts/task_sphinx_precheck.sh| 7 +++--- tutorials/frontend/deploy_model_on_rasp.py | 2 +- tutorials/get_started/autotvm_matmul_x86.py | 8 +++ tutorials/get_started/install.py | 1 + tutorials/get_started/relay_quick_start.py | 9 tutorials/get_started/tensor_expr_get_started.py | 24 + vta/tutorials/autotvm/tune_alu_vta.py| 1 + 36 files changed, 185 insertions(+), 113 deletions(-)
[GitHub] [tvm] tqchen merged pull request #8029: [Docs] Prevented docs/1 file from being generated.
tqchen merged pull request #8029: URL: https://github.com/apache/tvm/pull/8029 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on pull request #8029: [Docs] Prevented docs/1 file from being generated.
tqchen commented on pull request #8029: URL: https://github.com/apache/tvm/pull/8029#issuecomment-867681841 Thanks @Lunderberg ! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on a change in pull request #8317: [Makefile] Updates to top-level makefile.
tqchen commented on a change in pull request #8317: URL: https://github.com/apache/tvm/pull/8317#discussion_r657998758 ## File path: Makefile ## @@ -15,68 +15,80 @@ # specific language governing permissions and limitations # under the License. + +.PHONY: all \ +runtime vta cpptest crttest \ +lint pylint cpplint scalalint \ + doc \ + web webclean \ + cython cython3 cyclean \ +clean + +# Remember the root directory, to be usable by submake invocation. ROOTDIR = $(CURDIR) -# Specify an alternate output directory relative to ROOTDIR. Default build -OUTPUTDIR = $(if $(OUTDIR), $(OUTDIR), build) -.PHONY: clean all test doc pylint cpplint scalalint lint\ -cython cython2 cython3 web runtime vta +# Specify an alternate output directory relative to ROOTDIR. Defaults +# to "build". Can also be a space-separated list of build +# directories, each with a different configuation. +TVM_BUILD_PATH ?= build +TVM_BUILD_PATH := $(abspath $(TVM_BUILD_PATH)) -ifndef DMLC_CORE_PATH - DMLC_CORE_PATH = $(ROOTDIR)/3rdparty/dmlc-core -endif +# Allow environment variables for 3rd-party libraries, default to +# packaged version. +DMLC_CORE_PATH ?= $(ROOTDIR)/3rdparty/dmlc-core +DLPACK_PATH ?= $(ROOTDIR)/3rdparty/dlpack +VTA_HW_PATH ?= $(ROOTDIR)/3rdparty/vta-hw -ifndef DLPACK_PATH - DLPACK_PATH = $(ROOTDIR)/3rdparty/dlpack -endif -ifndef VTA_HW_PATH - VTA_HW_PATH = $(ROOTDIR)/3rdparty/vta-hw -endif -INCLUDE_FLAGS = -Iinclude -I$(DLPACK_PATH)/include -I$(DMLC_CORE_PATH)/include -PKG_CFLAGS = -std=c++11 -Wall -O2 $(INCLUDE_FLAGS) -fPIC -PKG_LDFLAGS = +all: $(addsuffix /all,$(TVM_BUILD_PATH)) -all: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) +runtime: $(addsuffix /runtime,$(TVM_BUILD_PATH)) +vta: $(addsuffix /vta,$(TVM_BUILD_PATH)) +cpptest: $(addsuffix /cpptest,$(TVM_BUILD_PATH)) +crttest: $(addsuffix /crttest,$(TVM_BUILD_PATH)) -runtime: - @mkdir -p $(OUTPUTDIR) && cd $(OUTPUTDIR) && cmake .. && $(MAKE) runtime +# Set up a default config.cmake inside the build directory. 
+# filter-out used to avoid circular dependency. +%/config.cmake: | $(filter-out %/config.cmake,$(ROOTDIR)/cmake/config.cmake) Review comment: This changes the current behavior where a config.cmake in the root can also serve the purpose of configuration. Given that the cmake rule already looks for the config.cmake, perhaps we can skip this step -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] Lunderberg commented on pull request #8029: [Docs] Prevented docs/1 file from being generated.
Lunderberg commented on pull request #8029: URL: https://github.com/apache/tvm/pull/8029#issuecomment-867680223 And it passed! Should be ready to merge, as it has up-to-date fixes for all documentation warnings on the main branch through yesterday afternoon. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen opened a new pull request #8329: [CI] Pin mypy version
tqchen opened a new pull request #8329: URL: https://github.com/apache/tvm/pull/8329 Per https://github.com/apache/tvm/pull/8302 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[tvm] branch ci-docker-staging updated (4e69e98 -> ccfb2af)
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch ci-docker-staging in repository https://gitbox.apache.org/repos/asf/tvm.git. discard 4e69e98 [DOCKER] Update lint to reflect the latest state new ccfb2af [DOCKER] Update lint to reflect the latest state This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (4e69e98) \ N -- N -- N refs/heads/ci-docker-staging (ccfb2af) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: Jenkinsfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
[tvm] 01/01: [DOCKER] Update lint to reflect the latest state
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch ci-docker-staging in repository https://gitbox.apache.org/repos/asf/tvm.git commit ccfb2af5fb52672ad32086a8d2d2d2015c446b1a Author: tqchen AuthorDate: Thu Jun 24 06:44:27 2021 -0700 [DOCKER] Update lint to reflect the latest state Pins mypy version. --- Jenkinsfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Jenkinsfile b/Jenkinsfile index 3ea6d22..f26b148 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -44,7 +44,7 @@ // // NOTE: these lines are scanned by docker/dev_common.sh. Please update the regex as needed. --> -ci_lint = "tlcpack/ci-lint:v0.65" +ci_lint = "tlcpack/ci-lint:v0.66" ci_gpu = "tlcpack/ci-gpu:v0.75" ci_cpu = "tlcpack/ci-cpu:v0.74" ci_wasm = "tlcpack/ci-wasm:v0.71"
[GitHub] [tvm] Lunderberg commented on pull request #8315: [Docker] Fix ordering of tf and tflite installs in ci_qemu
Lunderberg commented on pull request #8315: URL: https://github.com/apache/tvm/pull/8315#issuecomment-867677450 Bumped the CI again. It looks like there is a tracking issue #8140 for this particular flaky test case, though there isn't a resolution yet. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on pull request #8328: [COMMUNITY] Reviewer: wyc-ruiker
tqchen commented on pull request #8328: URL: https://github.com/apache/tvm/pull/8328#issuecomment-867665088 my bad, updated! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] wyc-ruiker commented on pull request #8328: [COMMUNITY] Reviewer: wyc-ruiker
wyc-ruiker commented on pull request #8328: URL: https://github.com/apache/tvm/pull/8328#issuecomment-867663172 Thanks! But my TVM forum username is different from my GitHub username. My community forum summary should be [Community Forum Summary](https://discuss.tvm.apache.org/u/reku/summary). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen opened a new pull request #8328: [COMMUNITY] Reviewer: wyc-ruiker
tqchen opened a new pull request #8328: URL: https://github.com/apache/tvm/pull/8328 Dear community: Please join us to welcome @wyc-ruiker as a new reviewer. Yucheng has made various contributions to the TFLite importer, QNN and codegen components and helped to review code on the related frontends. - [Commits History](https://github.com/apache/tvm/commits?author=wyc-ruiker) - [Code Review](https://github.com/apache/tvm/pulls?utf8=%E2%9C%93&q=reviewed-by:wyc-ruiker) - [Community Forum Summary](https://discuss.tvm.apache.org/u/wyc-ruiker/summary) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[tvm] branch ci-docker-staging updated (9371913 -> 4e69e98)
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a change to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git.

 omit 9371913  update onnx in qemu docker
 omit fa47f11  Don't force output shape for conv transpose tests, add 1D and 3D cases
 omit 3671771  support convtranspose opset 11 autopadding
 omit 0faa04b  point jenkins at new docker
 omit 76d114d  add failing onnx tets
 omit dfeb604  [WIP] Update ONNX versions
  add 77536da  [Metal] Fix bad stream after interrupted tuning session (#8244)
  add 0f4c065  [Relay][Convert Layout] Enable layout transformation for image.resize op (#8205)
  add 5f94c1e  [CUDA][PASS] conv2d NWHC/HWNC legalize tensorcore (#8222)
  add bf3f000  [topi][CuDNN] Removed requirement for GPU from topi conv2d_cudnn.cuda and conv3d_cudnn.cuda (#8276)
  add edb7e77  Fix a word typo and add spaces. (#8278)
  add 1f0f8f1  [IRPrinter] Prevent multiple printing of optional info (#8279)
  add 7157c93  add metal to list of choices (#8282)
  add 801c26d  [Vulkan][Codegen] Fixed SPIR-V scoping bug with threadIdx (#8281)
  add 0bbaf0e  [Bug Fixed] Make query_rpc_tracker show the correct device server port and customized address (#8203)
  add ca99552  [CuDNN] Remove GPU dependency from tvm.contrib.cudnn.conv_output_shape (#8275)
  add 247b1c4  [Relay][Dataflow] Fix test_rewrite_function_with_fuzzy_body test check (#8287)
  add 5537788  [VM][PooledAllocator] try reallocation once when OOM (#8285)
  add c208a1f  support adb-shell style cpp_rpc (#8223)
  add 8e63486  [Relay] [Pass] Add mixed precision (e.g. FP16) model conversion pass (#8069)
  add 40d5193  [Auto Scheduler] Make the opt_level of task extraction adjustable (#8288)
  add 13146be  [TensorFlow][Frontend] Adding InversePermutation Op (#8277)
  add 8022513  refact: rm unused variable (#8290)
  add 3cb838d  [microTVM] Refactor uTVM to microTVM (#8283)
  add 841e195  Fix deprecated use of numpy.asscalar. (#8292)
  add dbbf259  Fix bulleted lists in TVM documentation. (#8268)
  add 2acfa2c  [RPC][CPP] Add support of cpp RPC-server for Apple (#8224)
  add d5cc07e  Check for presence of LLVM configuration. (#8293)
  add 1a5bf99  Fix Intel OpenCL SDK search path for Windows (#8301)
  add 41b4872  [BYOC][NNAPI]: Add testing package to ci_cpu image (#8088)
  add 5c2836c  Turn on Compute library testing in CI for AArch64 (#8291)
  add 1c6f2bc  Update ONNX versions (#8304)
  add 002441d  Fix rst formatting in documentation (#8303)
  add 9f350d3  [Docker] Update tensorflow/tflite/xgboost versions (#8306)
  add 35d71b1  [TVMSCRIPT] add more type support in script function parameter (#8235)
  add 7e7d7fb  Port StorageInfo and StaticMemoryPlan data structure (#8297)
  add 9d75ff4  [rust] convert error msg to string for panic macro (#8289)
  add d0791d3  Install curl in ubuntu_install_core.sh (#8310)
  add 3db44f4  Fix ordering of tf and tflite installs in ci_cpu (#8312)
  add 5fa1c6d  [DOCKER] fix sphinx install versions (#8316)
  add b9d2899  [Relay][Training] Additional gradients (#8307)
  new 4e69e98  [DOCKER] Update lint to reflect the latest state

This update added new revisions after undoing existing revisions. That is
to say, some revisions that were in the old version of the branch are not
in the new version. This situation occurs when a user --force pushes a
change and generates a repository containing something like this:

 * -- * -- B -- O -- O -- O   (9371913)
            \
             N -- N -- N   refs/heads/ci-docker-staging (4e69e98)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions from
the common base, B.

Any revisions marked "omit" are not gone; other references still refer to
them. Any revisions marked "discard" are gone forever.

The 1 revisions listed above as "new" are entirely new to this repository
and will be described in separate emails. The revisions listed as "add"
were already present in the repository and have only been added to this
reference.
Summary of changes:
 .gitignore                                      |  3 +
 CONTRIBUTORS.md                                 |  2 +-
 Jenkinsfile                                     |  6 +-
 NEWS.md                                         |  2 +-
 apps/cpp_rpc/rpc_env.cc                         | 12 +-
 apps/cpp_rpc/rpc_server.cc                      | 13 +-
 apps/microtvm/zephyr/aot_demo/src/main.c        | 32 +-
 apps/microtvm/zephyr/aot_demo/src/zephyr_uart.c |  8 +-
 apps/microtvm/zephyr/host_driven/src/main.c     | 38 +-
 cmake/modules/StandaloneCrt.cmake               | 10 +-
 cmake/modules/contrib/ArmComputeLib.cmake       |  4 +
 cmake/modules/contrib/EthosN.cmake              |  4 +
 cmake/utils/FindOpenCL.cmake
[tvm] 01/01: [DOCKER] Update lint to reflect the latest state
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch ci-docker-staging
in repository https://gitbox.apache.org/repos/asf/tvm.git

commit 4e69e98472f7aa5c4963e52c13717a068d96f0a0
Author: tqchen
AuthorDate: Thu Jun 24 06:44:27 2021 -0700

    [DOCKER] Update lint to reflect the latest state
---
 Jenkinsfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Jenkinsfile b/Jenkinsfile
index 3ea6d22..93e8b36 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -44,7 +44,7 @@
 //
 // NOTE: these lines are scanned by docker/dev_common.sh. Please update the regex as needed. -->
-ci_lint = "tlcpack/ci-lint:v0.65"
+ci_lint = "tlcpack/ci-lint:v0.66-t0"
 ci_gpu = "tlcpack/ci-gpu:v0.75"
 ci_cpu = "tlcpack/ci-cpu:v0.74"
 ci_wasm = "tlcpack/ci-wasm:v0.71"
[GitHub] [tvm] tqchen commented on issue #8255: [microTVM] RPCSession Device Type Bug
tqchen commented on issue #8255: URL: https://github.com/apache/tvm/issues/8255#issuecomment-867620092 In this particular case, it might be related to the fact that we are running the graph runtime locally while the computations are on the remote device. @areusch might have more background. In such cases, `remote.cpu(0)` is actually the correct device type. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [tvm] tqchen commented on issue #8308: [BUG] Incorrect buffer offset for vectorized computation
tqchen commented on issue #8308: URL: https://github.com/apache/tvm/issues/8308#issuecomment-867617327 Also cc @vinx13 @masahi, who can help manage the related PRs.
[GitHub] [tvm] tqchen commented on issue #8308: [BUG] Incorrect buffer offset for vectorized computation
tqchen commented on issue #8308: URL: https://github.com/apache/tvm/issues/8308#issuecomment-867616887 Thanks @wrongtest for reporting the issue. Can you help suggest a fix and send a PR?
[tvm] branch main updated: [Relay][Training] Additional gradients (#8307)
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git

The following commit(s) were added to refs/heads/main by this push:
     new b9d2899  [Relay][Training] Additional gradients (#8307)
b9d2899 is described below

commit b9d2899ae8adeb88bd95d633e9d1d8193f9c9560
Author: Altan Haan
AuthorDate: Thu Jun 24 05:46:56 2021 -0700

    [Relay][Training] Additional gradients (#8307)
---
 python/tvm/relay/op/_tensor_grad.py        | 62 +++---
 tests/python/relay/test_op_grad_level10.py | 24
 tests/python/relay/test_op_grad_level3.py  |  7
 tests/python/relay/test_op_grad_level4.py  | 33
 4 files changed, 121 insertions(+), 5 deletions(-)

diff --git a/python/tvm/relay/op/_tensor_grad.py b/python/tvm/relay/op/_tensor_grad.py
index d5b8910..09b1435 100644
--- a/python/tvm/relay/op/_tensor_grad.py
+++ b/python/tvm/relay/op/_tensor_grad.py
@@ -15,7 +15,7 @@
 # specific language governing permissions and limitations
 # under the License.
 # pylint: disable=invalid-name, unused-argument
-"""Backend compiler related feature registration"""
+"""Gradient definitions for Relay operators"""
 from tvm.topi.nn.utils import get_pad_tuple
 from tvm.topi.utils import get_const_tuple
 from tvm.error import OpError
@@ -527,10 +527,7 @@ def softmax_grad(orig, grad):
 @register_gradient("nn.log_softmax")
 def log_softmax_grad(orig, grad):
     """Gradient of log_softmax"""
-    x = orig.args[0]
-    sm = _nn.softmax(x, axis=orig.attrs.axis)
-    grad = grad / sm
-    return softmax_grad(sm, grad)
+    return [grad - _sum(grad, axis=orig.attrs.axis, keepdims=True) * exp(orig)]
 
 
 @register_gradient("nn.bias_add")
@@ -596,6 +593,12 @@ def cast_grad(orig, grad):
     return [cast_like(grad, x)]
 
 
+@register_gradient("cast_like")
+def cast_like_grad(orig, grad):
+    x, like = orig.args
+    return [cast_like(grad, x), zeros_like(like)]
+
+
 @register_gradient("nn.batch_flatten")
 def batch_flatten_grad(orig, grad):
     """Returns grad reshaped to data dims"""
@@ -873,3 +876,52 @@ def less_equal_grad(orig, grad):
     Returns the gradient of less_equal.
     """
     return [zeros_like(orig.args[0]), zeros_like(orig.args[1])]
+
+
+@register_gradient("not_equal")
+def not_equal_grad(orig, grad):
+    """
+    Returns the gradient of not_equal (just zeros).
+    """
+    return [zeros_like(orig.args[0]), zeros_like(orig.args[1])]
+
+
+@register_gradient("strided_slice")
+def strided_slice_grad(orig, grad):
+    """
+    Returns the gradient of strided_slice, which is equal to grad where the
+    input was sliced and zero elsewhere.
+""" +assert orig.attrs.axes is None, "grad for strided_slice with axes is not yet supported" +x = orig.args[0] +begin = get_const_tuple(orig.attrs.begin) +end = get_const_tuple(orig.attrs.end) +strides = get_const_tuple(orig.attrs.strides) +if orig.attrs.slice_mode == "size": +# convert sizes to ending indices and ignore strides +end = list(end) +for i, (start, size) in enumerate(zip(begin, end)): +if size == -1: +end[i] = int(x.checked_type.shape[i]) +else: +end[i] = start + size +strides = None +else: +assert orig.attrs.slice_mode == "end" +return [strided_set(zeros_like(x), grad, begin, end, strides)] + + +@register_gradient("one_hot") +def one_hot_grad(orig, grad): +""" +Returns the gradient of one_hot, which is the sum of grad at on and off +indices for on_value and off_value respectively. +""" +indices, on_value, off_value = orig.args + +g_zeros = zeros_like(grad) +on_mask = equal(orig, on_value) +grad_on = _sum(where(on_mask, grad, g_zeros)) +grad_off = _sum(where(on_mask, g_zeros, grad)) + +return [zeros_like(indices), cast_like(grad_on, on_value), cast_like(grad_off, off_value)] diff --git a/tests/python/relay/test_op_grad_level10.py b/tests/python/relay/test_op_grad_level10.py index 4a6ffb9..e2145f7 100644 --- a/tests/python/relay/test_op_grad_level10.py +++ b/tests/python/relay/test_op_grad_level10.py @@ -15,6 +15,7 @@ # specific language governing permissions and limitations # under the License. import pytest +import numpy as np from tvm import relay from tvm.relay.testing import check_grad @@ -72,5 +73,28 @@ def test_reverse_reshape_grad(): check_grad(relay.Function([x], relay.op.reverse_reshape(x, (-1, 0 +def test_one_hot_grad(): +indices_shape = (3, 4) +depth = 5 +axis = -1 + +for indices_dtype in ["int32", "int64"]: +for val_dtype in ["float32", "float64"]: +inputs = [ +np.random.randint(depth, size=indices_shape, dtype=indices_dtype), +np.array(np.random.randn() * 1e-5).astype(val_dtype), +np.array(np.random.randn() * 1e-5).astype(val_dtype), +
[GitHub] [tvm] tqchen merged pull request #8307: [Relay][Training] Additional gradients
tqchen merged pull request #8307: URL: https://github.com/apache/tvm/pull/8307
[GitHub] [tvm] leandron commented on pull request #8326: [CI] Install curl in the context of ubuntu_install_nodejs.sh
leandron commented on pull request #8326: URL: https://github.com/apache/tvm/pull/8326#issuecomment-867553178 > Can you leave a comment around it ? Done.
[GitHub] [tvm] echuraev opened a new pull request #8327: [RPC] Fix android rpc connection to tracker
echuraev opened a new pull request #8327: URL: https://github.com/apache/tvm/pull/8327 After commit 0bbaf0e, android_rpc was not able to connect to the RPC tracker. This change adds an `addr` field to `cinfo`.
[GitHub] [tvm] leandron commented on pull request #8324: [tvmc] Fix inconsistent usage of host_name -> hostname
leandron commented on pull request #8324: URL: https://github.com/apache/tvm/pull/8324#issuecomment-867534416 > Hi @leandron , > Is there any way to test this? Thanks for the push. Yes, there is a way to test this in isolation. I just updated here with a test case. Please have another look :)
[GitHub] [tvm] leandron opened a new pull request #8326: [CI] Install curl in the context of ubuntu_install_nodejs.sh
leandron opened a new pull request #8326: URL: https://github.com/apache/tvm/pull/8326 Make sure that curl is installed, as this script is used on ci_lint, which does not need all the packages installed by ubuntu_install_core.sh. This partially undoes what was done in #8310, which caused an unexpected failure when rebuilding `ci_lint`. cc @mbrookhart @u99127 for reviews
[GitHub] [tvm] giuseros commented on pull request #8324: [tvmc] Fix inconsistent usage of host_name -> hostname
giuseros commented on pull request #8324: URL: https://github.com/apache/tvm/pull/8324#issuecomment-867490323 Hi @leandron , Is there any way to test this?
[GitHub] [tvm] d-smirnov opened a new pull request #8325: FoldScaleAxis became non-recursive
d-smirnov opened a new pull request #8325: URL: https://github.com/apache/tvm/pull/8325 This PR migrates the FoldScaleAxis optimization pass from ExprVisitor/ExprMutator to the non-recursive MixedModeVisitor/MixedModeMutator. The transformation logic itself is still recursive; however, the underlying traversal machinery is now non-recursive.
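For readers unfamiliar with the recursive-vs-non-recursive distinction: a visitor implemented with native recursion can overflow the call stack on deeply nested expressions, while a traversal driven by an explicit stack cannot. The sketch below illustrates that idea in plain Python with a hypothetical `Expr` node type; it is not the actual MixedModeVisitor implementation, just a minimal model of the same technique:

```python
class Expr:
    """Hypothetical expression node; Relay's real nodes are far richer."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def post_order_visit(root, fvisit):
    # Iterative post-order traversal using an explicit stack of
    # (node, expanded) pairs, so deeply nested expressions cannot
    # overflow the interpreter's call stack. A visited set ensures
    # shared subexpressions (DAGs) are visited exactly once.
    stack = [(root, False)]
    visited = set()
    while stack:
        node, expanded = stack.pop()
        if expanded:
            fvisit(node)        # all children already visited
            continue
        if id(node) in visited:
            continue
        visited.add(id(node))
        stack.append((node, True))
        for child in node.children:
            stack.append((child, False))

# A chain 50,000 nodes deep: a naive recursive visitor would hit
# Python's default recursion limit; this traversal does not.
expr = Expr("x")
for _ in range(50000):
    expr = Expr("add", [expr])
names = []
post_order_visit(expr, lambda n: names.append(n.name))
assert names[0] == "x"       # deepest node visited first (post-order)
assert len(names) == 50001   # every node visited exactly once
```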
[GitHub] [tvm] leandron opened a new pull request #8324: [tvmc] Fix inconsistent usage of host_name -> hostname
leandron opened a new pull request #8324: URL: https://github.com/apache/tvm/pull/8324 There was some inconsistent usage of `host_name` vs. `hostname` in tvmc tune. This change prevents a Python error when running tuning via an RPC tracker on tvmc. cc @giuseros @mbaret @comaniac for reviews
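Bugs of this kind (one layer producing `host_name` while another reads `hostname`) typically surface around tracker-address handling, where a "hostname:port" string is split and the parts passed along by name. The sketch below is a hypothetical illustration of that failure mode in plain Python; it is not the actual tvmc code, and `parse_tracker_address` is an invented helper:

```python
def parse_tracker_address(address, default_port=9190):
    """Split an RPC tracker address "hostname:port" into its parts.

    Illustrative only; rpartition keeps any earlier colons in the
    host part intact, and a bare hostname falls back to a default port.
    """
    if ":" in address:
        hostname, _, port = address.rpartition(":")
        return hostname, int(port)
    return address, default_port

# The bug class: producer and consumer disagreeing on the key name.
tracker = dict(zip(("hostname", "port"), parse_tracker_address("0.0.0.0:9190")))
assert tracker["hostname"] == "0.0.0.0"  # a consumer reading "host_name" would KeyError
assert tracker["port"] == 9190
```

Picking one spelling and using it end to end (as the PR does) removes the mismatch; a test that exercises the full produce-then-consume path, like the one added in test_autotuner.py, keeps it from regressing.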
[GitHub] [tvm] leandron commented on pull request #8316: [DOCKER] fix sphinx install versions
leandron commented on pull request #8316: URL: https://github.com/apache/tvm/pull/8316#issuecomment-867414400 This is merged now. Thanks @mbrookhart!
[tvm] branch main updated: [DOCKER] fix sphinx install versions (#8316)
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git

The following commit(s) were added to refs/heads/main by this push:
     new 5fa1c6d  [DOCKER] fix sphinx install versions (#8316)
5fa1c6d is described below

commit 5fa1c6dae0903f4dc31d39d42fcf582190ac1a68
Author: Matthew Brookhart
AuthorDate: Thu Jun 24 01:40:11 2021 -0600

    [DOCKER] fix sphinx install versions (#8316)
---
 docker/install/ubuntu_install_sphinx.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docker/install/ubuntu_install_sphinx.sh b/docker/install/ubuntu_install_sphinx.sh
index 80b3323..8a7ce1d 100755
--- a/docker/install/ubuntu_install_sphinx.sh
+++ b/docker/install/ubuntu_install_sphinx.sh
@@ -21,4 +21,4 @@ set -u
 set -o pipefail
 
 # NOTE: install docutils < 0.17 to work around https://github.com/readthedocs/sphinx_rtd_theme/issues/1115
-pip3 install sphinx sphinx-gallery==0.4.0 autodocsumm sphinx_rtd_theme sphinx_autodoc_annotation matplotlib Image "commonmark>=0.7.3" "docutils>=0.11" "docutils<0.17"
+pip3 install sphinx sphinx-gallery==0.4.0 autodocsumm sphinx_rtd_theme sphinx_autodoc_annotation matplotlib Image "commonmark>=0.7.3" "docutils>=0.11,<0.17"
[GitHub] [tvm] leandron merged pull request #8316: [DOCKER] fix sphinx install versions
leandron merged pull request #8316: URL: https://github.com/apache/tvm/pull/8316
[tvm] branch main updated: Fix ordering of tf and tflite installs in ci_cpu (#8312)
This is an automated email from the ASF dual-hosted git repository.

leandron pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git

The following commit(s) were added to refs/heads/main by this push:
     new 3db44f4  Fix ordering of tf and tflite installs in ci_cpu (#8312)
3db44f4 is described below

commit 3db44f42cbafc107f1146a55b834c5c7a9458d3c
Author: Manupa Karunaratne
AuthorDate: Thu Jun 24 08:22:14 2021 +0100

    Fix ordering of tf and tflite installs in ci_cpu (#8312)
    
    The recently merged 8306 PR introduced a depedency
    for tflite installation that tf must be installed first.
    However, that PR did not correct the ordering in ci_cpu
    which does not have that ordering.
    
    Change-Id: Ib82c2b33e4e123d4562682e9e97b21bfe23cc0ef
---
 docker/Dockerfile.ci_cpu | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docker/Dockerfile.ci_cpu b/docker/Dockerfile.ci_cpu
index 7b511de..65afa69 100644
--- a/docker/Dockerfile.ci_cpu
+++ b/docker/Dockerfile.ci_cpu
@@ -79,14 +79,14 @@ RUN bash /install/ubuntu_install_sbt.sh
 COPY install/ubuntu_install_verilator.sh /install/ubuntu_install_verilator.sh
 RUN bash /install/ubuntu_install_verilator.sh
 
-# TFLite deps
-COPY install/ubuntu_install_tflite.sh /install/ubuntu_install_tflite.sh
-RUN bash /install/ubuntu_install_tflite.sh
-
 # TensorFlow deps
 COPY install/ubuntu_install_tensorflow.sh /install/ubuntu_install_tensorflow.sh
 RUN bash /install/ubuntu_install_tensorflow.sh
 
+# TFLite deps
+COPY install/ubuntu_install_tflite.sh /install/ubuntu_install_tflite.sh
+RUN bash /install/ubuntu_install_tflite.sh
+
 # Compute Library
 COPY install/ubuntu_download_arm_compute_lib_binaries.sh /install/ubuntu_download_arm_compute_lib_binaries.sh
 RUN bash /install/ubuntu_download_arm_compute_lib_binaries.sh
[GitHub] [tvm] leandron merged pull request #8312: Fix ordering of tf and tflite installs in ci_cpu
leandron merged pull request #8312: URL: https://github.com/apache/tvm/pull/8312