[GitHub] [tvm] liangfu commented on a change in pull request #7059: #7058 [Tutorial] Import errors in deploy_detection.py and deploy_classification.py

2020-12-09 Thread GitBox
liangfu commented on a change in pull request #7059: URL: https://github.com/apache/tvm/pull/7059#discussion_r539181597 ## File path: vta/tutorials/frontend/deploy_classification.py ## @@ -56,9 +56,14 @@ from tvm.contrib.debugger import debug_runtime from tvm.relay import tra

[GitHub] [tvm] liangfu opened a new pull request #7069: [AutoSchedule] Compatibility improvement with XGBoost v1.3.0

2020-12-09 Thread GitBox
liangfu opened a new pull request #7069: URL: https://github.com/apache/tvm/pull/7069 In XGBoost v1.3.0, the aggcv function has been moved from [training.py](https://github.com/dmlc/xgboost/blob/release_1.2.0/python-package/xgboost/training.py#L337) to [callback.py](https://github.com/dml
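For readers hitting the same incompatibility, a minimal sketch of a version-guarded import is shown below. It assumes the helper is exposed as `aggcv` in `xgboost.training` before v1.3.0 and under a similar (possibly private) name in `xgboost.callback` afterwards; the exact post-move symbol should be verified against the installed release.

```python
# Hedged sketch: guard against the aggcv relocation described above.
# The name "_aggcv" for XGBoost >= 1.3.0 is an assumption -- check the
# installed xgboost/callback.py for the actual symbol.
try:
    from xgboost.callback import _aggcv as aggcv  # XGBoost >= 1.3.0 (assumed)
except ImportError:
    from xgboost.training import aggcv  # XGBoost <= 1.2.x
```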

[GitHub] [tvm] yongwww commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
yongwww commented on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741706021 @tqchen Thanks for the comment. We chose to go with MLIR/HLO instead of XLA HLO, because HLO operations work on tensors with static shapes (dynamic ops like [NMS](https://gi

[GitHub] [tvm] yongwww edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
yongwww edited a comment on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741706021 @tqchen Thanks for the comment. We chose to go with MLIR/HLO instead of XLA HLO, because HLO operations work on tensors with static shapes (dynamic ops like [NMS](htt

[GitHub] [tvm] masahi commented on pull request #6900: PyTorch frontend: make type inference incremental

2020-12-09 Thread GitBox
masahi commented on pull request #6900: URL: https://github.com/apache/tvm/pull/6900#issuecomment-741716260 @t-vi Please have a look at the CI issue. It is due to the recent change I made. This is an automated message from t

[GitHub] [tvm] d-smirnov commented on pull request #6655: [BYOC] Added "include_non_call_ops" parameter to AnnotateTarget pass

2020-12-09 Thread GitBox
d-smirnov commented on pull request #6655: URL: https://github.com/apache/tvm/pull/6655#issuecomment-741720201 @comaniac Ping. How can we make some progress here? This is an automated message from the Apache Git Service. To

[GitHub] [tvm] d-smirnov commented on pull request #6970: [TFLite] added scalar axis value handling in reduce

2020-12-09 Thread GitBox
d-smirnov commented on pull request #6970: URL: https://github.com/apache/tvm/pull/6970#issuecomment-741720340 Ping. How can we make some progress here? This is an automated message from the Apache Git Service. To respond to

[GitHub] [tvm] d-smirnov commented on pull request #6984: [TFLite] pack operation extended with const args

2020-12-09 Thread GitBox
d-smirnov commented on pull request #6984: URL: https://github.com/apache/tvm/pull/6984#issuecomment-741720499 Ping. How can we make some progress here? This is an automated message from the Apache Git Service. To respond to

[GitHub] [tvm] d-smirnov commented on pull request #6998: [TFLite] Strided slice handling of shrink_axis_mask improved

2020-12-09 Thread GitBox
d-smirnov commented on pull request #6998: URL: https://github.com/apache/tvm/pull/6998#issuecomment-741720832 Ping. How can we make some progress here? @comaniac Please advise: should the commit be split? What should be done about the failed unit tests? -

[GitHub] [tvm] t-vi commented on pull request #6900: PyTorch frontend: make type inference incremental

2020-12-09 Thread GitBox
t-vi commented on pull request #6900: URL: https://github.com/apache/tvm/pull/6900#issuecomment-741734196 Oh, right, an undetected merge conflict. Fixed. Thank you @masahi. This is an automated message from the Apache Git Serv

[GitHub] [tvm] fprotopapa commented on a change in pull request #7059: #7058 [Tutorial] Import errors in deploy_detection.py and deploy_classification.py

2020-12-09 Thread GitBox
fprotopapa commented on a change in pull request #7059: URL: https://github.com/apache/tvm/pull/7059#discussion_r539306511 ## File path: vta/tutorials/frontend/deploy_classification.py ## @@ -56,9 +56,14 @@ from tvm.contrib.debugger import debug_runtime from tvm.relay import

[GitHub] [tvm] FrozenGene commented on pull request #6889: [TOPI] sparse_dense Op sparse_data input added

2020-12-09 Thread GitBox
FrozenGene commented on pull request #6889: URL: https://github.com/apache/tvm/pull/6889#issuecomment-741807882 > @tqchen , @jroesch , @FrozenGene, @junrushao1994 : Would you please help proceed with the PR. TIA! Just saw your mention and sorry for the late reply. Will do one round of r

[GitHub] [tvm] tqchen commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
tqchen commented on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077 First of all, I do not disagree about the general goal of having a relay dialect in MLIR. The main discussion pt is how can we get to that point. Right now we have the need to

[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
tqchen edited a comment on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077 First of all, I do not disagree about the general goal of having a relay dialect in MLIR. The main discussion pt is how can we get to that point. Right now we have the n

[GitHub] [tvm] tqchen merged pull request #7064: [FIX] Improve error messages and docs

2020-12-09 Thread GitBox
tqchen merged pull request #7064: URL: https://github.com/apache/tvm/pull/7064 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[tvm] branch main updated (c31e338 -> e848af1)

2020-12-09 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from c31e338 fix nvcc compile option to be compatible with older cuda (#7065) add e848af1 [FIX] Improve error messages

[GitHub] [tvm] tqchen edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
tqchen edited a comment on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741822077 Thanks @yongwww First of all, I do not disagree about the general goal of having a relay dialect in MLIR. The main discussion pt is how can we get to that point. Right

[GitHub] [tvm] FrozenGene commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
FrozenGene commented on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741841901 I agree with TQ’s point. The main goal of introducing this PR is to cover more models (including dynamic ones). However, introducing TableGen will bring overhead to us. We don’t have urgent req

[GitHub] [tvm] FrozenGene edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
FrozenGene edited a comment on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741841901 I agree with TQ (@tqchen)’s point. The main goal of introducing this PR is to cover more models (including dynamic ones). However, introducing TableGen will bring overhead to us. We don’t

[GitHub] [tvm] masahi commented on pull request #6900: PyTorch frontend: make type inference incremental

2020-12-09 Thread GitBox
masahi commented on pull request #6900: URL: https://github.com/apache/tvm/pull/6900#issuecomment-741852553 Thanks @t-vi This is an automated message from the Apache Git Service. To respond to the message, please log on to G

[GitHub] [tvm] masahi merged pull request #6900: PyTorch frontend: make type inference incremental

2020-12-09 Thread GitBox
masahi merged pull request #6900: URL: https://github.com/apache/tvm/pull/6900 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[tvm] branch main updated (e848af1 -> db0215e)

2020-12-09 Thread masahi
This is an automated email from the ASF dual-hosted git repository. masahi pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from e848af1 [FIX] Improve error messages and docs (#7064) add db0215e Incremental type inference (#6900) No new revisi

[GitHub] [tvm] ANSHUMAN87 commented on pull request #7048: [Frontend][TFLite] Densify Op added

2020-12-09 Thread GitBox
ANSHUMAN87 commented on pull request #7048: URL: https://github.com/apache/tvm/pull/7048#issuecomment-741855916 > Thanks @ANSHUMAN87. Overall looks pretty good, but I am a little confused at exactly what is happening in this pass. Is it converting matrices from sparse to dense at compile t

[GitHub] [tvm] zhiics commented on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
zhiics commented on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746 Thanks @yongwww for the great effort:) Both paths make sense to me. Adding Relay as a dialect would benefit more other dialects including other projects internally. But it does intro

[GitHub] [tvm] zhiics edited a comment on pull request #7060: [WIP] Add MLIR Relay Dialect, Translate MLIR HLO to MLIR Relay to Relay

2020-12-09 Thread GitBox
zhiics edited a comment on pull request #7060: URL: https://github.com/apache/tvm/pull/7060#issuecomment-741866746 Thanks @yongwww for the great effort:) Both paths make sense to me. Adding Relay as a dialect would benefit more other dialects including other projects internally. But it doe

[GitHub] [tvm] giuseros opened a new pull request #7070: Add autoscheduler support to tvmc

2020-12-09 Thread GitBox
giuseros opened a new pull request #7070: URL: https://github.com/apache/tvm/pull/7070 - Add an autoschedule module to tvmc - Extract common tuning options between autotuner and autoscheduler - Add testing This is an
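For context, the flow such a tvmc module would wrap is roughly the standard auto_scheduler tuning loop of that period. The sketch below uses a tiny stand-in Relay module; the shapes, trial count, and log file name are illustrative only.

```python
import numpy as np
import tvm
from tvm import auto_scheduler, relay

# A tiny Relay module standing in for a real network.
data = relay.var("data", shape=(1, 3, 32, 32), dtype="float32")
weight = relay.var("weight", shape=(8, 3, 3, 3), dtype="float32")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=8, padding=(1, 1))
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))
params = {"weight": tvm.nd.array(np.zeros((8, 3, 3, 3), dtype="float32"))}
target = tvm.target.Target("llvm")
log_file = "autoscheduler_tuning.json"

# Extract tuning tasks, tune them, and build using the best records.
tasks, task_weights = auto_scheduler.extract_tasks(mod["main"], params, target)
tuner = auto_scheduler.TaskScheduler(tasks, task_weights)
tune_option = auto_scheduler.TuningOptions(
    num_measure_trials=64,
    measure_callbacks=[auto_scheduler.RecordToFile(log_file)],
)
tuner.tune(tune_option)

with auto_scheduler.ApplyHistoryBest(log_file):
    with tvm.transform.PassContext(
        opt_level=3, config={"relay.backend.use_auto_scheduler": True}
    ):
        lib = relay.build(mod, target=target, params=params)
```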

[GitHub] [tvm] t-vi commented on pull request #6900: PyTorch frontend: make type inference incremental

2020-12-09 Thread GitBox
t-vi commented on pull request #6900: URL: https://github.com/apache/tvm/pull/6900#issuecomment-741894947 Thank you @masahi, for the guidance and discussion, and review! This is an automated message from the Apache Git Servic

[GitHub] [tvm] giuseros commented on pull request #7070: Add autoscheduler support to tvmc

2020-12-09 Thread GitBox
giuseros commented on pull request #7070: URL: https://github.com/apache/tvm/pull/7070#issuecomment-741895859 cc @leandron @merrymercy This is an automated message from the Apache Git Service. To respond to the message, plea

[GitHub] [tvm] tqchen merged pull request #6948: [µTVM] Allow for platform-specific malloc in runtime

2020-12-09 Thread GitBox
tqchen merged pull request #6948: URL: https://github.com/apache/tvm/pull/6948 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[tvm] branch main updated (db0215e -> f606637)

2020-12-09 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from db0215e Incremental type inference (#6900) add f606637 [µTVM] Allow for platform-specific malloc in runtime (#6948)

[GitHub] [tvm] merrymercy commented on pull request #7068: [Auto Scheduler][Auto TVM] Fix infer tile size for NHWC winograd for CUDA

2020-12-09 Thread GitBox
merrymercy commented on pull request #7068: URL: https://github.com/apache/tvm/pull/7068#issuecomment-741928470 Luckily, it does not affect the current correctness, because most conv layer has the same H and W, right? This i

[GitHub] [tvm] merrymercy edited a comment on pull request #7068: [Auto Scheduler][Auto TVM] Fix infer tile size for NHWC winograd for CUDA

2020-12-09 Thread GitBox
merrymercy edited a comment on pull request #7068: URL: https://github.com/apache/tvm/pull/7068#issuecomment-741928470 Luckily, this bug does not affect the correctness for current models, because most conv layer has the same H and W, right? ---

[GitHub] [tvm] merrymercy edited a comment on pull request #7068: [Auto Scheduler][Auto TVM] Fix infer tile size for NHWC winograd for CUDA

2020-12-09 Thread GitBox
merrymercy edited a comment on pull request #7068: URL: https://github.com/apache/tvm/pull/7068#issuecomment-741928470 Luckily, this bug does not affect the correctness for current models, because most conv layers have the same H and W, right? -

[GitHub] [tvm] jwfromm commented on a change in pull request #7063: [Relay][Strategy] Allow cuda cross compilation without physical device.

2020-12-09 Thread GitBox
jwfromm commented on a change in pull request #7063: URL: https://github.com/apache/tvm/pull/7063#discussion_r539509778 ## File path: python/tvm/contrib/nvcc.py ## @@ -269,15 +270,24 @@ def have_int8(compute_version): return False -def have_tensorcore(compute_version):

[tvm] branch main updated (f606637 -> a72bdd3)

2020-12-09 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository. lmzheng pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from f606637 [µTVM] Allow for platform-specific malloc in runtime (#6948) add a72bdd3 [Auto Scheduler][Auto TVM] Fix in

[GitHub] [tvm] merrymercy merged pull request #7068: [Auto Scheduler][Auto TVM] Fix infer tile size for NHWC winograd for CUDA

2020-12-09 Thread GitBox
merrymercy merged pull request #7068: URL: https://github.com/apache/tvm/pull/7068 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to t

[GitHub] [tvm] tqchen merged pull request #7069: [AutoSchedule] Compatibility improvement with XGBoost v1.3.0

2020-12-09 Thread GitBox
tqchen merged pull request #7069: URL: https://github.com/apache/tvm/pull/7069 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[tvm] branch main updated (a72bdd3 -> 94b2e44)

2020-12-09 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from a72bdd3 [Auto Scheduler][Auto TVM] Fix infer tile size for NHWC winograd (#7068) add 94b2e44 [AutoSchedule] Compat

[GitHub] [tvm] merrymercy commented on pull request #7069: [AutoSchedule] Compatibility improvement with XGBoost v1.3.0

2020-12-09 Thread GitBox
merrymercy commented on pull request #7069: URL: https://github.com/apache/tvm/pull/7069#issuecomment-741935197 Could you also update this place https://github.com/apache/tvm/blob/a72bdd3fe74017ff7284691d96fc043ef67cb511/python/tvm/autotvm/tuner/xgboost_cost_model.py#L472 ? -

[GitHub] [tvm] merrymercy edited a comment on pull request #7069: [AutoSchedule] Compatibility improvement with XGBoost v1.3.0

2020-12-09 Thread GitBox
merrymercy edited a comment on pull request #7069: URL: https://github.com/apache/tvm/pull/7069#issuecomment-741935197 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHu

[tvm] branch merrymercy-patch-1 created (now 85e71db)

2020-12-09 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository. lmzheng pushed a change to branch merrymercy-patch-1 in repository https://gitbox.apache.org/repos/asf/tvm.git. at 85e71db [AutoScheduler] Delete deprecated file auto_schedule.py This branch includes the following new co

[tvm] 01/01: [AutoScheduler] Delete deprecated file auto_schedule.py

2020-12-09 Thread lmzheng
This is an automated email from the ASF dual-hosted git repository. lmzheng pushed a commit to branch merrymercy-patch-1 in repository https://gitbox.apache.org/repos/asf/tvm.git commit 85e71db9fb5533ea5e04f029ba180bfc8387f80a Author: Lianmin Zheng AuthorDate: Wed Dec 9 09:44:14 2020 -0800

[GitHub] [tvm] merrymercy opened a new pull request #7071: [AutoScheduler] Delete deprecated file auto_schedule.py

2020-12-09 Thread GitBox
merrymercy opened a new pull request #7071: URL: https://github.com/apache/tvm/pull/7071 `auto_schedule.py` was deprecated in #7028, but I forgot to delete this file. This is an automated message from the Apache Git Service

[GitHub] [tvm] comaniac commented on a change in pull request #7063: [Relay][Strategy] Allow cuda cross compilation without physical device.

2020-12-09 Thread GitBox
comaniac commented on a change in pull request #7063: URL: https://github.com/apache/tvm/pull/7063#discussion_r539516443 ## File path: python/tvm/contrib/nvcc.py ## @@ -269,15 +270,24 @@ def have_int8(compute_version): return False -def have_tensorcore(compute_version)

[GitHub] [tvm] merrymercy commented on a change in pull request #7053: [auto_scheduler] buffer support, correctness check

2020-12-09 Thread GitBox
merrymercy commented on a change in pull request #7053: URL: https://github.com/apache/tvm/pull/7053#discussion_r539517512 ## File path: python/tvm/auto_scheduler/auto_schedule.py ## @@ -117,14 +132,21 @@ def __init__( num_measure_trials=0, early_stopping=None

[GitHub] [tvm] comaniac commented on a change in pull request #7070: Add autoscheduler support to tvmc

2020-12-09 Thread GitBox
comaniac commented on a change in pull request #7070: URL: https://github.com/apache/tvm/pull/7070#discussion_r539522022 ## File path: python/tvm/driver/tvmc/autoscheduler.py ## @@ -0,0 +1,212 @@ +# Licensed to the Apache Software Foundation (ASF) under one +# or more contribut

[GitHub] [tvm] comaniac commented on pull request #6655: [BYOC] Added "include_non_call_ops" parameter to AnnotateTarget pass

2020-12-09 Thread GitBox
comaniac commented on pull request #6655: URL: https://github.com/apache/tvm/pull/6655#issuecomment-741988621 > @comaniac Ping. How can we make some progress here? I still don't think we should make the AnnotateTarget pass even more complicated, so I proposed to have a separate ACL specif

[GitHub] [tvm] comaniac commented on pull request #6998: [TFLite] Strided slice handling of shrink_axis_mask improved

2020-12-09 Thread GitBox
comaniac commented on pull request #6998: URL: https://github.com/apache/tvm/pull/6998#issuecomment-741989718 > Ping. How can we make some progress here? > @comaniac Please advise: should the commit be split? What should be done about the failed unit tests? I'm not familiar with the TFLite frontend.

[GitHub] [tvm] tqchen commented on pull request #6984: [TFLite] pack operation extended with const args

2020-12-09 Thread GitBox
tqchen commented on pull request #6984: URL: https://github.com/apache/tvm/pull/6984#issuecomment-742000393 cc @mbaret @FrozenGene This is an automated message from the Apache Git Service. To respond to the message, please l

[GitHub] [tvm] tqchen edited a comment on pull request #6984: [TFLite] pack operation extended with const args

2020-12-09 Thread GitBox
tqchen edited a comment on pull request #6984: URL: https://github.com/apache/tvm/pull/6984#issuecomment-742000393 cc @mbaret @FrozenGene @siju-samuel This is an automated message from the Apache Git Service. To respond to t

[GitHub] [tvm] tkonolige opened a new pull request #7072: [FIX] Remove debugging print statement

2020-12-09 Thread GitBox
tkonolige opened a new pull request #7072: URL: https://github.com/apache/tvm/pull/7072 Somehow an unnecessary print statement was left in the codebase. @tqchen This is an automated message from the Apache Git Service.

[GitHub] [tvm] tqchen merged pull request #6970: [TFLite] added scalar axis value handling in reduce

2020-12-09 Thread GitBox
tqchen merged pull request #6970: URL: https://github.com/apache/tvm/pull/6970 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the s

[GitHub] [tvm] tqchen commented on pull request #6970: [TFLite] added scalar axis value handling in reduce

2020-12-09 Thread GitBox
tqchen commented on pull request #6970: URL: https://github.com/apache/tvm/pull/6970#issuecomment-742001092 Thanks @d-smirnov, this is merged. Please tag more related reviewers (by @-mentioning them) next time you raise a PR. Thi

[tvm] branch main updated (94b2e44 -> db8edc1)

2020-12-09 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git. from 94b2e44 [AutoSchedule] Compatibility improvement with XGBoost v1.3.0 (#7069) add db8edc1 [TFLite] added scalar axi

[GitHub] [tvm] mbrookhart commented on pull request #6978: [Relay][Topi]Add Sort Op to Relay

2020-12-09 Thread GitBox
mbrookhart commented on pull request #6978: URL: https://github.com/apache/tvm/pull/6978#issuecomment-742003328 @kevinthesun Can you re-review without the topk legalization? This is an automated message from the Apache Git Se

[GitHub] [tvm] tqchen commented on pull request #6777: [BYOC] Configurable optimize pass for PartitionGraph

2020-12-09 Thread GitBox
tqchen commented on pull request #6777: URL: https://github.com/apache/tvm/pull/6777#issuecomment-742003795 @comaniac consider closing this PR as it seems to be stale? This is an automated message from the Apache Git Service. T

[GitHub] [tvm] tqchen closed issue #6332: [VOTE] Apache TVM Graduation

2020-12-09 Thread GitBox
tqchen closed issue #6332: URL: https://github.com/apache/tvm/issues/6332 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specif

[GitHub] [tvm] tqchen opened a new issue #6332: [VOTE] Apache TVM Graduation

2020-12-09 Thread GitBox
tqchen opened a new issue #6332: URL: https://github.com/apache/tvm/issues/6332 Dear Community: Thanks to everyone who participated in the discussion about graduation[1]. This is a formal voting thread for Apache TVM’s graduation. If this vote passes, the next step would be t

[GitHub] [tvm] tqchen closed issue #6299: [DISCUSS][RFC] Apache TVM Graduation

2020-12-09 Thread GitBox
tqchen closed issue #6299: URL: https://github.com/apache/tvm/issues/6299 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specif

[GitHub] [tvm] tqchen closed issue #6350: [RESULT][VOTE] Apache Graduation

2020-12-09 Thread GitBox
tqchen closed issue #6350: URL: https://github.com/apache/tvm/issues/6350 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specif

[GitHub] [tvm] comaniac commented on pull request #6777: [BYOC] Configurable optimize pass for PartitionGraph

2020-12-09 Thread GitBox
comaniac commented on pull request #6777: URL: https://github.com/apache/tvm/pull/6777#issuecomment-742006769 > @comaniac consider closing this PR as it seems to be stale? Ah, sorry, we haven't had the bandwidth to work on this recently. I'll close the PR for now. --

[GitHub] [tvm] comaniac closed pull request #6777: [BYOC] Configurable optimize pass for PartitionGraph

2020-12-09 Thread GitBox
comaniac closed pull request #6777: URL: https://github.com/apache/tvm/pull/6777 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the

[GitHub] [tvm] merrymercy closed issue #6133: Ansor Stabilization Todo Items Tracking

2020-12-09 Thread GitBox
merrymercy closed issue #6133: URL: https://github.com/apache/tvm/issues/6133 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the sp

[GitHub] [tvm] merrymercy commented on issue #6133: Ansor Stabilization Todo Items Tracking

2020-12-09 Thread GitBox
merrymercy commented on issue #6133: URL: https://github.com/apache/tvm/issues/6133#issuecomment-742009953 The functionality of Ansor is complete. Tutorials on tuning operators/networks on CPU/GPU are available on https://tvm.apache.org/docs/tutorials/#autoscheduler-template-free-auto-sch

[GitHub] [tvm] merrymercy edited a comment on issue #6133: Ansor Stabilization Todo Items Tracking

2020-12-09 Thread GitBox
merrymercy edited a comment on issue #6133: URL: https://github.com/apache/tvm/issues/6133#issuecomment-742009953 The functionality of Ansor is complete. Tutorials on tuning operators/networks on CPU/GPU are available on https://tvm.apache.org/docs/tutorials/#autoscheduler-template-free-a

[GitHub] [tvm] csullivan opened a new pull request #7073: Rollback changes to SSA begin/end scope for Store in C codegen

2020-12-09 Thread GitBox
csullivan opened a new pull request #7073: URL: https://github.com/apache/tvm/pull/7073 Rollback changes to SSA begin/end scope for Store in C codegen. Instead, scope binary operator codegen in CUDA to fix the issue originally addressed by 5f4b9a9. Thanks for contributing to TVM!

[GitHub] [tvm] mbrookhart commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
mbrookhart commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539624790 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffe

[GitHub] [tvm] jwfromm commented on a change in pull request #7063: [Relay][Strategy] Allow cuda cross compilation without physical device.

2020-12-09 Thread GitBox
jwfromm commented on a change in pull request #7063: URL: https://github.com/apache/tvm/pull/7063#discussion_r539661492 ## File path: python/tvm/contrib/nvcc.py ## @@ -269,15 +270,24 @@ def have_int8(compute_version): return False -def have_tensorcore(compute_version):

[GitHub] [tvm] comaniac commented on a change in pull request #7063: [Relay][Strategy] Allow cuda cross compilation without physical device.

2020-12-09 Thread GitBox
comaniac commented on a change in pull request #7063: URL: https://github.com/apache/tvm/pull/7063#discussion_r539685119 ## File path: python/tvm/contrib/nvcc.py ## @@ -269,15 +269,34 @@ def have_int8(compute_version): return False -def have_tensorcore(compute_version)
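The thread here is about `have_tensorcore` needing a compute capability even when no physical GPU is attached. Purely as an illustration of the idea (not necessarily the approach taken in this PR), a compute version can be derived from the `-arch=sm_xy` flag in a CUDA target string instead of querying a device:

```python
import re

def compute_version_from_target(target_str):
    """Extract a compute version such as "7.5" from a CUDA target string
    containing e.g. "-arch=sm_75"; return None when no arch flag is present."""
    match = re.search(r"-arch=sm_(\d)(\d)", target_str)
    if match is None:
        return None
    return "{}.{}".format(match.group(1), match.group(2))

def tensorcore_capable(compute_version):
    """Tensor Cores require compute capability 7.0 or newer."""
    major = int(compute_version.split(".")[0])
    return major >= 7

version = compute_version_from_target("cuda -keys=cuda,gpu -arch=sm_75")
print(version, tensorcore_capable(version))  # 7.5 True
```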

[GitHub] [tvm] Laurawly commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
Laurawly commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539693783 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffer_

[GitHub] [tvm] Laurawly commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
Laurawly commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539699599 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffer_

[GitHub] [tvm] mbrookhart commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
mbrookhart commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539700585 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffe

[GitHub] [tvm] mbrookhart commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
mbrookhart commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539700714 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffe

[GitHub] [tvm] mbrookhart commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
mbrookhart commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539701202 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffe

[GitHub] [tvm] Laurawly commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
Laurawly commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539702878 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffer_

[GitHub] [tvm] Laurawly commented on a change in pull request #6839: [ONNX] NMS in ONNX

2020-12-09 Thread GitBox
Laurawly commented on a change in pull request #6839: URL: https://github.com/apache/tvm/pull/6839#discussion_r539704122 ## File path: python/tvm/topi/cuda/nms.py ## @@ -97,47 +97,44 @@ def get_valid_counts_ir( valid_count = ib.buffer_ptr(valid_count) out = ib.buffer_

[GitHub] [tvm] antinucleon commented on pull request #7053: [wip][auto_scheduler] buffer support, correctness check

2020-12-09 Thread GitBox
antinucleon commented on pull request #7053: URL: https://github.com/apache/tvm/pull/7053#issuecomment-742134344 I found out this solution only works for a single task, but doesn't work with the TaskScheduler. Will update it later. ---

[GitHub] [tvm] mbrookhart opened a new pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
mbrookhart opened a new pull request #7074: URL: https://github.com/apache/tvm/pull/7074 @masahi @anijain2305 Fix for #7067 This is an automated message from the Apache Git Service. To respond to the message, ple

[GitHub] [tvm] masahi commented on pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
masahi commented on pull request #7074: URL: https://github.com/apache/tvm/pull/7074#issuecomment-742146711 @anijain2305 So after https://github.com/apache/tvm/pull/6704, it seems the type inferencer can pass `IncompleteType` to QNN type rel functions, which by itself is not wrong. @mbrookhart

[GitHub] [tvm] masahi edited a comment on pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
masahi edited a comment on pull request #7074: URL: https://github.com/apache/tvm/pull/7074#issuecomment-742146711 @anijain2305 So after https://github.com/apache/tvm/pull/6704, it seems the type inferencer can pass `IncompleteType` to QNN type rel functions, which by itself is not wrong. Prev

[GitHub] [tvm] jwfromm commented on a change in pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
jwfromm commented on a change in pull request #7074: URL: https://github.com/apache/tvm/pull/7074#discussion_r539753995 ## File path: src/relay/qnn/op/op_common.h ## @@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array& types, int num_inputs, con ICHECK_EQ(ty

[GitHub] [tvm] jwfromm commented on pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
jwfromm commented on pull request #7074: URL: https://github.com/apache/tvm/pull/7074#issuecomment-742155528 I'm a little confused about what exactly is being type checked. The comments say it's scale and zero points, but the number of checked types in the loop doesn't match up. A little better docu

[GitHub] [tvm] corehalt commented on issue #7067: [QNN] [BYOC] MergeComposite pass breaks on QNN graph

2020-12-09 Thread GitBox
corehalt commented on issue #7067: URL: https://github.com/apache/tvm/issues/7067#issuecomment-742158829 @mbrookhart thank you for fixing the issue so fast, appreciate it. This is an automated message from the Apache Git Serv

[GitHub] [tvm] comaniac opened a new pull request #7075: [Relay] Support deformable Conv2D NHWC

2020-12-09 Thread GitBox
comaniac opened a new pull request #7075: URL: https://github.com/apache/tvm/pull/7075 PR #6999 added a TOPI compute for deformable Conv2D NHWC. This PR adds the required support to lower NHWC deformable Conv2D from Relay so that it can be tuned by auto_scheduler. Notes: - I did
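For illustration, a minimal sketch of building an NHWC deformable convolution in Relay, assuming the usual `relay.nn.deformable_conv2d` signature; the shapes, and the channel-last offset layout in particular, are assumptions chosen only to make the example self-contained.

```python
from tvm import relay

# NHWC data, HWIO kernel; the offset tensor is assumed to carry
# 2 * kh * kw * deformable_groups channels in its last dimension.
data = relay.var("data", shape=(1, 28, 28, 32), dtype="float32")
offset = relay.var("offset", shape=(1, 26, 26, 2 * 3 * 3), dtype="float32")
weight = relay.var("weight", shape=(3, 3, 32, 64), dtype="float32")

out = relay.nn.deformable_conv2d(
    data,
    offset,
    weight,
    strides=(1, 1),
    padding=(0, 0),
    dilation=(1, 1),
    deformable_groups=1,
    groups=1,
    channels=64,
    kernel_size=(3, 3),
    data_layout="NHWC",
    kernel_layout="HWIO",
)
func = relay.Function([data, offset, weight], out)
```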

[GitHub] [tvm] vinx13 commented on a change in pull request #7075: [Relay] Support deformable Conv2D NHWC

2020-12-09 Thread GitBox
vinx13 commented on a change in pull request #7075: URL: https://github.com/apache/tvm/pull/7075#discussion_r539762513 ## File path: src/relay/op/nn/convolution.h ## @@ -1128,7 +1169,8 @@ bool DeformableConv2DRel(const Array& types, int num_inputs, const Attrs& } else {

[GitHub] [tvm-vta] dsteger opened a new pull request #21: pynq: Update pynq driver control logic

2020-12-09 Thread GitBox
dsteger opened a new pull request #21: URL: https://github.com/apache/tvm-vta/pull/21 An update is needed in the pynq driver to control VTA. Once VTA is started, it is necessary to wait for the fetch ap_done signal prior to reading the compute done flag. Also, the autorestart IPs need to be
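Purely to illustrate the control sequence described above (wait for the fetch core's ap_done before trusting the compute core's done flag), here is a hedged Python sketch using PYNQ's MMIO helper. The base addresses, register offset, and bit position are hypothetical placeholders, not the real VTA register map.

```python
import time
from pynq import MMIO  # PYNQ's memory-mapped I/O helper

# Hypothetical addresses/offsets -- placeholders only, not the VTA register map.
FETCH_BASE = 0x43C00000
COMPUTE_BASE = 0x43C10000
CTRL_OFFSET = 0x00        # AXI-Lite control register (ap_start/ap_done bits)
AP_DONE_BIT = 1 << 1

fetch = MMIO(FETCH_BASE, 0x1000)
compute = MMIO(COMPUTE_BASE, 0x1000)

# Poll the fetch core until ap_done is asserted...
while not (fetch.read(CTRL_OFFSET) & AP_DONE_BIT):
    time.sleep(1e-4)

# ...and only then read the compute core's done flag.
compute_done = bool(compute.read(CTRL_OFFSET) & AP_DONE_BIT)
```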

[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
masahi commented on a change in pull request #7074: URL: https://github.com/apache/tvm/pull/7074#discussion_r539764667 ## File path: src/relay/qnn/op/op_common.h ## @@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array& types, int num_inputs, con ICHECK_EQ(typ

[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
masahi commented on a change in pull request #7074: URL: https://github.com/apache/tvm/pull/7074#discussion_r539764667 ## File path: src/relay/qnn/op/op_common.h ## @@ -171,6 +171,11 @@ static inline bool QnnBroadcastRel(const Array& types, int num_inputs, con ICHECK_EQ(typ

[GitHub] [tvm] tqchen commented on pull request #6890: [Relay] Fix a bug in tensor_array_scatter

2020-12-09 Thread GitBox
tqchen commented on pull request #6890: URL: https://github.com/apache/tvm/pull/6890#issuecomment-742164379 ping @kevinthesun please followup This is an automated message from the Apache Git Service. To respond to the message

[GitHub] [tvm] tqchen edited a comment on pull request #6890: [Relay] Fix a bug in tensor_array_scatter

2020-12-09 Thread GitBox
tqchen edited a comment on pull request #6890: URL: https://github.com/apache/tvm/pull/6890#issuecomment-742164379 ping @lixiaoquan @kevinthesun please followup This is an automated message from the Apache Git Service. To res

[GitHub] [tvm] masahi commented on a change in pull request #7074: Fix QNN type inference

2020-12-09 Thread GitBox
masahi commented on a change in pull request #7074: URL: https://github.com/apache/tvm/pull/7074#discussion_r539766653 ## File path: tests/python/frontend/pytorch/qnn_test.py ## @@ -32,17 +32,58 @@ from tvm.relay.frontend.pytorch_utils import is_version_greater_than from tvm.
