[GitHub] [incubator-tvm] comaniac commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR
comaniac commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR URL: https://github.com/apache/incubator-tvm/issues/4468#issuecomment-562826046 @DKXXXL , thanks for the clarification and it seems fair enough to me :) Then it seems like #4449, #3895 and this RFC should be unified and designed together. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-tvm] comaniac commented on a change in pull request #4465: [AutoTVM] Tune softmax CUDA schedule
comaniac commented on a change in pull request #4465: [AutoTVM] Tune softmax CUDA schedule URL: https://github.com/apache/incubator-tvm/pull/4465#discussion_r355106974 ## File path: topi/python/topi/cuda/softmax.py ## @@ -52,13 +60,22 @@ def schedule_softmax(outs): raise ValueError('Tag is expected to be softmax_output or log_softmax_output. \ Got {0}'.format(op_tag)) +# create tuning space +max_num_threads = tvm.target.current_target(allow_none=False).max_num_threads +possible_num_thread = get_powers_of_two_in_range(32, max_num_threads) +cfg.define_knob("num_thread", possible_num_thread) Review comment: I personally think it would be better to use `define_split` directly so that this part could be more concise. `define_split` also has an option to use all powers of two in a given range as candidates. In addition, do you think there would be any improvement if we created two separate knobs, one used at `s[expsum].split(k, factor=num_thread)` and one at `s[softmax].split(softmax.op.axis[1], nparts=num_thread)`? We may need different thread numbers if `k` and `softmax.op.axis[1]` are different, but that would also enlarge the tuning space, so I'm not 100% sure this is a good idea.
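[Editor's note] For context, the `get_powers_of_two_in_range` helper referenced in the diff can be sketched in plain Python. This is a hypothetical stand-in for illustration only; the actual topi utility may differ:

```python
def get_powers_of_two_in_range(low, high):
    """Return every power of two p with low <= p <= high, in ascending order."""
    p, candidates = 1, []
    while p <= high:
        if p >= low:
            candidates.append(p)
        p *= 2
    return candidates

# e.g. for a target with max_num_threads = 1024:
print(get_powers_of_two_in_range(32, 1024))  # [32, 64, 128, 256, 512, 1024]
```

These candidates then become the knob values passed to `cfg.define_knob`, which is exactly the enumeration `define_split` can generate internally, hence the reviewer's suggestion.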
[GitHub] [incubator-tvm] DKXXXL commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR
DKXXXL commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR URL: https://github.com/apache/incubator-tvm/issues/4468#issuecomment-562806258 Hi @comaniac , Thanks for commenting. :) Yes, this is a real problem happening in an industrial context. The current solution is either over-conservative or unsound. About the name "Data-flow Analysis", I think it is more a terminology question. For example, CFA (control-flow analysis) is also a kind of program analysis, but I don't think it is expressible in this framework or required on TVM IR (since there are not even first-class functions in TVM IR). Also, I am not sure this framework is expressive enough for **all** program analyses; program analysis is a really broad field. In my opinion, this framework can express most of the data-flow analyses phrased as fixpoint computations in the first chapter of *Principles of Program Analysis*.
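[Editor's note] The fixpoint formulation mentioned above can be illustrated with a minimal round-robin solver. This is an editor's sketch in plain Python, not part of the RFC; the names and the tiny reaching-definitions instance are illustrative:

```python
def fixpoint(nodes, preds, transfer, init, join):
    """Iterate per-node transfer functions over a CFG until the state stabilizes."""
    state = {n: init for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            # Join the states of all predecessors, then apply the transfer function.
            inp = init
            for p in preds.get(n, ()):
                inp = join(inp, state[p])
            out = transfer(n, inp)
            if out != state[n]:
                state[n], changed = out, True
    return state

# Tiny reaching-definitions instance on a 3-node straight-line CFG:
# node 1 defines "a", node 2 defines "b", node 3 defines nothing.
gen = {1: frozenset({"a"}), 2: frozenset({"b"}), 3: frozenset()}
result = fixpoint(
    nodes=[1, 2, 3],
    preds={2: [1], 3: [2]},
    transfer=lambda n, inp: inp | gen[n],
    init=frozenset(),
    join=lambda x, y: x | y,
)
print(sorted(result[3]))  # ['a', 'b']
```

Many classic data-flow analyses (reaching definitions, liveness, available expressions) differ only in the lattice, the join, and the direction of iteration, which is what makes a generic framework like the one proposed here attractive.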
[incubator-tvm-site] branch asf-site updated: Build at Fri Dec 6 17:22:03 PST 2019
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 5d9b345  Build at Fri Dec 6 17:22:03 PST 2019

5d9b345 is described below

commit 5d9b345b5eb71bd52fe485a26f3c5c82eb36c5f1
Author: tqchen
AuthorDate: Fri Dec 6 17:22:03 2019 -0800

    Build at Fri Dec 6 17:22:03 PST 2019
---
 atom.xml                | 2 +-
 community.html          | 1 +
 images/community/fb.png | Bin 0 -> 25039 bytes
 rss.xml                 | 4 ++--
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/atom.xml b/atom.xml
index 27aefdb..89f4c25 100644
--- a/atom.xml
+++ b/atom.xml
@@ -4,7 +4,7 @@ TVM
 https://tvm.apache.org; rel="self"/> https://tvm.apache.org"/>
- 2019-11-26T12:17:11-08:00
+ 2019-12-06T17:22:02-08:00
 https://tvm.apache.org
diff --git a/community.html b/community.html
index 31e3db0..6e9f65f 100644
--- a/community.html
+++ b/community.html
@@ -200,6 +200,7 @@ in alphabetical order.
 +
diff --git a/images/community/fb.png b/images/community/fb.png
new file mode 100644
index 000..2452018
Binary files /dev/null and b/images/community/fb.png differ
diff --git a/rss.xml b/rss.xml
index 9c8de9e..606777c 100644
--- a/rss.xml
+++ b/rss.xml
@@ -5,8 +5,8 @@ TVM
- https://tvm.apache.org https://tvm.apache.org; rel="self" type="application/rss+xml" />
-Tue, 26 Nov 2019 12:17:11 -0800
-Tue, 26 Nov 2019 12:17:11 -0800
+Fri, 06 Dec 2019 17:22:02 -0800
+Fri, 06 Dec 2019 17:22:02 -0800
 60
[incubator-tvm-site] branch master updated: Add fb
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git

The following commit(s) were added to refs/heads/master by this push:
     new 38b95a2  Add fb

38b95a2 is described below

commit 38b95a25aec4165d8e6ec0ccbafea5858768817b
Author: tqchen
AuthorDate: Fri Dec 6 17:21:50 2019 -0800

    Add fb
---
 community.md            | 1 +
 images/community/fb.png | Bin 0 -> 25039 bytes
 2 files changed, 1 insertion(+)

diff --git a/community.md b/community.md
index 44650b6..1d2efba 100644
--- a/community.md
+++ b/community.md
@@ -74,6 +74,7 @@ in alphabetical order.
 +
diff --git a/images/community/fb.png b/images/community/fb.png
new file mode 100644
index 000..2452018
Binary files /dev/null and b/images/community/fb.png differ
[GitHub] [incubator-tvm] alexgl-github opened a new pull request #4476: Implement 1d deconvolution
alexgl-github opened a new pull request #4476: Implement 1d deconvolution URL: https://github.com/apache/incubator-tvm/pull/4476 Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
[GitHub] [incubator-tvm] junrushao1994 commented on issue #4471: [topi python API] 'bilinear_sample_nchw' is not supported in deformable_conv2d.py
junrushao1994 commented on issue #4471: [topi python API] 'bilinear_sample_nchw' is not supported in deformable_conv2d.py URL: https://github.com/apache/incubator-tvm/issues/4471#issuecomment-562783833 Could you verify if it works? If so, we may close this issue for now.
[GitHub] [incubator-tvm] apivovarov commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work
apivovarov commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work URL: https://github.com/apache/incubator-tvm/pull/4472#discussion_r355007762 ## File path: topi/python/topi/cuda/conv2d_transpose_nchw.py ## @@ -186,7 +186,9 @@ def _callback(op): if cfg.is_fallback: N, F, Y, X = get_const_tuple(conv.shape) -_fallback_schedule(N, F, Y, X) +# Workaround to make CUDA compilation work. Issue #4470 Review comment: Added a kernel/strides check, and we now skip `_fallback_schedule` when the output channel is 1. Otherwise it will still run `_fallback_schedule`, i.e. for a 1x1 kernel or when kernel != strides.
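[Editor's note] The guard described above can be sketched as a small predicate. This is a hypothetical illustration of the condition as stated in the thread, not the PR's actual code:

```python
def should_use_fallback(kernel, strides, out_channels):
    """Decide whether _fallback_schedule is safe to run, per issue #4470.

    Per the discussion: compilation fails when the output channel is 1,
    and when kernel == strides -- except for the 1x1 kernel, which still
    uses the fallback.
    """
    if out_channels == 1:
        return False
    return tuple(kernel) == (1, 1) or tuple(kernel) != tuple(strides)
```

Encoding the condition as one predicate makes the exceptional cases (1x1 kernel, single output channel) explicit and easy to extend as more failing combinations are found.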
[GitHub] [incubator-tvm] apivovarov commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work
apivovarov commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work URL: https://github.com/apache/incubator-tvm/pull/4472#discussion_r354992237 ## File path: topi/python/topi/cuda/conv2d_transpose_nchw.py ## @@ -186,7 +186,9 @@ def _callback(op): if cfg.is_fallback: N, F, Y, X = get_const_tuple(conv.shape) -_fallback_schedule(N, F, Y, X) +# Workaround to make CUDA compilation work. Issue #4470 Review comment: I checked more kernel and strides combinations and found that the error happens when the kernel is equal to the strides, e.g.
```
# kernel and strides for which CUDA compilation fails
2x2 and (2,2)
3x3 and (3,3)
4x4 and (4,4)
5x5 and (5,5)
2x3 and (2,3)
3x2 and (3,2)
1x2 and (1,2)
etc
```
I also found that the compilation fails if the output channel is 1.
[GitHub] [incubator-tvm] apivovarov commented on issue #4447: [Relay][Frontend][TFlite] Add parses support for UNPACK tflite operator
apivovarov commented on issue #4447: [Relay][Frontend][TFlite] Add parses support for UNPACK tflite operator URL: https://github.com/apache/incubator-tvm/pull/4447#issuecomment-562693190 LGTM. @FrozenGene Can you have a look?
[GitHub] [incubator-tvm] jwfromm commented on issue #4464: [RFC] Add TVMDSOOp to integrate any TVM operator with TensorFlow
jwfromm commented on issue #4464: [RFC] Add TVMDSOOp to integrate any TVM operator with TensorFlow URL: https://github.com/apache/incubator-tvm/issues/4464#issuecomment-562681167 The motivations of this RFC are extremely similar to those of [pytorch-tvm](https://github.com/pytorch/tvm); however, the two implementations are very different, and it is worth discussing the tradeoffs.
- torch-tvm is self-contained: it doesn't use any special functions or classes in TVM. Instead it modifies TorchScript to use existing TVM functions.
- torch-tvm uses Relay to represent subgraphs and then dynamically builds functions, rather than using prebuilt libraries as proposed here.

I understand that the current implementation is the shortest path to getting TVM functions working in TensorFlow, and that a torch-tvm-style approach would be a much larger undertaking. However, I don't think it will be able to scale well. The use of prebuilt libraries means there will be a lot of back and forth between regular TVM and tensorflow-tvm during development, and it seems like developers would be better off just importing their TF model into Relay and doing everything within TVM. Contrast this to the torch-tvm approach, where all the TVM magic happens transparently, making it very straightforward for PyTorch users. We should also consider where the code belongs. I personally prefer having projects like torch-tvm and tf-tvm separate from the main TVM repo if possible, as we are already dealing with frontend bloat. All that said, I think something like tf-tvm is a great idea and something we should work towards. I just want to make sure we take the first step carefully.
[GitHub] [incubator-tvm] comaniac commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR
comaniac commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR URL: https://github.com/apache/incubator-tvm/issues/4468#issuecomment-562673761 Hey @DKXXXL, thanks for the example! Just curious: do you think the case of dead code caused by copy propagation happens in current workloads? Or is this more of a concern about the TVM programming model, as in your example? Another question: the name "data-flow" analysis confuses me a bit, because it seems to me that the proposed framework is not limited to data-flow analysis but covers general IR analysis or program analysis. Could you clarify it a little more? Thanks.
[GitHub] [incubator-tvm] tqchen commented on issue #4473: ci.tvm.ai is down
tqchen commented on issue #4473: ci.tvm.ai is down URL: https://github.com/apache/incubator-tvm/issues/4473#issuecomment-562625577 Thanks for reporting. Double-checked, and it seems the CI and docs are online atm. Closing for now; please feel free to open new threads.
[GitHub] [incubator-tvm] tqchen closed issue #4473: ci.tvm.ai is down
tqchen closed issue #4473: ci.tvm.ai is down URL: https://github.com/apache/incubator-tvm/issues/4473
[GitHub] [incubator-tvm] DKXXXL commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR
DKXXXL commented on issue #4468: [RFC] Data-flow Analysis Functionality on TVM IR URL: https://github.com/apache/incubator-tvm/issues/4468#issuecomment-562559678 Hi @junrushao1994 , an over-simplified example from an industrial context is the following:
```python
...
B0 = tvm.compute((m,n), lambda i,j: A0[i,j] + 2*A1[i,j], name = "B0")
C0 = tvm.compute((m,n), lambda i,j: A0[i,j] + 2*A1[i,j], name = "C0")
D0 = tvm.compute((m,n), lambda i,j: B0[i,j] + 3*C0[i,j], name = "D0")
...
```
The customized TVM will schedule and use `compute_at` to the extreme, and transform it into something like
```cpp
...
for (i, 0, m) {
  for (j, 0, n) {
    B0[0] = (A0[((i*stride) + (j*stride))] + (2f*A1[((i*stride) + (j*stride))]))
    C0[0] = (A0[((i*stride) + (j*stride))] + (2f*A1[((i*stride) + (j*stride))]))
    D0[((i*stride) + (j*stride))] = (B0[0] + (3f*C0[0]))
  }
}
...
```
This gives our 'incomplete' CSE and copy-propagation passes a chance to assign C0 from B0 and replace C0's appearances in D0 with B0, making C0 dead (or not, depending on what follows):
```cpp
...
for (i, 0, m) {
  for (j, 0, n) {
    B0[0] = (A0[((i*stride) + (j*stride))] + (2f*A1[((i*stride) + (j*stride))]))
    C0[0] = B0[0]
    D0[((i*stride) + (j*stride))] = (B0[0] + (3f*B0[0]))
  }
}
...
```
The 'incomplete' CSE and copy-propagation passes can operate safely on straight-line code within a small range (without data-flow analysis), but the same is not true for dead-code elimination: if we don't know any liveness information outside this for loop, we cannot simply eliminate the assignment to C0[0]. Generally speaking, dead code can arise after copy propagation, and how dead code arises in TVM is similar to how it arises in LLVM and traditional compiler passes.
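[Editor's note] The liveness question raised above (whether the store to C0[0] is dead) is a classic backward data-flow computation. A minimal sketch in plain Python over straight-line three-address statements, mirroring the B0/C0/D0 example (illustrative only, not TVM code):

```python
def live_out_sets(stmts, live_at_exit):
    """Backward liveness: for each (target, vars_read) statement, compute
    the set of variables live immediately after it."""
    live = set(live_at_exit)
    outs = []
    for target, uses in reversed(stmts):
        outs.append(set(live))   # variables live right after this statement
        live.discard(target)     # the definition kills its target
        live |= set(uses)        # everything read here becomes live
    return list(reversed(outs))

stmts = [
    ("B0", {"A0", "A1"}),
    ("C0", {"B0"}),   # C0 = B0 after copy propagation
    ("D0", {"B0"}),   # D0 no longer reads C0
]
outs = live_out_sets(stmts, live_at_exit={"D0"})
# "C0" is not live after its own definition, so the assignment to C0 is
# dead -- but only because we asserted that C0 is not live at the exit.
print("C0" in outs[1])  # False
```

This is exactly the point of the comment: within the loop body the elimination looks safe, but justifying `live_at_exit` requires the whole-program data-flow analysis the RFC proposes.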
[GitHub] [incubator-tvm] anguoyang commented on issue #4262: [RELAY][Bug] 'name_hint' AttributeError issue when covert tensorflow to TVM
anguoyang commented on issue #4262: [RELAY][Bug] 'name_hint' AttributeError issue when covert tensorflow to TVM URL: https://github.com/apache/incubator-tvm/issues/4262#issuecomment-562504897 @FinnWeng @Msabih I met the same problem; have you solved it? Someone on the TVM forum said it is because of the PyTorch version, but I changed to 1.0.1 and re-exported the ONNX file, and it still failed.
[GitHub] [incubator-tvm] Beya2019 opened a new pull request #4475: onnx frontend support layout choice depend on hardware target support…
Beya2019 opened a new pull request #4475: onnx frontend support layout choice depend on hardware target support… URL: https://github.com/apache/incubator-tvm/pull/4475 …ed layout with NCHW and NHWC Thanks for contributing to TVM! Please refer to guideline https://docs.tvm.ai/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from [Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers) by @ them in the pull request thread.
[GitHub] [incubator-tvm] vinx13 merged pull request #4469: Fix typo in travserse
vinx13 merged pull request #4469: Fix typo in travserse URL: https://github.com/apache/incubator-tvm/pull/4469
[incubator-tvm] branch master updated (ba9d96b -> 7cf1ead)
This is an automated email from the ASF dual-hosted git repository. wuwei pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.

from ba9d96b  [relay][op] Add shape func to tile (#4441)
 add 7cf1ead  Fix typo in travserse (#4469)

No new revisions were added by this update.

Summary of changes:
 python/tvm/intrin.py                                |  2 +-
 src/api/api_pass.cc                                 |  2 +-
 src/pass/lower_warp_memory.cc                       |  4 ++--
 topi/python/topi/arm_cpu/bitserial_dense.py         |  2 +-
 topi/python/topi/bifrost/depthwise_conv2d.py        |  2 +-
 topi/python/topi/cuda/dense.py                      |  2 +-
 topi/python/topi/cuda/depthwise_conv2d.py           |  2 +-
 topi/python/topi/cuda/pooling.py                    |  4 ++--
 topi/python/topi/cuda/reduction.py                  |  4 ++--
 topi/python/topi/hls/nn.py                          | 10 +-
 topi/python/topi/intel_graphics/depthwise_conv2d.py |  2 +-
 topi/python/topi/opengl/conv2d_nchw.py              |  2 +-
 topi/python/topi/opengl/dense.py                    |  2 +-
 topi/python/topi/opengl/pooling.py                  |  4 ++--
 topi/python/topi/x86/binary_dense.py                |  2 +-
 topi/python/topi/x86/bitserial_dense.py             |  2 +-
 topi/python/topi/x86/pooling.py                     |  4 ++--
 topi/python/topi/x86/reduction.py                   |  4 ++--
 18 files changed, 28 insertions(+), 28 deletions(-)
[GitHub] [incubator-tvm] vinx13 commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work
vinx13 commented on a change in pull request #4472: Workaround to make conv2d_transpose compilation for CUDA work URL: https://github.com/apache/incubator-tvm/pull/4472#discussion_r354714737 ## File path: topi/python/topi/cuda/conv2d_transpose_nchw.py ## @@ -186,7 +186,9 @@ def _callback(op): if cfg.is_fallback: N, F, Y, X = get_const_tuple(conv.shape) -_fallback_schedule(N, F, Y, X) +# Workaround to make CUDA compilation work. Issue #4470 Review comment: Can we still use the fallback for the other cases by checking the input params here?
[GitHub] [incubator-tvm] Beya2019 closed pull request #4474: onnx frontend support layout choice depend on hardware target supported layout with NCHW and NHWC
Beya2019 closed pull request #4474: onnx frontend support layout choice depend on hardware target supported layout with NCHW and NHWC URL: https://github.com/apache/incubator-tvm/pull/4474