FrozenGene commented on a change in pull request #4880: [QNN] Add support for
per channel weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379286796
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -982,13 +982,15 @@ def co
masahi commented on issue #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#issuecomment-586136935
@tqchen please give an approval.
This is an automated message from the Apache Git Service.
To respond
masahi commented on a change in pull request #4880: [QNN] Add support for per
channel weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379286230
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -982,13 +982,15 @@ def conver
masahi commented on a change in pull request #4880: [QNN] Add support for per
channel weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379280705
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -982,13 +982,15 @@ def conver
anijain2305 commented on a change in pull request #4880: [QNN] Add support for
per channel weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379277612
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -982,13 +982,15 @@ def c
This is an automated email from the ASF dual-hosted git repository.
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 7013fc9 [TOPI][CUDA] Enable vectorization on fp16 type (#4867)
add 24c53a3 [QNN] More doc fix on quanti
tqchen merged pull request #4874: [QNN] More doc fix on quantize and convolution
URL: https://github.com/apache/incubator-tvm/pull/4874
tqchen commented on issue #4867: [TOPI][CUDA] Enable vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#issuecomment-586091707
Thanks @vinx13 @wpan11nv !
tqchen merged pull request #4867: [TOPI][CUDA] Enable vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from b787ffa [REFACTOR][PY] Establish tvm.tir
add 7013fc9 [TOPI][CUDA] Enable vectorization on fp16 type (#4
tqchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from a6c42b3 Update docs/dev/virtual_machine.rst
add b787ffa [REFACTOR][PY] Establish tvm.tir
No new revisi
tqchen merged pull request #4877: [REFACTOR][PY] Establish tvm.tir
URL: https://github.com/apache/incubator-tvm/pull/4877
masahi commented on a change in pull request #4880: [QNN] Add support for per
channel weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880#discussion_r379234008
##
File path: python/tvm/relay/frontend/tflite.py
##
@@ -982,13 +982,15 @@ def conver
FrozenGene commented on issue #4857: Windows Support for cpp_rpc
URL: https://github.com/apache/incubator-tvm/pull/4857#issuecomment-586073161
Thanks @jmorrill for bringing C++ RPC to Windows and changing the build system
to CMake!
I may not have time to review it this week, but I have a glan
masahi opened a new pull request #4880: [QNN] Add support for per channel
weight scale in dense op
URL: https://github.com/apache/incubator-tvm/pull/4880
QNN dense op does not accept a vector weight scale as an argument at the
moment, but this restriction can be lifted trivially.
pleas
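The per-output-channel scheme the PR enables can be illustrated with a small NumPy sketch (symmetric int8 quantization with one scale per output channel; the function names are hypothetical illustrations, not TVM's API):

```python
import numpy as np

def quantize_per_channel(weight, num_bits=8):
    """Quantize a dense weight matrix of shape (out_units, in_units)
    with one scale per output channel (row), symmetric int8 scheme."""
    qmax = 2 ** (num_bits - 1) - 1              # 127 for int8
    scales = np.abs(weight).max(axis=1) / qmax  # one scale per row
    q = np.round(weight / scales[:, None]).astype(np.int8)
    return q, scales

def dequantize_per_channel(q, scales):
    """Recover float weights from int8 values and per-channel scales."""
    return q.astype(np.float32) * scales[:, None]

w = np.random.randn(4, 16).astype(np.float32)
q, s = quantize_per_channel(w)
w_hat = dequantize_per_channel(q, s)
```

Because each output channel gets its own scale, a channel with small weights is not forced onto the coarse grid dictated by the largest channel, which is why per-channel scales give a tighter reconstruction than a single global scale.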
FrozenGene commented on a change in pull request #4847: Return empty
CSourceModule when no lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#discussion_r379218842
##
File path: src/relay/backend/build_module.cc
##
@@ -437,28 +441,50 @
masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as
sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586049782
Correct, but look closer: it is for input dims greater than 2D.
See the PR below for background.
https://
zhiics commented on issue #4878: [Relay][SimplifyInference] Express Softmax as
sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586049492
I think we may need to keep the Softmax schedule as well. We can remove it
only if we treat it the same way we treat batchnorm.
anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express
Softmax as sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586048850
> please make sure you benchmark on 4d spatial inputs. In cuda softmax
schedule, there is a special case h
zhiics commented on issue #4459: [RUNTIME] Implement TVMDSOOp(TensorFlow custom
op) for TVM runtime
URL: https://github.com/apache/incubator-tvm/pull/4459#issuecomment-586048228
For the Python unit test, something similar to your RFC should be okay. We
have TensorFlow in the CI. For gtest, it
anijain2305 commented on issue #4878: [Relay][SimplifyInference] Express
Softmax as sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586047123
> Great job!
> I'm not sure whether we want to keep softmax compute & schedule though. If
someone build
masahi commented on issue #4878: [Relay][SimplifyInference] Express Softmax as
sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878#issuecomment-586046390
Please make sure you benchmark on 4D spatial inputs. In the CUDA softmax
schedule, there is special-case handling
soiferj commented on issue #4879: [Relay][Pass] Fix bug in re-processing call
node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879#issuecomment-586044128
Sure, I'll work on adding a unit test.
Th
soiferj opened a new pull request #4879: [Relay][Pass] Fix bug in re-processing
call node in MergeComposite pass
URL: https://github.com/apache/incubator-tvm/pull/4879
This fixes a bug where call nodes are recursively processed more than once,
potentially resulting in a composite function
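The class of bug being fixed here, a recursive rewriter reaching a shared call node through more than one parent and processing it twice, is typically avoided with a memo table. A minimal sketch of that idea on a toy DAG (not the actual MergeComposite code):

```python
class Node:
    """Toy expression node; `children` makes the graph a DAG."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

calls = []  # record which nodes actually get processed

def rewrite(node, memo=None):
    """Rewrite a DAG bottom-up, visiting every node exactly once.

    Without the memo, a node reachable via two parents would be
    rewritten twice, and the two parents would end up pointing at
    two different copies of the result.
    """
    if memo is None:
        memo = {}
    if id(node) in memo:
        return memo[id(node)]
    children = [rewrite(c, memo) for c in node.children]
    calls.append(node.name)                  # processed exactly once
    result = Node(node.name + "'", children)
    memo[id(node)] = result
    return result

# diamond DAG: both b and c share the same child d
d = Node("d")
root = Node("a", [Node("b", [d]), Node("c", [d])])
new_root = rewrite(root)
```

After the rewrite, `d` has been processed once, and both rewritten parents share the single rewritten `d'`, preserving the DAG structure.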
anijain2305 opened a new pull request #4878: [Relay][SimplifyInference] Express
Softmax as sequence of Relay ops
URL: https://github.com/apache/incubator-tvm/pull/4878
Discuss - https://discuss.tvm.ai/t/softmax-sequence-of-relay-ops/5686
@soiferj @yzhliu @kevinthesun
Data sha
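The rewrite this PR performs can be mimicked in NumPy: softmax becomes a sequence of simpler ops (max, subtract, exp, sum, divide), where subtracting the max is the standard numerical-stability trick. This is only a sketch of the decomposition, not the Relay pass itself:

```python
import numpy as np

def softmax_decomposed(x, axis=-1):
    """Softmax expressed as a sequence of elementwise/reduction ops."""
    m = np.max(x, axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x - m)
    return e / np.sum(e, axis=axis, keepdims=True)

# 4D spatial input, softmax over the channel axis
x = np.random.randn(2, 3, 4, 5).astype(np.float32)
y = softmax_decomposed(x, axis=1)
```

Expressing softmax this way lets downstream fusion and scheduling work on the primitive ops instead of requiring a dedicated softmax schedule.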
tqchen commented on issue #4877: [REFACTOR][PY] Establish tvm.tir
URL: https://github.com/apache/incubator-tvm/pull/4877#issuecomment-586021509
cc @icemelon9 @ZihengJiang @yzhliu
masahi edited a comment on issue #4874: [QNN] More doc fix on quantize and
convolution
URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586020980
@anijain2305 do you think it also makes sense to make `units` param in qnn
dense required?
https://github.com/apache/incu
tqchen opened a new pull request #4877: [REFACTOR][PY] Establish tvm.tir
URL: https://github.com/apache/incubator-tvm/pull/4877
- Move related files into the corresponding location as in C++
- Keep the top-level TVM API backward compatible to make minimum changes in
topi
---
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution
URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586020980
@anijain2305 do you think it also makes sense to make `units` param in qnn
dense required?
https://github.com/apache/incubator-tv
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution
URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-586012237
Tests passed, should be ready to go @anijain2305 @vinx13 @FrozenGene
---
kumasento commented on issue #4847: Return empty CSourceModule when no
lowered_funcs exists in Relay mod
URL: https://github.com/apache/incubator-tvm/pull/4847#issuecomment-586005848
Thank you for your valuable suggestions @tqchen @zhiics @FrozenGene !
I have now changed the logic to try
anijain2305 commented on issue #4874: [QNN] More doc fix on quantize and
convolution
URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-585979392
Yes, I think the same problem exists with simple conv. We can make it a
required argument, and change the parsers. Parsers shou
masahi commented on issue #4874: [QNN] More doc fix on quantize and convolution
URL: https://github.com/apache/incubator-tvm/pull/4874#issuecomment-585977753
@anijain2305 I find the usage of the `channels` argument confusing and I think
it is better to make `channels` a required argument. Other
anijain2305 commented on issue #3680: [TOPI] Update softmax compute and CPU
schedule
URL: https://github.com/apache/incubator-tvm/pull/3680#issuecomment-585943014
Another suggestion -
https://discuss.tvm.ai/t/softmax-sequence-of-relay-ops/5686
-
tqchen commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen
URL: https://github.com/apache/incubator-tvm/pull/4876#issuecomment-585924576
cc @vinx13 @ZihengJiang please help to take a look
wpan11nv commented on issue #4876: [CodeGen][CUDA] Fix issues in cuda codegen
URL: https://github.com/apache/incubator-tvm/pull/4876#issuecomment-585914125
This patch should fix the errors observed below (I did *not* verify, as I
found no complete reproducers there). My own test works fine with
wpan11nv opened a new pull request #4876: [CodeGen][CUDA] Fix issues in cuda
codegen
URL: https://github.com/apache/incubator-tvm/pull/4876
- Do not emit __shared__ etc. as part of type for casting
- Fix fp16 reduction kernels with compiler errors:
"no operator "+" matches t
wpan11nv commented on a change in pull request #4867: [TOPI][CUDA] Enable
vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r379014124
##
File path: topi/tests/python/test_topi_relu.py
##
@@ -20,11 +20,20 @@
import tvm
import to
tqchen commented on issue #4875: Image preprocessing for darknet takes too long
URL: https://github.com/apache/incubator-tvm/issues/4875#issuecomment-585876040
A PR is more than welcome
vinx13 commented on a change in pull request #4867: [TOPI][CUDA] Enable
vectorization on fp16 type
URL: https://github.com/apache/incubator-tvm/pull/4867#discussion_r378985967
##
File path: topi/tests/python/test_topi_relu.py
##
@@ -20,11 +20,20 @@
import tvm
import topi
vizero1 opened a new issue #4875: Image preprocessing for darknet takes too long
URL: https://github.com/apache/incubator-tvm/issues/4875
Hi,
I was working on this tutorial
https://docs.tvm.ai/tutorials/frontend/from_darknet.html#sphx-glr-tutorials-frontend-from-darknet-py
and it se
masahi opened a new pull request #4874: [QNN] More doc fix on quantize and
convolution
URL: https://github.com/apache/incubator-tvm/pull/4874
wweic merged pull request #4868: [doc][VM] Update the vm doc
URL: https://github.com/apache/incubator-tvm/pull/4868
wweic pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.
from 8d94587 Optimize x86 conv3d_ndhwc using data packing approach.
(#4866)
add c8e17dd fix vm doc
add
wweic commented on issue #4868: [doc][VM] Update the vm doc
URL: https://github.com/apache/incubator-tvm/pull/4868#issuecomment-585608060
thanks @zhiics @tqchen