I modified my source code to mimic deploy_classification.py, adding the quantization and graph_pack() steps.
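For reference, the relevant part of my script follows the tutorial; a rough sketch (simplified, and `mod`/`params` come from my own frontend import):

```python
import tvm
from tvm import relay
import vta
from vta.top import graph_pack

env = vta.get_env()

# quantize to int8 so that conv2d can be offloaded to VTA
with relay.quantize.qconfig(global_scale=8.0, skip_conv_layers=[0]):
    mod = relay.quantize.quantize(mod, params=params)

# pack the graph into VTA's tensorized layout; the start/stop names
# below are the tutorial defaults, my network's may differ
relay_prog = graph_pack(
    mod["main"],
    env.BATCH,
    env.BLOCK_OUT,
    env.WGT_WIDTH,
    start_name="nn.max_pool2d",
    stop_name="nn.global_avg_pool2d",
)
```

With this change, compilation goes well until it starts lowering the fused conv2d + relu function: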
```
tvm/python/tvm/relay/backend/compile_engine.py, select_implementation(), op.name= nn.conv2d
valid implementation 0 : conv2d_packed.vta plevel= 10
selected best_plevel_implementation: conv2d_packed.vta
tvm/python/tvm/relay/backend/compile_engine.py, select_implementation(), op.name= nn.relu
valid implementation 0 : injective.cpu plevel= 10
selected best_plevel_implementation: injective.cpu
tvm/python/tvm/relay/backend/_backend.py: lower function: fused_nn_conv2d_nn_relu
lower phase 0
lower phase 1
Traceback (most recent call last):
...
[bt] (1) /work/git_repo/tvm/build/libtvm.so(tvm::tir::CopyIntrinInjector::VisitStmt_(tvm::tir::AttrStmtNode const*)+0x1b8) [0x7fa6cd7d6308]
[bt] (0) /work/git_repo/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4a) [0x7fa6cd3ce070]
  T_relu[(((ax2.outer*896) + (ax3.outer*16)) + ax5)] = max(res[ax5], 0)
File "/work/git_repo/tvm/src/tir/pass/inject_copy_intrin.cc", line 49
File "tvm/_ffi/_cython/./packed_func.pxi", line 54, in tvm._ffi._cy3.core.tvm_callback
File "/work/git_repo/tvm/python/tvm/relay/backend/_backend.py", line 62, in lower
    raise RuntimeError(msg)
...
[bt] (1) /work/git_repo/tvm/build/libtvm.so(tvm::tir::CopyIntrinInjector::VisitStmt_(tvm::tir::AttrStmtNode const*)+0x1b8) [0x7fa6cd7d6308]
[bt] (0) /work/git_repo/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x4a) [0x7fa6cd3ce070]
  T_relu[(((ax2.outer*896) + (ax3.outer*16)) + ax5)] = max(res[ax5], 0)
File "/work/git_repo/tvm/src/tir/pass/inject_copy_intrin.cc", line 49
TVMError: Check failed: MatchCopyPattern(op->body, &ret): Cannot match copy pattern of for (ax5, 0, 16) {
}
```
Apparently nn.conv2d has been mapped to VTA (conv2d_packed.vta) while nn.relu stays on the CPU side (injective.cpu), so a copy intrinsic has to be inserted to move data between the two. However, the pass complains "Cannot match copy pattern of for (ax5, 0, 16) { }", and judging from the dumped statement, the loop body it fails on is the relu store `T_relu[...] = max(res[ax5], 0)`, which is not a plain copy; this leaves me with no clue how to proceed. I would appreciate any hints.
Another question: why can't nn.relu be mapped to VTA's ALU?
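My expectation that relu can run on the ALU comes from the VTA optimization tutorials, where relu is written as an elementwise max and tagged with the env.alu pragma. A sketch from memory (shapes are hypothetical, and this is not my failing code):

```python
import tvm
from tvm import te
import vta

env = vta.get_env()

# hypothetical packed activation shape: (batch, channel, h, w, BATCH, BLOCK_OUT)
shape = (1, 16, 14, 14, env.BATCH, env.BLOCK_OUT)
res = te.placeholder(shape, name="res", dtype=env.acc_dtype)
# relu written as an elementwise max, the form the VTA ALU supports
res_max = te.compute(shape, lambda *i: te.max(res(*i), 0), name="res_max")

s = te.create_schedule(res_max.op)
# in the tutorials this stage is tagged so it lowers to a VTA ALU instruction
s[res_max].pragma(s[res_max].op.axis[0], env.alu)
```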