Meteorix opened a new issue #7008: URL: https://github.com/apache/tvm/issues/7008
For a large model, TVM compilation is really slow. Profiling with `perf` shows that type inference accounts for most of the time:

```
# Children      Self  Command  Shared Object  Symbol
# ........  ........  .......  .............  ......
#
    93.18%     0.00%  python   libtvm.so      [.] tvm::relay::PatternRewriter::Rewrite
            |
            ---tvm::relay::PatternRewriter::Rewrite
               |
                --93.17%--tvm::relay::InferTypeWithModule
                          |
                           --93.05%--tvm::transform::Pass::operator()
                                     tvm::transform::PassNode::operator()
                                     tvm::transform::ModulePassNode::operator()
                                     tvm::runtime::PackedFunc::operator()<tvm::IRModule, tvm::transform::PassContext>
                                     std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()
                                     std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tv
                                     void tvm::runtime::TypedPackedFunc<tvm::IRModule (tvm::IRModule, tvm::transform::PassContext)>::AssignTypedLambda<tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassCont
                                     |
                                      --93.03%--tvm::relay::transform::InferType()::{lambda(tvm::IRModule, tvm::transform::PassContext const&)#1}::operator()
                                                |
                                                |--79.48%--tvm::relay::TypeInferencer::Infer
                                                |          |
                                                |          |--49.03%--tvm::relay::TypeInferencer::GetType
```

From my understanding, ``PatternRewriter`` rewrites every function in the module, and after each rewrite it calls ``InferType``, which re-infers the types of every function from scratch. This should be incremental. Is there a reason why the ``incremental`` inference path is commented out?

https://github.com/apache/tvm/blob/main/src/relay/transforms/type_infer.cc#L805
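To illustrate the cost, here is a minimal sketch (a toy stand-in, not the actual model I am compiling) that times repeated whole-module ``InferType`` runs, which is effectively what ``PatternRewriter`` triggers on every rewrite iteration:

```python
import time

import tvm
from tvm import relay

# Toy stand-in for a large model: one function with a deep chain of ops.
# (Hypothetical repro; the real slowdown was observed on a large model.)
x = relay.var("x", shape=(1, 64), dtype="float32")
body = x
for _ in range(200):
    body = relay.nn.relu(body)
mod = tvm.IRModule.from_expr(relay.Function([x], body))

# InferType is a whole-module pass: each invocation re-checks every
# expression in every function, even when only a small subexpression
# changed. A rewriter that re-infers after each rewrite pays this full
# cost repeatedly, so total time grows with (#rewrites x module size).
start = time.time()
for _ in range(10):  # mimic repeated calls from a rewrite loop
    mod = relay.transform.InferType()(mod)
print("10 x InferType: %.3fs" % (time.time() - start))
```

With incremental inference, each iteration of that loop would only need to re-check the expressions whose types were actually invalidated by the rewrite.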