My model does not contain conv2d; the most time-consuming op is nn.dense. Do you mean that I should use the optimized tuning history to build the Relay module with batch size 500 and then run inference?
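
To make sure I understand, here is a minimal sketch of what I think you are suggesting, with a toy single-`nn.dense` model standing in for my real one. The log file name `dense_tuning.log`, the input name `data`, and the shapes are placeholders I made up; the runtime API shown is the newer `graph_executor` (older TVM versions use `tvm.contrib.graph_runtime` instead):

```python
import numpy as np
import tvm
from tvm import relay, autotvm
from tvm.contrib import graph_executor

batch_size = 500
in_dim, out_dim = 1024, 256

# A toy Relay module with a single nn.dense op, standing in for the real model.
data = relay.var("data", shape=(batch_size, in_dim), dtype="float32")
weight = relay.var("weight", shape=(out_dim, in_dim), dtype="float32")
func = relay.Function([data, weight], relay.nn.dense(data, weight))
mod = tvm.IRModule.from_expr(func)
params = {"weight": np.random.rand(out_dim, in_dim).astype("float32")}

target = "llvm"

# Apply the best schedules from the AutoTVM tuning history while building.
# "dense_tuning.log" is an assumed name for the tuning record file.
with autotvm.apply_history_best("dense_tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

# Run inference with the batched (batch size 500) input.
dev = tvm.device(target, 0)
runtime = graph_executor.GraphModule(lib["default"](dev))
runtime.set_input("data", np.random.rand(batch_size, in_dim).astype("float32"))
runtime.run()
out = runtime.get_output(0)
```

Is this the workflow you meant, i.e. tune once, then always compile and run with the batch-500 input shape?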
