To measure the performance difference between the Graph Runtime and the VM Runtime, we construct a simple network with three dense + bias layers. The layer dimensions are 1024-512-256-128.
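
For reference, a minimal Relay sketch of this network might look like the following (the helper name `build_mlp` and the variable names are illustrative, not our exact script):

```python
import tvm
from tvm import relay

def build_mlp(batch_size, dtype="float32"):
    """Three dense + bias_add layers: 1024 -> 512 -> 256 -> 128."""
    data = relay.var("data", shape=(batch_size, 1024), dtype=dtype)
    out, in_dim = data, 1024
    for i, out_dim in enumerate([512, 256, 128]):
        weight = relay.var("weight%d" % i, shape=(out_dim, in_dim), dtype=dtype)
        bias = relay.var("bias%d" % i, shape=(out_dim,), dtype=dtype)
        out = relay.nn.bias_add(relay.nn.dense(out, weight), bias)
        in_dim = out_dim
    # Remaining free vars (data, weights, biases) become the function parameters.
    return relay.Function(relay.analysis.free_vars(out), out)
```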

We evaluate three cases:
1. "Graph Runtime": Graph Runtime with the input batch size fixed at compile time.
2. "VM Static": VM Runtime with the input batch size fixed at compile time.
3. "VM Dynamic": VM Runtime with the input batch dimension set to `relay.Any()`, so a single compilation supports different batch sizes (a compilation sketch for all three cases follows this list).

We measure the inference time in milliseconds (ms). The results are as follows:

![Screenshot 2020-03-24 10.53.03 PM: inference time (ms) for Graph Runtime, VM Static, and VM Dynamic|690x275](upload://9D4HXnsBHv06uNw6HeYypTWRXRT.png) 
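
The numbers above are averages from our own runs; a simple loop along the following lines is enough to reproduce the measurement setup, where `run_once` stands for one inference call on the runtime under test (illustrative, not our exact harness):

```python
import time

def bench_ms(run_once, warmup=10, repeat=100):
    """Average wall-clock time of one inference call, in ms."""
    for _ in range(warmup):
        run_once()
    start = time.time()
    for _ in range(repeat):
        run_once()
    return (time.time() - start) / repeat * 1e3
```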

We found that, for the fixed batch size case, the VM Runtime is slower than the Graph Runtime, by up to 2x. We suspect this comes from the additional AllocStorage and AllocTensor instructions executed by the VM Runtime.

For the dynamic batch size case, the VM Runtime is slower than the VM Runtime with a fixed batch size, by up to 4.5x. We found many additional instructions for computing tensor shapes in the dynamic input case: the static batch size version uses only 22 VM instructions, while the dynamic batch size version needs 85!
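
One way to inspect these instructions is to print the bytecode of the compiled executables (names follow the hypothetical sketch above):

```python
# The Executable returned by relay.vm.compile exposes a bytecode listing,
# which makes the extra allocation and shape-function instructions visible.
print(vm_static_exec.bytecode)  # static batch size: ~22 instructions
print(vm_dyn_exec.bytecode)     # dynamic batch size: ~85 instructions
```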

So it seems that, compared with the Graph Runtime, the VM Runtime has some performance degradation, and with a dynamic input size from `relay.Any()` the degradation is even larger due to the tensor shape computation. How should we think about this performance degradation in the VM Runtime and dynamic shape case? Are there plans to further improve the efficiency of the VM Runtime, especially for dynamic shape support?

Thank you so much!




