Wheest opened a new issue, #11233:
URL: https://github.com/apache/tvm/issues/11233

   
   ### Expected behavior
   
   Compiling a model with `debug_executor` allows one to run the model and get the tracing output.
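
   For reference, the flow I expect (and that the current docs describe) is roughly the following. This is only a sketch: the gist compiles a real network, whereas `mod`/`params` here are a tiny stand-in model and `"data"` is just the input name I chose for it.

   ```python
   import numpy as np
   import tvm
   from tvm import relay
   from tvm.contrib.debugger import debug_executor

   # Tiny stand-in model; the gist compiles a real network instead
   data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
   weight = relay.var("weight", shape=(16, 3, 3, 3), dtype="float32")
   out = relay.nn.relu(relay.nn.conv2d(data, weight, padding=(1, 1)))
   mod = tvm.IRModule.from_expr(relay.Function([data, weight], out))
   params = {"weight": np.random.rand(16, 3, 3, 3).astype("float32")}

   dev = tvm.cpu(0)
   with tvm.transform.PassContext(opt_level=3):
       lib = relay.build(mod, target="llvm", params=params)

   # Per the current docs, this should give a graph executor that also traces each op
   m = debug_executor.create(lib.graph_json, lib, dev, dump_root="/tmp/tvmdbg")
   m.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
   m.run()  # expected: per-op timings printed and a trace written under dump_root
   ```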
   
   ### Actual behavior
   
   Process fails during the module creation:
   ```
   Traceback (most recent call last):
     File "tvm_profiler_simple.py", line 98, in <module>
       main(args)
     File "tvm_profiler_simple.py", line 64, in main
       m = debug_executor.create(lib.graph_json, lib, dev, dump_root="/tmp/tvmdbg")
     File "/app/source/tvm/python/tvm/contrib/debugger/debug_executor.py", line 70, in create
       func_obj = fcreate(graph_json_str, libmod, *device_type_id)
     File "/app/source/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 223, in __call__
       values, tcodes, num_args = _make_tvm_args(args, temp_args)
     File "/app/source/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 188, in _make_tvm_args
       raise TypeError("Don't know how to handle type %s" % type(arg))
   TypeError: Don't know how to handle type <class 'tvm.relay.backend.executor_factory.GraphExecutorFactoryModule'>
   ```
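
   As far as I can tell, the problem is that `relay.build` returns a `GraphExecutorFactoryModule`, a plain Python wrapper rather than a `tvm.runtime.Module`, so `_make_tvm_args` does not know how to convert it when `debug_executor.create` forwards it to the underlying `PackedFunc`. A quick way to see the mismatch (sketch only; `lib` is the result of `relay.build`, and I am assuming the wrapped runtime module is exposed as `lib.module`):

   ```python
   print(type(lib))         # tvm.relay.backend.executor_factory.GraphExecutorFactoryModule
   print(type(lib.module))  # a tvm.runtime.Module, which the FFI layer can pass through
   ```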
   
   ### Environment
   
   x86 platform, TVM v0.8.
   
   ### Steps to reproduce
   
   [This script](https://gist.github.com/Wheest/9ad2d6a47bbd2cfaa4be530c68ba2f6c) shows the issue.
   
   The gist has three modes, `["tutorial", "alt", "normal"]`, invoked with e.g. `python tvm_profiler_simple.py --mode tutorial`.
   
   - `normal`: standard inference, which works as expected
   - `tutorial`: the approach in the current documentation, which fails with the output in [1]
   - `alt`: the approach used in [this docs PR](https://github.com/apache/tvm/pull/11231), which works, with example output in [2] (see the sketch after this list)
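
   Since the gist is external, here is a rough sketch of the difference between the two modes, i.e. how the debugger gets constructed. This is a paraphrase, not the exact gist code, and the `alt` form in particular is my reading of the forum thread and docs PR:

   ```python
   from tvm.contrib.debugger import debug_executor

   # `lib` and `dev` as in the sketch under "Expected behavior" above
   # (lib = relay.build(...), dev = tvm.cpu(0))

   # "tutorial" mode: the call from the current docs, which raises the TypeError above
   m = debug_executor.create(lib.graph_json, lib, dev, dump_root="/tmp/tvmdbg")

   # "alt" mode: construct the debugger from the factory module's "debug_create"
   # function (my paraphrase of the forum/PR approach; the exact code is in the gist)
   m = debug_executor.GraphModuleDebug(
       lib["debug_create"]("default", dev),
       [dev],
       lib.graph_json,
       dump_root="/tmp/tvmdbg",
   )
   ```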
   
   As the [discussion on the forum notes](https://discuss.tvm.apache.org/t/runnig-a-model-with-tvm-debugger/9869/9?u=wheest), there does not appear to be a canonical way of creating the debugger, but the approach in the docs should at least work until a refactor produces an unambiguous canonical approach.
   
   
   
   [1] `tutorial` sample output: identical to the `TypeError` traceback shown under *Actual behavior* above.
   
   [2] `alt` sample output:
   ```
   [19:53:25] ../src/runtime/graph_executor/debug/graph_executor_debug.cc:103: Iteration: 0
   [19:53:25] ../src/runtime/graph_executor/debug/graph_executor_debug.cc:108: Op #0 tvmgen_default_fused_layout_transform: 29.3538 us/iter
   [19:53:25] ../src/runtime/graph_executor/debug/graph_executor_debug.cc:108: Op #1 tvmgen_default_fused_nn_contrib_conv2d_NCHWc_add_nn_relu: 3009 us/iter
   ...
   Node Name                                                        Ops                                                              Time(us)  Time(%)  Shape                 Inputs  Outputs
   ---------                                                        ---                                                              --------  -------  -----                 ------  -------
   tvmgen_default_fused_nn_contrib_conv2d_NCHWc_add_nn_relu         tvmgen_default_fused_nn_contrib_conv2d_NCHWc_add_nn_relu         3009.0    7.115    (1, 2, 112, 112, 32)  3       1
   tvmgen_default_fused_nn_contrib_conv2d_NCHWc_add_add_nn_relu_4   tvmgen_default_fused_nn_contrib_conv2d_NCHWc_add_add_nn_relu_4   2835.26   6.704    (1, 16, 14, 14, 16)   4       1
   ...
   ```
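
   For completeness, after the `alt` run the tracing output should land under the `dump_root` passed to the debugger (here `/tmp/tvmdbg`). A quick way to confirm something was written, without assuming anything about the file layout:

   ```python
   import os

   # List whatever the debug executor wrote under dump_root
   for root, _, files in os.walk("/tmp/tvmdbg"):
       for name in files:
           print(os.path.join(root, name))
   ```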
   
   

