leandron opened a new pull request #6302:
URL: https://github.com/apache/incubator-tvm/pull/6302


   This is a follow-up PR on top of #6112, introducing the `compile` subcommand on 
`tvmc` (i.e. `python -m tvm.driver.tvmc`); a quick way to explore the new flags is 
sketched right after the list below.
    * Add the `compile` subcommand to tvmc (`tvm.driver.tvmc`)
    * Add frontends: Keras, ONNX, TensorFlow, tflite and PyTorch
    * Add tests for the `compile` subcommand
    * Add frontend dependencies to `setup.py`, to make the TVM Python package 
ready to use on install
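   
   A quick way to explore the new subcommand is its built-in help. This is only a 
sketch, assuming the standard argparse `--help` behaviour; the exact option list 
is whatever this PR's argument parser defines:
   ```
   # list the available tvmc subcommands
   python -m tvm.driver.tvmc --help
   
   # list the options accepted by the new compile subcommand, e.g. the
   # target, the output path and the codegen dump formats
   python -m tvm.driver.tvmc compile --help
   ```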
   
   Known limitations:
   - It covers only a subset of the frontends supported by TVM: Keras, ONNX, 
tflite, TensorFlow and PyTorch
   - It assumes the graph runtime only
   
   There are still two patches to be submitted on top of this one: `tvmc tune` and 
`tvmc run`. In case you want to have a look and test it, you can use 
`--dump-codegen ll`, `--dump-codegen asm` or `--dump-codegen relay` to inspect the 
output module as source.
   
   A sample usage would look like this (it assumes TVM is built and working):
   ```
   wget 
https://storage.googleapis.com/download.tensorflow.org/models/mobilenet_v1_2018_08_02/mobilenet_v1_1.0_224_quant.tgz
   
   tar xvzf mobilenet_v1_1.0_224_quant.tgz
   
   python -m tvm.driver.tvmc compile --target=llvm --output model.tar --dump-codegen relay mobilenet_v1_1.0_224_quant.tflite
   
   cat model.relay
   
   # should output mobilenet as relay
   ```
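   
   For completeness, this is one way to peek into the generated artifact. It is 
only a sketch: the exact file names packaged inside `model.tar` (graph, parameters 
and the compiled library) are an assumption and depend on how the subcommand 
serialises its output.
   ```
   # list what `tvmc compile` packaged into the output archive
   tar -tf model.tar
   
   # extract it for later use with the graph runtime (the only runtime
   # assumed by this PR)
   mkdir -p model && tar -xf model.tar -C model
   ls model
   ```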
   
   @tqchen @comaniac @jroesch, can you have a look?

