samskalicky opened a new pull request #17623: Dynamic subgraph compile support
URL: https://github.com/apache/incubator-mxnet/pull/17623
 
 
   ## Description ##
   This PR adds support for passing the ndarrays from the existing 
`optimize_for` API down to the `acceptSubgraph` function in an external 
library. There are no user-facing API changes.
   - Modifies the subgraph library example to optionally require args to be 
provided
   - Adds a new test to partition operators that directly consume params
   - Adds an annotation on each subgraph input with the name of the original param, so that inputs can be mapped back to the corresponding entries in `args`
   
   ## Design ##
   In #15886 the `optimize_for` API was added to give users a simple way to partition their models. Its `args` argument took the model's params for shape/type inference, but the ndarray values themselves were never used. In this PR, we pass the ndarray data values down to the backend library. The front-end `optimize_for` API is unchanged:
   ```
   sym = sym.optimize_for('default', args=args, ctx=mx.cpu())
   ```
   
   On the backend library side, the `acceptSubgraph` API has an additional argument, `args`, which is a map of named MXTensors. These are the same `args` that the user passed into the front-end `optimize_for` API.
   ```
   MXReturnValue acceptSubgraph(std::string json, int subgraph_id, bool* accept,
                                std::unordered_map<std::string, std::string>& options,
                                std::unordered_map<std::string, std::string>& attrs,
                                std::map<std::string, MXTensor>& args);
   ```
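   
   For reference, below is a minimal sketch of how a backend library might implement this signature and inspect the incoming args. It assumes the example library's `lib_api.h` header (providing `MXReturnValue`/`MX_SUCCESS` and `MXTensor`); the body is illustrative only and not part of this PR.
   ```
   #include <iostream>
   #include <map>
   #include <string>
   #include <unordered_map>
   #include "lib_api.h"

   MXReturnValue acceptSubgraph(std::string json, int subgraph_id, bool* accept,
                                std::unordered_map<std::string, std::string>& options,
                                std::unordered_map<std::string, std::string>& attrs,
                                std::map<std::string, MXTensor>& args) {
     // Report which params were forwarded from optimize_for for this subgraph.
     for (auto& kv : args)
       std::cout << "subgraph " << subgraph_id << " got param: " << kv.first << std::endl;

     // A real backend would compile the subgraph here, baking the param values
     // (e.g. weights) into the generated engine/module before accepting it.
     *accept = true;
     return MX_SUCCESS;
   }
   ```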
   
   This additional map of args lets backends use param/weight values when compiling subgraphs. For example, it will enable TensorRT to be implemented as a backend library and eliminate the `init_tensorrt_params` API that is currently needed to provide the params to the TensorRT backend. It will also enable compiling subgraphs with TVM and other compiler-based backends.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
