samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381097030
 
 

 ##########
 File path: example/extensions/lib_subgraph/README.md
 ##########
 @@ -0,0 +1,193 @@
+<!--- Licensed to the Apache Software Foundation (ASF) under one -->
+<!--- or more contributor license agreements.  See the NOTICE file -->
+<!--- distributed with this work for additional information -->
+<!--- regarding copyright ownership.  The ASF licenses this file -->
+<!--- to you under the Apache License, Version 2.0 (the -->
+<!--- "License"); you may not use this file except in compliance -->
+<!--- with the License.  You may obtain a copy of the License at -->
+
+<!---   http://www.apache.org/licenses/LICENSE-2.0 -->
+
+<!--- Unless required by applicable law or agreed to in writing, -->
+<!--- software distributed under the License is distributed on an -->
+<!--- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->
+<!--- KIND, either express or implied.  See the License for the -->
+<!--- specific language governing permissions and limitations -->
+<!--- under the License. -->
+
+Custom Partitioner Example and Tutorial
+=======================================
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
+
+This custom partitioner feature enables users to write custom model partitioning strategies without compiling against all of MXNet's header files and dependencies. When a library containing custom partitioners is loaded dynamically, the components found in the library are re-registered in MXNet so that users can use them natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+The custom partitioner feature was merged recently (#15969) and is not available in versions of MXNet prior to v1.7.0. To use the feature now, please install MXNet either by compiling from source code or by downloading a nightly build. For running the following example, it doesn't matter whether it is a CUDA, MKLDNN or plain MXNet build; the custom partitioner doesn't interact with the execution of other native MXNet features. Note that if you want your custom partitioners to run on GPU, you still need an MXNet CUDA build.
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the **lib_subgraph** 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate the dynamic library 
**libsubgraph_lib.so** which is compiled from the `subgraph_lib.cc` file. This 
is the library you are going to load that contains everything for the custom 
partitioner.
+2. Run `python test_subgraph.py`. It'll first load the above library, find the components, register them in the MXNet backend, then partition the model, execute the operators like regular MXNet operators, and output the result. Below is the output when running the `python test_subgraph.py` command. Notice that it loads 1 operator (`_custom_subgraph_op`) and 1 partitioner (`myProp`) with 1 strategy (`strategy1`).
+
+```
+[10:38:03] src/c_api/c_api.cc:286: Found 1 operators in library
+[10:38:03] src/c_api/c_api.cc:350:       Op[0] _custom_subgraph_op
+[10:38:03] src/c_api/c_api.cc:785: Found 1 partitioners in library
+[10:38:03] src/c_api/c_api.cc:801:       Partitioner[0] myProp
+[10:38:03] src/c_api/c_api.cc:821:             Strategy[0] strategy1 
subgraphOp: '_custom_subgraph_op'
+```
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file has a source code implementation of all required components to make a custom partitioner. It also shows how to register them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code to a dynamic shared library, using the header file `include/mxnet/lib_api.h` from the MXNet source code. Currently the custom partitioner feature is compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls `mx.library.load('libsubgraph_lib.so')` to load the library containing the custom components, partitions the model using the `optimize_for` API, and prints outputs of the forward passes. The outputs should be the same as the regular MXNet forward pass without partitioning.
+
+## Writing Custom Partitioner Library
+
+To build a library containing your own custom partitioner, compose a C++ source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `mypart_lib.so` dynamic library using the following 
command:
+
+```bash
+g++ -shared -fPIC -std=c++11 mypart_lib.cc -o libmypart_lib.so -I 
../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and partition a 
model with your custom partitioner:
+
+```python
+import mxnet as mx
+from mxnet.gluon import nn
+
+mx.library.load('libmypart_lib.so')
+sym, _, _ = mx.model.load_checkpoint('mymodel', 0)
+
+# Symbol/Module flow
+sym2 = sym.optimize_for("myPart")
+
+# Gluon flow
+sym_block = nn.SymbolBlock(sym, inputs)
+sym_block.hybridize(backend='myPart')
+```
+
+### Using a Custom Partitioner Library
+
+Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For 
the Symbol API, the `optimize_for` API can be called on Symbol objects to 
return a partitioned Symbol.
+
+```
+optimize_for(backend, args=None, ctx=None, **kwargs)
+```
+
+The `optimize_for` API takes at least one argument, `backend`, a string that identifies which backend to partition the model for. The optional `args` argument takes a list of NDArray or a dict of str to NDArray; it is used to infer shapes and types before partitioning. The optional `ctx` argument takes a device context to infer storage types. `optimize_for` also takes any other user-specified options, which are passed on to the backend partitioning APIs.
+
+For the Gluon API, the `hybridize` API can be called on HybridBlocks to 
partition the internal CachedOp Symbol.
+
+```
+hybridize(backend=None, backend_opts=None)
+```
+
+When the `hybridize` function is called, Gluon will convert the program’s 
execution into the style used in symbolic programming. The `backend` argument 
is a string that identifies which backend to partition the model for. The 
`backend_opts` takes other user-specified options that will be passed to the 
backend partitioning APIs.
+
+### Writing A Custom Partitioner
+
+There are several essential building blocks for making a custom partitioner:
+
+* [initialize](./subgraph_lib.cc#L242):
+    * This function is the library initialization function necessary for any dynamic libraries. It lets you check if the user is using a compatible version of MXNet. Note that this `version` parameter is passed from MXNet when the library is loaded.
+
+            MXReturnValue initialize(int version)
+
+* [supportedOps](./subgraph_lib.cc#L179):
+    * This function provides a copy of the model graph as a JSON string and an interface for identifying which operators should be partitioned into a subgraph. This is also where a custom partitioner can validate the options specified by the user.
+
+            MXReturnValue supportedOps(
+                std::string json,
+                const int num_ids,
+                int *ids,
+                std::unordered_map<std::string, std::string>& options)
+
+* [REGISTER_PARTITIONER(my_part_name)](./subgraph_lib.cc#L238):
+    * This macro registers the custom partitioner and its properties with MXNet under the given name. Notice that a partitioner can have multiple partitioning strategies; this enables multiple *passes* to be run in a single partitioning call from the user. The first argument to `addStrategy` is a user-specified name. The second argument is the `supportedOps` function. The third argument is the name of the subgraph operator to create for each subgraph created during partitioning (see below for more info about subgraph operators). The `setAcceptSubgraph` API registers a callback function that is called for each subgraph created during partitioning (more on this below). Its first argument is the strategy to associate with, and its second argument is the `acceptSubgraph` function.
+
+            REGISTER_PARTITIONER(my_part_name)
+            .addStrategy("strategy1",
+                         supportedOps,
+                         "_custom_subgraph_op")
+            .setAcceptSubgraph("strategy1",
+                               acceptSubgraph);
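+
+The pieces above can be sketched in a self-contained way. This is only a sketch, not the library's actual code: the `MXReturnValue` enum mirrors the one in `lib_api.h`, the minimum-version constant is hypothetical, and since parsing the graph JSON is out of scope here, a plain list of node op names stands in for the parsed graph:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for the enum defined in include/mxnet/lib_api.h
enum MXReturnValue { MX_FAIL = 0, MX_SUCCESS = 1 };

// Hypothetical minimum MXNet version (encoded as major*10000 + minor*100 + patch)
const int MIN_VERSION = 10700;  // v1.7.0

// Library initialization: reject MXNet versions that are too old
MXReturnValue initialize(int version) {
  return version >= MIN_VERSION ? MX_SUCCESS : MX_FAIL;
}

// supportedOps-style check: mark exp and log nodes for partitioning.
// In a real library the op names come from the graph JSON string;
// here they are passed in directly to keep the sketch runnable.
MXReturnValue mySupportedOps(const std::vector<std::string>& node_ops,
                             const int num_ids, int* ids,
                             std::unordered_map<std::string, std::string>& options) {
  // A hypothetical user option that disables partitioning entirely
  if (options.count("disable") && options["disable"] == "True")
    return MX_SUCCESS;  // all ids stay 0: no nodes are selected
  for (int i = 0; i < num_ids && i < static_cast<int>(node_ops.size()); i++) {
    if (node_ops[i] == "exp" || node_ops[i] == "log")
      ids[i] = 1;  // nonzero: include this node in a subgraph
  }
  return MX_SUCCESS;
}
```

+Nodes whose `ids` entry is left at zero stay in the main graph; nodes marked nonzero are grouped into subgraphs and replaced by the subgraph operator named in `addStrategy`.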
+
+
+Also there are some optional functions you can specify:
+
+* [acceptSubgraph](./subgraph_lib.cc#L220):
+    * This function provides an opportunity to accept or reject a subgraph after MXNet partitions it. It also allows specifying custom attributes on the subgraph (e.g. user-generated IDs). If you do not register this function, subgraphs are accepted by default.
+
+            MXReturnValue acceptSubgraph(
 
 Review comment:
   how about `reviewSubgraph`?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
