[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-20 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r382297048
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,192 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
+
+This custom partitioner feature enables users to write custom model partitioning strategies without compiling against all of the MXNet header files and dependencies. When a library containing custom partitioners is loaded dynamically, the components found in the library are re-registered in MXNet so that users can use them natively, just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+The custom partitioner feature was merged recently (#15969) and is not available in versions of MXNet prior to v1.7.0. To use the feature now, please install MXNet either from the nightly pip wheel or by compiling from source. For running the following example, it doesn’t matter whether it is a CUDA, MKLDNN, or plain MXNet build; the custom partitioner doesn’t interact with the execution of other native MXNet features. Note that if you want your custom partitioners to run on GPU, you still need an MXNet CUDA build.
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the **lib_subgraph** 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate the dynamic library 
**libsubgraph_lib.so** which is compiled from the `subgraph_lib.cc` file. This 
is the library you are going to load that contains everything for the custom 
partitioner.
+2. Run `python test_subgraph.py`. It’ll first load the above library, find the components, register them in the MXNet backend, then partition the model, execute the operators like regular MXNet operators, and output the result. Below is the output when running the `python test_subgraph.py` command. Notice that it loads 1 operator (`_custom_subgraph_op`) and 1 partitioner (`myProp`) with 1 strategy (`strategy1`).
+
+```
+[10:38:03] src/c_api/c_api.cc:286: Found 1 operators in library
+[10:38:03] src/c_api/c_api.cc:350:   Op[0] _custom_subgraph_op
+[10:38:03] src/c_api/c_api.cc:785: Found 1 partitioners in library
+[10:38:03] src/c_api/c_api.cc:801:   Partitioner[0] myProp
+[10:38:03] src/c_api/c_api.cc:821: Strategy[0] strategy1 
subgraphOp: '_custom_subgraph_op'
+```
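The grouping that a partitioning strategy performs can be pictured with a toy sketch. The following is plain Python for illustration only (it is not MXNet code, and the names `SUPPORTED` and `partition` are made up): contiguous runs of supported operators are collapsed into subgraph nodes.

```python
# Toy sketch of subgraph grouping (illustrative only, not MXNet internals).
# Contiguous runs of supported ops become ('subgraph', [...]) nodes.

SUPPORTED = {"exp", "log"}  # hypothetical: ops our partitioner claims to support

def partition(ops):
    """Group contiguous supported ops into ('subgraph', [...]) nodes."""
    result, current = [], []
    for op in ops:
        if op in SUPPORTED:
            current.append(op)
        else:
            if current:
                result.append(("subgraph", current))
                current = []
            result.append(op)
    if current:
        result.append(("subgraph", current))
    return result

print(partition(["exp", "log", "relu", "exp"]))
# → [('subgraph', ['exp', 'log']), 'relu', ('subgraph', ['exp'])]
```

The real partitioner works on a computation graph rather than a flat list, but the idea is the same: supported regions are replaced by a single subgraph operator.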
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file has a source code implementation of all required components of a custom partitioner. It also shows how to register them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code to a dynamic shared library, using the header file `include/mxnet/lib_api.h` from the MXNet source code. Currently the custom partitioner is compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls `mx.library.load('libsubgraph_lib.so')` to load the library containing the custom components, partitions the model using the `optimize_for` API, and prints the outputs of the forward passes. The outputs should be the same as the regular MXNet forward pass without partitioning.
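The "outputs should be the same" check amounts to an elementwise tolerance comparison. As a rough, self-contained sketch of that rtol/atol idea (plain Python for illustration; not the actual MXNet or NumPy helper, and the sample values are made up):

```python
# Sketch of an rtol/atol comparison between two forward-pass outputs
# (illustrative only; real tests use numpy.testing-style helpers).

def almost_equal(a, b, rtol=1e-3, atol=1e-3):
    """True if every pair differs by at most atol + rtol * |reference|."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))

# Hypothetical outputs: unpartitioned vs. partitioned forward pass
out_regular     = [1.0, 2.7182817, 0.6931472]
out_partitioned = [1.0, 2.7182820, 0.6931469]

assert almost_equal(out_regular, out_partitioned)
```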
+
 
 Review comment:
   Made the change in #17623 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-19 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381479830
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,192 @@
 
 Review comment:
  If we don't get it into this PR, we'll get it into the next one: #17623




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-19 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381115979
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -1271,7 +1271,17 @@ extern "C" {
 for (int i = 0; i < num_opts; i++) {
   opts[std::string(opt_keys[i])] = std::string(opt_vals[i]);
 }
-return supportedOps(subgraph_json, num_ids, ids, opts);
+// create array of bools for operator support
+std::vector<bool> _ids(num_ids, false);
+// call user's supportedOps function
+MXReturnValue retval = supportedOps(subgraph_json, _ids, opts);
+if (!retval) return retval;
+
+// copy bools in ids to ints
+for (int i = 0; i < num_ids; i++)
 
 Review comment:
   No, since this will fail out here:
   
https://github.com/apache/incubator-mxnet/blob/a11a9f9a8a0d412e421a87263dad9a4cde076d11/src/operator/subgraph/partitioner/custom_subgraph_property.h#L191-L195
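For intuition, the bridging pattern in the quoted hunk — let the user's `supportedOps` fill a vector of bools, then copy the results into the C API's int array and propagate failures — can be sketched in plain Python. This is illustrative only; `supported_ops_bridge` and `my_supported_ops` are made-up names, not part of the MXNet API:

```python
# Illustrative sketch of the bool->int bridging in the quoted lib_api.h hunk
# (pure Python, not the actual MXNet C-API glue code).

def supported_ops_bridge(user_supported_ops, subgraph_json, ids, opts):
    """Call the user's supportedOps with a bool list, then copy into ids (ints)."""
    _ids = [False] * len(ids)            # std::vector<bool> _ids(num_ids, false)
    retval = user_supported_ops(subgraph_json, _ids, opts)
    if not retval:
        return retval                    # propagate the user's failure unchanged
    for i, flag in enumerate(_ids):      # copy bools in _ids to ints in ids
        ids[i] = int(flag)
    return retval

# Hypothetical user function: mark every "exp" op as supported
def my_supported_ops(json_str, ids, opts):
    for i, op in enumerate(["exp", "relu", "exp"]):
        ids[i] = (op == "exp")
    return 1  # success

ids = [0, 0, 0]
supported_ops_bridge(my_supported_ops, "{}", ids, {})
print(ids)  # → [1, 0, 1]
```

As the review thread notes, this conversion runs only once per model, up-front, so the extra copy is not on any hot path.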




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-19 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381115979
 
 

 ##
 File path: include/mxnet/lib_api.h
 ##
 @@ -1271,7 +1271,17 @@ extern "C" {
 
 Review comment:
  Technically no, but we would need to check, so we might as well update anyway. Remember that this happens only once for the model, up-front.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381126732
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
+## Writing Custom Partitioner Library
+
+For building a library containing your own custom partitioner, compose a C++ source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `mypart_lib.so` dynamic library using the following 
command:
+
+```bash
+g++ -shared -fPIC -std=c++11 mypart_lib.cc -o libmypart_lib.so -I 
../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and partition a 
model with your custom partitioner:
+
+```python
+import mxnet as mx
+from mxnet.gluon import nn
+
+mx.library.load('libmypart_lib.so')
+sym, _, _ = mx.model.load_checkpoint('mymodel', 0)
+
+# Symbol/Module flow
+sym2 = sym.optimize_for("myPart")
+
+# Gluon flow
+sym_block = nn.SymbolBlock(sym, inputs)
+sym_block.hybridize(backend='myPart')
+```
+
+### Using a Custom Partitioner Library
+
+Partitioning APIs in MXNet are available in both Symbol and Gluon APIs. For 
the Symbol API, the `optimize_for` API can be called on Symbol objects to 
return a parti




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381102009
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
 
 Review comment:
   done




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381102033
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
 
 Review comment:
   done




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381101971
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381099733
 
 

 ##
 File path: tests/python/unittest/test_extensions.py
 ##
 @@ -157,46 +159,10 @@ def test_subgraph():
 # check that result matches one executed by MXNet
assert_almost_equal(out[0].asnumpy(), out3[0].asnumpy(), rtol=1e-3, atol=1e-3)
 
-@unittest.skipIf(check_platform(), "not all machine types supported")
-@unittest.skipIf(is_cd_run(), "continuous delivery run - ignoring test")
-@unittest.skipIf(default_context().device_type == 'cpu', "ignoring custom_op_gpu test on cpu run")
-def test_custom_op_gpu():
 
 Review comment:
  It's not removed, it's refactored. See the "Changes" section of the PR description: the test was moved to the **test_extensions_gpu.py** file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381096874
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
+
+This custom partitioner feature enables users to write custom model 
partitioning strategies without compiling against all of MXNet header files and 
dependencies. When a library containing custom partitioners is loaded 
dynamically, the components found in the library will be re-registered in MXNet 
so that users can use those natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+The custom partitioner feature was merged recently (#15969) and is not 
available in versions of MXNet prior to v1.7.0. To use the feature now, please 
install MXNet either by compiling from source code or downloading a nightly 
build. For running the following example, it doesn’t matter if it is a CUDA, 
MKLDNN or plain MXNet build; the custom partitioner doesn’t interact with the 
execution of other native MXNet features. Note that if you want to write your 
custom partitioners running on GPU, you still need an MXNet CUDA build. 
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the **lib_subgraph** 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate the dynamic library 
**libsubgraph_lib.so** which is compiled from the `subgraph_lib.cc` file. This 
is the library you are going to load that contains everything for the custom 
partitioner.
+2. Run `python test_subgraph.py`. It’ll first load the above library, find the 
components, register them in the MXNet backend, then partition the model and 
execute the operators like a regular MXNet operator and output the result. 
Below is the output when running the `python test_subgraph.py` command. Notice 
that it loads 1 operator (`_custom_subgraph_op`) and 1 partitioner (`myProp`).
+
+```
+[10:38:03] src/c_api/c_api.cc:286: Found 1 operators in library
+[10:38:03] src/c_api/c_api.cc:350:   Op[0] _custom_subgraph_op
+[10:38:03] src/c_api/c_api.cc:785: Found 1 partitioners in library
+[10:38:03] src/c_api/c_api.cc:801:   Partitioner[0] myProp
+[10:38:03] src/c_api/c_api.cc:821: Strategy[0] strategy1 
subgraphOp: '_custom_subgraph_op'
+```
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file has a source code implementation 
of all required components to make a custom partitioner; it also shows the 
registration of them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code to a dynamic 
shared library, with a header file `include/mxnet/lib_api.h` from MXNet source 
code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls 
`mx.library.load('libsubgraph_lib.so')` to load the library containing the 
custom components, partitions the model using the `optimize_for` API, and 
prints outputs of the forward passes. The outputs should be the same as the 
regular MXNet forward pass without partitioning.
+
+## Writing Custom Partitioner Library
+
+To build a library containing your own custom partitioner, compose a C++ 
source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write 
your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `libmypart_lib.so` dynamic library using the following 
command:
+
+```bash
+g++ -shared -fPIC -std=c++11 mypart_lib.cc -o libmypart_lib.so -I ../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and partition a 
model with your custom partitioner:
+
+```python
+import mxnet as mx
+from mxnet.gluon import nn
+mx.library.load('libmypart_lib.so')
+sym, _, _ = mx.model.load_checkpoint('mymodel', 0)
+
+# Symbol/Module flow
+sym2 = sym.optimize_for("myPart")
+
+# Gluon flow
+sym_block = nn.SymbolBlock(sym, inputs)
 
 Review comment:
   No, here we're creating a HybridBlock from an existing Symbol object, so we 
use SymbolBlock (which inherits from HybridBlock)



[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381097030
 
 


[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381096737
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
+
+This custom partitioner feature enables users to write custom model 
partitioning strategies without compiling against all of MXNet header files and 
dependencies. When a library containing custom partitioners is loaded 
dynamically, the components found in the library will be re-registered in MXNet 
so that users can use those natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+The custom partitioner feature was merged recently (#15969) and is not 
available in versions of MXNet prior to v1.7.0. To use the feature now, please 
install MXNet either by compiling from source code or downloading a nightly 
build. For running the following example, it doesn’t matter if it is a CUDA, 
MKLDNN or plain MXNet build; the custom partitioner doesn’t interact with the 
execution of other native MXNet features. Note that if you want to write your 
custom partitioners running on GPU, you still need an MXNet CUDA build. 
 
 Review comment:
  It doesn't matter, but if you intend to use GPU then you have to compile for 
CUDA. It just means that the build flavor doesn't matter. If you have suggestions on 
how to improve the wording I would appreciate any help.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381096111
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
+
+This custom partitioner feature enables users to write custom model 
partitioning strategies without compiling against all of MXNet header files and 
dependencies. When a library containing custom partitioners is loaded 
dynamically, the components found in the library will be re-registered in MXNet 
so that users can use those natively just like other built-in components.
 
 Review comment:
   I used the term "components" since we're still at the beginning of the doc 
and I wanted to use a general term for things before we dig into the nitty 
gritty details. Here components refers to the custom subgraph operator and 
partitioner that will be registered with the REGISTER_* macros.




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r381095547
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,193 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows adding 
custom partitioners by dynamically loading external libraries at runtime.
 
 Review comment:
  No, but I didn't want to use the name of the internal classes (i.e. subgraph 
property) since it's not really relevant here. For users writing custom 
"subgraph properties" all they care about is that they are going to partition 
the graph. So the code they write forms a "partitioner".




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606823
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,163 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration, followed by 
recompiling MXNet from source with all of its dependencies. This feature allows 
adding custom partitioners by dynamically loading custom C++ partitioners 
compiled in external libraries at runtime.
+
+Custom partitioners enable users to write custom model partitioning strategies 
without compiling against all of MXNet header files and dependencies. When a 
library containing custom partitioners is loaded dynamically, the components 
found in the library will be re-registered in MXNet so that users can use those 
natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet either from compiling from source code or 
downloading a nightly build. It doesn’t matter if the build comes with CUDA or 
MKLDNN. The custom partitioning APIs do not interact with the execution of 
other native MXNet operators.
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the `lib_subgraph` 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate a dynamic library 
**libsubgraph_lib.so** compiled from `subgraph_lib.cc`. This is the library you 
are going to load that contains everything for the custom partitioner.
+2. Run `python test_subgraph.py`. It’ll first load the above library, find the 
components, register them in the MXNet backend, print "Found x", then partition 
the model and execute the operators like a regular MXNet operator and output 
the result.
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file has a source code implementation 
of all required components to make a custom partitioner, it also shows 
registration of them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code to a dynamic 
shared library, with a header file `include/mxnet/lib_api.h` from MXNet source 
code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls 
`mx.library.load('libsubgraph_lib.so')` to load the library containing the 
custom components, partitions the model using the `optimize_for` API, and 
prints outputs of the forward passes. The outputs should be the same as the 
regular MXNet forward pass without partitioning.
+
+## Writing Custom Partitioner Library
+
+To build a library containing your own custom partitioner, compose a C++ 
source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write 
your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `libmypart_lib.so` dynamic library using the following 
command:
+```bash
+g++ -shared -fPIC -std=c++11 mypart_lib.cc -o libmypart_lib.so -I ../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and partition a 
model with your custom partitioner:
+```python
+import mxnet as mx
+from mxnet.gluon import nn
+mx.library.load('libmypart_lib.so')
+sym, _, _ = mx.model.load_checkpoint('mymodel', 0)
+
+# Symbol/Module flow
+sym2 = sym.optimize_for("myPart")
+
+# Gluon flow
+sym_block = nn.SymbolBlock(sym, inputs)
+sym_block.hybridize(backend='myPart')
+```
+
+### Writing A Custom Partitioner
+
+There are several essential building blocks for making a custom partitioner:
+
+* [initialize](./subgraph_lib.cc#L242):
+* This function is the library initialization function necessary for any 
dynamic library. It lets you check if the user is using a compatible version 
of MXNet. Note that the `version` parameter is passed from MXNet when the 
library is loaded.
+
+MXReturnValue initialize(int version)
+
+* [supportedOps](./subgraph_lib.cc#L179):
+* This function provides a copy of the model graph as a JSON string, and 
provides an interface for identifying which operators should be partitioned 
into a subgraph. This is also where a custom partitioner can validate the 
options specified by the user.
+
+MXReturnValue supportedOps(
+    std::string json,
+    const int num_ids,
+    int *ids,
+    std::unordered_map<std::string, std::string>& options)
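The `ids` array in the signature above is filled in by the partitioner, one flag per node in the graph. A hypothetical plain-Python analogue of that contract (illustrative only; the real interface is the C++ one above, and the option key used here is made up):

```python
import json

def supported_ops(graph_json, ids, options):
    """Mimic the C++ contract: flag each node this partitioner claims."""
    # Validate user-supplied options first, returning failure on unknown keys,
    # the way a real supportedOps might return a failed MXReturnValue.
    for key in options:
        if key != "strategy":
            return False
    nodes = json.loads(graph_json)["nodes"]
    for i, node in enumerate(nodes):
        ids[i] = 1 if node["op"] in ("exp", "log") else 0
    return True

graph = json.dumps({"nodes": [{"op": "exp"}, {"op": "relu"}, {"op": "log"}]})
ids = [0] * 3
ok = supported_ops(graph, ids, {"strategy": "strategy1"})
print(ok, ids)  # True [1, 0, 1]
```

Note that the caller owns the `ids` buffer (sized to the number of nodes) and the function only marks it, mirroring how MXNet passes a pre-allocated array to the C++ callback.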
+
+* [REGISTER_PARTITIONER(

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606733
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -724,17 +745,32 @@ int MXLoadLib(const char *path) {
      regOp.set_attr<FComputeEx>("FComputeEx", forward_gpu_lambda, plevel);
    }
  }
-// optionally add fgradient if user specified a function
+// optionally add fgradient if user specified a function, or for stateful ops
 if (backward_ctx_map.size() != 0 || createop_map.size() != 0) {
-  regOp.set_attr<nnvm::FGradient>("FGradient", grad_reg, plevel);
   std::string grad_name = "_backward_" + name_str;
   nnvm::Op &gradOp = dmlc::Registry<nnvm::Op>::Get()->__REGISTER_OR_GET__(grad_name);
+  regOp.set_attr<nnvm::FGradient>("FGradient", grad_reg, plevel);
   gradOp.set_attr<bool>("TIsBackward", true, plevel);
-  gradOp.set_attr_parser(attr_parser);
-  gradOp.set_num_inputs(num_inouts);
-  gradOp.set_num_outputs(num_inputs);
   gradOp.set_attr<FInferStorageType>("FInferStorageType", infer_storage_type, plevel);
   gradOp.set_attr<FResourceRequest>("FResourceRequest", resc_req, plevel);
+
+  if (!isSubgraphOp) {
+// register attr parser and standard functions for non-subgraph ops
+gradOp.set_attr_parser(attr_parser);
+gradOp.set_num_inputs(num_inouts);
+gradOp.set_num_outputs(num_inputs);
+  } else {
+// for subgraph ops use special functions
 
 Review comment:
   updated comment




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606470
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,163 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and, followed by 
recompiling MXNet from source with all of its dependencies. This feature allows 
adding custom partitioners by dynamically loading custom C++ partitioners 
compiled in external libraries at runtime.
+
+Custom partitioners enable users to write custom model partitioning strategies 
without compiling against all of MXNet header files and dependencies. When a 
library containing custom partitioners is loaded dynamically, the components 
found in the library will be re-registered in MXNet so that users can use those 
natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling from source code or 
downloading a nightly build. It doesn’t matter if the build comes with CUDA or 
MKLDNN. The custom partitioning APIs do not interact with the execution of 
other native MXNet operators.
 
 Review comment:
   done




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606426
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,163 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration, followed by 
recompiling MXNet from source with all of its dependencies. This feature allows 
adding custom partitioners by dynamically loading custom C++ partitioners 
compiled in external libraries at runtime.
 
 Review comment:
   revised, let me know if this addresses your concerns




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606048
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -724,17 +745,32 @@ int MXLoadLib(const char *path) {
      regOp.set_attr<FComputeEx>("FComputeEx", forward_gpu_lambda, plevel);
    }
  }
-// optionally add fgradient if user specified a function
+// optionally add fgradient if user specified a function, or for stateful ops
 if (backward_ctx_map.size() != 0 || createop_map.size() != 0) {
-  regOp.set_attr<nnvm::FGradient>("FGradient", grad_reg, plevel);
   std::string grad_name = "_backward_" + name_str;
   nnvm::Op &gradOp = dmlc::Registry<nnvm::Op>::Get()->__REGISTER_OR_GET__(grad_name);
+  regOp.set_attr<nnvm::FGradient>("FGradient", grad_reg, plevel);
   gradOp.set_attr<bool>("TIsBackward", true, plevel);
-  gradOp.set_attr_parser(attr_parser);
-  gradOp.set_num_inputs(num_inouts);
-  gradOp.set_num_outputs(num_inputs);
   gradOp.set_attr<FInferStorageType>("FInferStorageType", infer_storage_type, plevel);
   gradOp.set_attr<FResourceRequest>("FResourceRequest", resc_req, plevel);
+
+  if (!isSubgraphOp) {
+// register attr parser and standard functions for non-subgraph ops
+gradOp.set_attr_parser(attr_parser);
+gradOp.set_num_inputs(num_inouts);
+gradOp.set_num_outputs(num_inputs);
+  } else {
+// for subgraph ops use special functions
+using namespace mxnet::op;
+auto grad_inouts = [=](const nnvm::NodeAttrs& attrs) {
+  uint32_t cnt = DefaultSubgraphOpNumInputs(attrs);
+  cnt += 2 * DefaultSubgraphOpNumOutputs(attrs);
 
 Review comment:
   added this comment:
   ```
    // for backward passes, inputs + outputs + input gradients (one for each output)
   ```
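In other words, a backward subgraph node with `n` forward inputs and `m` forward outputs receives `n + m + m` inputs, which matches the `cnt` computation in the diff. A quick sanity sketch of that count (illustrative only):

```python
def backward_num_inputs(num_inputs, num_outputs):
    # forward inputs + forward outputs + one incoming gradient per output
    return num_inputs + 2 * num_outputs

# a subgraph op with 3 inputs and 1 output: backward sees 3 + 1 + 1 = 5 tensors
print(backward_num_inputs(3, 1))  # 5
```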




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380605888
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,163 @@
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration, followed by 
recompiling MXNet from source with all of its dependencies. This feature allows 
adding custom partitioners by dynamically loading custom C++ partitioners 
compiled in external libraries at runtime.
+
+Custom partitioners enable users to write custom model partitioning strategies 
without compiling against all of MXNet header files and dependencies. When a 
library containing custom partitioners is loaded dynamically, the components 
found in the library will be re-registered in MXNet so that users can use those 
natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling from source code or 
downloading a nightly build. It doesn’t matter if the build comes with CUDA or 
MKLDNN. The custom partitioning APIs do not interact with the execution of 
other native MXNet operators.
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the `lib_subgraph` 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate a dynamic library 
**libsubgraph_lib.so** compiled from `subgraph_lib.cc`. This is the library you 
are going to load that contains everything for the custom partitioner.
+2. Run `python test_subgraph.py`. It’ll first load the above library, find the 
components, register them in the MXNet backend, print "Found x", then partition 
the model and execute the operators like a regular MXNet operator and output 
the result.
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file has a source code implementation 
of all required components to make a custom partitioner; it also shows the 
registration of them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code to a dynamic 
shared library, with a header file `include/mxnet/lib_api.h` from MXNet source 
code. Currently the custom operator is compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls 
`mx.library.load('libsubgraph_lib.so')` to load the library containing the 
custom components, partitions the model using the `optimize_for` API, and 
prints outputs of the forward passes. The outputs should be the same as the 
regular MXNet forward pass without partitioning.
+
+## Writing Custom Partitioner Library
+
+To build a library containing your own custom partitioner, compose a C++ 
source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write 
your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `libmypart_lib.so` dynamic library using the following 
command:
+```bash
+g++ -shared -fPIC -std=c++11 mypart_lib.cc -o libmypart_lib.so -I ../../../include/mxnet
+```
+
+Finally, you can write a Python script to load the library and partition a 
model with your custom partitioner:
+```python
+import mxnet as mx
+from mxnet.gluon import nn
+mx.library.load('libmypart_lib.so')
+sym, _, _ = mx.model.load_checkpoint('mymodel', 0)
+
+# Symbol/Module flow
+sym2 = sym.optimize_for("myPart")
+
+# Gluon flow
+sym_block = nn.SymbolBlock(sym, inputs)
+sym_block.hybridize(backend='myPart')
+```
+
+### Writing A Custom Partitioner
+
+There are several essential building blocks for making a custom partitioner:
+
+* [initialize](./subgraph_lib.cc#L242):
+* This function is the library initialization function necessary for any dynamic library. It checks whether the user is running a compatible version of MXNet. Note that the `version` parameter is passed from MXNet when the library is loaded.
+
+```c++
+MXReturnValue initialize(int version)
+```
+
+* [supportedOps](./subgraph_lib.cc#L179):
+* This function receives a copy of the model graph as a JSON string and provides an interface for identifying which operators should be partitioned into a subgraph. This is also where a custom partitioner can validate any options specified by the user.
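The selection logic a `supportedOps` implementation performs can be sketched independently of MXNet. The sketch below is a hypothetical Python analogue, assuming a graph JSON with the `nodes`/`op` layout that MXNet symbols serialize to, and mimics the example library's choice of partitioning `exp` and `log`:

```python
import json

def supported_ops(graph_json, ops_to_partition=("exp", "log")):
    # Return the indices of nodes whose operator should go into a subgraph.
    graph = json.loads(graph_json)
    return [i for i, node in enumerate(graph["nodes"])
            if node["op"] in ops_to_partition]

# A tiny hand-written graph in the same spirit as MXNet's symbol JSON.
graph = json.dumps({"nodes": [
    {"op": "null",  "name": "data"},
    {"op": "exp",   "name": "exp0"},
    {"op": "log",   "name": "log0"},
    {"op": "_copy", "name": "copy0"},
]})
print(supported_ops(graph))  # [1, 2]
```

The real `supportedOps` in `subgraph_lib.cc` does the same walk over the graph JSON in C++, marking the supported node indices via the interface MXNet passes in.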
 
 Review comment:
   added the "Using a Custom Partitioner Library" section to show how users 
provide options


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380606048
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -724,17 +745,32 @@ int MXLoadLib(const char *path) {
 regOp.set_attr("FComputeEx", forward_gpu_lambda, 
plevel);
   }
 }
-// optionally add fgradient if user specified a function
+// optionally add fgradient if user specified a function, or for stateful 
ops
 if (backward_ctx_map.size() != 0 || createop_map.size() != 0) {
-  regOp.set_attr("FGradient", grad_reg, plevel);
   std::string grad_name = "_backward_" + name_str;
   nnvm::Op &gradOp = 
dmlc::Registry<nnvm::Op>::Get()->__REGISTER_OR_GET__(grad_name);
+  regOp.set_attr("FGradient", grad_reg, plevel);
   gradOp.set_attr("TIsBackward", true, plevel);
-  gradOp.set_attr_parser(attr_parser);
-  gradOp.set_num_inputs(num_inouts);
-  gradOp.set_num_outputs(num_inputs);
   gradOp.set_attr("FInferStorageType", 
infer_storage_type, plevel);
   gradOp.set_attr("FResourceRequest", resc_req, plevel);
+
+  if (!isSubgraphOp) {
+// register attr parser and standard functions for non-subgraph ops
+gradOp.set_attr_parser(attr_parser);
+gradOp.set_num_inputs(num_inouts);
+gradOp.set_num_outputs(num_inputs);
+  } else {
+// for subgraph ops use special functions
+using namespace mxnet::op;
+auto grad_inouts = [=](const nnvm::NodeAttrs& attrs) {
+  uint32_t cnt = DefaultSubgraphOpNumInputs(attrs);
+  cnt += 2 * DefaultSubgraphOpNumOutputs(attrs);
 
 Review comment:
   added a comment




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-18 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380605491
 
 

 ##
 File path: example/extensions/lib_subgraph/README.md
 ##
 @@ -0,0 +1,163 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Custom Partitioner Example and Tutorial
+===
+
+## Introduction
+
+Adding custom model partitioners in MXNet used to require deep understanding 
of the MXNet backend, including operator registration and other internal 
classes, followed by recompiling MXNet from source. This feature allows 
adding custom partitioners by dynamically loading custom C++ partitioners 
compiled in external libraries at runtime.
+
+Custom partitioners enable users to write custom model partitioning strategies 
without compiling against all of the MXNet header files and dependencies. When a 
library containing custom partitioners is loaded dynamically, the components 
found in the library will be re-registered in MXNet so that users can use them 
natively just like other built-in components.
+
+## Getting Started
+
+### Have MXNet Ready
+
+First you should install MXNet, either by compiling from source code or by 
downloading a nightly build. It doesn’t matter if the build comes with CUDA or 
MKLDNN; the custom partitioning APIs do not interact with the execution of 
other native MXNet operators.
+
+### Run An Example
+
+You can start getting familiar with custom partitioners by running an example 
provided in the **example/extensions/lib_subgraph** directory. This example 
partitions `exp` and `log` operators into subgraphs. Go to the `lib_subgraph` 
directory and follow these steps:
+
+1. Run `make`. The Makefile will generate a dynamic library 
**libsubgraph_lib.so** compiled from `subgraph_lib.cc`. This is the library you 
are going to load that contains everything for the custom partitioner.
+2. Run `python test_subgraph.py`. It’ll first load the above library, find the 
components, register them in the MXNet backend, print "Found x", then partition 
the model, execute the operators just like regular MXNet operators, and output 
the result.
+
+### Basic Files For Custom Partitioner Library
+
+* **lib_subgraph/subgraph_lib.cc**: This file contains the source code implementation of all the components required for a custom partitioner; it also shows how to register them so that they can be loaded by MXNet.
+
+* **lib_subgraph/Makefile**: This file compiles the source code into a dynamic shared library, using the header file `include/mxnet/lib_api.h` from the MXNet source code. Currently custom partitioners are compatible with C++11 onwards.
+
+* **lib_subgraph/test_subgraph.py**: This file calls `mx.library.load('libsubgraph_lib.so')` to load the library containing the custom components, partitions the model using the `optimize_for` API, and prints the outputs of the forward passes. The outputs should be the same as those of a regular MXNet forward pass without partitioning.
+
+## Writing Custom Partitioner Library
+
+For building a library containing your own custom partitioner, compose a C++ source file like `mypart_lib.cc`, include the `lib_api.h` header file, and write your custom partitioner with these essential functions:
+- `initialize` - Library Initialization Function
+- `REGISTER_PARTITIONER` - Partitioner Registration Macro
+- `mySupportedOps` - Operator Support
+
+Then compile it to the `mypart_lib.so` dynamic library using the following 
command:
 
 Review comment:
   done




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-17 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380498775
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -724,17 +745,32 @@ int MXLoadLib(const char *path) {
 regOp.set_attr("FComputeEx", forward_gpu_lambda, 
plevel);
   }
 }
-// optionally add fgradient if user specified a function
+// optionally add fgradient if user specified a function, or for stateful 
ops
 if (backward_ctx_map.size() != 0 || createop_map.size() != 0) {
-  regOp.set_attr("FGradient", grad_reg, plevel);
   std::string grad_name = "_backward_" + name_str;
   nnvm::Op &gradOp = 
dmlc::Registry<nnvm::Op>::Get()->__REGISTER_OR_GET__(grad_name);
+  regOp.set_attr("FGradient", grad_reg, plevel);
   gradOp.set_attr("TIsBackward", true, plevel);
-  gradOp.set_attr_parser(attr_parser);
-  gradOp.set_num_inputs(num_inouts);
-  gradOp.set_num_outputs(num_inputs);
   gradOp.set_attr("FInferStorageType", 
infer_storage_type, plevel);
   gradOp.set_attr("FResourceRequest", resc_req, plevel);
+
+  if (!isSubgraphOp) {
+// register attr parser and standard functions for non-subgraph ops
+gradOp.set_attr_parser(attr_parser);
+gradOp.set_num_inputs(num_inouts);
+gradOp.set_num_outputs(num_inputs);
+  } else {
+// for subgraph ops use special functions
+using namespace mxnet::op;
+auto grad_inouts = [=](const nnvm::NodeAttrs& attrs) {
+  uint32_t cnt = DefaultSubgraphOpNumInputs(attrs);
+  cnt += 2 * DefaultSubgraphOpNumOutputs(attrs);
+  return cnt;
+};
+gradOp.set_num_inputs(grad_inouts);
+gradOp.set_num_outputs(DefaultSubgraphOpNumInputs);
 
 Review comment:
   In a forward pass we have `num_in` inputs producing `num_out` outputs. A 
backward pass takes the `num_in` inputs, the `num_out` outputs, and the input 
gradients (one for each output), so it totals: `num_in + 2 * num_out`
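The counting above can be sketched as a pair of tiny helpers (hypothetical names, independent of the actual `grad_inouts` lambda in `c_api.cc`):

```python
def backward_num_inputs(num_in, num_out):
    # A backward node consumes: the incoming gradients (one per forward
    # output), plus the original forward inputs and forward outputs.
    return num_in + 2 * num_out

def backward_num_outputs(num_in):
    # It produces one gradient per forward input.
    return num_in

# Example: a subgraph op with 3 inputs and 1 output.
print(backward_num_inputs(3, 1))  # 5
print(backward_num_outputs(3))    # 3
```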




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-17 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380495712
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -581,19 +582,39 @@ int MXLoadLib(const char *path) {
 
 // FGradient register lambda
 auto grad_reg = [=](const nnvm::ObjectPtr& n, const 
std::vector<nnvm::NodeEntry>& ograds) {
-// copy gradients first
-std::vector<nnvm::NodeEntry> heads(ograds.begin(), ograds.end());
-// copy inputs second
-for (auto& h : n->inputs) {
-  heads.push_back(h);
-}
-// copy outputs last
-uint32_t n_out = n->num_outputs();
-for (uint32_t i = 0; i < n_out; ++i) {
-  heads.emplace_back(n, i, 0);
-}
-std::string grad_name = "_backward_" + name_str;
-return mxnet::op::MakeGradNode(grad_name.c_str(), n, heads, 
n->attrs.dict);
+  // create node for gradient
+  auto p = nnvm::Node::Create();
+  std::string grad_name = "_backward_" + name_str;
+  p->attrs.op = nnvm::Op::Get(grad_name.c_str());
+  p->attrs.name = n->attrs.name + "_backward";
 
 Review comment:
   node names have to be unique




[GitHub] [incubator-mxnet] samskalicky commented on a change in pull request #17585: Dynamic subgraph property doc

2020-02-17 Thread GitBox
samskalicky commented on a change in pull request #17585: Dynamic subgraph 
property doc
URL: https://github.com/apache/incubator-mxnet/pull/17585#discussion_r380493090
 
 

 ##
 File path: src/c_api/c_api.cc
 ##
 @@ -581,19 +582,39 @@ int MXLoadLib(const char *path) {
 
 // FGradient register lambda
 auto grad_reg = [=](const nnvm::ObjectPtr& n, const 
std::vector<nnvm::NodeEntry>& ograds) {
-// copy gradients first
-std::vector<nnvm::NodeEntry> heads(ograds.begin(), ograds.end());
-// copy inputs second
-for (auto& h : n->inputs) {
-  heads.push_back(h);
-}
-// copy outputs last
-uint32_t n_out = n->num_outputs();
-for (uint32_t i = 0; i < n_out; ++i) {
-  heads.emplace_back(n, i, 0);
-}
-std::string grad_name = "_backward_" + name_str;
-return mxnet::op::MakeGradNode(grad_name.c_str(), n, heads, 
n->attrs.dict);
+  // create node for gradient
+  auto p = nnvm::Node::Create();
+  std::string grad_name = "_backward_" + name_str;
+  p->attrs.op = nnvm::Op::Get(grad_name.c_str());
+  p->attrs.name = n->attrs.name + "_backward";
 
 Review comment:
   `grad_name` is the name of the registered FGradient operator, which is 
`"_backward_" + name_str`; it will be the same for all backward nodes of that 
particular op (think of it as the backward op name).
   `p->attrs.name` is the unique name of the particular node in the graph, 
which is `n->attrs.name + "_backward"`; it will be different for backward 
nodes of the same op.
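The distinction can be sketched with two hypothetical helpers mirroring the two naming schemes described above:

```python
def backward_op_name(op_name):
    # One registered backward operator per forward op *type*.
    return "_backward_" + op_name

def backward_node_name(node_name):
    # One unique backward node name per forward node *instance*.
    return node_name + "_backward"

# Two instances of the same op share a backward op name,
# but each graph node gets its own unique backward node name.
print(backward_op_name("myop"))                            # _backward_myop
print(backward_node_name("myop0"), backward_node_name("myop1"))
```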

