samskalicky commented on a change in pull request #17486: Update CustomOp doc with changes for GPU support
URL: https://github.com/apache/incubator-mxnet/pull/17486#discussion_r375022474
########## File path: example/extensions/lib_custom_op/README.md ##########
@@ -56,87 +72,111 @@ For building a library containing your own custom operator, compose a C++ source
 - `forward` - Forward computation (can be replaced with `createOpState`, see below for details)

 Then compile it to the `libmyop_lib.so` dynamic library using the following command:
+
 ```bash
 g++ -shared -fPIC -std=c++11 myop_lib.cc -o libmyop_lib.so -I ../../../include/mxnet
 ```
+
+If you don't want to download the MXNet source and prefer to use only the `lib_api.h` header, you can copy the header into the same folder as `myop_lib.cc` and run:
+
+```bash
+g++ -shared -fPIC -std=c++11 myop_lib.cc -o libmyop_lib.so
+```
+
 Finally, you can write a Python script to load the library and run your custom operator:
+
 ```python
 import mxnet as mx
 mx.library.load('libmyop_lib.so')
 mx.nd.my_op(...)
 ```

-### Writing Regular Custom Operator:
+### Writing A Regular Custom Operator

-There are several essential building blocks for making a (stateless) custom operator:
+There are several essential building blocks for making a custom operator:

* [initialize](./gemm_lib.cc#L227):
    * This is the library initialization function required by any dynamic library. It checks whether you are using a compatible version of MXNet. Note that the `version` parameter is passed from MXNet when the library is loaded.
-      MXReturnValue initialize(int version)
+```c++
+MXReturnValue initialize(int version)
+```
* [parseAttrs](./gemm_lib.cc#L118):
    * This function specifies the number of input and output tensors for the custom operator; it is also where a custom operator can validate the attributes (i.e. options) specified by the user.
-      MXReturnValue parseAttrs(
-          std::map<std::string,
-          std::string> attrs,
-          int* num_in,
-          int* num_out)
+```c++
+MXReturnValue parseAttrs(
+    std::map<std::string, std::string> attrs,
+    int* num_in,
+    int* num_out)
+```
* [inferType](./gemm_lib.cc#L124):
    * This function specifies how the custom operator infers output data types from input data types.
-      MXReturnValue inferType(
-          std::map<std::string, std::string> attrs,
-          std::vector<int> &intypes,
-          std::vector<int> &outtypes)
+```c++
+MXReturnValue inferType(
+    std::map<std::string, std::string> attrs,
+    std::vector<int> &intypes,
+    std::vector<int> &outtypes)
+```
* [inferShape](./gemm_lib.cc#L143):
    * This function specifies how the custom operator infers output tensor shapes from input shapes.
-      MXReturnValue inferShape(
-          std::map<std::string, std::string> attrs,
-          std::vector<std::vector<unsigned int>> &inshapes,
-          std::vector<std::vector<unsigned int>> &outshapes)
+```c++
+MXReturnValue inferShape(
+    std::map<std::string, std::string> attrs,
+    std::vector<std::vector<unsigned int>> &inshapes,
+    std::vector<std::vector<unsigned int>> &outshapes)
+```
* [forward](./gemm_lib.cc#L56):
    * This function specifies the computation of the forward pass of the operator.
-      MXReturnValue forward(
-          std::map<std::string, std::string> attrs,
-          std::vector<MXTensor> inputs,
-          std::vector<MXTensor> outputs,
-          OpResource res)
+```c++
+MXReturnValue forward(
+    std::map<std::string, std::string> attrs,
+    std::vector<MXTensor> inputs,
+    std::vector<MXTensor> outputs,
+    OpResource res)
+```
* [REGISTER_OP(my_op_name)](./gemm_lib.cc#L169):
-   * This macro registers the custom operator and its properties to MXNet NDArray and Symbol APIs by its name.
-
-      REGISTER_OP(my_op_name)
-      .setForward(forward)
-      .setParseAttrs(parseAttrs)
-      .setInferType(inferType)
-      .setInferShape(inferShape);
+   * This macro registers the custom operator and its properties with the MXNet NDArray and Symbol APIs under its name.
 Note that for an operator running on CPU, you need to pass the context name `"cpu"` when registering the forward or backward function.

Review comment:
   let's move this to the end

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

With regards,
Apache Git Services