ptrendx opened a new pull request #14173: [WIP] MXNet AMP (automatic mixed 
precision)
URL: https://github.com/apache/incubator-mxnet/pull/14173
 
 
   ## Description ##
   
   This is a work-in-progress PR adding AMP (automatic mixed precision) support 
to MXNet, similar to the PyTorch version found in https://github.com/NVIDIA/apex.
   
   This PR relies on multiple other PRs and bug fixes, listed in the Comments 
section.
   
   The dynamic loss scaling part was done by @Caenorst (commits were squashed for 
easier rebasing).
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, the expected performance on the test set and a reference 
to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Auditor that automatically enables/disables running operations in FP16. 
It is implemented by patching MXNet functions in mxnet.symbol and mxnet.ndarray 
to insert casts to FP16/FP32 where necessary (see the patching sketch after this list).
   - [x] Operators `amp_cast` and `amp_multicast` that handle casting between 
FP16/FP32 when necessary and do not change other types. They are optimized to 
be no-ops when the input is already of the proper type (see the usage sketch after this list).
   - [ ] Dynamic loss scaling and supporting operators that check gradients for 
Infs/NaNs and skip the update step when such a value is encountered (a loss-scaling 
sketch follows this list).
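
   For illustration, a minimal sketch of the patching idea behind the auditor, limited to `mxnet.ndarray`. The op lists and the `_wrap` helper below are hypothetical; the actual auditor in this PR keeps its own lists of FP16-safe and FP32-only operators and also patches `mxnet.symbol`.

   ```python
   # Minimal sketch of the patching idea (illustrative op lists and helper
   # names, not the ones used in this PR).
   import functools
   import mxnet as mx

   FP16_SAFE_OPS = ["Convolution", "FullyConnected"]  # allowed to run in FP16
   FP32_ONLY_OPS = ["softmax", "norm"]                # kept in FP32

   def _wrap(module, name, target_dtype):
       orig = getattr(module, name)

       @functools.wraps(orig)
       def wrapper(*args, **kwargs):
           # Cast NDArray inputs to the dtype this op should run in.
           args = [a.astype(target_dtype) if isinstance(a, mx.nd.NDArray) else a
                   for a in args]
           return orig(*args, **kwargs)

       setattr(module, name, wrapper)

   for op_name in FP16_SAFE_OPS:
       _wrap(mx.nd, op_name, "float16")
   for op_name in FP32_ONLY_OPS:
       _wrap(mx.nd, op_name, "float32")
   ```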
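   A hedged usage sketch for the new casting operators. The argument names (`dtype`, `num_outputs`) are assumed here by analogy with the existing `Cast` operator and may differ from what this PR finally exposes.

   ```python
   import mxnet as mx

   x16 = mx.nd.ones((2, 2), dtype='float16')
   x32 = mx.nd.ones((2, 2), dtype='float32')

   # amp_cast: cast between FP16/FP32 only when needed; effectively a no-op
   # when the input already has the requested dtype, and non-float types
   # pass through unchanged.
   y = mx.nd.amp_cast(x32, dtype='float16')   # FP32 -> FP16
   z = mx.nd.amp_cast(x16, dtype='float16')   # already FP16, nothing to do

   # amp_multicast: bring a group of FP16/FP32 inputs to a common dtype so
   # they can be consumed together by a single operator.
   a, b = mx.nd.amp_multicast(x16, x32, num_outputs=2)
   ```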
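   The dynamic loss scaling part follows the usual scheme: scale the loss before backward, check the gradients for Infs/NaNs, skip the update and shrink the scale on overflow, and grow the scale after a window of stable steps. Below is a minimal Gluon-style sketch of that scheme; the constants, helper names and the CPU-side finiteness check are illustrative and are not the operators added by this PR.

   ```python
   import numpy as np
   import mxnet as mx
   from mxnet import autograd

   loss_scale = 2.0 ** 15   # initial scale (illustrative value)
   scale_window = 2000      # grow the scale after this many stable steps
   stable_steps = 0

   def grads_are_finite(params):
       # The PR adds supporting operators to do this check on the device;
       # this sketch simply pulls the gradients back and checks via NumPy.
       return all(np.isfinite(p.grad().asnumpy()).all()
                  for p in params.values() if p.grad_req != 'null')

   def train_step(net, trainer, data, label, loss_fn, batch_size):
       global loss_scale, stable_steps
       with autograd.record():
           loss = loss_fn(net(data), label)
           # Scale the loss so that small FP16 gradients do not underflow.
           autograd.backward(loss * loss_scale)
       if grads_are_finite(net.collect_params()):
           # Fold the inverse of the loss scale into the update and step.
           trainer.step(batch_size * loss_scale)
           stable_steps += 1
           if stable_steps >= scale_window:   # stable: grow the scale
               loss_scale *= 2.0
               stable_steps = 0
       else:
           # Inf/NaN in the gradients: skip the update, shrink the scale.
           loss_scale /= 2.0
           stable_steps = 0
   ```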
   
   ## Comments ##
   - This PR relies on multiple other PRs/bug fixes:
     - [ ] #14153 
     - [ ] https://github.com/dmlc/tvm/pull/2572 - once this is merged, the 
submodule needs to be changed to point to dmlc/tvm again
     - [x] #14097 
     - [ ] https://github.com/dmlc/dmlc-core/issues/503
     - [ ] #12139, which was masked by #12189 but prevents updating to a 
newer dmlc-core
   - Once 
   
   FYI @eric-haibin-lin @szha 
   
