eric-haibin-lin commented on a change in pull request #10545: [WIP] Add NEWS 
and README
URL: https://github.com/apache/incubator-mxnet/pull/10545#discussion_r181539250
 
 

 ##########
 File path: NEWS.md
 ##########
 @@ -1,5 +1,126 @@
 MXNet Change Log
 ================
+## 1.2.0
+### New Features - Added Scala Inference APIs
+- Implemented new [Scala Inference APIs](https://cwiki.apache.org/confluence/display/MXNET/MXNetScalaInferenceAPI) which offer easy-to-use, idiomatic, and thread-safe high-level Scala APIs for performing predictions with deep learning models trained with MXNet (#9678). Implemented a new ImageClassifier class which provides APIs for classification tasks on a Java BufferedImage using a pre-trained model you provide (#10054). Implemented a new ObjectDetector class which provides APIs for object and boundary detection on a Java BufferedImage using a pre-trained model you provide (#10229).
+
+### New Features - Added module to import ONNX models into MXNet
+- Implemented a new ONNX module in MXNet which offers an easy-to-use API to import ONNX models into MXNet's symbolic interface (#9963). Check out the [example](https://github.com/apache/incubator-mxnet/blob/master/example/onnx/super_resolution.py) of how to use this [API](https://cwiki.apache.org/confluence/display/MXNET/ONNX-MXNet+API+Design) to import ONNX models and perform inference with MXNet; a hedged sketch follows below.
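A hedged sketch of the import flow, assuming a local ONNX file named `super_resolution.onnx` (hypothetical path) and the `import_model` helper used in the linked example:

```python
from mxnet.contrib import onnx as onnx_mxnet

# Import the ONNX graph into MXNet's symbolic interface. The first element of the
# result is the MXNet symbol; the remaining element(s) hold the recovered parameters
# (exact return layout assumed from the linked example).
result = onnx_mxnet.import_model('super_resolution.onnx')
sym, params = result[0], result[1:]

# The symbol can then be bound with mx.mod.Module like any other MXNet symbol
# to run inference with the imported weights.
print(sym.list_outputs())
```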
+
+### New Features - Added support for Model Quantization with Calibration
+- Implemented model quantization by adopting the [TensorFlow approach](https://www.tensorflow.org/performance/quantization) with calibration, borrowing the idea from Nvidia's [TensorRT](http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf). The focus of this work is on keeping the inference accuracy loss of quantized models (ConvNets for now) under control when compared to their corresponding FP32 models. Please see the [example](https://github.com/apache/incubator-mxnet/tree/master/example/quantization) for how to quantize an FP32 model with or without calibration (#9552); a hedged sketch follows below.
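A hedged sketch of quantizing a pre-trained FP32 model without calibration; the `quantize_model` helper and its arguments are assumed from the contrib quantization work in #9552, and the checkpoint prefix and excluded layer name are placeholders:

```python
import mxnet as mx
from mxnet.contrib.quantization import quantize_model

# Load a pre-trained FP32 checkpoint (placeholder prefix and epoch).
sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-50', 0)

# calib_mode='none' skips offline calibration, so activation ranges are computed
# at runtime; see the linked example for the calibrated ('naive'/'entropy') flows.
qsym, qarg_params, qaux_params = quantize_model(
    sym=sym, arg_params=arg_params, aux_params=aux_params,
    excluded_sym_names=['fc1'],  # hypothetical layer kept in FP32
    calib_mode='none')
```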
+
+### New Features - MKL-DNN Integration
+- MXNet now integrates with Intel MKL-DNN to accelerate neural network operators: Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, and Softmax, as well as some common operators: sum and concat (#9677). This integration allows NDArray to contain data with MKL-DNN layouts and reduces data layout conversions to get maximal performance from MKL-DNN.
+
+### New Features - Added Exception Handling Support for Operators
+- Implemented [Exception Handling Support for Operators](https://cwiki.apache.org/confluence/display/MXNET/Improved+exception+handling+in+MXNet) in MXNet. MXNet now transports backend C++ exceptions to the different language front-ends and prevents crashes when exceptions are thrown during operator execution (#9681); see the sketch below.
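A minimal sketch of what this enables in the Python frontend (shapes chosen to trigger a backend error):

```python
import mxnet as mx
from mxnet.base import MXNetError

a = mx.nd.ones((2, 3))
b = mx.nd.ones((4, 5))
try:
    c = mx.nd.dot(a, b)  # invalid shapes: raises in the C++ backend
    c.wait_to_read()     # errors from asynchronous execution also surface at sync points like this
except MXNetError as err:
    # The backend C++ exception is transported to the frontend instead of crashing the process.
    print('Caught MXNet error:', str(err).splitlines()[0])
```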
+
+### New Features - Enhanced FP16 support
+- Added support for distributed mixed-precision training with FP16. A master copy of the weights can be stored in float32 via the `multi_precision` mode of the optimizers (#10183); see the sketch below. Improved the speed of float16 operations on x86 CPUs by 8x through the F16C instruction set. Added support for more operators to work with FP16 inputs (#10125, #10078, #10169). Added a tutorial on using mixed precision with FP16 (#10391).
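A minimal sketch of mixed-precision training in Gluon with a single toy layer and random data; the key piece is `multi_precision=True`, which keeps an FP32 master copy of the weights inside the optimizer:

```python
import mxnet as mx
from mxnet import gluon, autograd

net = gluon.nn.Dense(10)
net.initialize(ctx=mx.cpu())
net.cast('float16')  # parameters and computation run in FP16

# multi_precision=True stores an FP32 master copy of the weights, so small
# gradient updates are not lost to FP16 rounding.
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'multi_precision': True})

data = mx.nd.random.uniform(shape=(32, 20)).astype('float16')
label = mx.nd.zeros((32,), dtype='float16')
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()

with autograd.record():
    loss = loss_fn(net(data), label)
loss.backward()
trainer.step(32)
```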
+
+### New Features - Added Profiling Enhancements
+- Enhanced the built-in profiler to support native Intel® VTune™ Amplifier objects such as Task, Frame, Event, Counter and Marker from both C++ and Python -- which is also visible in the Chrome tracing view (#8972). Added runtime tracking of symbolic and imperative operators as well as memory and API calls. Added tracking and dumping of aggregate profiling data. The profiler also no longer affects runtime performance when not in use.
+
+### Breaking Changes
+- Changed the namespace for MXNet Scala from `ml.dmlc.mxnet` to `org.apache.mxnet` (#10284).
+- Changed the API for the Pooling operator from `mxnet.symbol.Pooling(data=None, global_pool=_Null, cudnn_off=_Null, kernel=_Null, pool_type=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)` to `mxnet.symbol.Pooling(data=None, kernel=_Null, pool_type=_Null, global_pool=_Null, cudnn_off=_Null, pooling_convention=_Null, stride=_Null, pad=_Null, name=None, attr=None, out=None, **kwargs)`. This is a breaking change when arguments are passed positionally, since the new API expects `global_pool` at the fourth position instead of the second (#10000); see the sketch after this list.
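A minimal sketch of the safe calling pattern; keyword arguments are unaffected by the reordering:

```python
import mxnet as mx

data = mx.sym.Variable('data')

# Keyword arguments behave identically before and after the change.
pool = mx.sym.Pooling(data=data, kernel=(2, 2), pool_type='max', stride=(2, 2))

# Positional calls that relied on the old order (data, global_pool, cudnn_off, kernel, ...)
# now bind those values to different parameters and should be rewritten with keywords.
```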
+
+### Bug Fixes
+- Fixed flaky tests and test bugs (#9598, #9951, #10259, #10197, #10136, #10422). Please see the [Tests Improvement Project](https://github.com/apache/incubator-mxnet/projects/9).
+- Fixed `cudnn_conv` and `cudnn_deconv` deadlock (#10392).
+- Fixed a race condition in `io.LibSVMIter` when batch size is large (#10124).
+- Fixed a race condition in converting data layouts in MKL-DNN (#9862).
+- Fixed MKL-DNN sigmoid/softrelu issue (#10336).
+- Fixed incorrect indices generated by device row sparse pull (#9887).
+- Fixed cast storage support for same stypes (#10400).
+- Fixed uncaught exception for bucketing module when symbol name not specified 
(#10094).
+- Fixed regression output layers (#9848).
+- Fixed crash with `mx.nd.ones` (#10014).
+- Fixed `sample_multinomial` crash when `get_prob=True` (#10413).
+- Fixed buggy type inference in correlation (#10135).
+- Fixed race condition for `CPUSharedStorageManager->Free` and launched 
workers at iter init stage to avoid frequent relaunch (#10096).
+- Fixed DLTensor Conversion for int64 (#10083).
+- Fixed issues where hex symbols of the profiler were not being recognized by the Chrome tracing tool (#9932).
+- Fixed a crash when the profiler was not enabled (#10306).
+- Fixed ndarray assignment issues (#10022, #9981, #10468).
+- Fixed `print_summary` bug in visualization module (#9492).
+- Fixed shape mismatch in accuracy metrics (#10446).
+- Fixed random samplers from uniform and random distributions in R bindings 
(#10450).
+- Fixed a bug that was causing training metrics to be printed as NaN sometimes 
(#10437).
+- Fixed a crash with non-positive reps for tile ops (#10417).
+
+### Performance Improvements 
+- On average, after the MKL-DNN change, the inference speed of MXNet + MKLDNN outperforms MXNet + OpenBLAS by a factor of 32, outperforms MXNet + MKLML by 82%, and outperforms MXNet + MKLML with the experimental flag by 8%. The experiments were run for the image classification example, for different networks and different batch sizes.
+- Improved sparse SGD, sparse AdaGrad and sparse Adam optimizer speed on GPU 
by 30x (#9561, #10312, #10293, #10062).
+- Improved `sparse.retain` performance on CPU by 2.5x (#9722).
+- Replaced `std::swap_ranges` with `memcpy` (#10351).
+- Implemented `DepthwiseConv2dBackwardFilterKernel`, which is over 5x faster (#10098).
+- Implemented CPU LSTM inference (#9977).
+- Added Layer Normalization in C++ (#10029).
+- Optimized performance for rtc (#10018).
+- Improved CPU performance of the ROIPooling operator by using OpenMP (#9958).
+- Accelerated the calculation of F1 (#9833).
+
+### API Changes
+- `Block.save_params` now matches parameters according to model structure instead of names, to avoid prefix-mismatch problems during saving and loading (#10511).
+- Added an optional `ctx` argument to `mx.random.seed`. Seeding with the `ctx` option produces a random number sequence independent of the device id (#10367); see the sketch after this list.
+- Added a `copy` flag for `astype` (#10347).
+- Added a context parameter to the Scala Infer API's ImageClassifier and ObjectDetector (#10252).
+- Added axes support for dropout in Gluon (#10032).
+- Made `ctx` default to CPU for `gluon.Block.load_params` (#10160).
+- Added support for variable sequence length in `gluon.RecurrentCell` (#9934).
+- Added a convenience fluent method for the `squeeze` op (#9734).
+- Made `array.reshape` compatible with numpy (#9790).
+- Added axis support and gradient for `L2norm` (#9740).
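A minimal sketch of the device-independent seeding added by #10367, assuming two GPUs are available:

```python
import mxnet as mx

mx.random.seed(128)                 # seeds every device; sequences still depend on the device id
mx.random.seed(128, ctx=mx.gpu(0))  # seeds only gpu(0)
mx.random.seed(128, ctx=mx.gpu(1))  # seeds only gpu(1)

# Because both GPUs were seeded with the same value and an explicit ctx,
# a and b should contain identical samples.
a = mx.nd.random.uniform(shape=(3,), ctx=mx.gpu(0))
b = mx.nd.random.uniform(shape=(3,), ctx=mx.gpu(1))
```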
+
+### Sparse Support
+- Added support for multi-GPU training with `row_sparse` weights using 
`device` KVStore (#9987).
+- Added the `Module.prepare` API for multi-GPU and multi-machine training with `row_sparse` weights (#10285).
+- Added a `deterministic` option for the `contrib.SparseEmbedding` operator.
+- Added `sparse.broadcast_mul` and `sparse.broadcast_div` with CSRNDArray and 1-D dense NDArray.
 
 Review comment:
   .. on CPU (#10208) 
