sebouh commented on a change in pull request #8301: Preparing for 0.12.0.rc0: Final changes before RC
URL: https://github.com/apache/incubator-mxnet/pull/8301#discussion_r144992148
 
 

 ##########
 File path: NEWS.md
 ##########
 @@ -1,34 +1,46 @@
 MXNet Change Log
 ================
 ## 0.12.0
-### New Features - Sparse Tensor Support
-  - Added limited cpu support for two sparse formats for `Symbol` and `NDArray` - `CSRNDArray` and `RowSparseNDArray`
-  - Added a sparse dot product operator and many element-wise sparse operators
-  - Added a data iterator for sparse data input - `LibSVMIter`
-  - Added three optimizers for sparse gradient updates: `Ftrl`, `SGD` and `Adam`
-  - Added `push` and `row_sparse_pull` with `RowSparseNDArray` in distributed kvstore
-### New Features - Autograd and Gluon
-  - New loss functions added - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, `HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`
+### Performance
+  - Added full support for the NVIDIA Volta GPU architecture and CUDA 9. Training is up to 3.5x faster than on Pascal when using float16.
+  - Enabled JIT compilation. Autograd and Gluon hybridize now use less memory and run faster. Performance is almost the same as with the old symbolic-style code.
+  - Improved ImageRecordIO image loading performance and added indexed RecordIO support.
+  - Added better OpenMP thread management to improve CPU performance.
+### New Features - Gluon
+  - Added enhancements to the Gluon package, a high-level interface designed to be easy to use while keeping most of the flexibility of the low-level API. Gluon supports both imperative and symbolic programming, making it easy to train complex models imperatively with minimal impact on performance. Neural networks (and other machine learning models) can be defined and trained with the `gluon.nn` and `gluon.rnn` packages.
+  - Added new loss functions - `SigmoidBinaryCrossEntropyLoss`, `CTCLoss`, `HuberLoss`, `HingeLoss`, `SquaredHingeLoss`, `LogisticLoss`, `TripletLoss`.
   - `gluon.Trainer` now allows reading and setting the learning rate with the `trainer.learning_rate` property.
-  - Added `mx.autograd.grad` and experimental second order gradient support (though most operators don't support second order gradient yet)
-  - Added `ConvLSTM` etc to `gluon.contrib`
+  - Added the `HybridBlock.export` API for exporting Gluon models to the MXNet format.
+  - Added `ConvLSTM` to `gluon.contrib`.
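
To illustrate the hybridization, loss, `trainer.learning_rate`, and `HybridBlock.export` entries above, here is a minimal, hedged sketch of a Gluon training step. It is not from the PR; the layer sizes, data shapes, and the file prefix `net` are made up for illustration.

```python
import mxnet as mx
from mxnet import autograd, gluon

# Define a small network imperatively with gluon.nn.
net = gluon.nn.HybridSequential()
with net.name_scope():
    net.add(gluon.nn.Dense(64, activation='relu'))
    net.add(gluon.nn.Dense(1))
net.initialize()
net.hybridize()  # switch to the cached symbolic (JIT) execution path

loss_fn = gluon.loss.HuberLoss()  # one of the newly added losses
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

data = mx.nd.ones((32, 100))
label = mx.nd.zeros((32, 1))
with autograd.record():  # record the imperative forward pass
    loss = loss_fn(net(data), label)
loss.backward()
trainer.step(batch_size=32)

# The learning rate is now readable and writable on the trainer.
trainer.set_learning_rate(trainer.learning_rate * 0.5)

# HybridBlock.export saves the model in MXNet format
# (a net-symbol.json file plus a .params file).
net.export('net')
```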
 
 Review comment:
  What about `VariationalDropout`?
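
For the sparse-tensor entries that this hunk relocates out of the section above, a small hedged sketch of the two storage formats and the sparse dot operator (the matrix values are arbitrary):

```python
import mxnet as mx

# Convert a dense NDArray to the two sparse storage formats.
dense = mx.nd.array([[0, 1, 0], [2, 0, 3]])
csr = dense.tostype('csr')           # CSRNDArray
rsp = dense.tostype('row_sparse')    # RowSparseNDArray
print(csr.stype, rsp.stype)          # -> csr row_sparse

# Sparse dot product: CSR lhs times a dense rhs (CPU-supported).
out = mx.nd.dot(csr, mx.nd.ones((3, 2)))
print(out.asnumpy())
```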
 