[ https://issues.apache.org/jira/browse/SYSTEMML-540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Dusenberry updated SYSTEMML-540:
-------------------------------------
    Description: 
This epic covers the addition of deep learning to SystemML, including:

* Core DML layer abstractions for deep (convolutional, recurrent) neural nets, 
with a simple forward/backward API: affine, convolution (start with 2D), 
max-pooling, non-linearities (relu, sigmoid, softmax), dropout, and loss 
functions (see the affine-layer sketch after this list).
* Modularized DML optimizers: (mini-batch, stochastic) gradient descent (with 
momentum, etc.); a minimal update sketch also follows this list.
* Additional DML language support as necessary (tensors, built-in functions 
such as convolution, function pointers, list structures, etc.).
* Integration with other deep learning frameworks (Caffe, Torch, Theano, 
TensorFlow, etc.) via automatic DML code generation.
* etc.
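
As a concrete illustration of the simple forward/backward API, here is a 
minimal sketch of an affine (fully-connected) layer in DML; the function 
names and shapes are illustrative, not a final interface.

{code}
# Minimal sketch of the forward/backward layer API, shown for an affine
# (fully-connected) layer.  Names and shapes are illustrative only.
forward = function(matrix[double] X, matrix[double] W, matrix[double] b)
    return (matrix[double] out) {
  # X: inputs (N x D), W: weights (D x M), b: biases (1 x M)
  out = X %*% W + b
}

backward = function(matrix[double] dout, matrix[double] X,
                    matrix[double] W, matrix[double] b)
    return (matrix[double] dX, matrix[double] dW, matrix[double] db) {
  # dout: upstream gradient (N x M)
  dX = dout %*% t(W)   # gradient w.r.t. inputs
  dW = t(X) %*% dout   # gradient w.r.t. weights
  db = colSums(dout)   # gradient w.r.t. biases
}
{code}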
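
In the same spirit, a modularized optimizer can expose a single update 
function. Below is a sketch of SGD with momentum using the standard 
velocity formulation; the exact signature is an assumption.

{code}
# Sketch of a modular SGD-with-momentum update.  The velocity
# formulation is standard; the exact signature is an assumption.
update = function(matrix[double] X, matrix[double] dX,
                  double lr, double mu, matrix[double] v)
    return (matrix[double] X, matrix[double] v) {
  # X: parameters, dX: gradient of the loss w.r.t. X, lr: learning
  # rate, mu: momentum coefficient, v: velocity (same shape as X)
  v = mu * v - lr * dX  # update velocity
  X = X + v             # take a step
}
{code}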

\[*DONE*\] Phase 1: *MVPs*
* Create a mathematically correct DML deep learning library for running basic 
feed-forward and convolutional neural nets on a single node.
* Create mathematically correct built-in operators for convolution and max 
pooling for single-node operation (assumed usage is sketched below).
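
For reference, assumed usage of the built-in operators; the operator names 
and named-argument shapes reflect the current proposal and may change.

{code}
# Assumed usage of the built-in convolution and max-pooling operators.
N = 64; C = 3; Hin = 32; Win = 32   # batch size, channels, height, width
F = 32; Hf = 5; Wf = 5              # number of filters, filter size
X = rand(rows=N, cols=C*Hin*Win)    # images, one row per example
W = rand(rows=F, cols=C*Hf*Wf)      # filters, one row per filter
out = conv2d(X, W, input_shape=[N,C,Hin,Win], filter_shape=[F,C,Hf,Wf],
             stride=[1,1], padding=[2,2])   # "same" padding for 5x5
pooled = max_pool(out, input_shape=[N,F,Hin,Win],
                  pool_size=[2,2], stride=[2,2], padding=[0,0])
{code}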

\[*CURRENT*\] Phase 2: *Single-node*
* Improve the performance of the DML deep learning library in single-node 
operation.
* Expand the DML deep learning library to include additional commonly used 
layers, such as RNNs and LSTMs, as well as additional optimizers.
* Improve the built-in operators for convolution and max pooling to be highly 
performant in single-node operation.
* Implement performant GPU acceleration for the built-in operators (and 
end-to-end deep learning algorithms) in single-node operation.
* Address general engine bottlenecks, such as left-indexing within DML-bodied 
functions (see the loop sketch after this list).
* Add end-to-end deep learning algorithm examples, such as a "LeNet" 
convolutional neural net.
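
To make the left-indexing bottleneck concrete, here is the mini-batch 
pattern in which it appears; the shapes and the placeholder forward pass 
are illustrative.

{code}
# Mini-batch loop that exercises left-indexing inside DML code.
N = 1024; D = 784; K = 10
batch_size = 64
X = rand(rows=N, cols=D)
preds = matrix(0, rows=N, cols=K)
iters = ceil(N / batch_size)
for (i in 1:iters) {
  beg = (i-1) * batch_size + 1
  end = min(N, beg + batch_size - 1)
  X_batch = X[beg:end,]                 # right-indexing: read the batch
  probs = rand(rows=end-beg+1, cols=K)  # placeholder for a forward pass
  preds[beg:end,] = probs               # left-indexing: the bottleneck
}
{code}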

Phase 3: *Distributed*
* Expand deep learning support to include *distributed operations* with large 
models. This spans the DML deep learning library, the built-in operators, GPU 
acceleration, and general engine improvements (a speculative parfor sketch 
follows).
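
One speculative direction for distributed execution is data-parallel 
gradient computation via DML's existing parfor construct; the sketch below 
(a least-squares gradient with disjoint column writes) is purely 
illustrative, not a committed design.

{code}
# Speculative sketch: data-parallel gradient computation over row
# partitions using parfor.  Illustrative only.
k = 16                 # number of partitions
bs = 6000              # rows per partition
N = k * bs; D = 784
X = rand(rows=N, cols=D)
y = rand(rows=N, cols=1)
W = rand(rows=D, cols=1)
grads = matrix(0, rows=D, cols=k)
parfor (p in 1:k) {    # iterations are independent; may run distributed
  beg = (p-1) * bs + 1
  end = p * bs
  Xp = X[beg:end,]
  yp = y[beg:end,]
  grads[,p] = t(Xp) %*% (Xp %*% W - yp)  # local least-squares gradient
}
dW = rowSums(grads)    # aggregate the per-partition gradients
{code}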

Phase 4: *APIs/Wrappers*
* Explore integration with Caffe, creating a SystemML interpreter for Caffe 
model definitions.
* Explore integration with Keras, creating a SystemML backend for Keras.


> Deep Learning
> -------------
>
>                 Key: SYSTEMML-540
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-540
>             Project: SystemML
>          Issue Type: Epic
>            Reporter: Mike Dusenberry
>            Assignee: Mike Dusenberry

