[ 
https://issues.apache.org/jira/browse/MAHOUT-976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christian Herta updated MAHOUT-976:
-----------------------------------

    Description: 
Implement a multilayer perceptron

 * via matrix multiplication (see the first sketch after this list)
 * learning by backpropagation, implementing the tricks from Yann LeCun et 
al.: "Efficient BackProp"
 * arbitrary number of hidden layers (including 0, which is just the linear 
model)
 * connections between adjacent layers only 
 * different cost and activation functions (a different activation function 
in each layer) 
 * testing backprop by gradient checking (see the second sketch below)
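
A minimal sketch of the first two list points, in plain Java rather than 
Mahout's Matrix API (all class and method names here are illustrative, not a 
proposed interface): each layer is one matrix-vector multiplication with a 
bias column, followed by that layer's activation function, so an arbitrary 
number of layers is just a loop.

import java.util.function.DoubleUnaryOperator;

public class MlpForwardSketch {

    // weights[l] maps layer l to layer l+1: one row per output unit,
    // one column per input unit plus a final bias column.
    static double[] forward(double[][][] weights,
                            DoubleUnaryOperator[] activations,
                            double[] input) {
        double[] a = input;
        for (int l = 0; l < weights.length; l++) {
            double[][] w = weights[l];
            double[] next = new double[w.length];
            for (int i = 0; i < w.length; i++) {
                double sum = w[i][a.length];        // bias term
                for (int j = 0; j < a.length; j++) {
                    sum += w[i][j] * a[j];          // matrix-vector product
                }
                next[i] = activations[l].applyAsDouble(sum);
            }
            a = next;
        }
        return a;
    }

    public static void main(String[] args) {
        DoubleUnaryOperator sigmoid = x -> 1.0 / (1.0 + Math.exp(-x));
        DoubleUnaryOperator identity = x -> x;
        // One sigmoid hidden layer, one linear output layer; with zero
        // hidden layers the same loop degenerates to the linear model.
        double[][][] weights = {
            {{0.1, 0.2, 0.0}, {0.3, -0.1, 0.0}}, // 2 inputs (+bias) -> 2 hidden
            {{0.5, -0.5, 0.0}}                   // 2 hidden (+bias) -> 1 output
        };
        DoubleUnaryOperator[] acts = {sigmoid, identity};
        System.out.println(forward(weights, acts, new double[] {1.0, 2.0})[0]);
    }
}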
 
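Gradient checking compares the analytic (backprop) gradient against a central 
finite difference, (J(w+eps) - J(w-eps)) / (2*eps), one dimension at a time. 
A small sketch, with a toy cost standing in for the network's cost function:

import java.util.function.ToDoubleFunction;

public class GradientCheckSketch {

    // Central-difference estimate of the gradient of cost at w.
    static double[] numericGradient(ToDoubleFunction<double[]> cost,
                                    double[] w, double eps) {
        double[] grad = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double saved = w[i];
            w[i] = saved + eps;
            double plus = cost.applyAsDouble(w);
            w[i] = saved - eps;
            double minus = cost.applyAsDouble(w);
            w[i] = saved;                        // restore the weight
            grad[i] = (plus - minus) / (2 * eps);
        }
        return grad;
    }

    public static void main(String[] args) {
        // Toy cost J(w) = w0^2 + 3*w1 with known gradient (2*w0, 3).
        ToDoubleFunction<double[]> cost = w -> w[0] * w[0] + 3 * w[1];
        double[] w = {1.0, 2.0};
        double[] analytic = {2 * w[0], 3.0};     // what backprop should return
        double[] numeric = numericGradient(cost, w, 1e-6);
        for (int i = 0; i < w.length; i++) {
            System.out.printf("dim %d: analytic=%f numeric=%f%n",
                              i, analytic[i], numeric[i]);
        }
    }
}
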
First:
 * implementation of "stochastic gradient descent", similar to the existing 
gradient machine
 * simple gradient descent including momentum (sketched below)
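
A sketch of this first step, per-example stochastic gradient descent with a 
classical momentum term (v = mu*v - eta*g; w += v); the GradientSupplier 
interface and the hyper-parameter names are illustrative only:

public class SgdMomentumSketch {

    interface GradientSupplier {
        // gradient of the cost for one training example at the current weights
        double[] gradient(double[] weights, int exampleIndex);
    }

    static void train(double[] w, GradientSupplier grads, int numExamples,
                      int epochs, double eta, double mu) {
        double[] v = new double[w.length];       // momentum (velocity) term
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int n = 0; n < numExamples; n++) {
                double[] g = grads.gradient(w, n);
                for (int i = 0; i < w.length; i++) {
                    v[i] = mu * v[i] - eta * g[i];
                    w[i] += v[i];                // update after every example
                }
            }
        }
    }
}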

Later (new JIRA issues):  
 * distributed batch learning (see below)  
 * "Stacked (Denoising) Autoencoder" - feature learning
 * advanced cost minimization such as second-order methods, conjugate 
gradient, etc.

Distribution of learning can be done as follows (batch learning; a sketch 
follows below):
 1. Partition the data into x chunks.
 2. Learn the weight changes as matrices on each chunk.
 3. Combine the matrices, update the weights, and go back to 2.
Maybe this procedure can also be run on random subsets of the chunks 
(distributed quasi-online learning).
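
A sketch of steps 1-3, with local threads standing in for the cluster (a real 
version would run the per-chunk work as map tasks over the partitioned data): 
each chunk independently produces a weight-change matrix, the driver averages 
them, applies the update, and the next pass restarts at step 2.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BatchAggregationSketch {

    // One data chunk: computes the gradient matrix over its examples only.
    interface ChunkGradient extends Callable<double[][]> {}

    static double[][] averagedGradient(List<ChunkGradient> chunks)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(chunks.size());
        List<Future<double[][]>> futures = new ArrayList<>();
        for (ChunkGradient chunk : chunks) {
            futures.add(pool.submit(chunk));     // step 2, per chunk
        }
        double[][] sum = null;
        for (Future<double[][]> f : futures) {   // step 3, combine
            double[][] g = f.get();
            if (sum == null) {
                sum = g;
            } else {
                for (int i = 0; i < sum.length; i++) {
                    for (int j = 0; j < sum[i].length; j++) {
                        sum[i][j] += g[i][j];
                    }
                }
            }
        }
        pool.shutdown();
        for (double[] row : sum) {               // average over the x chunks
            for (int j = 0; j < row.length; j++) {
                row[j] /= chunks.size();
            }
        }
        return sum;
    }
}
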
Batch learning can use the delta-bar-delta heuristic to adapt a separate 
learning rate for each weight (sketched below). 
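
A sketch of the delta-bar-delta rule (Jacobs, 1988) for one batch update: 
every weight keeps its own learning rate, grown additively while the running 
average of the gradient keeps its sign and shrunk multiplicatively when the 
sign flips. kappa, phi and theta are the usual hyper-parameters; treat the 
whole class as an assumption, not a committed design.

public class DeltaBarDeltaSketch {

    // w: weights, grad: current batch gradient, eta: per-weight learning
    // rates, bar: exponential average of past gradients ("delta bar").
    static void update(double[] w, double[] grad, double[] eta, double[] bar,
                       double kappa, double phi, double theta) {
        for (int i = 0; i < w.length; i++) {
            if (grad[i] * bar[i] > 0) {
                eta[i] += kappa;             // same sign: grow additively
            } else if (grad[i] * bar[i] < 0) {
                eta[i] *= (1.0 - phi);       // sign flip: shrink multiplicatively
            }
            bar[i] = (1.0 - theta) * grad[i] + theta * bar[i];
            w[i] -= eta[i] * grad[i];        // batch gradient step
        }
    }
}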
   
 

  was:
Implement a multilayer perceptron

 * via matrix multiplication
 * learning by backpropagation, implementing the tricks from Yann LeCun et 
al.: "Efficient BackProp"
 * arbitrary number of hidden layers (including 0, which is just the linear 
model)
 * connections between adjacent layers only 
 * different cost and activation functions (a different activation function 
in each layer) 
 * testing backprop by gradient checking 
 
First:
 * implementation of "stochastic gradient descent", similar to the existing 
gradient machine
 * simple gradient descent 

Later (new JIRA issues):
 * momentum for better and faster learning  
 * advanced cost minimization such as second-order methods, conjugate 
gradient, etc.  
 * distributed batch learning (see below)  
 * "Stacked (Denoising) Autoencoder" - feature learning
   

Distribution of learning can be done as follows (batch learning):
 1. Partition the data into x chunks.
 2. Learn the weight changes as matrices on each chunk.
 3. Combine the matrices, update the weights, and go back to 2.
Maybe this procedure can also be run on random subsets of the chunks 
(distributed quasi-online learning).

    
> Implement Multilayer Perceptron
> -------------------------------
>
>                 Key: MAHOUT-976
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-976
>             Project: Mahout
>          Issue Type: New Feature
>    Affects Versions: 0.7
>            Reporter: Christian Herta
>            Priority: Minor
>              Labels: multilayer, networks, neural, perceptron
>   Original Estimate: 80h
>  Remaining Estimate: 80h
>
