Implement Multilayer Perceptron
-------------------------------
Key: MAHOUT-976
URL: https://issues.apache.org/jira/browse/MAHOUT-976
Project: Mahout
Issue Type: New Feature
Affects Versions: 0.7
Reporter: Christian Herta
Priority: Minor
Implement a multilayer perceptron:
* via Matrix Multiplication
* Learning by backpropagation, implementing the tricks from Yann LeCun et al.:
"Efficient BackProp"
* arbitrary number of hidden layers (also 0 - just the linear model)
* connections between adjacent layers only
* different cost and activation functions (possibly a different activation
function in each layer)
* test of backprop by numerical gradient checking
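The two core ingredients above - a layer forward pass as a matrix-vector
product, and a numerical check of the analytic gradient - might look roughly
like the following sketch. It uses plain 2-D double arrays and illustrative
names, not Mahout's Matrix API:

```java
public class MlpSketch {

    // One layer: y = sigmoid(W x + b), computed as a matrix-vector product.
    public static double[] forward(double[][] w, double[] b, double[] x) {
        double[] y = new double[w.length];
        for (int i = 0; i < w.length; i++) {
            double s = b[i];
            for (int j = 0; j < x.length; j++) {
                s += w[i][j] * x[j];
            }
            y[i] = 1.0 / (1.0 + Math.exp(-s)); // logistic activation
        }
        return y;
    }

    // Squared-error cost for a single output unit against a target t.
    public static double cost(double[][] w, double[] b, double[] x, double t) {
        double d = forward(w, b, x)[0] - t;
        return 0.5 * d * d;
    }

    // Analytic gradient dC/dw[0][j] = (y - t) * y * (1 - y) * x[j]
    // (chain rule through the sigmoid) - what backprop would compute.
    public static double analyticGrad(double[][] w, double[] b, double[] x,
                                      double t, int j) {
        double y = forward(w, b, x)[0];
        return (y - t) * y * (1.0 - y) * x[j];
    }

    // Numerical gradient by central differences, used to check backprop.
    public static double numericGrad(double[][] w, double[] b, double[] x,
                                     double t, int j) {
        double eps = 1e-5;
        double orig = w[0][j];
        w[0][j] = orig + eps;
        double cPlus = cost(w, b, x, t);
        w[0][j] = orig - eps;
        double cMinus = cost(w, b, x, t);
        w[0][j] = orig; // restore the perturbed weight
        return (cPlus - cMinus) / (2.0 * eps);
    }
}
```

The gradient check passes when the analytic and numerical gradients agree to
within a small tolerance for every weight.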
First:
* implementation of "stochastic gradient descent", similar to the gradient
machine
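The stochastic gradient descent step could be sketched as below for a single
linear unit with squared-error loss (illustrative names, plain arrays instead
of Mahout's Vector type):

```java
public class SgdSketch {
    // One SGD step on a single example:
    // w <- w - eta * (w . x - t) * x
    public static void sgdStep(double[] w, double[] x, double t, double eta) {
        double y = 0.0;
        for (int j = 0; j < w.length; j++) {
            y += w[j] * x[j]; // linear prediction
        }
        double err = y - t;
        for (int j = 0; j < w.length; j++) {
            w[j] -= eta * err * x[j]; // gradient of 0.5 * err^2
        }
    }
}
```

In the multilayer case the same per-example update would be applied to every
weight matrix, with the error term delivered by backpropagation.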
Later (new jira issues):
* Distributed Batch learning (see below)
* "Stacked (Denoising) Autoencoder" - Feature Learning
Learning can be distributed in batch mode by:
1 Partitioning the data into x chunks
2 Learning the weight changes as matrices in each chunk
3 Combining the matrices and updating the weights - back to 2
Maybe this procedure can be done with random parts of the chunks (distributed
quasi-online learning).
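Step 3 above - combining the per-chunk weight-change matrices - could be as
simple as averaging them and applying the averaged delta to the shared
weights. A minimal sketch (illustrative names; a real job would run over the
partitioned data):

```java
public class BatchCombineSketch {
    // Average the per-chunk weight-change matrices and apply the result
    // to the shared weight matrix w in place.
    public static void combineAndUpdate(double[][] w, double[][][] chunkDeltas) {
        int rows = w.length;
        int cols = w[0].length;
        int chunks = chunkDeltas.length;
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                double sum = 0.0;
                for (double[][] d : chunkDeltas) {
                    sum += d[i][j]; // accumulate this weight's delta per chunk
                }
                w[i][j] += sum / chunks; // apply the averaged update
            }
        }
    }
}
```

The updated weights would then be redistributed to the chunks for the next
iteration (step 2).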