[
https://issues.apache.org/jira/browse/MAHOUT-976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13673111#comment-13673111
]
Robin Anil commented on MAHOUT-976:
-----------------------------------
I see a few System.out.println() calls; please remove those. Also use the Mahout
Eclipse code formatter to format the files. [~chrisberlin], will you be able to
work on these quickly? I am pushing it off the 0.8 list. If you can work on it,
please update it and we will review it.
> Implement Multilayer Perceptron
> -------------------------------
>
> Key: MAHOUT-976
> URL: https://issues.apache.org/jira/browse/MAHOUT-976
> Project: Mahout
> Issue Type: New Feature
> Affects Versions: 0.7
> Reporter: Christian Herta
> Assignee: Ted Dunning
> Priority: Minor
> Labels: multilayer, networks, neural, perceptron
> Fix For: 0.8
>
> Attachments: MAHOUT-976.patch, MAHOUT-976.patch, MAHOUT-976.patch,
> MAHOUT-976.patch
>
> Original Estimate: 80h
> Remaining Estimate: 80h
>
> Implement a multilayer perceptron
> * via matrix multiplication (see the forward-pass sketch after this list)
> * learning by backpropagation, implementing tricks by Yann LeCun et al.:
> "Efficient BackProp"
> * arbitrary number of hidden layers (also 0 - then just the linear model)
> * connections between proximate layers only
> * different cost and activation functions (a possibly different activation
> function in each layer)
> * test of backprop by gradient checking (see the second sketch below)
> * normalization of the inputs (storable) as part of the model
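> A minimal sketch of the forward pass as matrix multiplication, assuming
> Mahout's org.apache.mahout.math types; the class and method names here are
> illustrative, not part of the attached patch:
> {code:java}
> import org.apache.mahout.math.Matrix;
> import org.apache.mahout.math.Vector;
> import org.apache.mahout.math.function.DoubleFunction;
>
> public final class MlpForwardSketch {
>   // Logistic activation; each layer could plug in a different DoubleFunction.
>   private static final DoubleFunction SIGMOID = new DoubleFunction() {
>     @Override
>     public double apply(double x) {
>       return 1.0 / (1.0 + Math.exp(-x));
>     }
>   };
>
>   /**
>    * Propagates an input through one weight matrix per pair of proximate
>    * layers. With a single weight matrix and no nonlinearity on the output
>    * this degenerates to the linear model. Bias terms are omitted for brevity.
>    */
>   static Vector forward(Matrix[] weights, Vector input) {
>     Vector activation = input;
>     for (Matrix w : weights) {
>       activation = w.times(activation).assign(SIGMOID);
>     }
>     return activation;
>   }
> }
> {code}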
>
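> The backprop test could compare analytic gradients against central finite
> differences; a hedged sketch, where the Cost interface stands in for whatever
> loss the model actually exposes:
> {code:java}
> import org.apache.mahout.math.Matrix;
>
> final class GradientCheckSketch {
>   /** Stand-in for the model's scalar cost over one layer's weights. */
>   interface Cost {
>     double apply(Matrix weights);
>   }
>
>   /** Largest absolute gap between numeric and analytic gradient entries. */
>   static double maxGradientError(Cost cost, Matrix weights,
>                                  Matrix analyticGrad, double eps) {
>     double worst = 0.0;
>     for (int i = 0; i < weights.rowSize(); i++) {
>       for (int j = 0; j < weights.columnSize(); j++) {
>         double saved = weights.get(i, j);
>         weights.set(i, j, saved + eps);
>         double plus = cost.apply(weights);
>         weights.set(i, j, saved - eps);
>         double minus = cost.apply(weights);
>         weights.set(i, j, saved);  // restore the weight
>         double numeric = (plus - minus) / (2.0 * eps);
>         worst = Math.max(worst, Math.abs(numeric - analyticGrad.get(i, j)));
>       }
>     }
>     return worst;
>   }
> }
> {code}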
> First:
> * implementation of "stochastic gradient descent", like the gradient machine
> * simple gradient descent, including momentum (see the sketch after this list)
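> A sketch of the simple descent step with momentum, assuming one weight,
> gradient, and velocity matrix per layer; the update rule is the standard one,
> the names illustrative:
> {code:java}
> import org.apache.mahout.math.Matrix;
>
> final class MomentumStepSketch {
>   /** v <- momentum * v - rate * gradient; then w <- w + v, layer by layer. */
>   static void step(Matrix[] weights, Matrix[] gradients, Matrix[] velocities,
>                    double rate, double momentum) {
>     for (int layer = 0; layer < weights.length; layer++) {
>       velocities[layer] = velocities[layer].times(momentum)
>           .minus(gradients[layer].times(rate));
>       weights[layer] = weights[layer].plus(velocities[layer]);
>     }
>   }
> }
> {code}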
> Later (new JIRA issues):
> * distributed batch learning (see below)
> * "Stacked (Denoising) Autoencoder" - feature learning
> * advanced cost minimization such as 2nd-order methods, conjugate gradient, etc.
> Distribution of learning can be done by (batch learning):
> 1. Partitioning the data into x chunks
> 2. Learning the weight changes as matrices in each chunk
> 3. Combining the matrices and updating the weights - back to 2 (see the
> sketch below)
> Maybe this procedure can be done with random parts of the chunks (distributed
> quasi-online learning).
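> Step 3 of that loop might combine the per-chunk weight-change matrices by
> averaging before the next pass; averaging is just one possible combination
> rule, and the types here are illustrative:
> {code:java}
> import java.util.List;
> import org.apache.mahout.math.Matrix;
>
> final class CombineSketch {
>   /** Averages each layer's weight changes over all chunks and applies them. */
>   static void combineAndUpdate(Matrix[] weights, List<Matrix[]> chunkDeltas) {
>     for (int layer = 0; layer < weights.length; layer++) {
>       Matrix sum = weights[layer].like();  // zero matrix of the same shape
>       for (Matrix[] delta : chunkDeltas) {
>         sum = sum.plus(delta[layer]);
>       }
>       weights[layer] = weights[layer].plus(sum.times(1.0 / chunkDeltas.size()));
>     }
>   }
> }
> {code}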
> Batch learning with delta-bar-delta heuristics for adapting the learning
> rates.
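> For reference, delta-bar-delta keeps one learning rate per weight: increase
> it additively while the current gradient agrees in sign with an exponential
> average of past gradients, decrease it multiplicatively on a sign flip. A
> sketch for a single weight, with the usual tuning constants left configurable:
> {code:java}
> final class DeltaBarDeltaSketch {
>   /**
>    * Returns the adapted rate and updates barHolder[0], the exponential
>    * average of past gradients. kappa (additive increase), phi < 1
>    * (multiplicative decrease), and theta (averaging weight) follow
>    * Jacobs' heuristic; concrete values would be configurable.
>    */
>   static double adaptRate(double rate, double gradient, double[] barHolder,
>                           double kappa, double phi, double theta) {
>     double bar = barHolder[0];
>     if (gradient * bar > 0) {
>       rate += kappa;        // consistent sign: grow additively
>     } else if (gradient * bar < 0) {
>       rate *= phi;          // sign flip: shrink multiplicatively
>     }
>     barHolder[0] = (1 - theta) * gradient + theta * bar;
>     return rate;
>   }
> }
> {code}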
>