Backpropagation is used to compute the gradient here
<https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L579-L584>,
and that gradient is then fed to an SGD or L-BFGS optimizer here
<https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L878>
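The two stages above (backprop produces a gradient, a separate optimizer consumes it) can be sketched in miniature. This is an illustrative toy, not Spark's implementation: a 1-2-1 MLP with a sigmoid hidden layer, squared-error loss, and a plain SGD update; all names here are made up for the sketch.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, w1, w2):
    # Hidden sigmoid activations, then a linear output unit.
    h = [sigmoid(w * x) for w in w1]
    y = sum(wo * hi for wo, hi in zip(w2, h))
    return h, y

def backprop(x, target, w1, w2):
    # Forward pass, then propagate the squared-error gradient backwards
    # (this is the "gradient computation" stage).
    h, y = forward(x, w1, w2)
    dy = y - target                       # dL/dy for L = 0.5*(y - t)^2
    g2 = [dy * hi for hi in h]            # gradient w.r.t. output weights
    g1 = [dy * wo * hi * (1.0 - hi) * x   # chain rule through the sigmoid
          for wo, hi in zip(w2, h)]
    return g1, g2, 0.5 * dy * dy

def sgd_step(w, g, lr=0.5):
    # The "optimizer" stage: a plain gradient-descent update.
    return [wi - lr * gi for wi, gi in zip(w, g)]

w1, w2 = [0.1, -0.2], [0.3, 0.4]   # arbitrary initial weights
x, target = 1.0, 1.0
_, _, loss0 = backprop(x, target, w1, w2)
for _ in range(50):
    g1, g2, _ = backprop(x, target, w1, w2)
    w1, w2 = sgd_step(w1, g1), sgd_step(w2, g2)
_, _, loss1 = backprop(x, target, w1, w2)
```

In Spark the optimizer stage is pluggable in the same way: the gradient computed by backprop is handed to either minibatch SGD or L-BFGS, without the network code caring which.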

On Mon, Sep 7, 2015 at 11:24 AM, Nick Pentreath <nick.pentre...@gmail.com>
wrote:

> Haven't checked the actual code, but that doc says "MLPC employs
> backpropagation for learning the model. .."?
>
>
> On Mon, Sep 7, 2015 at 8:18 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
> wrote:
>
>> http://people.apache.org/~pwendell/spark-releases/latest/ml-ann.html
>>
>> The implementation seems to be missing backpropagation?
>> Was there a good reason to omit BP?
>> What are the drawbacks of a pure feedforward-only ANN?
>>
>> Thanks!
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>
>
