Thanks!

It doesn't look like Spark ANN supports dropout/DropConnect or any other
techniques that help avoid overfitting yet?
http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf
https://cs.nyu.edu/~wanli/dropc/dropc.pdf
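
For reference, dropout itself is a small transformation; a minimal sketch of
"inverted" dropout over one layer's activations might look like this (plain
Scala, hypothetical helper, not an actual Spark API):

  import scala.util.Random

  // Inverted dropout: during training, zero each activation with
  // probability p and scale the survivors by 1/(1 - p), so no extra
  // rescaling is needed at prediction time (where this step is skipped).
  def dropout(activations: Array[Double], p: Double, rng: Random): Array[Double] = {
    require(p >= 0.0 && p < 1.0, "dropout probability must be in [0, 1)")
    activations.map(a => if (rng.nextDouble() < p) 0.0 else a / (1.0 - p))
  }

DropConnect (the second paper) drops individual weights instead of whole
activations, but the training-time masking idea is the same.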

PS: there is a small copy-paste typo in
https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/BreezeUtil.scala#L43
which should read B & C :)
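
If I'm reading that line right, it's the B/C dimension check whose error
message was copy-pasted from the A & C check above it, i.e. presumably:

  // current
  require(B.cols == C.cols, "A & C Dimension mismatch!")
  // should read
  require(B.cols == C.cols, "B & C Dimension mismatch!")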



-- 
Ruslan Dautkhanov

On Mon, Sep 7, 2015 at 12:47 PM, Feynman Liang <fli...@databricks.com>
wrote:

> Backprop is used to compute the gradient here
> <https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L579-L584>,
> which is then optimized by SGD or LBFGS here
> <https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L878>
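> In sketch form, that's the usual mllib Gradient/optimizer split; roughly
> (illustrative names only; the ANN code defines its own private Gradient
> subclass, so check Layer.scala for the real thing):
>
>   import org.apache.spark.mllib.linalg.Vector
>   import org.apache.spark.mllib.optimization.{Gradient, LBFGS, SimpleUpdater}
>
>   // compute() runs the forward pass to get the loss, then the backward
>   // (backprop) pass to accumulate d(loss)/d(weights) into cumGradient.
>   class BackpropGradient extends Gradient {
>     override def compute(data: Vector, label: Double, weights: Vector,
>                          cumGradient: Vector): Double = {
>       // ... forward pass, backprop pass, add gradient into cumGradient ...
>       0.0 // return the per-example loss
>     }
>   }
>
>   // The optimizer (GradientDescent or LBFGS) only consumes the
>   // (loss, gradient) pairs and decides how to update the weights.
>   val optimizer = new LBFGS(new BackpropGradient, new SimpleUpdater)
>     .setNumIterations(100)
>   // val weights = optimizer.optimize(trainingData, initialWeights)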
>
> On Mon, Sep 7, 2015 at 11:24 AM, Nick Pentreath <nick.pentre...@gmail.com>
> wrote:
>
>> Haven't checked the actual code, but that doc says "MLPC employs
>> backpropagation for learning the model..."?
>>
>>
>>
>> —
>> Sent from Mailbox <https://www.dropbox.com/mailbox>
>>
>>
>> On Mon, Sep 7, 2015 at 8:18 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
>> wrote:
>>
>>> http://people.apache.org/~pwendell/spark-releases/latest/ml-ann.html
>>>
>>> The implementation seems to be missing backpropagation?
>>> Was there a good reason to omit BP?
>>> What are the drawbacks of a pure feedforward-only ANN?
>>>
>>> Thanks!
>>>
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>
>>
>
