Not sure about dropout, but if you change the solver from Breeze LBFGS to
Breeze OWLQN or breeze.proximal.NonlinearMinimizer, you can solve the ANN
loss with L1 regularization, which will yield elastic-net-style sparse
solutions. Using that, you can prune the edges whose weight is exactly 0.0,
as in the sketch below.
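
A minimal sketch with Breeze's OWLQN (not Spark's actual trainer): the
quadratic objective below is only a stand-in for the backprop-computed ANN
loss, and the L1 strength of 0.5 is an arbitrary example value.

    import breeze.linalg.DenseVector
    import breeze.optimize.{DiffFunction, OWLQN}

    // Stand-in for the ANN loss: f(w) = ||w - b||^2, gradient 2(w - b).
    // In MLP training the backprop loss/gradient would go here instead.
    val b = DenseVector(3.0, 0.05, -2.0, 0.01)
    val f = new DiffFunction[DenseVector[Double]] {
      def calculate(w: DenseVector[Double]) = {
        val diff = w - b
        (diff dot diff, diff * 2.0)
      }
    }

    // OWLQN is LBFGS plus an L1 penalty; the penalty snaps small weights
    // to exactly 0.0 (here the 0.05 and 0.01 entries of b).
    val owlqn = new OWLQN[Int, DenseVector[Double]](100, 10, 0.5)
    val wOpt = owlqn.minimize(f, DenseVector.zeros[Double](4))

    // Weights that land on exactly 0.0 mark edges that can be pruned.
    println(s"solution: $wOpt, zero weights: ${wOpt.toArray.count(_ == 0.0)}")
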
On Sep 7, 2015 7:35 PM, "Feynman Liang" <fli...@databricks.com> wrote:

> BTW, thanks for pointing out the typos; I've included fixes in my MLP
> cleanup PR <https://github.com/apache/spark/pull/8648>
>
> On Mon, Sep 7, 2015 at 7:34 PM, Feynman Liang <fli...@databricks.com>
> wrote:
>
>> Unfortunately, not yet... Deep learning support (autoencoders, RBMs) is
>> on the roadmap for 1.6
>> <https://issues.apache.org/jira/browse/SPARK-10324> though, and there is
>> a Spark package
>> <http://spark-packages.org/package/rakeshchalasani/MLlib-dropout> for
>> dropout-regularized logistic regression.
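>>
>> The dropout operation itself is tiny; a hedged sketch of inverted dropout
>> on one layer's activations (not that package's API, which I haven't
>> checked):
>>
>>     import scala.util.Random
>>
>>     // Zero each activation with probability p and rescale survivors by
>>     // 1 / (1 - p), so the expected activation is unchanged.
>>     def dropout(a: Array[Double], p: Double, rng: Random): Array[Double] =
>>       a.map(x => if (rng.nextDouble() < p) 0.0 else x / (1 - p))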
>>
>>
>> On Mon, Sep 7, 2015 at 3:15 PM, Ruslan Dautkhanov <dautkha...@gmail.com>
>> wrote:
>>
>>> Thanks!
>>>
>>> It does not look like Spark ANN supports dropout/DropConnect or any
>>> other techniques that help avoid overfitting yet?
>>> http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf
>>> https://cs.nyu.edu/~wanli/dropc/dropc.pdf
>>>
>>> PS: there is a small copy-paste typo in
>>>
>>> https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/BreezeUtil.scala#L43
>>> -- it should read B&C :)
>>>
>>>
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>> On Mon, Sep 7, 2015 at 12:47 PM, Feynman Liang <fli...@databricks.com>
>>> wrote:
>>>
>>>> Backprop is used to compute the gradient here
>>>> <https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L579-L584>,
>>>> which SGD or LBFGS then uses to minimize the loss here
>>>> <https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/ann/Layer.scala#L878>
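>>>>
>>>> To make the split concrete, a toy sketch (not Spark's code): backprop is
>>>> the chain rule producing dL/dw, and the optimizer (plain SGD here) only
>>>> consumes that gradient:
>>>>
>>>>     // One linear neuron with squared loss: L = (w*x - y)^2.
>>>>     var w = 0.0
>>>>     val (x, y, lr) = (2.0, 6.0, 0.05)
>>>>     for (_ <- 1 to 100) {
>>>>       val grad = 2.0 * (w * x - y) * x // backprop: chain rule for dL/dw
>>>>       w -= lr * grad                   // optimizer: SGD step on the gradient
>>>>     }
>>>>     println(w) // converges to y / x = 3.0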
>>>>
>>>> On Mon, Sep 7, 2015 at 11:24 AM, Nick Pentreath <
>>>> nick.pentre...@gmail.com> wrote:
>>>>
>>>>> Haven't checked the actual code, but that doc says "MLPC employs
>>>>> backpropagation for learning the model..."?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Sep 7, 2015 at 8:18 PM, Ruslan Dautkhanov <
>>>>> dautkha...@gmail.com> wrote:
>>>>>
>>>>>> http://people.apache.org/~pwendell/spark-releases/latest/ml-ann.html
>>>>>>
>>>>>> The implementation seems to be missing backpropagation?
>>>>>> Was there a good reason to omit BP?
>>>>>> What are the drawbacks of a pure feedforward-only ANN?
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Ruslan Dautkhanov
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>
