Re: Spark ANN

2015-09-15 Thread Ruslan Dautkhanov
> Just wondering, why do we need tensors? Is the implementation of convnets using im2col (see http://cs231n.github.io/convolutional-networks/) insufficient?

Re: Spark ANN

2015-09-09 Thread Feynman Liang
…ered as well: http://arxiv.org/pdf/1312.5851.pdf (Mathieu et al., "Fast Training of Convolutional Networks through FFTs").

RE: Spark ANN

2015-09-09 Thread Ulanov, Alexander
My 2 cents: * There is frequency-domain processing available already (e.g. the spark.ml DCT transformer), but no FFT transformer yet, because complex numbers are not currently a Spark SQL datatype. * We shouldn't assume signals are even, …
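
For reference, a minimal sketch of the spark.ml DCT transformer mentioned above. Package names follow Spark 2.x+ (in the 1.5-era code under discussion the vector types live in org.apache.spark.mllib.linalg); the DCT is real-to-real, which is why it fits Spark SQL today while an FFT would not:

    import org.apache.spark.ml.feature.DCT
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().master("local[*]").appName("dct-demo").getOrCreate()
    val df = spark.createDataFrame(Seq(
      Tuple1(Vectors.dense(0.0, 1.0, -2.0, 3.0)),
      Tuple1(Vectors.dense(-1.0, 2.0, 4.0, -7.0))
    )).toDF("features")

    // Forward DCT-II over each vector; inputs and outputs are both real,
    // so no complex datatype is required.
    val dct = new DCT()
      .setInputCol("features")
      .setOutputCol("featuresDCT")
      .setInverse(false)

    dct.transform(df).select("featuresDCT").show(truncate = false)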

Re: Spark ANN

2015-09-08 Thread Feynman Liang
> Found a dropout commit from avulanov: https://github.com/avulanov/spark/commit/3f25e26d10ef8617e46e35953fe0ad1a178be69d

RE: Spark ANN

2015-09-08 Thread Ulanov, Alexander
Just wondering, why do we need tensors? Is the implementation of convnets using im2col (see http://cs231n.github.io/convolutional-networks/) insufficient? On Tue, Sep 8, 2015 at 11:55 AM, Ulanov, Alexander <alexander.ula...@hpe.com> …
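
For readers unfamiliar with the trick: im2col unrolls every k-by-k patch of the image into a column, so convolution with F filters becomes a single (F x k*k) by (k*k x numPatches) matrix multiply. A minimal single-channel, stride-1 sketch (hypothetical standalone Scala, not Spark code):

    // "Valid" convolution layout: no padding, stride 1.
    def im2col(img: Array[Array[Double]], k: Int): Array[Array[Double]] = {
      val h = img.length
      val w = img(0).length
      val outH = h - k + 1
      val outW = w - k + 1
      // Result has k*k rows and one column per output position.
      val cols = Array.ofDim[Double](k * k, outH * outW)
      for (i <- 0 until outH; j <- 0 until outW) {
        val col = i * outW + j
        for (di <- 0 until k; dj <- 0 until k)
          cols(di * k + dj)(col) = img(i + di)(j + dj)
      }
      cols
    }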

Re: Spark ANN

2015-09-07 Thread Ruslan Dautkhanov
Thanks! It does not look like Spark ANN supports dropout/dropconnect yet, or any other techniques that help avoid overfitting? http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf https://cs.nyu.edu/~wanli/dropc/dropc.pdf ps. There is a small copy-paste typo in …
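
For context, the dropout from the Srivastava et al. paper linked above is a small training-time transform. A minimal sketch of the inverted-dropout variant (hypothetical standalone Scala, not Spark ANN code):

    import scala.util.Random

    // Zero each activation with probability p and scale survivors by 1/(1-p),
    // so the expected activation is unchanged and inference needs no rescaling.
    def dropout(activations: Array[Double], p: Double, rng: Random): Array[Double] =
      activations.map(a => if (rng.nextDouble() < p) 0.0 else a / (1.0 - p))

    // Example: dropout(Array(0.5, 1.2, -0.3), p = 0.5, new Random(42))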

Re: Spark ANN

2015-09-07 Thread Ruslan Dautkhanov
Found a dropout commit from avulanov: https://github.com/avulanov/spark/commit/3f25e26d10ef8617e46e35953fe0ad1a178be69d It probably hasn't made its way into MLlib (yet?). -- Ruslan Dautkhanov On Mon, Sep 7, 2015 at 8:34 PM, Feynman Liang wrote: > Unfortunately, not …

Re: Spark ANN

2015-09-07 Thread Feynman Liang
Unfortunately, not yet... Deep learning support (autoencoders, RBMs) is on the roadmap for 1.6 though, and there is a Spark package for dropout-regularized logistic regression.

Re: Spark ANN

2015-09-07 Thread Feynman Liang
BTW, thanks for pointing out the typos; I've included fixes for them in my MLP cleanup PR. On Mon, Sep 7, 2015 at 7:34 PM, Feynman Liang wrote: > Unfortunately, not yet... Deep learning support (autoencoders, RBMs) is on the roadmap for …

Re: Spark ANN

2015-09-07 Thread Debasish Das
Not sure about dropout, but if you change the solver from Breeze BFGS to Breeze OWLQN or breeze.proximal.NonlinearMinimizer, you can solve the ANN loss with L1 regularization, which will yield elastic-net-style sparse solutions. Using that, you can clean up edges whose weight is 0.0... On Sep 7, 2015 7:35 …
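
A minimal sketch of the OWLQN suggestion on a toy quadratic loss (not the actual ANN objective; the per-coordinate L1 constructor shown here is the one Spark's own LogisticRegression uses against Breeze):

    import breeze.linalg.DenseVector
    import breeze.optimize.{DiffFunction, OWLQN}

    // Toy smooth loss: ||x - b||^2 with b = (3, 0.001, -2).
    val b = DenseVector(3.0, 0.001, -2.0)
    val loss = new DiffFunction[DenseVector[Double]] {
      def calculate(x: DenseVector[Double]) = {
        val r = x - b
        (r dot r, r * 2.0)           // (value, gradient)
      }
    }

    // OWLQN adds an L1 penalty of 0.1 per coordinate on top of the smooth
    // loss; coordinates with little influence (like b(1)) land exactly at 0.
    val owlqn = new OWLQN[Int, DenseVector[Double]](100, 10, (_: Int) => 0.1, 1e-6)
    val xStar = owlqn.minimize(loss, DenseVector.zeros[Double](3))
    println(xStar)                   // roughly (2.95, 0.0, -1.95) by soft-thresholding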

Re: Spark ANN

2015-09-07 Thread Nick Pentreath
Haven't checked the actual code, but that doc says "MLPC employes [sic] backpropagation for learning the model..."? On Mon, Sep 7, 2015 at 8:18 PM, Ruslan Dautkhanov wrote: > http://people.apache.org/~pwendell/spark-releases/latest/ml-ann.html

Re: Spark ANN

2015-09-07 Thread Feynman Liang
Backprop is used to compute the gradient [link], which is then optimized by SGD or LBFGS [link].
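
To make that division of labor concrete, a minimal sketch (hypothetical standalone Scala, not the Spark source the links pointed to) of a single linear neuron with squared loss, where a backprop-style chain rule produces the gradient and a plain SGD step consumes it:

    // Loss for one example: L = 0.5 * (w.x - y)^2.
    def gradient(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
      val pred = w.zip(x).map { case (wi, xi) => wi * xi }.sum
      val err = pred - y                 // dL/dpred
      x.map(_ * err)                     // chain rule: dL/dw_i = err * x_i
    }

    // One SGD update; LBFGS would instead feed the same gradients into a
    // quasi-Newton direction, but the gradient computation is identical.
    def sgdStep(w: Array[Double], g: Array[Double], lr: Double): Array[Double] =
      w.zip(g).map { case (wi, gi) => wi - lr * gi }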