Thank you, it's very useful to have examples.
To report back, my experience with Mocha.jl has been very good. The
following is an example of how one can do regression with Mocha. This
assumes that there are two data files, "train.dat" and "test.dat", which are
plain ASCII files, space-delimited, with variables in columns. The outputs
are in …
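The example itself is cut off in this archive, so here is a minimal sketch (my reconstruction, not Michael's actual code) of what a Mocha.jl regression setup along these lines could look like. It assumes the target is the last column of train.dat (an assumption, since the message is truncated before saying where the outputs are) and uses the SolverParameters-style solver API from the Mocha documentation of that era; later versions moved to make_solver_parameters/Solver.

using Mocha

raw = readdlm("train.dat")   # space-delimited ASCII, variables in columns
p   = size(raw, 2) - 1       # assumption: target is the last column
n   = size(raw, 1)
X   = reshape(raw[:, 1:p]', 1, 1, p, n)  # Mocha blobs are (width, height, channels, num)
Y   = reshape(raw[:, end], 1, 1, 1, n)

backend = CPUBackend()
init(backend)

data = MemoryDataLayer(name="data", tops=[:data, :label], batch_size=64,
                       data=Array[X, Y])
fc1  = InnerProductLayer(name="fc1", output_dim=20, neuron=Neurons.Tanh(),
                         bottoms=[:data], tops=[:fc1])
pred = InnerProductLayer(name="pred", output_dim=1,   # no neuron: linear output for regression
                         bottoms=[:fc1], tops=[:pred])
loss = SquareLossLayer(name="loss", bottoms=[:pred, :label])

net = Net("regression", backend, [data, fc1, pred, loss])

params = SolverParameters(max_iter=10000, regu_coef=0.0,
                          mom_policy=MomPolicy.Fixed(0.9),
                          lr_policy=LRPolicy.Fixed(0.01))
solver = SGD(params)
solve(solver, net)

shutdown(backend)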
I'd be interested in seeing your sin-fitting network as well.
Phil
Dear Michael, I am interested in using Mocha in the context of regression
too. Could you share the simple example of the synthetic function below
with me as well (possibly in private)?
Thanks,
Fabrizio
Thanks everyone for the comments and pointers to code. I have coded up a
simple example, fitting y = sin(x) + error, and the results are very good,
enough so that I'll certainly be investigating further with larger-scale
problems. I may try to use one of the existing packages, but it may be …
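In case it helps anyone reproduce the setup, a minimal sketch of generating such synthetic data in Julia-0.4-era syntax (the sample size and noise level are my own guesses, not Michael's):

srand(1)                     # fix the RNG for reproducibility
n = 1000
x = 2pi * rand(n)            # inputs uniform on [0, 2pi]
y = sin(x) + 0.1 * randn(n)  # targets with Gaussian noise
writedlm("train.dat", [x y], ' ')   # same space-delimited format as above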
One thing to keep in mind is stability. Small changes to the weights
in the early layers of a deep feedforward network might have large impacts
on the final regression result. This is not as big of a problem in
classification tasks because the final result is squashed to a small range
(usually [0, 1] by a softmax).
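A toy numerical illustration of that squashing effect (not from the thread; the numbers are arbitrary): the same weight perturbation moves an unbounded linear output by 3.0, but a sigmoid-squashed output by only about 3e-7.

sigmoid(z) = 1 / (1 + exp(-z))
w, dw, x = 5.0, 1.0, 3.0
println((w + dw)*x - w*x)                    # linear output: shifts by 3.0
println(sigmoid((w + dw)*x) - sigmoid(w*x))  # squashed output: shifts by ~2.9e-7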
The reason most of the deep-learning focus is on classification is that
image classification and voice recognition are where all the research
money and focus are for the large companies investing in machine learning,
e.g. Google, Baidu, Facebook, Microsoft, etc. Also, a number …
AFAIK deep learning in general does not have any problem with redundant
inputs. If you have fewer nodes in your first layer than input nodes, then
the redundant (or nearly-redundant) input nodes will be combined into one
node (more or less). And there are approaches that favor using …
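A toy sketch of that combining effect (illustrative only): two nearly-redundant inputs feeding one first-layer unit collapse into what is effectively a single feature.

srand(2)
x1 = randn(100)
x2 = x1 + 0.01 * randn(100)  # nearly a copy of x1
h  = 0.7 * x1 + 0.3 * x2     # one hidden unit's pre-activation
println(cor(h, x1))          # ≈ 1.0: the pair carries one signal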
I've been using NNs for regression and I've experimented with Mocha. I
ended up coding my own network for speed purposes, but in general you simply
leave the final output of the neural network as a linear combination,
without applying an activation function. That way the output can represent
any real value.
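A minimal sketch of that design (W1, b1, W2, b2 are hypothetical trained parameters): a tanh hidden layer followed by a linear output, so predictions are unbounded.

predict(x, W1, b1, W2, b2) = W2 * tanh(W1 * x + b1) + b2

# e.g. a 1-input, 3-hidden-unit, 1-output net:
W1 = randn(3, 1); b1 = zeros(3); W2 = randn(1, 3); b2 = zeros(1)
println(predict([0.5], W1, b1, W2, b2))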
Thanks, that's pretty much my understanding. Scaling the inputs seems to be
important, too, from what I read. I'm also interested in a framework that
will trim off redundant inputs.
I have run the Mocha tutorial examples, and it looks very promising because
the structure is clear, and there …
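A sketch of the input scaling mentioned above (standardize is a hypothetical helper, not a Mocha function); the training-set means and standard deviations should be reused when scaling the test data.

standardize(X) = (X .- mean(X, 1)) ./ std(X, 1)

Xtr = readdlm("train.dat")[:, 1:end-1]   # features only, assuming target in last column
Xs  = standardize(Xtr)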
I am happy to see people interested in messing around with Julia for ML.
The best way to wrap your head around the concepts is usually to try it
out and see what happens.
My 2 cents: I doubt you will get competitive results with neural networks
for your regression problems (even …