Awesome!
Thank you David -- backproppy looks nice + simple -- exactly what I
needed to experiment/learn with.
On Mon, Nov 28, 2011 at 1:22 PM, David Warde-Farley wrote:
> On Mon, Nov 28, 2011 at 06:42:03PM +0100, Andreas Müller wrote:
>
>> I think it should be pretty straightforward, replacing cp.prod()
>> with np.dot() and similar.
On Mon, Nov 28, 2011 at 06:42:03PM +0100, Andreas Müller wrote:
> I think it should be pretty straightforward, replacing cp.prod()
> with np.dot() and similar.
> The implementation has lots of features, so I am not sure
> how easy it is to understand. You can definitely have a look.
>
> If you al
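For anyone following along, here is a rough sketch of the substitution
Andreas describes -- a single fully connected layer, with shapes and
data made up purely for illustration:

    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(10, 784)            # a minibatch of 10 examples
    W = rng.randn(784, 500) * 0.01    # one hidden layer's weights
    b = np.zeros(500)                 # hidden biases

    # where a CUV implementation computes the matrix product on the
    # GPU (cp.prod in Andreas' example), the pure-numpy port is just
    # np.dot on the CPU:
    pre = np.dot(X, W) + b
    H = 1.0 / (1.0 + np.exp(-pre))    # sigmoid hidden activations

The point being that the two versions should be nearly line-for-line
analogous; mostly the array module changes.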
On 11/28/2011 05:23 PM, Timmy Wilson wrote:
> Thanks Guys!
>
>> This is neither a Deep Belief Network nor a stack
>> of RBMs, just a regular feed forward neural network
>> that has a particularly well chosen set of initial weights.
> Agreed. This is what I'm imagining.
>
> Assuming good results, I'm sure I'll want to move to a GPU implementation.
Thanks Guys!
> This is neither a Deep Belief Network nor a stack
> of RBMs, just a regular feed forward neural network
> that has a particularly well chosen set of initial weights.
Agreed. This is what I'm imagining.
Assuming good results, I'm sure I'll want to move to a GPU implementation.
In
I'd like to add something to David's addition to Olivier's answer:
There are also some alternatives to Theano ;)
Theano is great, in particular with all the docs and tutorials, but I
think it feels like learning a new language.
My lab has a CUDA library called CUV that aims to be a numpy replacement.
A few things I'd add to Olivier's reply:
First, it's not quite accurate to call it "layered RBMs". The RBM
interpretation, and the CD-1 approximate training procedure, really
only make sense in the context of a single layer/unsupervised
training, but we then take the weights and biases and shove them
into a regular feed-forward network.
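A minimal numpy sketch of that hand-off, with the CD-1 step stubbed
out -- the function and variable names here are mine, not from any
particular implementation:

    import numpy as np

    rng = np.random.RandomState(0)
    sizes = [2000, 500, 250, 125, 2]   # layer widths, as in Hinton's paper

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def pretrain_rbm(data, n_hidden):
        """Stand-in for CD-1 training of a single RBM; a real
        implementation would return the learned weights/biases."""
        W = rng.randn(data.shape[1], n_hidden) * 0.01
        b = np.zeros(n_hidden)
        return W, b

    # greedy layer-wise pretraining: train one RBM per layer, then
    # feed its hidden activations to the next RBM as data
    X = rng.rand(100, sizes[0])
    params = []
    for n_hidden in sizes[1:]:
        W, b = pretrain_rbm(X, n_hidden)
        params.append((W, b))
        X = sigmoid(np.dot(X, W) + b)

    # ...and those weights/biases become the initial parameters of an
    # ordinary feed-forward net, which is then fine-tuned with backprop
    def forward(x, params):
        for W, b in params:
            x = sigmoid(np.dot(x, W) + b)
        return x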
You should definitely have a look at theano, which will probably run
much faster than pure numpy for this kind of model (esp. if you have
access to a GPU with the CUDA runtime).
http://deeplearning.net/software/theano/
The deep learning tutorial [1] has a section on backpropagation [2]
and also on autoencoders.
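The main thing Theano buys you here is symbolic differentiation: you
write the forward pass and the cost, and the gradients (and hence the
backprop updates) are derived and compiled for you, optionally for
the GPU. A minimal sketch with a single logistic layer and made-up
shapes:

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.matrix('x')
    y = T.matrix('y')
    W = theano.shared(np.zeros((784, 2), dtype=theano.config.floatX))
    b = theano.shared(np.zeros(2, dtype=theano.config.floatX))

    out = T.nnet.sigmoid(T.dot(x, W) + b)
    cost = T.mean((out - y) ** 2)

    # Theano derives the gradients symbolically -- no hand-written backprop
    gW, gb = T.grad(cost, [W, b])
    train = theano.function(
        [x, y], cost,
        updates=[(W, W - 0.1 * gW), (b, b - 0.1 * gb)])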
Hi scikit-learn community,
I'm experimenting w/ unsupervised Deep Belief Nets (DBN) for dimension
reduction.
Hinton shows good results using a 2000-500-250-125-2 autoencoder to
cluster a newswire corpus (essentially a neural topic model):
http://www.cs.toronto.edu/~hinton/science.pdf
I'm tryi
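For concreteness, the shape of the model from the paper: the
2000-500-250-125-2 encoder is mirrored by a decoder back up to 2000
units, and the 2-unit code layer is the low-dimensional embedding. A
quick numpy sketch with random weights standing in for the pretrained
ones (all names here are illustrative):

    import numpy as np

    rng = np.random.RandomState(0)
    encoder_sizes = [2000, 500, 250, 125, 2]   # from the Science paper

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # the decoder mirrors the encoder, giving a full
    # 2000-500-250-125-2-125-250-500-2000 autoencoder
    layer_sizes = encoder_sizes + encoder_sizes[-2::-1]
    weights = [(rng.randn(m, n) * 0.01, np.zeros(n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def encode(x):
        # running only the encoder half yields the 2-D codes used
        # for visualization/clustering
        for W, b in weights[:len(encoder_sizes) - 1]:
            x = sigmoid(np.dot(x, W) + b)
        return x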