Dear Dieter, I have experienced exactly the same problem with Conx; that is
the reason why I do not use it.
It seems that the Conx API does not have an implementation of
incremental learning for neural networks. This seems strange, but I
cannot find the right functions for doing it.
If anybody knows a way to do this, please let all of us know, both on this
Pyro list and with a note to the Conx users' list.
Thanks,
jose
----- Original Message -----
From: "Dieter Vanderelst" <[EMAIL PROTECTED]>
To: "Douglas S. Blank" <[EMAIL PROTECTED]>; <pyro-users@pyrorobotics.org>
Sent: Friday, October 26, 2007 12:26 PM
Subject: Re: [Pyro-users] Using the SRN
Dear Douglas,
Thank you for your answer.
I have programmed a net based on your pointers, but I am still having some
trouble.
This is what I do:
I use the code at http://pyrorobotics.org/?page=SRNModuleExperiments to
make an elman net.
Then I want to train this net by presenting a *single* input and output
pattern at a time, repeatedly:

for input_vector, output_vector in patterns:
    network.setInputs([input_vector])
    network.setOutputs([output_vector])
    network.train()  # train the network some *more* on each pass
Is this possible? It seems like the net is resetting itself after each
call to train(), since it considers each pass through this loop an epoch.
Can this resetting be switched off?
Regards,
Dieter
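Independent of Conx's own API, the per-pattern (online) training Dieter is asking for can be sketched in plain NumPy: one forward pass, one backward pass, and an immediate weight update for each pattern, with the previous hidden state carried along as the Elman "context" input. This is a minimal illustrative sketch, not Conx code; the class and variable names (TinyElman, patterns, etc.) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyElman:
    """Minimal Elman SRN: the hidden state from the previous step is
    fed back as extra input (the 'context' layer)."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        self.lr = lr
        # input + context -> hidden, and hidden -> output weights
        self.W_xh = rng.normal(0, 0.5, (n_in + n_hid, n_hid))
        self.W_hy = rng.normal(0, 0.5, (n_hid, n_out))
        self.context = np.zeros(n_hid)

    def step(self, x, target):
        """One incremental pass: propagate, backprop, update weights now."""
        z = np.concatenate([x, self.context])
        h = sigmoid(z @ self.W_xh)
        y = sigmoid(h @ self.W_hy)
        err = y - target
        # Deltas for squared-error loss; sigmoid derivative is s*(1-s).
        d_y = err * y * (1 - y)
        d_h = (d_y @ self.W_hy.T) * h * (1 - h)
        # Per-pattern (online) update -- no batch accumulation.
        self.W_hy -= self.lr * np.outer(h, d_y)
        self.W_xh -= self.lr * np.outer(z, d_h)
        self.context = h  # carry the hidden state to the next step
        return float((err ** 2).mean())

# Toy sequence task: predict the next bit of a repeating 1,0,0 pattern.
seq = [np.array([1.0]), np.array([0.0]), np.array([0.0])] * 300
net = TinyElman(n_in=1, n_hid=5, n_out=1)
losses = [net.step(x, tgt) for x, tgt in zip(seq[:-1], seq[1:])]
print(sum(losses[:50]) / 50, sum(losses[-50:]) / 50)  # loss should drop
```

Because the weights change on every call to step(), each pattern is trained "some more" on each pass, which is the behaviour Dieter describes wanting from the loop above.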
Douglas S. Blank wrote:
Dieter,
You can use as long of sequences as you want, even from a file.
See, for example, the section on off-line learning here:
http://pyrorobotics.org/?page=Building_20Neural_20Networks_20using_20Conx
or
http://pyrorobotics.org/?page=Robot_20Learning_20using_20Neural_20Networks
You can use loadDataFromFile, or loadInputsFromFile /
loadTargetsFromFile.
If you want to look at hidden layer activations, perhaps the easiest
method would be to use the SRN.propagate(input=[0,1,0,0,1]) form, and
then look at the hidden layer. For example:
srn = SRN()
# .. add layers, train
srn.propagate(input=[0,1,0,0,1])
print srn["hidden"].activation
Another way would be to extend the SRN class and override one of the
methods, like postBackprop:
from pyrobot.brain.conx import *

class MySRN(SRN):
    def postBackprop(self, **args):
        print self["hidden"].activation
        SRN.postBackprop(self, **args)
and use the MySRN class exactly the way that you would the SRN class.
That would allow you to examine the hidden layer during processing.
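The hook-override pattern Doug describes (do your extra work, then call the parent method so normal processing continues) is plain Python and can be tried without Conx at all. The Network/LoggingNetwork classes and the fake activations below are invented stand-ins for illustration only:

```python
class Network:
    """Stand-in for a trainer that calls a hook after each backprop pass."""
    def __init__(self):
        self.hidden_activation = [0.0, 0.0]

    def postBackprop(self, **args):
        pass  # base class does nothing; subclasses may extend this

    def train_step(self, step):
        # Pretend training happened and produced some activations.
        self.hidden_activation = [step * 0.1, step * 0.2]
        self.postBackprop(step=step)

class LoggingNetwork(Network):
    """Record the hidden activations on every pass, then defer to the parent."""
    def __init__(self):
        super().__init__()
        self.log = []

    def postBackprop(self, **args):
        self.log.append(list(self.hidden_activation))
        Network.postBackprop(self, **args)  # keep the parent behaviour

net = LoggingNetwork()
for step in range(3):
    net.train_step(step)
print(net.log)  # one activation snapshot per training step
```

The key detail is the final call back into the parent class's method: dropping it would silently skip whatever bookkeeping the base class normally does after backprop.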
You can set batch to 0 and you shouldn't have any problem, either way.
Hope that helps,
-Doug
Dieter Vanderelst wrote:
Hi,
I need some advice on the use of SRNs (simple recurrent nets).
I know what the network does but I need some help on the Pyro
implementation.
This is what I want to do with the net:
-First, I want to train an SRN using a single (very long) sequence of
patterns. The examples I could find on SRNs all define a number of patterns
and build a sequence of these on the fly. However, I will read a single
long sequence of patterns from a file (experimental data).
-Second, I want to analyze the activation of the hidden nodes in
response to each different input pattern. To do this, I want to present the
net with a long random sequence of input patterns and record the
activations.
-I don't want the network to be trained using batch updating. Given my
problem, batch updating is senseless.
So, could somebody assist me in finding the best settings for these
requirements?
Thanks,
Dieter Vanderelst
_______________________________________________
Pyro-users mailing list
Pyro-users@pyrorobotics.org
http://emergent.brynmawr.edu/mailman/listinfo/pyro-users