On 04/24/2010 05:26 PM, strtr wrote:
Andrei Alexandrescu Wrote:

On 04/24/2010 04:30 PM, strtr wrote:
Andrei Alexandrescu Wrote:

On 04/24/2010 12:52 PM, strtr wrote:
Walter Bright Wrote:

strtr wrote:
Portability will become more important as evolutionary algorithms get used
more, especially in combination with threshold functions. The computer
generates/optimizes all input and intermediate values itself, and
executing the program on higher-precision machines might give totally
different outputs.


You've got a bad algorithm if increasing the precision breaks
it.

No, I don't. All algorithms using threshold functions whose parameters have
been generated by evolutionary algorithms will break when the precision
changes; that is, you will need to retrain them. The point of most of
these algorithms (e.g. neural networks) is that you don't know what is
happening inside them.
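
For example (a toy sketch assuming numpy, using the classic repeated-0.1 rounding example rather than a real trained network): the same accumulation of inputs can land on different sides of a hard threshold at different precisions, so the unit fires in one build and not in the other.

import numpy as np

def unit_output(dtype):
    # Accumulate ten inputs of 0.1 at the given precision, then apply
    # a hard threshold at 1.0, as a threshold unit would.
    acc = dtype(0.0)
    for _ in range(10):
        acc = dtype(acc) + dtype(0.1)
    return float(acc), bool(acc >= dtype(1.0))

print(unit_output(np.float32))  # about 1.0000001    -> threshold crossed
print(unit_output(np.float64))  # 0.9999999999999999 -> not crossed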

I'm not an expert in GA, but I can tell that a neural network that
is dependent on precision is badly broken.
How can you tell?

Any NN's transfer function must be smooth.
http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html#Transfer%20Function
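
For reference, the standard smooth choice is the logistic sigmoid (textbook definitions, nothing specific to the page above):

  f(x)  = 1 / (1 + exp(-x))
  f'(x) = f(x) * (1 - f(x))

which has a well-defined derivative everywhere, whereas a hard threshold step(x) = (x >= theta ? 1 : 0) has derivative zero everywhere except at x = theta, where it is undefined.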

It wasn't for nothing that I mentioned threshold functions.

Especially in the more complex spiking neural networks based on
dynamical systems, thresholds are kind of important.
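
Roughly what I mean (just the textbook leaky integrate-and-fire model, assuming numpy; the constants are arbitrary, not from any real network): the hard threshold is built into the dynamics themselves.

import numpy as np

def lif(current, dt=1e-3, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    # Leaky integrate-and-fire neuron: the membrane potential decays
    # toward rest while integrating the input; a spike is emitted when
    # v crosses v_th, after which v is reset. The threshold crossing is
    # what makes the dynamics discontinuous.
    v, spikes = v_rest, []
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + i_t)   # Euler step of the ODE
        if v >= v_th:                           # threshold crossing
            spikes.append(t)
            v = v_reset
    return spikes

print(lif(np.full(200, 1.2)))  # constant drive above threshold -> periodic spiking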

Meh. You can't train using a gradient method unless the output is smooth
(infinitely differentiable).
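
Concretely: with squared error on a single unit, the weight update is proportional to the transfer function's derivative, and a hard step has zero derivative everywhere except at the threshold itself. A toy sketch (assuming numpy; the numbers are made up for illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, target, w, lr = 1.0, 1.0, -0.5, 0.1
z = w * x

# Smooth unit: the squared-error gradient is nonzero, so the weight moves.
y = sigmoid(z)
print("sigmoid update:", -lr * (y - target) * y * (1.0 - y) * x)

# Hard-threshold unit: step'(z) = 0 for every z != 0, so the same rule
# produces a zero update no matter how wrong the output is.
y = 1.0 if z >= 0.0 else 0.0
print("step update:   ", -lr * (y - target) * 0.0 * x)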

Which was exactly why I mentioned evolutionary algorithms.

So are you saying there are neural networks with thresholds that are trained using evolutionary algorithms instead of e.g. backprop? I found this:

https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdf

which does seem to support the point. I'd have to give it a closer look to see whether precision would affect training.
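
For what it's worth, the general idea would look something like the toy loop below: mutate the threshold unit's weights and keep the mutant if it scores at least as well. This is only an illustrative (1+1)-style hill climber on an AND gate (assuming numpy), not the algorithm from the linked paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: the AND gate, learned by a single hard-threshold unit
# with weights w[0], w[1] and bias w[2].
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

def fitness(w):
    out = (X @ w[:2] + w[2] >= 0.0).astype(float)  # hard threshold, no gradient
    return np.mean(out == y)                       # fraction classified correctly

w = rng.normal(size=3)
for _ in range(500):
    child = w + rng.normal(scale=0.3, size=3)      # Gaussian mutation
    if fitness(child) >= fitness(w):               # selection: keep if no worse
        w = child

print("accuracy:", fitness(w))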


Andrei
