On 04/24/2010 07:21 PM, strtr wrote:
> Andrei Alexandrescu Wrote:

>> So are you saying there are neural networks with thresholds that
>> are trained using evolutionary algorithms instead of e.g. backprop?
>> I found this:
> The moment a network is just a bit recurrent, any gradient descent
> algo will be hell.


>> https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdf



>> which does seem to support the point. I'd have to give it a closer look
>> to see whether precision would affect training.

> I would love to see your results :)

> But even in the basic 3-layer sigmoid network the question is: will
> two outputs which are exactly the same (for a certain input) stay the
> same if you change the precision?

You shouldn't care.

> When the calculations leading up to
> the two outputs are totally different (for instance, fully dependent
> on separated subsets of the input; separated paths), changing the
> precision could influence them differently, leading to different
> outputs?

I'm not sure about that. Fundamentally all learning relies on some smoothness assumption - at a minimum, continuity of the transfer function (small variation in input leads to small variation in output). I'm sure certain oddities could be derived from systems that impose discontinuities, but by and large I think those aren't all that interesting.
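To put a number on that continuity, here is a toy Python/numpy sketch (the one-hidden-layer sigmoid net and its random weights are made up purely for illustration, not taken from any real experiment): rounding every input and weight to single precision nudges the output by roughly the rounding error and nothing more.

import numpy as np

rng = np.random.default_rng(0)

# Made-up one-hidden-layer sigmoid net, purely for illustration.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    hidden = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ hidden + b2)

x = rng.normal(size=10)
W1, b1 = rng.normal(size=(20, 10)), rng.normal(size=20)
W2, b2 = rng.normal(size=(1, 20)), rng.normal(size=1)

y_double = forward(x, W1, b1, W2, b2)
y_single = forward(*(a.astype(np.float32) for a in (x, W1, b1, W2, b2)))

# Continuity keeps the perturbation at the level of single-precision
# rounding error; the two outputs differ, but only by a tiny amount.
print(abs(y_double - y_single))

Whether the two outputs happen to be bit-for-bit identical before or after the precision change is exactly the part not to care about.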

The case you mention above involves an NN making a different final discrete classification decision because numeric vagaries led to some threshold being met or not. I have certainly seen that happen - even changing the computation method (e.g. unrolling loops) will lead to different individual results. But that doesn't matter; statistically the neural net will behave the same.
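To illustrate how little it takes (again a made-up Python sketch, not measurements from a real network): merely summing the same pre-activation terms in a different order, which is roughly what unrolling a loop does to the arithmetic, usually changes the last few bits of the result, and a hard threshold can only flip when the exact value happens to sit inside that tiny band.

import numpy as np

rng = np.random.default_rng(1)

# Made-up pre-activation terms of one thresholded output unit.
terms = rng.normal(size=1000)

# The same mathematical sum evaluated in two orders - a stand-in for
# straight vs. unrolled accumulation loops.
s_forward = float(np.sum(terms))
s_reverse = float(np.sum(terms[::-1]))

print(s_forward == s_reverse)      # usually False
print(abs(s_forward - s_reverse))  # typically ~1e-14, occasionally 0

# A decision such as (s > threshold) can only flip when the exact sum
# falls inside that ~1e-14 sliver around the threshold, so aggregate
# classification behavior is statistically unchanged.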


Andrei
