Andrei Alexandrescu Wrote:
> 
> So are you saying there are neural networks with thresholds that are 
> trained using evolutionary algorithms instead of e.g. backprop? I found 
> this:
The moment a network is even slightly recurrent, any gradient-descent 
algorithm turns into hell.
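
To make that concrete, here is a rough sketch (my own toy construction, 
nothing from the paper): for a scalar recurrence h_t = f(w * h_{t-1} + x_t), 
the gradient dh_T/dh_0 is a product of T chain-rule factors w * f'(.), 
which either vanishes or explodes as T grows:

import math

def bptt_gradient(w, xs, f=math.tanh, fprime=lambda a: 1 - math.tanh(a) ** 2):
    """dh_T/dh_0 for the scalar recurrence h_t = f(w * h_{t-1} + x_t), h_0 = 0."""
    h, grad = 0.0, 1.0
    for x in xs:
        a = w * h + x
        grad *= w * fprime(a)  # one chain-rule factor per time step
        h = f(a)
    return grad

T = 50
print(bptt_gradient(0.5, [0.1] * T))  # tanh, each factor < 1: vanishes toward 0
print(bptt_gradient(1.2, [0.1] * T, lambda a: a, lambda a: 1.0))  # linear, |w| > 1: ~1.2**50, explodes

Either way the error signal from early time steps is useless long before 
T = 50, which is part of why evolutionary methods get used there.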

> 
> https://docs.google.com/viewer?url=http://www.cs.rutgers.edu/~mlittman/courses/ml03/iCML03/papers/batchis.pdf
> 
> which does seem to support the point. I'd have to give it a closer look 
> to see whether precision would affect training.
> 
I would love to see your results :)

But even in a basic 3-layer sigmoid network the question is:
will two outputs that are exactly the same (for a certain input) stay the 
same if you change the precision?
When the calculations leading up to the two outputs are totally different 
(for instance, each fully dependent on a separate subset of the inputs, 
i.e. separate paths), changing the precision could influence them 
differently, leading to different outputs.
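
For what it's worth, here is a rough sketch of that scenario (the weights 
and inputs are made up purely for illustration): two units whose 
pre-activations are mathematically equal, but summed along different 
paths. Whether they still compare equal can change with the dtype:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for dtype in (np.float64, np.float32, np.float16):
    ones = np.ones(3, dtype)
    # Both dot products are exactly 0.6 in real arithmetic, but the
    # partial sums are accumulated in a different order on each path,
    # so the rounding error depends on the working precision.
    y1 = sigmoid(np.dot(np.array([0.1, 0.2, 0.3], dtype), ones))
    y2 = sigmoid(np.dot(np.array([0.3, 0.2, 0.1], dtype), ones))
    print(dtype.__name__, y1 == y2, float(y1 - y2))

So it is not only the precision but also the operation order that matters; 
two "identical" outputs can agree at one precision and disagree at another.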
