Walter Bright Wrote:

> strtr wrote:
> > Walter Bright Wrote:
> >> You've got a bad algorithm if increasing the precision breaks it.
> > No, I don't. Any algorithm whose threshold functions were trained by an
> > evolutionary algorithm will break when the precision changes; you have
> > to retrain it. The whole point of most of these algorithms (e.g. neural
> > networks) is that you don't know what is happening inside them.
> 
> You're going to have nothing but trouble with such a program. It won't be
> portable even in Java, and it may also exhibit different behavior depending
> on compiler switch settings.
Most of the training will be done on the user's computer, which would nullify 
all of these problems.
Problems might arise only when someone wants to run their trained program on 
another computer.

> 
> It's like relying on the lens in your camera to be of poor quality. Can
> you imagine going to the camera store and saying "I don't want the newer,
> high-quality lenses, I want your old fuzzy one!"?
> 
> I suggest instead using fixed-point arithmetic with a 64-bit integer type.
Is there no way to stay within the floating-point standard?
It only needs to be portable across x86.
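
Following Walter's fixed-point suggestion, here is a minimal sketch of
what that could look like (the Q32.32 format and helper names are my own
choice, and the multiply assumes a gcc/clang-style __int128):

#include <stdint.h>
#include <stdio.h>

typedef int64_t fix64;               /* Q32.32: 32 integer bits, 32 fraction bits */
#define FIX_ONE ((fix64)1 << 32)     /* fixed-point representation of 1.0 */

static fix64 fix_from_double(double d) { return (fix64)(d * FIX_ONE); }
static double fix_to_double(fix64 f)   { return (double)f / FIX_ONE; }

/* addition and subtraction are ordinary integer ops */
static fix64 fix_add(fix64 a, fix64 b) { return a + b; }

/* multiply through a 128-bit intermediate so the scaling stays exact */
static fix64 fix_mul(fix64 a, fix64 b)
{
    return (fix64)(((__int128)a * b) >> 32);
}

int main(void)
{
    fix64 w = fix_from_double(0.75);
    fix64 x = fix_from_double(2.5);
    fix64 y = fix_add(fix_mul(w, x), fix_from_double(0.125));

    /* 0.75 * 2.5 + 0.125 = 2.0, bit-identical on every machine */
    printf("%f\n", fix_to_double(y));
    return 0;
}

Since every operation here is plain integer arithmetic, the results don't
depend on FPU precision modes or compiler switches at all, which is the
property the trained thresholds need.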

