On Thu, Nov 20, 2014 at 4:58 PM, Marco Dinarelli
<marco.dinare...@gmail.com> wrote:

> In order to keep track of and check that everything I'm doing is correct, I
> wrote a first prototype in Octave and then rewrote the same thing in C with GSL.

Try compiling without optimizations, especially without -ffast-math: that
flag lets the compiler reorder floating-point operations, which changes
rounding behavior.


> When I run the two programs, the results are the same at first, but after
> some training iterations they start to diverge, with the C implementation
> giving worse results and fluctuating cross-entropy values.

Your algorithm might not be numerically stable. It could, for example, be chaotic.

Do you use any random number generators in your training?

> My question is: Is this a precision problem, or is there a mistake
> somewhere?

Are you using double in both Octave and C? By the way, 1e-14 is rather
close to the minimum resolvable relative difference between two
double-precision numbers (machine epsilon for double is about 2.2e-16).

> So, in case this is a precision problem, does anybody know how to
> overcome it?

It really depends on your algorithm. Ideally it should be robust against
small perturbations of the initial conditions, at least if it is a neural
network training algorithm (otherwise the training would not really make
sense). Try perturbing the training set/input to your Octave implementation
and see whether it performs about the same. Then do the same with your C
implementation.


Maybe you just introduced a bug/typo/omission in your C program that
you haven't discovered yet?


Best regards
Anders A. Søndergaard
