On 16 Jun 2012, at 6:32pm, Etienne <ejlist-sql...@yahoo.fr> wrote:

> Once again, I really do not care of the accuracy.
> 
> I KNOW 0.1 CAN NOT BE STORED EXACTLY IN A REAL VARIABLE.

I am unsurprised to find that your decimal strings differ after the 11th 
decimal place: your REAL values carry only about 15 significant decimal 
digits, and the calculations done on numbers stored as REAL are not accurate 
to a precision of 10^-15.  Your number

0.90100000000029468

involves a calculation involving the number

999.90100000000029468

which has 20 decimal digits.  Calculations performed on double-precision REAL 
numbers are accurate only to about 15 significant decimal digits.  Anything 
after that is just noise and can be ignored.  It doesn't matter what gets 
displayed, because the programmer should never be showing the number to that 
much precision.
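
If you want to see this for yourself, here's a minimal C sketch (the exact 
noise digits will depend on your platform's maths library):

    #include <stdio.h>

    int main(void) {
        /* Neither 0.901 nor the sum is exactly representable in
           binary, so the stored value differs from 999.901 by a tiny
           rounding error.  Asking for 20 fractional digits exposes
           digits beyond about the 15th significant one: noise. */
        double x = 999.0 + 0.901;
        printf("%.20f\n", x);          /* something like 999.9010000000000... */
        printf("%.20f\n", x - 999.0);  /* the subtraction is exact, but it
                                          exposes the earlier rounding error */
        return 0;
    }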

You posted

>>> Please note that the "realvalue" variable has identical values at the first 
>>> loop pass.


You have no way of knowing that.  You are seeing the values converted into 
text strings in an attempt to show them in decimal.  For all you know, the 
values are different, but they're being shown as the same text string because 
the difference is lost beyond the last printed digit.

There's no simple way to find out where the values start to differ unless you 
dump the piece of memory the values are stored in.  Do the calculation 
realvalue-999 in your code, store the result in a variable, and dump the piece 
of memory that value is stored in (preferably as binary, but hexadecimal is 
acceptable).  Then look at the results from your two environments and see 
whether they're the same.
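
Here's a minimal C sketch of such a dump ("realvalue" below is just a 
placeholder for whatever your program computes); run it in both environments 
and compare the hex:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Print a double's raw 64-bit pattern in hexadecimal. */
    static void dump_double(const char *label, double d) {
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);  /* copy the bit pattern out safely */
        printf("%s: %.17g = 0x%016llx\n", label, d, (unsigned long long)bits);
    }

    int main(void) {
        double realvalue = 999.901;      /* placeholder for your loop value */
        dump_double("realvalue - 999", realvalue - 999);
        return 0;
    }

If the hex patterns match, the values are identical and only the 
binary-to-decimal conversion differs; if they don't, the calculation itself 
differs.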

Your different maths libraries may do any of the following:

1) turn your string '0.1' into different bit patterns when storing it as a 
binary value;
2) do the same calculations but get different results because they round the 
low-order bits differently (a sketch of this follows below);
3) turn the same binary value into different text strings when you ask to see 
the results in decimal.

This can happen
in different programming languages,
or in the same program compiled by two different compilers,
or in the same program compiled by the same compiler for two different 
platforms,
or in the same object code running on two platforms identical except for 
their CPUs,
or on identical platforms configured to round differently.
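
Point 2 is easy to demonstrate.  C99's <fenv.h> lets you change the rounding 
mode at runtime and watch the last digits move.  A sketch, assuming your 
platform supports the non-default modes (strictly the standard also wants 
#pragma STDC FENV_ACCESS ON, which not every compiler honours):

    #include <stdio.h>
    #include <fenv.h>

    int main(void) {
        /* volatile stops the compiler folding the division at compile
           time, so the runtime rounding mode actually applies. */
        volatile double a = 1.0, b = 3.0;

        fesetround(FE_TONEAREST);
        printf("round to nearest: %.17g\n", a / b);

    #ifdef FE_UPWARD
        fesetround(FE_UPWARD);        /* not provided on every platform */
        printf("round upward:     %.17g\n", a / b);
    #endif
        return 0;
    }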

From what I see, one of your programs is a C program and the other runs under 
JSDB, whatever that is.  But you've now introduced another complication 
because you're using the binary-to-decimal routines in gdb to show the results 
as decimal, so instead of two environments you now have three.  Even if 
they're both using the same IEEE 754 algorithms, you have no idea what 
rounding mode each library uses.

For at least 50 years we've known that if you see a computer spit out a decimal 
number like

0.1 [lots of zeros here] 82734766

anything after the long line of zeros is rubbish.  It's perfectly predictable, 
if you understand how your CPU does maths, but it's not important.  We know 
that a value like 0.1 cannot be stored accurately in a REAL variable, so that 
sort of noise will appear every time.
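
You can reproduce exactly that pattern in one line of C (with a library that 
converts correctly, the noise digits for 0.1 are always the same on IEEE 754 
hardware):

    #include <stdio.h>

    int main(void) {
        /* 0.1 has no exact binary representation; the nearest double
           is a shade above it, and the error surfaces as soon as you
           ask for more digits than the format honestly carries. */
        printf("%.25f\n", 0.1);   /* 0.1000000000000000055511151 */
        return 0;
    }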

Simon.