On 23 October 2015 at 23:34, Rousselot, Richard A <
Richard.A.Rousselot at centurylink.com> wrote:

> Scott,
>
> I agree with everything you said but...  To me if a program/CPU evaluates
> something internally, then when it reports the result it should be the
> result as it sees it.  It shouldn't report something different.
>

To be pedantic, you haven't asked for the result of the internal
calculation. You've asked for that result converted to a printable string
with certain formatting.

To be fair, you've asked for plenty of digits of precision, and I think it
would be reasonable to say that the failure of sqlite's printf to provide
them is a bug. But the primary use case for printf is to provide output
in *readable* form -- these ludicrous precisions are not part of that use
case, and it seems the formatting algorithm makes concessions which
prevent those digits from being printed exactly.

e.g. the system's printf (via python 2.6.6) vs sqlite's printf:

python> "%.66f" % (9.2+7.9+0+4.0+2.6+1.3)
'25.000000000000003552713678800500929355621337890625000000000000000000'
sqlite> select printf("%.66f", (9.2+7.9+0+4.0+2.6+1.3));
25.000000000000000000000000000000000000000000000000000000000000000000

python> "%.66f" % (1.1+2.6+3.2+0.1+0.1)
'7.099999999999999644728632119949907064437866210937500000000000000000'
sqlite> select printf("%.66f", (1.1+2.6+3.2+0.1+0.1));
7.099999999999999000000000000000000000000000000000000000000000000000

I think this explains the discrepancy you're seeing. Of course this leaves
you with no straightforward way to get the actual result via the sqlite3
shell, which also applies formatting when it displays SELECT results. I
guess if you really want the exact bits you need to use the ieee754
extension.
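
(As an aside, a slightly newer Python than the 2.6.6 above can show the
exact stored value without going through printf at all -- Decimal(float)
converts exactly from 2.7 onwards, and struct gives you the raw IEEE-754
bytes. This is a 2.7 session; in 3.x the hex dump would be .hex() instead:

python> from decimal import Decimal
python> import struct
python> Decimal(9.2+7.9+0+4.0+2.6+1.3)
Decimal('25.000000000000003552713678800500929355621337890625')
python> struct.pack('>d', 9.2+7.9+0+4.0+2.6+1.3).encode('hex')
'4039000000000001'

i.e. the stored value is exactly 25 + 2^-48, one ulp above 25.)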

Aha, looking at the code I see the reason for the 16-digit cutoff - 16 is
what the counter passed to et_getdigit is initialised to, unless you use
the ! flag to %f. Interestingly, this doesn't give the full 66 digits, but
it does give more non-zero ones:

sqlite> select printf("%!.66f", (9.2+7.9+0+4.0+2.6+1.3));
25.000000000000003551846317

The comments claim that 16 is the default because that's how many
significant digits you have in a 64-bit float. So I'm less convinced now
that there's actually a bug here - nothing that printf does can change the
fact that floating point is inherently inexact, and historically it's been
printf's job to /hide/ those inaccuracies for the sake of readability,
which is what it is doing here by saying "realistically we can only store
16 significant digits - anything beyond that is floating point error".
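
(For what it's worth, that matches the C limits for a double -- python will
show them via sys.float_info, which has been there since 2.6:

python> import sys
python> sys.float_info.dig        # decimal digits guaranteed to round-trip
15
python> sys.float_info.mant_dig   # mantissa bits
53

53 mantissa bits is just under 16 decimal digits, so the 16-digit default
sits right at the edge of what a double can actually represent.)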

It may still be worth documenting the behaviour though?
-Rowan
