On Sunday, May 27, 2012 1:39:40 PM UTC+2, Snark wrote:
>
> In fact it doesn't matter what maxima() returns and how python converts 
> it : I could check that within maxima itself, there is a difference 
> between what I get on ARM (1.69...9e+17) and on x86_64 (1.7e+17). 
>

So what? printf() will also produce different output on different 
platforms, yet this is not printf()'s fault, because it is not an accurate 
routine and never has been. Your reasoning is, at the very least, bogus.

> So the problem is below maxima (large) -- and above ecl (strict), which 
> doesn't leave that much room in the software stack. 
>

I am not denying that ECL might be to blame for something, but you have not 
stated the problem properly, nor have you provided any conclusive evidence 
that what the test is doing is correct. Let me lay out the whole chain again, 
and do not shortcut me by saying "I see this or that and thus it is ECL's 
fault".

Your test checks that data can be converted from python to Maxima and 
vice versa. This test is built on the following assumptions:

1) The python code can convert from "double float" to Maxima's 
representation accurately.
2) Maxima can produce an accurate representation of that number.
3) The python code can recover a "double float" from Maxima's 
representation.

Now, what I was saying above is that if you use a text representation for 
steps 1, 2 and 3, then this chain is stupidly broken. It does not matter that 
you got the result you wanted on some platform; if you take that as proof 
that the test is correct, the test is doomed.
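
To make the fragility concrete, here is a minimal Python sketch (not Sage's 
actual maxima() interface code, just an illustration of a text round trip): 
a double survives the trip only if every hop prints enough digits to 
reconstruct it exactly.

    # Minimal sketch of a text-based round trip, with plain Python floats
    # standing in for the python <-> Maxima conversion discussed above.
    x = 0.1 + 0.2                   # a double with no short exact decimal form

    exact_text = repr(x)            # Python guarantees float(repr(x)) == x
    short_text = "%.15g" % x        # only 15 significant digits

    assert float(exact_text) == x   # lossless: this hop preserves the value
    assert float(short_text) != x   # lossy: a truncated hop breaks the chain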

1) If python uses the C library to convert from double float to text, that 
conversion has proven to be broken on many platforms.
2) If Maxima prints a floating point number to the prompt, it need not use 
the most accurate representation. There are many Common Lisp functions that 
print floating point numbers and you have not told me which one is used. 
Only some of them are accurate, and many will output different values on 
different platforms (a small Python illustration of this follows below).
3) If you care to read the bug report I got, you will see that it is _not_ 
reproducible 
(http://sourceforge.net/tracker/?func=detail&atid=398053&aid=3495303&group_id=30035)
 
As I show there, other implementations on 64- and 32-bit platforms produce 
the same output for the Lisp form shown in the report, (format t "~21v" 
1.7e+17).
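
As an aside, the point in 2) is easy to reproduce outside Lisp. The 
following Python snippet (purely illustrative, not the code Maxima or ECL 
actually use) renders the same double through three different routines and 
gets three different texts:

    # Illustration for point 2: the same double, printed by different
    # routines, yields different text; only a round-tripping form is accurate.
    x = 1.0 / 3.0
    print(repr(x))        # shortest round-tripping form: 0.3333333333333333
    print("%.17g" % x)    # 17 significant digits:        0.33333333333333331
    print("%.6g" % x)     # C's default %g precision:     0.333333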

So you have not provided me with convincing evidence that anything in 
particular is broken. Anything can be wrong, beginning with your 
assumptions about how "1.7e+17" is converted by maxima() to floating point 
format.
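
If you want to rule that assumption in or out, a simple check (again just a 
sketch, independent of Maxima's own reader) is to compare the bit patterns 
of the parsed doubles on both platforms instead of their printed text:

    # Sketch of a platform-independent check: if the hex of the IEEE-754 bits
    # is identical on ARM and x86_64, the parsed doubles agree and only the
    # printing can differ; if not, the text-to-double conversion itself differs.
    import struct

    x = float("1.7e+17")
    print(struct.pack(">d", x).hex())   # run on both platforms and compare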

Juanjo
