Hi,

On Sun, Jul 24, 2011 at 1:06 AM, Aaron Meurer <asmeu...@gmail.com> wrote:
> On Sat, Jul 23, 2011 at 5:59 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
>> Hi,
>>
>> On Sun, Jul 24, 2011 at 12:36 AM, Aaron Meurer <asmeu...@gmail.com> wrote:
>>> On Sat, Jul 23, 2011 at 5:28 PM, Matthew Brett <matthew.br...@gmail.com> wrote:
>>>> Hi,
>>>>
>>>> On Sun, Jul 24, 2011 at 12:23 AM, Aaron Meurer <asmeu...@gmail.com> wrote:
>>>>> Note that Float(1) is not the same as Float(1.0). Fredrik or someone
>>>>> else would have to explain the details, but I think the reasoning
>>>>> behind Float(int) => Integer is something related to precision.
>>>>
>>>> Right, sorry, I should have added that:
>>>>
>>>> sympy.Float(1.0) == sympy.numbers.One()
>>>
>>> ==, yes, but is, no.
>>
>> Surely == is the relevant operator?
>>
>>> In [2]: Float(1.0) is S.One
>>> Out[2]: False
>>>
>>> In [3]: Float(1.0) == S.One
>>> Out[3]: True
>>>
>>> == works because of some type casting. You also get, for example:
>>>
>>> In [4]: 0.5 == Rational(1, 2)
>>> Out[4]: True
>>>
>>>>
>>>>> Also, as the commit message notes, there is the following
>>>>> inconsistency in 0.6.7:
>>>>>
>>>>> In [1]: -1.0*x
>>>>> Out[1]: -1.0⋅x
>>>>>
>>>>> In [2]: 1.0*x
>>>>> Out[2]: x
>>>>
>>>> To me, that inconsistency is a benefit. Is there some disbenefit?
>>>> I'm asking honestly. For me it is just a question of reduced
>>>> readability in doctests and examples.
>>>>
>>>> Cheers,
>>>>
>>>> Matthew
>>>
>>> Yes, I think there is a disbenefit, because you lose the precision
>>> information in 1.0 when you convert it to S.One.
>>>
>>> This will happen when you use Floats. They are assumed to be close to
>>> (up to their precision), but not necessarily equal to, the numbers they
>>> represent. So with the default precision of 15 or something like that,
>>> 1.0 is really 1 ± 1e-15. If you really want exact numbers
>>> (i.e., rationals), you should use them.
>>> Otherwise, SymPy assumes that Floats are not exact, and treats
>>> them as such.
>>
>> I believe that the integers up to around 2^52 are exactly
>> representable in 64-bit doubles:
>>
>> http://stackoverflow.com/questions/440204/does-floor-return-something-thats-exactly-representable
>>
>> so 1.0 will always be exactly representable in float. Indeed, it
>> seems to me confusing to imply that I don't have exactly 1.0 by
>> retaining it.
>>
>>> Others, if any of this is not true, please correct me.
>>>
>>> By the way, if you want to convert from floats to rationals, you can
>>> use nsimplify:
>>>
>>> In [14]: nsimplify(1.0*x, rational=True)
>>> Out[14]: x
>>
>> Right - but it seems ugly and unfortunate to add that to the doctests
>> and examples, especially where we have matrices and have to iterate
>> over all the values looking for these guys.
>>
>> Well - sorry - I'll consider my peanut thrown and missed :)
>>
>> Cheers,
>>
>> Matthew
>
> Well, maybe others could chip in here.
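For what it's worth, both points above can be checked with the standard library alone, no SymPy needed. Python's `fractions.Fraction` shows the same cross-type coercion behind `==` that Aaron describes, and plain floats show the exact-integer claim (the exact cutoff is 2**53 for a 64-bit double):

```python
from fractions import Fraction

# Cross-type equality: Python coerces the operands before comparing,
# much like SymPy's ==. This works because 0.5 is exactly
# representable in binary floating point.
print(Fraction(1, 2) == 0.5)          # True

# Integers are exactly representable in a 64-bit double up to 2**53.
print(float(2**53) == 2**53)          # True
print(float(2**53 + 1) == 2**53 + 1)  # False: rounds back to 2**53

# So the double 1.0 is exactly the integer 1; reading it as
# "1 +/- 1e-15" is a modelling choice, not an IEEE-754 fact.
print(Fraction(1.0) == 1)             # True
```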
While I am at it, surely this is surprising?:

In [44]: simplify(x * 1.0)
Out[44]: 1.0*x

and the identity under addition doesn't have the same feature:

In [42]: x + 0.0
Out[42]: x

What do Mathematica etc. do in this case?

See you,

Matthew

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sympy@googlegroups.com.
To unsubscribe from this group, send email to sympy+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.