On Wednesday, July 30, 2014 10:23:03 PM UTC-7, William wrote:
>
> On Wed, Jul 30, 2014 at 9:11 PM, rjf <fat...@gmail.com> wrote: 
> wrote: 
> > 
> > 
> > On Wednesday, July 23, 2014 8:22:39 PM UTC-7, Robert Bradshaw wrote: 
> >> 
> >> On Wed, Jul 23, 2014 at 5:47 PM, rjf <fat...@gmail.com> wrote: 
> >> > 
> >> > 
> >> > On Saturday, July 19, 2014 8:22:39 AM UTC-7, Nils Bruin wrote: 
> >> >> 
> >> >> On Saturday, July 19, 2014 5:43:57 AM UTC-7, defeo wrote: 
> >> >>> 
> >> >>> However, Julia multimethods are backed up by a powerful coercion 
> >> >>> system, so I do not understand the "step back" criticism. 
> >> >>> 
> >> That comment wasn't made with respect to Julia, because that would be 
> >> comparing the coercion facilities of a CAS to those of a programming 
> >> language. Coercion in a CAS tends to be a *lot* more complicated than 
> >> what programming languages are designed for. As an example: 
> >> >> 
> >> Consider A+B where A is a polynomial in ZZ[x,y] and B is a power series 
> >> in F_q[[x]] (finite field with q elements). 
> >> >> 
> >> >> Do you expect your CAS to make sense of that addition? Sage does. 
> >> > 
> >> > A CAS that has representations for those two objects will very likely 
> >> > make sense of that addition, so Sage is hardly unique. 
> >> 
> >> Show me one. 
> > 
> > 
> > Certainly Macsyma has been able to add Taylor series and polynomials for, 
> > oh, 40 years. 
> > If it had Taylor series over F_q, it would have to make sense of that 
> > addition as well. 
>
> How do you define an element of F_q in Maxima?  I couldn't find 
> anything after a cursory look at [1].  I'm trying to figure out if 
> Maxima counts as "a CAS that has representations for those two 
> objects". 
>

I think you have to look at the documentation for tellrat, and maybe at
the flag algebraic:true.

There is a fairly serious question as to whether one should distinguish between
3 mod 13 and the integer 3, or whether one should make a distinction between
addition mod 13 and addition in Z.

Maxima tends toward not marking the objects, and puts the burden on the
operation. Axiom, and apparently Sage, require that you mark the objects and
define the operations and coercions, and then let the type system do its job,
or not.
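
To make the contrast concrete, here is a small plain-Python sketch of the two
designs; add_mod and ModInt are made-up names for illustration, not Maxima or
Sage internals:

    # Design 1: burden on the operation. The objects are plain integers;
    # only the operation knows about the modulus.
    def add_mod(a, b, m):
        return (a + b) % m

    # Design 2: burden on the object. Each element carries its modulus,
    # and '+' consults it (a crude stand-in for a coercion system).
    class ModInt:
        def __init__(self, value, modulus):
            self.modulus = modulus
            self.value = value % modulus
        def __add__(self, other):
            if isinstance(other, int):      # pull a bare integer into this ring
                other = ModInt(other, self.modulus)
            if self.modulus != other.modulus:
                raise TypeError("no common ring")
            return ModInt(self.value + other.value, self.modulus)
        def __repr__(self):
            return "%d mod %d" % (self.value, self.modulus)

    print(add_mod(5, 10, 13))     # prints 2; the operation did the reduction
    print(ModInt(5, 13) + 10)     # prints 2 mod 13; the object's ring decided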


> [1] http://maxima.sourceforge.net/docs/manual/en/maxima_29.html 
>
> > Perhaps Axiom and Fricas have such odd Taylor series objects; they don't 
> > seem to be something that comes up much, and I would expect they have 
> > some uncomfortable properties. 
>
> They do come up frequently in algebraic geometry, number theory, 
> representation theory, combinatorics, coding theory and many other 
> areas of pure and applied mathematics. 
>

You can say whatever you want about some made-up computational problems in
"pure mathematics", but I think you are just bluffing regarding applications.

>
> > Note that even adding  5 mod 13   and the integer 10  is potentially 
> > uncomfortable, 
>
> sage: a = Mod(5,13); a 
> 5 
> sage: parent(a) 
> Ring of integers modulo 13 
> sage: type(a) 
> <type 'sage.rings.finite_rings.integer_mod.IntegerMod_int'> 
> sage: a + 10 
> 2 
> sage: parent(a+10) 
> Ring of integers modulo 13 
>
> That was really comfortable.  


Unfortunately, it
(a) looks uncomfortable, and
(b) got the answer wrong.

The sum of 5 mod 13 and 10 is 15, not 2.

The calculation ((5 mod 13) + 10) mod 13 is 2.

Note that "mod" is used twice in the line above and means two different things:
the first is a tag on the number 5, the second is an operation akin to taking a
remainder.
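
In Sage's own notation the two meanings look like this (a short session;
Mod() tags the object, while % is the plain remainder operation):

    sage: a = Mod(5, 13)        # "mod" as a tag: a is an element of Z/13Z
    sage: a.parent()
    Ring of integers modulo 13
    sage: 5 + 10                # the untagged integer sum
    15
    sage: (5 + 10) % 13         # "mod" as an operation: plain remainder
    2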

 

>   It's basically the same in Magma and 
> GAP.   In PARI it's equally comfortable. 
>
> ? Mod(5,13)+10 
> %2 = Mod(2, 13) 
>
>
> > and the rather common operation of Hensel lifting requires doing arithmetic 
> > in a combination of fields (or rings) of different related sizes. 
>

If you knew about Hensel lifting, I would think that you would comment here,
and that you would also know why 15, rather than 2, might be the useful answer.

Coercing the integer 10 into something like 10 mod 13, or perhaps -3 mod 13,
is a choice, and probably the wrong one. Presumably one could promote your
variable a to have a value in Z and do the addition again, whether comfortably
or not.
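
For what it's worth, Sage will do that promotion if you ask for it explicitly;
a short session sketch (lift() pulls a residue back to a plain integer):

    sage: a = Mod(5, 13)
    sage: a.lift() + 10         # promote a to Z first, then add
    15
    sage: a + 10                # the default coercion pulls 10 into Z/13Z instead
    2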
 

> > 
> >> 
> >> 
> >> >> It returns an answer in F_q[[x]][y] (i.e., a polynomial in y over power 
> >> >> series in x over F_q). You can argue whether it's desirable for a 
> >> >> system to try to be that smart, but all computer algebra systems I know 
> >> >> are a "step back" relative to this. Programming languages do not tend 
> >> >> to have type models that would even allow you to try and make sense of 
> >> >> this kind of question. 
> >> > 
> >> > 
> >> > A harder question is when the coercion is not so obvious. As a simple 
> >> > example, is the polynomial x^0 coerced to the integer 1? 
> >> 
> >> The *polynomial* x^0, yes. 
> > 
> > 
> > That means that 1 is not a polynomial. Adds to some checking, if you 
> > assume that a polynomial has a main variable, but 1 does not. 
> > As does the representation of zero as a polynomial with no terms, or a 
> > polynomial with one term, say 0*x^0. 
>
> There are many 1's. 
>

That's a choice; it may make good sense if you are dealing with a raft of
algebraic structures with various identities. I'm not convinced it is a good
choice if you are dealing with the bulk of applied analysis and scientific
computing applications amenable to symbolic manipulation, where one is one
and all alone and ever more shall be so.
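
Here is what the "many 1's" choice looks like in a short Sage session (the
polynomial 1 and the integer 1 compare equal through coercion, but live in
different parents):

    sage: R.<x> = ZZ[]              # polynomials in x over Z
    sage: one = R(1)                # the polynomial 1, i.e. 1*x^0
    sage: one.parent()
    Univariate Polynomial Ring in x over Integer Ring
    sage: one == 1                  # equal to the integer 1 via coercion
    True
    sage: one.parent() is ZZ        # but it is not an integer
    False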
 

>
> >> > How do you add two bigfloats of different precision? 
> >> 
> >> Return the result in lower precision. 
> > 
> > 
> > I would say that is wrong.  Macsyma converts both to the globally set 
> > precision. 
>
> You can't be serious -- that's a total recipe for disaster in a 
> complicated system.  Unbelievable. 
>

I am serious, and this method has been used since, oh, 1974 I would guess.
Of course you can easily rebind that precision locally, e.g.

    do_careful_stuff(a, b) := block([fpprec: 2*fpprec],
        /* .... compute with a, b at doubled precision .... */ );

I don't know why you think it is a recipe for disaster. Perhaps you can
explain?
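
For comparison, Sage (as described upthread) hangs the precision on the parent
of each float rather than on a global flag; a short session sketch, assuming
the behavior William describes:

    sage: R100 = RealField(100)         # 100-bit binary floats
    sage: R53  = RealField(53)          # the default 53-bit floats
    sage: x = R100(1)/3
    sage: y = R53(1)/3
    sage: (x + y).parent()              # the mixed sum lands in the lower precision
    Real Field with 53 bits of precision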



> > There is no reason to believe that a "low precision" representation is 
> > inaccurate, and that one can raise the precision by adding (binary) zeros 
> > on the right. 
> > 
> > Or not. 
> > 
> >> 
> >> 
> >> > Or add a float to an exact rational? 
> >> 
> >> Coerce to a rational (same rule as above). 
> > 
> > 
> > That's not the rule used in most programming languages, where the result is 
> > returned as a float.  Like FORTRAN did, I guess.  Any float can be converted 
> > to an exact rational, but do you want to do that?  Sometimes.  That's 
> > why it is sticky. 
>
> It's not what Sage does.  Robert wrote the wrong thing (though his 
> "see above" means he just got his words confused). 
> In Sage, one converts the rational to the parent ring of the float, 
> since that has lower precision. 
>

It seems like your understanding of "precision" is more than a little fuzzy.
You would not be alone in that, but writing programs for other people to use
sometimes means you need to know enough more than they do to keep from
offering them holes to fall into.

Also, as you know, (machine) floats don't form a ring.

Also, as you know, some rationals cannot be converted to machine floats.

(Or do you mean software floats with an unbounded exponent? Those are roughly
the same as rationals with power-of-two denominators/multipliers.)

What is the precision of the parent ring of a float? Rings with precision?
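
For the record, this is apparently what that phrase means in Sage (a short
session, assuming the default 53-bit reals):

    sage: x = 1.3
    sage: x.parent()                    # every float belongs to a "real field" parent
    Real Field with 53 bits of precision
    sage: x.parent().precision()        # and the parent carries the precision
    53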


> sage: 1.3 + 2/3 
> 1.96666666666667 
>
 
Arguably, this is wrong.

1.3, if we presume it is a machine double-float in IEEE form, is about

1.300000000000000044408920985006261616945266723633...

or EXACTLY

5854679515581645 / 4503599627370496

(note that the denominator is 2^52).

Adding 2/3 to that gives

26571237801485927 / 13510798882111488

EXACTLY, which can be written

1.9666666666666667110755876516729282836119333903...

So you can either do "mathematical addition" by default, or do
"addition yada yada".



> Systematically, throughout the system we coerce from higher precision 
> (more info) naturally to lower precision.  Another example is 
>
>      Z   ---> Z/13Z 
>
> If you add an exact integer ("high precision") to a number mod 13 
> ("less information"), you get a number mod 13 (as above).
>

This is a choice, but it is hardly defensible.
Here is an extremely accurate number: 0.5. Even though it can be represented
in low precision, if I tell you it is accurate to 100 decimal digits, that is
independent of its representation.

If I write only 0.5, does that mean that 0.5 + 0.04 = 0.5? By your rule of
precision, the answer can only have 2 digits, the precision of 0.5, and so,
correctly rounded, the answer is 0.5. Tada. [Seriously, some people do
something very close to this: Wolfram's Mathematica, using software floats.
It is one of the easy ways of mocking it.]


I think that numbers mod 13 are perfectly precise, and carry full information.

Now, if you were representing integers modulo some huge modulus as nearby
floating-point numbers, I guess you would lose some information.

There is an excellent survey, "What Every Computer Scientist Should Know
About Floating-Point Arithmetic" by David Goldberg, easily found via Google.

I recommend it, often.




> > 
> >> 
> >> 
> >> > Do you allow 1/(1+i) or do you coerce by rationalizing the denominator? 
> >> 
> >> That depends on what you're going to do with it. Viewed as elements of 
> >> C, or Q[i], yes, rationalize the denominator. 
>

In other words, your coercion fails you, and you have to (indirectly) specify
the operation by reconsidering
(a) the type of the numerator,
(b) the type of the denominator, and
(c) the type of the target.

Is this kind of multi-inheritance supported by Python? Sage?
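
In Sage, at least, the disambiguation appears to be done by picking the target
parent explicitly; a short session sketch (QuadraticField(-1) is one way to
construct Q(i), and CC is the 53-bit complex floats):

    sage: K.<i> = QuadraticField(-1)    # Q(i) as a number field
    sage: 1/(1 + i)                     # rationalized denominator
    -1/2*i + 1/2
    sage: 1/(1 + CC(I))                 # the same quantity viewed in C
    0.500000000000000 - 0.500000000000000*I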

> 
> > 
> > So now you are going to require users to have a background in, what, 
> > modern algebra? 
>
> Here's what happens: 
>
> sage: 1/(1+i) 
> -1/2*I + 1/2 
>
> If a user does have a background in "modern" algebra (which was 
> developed in the 1800's by some French kid?), then they have more 
> options available.   At least way, way more options than they will 
> ever have in your CAS (Macsyma). 
>

I think that this approach was explored (and convincingly demonstrated to be
unworkable) in Axiom. It was also explored in this work:
http://www.eecs.berkeley.edu/Pubs/TechRpts/1983/CSD-83-160.pdf

When I said "what, modern algebra", I was referring to the term used by
Birkhoff and MacLane in A Survey of Modern Algebra.
If you have a preferred name for this subject, please tell me.

I think you may find that parts of Maxima explicitly refer to it, or to the
earlier and in some ways more constructive book by van der Waerden.
Certainly the Berkeley project described in the report above delves into the
consequences of building software this way.

I think there is a choice to be made between making everything possible but
nothing easy, and making a few common operations relatively easy and
automatic (but the less common ones more difficult).


> Just because Sage is capable of doing X, and X is only something that 
> a user with a background in quasi-functional triads can understand, 
> does not mean that every user of Sage is assumed to have a background 
> in quasi-functional triads. 
>

No, but if you require that someone be facile with the construction of rings,
fields, etc., in order to do polynomial arithmetic, you have lost a lot of
users, and you also have a potentially exponential growth in coercions.

RJF

