On Tue, Jun 3, 2008 at 12:11 PM, Robert Bradshaw
<[EMAIL PROTECTED]> wrote:
>
> On Jun 3, 2008, at 11:06 AM, Gary Furnish wrote:
>
>> I think we had a discussion on irc about how homsets still got used
>> for determining the result of something in parent1 op something in
>> parent2 (maybe it was with someone else?)
>
> I sure hope not. If so, that needs to change (but I'm pretty sure it
> isn't).
>
Well, allegedly there is some function, new in this system, that
computes what parent something is in if you have parent1 op parent2
(but I can't find said function).  Allegedly said function has to use
homsets, so performance is going to be horrible (and this makes sense,
because it's what I have to use now).  I consider homsets to be a
gigantic flaw in coercion that absolutely has to be fixed for me to
consider using more of the coercion system in symbolics.
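
(To be concrete, the question such a function would have to answer is
which parent mixed arithmetic lands in, e.g. something along the lines
of

sage: parent(ZZ(1) + QQ(1)/2)
Rational Field
sage: parent(QQ(1)/2 + RR(1))
Real Field with 53 bits of precision

but computed from the parents alone, without building any elements --
which is presumably where the homsets come in.)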

>> I'm also -1 for hard-coding
>> knowledge and logic about ZZ, QQ, etc. into the coercion model.  I am
>> +1 for hardcoding it into the elements of, say, ZZ, QQ, RR and then
>> having them call the coercion model only if those hardcoded paths
>> can't figure the situation out.
>
> That sounds much better, though I'm still not a fan.
>
Sure, it's kind of ugly, but it will give us the performance we need,
and I don't see a better way to allow us to use coercions all over the
place without having performance drop to zero.
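
Roughly the shape I have in mind (a toy sketch in plain Python with
made-up classes, just to show the dispatch order; the real thing would
be Cython and would fall back to the actual coercion model rather than
returning NotImplemented):

# Toy illustration only -- not Sage code.  Hard-code the common cases
# in the element's own addition handler and fall back to the generic
# coercion machinery only when they fail.

class ToyInteger(object):
    def __init__(self, n):
        self.n = n

class ToyRational(object):
    def __init__(self, p, q=1):
        self.p, self.q = p, q

    def __add__(self, right):
        # 1. Same parent: a type check and a little arithmetic, no
        #    coercion machinery at all.
        if type(right) is ToyRational:
            return ToyRational(self.p * right.q + right.p * self.q,
                               self.q * right.q)
        # 2. A hand-coded common mixed case (rational + integer).
        if type(right) is ToyInteger:
            return ToyRational(self.p + right.n * self.q, self.q)
        # 3. Anything else: hand off to the generic coercion model
        #    (in real code, something like coercion_model.bin_op(...)).
        return NotImplemented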
>>
>> On Tue, Jun 3, 2008 at 11:48 AM, Robert Bradshaw
>> <[EMAIL PROTECTED]> wrote:
>>>
>>> On Jun 3, 2008, at 7:13 AM, Gary Furnish wrote:
>>>
>>>>>> As long as there are classes in pure Python that use MI on the
>>>>>> critical path that symbolics has to use, the argument that
>>>>>> coercion was written to be fast makes no sense to me.
>>>>>
>>>>> Not sure what you mean by "MI" here, could you explain? In any
>>>>> case, just because coercion isn't as fast as it could be doesn't
>>>>> mean that it's not written for speed and much faster than it used
>>>>> to be. Of course there's room for improvement, but right now the
>>>>> focus is on getting the new system (which isn't really that "new"
>>>>> compared to the change made a year ago) finished and in place.
>>>>>
>>>> Sets, and in particular a bunch of the category functionality
>>>> (homsets), get used in coercion, and they use MI (multiple
>>>> inheritance), which makes them impossible to cythonize.
>>>
>>> Ah, yes. Homsets. They're not used anywhere in the critical path
>>> though. (If so, that should be fixed.)
>>>
>>>>>
>>>
>>>>>
>>>>>>> 2) I personally don't like having to sprinkle "expand" and/or
>>>>>>> "simplify" all over the place. Now I don't think symbolics
>>>>>>> should be expanded automatically, but stuff like (1+sqrt(2))^2
>>>>>>> or 1/(1+i) should be. It's like just getting the question back.
>>>>>>> (I guess I'm revealing my bias that I don't think of it as
>>>>>>> living in SR, but rather in a number field...) On that note, I
>>>>>>> can't even figure out how to simplify "(sqrt(2)-3)/(sqrt(2)-1)"
>>>>>>> in the symbolics... as opposed to
>>>>>>>
>>>>>>> sage: K.<sqrt2> = NumberField(x^2-2)
>>>>>>> sage: (sqrt2-3)/(sqrt2-1)
>>>>>>> -2*sqrt2 - 1
>>>>>>> sage: 1/(sqrt2-1)
>>>>>>> sqrt2 + 1
>>>>>>>
>>>>>> You're going to have a hard time convincing me that the default
>>>>>> behavior in Mathematica and Maple is wrong.  This makes sense for
>>>>>> number theory but not for people using calculus.
>>>>>
>>>>> OK, this is a valid point, though the non-calculus portions (and
>>>>> emphasis) of Sage are (relatively) more significant. Sage is not a
>>>>> CAS; that is just one (important) piece of it.
>>>>>
>>>>> Maple does
>>>>>
>>>>>> 1/(1+I);
>>>>>                                   1/2 - 1/2 I
>>>>>
>>>> I somewhat ignored 1/(1+i) (I agree there is an obvious
>>>> simplification), but (x+1)^2 shouldn't get simplified under any
>>>> circumstances.  This has little to do with speed (for an exponent
>>>> this small) and everything to do with being consistent with the
>>>> high-degree cases and keeping expressions uncluttered.
>>>
>>> I agree that (x+1)^2 shouldn't get simplified, but for me this has a
>>> very different feel than (1+I)^2 or (1+sqrt(2))^2.
>>>
>>>>> at least. Looking to the M's for ideas is good, but they should not
>>>>> always dictate how we do things--none but Magma has the concept of
>>>>> parents/elements, and Sage uses a very OO model which differs from
>>>>> all of them. Why doesn't it make sense for Mathematica/Maple? I
>>>>> think it's because they view simplification (or even deciding to
>>>>> simplify) as expensive.
>>>>>
>>>>>>> 3) The coercion system works best when things start as high up
>>>>>>> the tree as they can, and the Symbolic Ring is like the big
>>>>>>> black hole at the bottom that sucks everything in (and makes it
>>>>>>> slow). There is a coercion from ZZ[sqrt(2)] (with embedding) to
>>>>>>> SR, but not the other way around, and even trying to cast the
>>>>>>> other way is problematic. I'd rather that
>>>>>>> matrix([1, 1/2, 0, sqrt(2)]) land in a matrix space over a
>>>>>>> number field (rather than over SR), and ZZ['x'].gen() + sqrt(2)
>>>>>>> be an actual polynomial in x. Also, SR, though very useful,
>>>>>>> somehow seems less rigorous (I'm sure that this is improving).
>>>>>>>
>>>>>> When coercion is faster we can consider changing this.
>>>>>
>>>>> Coercion speed is irrelevant to the issues I mentioned here...
>>>>> and as coercion+number fields is *currently* faster than what you
>>>>> could hope to get with SR (the examples above all involve
>>>>> coercion), it wouldn't help your case either.
>>>> Only for the sqrt case, and I'm willing to work with that (provided
>>>> that for end users, sum(sqrt(p)) behaves as expected).
>>>
>>> n-th root would have a similar speed increase, but other than those
>>> two cases I don't see one wanting algebraic extensions (short of
>>> explicitly making a number field).
>>>
>>>>>
>>>>>> My definition
>>>>>> of fast is "<10 cycles if the parents are the same,
>>>>>
>>>>> Python semantics tie our hands a bit here, but I think we're about
>>>>> as close as we can get.
>>>>>
>>>>>> no dictionary lookups if one parent is in the other for all common
>>>>>> cases,
>>>>>
>>>>> Would this mean hard-coding all common paths? Currently there is a
>>>>> single dictionary lookup for common cases (and not a Python dict).
>>>>>
>>>> Common cases should be no more than a virtual call and a few if
>>>> statements away (and not a bunch of virtual calls either; they cost
>>>> performance too -- no more than one should be necessary for the
>>>> common case, and the code to handle this can probably go in the
>>>> addition/multiplication handlers).  Then if that fails we can take
>>>> the cached dict lookup route.  Make the common case fast at the
>>>> expense of the uncommon case.
>>>
>>> I am -1 for hard-coding knowledge and logic about ZZ, QQ, RR, RDF,
>>> CC, CDF, ... into the coercion model.
>>>
>>>>>> otherwise reasonably quick pure Cython code.
>>>>>
>>>>> Yes, it should be fast, but only has to be done once and then it's
>>>>> cached. Of course the code specific to the ring/elements is as fast
>>>>> or slow as whoever wrote it.
>>>> Sets should not be in Python because of homsets!
>>>>>
>>>>>> New and old coercion both fail these
>>>>>> tests of being sufficiently quick, and I'm not going to wait for
>>>>>> them to pass before finishing symbolics.
>>>>>
>>>>> Thanks for your concise definition--if you have any input on how
>>>>> to make things faster without hard-coding tons of special cases
>>>>> I'd be very glad to hear it (though the current focus is getting
>>>>> things merged back into the main Sage branch before we focus on
>>>>> optimization again).
>>>>>
>>>> Sometimes hardcoding special cases is the only way to do things
>>>> fast.  It is more important for coercion to be fast (especially if
>>>> we are using it internally in algorithms) than for it to be pretty
>>>> (although it can still be designed in a mathematically rigorous
>>>> way, the code that actually implements it may not be pretty).
>>>>
>>>>>> My alternative option is to throw in a flag, defaulting to off
>>>>>> (the current behavior), that lets you turn on number-theory-style
>>>>>> sqrt/powers by default instead of SR.  This makes the code perform
>>>>>> as expected by end users, and advanced users can use number fields
>>>>>> if they know they are appropriate.  This is largely just a
>>>>>> one-if-statement change in the dispatch of sqrt, so it should be
>>>>>> reasonably safe.
>>>>>
>>>>> "Perform as expected" is what we disagree on, though I'd call us
>>>>> both end- and advanced users. I generally dislike the idea of
>>>>> flags the user sets that change behavior, but it's an option to
>>>>> consider.
>>>>>
>>>> The average calculus student coming from Maple is not going to
>>>> understand why they can't compute a sum of the square roots of some
>>>> primes.  If we are to be a viable alternative for non-research
>>>> mathematicians, we can't run off and implement things that
>>>> drastically change the complexity of simple operations.
>>>
>>> If we can change the complexity for the better we should. It's a
>>> tradeoff of speed for code with a small number of radicals vs. speed
>>> with a large number of radicals.
>>>
>>> - Robert
