On Sun, Mar 11, 2012 at 10:39 AM, Joachim Durchholz <j...@durchholz.org> wrote:
> Am 11.03.2012 05:21, schrieb Aaron Meurer:
>>>>
>>>> * There is a categorical oversight in your analysis: in mathematical
>>>> terms, 'Monoid' and 'Ring' are categories, while specific algebraic
>>>> structures (e.g. the ring of integers (ZZ, +, *)) are members of these
>>>> categories.
>>>
>>>
>>> I'm aware of that.
>>> The problem is that the class names get unwieldy if you name them
>>> MonoidElement.
>>> Besides, the class for whole numbers is Integer, not IntegerElement. I'm
>>> in
>>> good company with that category error ;-)
>>
>>
>> You could use Integer to name an element and Integers to name the whole
>> class.
>
>
> That won't work. A class name of "Monoids" has all the wrong connotations.
> Besides, all programming languages that I know about say "Integer", not
> "Integers". I'm talking to programmers there, not to category theorists, so
> I'm sticking with established terminology on that point.
>
>
>>>>>  >    Would that make it more or less extensible?
>>>>>
>>>>> More extensible, until the mechanisms are in widespread use, then it
>>>>> will start becoming less extensible.
>>>>>
>>>>> My advice would be to stick with
>>>>> - single dispatch
>>>>
>>>>
>>>>
>>>> We can't. "a + b" necessarily invokes some form of double dispatch.
>>>
>>>
>>>
>>> Not necessarily.
>>> Conversion is a linear operation. (Selecting the target type to convert
>>> to
>>> is a kind of double dispatch, but it's still linear, not "bilinear".)
>>
>>
>> I think what he means is that literally a + b (as opposed to add(a, b)
>> or some other spelling) goes through Python's rules for evaluating
>> operators, which amount to double dispatch.  If a.__add__ does not know
>> how to handle b, it returns NotImplemented and b.__radd__ is called.
>> The only other options are to raise an exception or to return a
>> nonsense value, neither of which is helpful.
>
>
> Ah, okay.
> Still, "a + b" does not "necessarily" invoke a double dispatch, even if
> Python does something along these lines.
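
For concreteness, here is a minimal sketch of the protocol I'm describing
(the class names are made up; only the NotImplemented/__radd__ mechanics
matter):

    class MyInt:
        def __init__(self, n):
            self.n = n
        def __add__(self, other):
            if isinstance(other, MyInt):
                return MyInt(self.n + other.n)
            return NotImplemented          # tell Python to try the other operand

    class MyRational:
        def __init__(self, p, q):
            self.p, self.q = p, q
        def __radd__(self, other):
            if isinstance(other, MyInt):   # reached because MyInt.__add__ gave up
                return MyRational(other.n * self.q + self.p, self.q)
            return NotImplemented

    r = MyInt(1) + MyRational(1, 2)        # dispatches to MyRational.__radd__

If both sides return NotImplemented, Python raises a TypeError, which is the
"raise an exception" case mentioned above.
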
>
>
>>> There are other forms of linearization. For example, when writing code
>>> for
>>> collisions between balls, you could write double-dispatching code to
>>> handle
>>> rubber vs. stone, rubber vs. lead, stone vs. lead.
>>> Or you could introduce a mediating parameter, such as impulse (plus spin
>>> etc., I'm grossly oversimplifying here).
>>> That way, the code transforms from a double dispatch between two
>>> materials
>>> to a single dispatch on material giving an impulse, plus a single
>>> dispatch
>>> on material plus nonpolymorphic impulse to give an effect (changed
>>> movement
>>> vector, for example).
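
To check that I follow, here is a toy sketch of that mediating-parameter
idea (all the names and numbers are invented): each material implements two
single-dispatch methods, one producing an impulse and one reacting to it, so
n materials need 2*n methods instead of n*n pairwise rules.

    class Material:
        def impulse(self, speed):      # single dispatch #1: material -> impulse
            raise NotImplementedError
        def react(self, impulse):      # single dispatch #2: material + plain value
            raise NotImplementedError

    class Rubber(Material):
        def impulse(self, speed):
            return 0.9 * speed
        def react(self, impulse):
            return -impulse            # bounces back

    class Lead(Material):
        def impulse(self, speed):
            return 1.5 * speed
        def react(self, impulse):
            return 0.1 * impulse       # barely moves

    def collide(a, b, speed):
        # note: no (type(a), type(b)) table anywhere
        return a.react(b.impulse(speed)), b.react(a.impulse(speed))
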
>>
>>
>> But to be as general as possible, you have to allow dispatch.
>
>
> Sure.
> All I'm maintaining is that you can't have full modularity if you want full
> generality.
>
>
>> We want
>>
>> to allow objects that do all kinds of crazy things, which may seem to
>> be out of place in the usual sense.  To use your example, what if we
>> wanted to make a magic ball that, when another ball collides with it,
>> does nothing but make the other ball disappear?
>
>
> What if somebody else defines a material that doubles its weight on every
> collision, and a self-weight-doubler and an other's-weight-nullifier ball
> interact?
>
> IOW fully general double dispatch means you have to think about and fill out
> ALL cells in the matrix.
> Which means that the matrix cannot be extended in more than one direction
> independently.

Right, we have to accept that there will be certain combinations that
are undefined with this approach, because of conflicting rules.  That's
actually an important consideration, and one of the reasons that I
wanted to make that wiki page that lists everything.  FWIW, I think
that undefined combinations should remain just that, undefined, i.e.,
they can return anything within the constraints of either definition.
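
In table form, the problem looks something like this toy sketch (materials
and rules are made up): every new material adds a row and a column of cells
that nobody may have thought about, and those cells simply stay undefined.

    collision_rules = {
        ("rubber", "stone"): "bounce",
        ("rubber", "lead"):  "bounce slowly",
        ("stone",  "lead"):  "crack",
        # ("weight_doubler", "nullifier"): ???   <- nobody has defined this cell
    }

    def collide(a, b):
        rule = collision_rules.get((a, b)) or collision_rules.get((b, a))
        if rule is None:
            raise NotImplementedError("undefined combination: %s vs %s" % (a, b))
        return rule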

>
>
>> That may sound like an
>>
>> odd example, but it's more or less the idea at
>> http://code.google.com/p/sympy/issues/detail?q=1336, which might be
>> considered as an interesting corner case to study in this dispatch
>> system.
>
>
> I see.
>
> I think the problem here is that we have different kinds of arithmetic in
> different contexts.
> Inside O notation, we don't care about constants. For integration results, we
> don't care about additive constants. When multiplying the sides of an
> equation with a factor, all we care about is that the factor is not zero;
> with inequalities, we also need to consider the sign of the factor.

I forgot about O-notation.  That's actually very similar to the
constant idea, and, unlike the constants, it is currently special
cased very heavily in the core canonicalization routines.
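
For instance (output as I recall SymPy printing it), constant factors are
simply absorbed into the Order term:

    from sympy import O, symbols
    x = symbols('x')

    print(2*O(x))        # O(x)    -- the constant factor is absorbed
    print(O(3*x**2))     # O(x**2)
    print(x + O(x))      # O(x)    -- x itself is absorbed as x -> 0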

>
> Maybe the best way to deal with this is to have a RuleSet class: a list of
> allowed transformations.
> And we'd have different rulesets depending on context: for transforming
> isolated arithmetic expressions, for transforming equations, for
> transforming expressions inside an integral, for
> boolean/temporal/higher-order logic etc.
>
> Here, it is the Context (or RuleSet) that does the "linearization".
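
Something like this is how I'd picture that (a purely hypothetical sketch;
no RuleSet class like this exists in SymPy, and the single rule here is just
a placeholder):

    from sympy import symbols

    x, C = symbols('x C')

    class RuleSet:
        """A context is just the list of transformations it permits."""
        def __init__(self, *rules):
            self.rules = rules             # callables: expr -> expr
        def simplify(self, expr):
            for rule in self.rules:
                expr = rule(expr)
            return expr

    # inside an indefinite integral, additive constants don't matter:
    drop_additive_constant = lambda e: e.subs(C, 0)

    integral_context = RuleSet(drop_additive_constant)
    print(integral_context.simplify(x**2 + C))     # x**2

A different context (equations, inequalities, O(...)) would just get a
different list of allowed rules.
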
>
>
>>> The other advice would be to move away from coding and towards rule
>>> specification (as far as possible). Checking the existing definitions for
>>> completeness and soundness is easier if they are encoded as data
>>> structures,
>>> not as Python code.
>>> Of course, that's not always possible. I know no easy, 100%, no-tradeoffs
>>> solution to this kind of problem.
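
Roughly what I picture by "rules as data" (a toy sketch; the rule table is
invented, though SymPy's Wild/match machinery is real): the table can be
scanned for duplicates or contradictions without executing anything.

    from sympy import Wild, sin, cos, symbols

    x = symbols('x')
    a = Wild('a')

    REWRITE_RULES = [
        (sin(2*a), 2*sin(a)*cos(a)),      # pattern -> replacement
        (cos(2*a), 1 - 2*sin(a)**2),
    ]

    def rewrite(expr):
        for pattern, replacement in REWRITE_RULES:
            m = expr.match(pattern)
            if m:
                return replacement.subs(m)
        return expr

    print(rewrite(sin(2*x)))              # 2*sin(x)*cos(x)
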
>>
>>
>> This sounds good, though I can think of some arguments why you might
>> want to stick with code:
>>
>> - If you are not careful, it's quite possible to make data much less
>> easy to read and follow than code.  For example, at least with code,
>> you know exactly what path things will take (assuming you know the
>> rules of dispatch).
>
>
> Yes, that can become a problem.
> For code, we have debuggers that follow each step of application. For data,
> there's no way to set a breakpoint.
> On the plus side, data is easier to decompose, so it's easier to strip the
> irrelevant parts of some problematic behaviour.
>
>
>> - You have to be careful if you separate the code from the rules that
>> you still allow things to be extensible.  Of course, if you do it
>> right, it might be even easier to extend (like with the separate
>> canonicalizer idea).
>
>
> Making it easier to extend is usually not a problem.
>
>
>> - Existing code coverage tools make it easier to check the test
>> coverage when the rules are in code.
>
>
> Testing the rules themselves: Yes.
>
> Counter-question: Do we actually do code coverage analysis?

Yes.  See the bin/coverage_report.py script.  Thanks to the work of
people like Chris, and many of our GCI students, we actually have
pretty good coverage right now.

>
>
>> - Code can be much simpler if the rules follow some kind of pattern,
>> because you can then just code the pattern.  You might say that that's
>> also possible with data, but then you are just doing things in code
>> anyway, only in a more circuitous way.
>
>
> Patterns can be factored out in code just as in data.

No, I meant that patterns can be factored out only in code, not in
data, because you have the full power of programmability when you do
things in code, whereas data will be limited to whatever level of
power you give it (unless you essentially invent your own programming
language, which again would be circuitous and a waste of time).
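
A trivial example of what I mean: in code, one parametrized function covers
infinitely many "rules", whereas a data table can only ever enumerate
finitely many of them (toy example, not SymPy code).

    # one pattern, coded once, valid for every n:
    def diff_power(n):
        return lambda x: n * x**(n - 1)    # d/dx x**n = n*x**(n-1)

    # the same thing as data has to be spelled out row by row:
    DIFF_TABLE = {
        1: lambda x: 1,
        2: lambda x: 2*x,
        3: lambda x: 3*x**2,
        # ... and so on, forever
    }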

Aaron Meurer

>
>
>> Don't get me wrong.  Data has advantages too, such as making it much
>> easier to check for duplicate or even wrong rules.  But it isn't
>> always the best way.
>
>
> I guess a lot depends on the design of the library.
> If the data is easy to understand and verify, there are no problems.
> If the data becomes complicated and starts to need tool support for
> inspecting, verifying and debugging, things can become pretty ugly; for
> example, having more than two, or at most three, fallback levels usually
> means that people get into trouble determining which fallback level is
> going to handle some given case.
>
>