Thanks for the write-up. It does confirm what I had been thinking: this model basically amounts to rewriting sympy in a Lispish rather than Pythonic style - consider, for instance, (ADD, (x, y, 5)) vs Add(x, y, 5).

Well, first of all, this "lisp-style" is already pretty much there in the core, disguised as the .func and .args attributes. I think there is even a bug report saying that this information should be used in preference to class information, which is an implementation detail.
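
For concreteness, here is a minimal illustration using nothing beyond the standard Basic interface: the "S-expression" (ADD, (x, y, 5)) is essentially Add(x, y, 5) seen through .func and .args.

    from sympy import Add, symbols

    x, y = symbols('x y')
    expr = Add(x, y, 5)

    print(expr.func)   # the "head" of the expression: the Add class
    print(expr.args)   # the argument tuple (in whatever order sympy stores it)

    # The usual invariant: rebuilding from .func and .args reproduces the object.
    assert expr.func(*expr.args) == expr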

Apart from that I agree, and I would certainly like another way too; I just didn't see how to come up with one! The underlying issue seems to me to be that there are various operations (addition etc.) that we want to look uniform over certain (essentially all) classes of objects, but that have very different implementations. So we *need* a uniform representation, and S-exprs just seem like a very simple solution.
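
To make the S-expr idea a bit more concrete, here is a rough sketch; ADD, SYM, NUM and s_expr are names I am making up for illustration, not existing code:

    # Every object, whatever it "really" is, gets viewed as (head, args).
    ADD, SYM, NUM = 'ADD', 'SYM', 'NUM'

    def s_expr(head, *args):
        return (head, tuple(args))

    e = s_expr(ADD, s_expr(SYM, 'x'), s_expr(SYM, 'y'), s_expr(NUM, 5))

    def walk(expr):
        # A generic traversal works uniformly, because every node has the same shape.
        head, args = expr
        yield expr
        for a in args:
            if isinstance(a, tuple):
                yield from walk(a)

    for node in walk(e):
        print(node)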

Besides that, I don't see anything that couldn't be done with the
current design

Yes. I think the more important question is how painful it is going to be :-).

replacing "the object's algebra" with "the object's
class" and with the equivalences Verbatim == Basic, Calculus == Expr,
Algebra == BasicMeta, CachingAlgebra == AssumeMeths, etc. but I'm
probably overlooking something.


I'm not sure I understand.

The class of an object in theory determines how it is combined with others, but in practice that is decided in the Mul, Add, etc. classes *for all objects*.
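
Roughly what I mean, as a deliberately simplified sketch (this is not the real core code, just the shape of it):

    class Add:
        def __init__(self, *args):
            # One flattening routine handles *all* kinds of operands;
            # the operands' own classes never get a say here.
            self.args = self.flatten(args)

        @staticmethod
        def flatten(args):
            flat = []
            for a in args:
                if isinstance(a, Add):   # merge nested sums, whatever the terms are
                    flat.extend(a.args)
                else:
                    flat.append(a)
            return tuple(flat)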

Verbatim == Basic does not seem to work for me because everything derives from Basic, but not everything is a verbatim. The whole point is that objects from different algebras can represent themselves as they wish; they do *not* have to follow the func-args convention.
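
For example (purely hypothetical names), an algebra could store its elements as plain coefficient lists and never expose anything func/args-like:

    class DensePoly:
        """A polynomial kept as a list of coefficients, lowest degree first."""
        def __init__(self, coeffs):
            self.coeffs = list(coeffs)

        def add(self, other):
            n = max(len(self.coeffs), len(other.coeffs))
            a = self.coeffs + [0] * (n - len(self.coeffs))
            b = other.coeffs + [0] * (n - len(other.coeffs))
            return DensePoly(c + d for c, d in zip(a, b))

    # (1 + 2*t) + (3*t + t**2)  ->  1 + 5*t + t**2
    print(DensePoly([1, 2]).add(DensePoly([0, 3, 1])).coeffs)   # [1, 5, 1]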

Also, I get the impression that this doesn't solve any problem with the
assumptions, while assumptions and caching are an additional problem to
solve for this model.


Well, as far as I understand them, the problems with caching and assumptions in the current model are:

1) Assumptions are tied in at the lowest level of the core (which apparently is a problem; I am just accepting that as a fact of life for now).
2) If we remove assumptions from objects entirely, the cache breaks down. (Apart from there not being a viable transition strategy, as far as I can see.)
3) There are certain hacks to fix this, like flushing the cache aggressively, but they remain hacks.
4) At any rate, the cache causes lots of headaches and there are calls for disabling it in general.
5) But that won't work either, because our performance depends crucially on it for some computations.

Having assumptions as a mixin lower down in the algebra hierarchy solves (1) [I think] and avoids the trouble with (2) and (3). Similarly, having caching as a "mixin algebra" addresses (4) without running into the performance problem of (5).
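
Very roughly, the shape I have in mind (all names here are hypothetical, nothing is existing code):

    class Algebra:
        """Base: only knows what its elements are built from."""
        def __init__(self, *args):
            self.args = args

    class AssumptionsMixin:
        """Adds assumptions to whatever algebra mixes it in (point (1) above)."""
        def __init__(self, *args, **assumptions):
            super().__init__(*args)
            self.assumptions = assumptions

    class CachingMixin:
        """Per-class construction cache; algebras that don't want it simply don't mix it in (points (4) and (5))."""
        _cache = {}
        def __new__(cls, *args, **kwargs):
            key = (cls, args, tuple(sorted(kwargs.items())))
            if key not in CachingMixin._cache:
                CachingMixin._cache[key] = super().__new__(cls)
            return CachingMixin._cache[key]

    class Calculus(AssumptionsMixin, CachingMixin, Algebra):
        """Opts in to both; a Verbatim-like algebra could opt out of either."""

    c = Calculus(1, 2, positive=True)
    print(c.args, c.assumptions)   # (1, 2) {'positive': True}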

The rest is just sugar to make things look coherent.
