On Sat, May 25, 2013 at 4:38 PM, Aaron Meurer <asmeu...@gmail.com> wrote:

> OK, sorry for not following this for a few days. I was busy moving to
> Austin.
>
> A few points:
>
> - I like Ronan's idea of putting the info in the func.  This is the
> whole reason that we use expr.func instead of type(expr), so that we
> can potentially make non-class head objects.   This idea also opens up
> a lot of possibilities with
> https://code.google.com/p/sympy/issues/detail?id=1688.
>

I also like this.  The operation (head) can carry identifying information.
I'm quite comfortable with "All identifying information is present in .args
and .op/.head."  More generally, I'm comfortable with "All identifying
information is present in a predefined set of attributes" and "We can
reconstruct an object from a predefined set of information."

My understanding of this idea is that Symbol.name would access information
stored in Symbol.op/head/func.  What is the structure of the op/head data?
Another tuple?  A dict?  An arbitrary Python object?
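
For concreteness, a minimal sketch of one possible answer, assuming the
head is an arbitrary (callable) Python object.  SymbolHead and MySymbol
are hypothetical names, not existing SymPy API:

    # Hypothetical: a head object that carries the identifying
    # information, so that .name is just a view onto the head.
    class SymbolHead(object):
        def __init__(self, name):
            self.name = name  # identifying info lives on the head

        def __call__(self, *args):
            return MySymbol(self)  # head(*args) rebuilds the object

        def __eq__(self, other):
            return isinstance(other, SymbolHead) and self.name == other.name

        def __hash__(self):
            return hash(('SymbolHead', self.name))

    class MySymbol(object):
        def __init__(self, head):
            self.func = head  # the .op/.head/.func slot
            self.args = ()    # all other identifying info

        @property
        def name(self):
            return self.func.name  # delegates to the head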


> One comment on Ronan's reply, though: I don't see the point of
> rebuild().  How is that different from func.__call__?  What are the
> obstacles to making the head an object (or just using metaclasses,
> though that can cause issues with things like pickling)?
>

Perhaps this is the ._from_args method that exists in some Expr classes?
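
For reference, the rebuild invariant that func.__call__ already satisfies
for well-behaved classes (this part is ordinary SymPy):

    from sympy import Symbol, sin

    x = Symbol('x')
    expr = sin(x) + x**2

    # The convention: every expression can be rebuilt from its
    # head and its arguments.
    assert expr == expr.func(*expr.args)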


> - Regarding Symbol being Expr, this is orthogonal to this discussion.
> I agree it is an issue for using it as a name, and that Symbol really
> should have a more generic Basic superclass (this comes up with using
> Symbol for booleans already, see
> https://code.google.com/p/sympy/issues/detail?id=1887#c26).
>
> - I agree that we should reuse common traversal patterns as named
> functions.  Quite a few traversal algorithms can probably be rewritten
> using bottom_up, pre/postorder_traversal, atoms, xreplace, and so on.
> But even so, traversal is pretty easy, and sometimes it's simpler to
> just write a custom function.  Also, if you want to perform some
> custom action while traversing, it can be more efficient to do the
> traversal once (e.g., using atoms + xreplace traverses the tree
> twice).
>

The problem with custom traversal is that it interweaves math code with
traversal code.  If I later decide that a traversal should be top_down
instead of bottom_up (or something weirder), I shouldn't need to understand
the mathematical code to make that change.  There are lots of parts of
SymPy where only a few people are qualified to make changes, even when
those changes have little to do with the domain.  By restricting ourselves
to standard traversal functionality we engage much more of the developer
population (and potentially automation) across more of the codebase.  My
solution is to write many small functions that only operate locally and
then hand them to traversal functions like bottom_up.
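
For example, a sketch of this style using bottom_up from
sympy.strategies.traverse (the sin -> cos rule is just an illustrative
stand-in for real mathematical logic):

    from sympy import Basic, Symbol, cos, sin
    from sympy.strategies.traverse import bottom_up

    x = Symbol('x')

    # A small local function: it only knows about a single node.
    def sin_to_cos(expr):
        if isinstance(expr, Basic) and expr.func == sin:
            return cos(*expr.args)
        return expr

    # The traversal is chosen separately from the math; swapping
    # bottom_up for top_down needs no knowledge of sin_to_cos.
    rule = bottom_up(sin_to_cos)
    print(rule(sin(sin(x)) + 1))  # cos(cos(x)) + 1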


> > Even with this change I'm not confident that SymPy follows this
> convention.  Things like AppliedPredicate screw things up.
>
> SymPy doesn't follow any conventions consistently. That's one of the
> reasons we have this discussion, so we can decide what the convention
> should even be.  All the skipped tests in test_args represent a
> breaking of the Basic args convention. It doesn't even test for
> rebuildability: if you add that, even more tests fail.
>
> I like option 3 because it allows us to take advantage of native
> Python class abilities.  In a language like lisp, you only have lists,
> so it is natural to structure things as simply as possible with lists.
>  But in Python, you can override equality testing.  You can make
> classes that return arbitrary other classes in their constructors. You
> can define objects that act like classes (using metaclasses or just
> __call__, depending on your purposes). The coding is harder for us,
> for sure, but if we provide consistent high-level interfaces like
> .func and .args, I think it is OK, and it lets us solve problems in a
> simpler way than if we were more restricted.
>

I think it's important to keep things as simple as is meaningful.  Custom
data structures limit interoperation, and interoperation should be a high
priority for SymPy's broader impact.  Clearly this can be taken too far:
I think that Tuples/s-expressions unnecessarily interweave the operation
and the arguments.  That is, in (op, arg, arg, arg), op and the args are
on the same footing, which goes too far.  Still, I don't think we need
anything substantially more complex than this.

Something like obj.op and obj.args makes sense to me.
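
To make the contrast concrete, a hypothetical sketch (neither Node nor
its fields are existing SymPy API):

    from collections import namedtuple

    x, y = 'x', 'y'  # stand-in leaves

    # s-expression style: op and the args share one tuple, on
    # equal footing.
    sexpr = ('Add', x, y)

    # op/args style: the operation and the arguments live on
    # separate, clearly named attributes.
    Node = namedtuple('Node', ['op', 'args'])
    node = Node(op='Add', args=(x, y))

    assert node.op == 'Add'
    assert node.args == (x, y)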

The Python language does offer us a lot of options, and I think many of
these options, like syntax overloading, are awesome.  I think I agree
(though not with high certainty) that a consistent interface is
sufficient, even with a complex data structure.  This requires us to be
pretty strict about exactly how objects implement that interface, though;
are you confident that we can achieve this?  In general I prefer simple
data structures because they force a simple interface and because I think
that many of the fancier features aren't particularly helpful.  Perhaps I
just haven't run into those use cases.
