I have been working on some linear algebra algorithms and have hit a
situation that I don't know whether anyone has addressed before.
Any thoughts would be appreciated.

In my system I have a base object TensorProps with shapes, ranks, and
the usual + - * / operators overloaded via the __foo__ special
methods.  Currently the __div__ and __rdiv__ routines check whether an
expression has an inverse and raise an error if it does not.
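A minimal sketch of that setup, with hypothetical names (`has_inverse`, `rank` are my assumptions, not a real API), using Python 3's `__rtruediv__` in place of the Python 2 `__rdiv__` mentioned above:

```python
class TensorProps:
    """Base object carrying shape/rank info; reflected division checks
    invertibility.  Hypothetical sketch: Python 3's __rtruediv__ stands
    in for the post's __rdiv__."""

    def __init__(self, name, rank, has_inverse=False):
        self.name = name
        self.rank = rank
        # Rank-1 objects (vectors) never have an inverse.
        self.has_inverse = has_inverse and rank != 1

    def __rtruediv__(self, other):
        # Called for expressions like 1/A.
        if not self.has_inverse:
            raise TypeError("%s has no inverse" % self.name)
        return ("inverse", other, self.name)


A = TensorProps("A", rank=2, has_inverse=True)
x = TensorProps("x", rank=1)

inv_A = 1 / A    # legal: A is invertible
# 1 / x          # raises TypeError: x has no inverse
```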

For example, given a matrix A with .has_inverse=True, 1/A is legal,
but given a vector x, 1/x is not (rank-1 objects never have an
inverse).  Under this system A*x will not have an inverse, yet
1/(A*x) will not raise an error, because Expr.__rdiv__ gets called
rather than TensorProps.__rdiv__.  Right now the only way I see to
rectify this is to create my own TensorMul operators and limit my use
to objects inheriting from TensorProps, and even with a TensorMul,
1/(a*TensorMul(A, x)) would still be legal.
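The dispatch problem can be reproduced without SymPy: once A*x collapses into a generic product node, Python asks that node's class (not TensorProps) for the reflected division. A toy sketch, all class names hypothetical stand-ins:

```python
class Expr:
    """Stand-in for a generic symbolic base: its reflected division
    happily builds a power, with no invertibility check."""

    def __rtruediv__(self, other):
        return ("pow", self, -1)   # 1/expr -> expr**-1, unchecked


class Mul(Expr):
    """Generic product node; note it does NOT carry the tensor check."""

    def __init__(self, *args):
        self.args = args


class Tensor(Expr):
    """Tensor leaf: refuses 1/x when no inverse exists (assumed API)."""

    def __init__(self, name, has_inverse):
        self.name = name
        self.has_inverse = has_inverse

    def __mul__(self, other):
        return Mul(self, other)    # products collapse to a plain Mul

    def __rtruediv__(self, other):
        if not self.has_inverse:
            raise TypeError("no inverse of %s" % self.name)
        return ("inverse", self.name)


A = Tensor("A", has_inverse=True)
x = Tensor("x", has_inverse=False)

# 1/x raises, but 1/(A*x) silently succeeds: Python dispatches to
# Mul's inherited Expr.__rtruediv__ and never sees the tensor check.
result = 1 / (A * x)
```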

Does anyone have any thoughts on how to make the Expr.__rdiv__ check
its arguments a bit better?

We could have the operators call the user-specified Mul, Add, and
Pow, but this would probably have a significant performance impact.
(In fact, a friend shared code that does something similar in
Mathematica: it checks registered patterns and then calls the
appropriate matching function.)
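That pattern-registry idea could be sketched like this in Python (a purely hypothetical API, not anything SymPy provides): a dispatcher scans registered predicate/handler pairs before falling back to the default construction, so the invertibility check runs even when the denominator is a compound expression.

```python
# Hypothetical registry of (predicate, handler) rules, tried in order
# before the default division behaviour.
_div_rules = []


def register_div_rule(predicate, handler):
    _div_rules.append((predicate, handler))


def divide(numerator, denominator):
    for predicate, handler in _div_rules:
        if predicate(denominator):
            return handler(numerator, denominator)
    return ("div", numerator, denominator)   # default, unchecked


def _reject(numerator, denominator):
    raise TypeError("%r is not invertible" % (denominator,))


# Rule: anything explicitly flagged non-invertible rejects division.
register_div_rule(
    lambda d: not getattr(d, "has_inverse", True),
    _reject,
)


class Vec:
    has_inverse = False   # assumed flag, mirroring TensorProps


ok = divide(1, 3.0)      # no rule matches: falls through to default
# divide(1, Vec())       # raises TypeError via the registered rule
```

The cost the post worries about is visible here: every division walks the rule list, which is exactly the per-operation overhead a registered-pattern scheme trades for correctness.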

-- Andy

-- 
You received this message because you are subscribed to the Google Groups 
"sympy" group.
