On Tue, Sep 28, 2010 at 11:20 AM, Aaron S. Meurer <asmeu...@gmail.com> wrote:
> Well, to me, the ideal fix is to make Mul and Add more independent of the 
> objects they hold by putting the combining routines inside the objects 
> themselves (see issue 1941). Of course, this fix would be difficult to 
> implement, since it implies rewriting the core in a pretty significant way.
>

Okay, I didn't know whether people were doing anything about it beyond
noting the problem.

> Is the new assumptions system smart enough yet to handle (A*x).has_inverse 
> <=> A.has_inverse and x.has_inverse?

I haven't tried the assumptions system yet.  I guess I could later, but
with __div__ the way it is, I don't expect it to help much.

>
> Also, in this particular case, you can compute that A*x doesn't have an 
> inverse by the shape of the object.  But I assume that you would also want it 
> to work with A*B, where A and B are both n x n but A has an inverse and B 
> doesn't.
>
> It isn't really a fix, but you could avoid using "1/" completely, and instead 
> only use **-1.  Am I wrong in thinking that ._eval_power() will be able to do 
> what you want then? This is actually better anyway, since this is a 
> non-commutative  multiplication.  (That reminds me that we ought to split out 
> the non-commutative stuff from Mul; maybe that could be a fix too).

The problem would still exist even if I created some inverse class
(__div__ already implements division as Pow(expr, -1)).  Roughly any
time someone calls __div__, I really need it to call my
expr_has_inverse function to check whether the division is legitimate.
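[To illustrate the check being discussed, here is a minimal plain-Python sketch of an expr_has_inverse predicate over products. Mul and Tensor are stand-ins, not sympy's classes, and the rule follows Aaron's earlier point that (A*x).has_inverse should reduce to A.has_inverse and x.has_inverse for conformable square factors.]

```python
class Mul:
    """Stand-in product node holding its factors in .args."""
    def __init__(self, *args):
        self.args = args

class Tensor:
    """Stand-in leaf object carrying a has_inverse flag."""
    def __init__(self, has_inverse):
        self.has_inverse = has_inverse

def expr_has_inverse(expr):
    # A product is invertible only if every factor is invertible;
    # anything without the attribute is treated as non-invertible.
    if isinstance(expr, Mul):
        return all(expr_has_inverse(arg) for arg in expr.args)
    return getattr(expr, "has_inverse", False)

A = Tensor(has_inverse=True)
B = Tensor(has_inverse=False)
print(expr_has_inverse(Mul(A, A)))  # True
print(expr_has_inverse(Mul(A, B)))  # False
```

[The point of the thread is that sympy's __div__ never gives this predicate a chance to run on a plain Mul.]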

-- Andy

>
> Aaron Meurer
>
> On Sep 28, 2010, at 9:00 AM, Andy Ray Terrel wrote:
>
>> I have been working on some linear algebra algorithms and have hit a
>> situation that I don't know whether anyone has addressed before.
>> Any thoughts would be appreciated.
>>
>> In my system I have a base object TensorProps with shapes, ranks, and
>> the usual + - * / overloaded with the __foo__ routines.  Currently the
>> __div__ and __rdiv__ routines will check to see if an expr has an
>> inverse and if not throw an error.
>>
>> For example, given a matrix A with .has_inverse=True, 1/A is
>> legal, but given a vector x, 1/x is not (rank-1 objects never have
>> an inverse).  Under this system A*x will not have an inverse, but
>> 1/(A*x) will not throw an error, because Expr.__rdiv__ gets called
>> instead of TensorProps.__rdiv__.  Right now the only way I see to
>> rectify this is to create my own TensorMul operator and limit my use
>> to objects inheriting TensorProps, although even with a TensorMul,
>> 1/(a*TensorMul(A, x)) would still be legal.
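[The dispatch problem described above can be reproduced in plain Python, without sympy. Python 2's __div__/__rdiv__ are spelled __truediv__/__rtruediv__ in modern Python; Expr and TensorProps here are minimal stand-ins for the real classes.]

```python
class Expr:
    def __mul__(self, other):
        return Expr()           # A*x collapses to a plain Expr (a Mul)
    def __rtruediv__(self, other):
        return "Expr.__rdiv__"  # no invertibility check happens here

class TensorProps(Expr):
    has_inverse = False
    def __rtruediv__(self, other):
        if not self.has_inverse:
            raise TypeError("object has no inverse")
        return "TensorProps.__rdiv__"

A = TensorProps()
x = TensorProps()

# 1/x goes through TensorProps.__rdiv__ and is rejected:
try:
    1 / x
except TypeError as e:
    print(e)            # object has no inverse

# But A*x is a plain Expr, so the check is silently skipped:
print(1 / (A * x))      # Expr.__rdiv__
```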
>>
>> Does anyone have any thoughts on how to make the Expr.__rdiv__ check
>> its arguments a bit better?
>>
>> We could have the operators call a user-specified Mul, Add, and
>> Pow, but this would probably have a significant performance impact.
>> (In fact, a friend shared some code that does something similar in
>> Mathematica: it checks registered patterns and then calls the
>> appropriate matching function.)
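[A rough sketch of that registered-pattern idea in Python; all names here (register, divide, Tensor) are invented for illustration, and no claim is made about how the Mathematica code actually works.]

```python
_handlers = []

def register(pattern, handler):
    """Try `handler` whenever `pattern(num, den)` matches a division."""
    _handlers.append((pattern, handler))

def divide(num, den):
    # Scan registered patterns before falling back to plain division.
    for pattern, handler in _handlers:
        if pattern(num, den):
            return handler(num, den)
    return num / den  # default behaviour when nothing matches

class Tensor:
    has_inverse = False

def _reject(num, den):
    raise TypeError("denominator has no inverse")

# Reject any division whose denominator is a non-invertible Tensor.
register(lambda n, d: isinstance(d, Tensor) and not d.has_inverse,
         _reject)

print(divide(6, 2))      # 3.0, the default path
try:
    divide(1, Tensor())  # caught by the registered pattern
except TypeError as e:
    print(e)
```

[The per-call scan over registered patterns is exactly where the performance cost mentioned above would come from.]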
>>
>> — Andy
>
> --
> You received this message because you are subscribed to the Google Groups 
> "sympy" group.
> To post to this group, send email to sy...@googlegroups.com.
> To unsubscribe from this group, send email to 
> sympy+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/sympy?hl=en.
>
>
