>> I'm using fractions.Fraction as entries in a matrix because I need to
>> have very high precision and fractions.Fraction provides infinite
>> precision . . .
>>
>> Probably it doesn't matter but the matrix has all components non-zero
>> and is about a thousand by thousand in size.
>
> I wonder how big the numerators and denominators in those
> fractions are going to get during the matrix inversion.  Would
> it be surprising if the elements of the inverse matrix had
> numerators and denominators a million times longer than the
> original matrix?

I've checked this with Maple for a 150 x 150 matrix, and the
numerators and denominators do get pretty long. But that's okay, as
long as everything is kept exact.

The whole story is that I have a matrix A and a matrix B, both of which
have rational entries, and pretty crazy ones at that: their magnitudes
span many orders of magnitude. But inverse(A)*B is an okay matrix that I
can deal with using floating-point numbers. I only need this exact
fraction business to get inverse(A)*B in the first place (yes, a
preconditioner would be useful :))
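
For what it's worth, here is the plan in miniature: keep every entry as
an exact Fraction and drop to floats only at the very end. The entries
below are made up just to illustrate; they aren't my actual data:

from fractions import Fraction

# Made-up entries spanning many orders of magnitude; Fraction accepts
# decimal strings (with exponents) and stores them exactly, no rounding.
raw = [["1.0e-18", "3.5e12"],
       ["-2.75",   "8.125e-9"]]
A = [[Fraction(x) for x in row] for row in raw]

# ... do the exact inverse(A)*B computation here ...

# Only the final result gets rounded, entry by entry, to nearest floats:
A_float = [[float(x) for x in row] for row in A]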

And I wouldn't want to write the whole matrix into a file, call Maple
on it, parse the result, etc.

So in the end I might just code the inversion via Gaussian elimination
myself, in a way that can deal with fractions; it shouldn't be that hard.
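
In fact, if I run the elimination on the augmented system [A | B]
instead of inverting A, I get inverse(A)*B directly. A rough sketch of
what I have in mind, nothing more: plain lists of lists of Fraction,
Gauss-Jordan style, no pivoting for size (exact arithmetic doesn't need
it for accuracy), and the function name is just mine:

from fractions import Fraction

def solve_exact(A, B):
    """Return X with A*X == B, for an n x n A and n x m B of Fractions."""
    n, m = len(A), len(B[0])
    # Work on copies so the caller's matrices are left alone.
    A = [row[:] for row in A]
    B = [row[:] for row in B]
    for col in range(n):
        # We still need a non-zero pivot; swap rows if the diagonal entry
        # is zero.  (This blows up with StopIteration if A is singular.)
        pivot = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[pivot] = A[pivot], A[col]
        B[col], B[pivot] = B[pivot], B[col]
        # Eliminate the column in every other row (Gauss-Jordan).
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * p for a, p in zip(A[r], A[col])]
                B[r] = [b - f * q for b, q in zip(B[r], B[col])]
    # A is now diagonal; divide each row of B by its pivot.
    return [[B[r][c] / A[r][r] for c in range(m)] for r in range(n)]

A = [[Fraction(1, 3), Fraction(2, 7)],
     [Fraction(5, 11), Fraction(1, 2)]]
B = [[Fraction(1)], [Fraction(0)]]
X = solve_exact(A, B)                              # exact inverse(A)*B
X_float = [[float(x) for x in row] for row in X]   # floats only at the end

Passing the identity as B would give the inverse itself, but for
inverse(A)*B the right-hand-side form should save work. With a
1000 x 1000 matrix of fractions it will be slow, but at least exact.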

Cheers,
Daniel




-- 
Psss, psss, put it down! - http://www.cafepress.com/putitdown