I have a program with a lot of floating-point constants and variables (it is a translation of a C++ program).
By default, all of these values currently live in RealField(53).

But, as my problem is somewhat ill-conditioned, I would like to compute in higher precision, say in RealField(1000).
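
For reference, the explicit workaround I would like to avoid is converting every constant by hand into a high-precision field, roughly like this (the names and values are only illustrative):

R = RealField(1000)      # 1000-bit real field
x = R(1)                 # small integers widen exactly
y = R('0.1')             # parse the literal directly at 1000 bits
z = R(0.1)               # careful: 0.1 is first rounded to 53 bits here
print(x.parent())        # Real Field with 1000 bits of precision

With hundreds of constants coming from the C++ code, editing each one this way is exactly what I am hoping not to do.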

Is there a way to change the default behavior of, say,

sage: x = 1.0
sage: x.parent()
Real Field with 53 bits of precision

so that:

sage: x = 1.0
sage: x.parent()

gives
Real Field with 1000 bits of precision
?
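
In case it helps frame the question, my (possibly wrong) understanding is that the Sage preparser rewrites a float literal such as 1.0 into RealNumber('1.0'), so rebinding RealNumber might be one route; I do not know whether that is a supported way to do it:

# Assumption: the preparser routes float literals through the global
# name RealNumber, so pointing it at a 1000-bit field should affect
# literals typed afterwards at the Sage prompt.
RealNumber = RealField(1000)

x = 1.0          # preparsed into RealNumber('1.0')
x.parent()       # expected: Real Field with 1000 bits of precision

As far as I know this would only apply where the preparser runs (the Sage prompt and .sage files), not in plain .py files, so a cleaner, officially supported way would still be welcome.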

Yours
t.
