>>> sympy.root(n, 6)
10*13717421**(1/6)*3**(1/3)
>>> sympy.root(n, 6).evalf(50)
22314431635.562095902499928269233656421704825692573
>>> mpmath.root(n, 6)
mpf('22314431635.562096')
>>> mpmath
On 2023-02-18 03:52:51 +, Oscar Benjamin wrote:
> On Sat, 18 Feb 2023 at 01:47, Chris Angelico wrote:
> > On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
> > > To avoid it you would need to use an algorithm that computes nth
> > > roots directly rather than raising to the power 1/n.
>
92181745e+20
SymPy can also evaluate any rational power either exactly or to any
desired accuracy. Under the hood SymPy uses mpmath for the approximate
numerical evaluation part of this and mpmath can also be used directly
with its cbrt and nthroot functions to do this working with any
desired precision.
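A minimal sketch of that mpmath route (the working precision and the test value are illustrative, not the n from the thread):

import mpmath

mpmath.mp.dps = 50             # work with 50 significant decimal digits
n = mpmath.mpf(12345) ** 6     # exactly representable test value
print(mpmath.cbrt(n))          # cube root: 152399025.0
print(mpmath.root(n, 6))       # sixth root: 12345.0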
On 2/17/23 15:03, Grant Edwards wrote:
> Every fall, the groups were again full of a new crop of people who had
> just discovered all sorts of bugs in the way
> implemented floating point, and pointing them to a nicely written
> document that explained it never did any good.
But to be fair, Goldb
On Sat, 18 Feb 2023 at 12:41, Greg Ewing via Python-list
wrote:
>
> On 18/02/23 7:42 am, Richard Damon wrote:
> > On 2/17/23 5:27 AM, Stephen Tucker wrote:
> >> None of the digits in RootNZZZ's string should be different from the
> >> corresponding digits in RootN.
> >
> > Only if the storage form
On 18/02/23 7:42 am, Richard Damon wrote:
On 2/17/23 5:27 AM, Stephen Tucker wrote:
None of the digits in RootNZZZ's string should be different from the
corresponding digits in RootN.
Only if the storage format was DECIMAL.
Note that using decimal wouldn't eliminate this particular problem,
On 2023-02-17, Mats Wichmann wrote:
> And... this topic as a whole comes up over and over again, like
> everywhere.
That's an understatement.
I remember it getting rehashed over and over again in various USENET
groups 35 years ago when the VAX 11/780 BSD machine on which I
read news exchan
On 2/17/23 11:42, Richard Damon wrote:
On 2/17/23 5:27 AM, Stephen Tucker wrote:
The key factor here is IEEE floating point is storing numbers in BINARY,
not DECIMAL, so a multiply by 1000 will change the representation of the
number, and thus the possible resolution errors.
Store your number
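A short sketch of that effect (the starting value is illustrative, not the one from the thread):

# The cube roots of n, n*1000, n*1000**2, ... should share their leading
# digits, but the binary doubles drift after roughly 16 significant digits,
# and further once n * 1000**k no longer fits exactly in a double.
n = 123456789
for k in range(4):
    print(repr((n * 1000 ** k) ** (1.0 / 3.0)))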
On 2023-02-17, Richard Damon wrote:
> [...]
>
>> Perhaps this observation should be brought to the attention of the IEEE. I
>> would like to know their response to it.
>
> That is why they have developed the Decimal Floating point format, to
> handle people with those sorts of problems.
>
> They
On 2023-02-17 14:39:42 +, Weatherby,Gerard wrote:
> IEEE did not define a standard for floating point arithmetic. They
> designed multiple standards, including a decimal floating point one.
> Although decimal floating point (DFP) hardware used to be
> manufactured, I couldn’t find any current man
On 2023-02-17 10:27:08 +, Stephen Tucker wrote:
> This is a hugely controversial claim, I know, but I would consider this
> behaviour to be a serious deficiency in the IEEE standard.
>
> Consider an integer N consisting of a finitely-long string of digits in
> base 10.
>
> Consider the infini
is just a convenient way to write
a small subset of real numbers. By using any base you limit yourself to
rational numbers (no e or π or √2) and in fact only those rational
numbers where the denominator is a power of the base.
Converting numbers from one base to another with any finite precision
wil
n [9]: e.evalf(50)
Out[9]: 49793385921817.447440261250171604380899353243631762
Because the *entire* expression is represented here *exactly* as e it
is then possible to evaluate different parts of the expression
repeatedly with different levels of precision and it is necessary to
do that for full accuracy.
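For example, along the lines of the session quoted above (the integer here is illustrative):

import sympy

e = sympy.root(sympy.Integer(10) ** 40 + 1, 3)   # kept exact, never rounded
print(e.evalf(30))   # 30 significant digits
print(e.evalf(60))   # the same exact expression re-evaluated at 60 digits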
On 2/17/23 5:27 AM, Stephen Tucker wrote:
Thanks, one and all, for your responses.
This is a hugely controversial claim, I know, but I would consider this
behaviour to be a serious deficiency in the IEEE standard.
Consider an integer N consisting of a finitely-long string of digits in
base 10.
mplemented in software. It's the way your CPU represents floating point
numbers in silicon. And in your GPUs (where speed is preferred to
precision). So it's not like Python could just arbitrarily do something
different unless you were willing to pay a huge penalty for speed. For
exa
p]
>> >> I have just produced the following log in IDLE (admittedly, in Python
>> >> 2.7.10 and, yes I know that it has been superseded).
>> >>
>> >> It appears to show a precision tail-off as the supplied float gets
>> bigger.
>> [sn
are ones. For more complex calculations where the errors can
accumulate, you may need to choose a small number with more such bits near
the end.
Extended precision arithmetic is perhaps cheaper now and can be done for a
reasonable number of digits. It probably is not realistic to do most such
until a few
years ago, but they seem to have gone dark: https://twitter.com/SilMinds
From: Python-list on
behalf of Thomas Passin
Date: Friday, February 17, 2023 at 9:02 AM
To: python-list@python.org
Subject: Re: Precision Tail-off?
What you are not considering is that the IEEE standard is about trying
to achieve a balance between resource use (memory and registers),
precision, speed of computation, reliability (consistent and stable
results), and compatibility. So there have to be many tradeoffs. One
of them is the
>
> Stephen Tucker.
>
>
> On Thu, Feb 16, 2023 at 6:49 PM Peter Pearson
> wrote:
>
>> On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
>> > On Tue, 14 Feb 2023 at 07:12, Stephen Tucker
>> wrote:
>> [snip]
>> >> I have just produced
2023 at 07:12, Stephen Tucker
> wrote:
> [snip]
> >> I have just produced the following log in IDLE (admittedly, in Python
> >> 2.7.10 and, yes I know that it has been superseded).
> >>
> >> It appears to show a precision tail-off as the supplied float gets
>
On Tue, 14 Feb 2023 11:17:20 +, Oscar Benjamin wrote:
> On Tue, 14 Feb 2023 at 07:12, Stephen Tucker wrote:
[snip]
>> I have just produced the following log in IDLE (admittedly, in Python
>> 2.7.10 and, yes I know that it has been superseded).
>>
>> It appears to s
)
8.881784197001252e-16
1E-99
From: Python-list on
behalf of Michael Torrie
Date: Tuesday, February 14, 2023 at 5:52 PM
To: python-list@python.org
Subject: Re: Precision Tail-off?
On 2
On 2/14/23 00:09, Stephen Tucker wrote:
> I have two questions:
> 1. Is there a straightforward explanation for this or is it a bug?
To you 1/3 may be an exact fraction, and the definition of raising a
number to that power means a cube root which also has an exact answer,
but to the computer, 1/3 is not exact.
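A quick way to see what the computer actually stores for 1/3:

from fractions import Fraction

x = 1.0 / 3.0
print(Fraction(x))   # 6004799503160661/18014398509481984, not 1/3
print(x.hex())       # 0x1.5555555555555p-2, the nearest binary fraction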
Use Python3
Use the decimal module: https://docs.python.org/3/library/decimal.html
From: Python-list on
behalf of Stephen Tucker
Date: Tuesday, February 14, 2023 at 2:11 AM
To: Python
Subject: Precision Tail-off?
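A minimal sketch of that suggestion applied to the kind of root in the original question (the value of n is illustrative, and the exponent 1/3 is itself rounded to the working precision, so this is not an exact cube root):

from decimal import Decimal, getcontext

getcontext().prec = 50
n = Decimal(12345678901234567890)
print(n ** (Decimal(1) / 3))   # about 50 significant digits, regardless of n's size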
On Tue, 14 Feb 2023 at 07:12, Stephen Tucker wrote:
>
> Hi,
>
> I have just produced the following log in IDLE (admittedly, in Python
> 2.7.10 and, yes I know that it has been superseded).
>
> It appears to show a precision tail-off as the supplied float gets bigger.
>
>
Hi,
I have just produced the following log in IDLE (admittedly, in Python
2.7.10 and, yes I know that it has been superseded).
It appears to show a precision tail-off as the supplied float gets bigger.
I have two questions:
1. Is there a straightforward explanation for this or is it a bug?
2
On 2021-03-08 09:29:31 +0100, Mirko via Python-list wrote:
> On 07.03.2021 at 21:52, Avi Gross via Python-list wrote:
> > The precedence example used below made a strange assumption that the
> > imaginary program would not be told up-front what computer language it was
> > being asked to convert f
On 07.03.2021 at 21:52, Avi Gross via Python-list wrote:
> The precedence example used below made a strange assumption that the
> imaginary program would not be told up-front what computer language it was
> being asked to convert from. That is not the scenario being discussed as we
> have describe
On 07.03.21 at 20:42, Peter J. Holzer wrote:
The second part is converting a parse tree into code. I am quite sure
that it is possible to devise a formal language to specify the semantics
of any programming language and then to use this to generate the code.
However, I strongly suspect that such
king
anything, but I think a universal translator may not be imminent.
-----Original Message-----
From: Python-list On
Behalf Of Peter J. Holzer
Sent: Sunday, March 7, 2021 2:43 PM
To: python-list@python.org
Subject: Re: neonumeric - C++ arbitrary precision arithmetic library
On 2021-03-06 23:4
On 2021-03-06 23:41:10 +0100, Mirko via Python-list wrote:
> I even wonder why they have tried. Writing a universal
> compiler/interpreter sounds logically impossible to me. Here's a
> simple Python expression:
>
> >>> 3+3*5
> 18
>
> And here's the same expression in (GNU) Smalltalk:
>
> st> 3+3
On Sun, Mar 7, 2021 at 9:42 AM Mirko via Python-list
wrote:
> I even wonder why they have tried. Writing a universal
> compiler/interpreter sounds logically impossible to me. Here's a
> simple Python expression:
>
> >>> 3+3*5
> 18
>
> And here's the same expression in (GNU) Smalltalk:
>
> st> 3+3*
On 06.03.2021 at 22:24, Ben Bacarisse wrote:
> Mr Flibble writes:
>
>>> Someone who says that he is capable of writing a compiler that
>>> translates every language has megalomania. No one can do this.
>>
>> Just because you can't make one it doesn't follow that nobody else
>> can.
>
> True, bu
On 3/5/2021 8:51 AM, Mr Flibble wrote:
neonumeric - C++ arbitrary precision arithmetic library with arbitrary
precision integers, floats and rationals:
https://github.com/i42output/neonumeric
It hasn't been formally released yet as it still requires more extensive
testing. It will be
Bonita Montero
Sent: Saturday, March 6, 2021 2:12 PM
To: python-list@python.org
Subject: Re: neonumeric - C++ arbitrary precision arithmetic library
>> There is no projection.
>> _You_ have megalomania, not me.
>> And there's also no Dunning Kruger effect.
>> You can'
Mr Flibble writes:
>> Someone who says that he is capable of writing a compiler that
>> translates every language has megalomania. No one can do this.
>
> Just because you can't make one it doesn't follow that nobody else
> can.
True, but lots of very knowledgeable people have tried and failed.
On 3/6/2021 11:35 AM, Mr Flibble wrote:
On 06/03/2021 19:11, Bonita Montero wrote:
There is no projection.
_You_ have megalomania, not me.
And there's also no Dunning Kruger effect.
You can't assess your capabilities, not me.
no u
Someone who says that he is capable of writing a compiler tha
There is no projection.
_You_ have megalomania, not me.
And there's also no Dunning Kruger effect.
You can't assess your capabilities, not me.
no u
Someone who says that he is capable of writing a compiler that
translates every language has megalomania. No one can do this.
On Sun, Mar 7, 2021 at 4:51 AM Bonita Montero wrote:
>
> > Whilst you need to read the following:
> > * https://en.wikipedia.org/wiki/Psychological_projection
> > * https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
>
> There is no projection.
> _You_ have megalomania, not me.
> And there
Whilst you need to read the following:
* https://en.wikipedia.org/wiki/Psychological_projection
* https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
There is no projection.
_You_ have megalomania, not me.
And there's also no Dunning Kruger effect.
You can't assess your capabilities, not
It hasn't been formally released yet as it still requires more extensive
testing. It will be used as part of my universal compiler, neos, that
can compile any programming language ...
You will get the same room in the same psychiatric ward like
Amine Moulay Ramdane.
t pass truncates the output with a field width
> specifier or similar. If not, then you should see the initial change
> you've noticed and then nothing more after that for that particular datum.
Ok, I like that.
>
>
> Having said all that, if you use Python's decimal.Decimal
at/double for processing your files, then you are better off if
absolute precision is what you need.
E.
On 2017-03-11 22:01, Paulo da Silva wrote:
Hi!
I have a dir with lots of csv files. These files are updated +- once a
day. I could see that some values are converted, during output, to very
close values but with lots of digits. I understand that is caused by the
internal bits' representation of
Hi!
I have a dir with lots of csv files. These files are updated +- once a
day. I could see that some values are converted, during output, to very
close values but with lots of digits. I understand that is caused by the
internal bits' representation of the float/double values.
Now my question is:
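One common way to deal with the long digit strings described above is either to keep the values as decimal.Decimal or to format the floats to a fixed number of digits on output; a hypothetical sketch (file name and column position are assumptions):

import csv
from decimal import Decimal

with open('data.csv', newline='') as f:
    for row in csv.reader(f):
        exact = Decimal(row[0])          # keeps the text exactly, no binary float
        print(f'{float(row[0]):.6f}')    # or: format the float to 6 decimal places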
On Tue, Jul 12, 2016, at 02:21, Steven D'Aprano wrote:
> If not, then what are the alternatives? Using str.format, how would
> you get the same output as this?
>
>
> py> "%8.4d" % 25
> '    0025'
"%04d" % 25
"%8s" % ("%04d" % 25)
The latter (well, generally, "format it how you want and the
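For comparison, the same output with str.format (the format mini-language does not allow a precision field for 'd', so the zero-padding and the field width are done in two steps):

print("{:>8}".format("{:04d}".format(25)))   # '    0025'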
On Tue, 12 Jul 2016 07:50 pm, Antoon Pardon wrote:
> Op 12-07-16 om 06:19 schreef Steven D'Aprano:
>> On Tue, 12 Jul 2016 07:51 am, Chris Angelico wrote:
>>
>>> say, 2,147
>>> millimeters, with a precision of four significant digits
>>
>>
Op 12-07-16 om 12:27 schreef Marko Rauhamaa:
> Antoon Pardon :
>
>> Op 12-07-16 om 06:19 schreef Steven D'Aprano:
>>> How do you represent 1 mm to a precision of four significant digits,
>>> in such a way that it is distinguished from 1 mm to one significant
Antoon Pardon :
> Op 12-07-16 om 06:19 schreef Steven D'Aprano:
>> How do you represent 1 mm to a precision of four significant digits,
>> in such a way that it is distinguished from 1 mm to one significant
>> digit, and 1 mm to a precision of four decimal places?
>
Op 12-07-16 om 06:19 schreef Steven D'Aprano:
> On Tue, 12 Jul 2016 07:51 am, Chris Angelico wrote:
>
>> say, 2,147
>> millimeters, with a precision of four significant digits
>
> How do you represent 1 mm to a precision of four significant digits, in such
> a way
On Tuesday 12 July 2016 08:17, Ethan Furman wrote:
> So, so far there is no explanation of why leading zeroes make a number
> more precise.
Obviously it doesn't, just as trailing zeroes doesn't make a number more
precise. Precision in the sense used by scientists is a pro
On Tue, Jul 12, 2016 at 2:19 PM, Steven D'Aprano wrote:
> On Tue, 12 Jul 2016 07:51 am, Chris Angelico wrote:
>
>> say, 2,147
>> millimeters, with a precision of four significant digits
>
>
> How do you represent 1 mm to a precision of four significant di
eters, with a precision of four significant digits, and
excellent accuracy. But if I multiply those numbers together to
establish the floor area of the corridor, the result does NOT have
four significant figures. It would be 64 square meters (not 64.41),
and the accuracy would be pretty low (effect
On Tue, 12 Jul 2016 07:51 am, Chris Angelico wrote:
> say, 2,147
> millimeters, with a precision of four significant digits
How do you represent 1 mm to a precision of four significant digits, in such
a way that it is distinguished from 1 mm to one significant digit, and 1 mm
to a precis
On 7/11/2016 5:51 PM, Chris Angelico wrote:
This is why it's important to be able to record precisions of
arbitrary numbers. If I then measure the width of this corridor with a
laser, I could get an extremely precise answer - say, 2,147
millimeters, with a precision of four significant d
Ethan Furman writes:
> On 07/11/2016 01:56 PM, Ben Finney wrote:
>
> > Precision is not a property of the number. It is a property of the
> > *representation* of that number.
> >
> > The representation “1×10²” has a precision of one digit.
> > The represent
On Tue, Jul 12, 2016 at 8:17 AM, Ethan Furman wrote:
>> This is why it's important to be able to record precisions of
>> arbitrary numbers. If I then measure the width of this corridor with a
>> laser, I could get an extremely precise answer - say, 2,147
>> millimet
On Tue, Jul 12, 2016 at 9:14 AM, Jan Coombs
wrote:
> These all look good, but you may get into trouble if you trust a
> PC with them!
>
> If the language/PC uses floating point representation then it
> will assign a fixed number of bits for the fractional part, and
> this will be left aligned in a
On Tue, 12 Jul 2016 07:51:23 +1000
Chris Angelico wrote:
[snip]
>
> Yep. Precision is also a property of a measurement, the same
> way that a unit is. If I pace out the length of the main
> corridor in my house, I might come up with a result of thirty
> meters. The number is
Ben Finney writes:
> Ethan Furman writes:
>
>> I will readily admit to not having a maths degree, and so of course to
>> me saying the integer 123 has a precision of 5, 10, or 99 digits seems
>> like hogwash to me.
>
> Precision is not a property of the n
On 07/11/2016 03:17 PM, Ethan Furman wrote:
So, so far there is no explanation of why leading zeroes make a number
more precise.
An example of what I mean:
174 with a precision of 3 tells us that the tenths place could be any of
0-9, or, put another way, the actual number could be anywhere
On 07/11/2016 02:51 PM, Chris Angelico wrote:
On Tue, Jul 12, 2016 at 6:56 AM, Ben Finney wrote:
Precision is not a property of the number. It is a property of the
*representation* of that number.
The representation “1×10²” has a precision of one digit.
The representation “100” has a
On Tue, Jul 12, 2016 at 6:56 AM, Ben Finney wrote:
> Precision is not a property of the number. It is a property of the
> *representation* of that number.
>
> The representation “1×10²” has a precision of one digit.
> The representation “100” has a precision of three digits.
>
On 07/11/2016 01:56 PM, Ben Finney wrote:
Precision is not a property of the number. It is a property of the
*representation* of that number.
The representation “1×10²” has a precision of one digit.
The representation “100” has a precision of three digits.
The representation “00100” has a
Ethan Furman writes:
> I will readily admit to not having a maths degree, and so of course to
> me saying the integer 123 has a precision of 5, 10, or 99 digits seems
> like hogwash to me.
Precision is not a property of the number. It is a property of the
*representation* of that num
Hi, Thank you for your answer.
Actually this is the third version I have written using the QD library; the
first one was in Python using ctypes; the second one was in Cython; this one is in C. I
don't claim to be a Cython expert, and maybe my Cython code was not optimal, but
I can say the C version is s
Chris Angelico wrote on 31.07.2015 at 09:37:
> On Fri, Jul 31, 2015 at 5:26 PM, Stefan Behnel wrote:
>> Your C code seems to be only about 1500 lines, not too late to translate
>> it. That should save you a couple of hundred lines and at the same time
>> make it work with Python 3 (which it curre
On Fri, Jul 31, 2015 at 5:26 PM, Stefan Behnel wrote:
> Your C code seems to be only about 1500 lines, not too late to translate
> it. That should save you a couple of hundred lines and at the same time
> make it work with Python 3 (which it currently doesn't, from what I see).
I was just looking
baruc...@gmail.com wrote on 30.07.2015 at 22:09:
> It is written in pure C with the CPython C-API in order to get the highest
> possible speed.
This is a common fallacy. Cython should still be able to squeeze another
bit of performance out of your wrapper for you. It tends to know the C-API
bet
Hi,
I wrote a module for wrapping the well-known high-precision QD library written
by D.H. Bailey.
You can find it here: https://github.com/baruchel/qd
It is written in pure C with the CPython C-API in order to get the highest
possible speed.
The QD library provides floating number types for
On 11/20/2013 11:34 AM, Kay Y. Jheallee wrote:
On 13.Nov.20.Wed 14:02, Steven D'Aprano wrote:
> Hi Kay,
>
> You emailed me off-list, but your email address is bouncing or invalid,
> so I have no way to email you back.
So THAT's where it went! Sorry about that...yes, it WAS meant for the group
On 20/11/2013 19:34, Kay Y. Jheallee wrote:
Ah, that looks like just the puppy I'm looking for. :)
Okay then, I just installed the PortableApps version of Python,
but when I downloaded "mpmath-0.17.win32" the installer aborted with "No
Python installation found in the registry".
So I'm trying to
On 20/11/2013 19:59, Mark Lawrence wrote:
On 20/11/2013 19:34, Kay Y. Jheallee wrote:
Ah, that looks like just the puppy I'm looking for. :)
Okay then, I just installed the PortableApps version of Python,
but when I downloaded "mpmath-0.17.win32" the installer aborted with "No
Python installati
On 20/11/2013 19:34, Kay Y. Jheallee wrote:
Ah, that looks like just the puppy I'm looking for. :)
Okay then, I just installed the PortableApps version of Python,
but when I downloaded "mpmath-0.17.win32" the installer aborted with "No
Python installation found in the registry".
So I'm trying to
> Okay, but after I import "math" and "decimal",
>
> py> decimal.getcontext().prec=75
> py> print decimal.Decimal(math.atan(1))
> 0.78539816339744827899949086713604629039764404296875
>
> though I set precision to 75, it onl
a regular
> float.
>
> Unfortunately, Decimals don't support high-precision trig functions. If
> you study the decimal.py module, you could possibly work out how to add
> support for trig functions, but they have no current support for them.
The documentation has examples for how to make hig
int decimal.Decimal(math.atan(1))
0.78539816339744827899949086713604629039764404296875
though I set precision to 75, it only did the trig function to 50
places AND it is only right to 16 places,
0.785398163397448309615660845819875721049292349843776...(actual).
[end quote]
Here, you calculate the atan as an ordinary float first, so only about 16
digits can be correct no matter what precision the Decimal context uses.
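mpmath, mentioned elsewhere in these threads, is one way to get the high-precision arctangent that decimal lacks:

import mpmath

mpmath.mp.dps = 75
print(mpmath.atan(1))   # pi/4 to 75 significant digits
print(mpmath.pi / 4)    # agrees, as a cross-check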
On Mon, 18 Nov 2013 14:14:33 +, Kay Y. Jheallee wrote:
> Using 1/3 as an example,
[snip examples]
> which seems to mean real (at least default) decimal precision is limited
> to "double", 16 digit precision (with rounding error).
That's because Python floats actua
; % (100./3)
33.33333333333333570180911920033395290374755859375000
which seems to mean real (at least default) decimal precision
is limited to "double", 16 digit precision (with rounding error).
Is there a way to increase the real precision, preferably as
the default?
For instance, UBasi
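With the decimal module the working precision is a context setting, for example:

from decimal import Decimal, getcontext

getcontext().prec = 40
print(Decimal(100) / 3)   # 33.333... carried to 40 significant digits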
Documents
>
> The time stamp has millisecond precision but the decimal separator is a
> comma.
>
> Can I change the comma (,) into a period (.) and if so how?
I do it by:
1. Replacing the default date format string to exclude ms.
2. Including %(msecs)03d in the format string where the milliseconds should appear, as sketched below.
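A sketch of those two steps (the datefmt shown is an assumption about the desired layout):

import logging

formatter = logging.Formatter(
    fmt='%(asctime)s.%(msecs)03d \t %(name)s \t %(levelname)s \t %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',   # step 1: drop the default ',mmm' millisecond suffix
)                                  # step 2: %(msecs)03d re-adds them after a period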
> The time stamp has millisecond precision but the decimal separator is a
> comma.
>
> Can I change the comma (,) into a period (.) and if so how?
I think you have to subclass Formatter.formatTime(). Here's a monkey-
patching session to get you started:
>>
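The quoted session is cut off above; a hypothetical sketch of the subclassing approach it refers to:

import logging
import time

class PeriodFormatter(logging.Formatter):
    # Render the timestamp with a period before the milliseconds instead of a comma.
    def formatTime(self, record, datefmt=None):
        ct = self.converter(record.created)
        if datefmt:
            return time.strftime(datefmt, ct)
        return '%s.%03d' % (time.strftime('%Y-%m-%d %H:%M:%S', ct), record.msecs)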
I use this formatter in logging:
formatter = logging.Formatter(fmt='%(asctime)s \t %(name)s \t %(levelname)s
\t %(message)s')
Sample output:
2012-07-19 21:34:58,382 root INFO Removed - C:\Users\ZDoor\Documents
The time stamp has millisecond precision but the decimal separator
I use this formatter in logging:
formatter = logging.Formatter(fmt='%(asctime)s \t %(name)s \t %(levelname)s
\t %(message)s')
Sample output:
2012-07-19 21:34:58,382 root INFO Removed - C:\Users\ZDoor\Documents
The time stamp has millisecond precision but the decimal sepa
In article ,
Ken wrote:
>Brand new Python user and a bit overwhelmed with the variety of
>packages available. Any recommendation for performing numerical
>linear algebra (specifically least squares and generalized least
>squares using QR or SVD) in arbitrary precision? I'v
On 2/17/12 6:09 AM, Tim Roberts wrote:
Ken wrote:
Brand new Python user and a bit overwhelmed with the variety of
packages available. Any recommendation for performing numerical
linear algebra (specifically least squares and generalized least
squares using QR or SVD) in arbitrary precision
Ken wrote:
>
>Brand new Python user and a bit overwhelmed with the variety of
>packages available. Any recommendation for performing numerical
>linear algebra (specifically least squares and generalized least
>squares using QR or SVD) in arbitrary precision? I've been lo
Brand new Python user and a bit overwhelmed with the variety of
packages available. Any recommendation for performing numerical
linear algebra (specifically least squares and generalized least
squares using QR or SVD) in arbitrary precision? I've been looking at
mpmath but can't se
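Not from the thread, but for reference: mpmath does expose a QR-based least-squares solve at arbitrary precision, along these lines:

import mpmath

mpmath.mp.dps = 50
A = mpmath.matrix([[1, 1], [1, 2], [1, 3]])
b = mpmath.matrix([1, 2, 2])
x, residual = mpmath.qr_solve(A, b)   # least-squares solution and residual norm
print(x)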
>> Hello. I have written a Python program which currently uses numpy to
> >> >> perform linear algebra operations. Specifically, I do matrix*matrix,
> >> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
> >> >> operations. Now I am in
and linalg.eig(matrix)
> > operations. Now I am interested in allowing arbitrary precision. I
> > have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
> > to easily implement any with my current program. I suspect I have to
> > change some commands but I
nterested in allowing arbitrary precision. I
> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
> to easily implement any with my current program. I suspect I have to
> change some commands but I am unsure what.
numpy is implemented in C, and is limited to the language
Ben123 writes:
> I'll ask on the Sage forums about this. In the mean time, I'm still
> trying to get arbitrary precision linear algebra in Python
You probably have to use something like gmpy.mpq to implement your
favorite eigenvalue computation algorithm. Maxima might be able
numpy to
>> >> perform linear algebra operations. Specifically, I do matrix*matrix,
>> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
>> >> operations. Now I am interested in allowing arbitrary precision. I
>> >> have tried gmpy,
fically, I do matrix*matrix,
> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
> >> operations. Now I am interested in allowing arbitrary precision. I
> >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
> >> to ea
fically, I do matrix*matrix,
> >> matrix*vector, numpy.linalg.inv(matrix), and linalg.eig(matrix)
> >> operations. Now I am interested in allowing arbitrary precision. I
> >> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
> >> to ea
nalg.inv(matrix), and linalg.eig(matrix)
>> operations. Now I am interested in allowing arbitrary precision. I
>> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
>> to easily implement any with my current program. I suspect I have to
>> change some comman
nterested in allowing arbitrary precision. I
> have tried gmpy, bigfloat, mpmath, and decimal but I have been unable
> to easily implement any with my current program. I suspect I have to
> change some commands but I am unsure what.
>
> My question is which of the arbitrary precision impleme
s if this
> >> will not involve on the method. For example, I did this when was
> >> calculating geometric means of computer benchmarks.
>
> > Currently I have values between 1 and 1E-300 (not infinitely small). I
> > don't see how scaling by powers of 10
> Are you saying python cares whether I express a number as 0.001 or
> scaled by 10^5 to read 100? If this is the case, I'm still stuck. I
> need the full range of eigenvalues from 1 to 1E-300, so the entire
> range could be scaled by 1E300 but I would still need better precisio
puter benchmarks.
Currently I have values between 1 and 1E-300 (not infinitely small). I
don't see how scaling by powers of 10 will increase precision.
In such way you will be storing the number of zeros as n.
Are you saying python cares whether I express a number as 0.001 or
scaled by 10^5 t