In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|>
|> [ Interval arithmetic ]
|>
|> > |> For people just getting into it, it can be shocking to realize just how
|> > |> wide the interval can become after some computations.
|> >
|> > Yes. Even when you can prove (m
"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
[Tim Roberts]
> |> Actually, this is a very well studied part of computer science called
> |> "interval arithmetic". As you say, you do every computation twice, once to
> |> compute the minimum, once to compute the maximum. When you're done, you
> |> ca
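[Editor's note: the scheme Tim Roberts describes is easy to sketch. The toy `Interval` class below is a hypothetical illustration, not any code under discussion in the thread, and it deliberately omits the directed rounding (round lo down, hi up) that a real implementation needs.]

```python
class Interval:
    """Toy interval: a value known only to lie in [lo, hi]."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    def __add__(self, other):
        # min and max of a sum are the sums of the mins and maxes
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # subtracting flips which bound of `other` matters
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # sign changes mean any corner product can be the extreme
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

    def __repr__(self):
        return "[%g, %g]" % (self.lo, self.hi)


x = Interval(0.9, 1.1)      # 1.0 +/- 0.1
y = x
for _ in range(5):          # repeated multiplication widens the result fast
    y = y * x
print(y, y.width())
```

Each operation computes both the minimum and the maximum of the result, which is the "every computation twice" above (multiplication actually needs all four corner products).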
Tim Peters wrote:
> ... Alas, most people wouldn't read that either <0.5 wink>.
Oh the loss, you missed the chance for a <0.47684987 wink>.
--Scott David Daniels
[EMAIL PROTECTED]
--
http://mail.python.org/mailman/listinfo/python-list
In article <[EMAIL PROTECTED]>,
"Rhamphoryncus" <[EMAIL PROTECTED]> writes:
|>
|> I've been experimenting with a fixed-point interval type in python. I
|> expect many algorithms would require you to explicitly
|> round/collapse/whatever-term the interval as they go along, essentially
|> making i
Nick Maclaren wrote:
> The problem with it is that it is an unrealistically pessimal model,
> and there are huge classes of algorithm that it can't handle at all;
> anything involving iterative convergence for a start. It has been
> around for yonks (I first dabbled with it 30+ years ago), and it
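[Editor's note: part of the pessimism Nick describes is the classic dependency problem: naive interval operations treat their operands as independent, so correlated errors never cancel and widths can only grow under iteration. A minimal sketch with plain tuples, assuming no particular package.]

```python
def sub(a, b):
    """Interval subtraction: [a_lo - b_hi, a_hi - b_lo]."""
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 2.0)
# x - x is exactly 0, but interval subtraction cannot see that both
# operands are the same quantity, so the width doubles instead:
print(sub(x, x))        # (-1.0, 1.0)

# Under iteration the width can only grow, which is why iterative
# convergence defeats the naive scheme:
y = x
for _ in range(3):
    y = sub(y, x)       # width grows by width(x) every step
print(y)                # (-5.0, -1.0)
```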
In article <[EMAIL PROTECTED]>,
Tim Roberts <[EMAIL PROTECTED]> writes:
|> "Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote:
|>
|> >> What I don't know is how much precision this approximation loses when
|> >> used in real applications, and I have never found anyone else who has
|> >> much of a clue, either.
"Dennis Lee Bieber" <[EMAIL PROTECTED]>wrote:
> On Sun, 14 Jan 2007 07:18:11 +0200, "Hendrik van Rooyen"
> <[EMAIL PROTECTED]> declaimed the following in comp.lang.python:
>
> >
> > I recall an SF character known as "Slipstick Libby",
> > who was supposed to be a Genius - but I forget
> > the s
"Hendrik van Rooyen" <[EMAIL PROTECTED]> wrote:
>"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
>
>> What I don't know is how much precision this approximation loses when
>> used in real applications, and I have never found anyone else who has
>> much of a clue, either.
>>
>I would suspect that this is one of those questions which are simple
>to ask, but horribly difficult to answer
In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|> >
|> I would suspect that this is one of those questions which are simple
|> to ask, but horribly difficult to answer - I mean - if the hardware has
|> thrown it away, how do you study it - you need somehow two
|
In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|> "Tim Peters" <[EMAIL PROTECTED]> wrote:
|>
|> > What you will still see stated is variations on Kahan's telegraphic
|> > "binary is better than any other radix for error analysis (but not very
|> > much)", list
"Tim Peters" <[EMAIL PROTECTED]> wrote:
> [Nick Maclaren]
> >> ...
> >> Yes, but that wasn't their point. It was that in (say) iterative
> >> algorithms, the error builds up by a factor of the base at every
> >> step. If it wasn't for the fact that errors build up, almost all
> >> programs could ignore numerical analysis and still get reliable
"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
> The "cheap" means "cheap in hardware" - it needs very little logic,
> which is why it was used on the old, discrete-logic, machines.
>
> I have been told by hardware people that implementing IEEE 754 rounding
> and denormalised numbers needs a horrific
"Dennis Lee Bieber" <[EMAIL PROTECTED]> wrote:
> {My 8th grade teacher was a bit worried at seeing me with a slipstick;
> and my High School Trig/Geometry teacher only required 3 significant
> digits for answers -- even though half the class had calculators by
> then}
LOL - I haven't seen the w
[Nick Maclaren]
>> ...
>> Yes, but that wasn't their point. It was that in (say) iterative
>> algorithms, the error builds up by a factor of the base at every
>> step. If it wasn't for the fact that errors build up, almost all
>> programs could ignore numerical analysis and still get reliable
>> a
In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|>
|> *grin* - I was around at that time, and some of the inappropriate habits
|> almost forced by the lack of processing power still linger in my mind,
|> like - "Don't use division if you can possibly avoid it, - i
"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
>
> In article <[EMAIL PROTECTED]>,
> "Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
> |>
> |> I would have thought that this sort of thing was a natural consequence
> |> of rounding errors - if I round (or worse truncate) a binary, I can be off
> |> by
In article <[EMAIL PROTECTED]>,
"Hendrik van Rooyen" <[EMAIL PROTECTED]> writes:
|>
|> I would have thought that this sort of thing was a natural consequence
|> of rounding errors - if I round (or worse truncate) a binary, I can be off
|> by at most one, with an expectation of a half of a least significant bit
"Nick Maclaren" <[EMAIL PROTECTED]> wrote:
> Yes, but that wasn't their point. It was that in (say) iterative
> algorithms, the error builds up by a factor of the base at every step.
> If it wasn't for the fact that errors build up, almost all programs
> could ignore numerical analysis and still get reliable
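[Editor's note: whatever one thinks of the factor-of-the-base model, the build-up itself is easy to show. Repeatedly adding the double nearest 0.1 commits one rounding error per step, while `math.fsum` computes the exact sum of the rounded inputs and rounds only once at the end.]

```python
import math

naive = 0.0
for _ in range(10_000):
    naive += 0.1              # one rounding error per addition

print(naive == 1000.0)        # False: the per-step errors accumulate
print(math.fsum([0.1] * 10_000))   # exact sum of the inputs, rounded once
```

This does not by itself prove the per-step factor claim; it only makes the accumulation visible.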
In article <[EMAIL PROTECTED]>,
Tim Peters <[EMAIL PROTECTED]> writes:
|>
|> Sure. Possibly even most. Short of writing a long & gentle tutorial,
|> can that be improved? Alas, most people wouldn't read that either <0.5
|> wink>.
Yes. Improved wording would be only slightly longer, and it
[Tim Peters]
...
>> Huh. I don't read it that way. If it said "numbers can be ..." I
>> might, but reading that way seems to require effort to overlook the
>> "decimal" in "decimal numbers can be ...".
[Nick Maclaren]
> I wouldn't expect YOU to read it that way,
Of course I meant "putting my
In article <[EMAIL PROTECTED]>,
Tim Peters <[EMAIL PROTECTED]> writes:
|>
|> Huh. I don't read it that way. If it said "numbers can be ..." I
|> might, but reading that way seems to require effort to overlook the
|> "decimal" in "decimal numbers can be ...".
I wouldn't expect YOU to read it that way,
[Tim Peters]
...
>|> Well, just about any technical statement can be misleading if not
>|> qualified to such an extent that the only people who can still
>|> understand it knew it to begin with <0.8 wink>. The most dubious
>|> statement here to my eyes is the intro's "exactness carries over
>|> in
In article <[EMAIL PROTECTED]>,
Robert Kern <[EMAIL PROTECTED]> writes:
|> >
|> >> No, don't. That is about another matter entirely,
|> >
|> > It isn't.
|>
|> Actually it really is. That thread is about the difference between
|> str(some_float) and repr(some_float) and why str(some_tuple) uses the repr() of its elements.
Bjoern Schliessmann wrote:
> Nick Maclaren wrote:
>
>> No, don't. That is about another matter entirely,
>
> It isn't.
Actually it really is. That thread is about the difference between
str(some_float) and repr(some_float) and why str(some_tuple) uses the repr() of
its elements.
--
Robert Kern
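[Editor's note: the distinction Robert Kern describes dates from the Python 2.x of this thread, where str(0.1) gave '0.1' but repr(0.1) gave '0.10000000000000001'. Since Python 3.1 both produce the shortest round-tripping string, but the stored binary value is still inexact, as a wider format reveals.]

```python
x = 0.1
print(str(x), repr(x))      # both '0.1' on modern Pythons
print(format(x, ".20g"))    # 0.10000000000000000555 -- the stored value
# Containers have always used repr() of their elements:
print(str((0.1, 0.2)))      # (0.1, 0.2)
```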
On 1/9/07, Tim Peters <[EMAIL PROTECTED]> wrote:
> Well, just about any technical statement can be misleading if not qualified
> to such an extent that the only people who can still understand it knew it
> to begin with <0.8 wink>.
+1 QOTW
--
Cheers,
Simon B
[EMAIL PROTECTED]
"Carsten Haese" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
| On Tue, 2007-01-09 at 11:38 +, Nick Maclaren wrote:
| > As Dan Bishop says, probably not. The introduction to the decimal
| > module makes exaggerated claims of accuracy, amounting to propaganda.
| > It is numerica
In article <[EMAIL PROTECTED]>,
Tim Peters <[EMAIL PROTECTED]> writes:
|>
|> Well, just about any technical statement can be misleading if not qualified
|> to such an extent that the only people who can still understand it knew it
|> to begin with <0.8 wink>. The most dubious statement here to
Nick Maclaren wrote:
> No, don't. That is about another matter entirely,
It isn't.
Regards,
Björn
--
BOFH excuse #366:
ATM cell has no roaming feature turned on, notebooks can't connect
[Rory Campbell-Lange]
>>> Is using the decimal module the best way around this? (I'm
>>> expecting the first sum to match the second). It seems
>>> anachronistic that decimal takes strings as input, though.
[Nick Maclaren]
>> As Dan Bishop says, probably not. The introduction to the decimal
>> module makes exaggerated claims of accuracy, amounting to propaganda.
On Tue, 2007-01-09 at 11:38 +, Nick Maclaren wrote:
> |> Rory Campbell-Lange wrote:
> |>
> |> > Is using the decimal module the best way around this? (I'm
> |> > expecting the first sum to match the second). It seems
> |> > anachronistic that decimal takes strings as input, though.
>
> As Dan
|> Rory Campbell-Lange wrote:
|>
|> > Is using the decimal module the best way around this? (I'm
|> > expecting the first sum to match the second). It seems
|> > anachronistic that decimal takes strings as input, though.
As Dan Bishop says, probably not. The introduction to the decimal
module makes exaggerated claims of accuracy, amounting to propaganda.
On Jan 8, 3:30 pm, Rory Campbell-Lange <[EMAIL PROTECTED]> wrote:
> >>> (1.0/10.0) + (2.0/10.0) + (3.0/10.0)
> 0.60000000000000009
> >>> 6.0/10.0
> 0.59999999999999998
>
> Is using the decimal module the best way around this? (I'm expecting the first
> sum to match the second).
Probably not. Dec
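[Editor's note: a sketch of what the decimal module does and does not buy here, and why the string constructor is deliberate rather than anachronistic. Note that Decimal(float) was only accepted from Python 2.7/3.2 onward; in the Python of this thread it raised TypeError.]

```python
from decimal import Decimal

# The binary sums disagree because 0.1, 0.2, 0.3 and 0.6 are each
# rounded to the nearest double before any arithmetic happens:
print(1.0/10.0 + 2.0/10.0 + 3.0/10.0 == 6.0/10.0)         # False

# Decimal('0.1') stores exactly the digits you typed:
print(Decimal('0.1') + Decimal('0.2') + Decimal('0.3'))   # 0.6

# Passing a float instead hands Decimal the already-rounded binary
# value, which is why strings are the natural constructor argument:
print(Decimal(0.1))
```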
At Monday 8/1/2007 19:20, Bjoern Schliessmann wrote:
Rory Campbell-Lange wrote:
> Is using the decimal module the best way around this? (I'm
> expecting the first sum to match the second). It seems
> anachronistic that decimal takes strings as input, though.
[...]
Also check the recent thread "b
Rory Campbell-Lange wrote:
> Is using the decimal module the best way around this? (I'm
> expecting the first sum to match the second). It seems
> anachronistic that decimal takes strings as input, though.
What's your problem with the result, or what's your goal? Such
precision errors with floating point
34 matches