On Mar 16, 2020, at 02:54, Stephen J. Turnbull 
<turnbull.stephen...@u.tsukuba.ac.jp> wrote:
> 
> Andrew Barnert writes:
> 
>> Well, there are an infinite number of ever larger infinite
>> ordinals, ω or ω_0 being the first one, and likewise an infinite
>> number of infinite cardinals, aleph_0 being the first one, and
>> people rarely use the ∞ symbol for any of them.
> 
> s/people/mathematicians/ and I'd agree with you.  But I did write
> "people".

But people rarely talk about infinite ordinals or cardinals. Anyone who’s 
talking about, e.g., whether there’s a set larger than the naturals but smaller 
than the reals isn’t calling either one of those sets’ cardinalities ∞.

>> There are a few different obvious ways you could build an
>> IEEE-float-style complex out of IEEE floats, but the one that C99
>> and C++ both use is probably the simplest: just model them as the
>> Cartesian product of IEEE float with itself, applying the usual
>> arithmetic rules over IEEE float.
> 
> FVO "simple" = "simplistic". :-)

What does FVO mean?

At any rate, there’s nothing wrong with simplistic. Our usual addition on 
natural numbers is simplistic; there are all kinds of less simplistic 
sort-of-addition-like things you could define on top of successor instead, but 
none of them are nearly as useful, natural, or intuitive.
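
To make the analogy concrete, here’s ordinary addition rebuilt from nothing but 
successor, as a toy sketch at the REPL (nobody would implement it this way, but 
it’s the “simplistic” construction I mean):

>>> def add(m, n):
...     # Peano-style: add(m, 0) = m; add(m, n+1) = successor of add(m, n)
...     return m if n == 0 else add(m, n - 1) + 1
...
>>> add(2, 3)
5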

>> And that means these odd things make sense:
> 
> FVO of "sense" = "derived from an arbitrary model (as long as we're
> consistent)".  (This time I'm not trolling.)

But it’s not an arbitrary model (except in the sense that every number system 
like Z is an arbitrary model), it’s a model that falls out of the natural 
composition of “build C from R” and “build R-bar from R and then build IEEE 
from R-bar”, in either order. And it’s one that preserves all the properties 
you’d hope for, most importantly continuation. The fact that 2+3=5, etc., still 
holds when you use complex addition instead of real addition is the reason we 
call complex addition “addition” in the first place. And C-bar built in this way continues 
R-bar in the same way C continues R. And the C-style approximation of C-bar 
with IEEE float approximately continues IEEE float in the same way (albeit 
sadly not always with the same bounds of approximation). Which is why we can 
call C/Python/etc. complex addition “addition”: complex.__add__(complex(2.0), 
complex(3.0)) == 2.0+3.0 (in this case exactly so, but in general you need 
isclose and it’s not always easy to calculate the cutoff…).
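
Spelling that out at the REPL, including at the infinities (which is the 
R-bar/C-bar part of the claim):

>>> complex(2.0) + complex(3.0) == 2.0 + 3.0
True
>>> complex(float("inf")) + complex(1.0) == float("inf") + 1.0
True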

>>>>>> complex("inf")
>>> (inf+0j)
>>> 
>>> Oof. ;-)
>> 
>> What else would you expect?
> 
> I don't "expect" anything when there are several competing
> interpretations.  I would *like* it to be 'complex("inf")' FVO inf =
> projective complex plane infinity.

If you’re suggesting that our reals should be projectively extended and then 
our complexes should also be projectively extended, that would make sense. But 
then our reals wouldn’t be modeled by IEEE floats.

If you’re suggesting that our complexes should be projectively extended even 
though our reals are affinely extended, then you’re giving up the continuation 
property; complex is no longer an extension of real.
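
Concretely, the affinely extended complex we actually have keeps IEEE float’s 
two real infinities distinct, which is exactly the continuation property at 
stake; a projectively extended complex would have to identify them:

>>> complex(float("inf")) == complex("inf")
True
>>> complex(float("-inf")) == complex(float("inf"))
False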

>  My reasoning is the available
> *mathematical* values we model should make sense as expressing the set
> of possible limits in polar coordinates as well as in Cartesian
> coordinates (and as the limits of arbitrary lines).  But these are in
> some sense distinct, with a couple of exceptions.

But the infinite values we’re trying to model aren’t distinct between the two 
coordinate systems.

The finite approximations, on the other hand, are very different, but that’s 
already true for the finite numbers themselves. The density of covered values 
over any part of the complex plane depends on whether you approximate with 
Cartesian IEEE floats or polar IEEE floats, so of course the same is true for 
the infinite parts of the plane as well. So what?
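
For example, the Cartesian encoding can only pin an infinity to a few exact 
directions (the multiples of pi/4), which you can see by asking cmath for the 
polar form; a polar float encoding would slice the directions at infinity much 
more finely:

>>> import cmath
>>> cmath.polar(complex(float("inf"), float("inf")))
(inf, 0.7853981633974483)
>>> cmath.polar(complex(float("inf"), 1.0))
(inf, 0.0)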

>  So I would prefer
> my calculations to tell me "you're out of bounds" rather than give me
> a result that looks precise but actually doesn't tell me much about
> the limiting process.  E.g., the mathematical limit in R^2 of (ax, bx)
> for all a, b > 0 is (inf, inf) -- thank you very much, I guess.

Consider electrical circuits (which is presumably what the people designing 
this system in the first place were considering, since IEEE math is designed 
for EE). All observable outputs have finite real values at all times. But 
intermediate values in the circuit are affinely-extended complex values. (You 
can argue about whether those values “really exist” or are “just a way of 
talking about real values that change over time” or whatever; I suspect most 
electrical engineers don’t care, they just want to be able to use them.) You 
can design a circuit where, if some value is inf with phase pi/2, the output 
will be a real 5V; if it’s inf with phase pi, the output will be 0V; and if 
it’s in between, the output is in between. If you insist on calculating that in the 
projectively extended complexes instead, you’ll just get NaN at all inputs; if 
you calculate it with the affinely extended complexes, even using the 
approximation made from the Cartesian product of IEEE float with itself, you 
get a well-bounded approximation of the curve from 5V to 0V that you were 
looking for. If that’s not meaningful, then none of our approximate number 
systems are meaningful.
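
And the phase of an infinite intermediate value really is preserved in the 
Cartesian-pair IEEE model, which is all that example needs; a single projective 
point at infinity couldn’t tell these apart:

>>> import cmath
>>> cmath.phase(complex(0.0, float("inf")))    # inf at phase pi/2
1.5707963267948966
>>> cmath.phase(complex(float("-inf"), 0.0))   # inf at phase pi
3.141592653589793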

I’m not saying there are no advantages to the projectively extended complex 
numbers. But there are also advantages to the projectively extended reals over 
the affinely extended reals, and yet, we chose the advantages of the latter 
over those of the former. (Well, we just went with what a bunch of electrical 
engineers designed in the 1970s, but…) For complex, it’s mostly the same 
tradeoff again, and it’s not an independent one: making the two choices 
inconsistently adds complexity while throwing away half the benefits. That’s 
why almost every language has made the same decision as Python. It’s not 
arbitrary; it’s the most sensible choice.

>> Again, Python, C, and lots of other languages agree here, and it
>> makes sense once you think about it. We have a number that’s either
>> indeterminate or multivalued or unknown on one axis, but it’s
>> infinite on the other axis, so whatever value(s) it may represent,
>> they all must be infinite.
> 
> Pragmatically, that is what I said I like, except I like it in maximum
> generality. ;-)

Why? If you only care about whether something is infinite, rather than which 
infinity, you can use isinf. But suppose you insist that nobody else can ever 
care about which infinity, even when it’s useful to them and we could have 
calculated it for them, just because you don’t have a use for it; and that we 
should therefore either give up IEEE float semantics, or adopt complex 
semantics that don’t match our float semantics in fundamental ways and are 
derived in a more complicated way, with unmotivated special cases, instead of 
falling naturally out of the usual constructions of C in mathematics. That 
isn’t really “maximum generality” you’re asking for, but the opposite.
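
And for the quoted case of infinite-on-one-axis, indeterminate-on-the-other, 
the existing float-pair model already gives the useful answer:

>>> import cmath
>>> z = complex(float("inf"), float("nan"))
>>> cmath.isinf(z), abs(z)
(True, inf)

Whatever that value “really” is, it’s infinite, and both isinf and abs already 
know that, without anyone having to decide on everyone else’s behalf which 
infinity it was.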