On Sun, Jun 18, 2017 at 00:02 Brent Royal-Gordon <br...@architechies.com>
wrote:

> On Jun 17, 2017, at 8:43 PM, Xiaodi Wu via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> How do you express the idea that, when you add values of disparate types T
> and U, the result should be of the type with greater precision? You need to
> be able to spell this somehow.
>
>
> To be slightly less coy:
>

:)

> You need to be able to say that one value type is a subtype of another.
> Then you can say Int8 is a subtype of Int16, and the compiler knows that it
> can convert any Int8 to an Int16 but not vice versa. This adds lots of
> complexity and makes parts of the compiler that are currently far too slow
> even slower, but it's not difficult to imagine, just difficult to
> practically implement given the current state of the Swift compiler.
>

And then there are follow-on issues. As but a single example, consider
integer literals, which are of course commonly used with these operations.
Take the following statements:

var x = 42 as UInt32
let y = x + 2

What is the inferred type of 2? Currently, that would be UInt32. What is
the inferred type of y? That would also be UInt32. So it's tempting to
say: let's keep this rule that integer literals are inferred to be of the
same type as the other operand. But now:

let z = x + (-2)

What is the inferred type of -2? If it's UInt32, then this expression is a
compile-time error and we've ruled out integer promotion for a common use
case. If, on the other hand, it's the default IntegerLiteralType (Int), what is the type
of z? It would have to be Int.
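
To spell that out as code (a rough sketch of today's behavior; the exact
compiler diagnostic is beside the point):

var x = 42 as UInt32
let y = x + 2        // 2 inferred as UInt32, so y is UInt32
// let z = x + (-2)  // rejected today: -2 is not representable as UInt32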

Now suppose x were instead of type UInt64. What would be the type of z, if
-2 is inferred to be of type Int? The answer would have to be
DoubleWidth<Int64>, since no built-in fixed-width type can represent every
UInt64 and every Int. That is clearly overkill for subtracting 2. So let's
say instead that the literal is inferred to be of the smallest type that
can represent the value (i.e., Int8 for -2). If so, then what is the result
of this computation?

let a = x / ~0
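
Under today's rules this is well defined (a sketch; the comment shows the
evaluated result):

var x = 42 as UInt32
let a = x / ~0       // divisor is ~0 as UInt32, i.e. UInt32.max, so a == 0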

Currently, in Swift 3, ~0 is equal to UInt32.max, because the literal 0 is
inferred to be of type UInt32. But if we have a rule that the literal
should be inferred to be the smallest type that can represent the value,
then 0 is no longer inferred as UInt32, ~0 is no longer UInt32.max, and the
result of this computation _changes_. That won't do. So let's say instead
that the literal is inferred to be of the same type as the other operand,
unless it is not representable as such, in which case it is then of the
smallest type that can represent the value.
Firstly, and critically, this is not very easy to reason about. Secondly,
it still does not solve another problem with the smallest-type rule.
Consider this example:

let b = x / -64

If I import a library that exposes Int7 (the standard library itself has an
internal Int63 type and, I think, other Int{2**n - 1} types as well), then
the type of b would change!
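
Just to show the arithmetic behind that claim (Int7 itself is hypothetical;
this sketch only checks the 7-bit range):

// A signed 7-bit two's-complement type spans -(2^6)...(2^6 - 1), i.e. -64...63.
let int7Min = -(1 << 6)      // -64
let int7Max = (1 << 6) - 1   //  63
print(int7Min <= -64 && -64 <= int7Max)   // true: -64 fits in 7 bits
// So a smallest-representable-type rule would infer Int7 rather than Int8 for
// the literal -64 once Int7 is visible, changing the type of b.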

Of all the alternatives here, it would seem that disallowing integer
promotion with literals is after all the most straightforward answer.
However, it is not a satisfying one.

My point here is not to drive at a consensus answer for
this particular issue, but to point out that what we are discussing is
essentially a major change to the type system. As such, and because Swift
already has so many rich features, _numerous_ such questions will arise
about how any such change interacts with other parts of the type system.
For some of these questions there is no "obvious" solution, and there is
no guarantee that a single fully satisfying solution exists even after full
consideration. That makes this a _very_ difficult topic, very difficult
indeed.

To evaluate whether any such undertaking is a good idea, we would need to
discuss a fully thought-out design and consider very carefully how it
changes all the other moving parts of the type system; it is not enough to
say merely that the feature is a good one.
