On May 23, 2016, at 8:18 PM, Xiaodi Wu <xiaodi...@gmail.com> wrote:
> 
> Int is the same size as Int64 on a 64-bit machine but the same size as Int32 
> on a 32-bit machine. By contrast, modern 32-bit architectures have FPUs that 
> handle 64-bit and even 80-bit floating point types. Therefore, it does not 
> make sense for Float to be Float32 on a 32-bit machine, as would be the case 
> in one interpretation of what it means to mirror naming "conventions." 
> However, if you interpret the convention to mean that Float should be the 
> largest floating point type supported by the FPU, Float should actually be a 
> typealias for Float80 even on some 32-bit machines. Under neither 
> interpretation would Float simply be a typealias for what's now called 
> Double.
IIRC, `Int` is typealiased to the target's biggest native/efficient/practical 
integer type, regardless of its bit-depth (CPUs where those criteria pick 
different types may well exist, but I can't think of any). I don't see why it 
shouldn't be the same way with floats… IMHO, `Float` should be typealiased to 
the biggest native/efficient/practical floating-point type, which I think is 
pretty universally Float64. I'm under the impression that Intel's 80-bit format 
is intended as an interim representation that gets automatically converted 
to/from 64-bit, and that loading and storing a full 80 bits is a non-trivial 
matter. I'm not even sure the standard `math.h` functions are defined for 
Float80 arguments (C does define `long double` variants like `sinl`, and 
`long double` is the 80-bit extended format on many x86 targets, but not 
everywhere). If Float80 is just as native/efficient/practical as Float64, I 
wouldn't object to `Float` being typealiased to Float80 on such platforms.
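
For concreteness, here's a quick sketch of the sizes involved (spelled with 
the newer `MemoryLayout` API; the `MaxFloat` alias at the end is a made-up 
name to illustrate the convention I'm describing, not real stdlib code):

    // Int tracks the platform word size; Double is 64-bit everywhere.
    print(MemoryLayout<Int>.size)     // 8 on 64-bit targets, 4 on 32-bit ones
    print(MemoryLayout<Double>.size)  // 8 on every target

    #if arch(i386) || arch(x86_64)
    // Float80 only exists on x86. It carries 80 bits of data but is padded
    // in memory, so its stored size is bigger than 10 bytes.
    print(MemoryLayout<Float80>.size)

    // Hypothetical "biggest practical float" alias -- NOT real stdlib code.
    typealias MaxFloat = Float80
    #else
    typealias MaxFloat = Double
    #endif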

> Another issue to consider: a number like 42 is stored exactly regardless of 
> whether you're using an Int32 or an Int64. However, a number like 1.1 is not 
> stored exactly as a binary floating point type, and it's approximated 
> *differently* as a Float than as a Double. Thus, it can be essential to 
> consider what kind of floating point type you're using in scenarios even when 
> the number is small, whereas the same is not true for integer types.
Oh, I know. I'm not arguing that floating-point math isn't messy, just that 
since we can use `Int` when we don't care about width and `IntXX` when we do, 
we should also be able to use `Float` when we don't care and `FloatXX` when we 
do. If someone's worried about the exact value of "1.1", they should be 
specifying the bit-depth anyway. Otherwise, give them the most precise type 
that's consistent with the language's goals.
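
(To make the quoted point concrete: both literals below get rounded to the 
nearest representable value, and the two roundings disagree. The decimal 
expansions in the comments are the standard IEEE 754 nearest values.)

    let f: Float = 1.1    // stored as 1.10000002384185791015625
    let d: Double = 1.1   // stored as 1.10000000000000008881784197...
    print(Double(f) == d) // false -- widening f doesn't recover d's value

Anyone to whom that `false` matters is exactly the person who should be 
writing `Float32` or `Float64` explicitly anyway.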

Have we (meaning the list in general, not you & me in particular) had this 
conversation before? This feels familiar...

-Dave Sweeris
