On 21.10.2011 05:24, Robert Jacques wrote:
On Thu, 20 Oct 2011 09:11:27 -0400, Don <nos...@nospam.com> wrote:
[snip]
I'd like to get to the situation where those overloads can be added
without breaking people's code. The draconian possibility is to disallow
them in all cases: integer types never match floating point function
parameters.
The second possibility is to introduce a tie-breaker rule: when there's
an ambiguity, choose double.
And a third possibility is to only apply that tie-breaker rule to
literals.
And the fourth possibility is to keep the language as it is now, and
allow code to break when overloads get added.

The one I really, really don't want, is the situation we have now:
#5: whenever an overload gets added, introduce a hack for that
function...
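
To make the breakage concrete, here is a minimal sketch (with stand-in
bodies, not the real std.math code) of what happens once float and real
overloads exist alongside the double one:

float  sqrt(float x)  { return x; }   // stand-in bodies, illustration only
double sqrt(double x) { return x; }
real   sqrt(real x)   { return x; }

void main()
{
    auto y = sqrt(2);  // error: 2 converts equally well to float, double
                       // and real, so the call is ambiguous and code that
                       // used to compile breaks
}

Under rule #2 or #3, the literal 2 would instead quietly pick the double
overload.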

I agree that #5 and #4 are not acceptable longer-term solutions. I do
CUDA/GPU programming, so I live in a world of floats and ints. So
changing the rules does worry me, but mainly because most people don't
use floats on a daily basis, which introduces bias into the discussion.

Yeah, that's a valuable perspective.
sqrt(2) is "I don't care what the precision is".
What I get from you and Manu is:
if you're working in a float world, you want float to be the tie-breaker.
Otherwise, you want double (or possibly real!) to be the tie-breaker.

And therefore, the right tie-breaker depends on which world you're working in.

Thinking it over, here are my suggestions, though I'm not sure if 2a or
2b would be best:

1) Integer literals and expressions should use range propagation to pick
the thinnest lossless conversion. If no lossless conversion exists, then
an error is raised (see the sketch after this list for how the existing
integer range propagation behaves). Choosing double as a default is
always the wrong choice for GPUs and most embedded systems.
2a) Lossy variable conversions are disallowed.
2b) Lossy variable conversions undergo bounds checking when asserts are
turned on.
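
For reference, here is roughly how D's existing value range propagation
already behaves for integers -- the analogue of what 1) asks for (my own
example, not from the spec):

void narrow(int i)
{
    ubyte u = i & 0xFF;   // OK today: the result provably fits in 0 .. 255
    byte  b = i & 0x7F;   // OK today: provably fits in 0 .. 127
    // ubyte v = i;       // error today: a full int needs an explicit cast
}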

The spec says: "Integer values cannot be implicitly converted to another type that cannot represent the integer bit pattern after integral promotion." Now although that was intended to only apply to integers, it reads as if it should apply to floating point as well.

The idea behind 2b) would be:

int i = 1;
float f = i; // assert(true): 1 is exactly representable as a float
i = int.max;
f = i; // assert(false): int.max (2_147_483_647) rounds to 2_147_483_648.0f

That would be catastrophically slow.
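
To spell out why: under 2b essentially every int-to-float conversion
would have to be lowered to something like the hypothetical helper below
(my sketch of the implied check, not anything the compiler emits today):

float checkedFloat(int i)
{
    float f = i;
    // Round-trip through long, which holds every int and every nearby
    // integral float exactly, to detect whether precision was lost.
    assert(cast(long) f == i, "int -> float conversion lost precision");
    return f;
}

That's an extra convert, compare and branch on every single conversion in
numeric code.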

I wonder how painful disallowing lossy conversions would be.
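
For a rough feel of the churn 2a would cause, ordinary mixed int/float
code like this made-up example would stop compiling and need explicit
casts everywhere:

void scale(int count, float weight)
{
    float total = count * weight;   // int -> float: would need cast(float) count
    float avg   = total / count;    // likewise for the divisor
}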
