On Thu, 20 Oct 2011 09:11:27 -0400, Don <nos...@nospam.com> wrote:
[snip]
I'd like to get to the situation where those overloads can be added
without breaking peoples code. The draconian possibility is to disallow
them in all cases: integer types never match floating point function
parameters.
The second possibility is to introduce a tie-breaker rule: when there's
an ambiguity, choose double.
And a third possibility is to only apply that tie-breaker rule to literals.
And the fourth possibility is to keep the language as it is now, and
allow code to break when overloads get added.

The one I really, really don't want is the situation we have now:
#5: whenever an overload gets added, introduce a hack for that function...

I agree that #5 and #4 are not acceptable longer-term solutions. I do CUDA/GPU 
programming, so I live in a world of floats and ints. Changing the rules does 
worry me, but mainly because most people don't use floats on a daily basis, 
which introduces bias into the discussion.
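
For reference, the breakage being discussed looks roughly like this (the sqrt 
declarations below are just a sketch, not the actual std.math ones):

double sqrt(double x) { return x; } // the overload that existed all along
float  sqrt(float x)  { return x; } // newly added overload

void main()
{
    sqrt(2.0); // still fine: exact match for the double overload
    sqrt(2);   // used to compile via int -> double; now ambiguous, because
               // int converts equally well to float and to double
}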

Thinking it over, here are my suggestions, though I'm not sure if 2a or 2b 
would be best:

1) Integer literals and expressions should use range propagation to pick the 
narrowest lossless conversion. If no lossless conversion exists, an error is 
raised (see the sketch after this list). Choosing double as a default is always 
the wrong choice for GPUs and most embedded systems.
2a) Lossy variable conversions are disallowed.
2b) Lossy variable conversions undergo bounds checking when asserts are turned 
on.
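
To make 1) concrete (the float lines show the proposed rule, not what the 
compiler does today):

float  f = 1;          // ok: 1 is exactly representable in a float
float  g = 16_777_217; // error: 2^24 + 1 has no exact float representation,
                       //        so the narrowest lossless type is double
double d = 16_777_217; // ok: every int value fits exactly in a double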

The idea behind 2b) would be:

int   i = 1;
float f = i; // assert(true): 1 is exactly representable in float
      i = int.max;
      f = i; // assert(false): int.max rounds to 2_147_483_648.0f
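
A rough library-level approximation of the check 2b) would insert (lossyChecked 
is a made-up name here; a real implementation would of course live in the 
compiler):

F lossyChecked(F, T)(T value)
{
    F result = cast(F) value;
    // Compare in real, which holds both an int and a float exactly,
    // so a lossy conversion can't hide behind implicit promotion.
    assert(cast(real) result == cast(real) value, "lossy conversion");
    return result;
}

void main()
{
    int i = 1;
    float f = lossyChecked!float(i); // passes: 1 converts exactly
    i = int.max;
    f = lossyChecked!float(i);       // fails: int.max rounds to 2_147_483_648.0f
}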
