On 20.10.2011 14:48, Manu wrote:
On 20 October 2011 15:13, Don <nos...@nospam.com> wrote:
On 20.10.2011 13:12, Manu wrote:
On 20 October 2011 11:02, Don <nos...@nospam.com> wrote:
On 20.10.2011 09:47, Manu wrote:
Many architectures do not support real, and therefore it should never be
used implicitly by the language.
Precision problems aside, I would personally insist that implicit
conversion from any sized int always be to float, not double, for
performance reasons (the whole point of a compiled language trying to
supersede C/C++).
On almost all platforms, float and double are the same speed.
This isn't true. Consider ARM: it's hard to say this isn't a vitally
important architecture these days, and there are plenty of embedded
architectures that don't support doubles at all. I would say it's a
really bad idea to invent a systems programming language that excludes
many architectures by its design... Atmel AVR is another important
architecture.
It doesn't exclude anything. What we're talking about as desirable
behaviour is exactly what C does. If you care about performance on
ARM, you'll type sqrt(2.0f).
Personally, I'd rather completely eliminate implicit conversions
between integers and floating point types. But that's just me.
I maintain that implicit conversion of integers of any length should
always target the same float precision, and that there should be a
compiler flag to specify the desired precision throughout the app
(possibly defaulting to double).
I can't believe that you'd ever write an app without that being an
upfront decision. Casually flipping it with a compiler flag??
Remember that it affects very few things (as discussed below).
If you choose 'float' you may lose some precision obviously, but you
expected that when you chose the options, and did the cast...
Explicit casts are not affected in any way.
Note that what we're discussing here is parameter passing of single
values; if it's part of an aggregate (array or struct), the issue
doesn't arise.
Are we? I thought we were discussing implicit conversion of ints to
floats? This may be parameter passing, but also assignment I expect?
There's no problem with assignment, it's never ambiguous.
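A minimal illustration of why assignment can't be ambiguous (the target
type is written on the left, so there is only ever one candidate):

float f = 2;   // fine: 2 becomes 2.0f
double d = 2;  // fine: 2 becomes 2.0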
There seems to be some confusion about what the issue is.
To reiterate:
void foo(float x) {}
void foo(double x) {}
void bar(float x) {}
void baz(double x) {}
void main()
{
    bar(2); // OK -- 2 becomes 2.0f
    baz(2); // OK -- 2 becomes 2.0
    foo(2); // fails -- ambiguous.
}
My proposal was effectively: if it's ambiguous, choose double.
That's all.
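Applied to the example above, the previously ambiguous call would then
resolve instead of failing:

foo(2); // tie-breaker applies: calls foo(double)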
Yeah sorry, I think you're right, the discussion got slightly lost in
the noise here...
Just to clarify, where you advocate eliminating implicit casting, do you
now refer to ALL implicit casting? Or just implicit casting to an
ambiguous target?
Let me reposition myself to suit what it would seem is actually being
discussed... :)
void sqrt(float x) {}
void sqrt(double x) {}
void sqrt(real x) {}

void main()
{
    sqrt(2);
}
Surely this produces some error: "Ambiguous call to overloaded
function", and then there is no implicit cast rule to talk about... end
of discussion?
But you speak of "eliminating implicit casting" as if this may also
refer to:
void NotOverloaded(float x) {}

void main()
{
    NotOverloaded(2); // not ambiguous... so what's the problem?
}
Actually there is a problem there, I think. If someone later on adds
NotOverloaded(double x), that call will suddenly stop compiling.
That isn't just a theoretical problem.
Currently log(2) will compile, but only because in std.math there is
log(real), but not yet log(double) or log(float).
So once we add those overloads, people's code will break.
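A rough sketch of that breakage, using local stand-ins rather than the
real std.math declarations:

real log(real x) { return x; }         // today: the only overload, so log(2) compiles (2 converts to real)
// double log(double x) { return x; }  // once overloads like these get added...
// float  log(float x)  { return x; }  // ...the call below becomes ambiguous and stops compiling

void main()
{
    auto y = log(2);
}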
I'd like to get to the situation where those overloads can be added
without breaking people's code. The draconian possibility (#1) is to
disallow them in all cases: integer types never match floating point
function parameters.
The second possibility is to introduce a tie-breaker rule: when there's
an ambiguity, choose double.
And a third possibility is to only apply that tie-breaker rule to literals.
And the fourth possibility is to keep the language as it is now, and
allow code to break when overloads get added.
The one I really, really don't want, is the situation we have now:
#5: whenever an overload gets added, introduce a hack for that function...
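To make #1-#4 concrete, here's a rough sketch of how each rule would
treat the same two calls (f is a hypothetical overload set, nothing to
do with Phobos):

void f(float x)  {}
void f(double x) {}

void main()
{
    int i = 2;
    f(2);   // #1: error -- integers never match float/double parameters
            // #2: calls f(double) via the tie-breaker
            // #3: calls f(double), because 2 is a literal
            // #4: error, ambiguous -- today's behaviour once both overloads exist
    f(i);   // #1: error
            // #2: calls f(double)
            // #3: error -- the tie-breaker only applies to literals
            // #4: error, ambiguous
}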
Or perhaps also to plain assignment:
float x = 10;
I can imagine why most would feel this is undesirable...
I'm not clear now where you intend to draw the lines.
If you're advocating banning ALL implicit casting between float/int
outright, I actually feel really good about that idea. I can just
imagine the number of hours saved while optimising code where
junior/ignorant programmers cast back and forth with no real regard
for, or awareness of, what they're doing.
Or are we only drawing the distinction for literals?
I don't mind a compile error if I incorrectly state the literal a
function expects. That sort of thing becomes second nature in no time.
I don't know. I think I probably prefer possibility #1 on my list, but
it'd break existing code. I also like #2 and #3, and I think they'd be
more generally popular.
I don't like #4, but anything is better than #5.