On 25/07/14 08:44, Jonathan M Davis wrote:

> So, in the case where opCmp was defined but not opEquals, instead of
> using the normal, built-in opEquals (which should already be equivalent
> to lhs.opCmp(rhs) == 0), we're going to make the compiler generate
> opEquals as lhs.opCmp(rhs) == 0? That's a silent performance hit for no
> good reason IMHO.

So are default-initialized variables, virtual-by-default methods and other similar cases. D aims for correctness and safety first, with the option to get better performance by, possibly, writing some extra code.

> It doesn't even improve correctness except in the
> cases where the programmer should have been defining opEquals in the
> first place, because lhs.opCmp(rhs) == 0 wasn't equivalent to the
> compiler-generated opEquals. So, we'd be making good code worse just to
> try and fix an existing bug in bad code in order to do what? Not break
> the already broken code?

I don't understand this. How are we making good code worse? If the code was working previously, opCmp == 0 should have given the same result as the default generated opEquals. In that case it's perfectly safe to define opEquals as opCmp == 0.
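For example (a hypothetical struct, just to sketch the point): when opCmp compares every field, defining opEquals as opCmp == 0 gives exactly the same answer as the compiler-generated member-wise opEquals, so nothing breaks.

```d
struct Version
{
    int major, minor;

    // Orders by major, then minor; compares every field.
    int opCmp(const Version rhs) const
    {
        if (major != rhs.major)
            return major < rhs.major ? -1 : 1;
        if (minor != rhs.minor)
            return minor < rhs.minor ? -1 : 1;
        return 0;
    }

    // Same result as the default member-wise opEquals,
    // since opCmp looks at all fields.
    bool opEquals(const Version rhs) const
    {
        return opCmp(rhs) == 0;
    }
}

unittest
{
    assert(Version(1, 2) == Version(1, 2));
    assert(Version(1, 2) != Version(1, 3));
    assert(Version(1, 2) < Version(2, 0));
}
```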

> I can understand wanting to avoid breaking code when changing from using
> opCmp to using opEquals with AAs, but it's only an issue if the code was
> already broken by defining opCmp in a way that didn't match opEquals, so
> if I find it odd that any part of this is controversial, it's the fact
> that anyone thinks that we should try and avoid breaking code where
> opEquals and opCmp weren't equivalent.

By defining opEquals as opCmp == 0 we're:

1. Not breaking code that wasn't broken previously
2. Fixing broken code, that is, code where opEquals and opCmp == 0 gave different results
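To make case 2 concrete, here's a sketch (the type and field names are mine, purely for illustration) where opCmp and the default member-wise opEquals disagree:

```d
import std.uni : icmp;

struct FileName
{
    string name;

    // Case-insensitive ordering: "Readme" and "readme" compare equal.
    int opCmp(const FileName rhs) const
    {
        return icmp(name, rhs.name);
    }
}

unittest
{
    auto a = FileName("Readme");
    auto b = FileName("readme");

    assert(a.opCmp(b) == 0); // equal according to opCmp
    assert(a != b);          // but the default member-wise opEquals disagrees
}
```

Here a.opCmp(b) == 0 while a != b, which is exactly the inconsistency that bites with AAs. Generating opEquals as opCmp == 0 would make the two agree.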

--
/Jacob Carlborg
