On Tue, 14 Dec 2010 13:51:50 -0500, so <s...@so.do> wrote:
Any idea how this can be 'solved', or do we need to continue doing things like this? My naive instinct is to use declaration order to determine a match (first one to match wins), but that kind of goes against how other overloading works in D.
One big plus of the current solution is that everything you need for that specialization lies in the signature. I can't see another approach that scales better than this, if scaling for constraints is something important.
In:
void foo(R)(R r) if(isRandomAccessRange!R) {...}
void foo(R)(R r) if(isInputRange!R && !isRandomAccessRange!R) {...}
We can deduce that together they are equivalent to:
void foo(R)(R r) if(isInputRange!R) {...}
(isRandomAccessRange!R implies isInputRange!R, so the union of the two constraints is exactly isInputRange!R.)
For one or two constraints it isn't hard, but when things get ugly, determining what means what is not easy as far as I can see. Just consider what happens if your first specialization had two constraints.
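A minimal sketch of that two-constraint case, using std.range's hasLength as a hypothetical second constraint; note how the generic overload has to negate the whole conjunction, and has to be kept in sync by hand:

import std.range : isInputRange, isRandomAccessRange, hasLength;

// specialization guarded by two constraints
void foo(R)(R r) if(isRandomAccessRange!R && hasLength!R) {...}
// the fallback must negate the entire conjunction; any later change
// to the specialization's constraint has to be mirrored here
void foo(R)(R r) if(isInputRange!R && !(isRandomAccessRange!R && hasLength!R)) {...}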
On the contrary, I think it scales very poorly from a function-writer's point of view.

Let's say I wanted to add a version that implements a specialization for forward ranges: I now have to modify the constraint on the one that handles input ranges. This is the opposite of how derived classes or specialized overloads work, where I just define the specialization, and if it doesn't match, it falls back to the default.
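Here is a sketch of that maintenance burden, assuming std.range's isForwardRange (isRandomAccessRange implies isForwardRange, which implies isInputRange):

// before: two overloads
void foo(R)(R r) if(isRandomAccessRange!R) {...}
void foo(R)(R r) if(isInputRange!R && !isRandomAccessRange!R) {...}

// after adding a forward-range specialization, the input-range
// overload's constraint must also be edited, or some calls match twice
void foo(R)(R r) if(isRandomAccessRange!R) {...}
void foo(R)(R r) if(isForwardRange!R && !isRandomAccessRange!R) {...}
void foo(R)(R r) if(isInputRange!R && !isForwardRange!R) {...}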
Imagine now that I wanted to define a foo that worked only on my specific range: I'd have to go back and modify the constraints of all the other functions. And what if I don't have control over that module?
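A sketch of that conflict, with a hypothetical MyRange that satisfies isInputRange; if both overloads end up in the same overload set, a call like foo(MyRange()) matches two template declarations and fails as ambiguous, since constraints don't take part in partial ordering:

// in a module I don't control
void foo(R)(R r) if(isInputRange!R) {...}
// my specialization; without editing the constraint above,
// foo(MyRange()) matches both templates and won't compile
void foo(R)(R r) if(is(R == MyRange)) {...}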
-Steve