On Thu, May 16, 2019 at 10:09:38AM -0700, Walter Bright via Digitalmars-d-announce wrote:
> On 5/15/2019 6:05 PM, H. S. Teoh wrote:
> > Gah, so apparently .hashOf is a gigantic overload set of *21*
> > different overloads, so this is not really "truly" reduced.  =-O
>
> I've often thought that Phobos excessively overused overloading.
This is in druntime.

> And you're quite right, it's a chore figuring out which one is the
> culprit.

More and more, I'm becoming convinced that this sort of usage of
function overloading is an anti-pattern.  It should instead be written
as something like this:

	size_t hashOf(T)(T* arg)
		// N.B.: no sig constraints, because we expect to be
		// able to hash anything.
	{
		static if (is(T == struct))
			return hashOfStruct(arg);
		else static if (is(T == U[], U))
			return hashOfArray(arg);
		...
		else
			static assert(0, "Hash of " ~ T.stringof ~ " not supported");
	}

The sig constraints, or lack thereof, ought to reflect the *logical*
set of acceptable types, not necessarily the actual set supported by
the implementation.  I.e., hashOf logically *should* support all
types, but maybe the current implementation doesn't (yet) support a
particular corner case; so it should still accept the type, but emit
an error explaining the implementation deficiency in a static assert,
rather than just passing the buck back to the compiler, which then
spews forth a text wall of incomprehensible gibberish about how all 21
overloads failed to match.

> What I do is change the name(s) to .hashOfx so it won't be picked,
> then one can figure out which one is selected through a process of
> elimination.  Or insert:
>
>	pragma(msg, __LINE__);
>
> statements in each one.

Good idea.  But looks like Nicholas has already done the heavy lifting
for us. :-D


T

-- 
Always remember that you are unique. Just like everybody else. -- despair.com
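For concreteness, the static-if dispatch pattern sketched above can be
fleshed out into a complete compilable toy.  Everything here is
illustrative, not the actual druntime code: hashOfStruct, hashOfArray,
and the integer hash are placeholder implementations, and the template
is named myHashOf to avoid colliding with druntime's own hashOf:

	import std.stdio;

	size_t myHashOf(T)(auto ref T arg)
		// N.B.: no sig constraints, because we logically
		// accept any type; unsupported ones get a clear error
		// from the static assert below instead of an
		// overload-resolution wall of text.
	{
		static if (is(T == struct))
			return hashOfStruct(arg);
		else static if (is(T == U[], U))
			return hashOfArray(arg);
		else static if (is(T : ulong))
			return cast(size_t) arg * 2654435761u; // toy hash
		else
			static assert(0, "Hash of " ~ T.stringof ~ " not supported");
	}

	size_t hashOfStruct(T)(ref T s)
	{
		// Toy implementation: fold the hash of each field.
		size_t h;
		foreach (field; s.tupleof)
			h = h * 31 + myHashOf(field);
		return h;
	}

	size_t hashOfArray(E)(E[] a)
	{
		size_t h;
		foreach (e; a)
			h = h * 31 + myHashOf(e);
		return h;
	}

	struct Point { int x, y; }

	void main()
	{
		writeln(myHashOf(42));          // integer branch
		writeln(myHashOf([1, 2, 3]));   // array branch
		writeln(myHashOf(Point(1, 2))); // struct branch
		// myHashOf(&main);  // one-line static assert error,
		//                   // naming the unsupported type
	}

The uncommented lines compile and run with any of the three branches; a
pointer (or any other unhandled type) hits the single static assert, so
the diagnostic is one line naming the offending type rather than 21
failed-candidate dumps.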