On Thu, 30 Dec 2010 11:00:20 -0500, Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:

On 12/30/10 9:00 AM, Steven Schveighoffer wrote:

I'm assuming you meant this (once the bug is fixed):

template translateOperators()
{
    auto opBinary(string op)(List other) if (op == "~") { return doCat(other); }
}

and adding this mixin to the interface?

In fact if the type doesn't define doCat the operator shouldn't be generated.

   auto opBinary(string op, T)(T other)
       if (op == "~" && is(typeof(doCat(T.init))))
   { return doCat(other); }

The other thing that I didn't mention and that I think it would save you some grief is that this is meant to be a once-for-all library solution, not code that needs to be written by the user. In fact I'm thinking the mixin should translate from the new scheme to the old one. So for people who want to use operator overloading with inheritance we can say: just import std.typecons and mixin(translateOperators()) in your class definition. I think this is entirely reasonable.

I'd have to see how it works. I also thought the new operator overloading scheme was reasonable -- until I tried to use it.
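
Just so we're talking about the same thing, here's my guess at the shape of such a mixin (based on the doCat example above, not your actual code):

mixin template translateOperators()
{
    // forwards "~" to a named doCat method; a real version would also
    // constrain on doCat existing and accepting T, as in your constraint above
    auto opBinary(string op, T)(T other) if (op == "~")
    {
        return this.doCat(other);
    }
}

interface List
{
    List doCat(List other);
    mixin translateOperators!();   // once per interface...
}

class LinkList : List
{
    List doCat(List other) { /* ... */ return this; }
    mixin translateOperators!();   // ...and again per class
}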

Note this is even more bloated because you generate one function per pair of types used in concatenation, vs. one function per class defined.

I find this solution extremely convoluted, not to mention bloated, and
how do the docs work? It's like we're going back to C macros! This
operator overloading scheme is way more trouble than the original.

How do you mean bloated? For documentation you specify in the documentation of the type what operators it supports, or for each named method you specify that operator xxx forwards to it.

I mean bloated because you are generating template functions that just forward to other functions. Those functions are compiled in and take up space, even if they are inlined out.

Let's also realize that the mixin is going to be required *per interface* and *per class*, meaning even more bloat.

I agree that if there is a "standard" way of forwarding with a library mixin, the documentation will be reasonable, since readers should be able to get used to looking for the 'alternative' operators.

The thing I find ironic is that with the original operator overloading scheme, the issue was that types defining multiple operator overloads in a similar fashion were forced to repeat boilerplate code. The solution to that was a mixin similar to what you are suggesting. Except now, even mundane and common operator overloads require verbose template definitions (possibly with mixins), and it's the uncommon case that benefits.

Not at all. The common case is shorter and simpler. I wrote the chapter on operator overloading twice, once for the old scheme and once for the new one. It uses commonly-encountered designs for its code samples. The chapter and its code samples got considerably shorter in the second version. You can't blow your one example into an epic disaster.

The case for overloading a single operator is shorter and simpler with the old method:

auto opAdd(Foo other)

vs.

auto opBinary(string op)(Foo other) if (op == "+")

Where the new scheme wins in brevity (in written code, at least; it is certainly not simpler to understand) is in cases where:

1. inheritance is not used
2. you can consolidate many overloads into one function (see the sketch below)
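
For example, the consolidation case looks something like this (a made-up Vector struct, just for illustration):

struct Vector
{
    double x, y;

    // one template body covers +, -, *, and / at once
    Vector opBinary(string op)(Vector rhs)
        if (op == "+" || op == "-" || op == "*" || op == "/")
    {
        return Vector(mixin("x " ~ op ~ " rhs.x"),
                      mixin("y " ~ op ~ " rhs.y"));
    }
}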

So the question is, how many times does one define operator overloading on a multitude of operators *with the same code* vs. how many times does one define a few operators or define the operators with different code?

In my experience, I have not yet defined a type that uses a multitude of operators with the same code. In fact, I have only defined the "~=" and "~" operators for the most part.
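
For the record, here is roughly what those look like under the new scheme (a simplified container, not actual dcollections code):

class Container
{
    // "~" builds a new container out of this and other
    Container opBinary(string op)(Container other) if (op == "~")
    {
        auto result = new Container;
        // ... copy the elements of this and other into result ...
        return result;
    }

    // "~=" appends other's elements in place
    Container opOpAssign(string op)(Container other) if (op == "~")
    {
        // ... append other's elements to this ...
        return this;
    }
}

And note that since both are templates, neither one is virtual, which is exactly where the covariance problem comes in.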

So I'd say, while my example is not proof that this is a disaster, I think it shows the change in operator overloading cannot yet be declared a success. One good example does not prove anything, just as one bad example does not prove anything.

So really, we haven't made any progress (mixins are still required, except now they will be more common). I think this is one area where D has gotten decidedly worse. I mean, just look at the difference above between defining the opCat operator in D1 and your mixin solution!

I very strongly believe the new operator overloading is a vast improvement over the existing one and over most of today's languages.

I haven't had that experience. This is just me talking. Maybe others believe it is good.

I agree that the flexibility is good; I really think the language should have that kind of flexibility, especially given the whole opAddAssign mess that was in D1. It also makes wrapper types easier to write.
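
For example, a wrapper can forward every binary operator its wrapped type supports in one shot, instead of hand-writing opAdd, opSub, opMul, and so on as in D1. Something like this (a toy sketch, not code I actually ship):

struct Wrapper(T)
{
    T value;

    // one template covers every binary operator T itself supports
    Wrapper opBinary(string op)(Wrapper rhs)
        if (is(typeof(mixin("T.init " ~ op ~ " T.init")) == T))
    {
        return Wrapper(mixin("value " ~ op ~ " rhs.value"));
    }
}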

The problem with flexibility is that it comes with complexity. Most programmers looking to understand how to overload operators in D are going to be daunted by having to use both templates and template constraints, and possibly mixins.

There once was a discussion about how to improve operators on the phobos mailing list (I don't have the history, because I think it was on erdani.com). Essentially, the two things were:

1) let's make it possible to easily specify template constraints for typed parameters (such as string) like this:

auto opBinary("+")(Foo other)

which would look far less complex and verbose than the current incarnation, and would be simple to define when all you need is one or two operators.

2) make template instantiations that provably evaluate to a single instance virtual, or have a way to designate that they should be virtual. For example, the above operator syntax can only have one instantiation.

We shouldn't discount all of its advantages and focus exclusively on covariance, which is a rather obscure facility.

I respectfully disagree. Covariance is very important when using class hierarchies, because to have something that returns itself degrade into a basic interface is very cumbersome. I'd say dcollections would be quite clunky if it weren't for covariance (not just for operator overloads). It feels along the same lines as inout -- where inout allows you to continue using your same type with the same constancy, covariance allows you to continue to use the most derived type that you have.
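
To illustrate what I mean (a simplified sketch with made-up names and a base class to keep it short, not dcollections' actual interfaces):

class List
{
    // add returns the list itself so calls can be chained
    List add(int v) { /* ... */ return this; }
}

class LinkList : List
{
    // covariant return: callers holding a LinkList keep a LinkList
    override LinkList add(int v) { /* ... */ return this; }
}

void example(LinkList l)
{
    // the chain stays typed as LinkList; if add always returned List,
    // the derived type would be lost after the first call
    LinkList same = l.add(1).add(2);
}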

Using operator overloading in conjunction with class inheritance is rare.

I don't use operator overloads with class inheritance, but I do use operator overloads with interfaces. I think rare is not the right term; it's somewhat infrequent, but chances are that if you use a lot of interfaces, you will encounter it at least once. It certainly doesn't dominate the API being defined.

Rare as it is, we need to allow it and make it convenient. I believe this is eminently possible along the lines discussed in this thread.

Convenience is good. I hope we can do it at a lower exe footprint cost than what you have proposed.

As a compromise, can we work on a way to forward covariance, or to have
the compiler reevaluate the template in more derived types?

I understand. I've had this lure a few times, too. The concern there is that this is a potentially surprising change.

Actually, the functionality almost exists in template this parameters. At least, the reevaluation part is working. However, you still must incur a performance penalty to cast to the derived type, plus the template nature of it adds unnecessary bloat.
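
Roughly, what I mean is something like this (my sketch; the doCat name is from the example earlier in the thread):

class Base
{
    // 'this T' is deduced as the static type of the receiver at the call
    // site, so the template is re-evaluated for each derived type...
    T opBinary(string op, this T)(Base other) if (op == "~")
    {
        // ...but doCat still returns Base, so getting back to T costs a
        // dynamic cast, and every receiver type is a separate instantiation
        return cast(T) doCat(other);
    }

    Base doCat(Base other) { /* ... */ return this; }
}

class Derived : Base { }

void example(Derived a, Derived b)
{
    Derived c = a ~ b;   // T is deduced as Derived, no cast at the call site
}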

-Steve
