On Sunday, 3 September 2017 at 02:39:19 UTC, Moritz Maxeiner wrote:
On Saturday, 2 September 2017 at 23:12:35 UTC, EntangledQuanta wrote:
On Saturday, 2 September 2017 at 21:19:31 UTC, Moritz Maxeiner wrote:
On Saturday, 2 September 2017 at 00:00:43 UTC, EntangledQuanta wrote:
On Friday, 1 September 2017 at 23:25:04 UTC, Jesse Phillips wrote:
I've loved being able to inherit and override generic functions in C#. Unfortunately, C# doesn't use templates, and I hit so many other issues where generics just suck.

I don't think it is appropriate to dismiss the need for the compiler to generate a virtual function for every instantiated T; after all, the compiler can't know you have a finite, known set of T unless you tell it.

But let's assume we've told the compiler that it is compiling all the source code and it does not need to compile for future linking.

First the compiler will need to make sure all virtual functions can be generated for the derived classes. In this case the compiler must note the template function and validate all derived classes include it. That was easy.

Next up, each instantiation of the function needs a new v-table entry in all derived classes. The current compiler implementation compiles each module independently of the others, so this feature could either be specified to work only within the same module, or new semantics could be written up for how the compiler modifies already-compiled modules and the modules that reference them (the object sizes would change due to the v-table modifications).

With those three simple changes to the language I think that this feature will work for every T.

Specifying that there will be no further linkage is the same as making T finite. T must be finite.

C# uses generics/IR/CLR, so it can do things at run time that are effectively compile time for D.

By simply extending the grammar slightly in an intuitive way, we can get the explicit finite case, which is easy:

foo(T in [A,B,C])()

and possibly for your case

foo(T in <module>)() would work

or

foo(T in <program>)()

the `in` keyword makes sense here, and it is neither otherwise used in this position nor ambiguous, I believe.

While I agree that `in` does make sense for the semantics involved, it is already used for failable key lookup (returning a pointer to the value, or null if not present) into an associative array [1] and for input contracts. It wouldn't be ambiguous AFAICT, but having a keyword mean three different things depending on context would make the language even more complex (to read).
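(For reference, a minimal sketch of those two existing uses of `in`, with illustrative names only; the newer `do` is used here in place of the older `body` after the contract:)

---
int[string] table;   // an associative array, illustrative name

void lookup(string key)
in { assert(key.length > 0); }    // `in` introducing an input contract
do
{
    if (auto p = key in table)    // `in` as the AA lookup operator: pointer to value, or null
    {
        // *p is the value stored under `key`
    }
}
---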

Yes, but they are independent, are they not? Maybe not.

foo(T in Typelist)()

`in`, as used here, is not an input contract and is completely independent. I suppose for arrays it could be ambiguous.

The contexts being independent of each other doesn't change that we would still be overloading the same keyword with three vastly different meanings. Two is already bad enough imho (and if I had a good idea of what to replace the `in` for AAs with, I'd propose removing that meaning).

Why? Don't you realize that the context matters and is what separates the meanings? In truly unambiguous contexts, it shouldn't matter. It may require one to decipher the context, which takes time, but there is nothing inherently wrong with it, and we are limited in how many symbols we can use (unfortunately we are generally stuck with the QWERTY keyboard design, else we could use symbols out the yin-yang and make things much clearer; even mathematics, which is a near-perfect language, "overloads" symbols' meanings).

You have to do this sort of thing when you limit the number of keywords you use. Again, ultimately it doesn't matter. A symbol is just a symbol. For me, as long as the context is clear, I don't see what kind of harm it can cause. You say it is bad, but you don't give the reasons why it is bad. If you like to think of `in` as having only one definition, then the question is why? You are limiting yourself. Natural languages abound with such multi-definitions, usually in an ambiguous way, and that can cause a lot of problems; but for computer languages it can't (else we couldn't actually compile the programs). Context-sensitive grammars are provably more expressive than context-free ones.

https://en.wikipedia.org/wiki/Context-sensitive_grammar

Again, I'm not necessarily arguing for them, just saying that one shouldn't avoid them just to avoid them.




For me, and this is just me, I do not find it ambiguous. I don't find different meanings ambiguous unless the context overlaps. Perceived ambiguity is not ambiguity, it's just ignorance... which can be overcome through learning. Hell, D has many cases where there are perceived ambiguities... as do most things.

It's not about ambiguity for me, it's about readability. The more significantly different meanings you overload some keyword - or symbol, for that matter - with, the harder it becomes to read.

I don't think that is true. Everything is hard to read. It's about experience. The more you experience something, the clearer it becomes. Only with true ambiguity is something impossible. I realize that one can design a language to be hard to parse due to apparent ambiguities, but I am talking about cases where they can be resolved immediately (in at most a few milliseconds).

You are making general statements, and it is not that I disagree, but it depends on context (everything does). In this specific case, I think it is extremely clear what `in` means, so it is effectively like using a different token. Again, everyone is different, though, and has different experiences that help them parse things more naturally. I'm sure there are things that you might find easy that I would find hard. But that shouldn't stop me from learning about them. It makes me "smarter", to simplify the discussion.



But in any case, I couldn't care less about the exact syntax. It's just a suggestion that makes the most logical sense with regard to the standard usage of `in`. If it is truly unambiguous, then it can be used.

Well, yes, as I wrote, I think it is unambiguous (and can thus be used), I just think it shouldn't be used.

Yes, but the only reason you have given for why it shouldn't be used is that you believe one shouldn't overload keywords because it makes the meaning harder to parse. My rebuttal, as I have said, is that it is not harder, so your argument is not valid. All you could do is claim that it is hard, and we would have to find out who is more right.

I have a logical argument against your absolute restriction, though... in that it causes one to have to use more symbols. I would imagine you are against stuff like using "in1", "in2", etc. because they visibly are too close to each other. If you want "maximum" readability, you are going to have to mathematically define that in a precise way and then come up with a grammar that expresses it. I think you'll find that the grammar will depend on each individual person. At best you could then take an average which satisfies most people up to some threshold... in which case, at some point in time later, that average will shift and your grammar will no longer be valid (it will no longer satisfy the average).

Again, it's not that I completely disagree with you on a practical level. Lines have to be drawn, but it's about where precisely to draw that line. Drawing it in the wrong place leads to certain solutions that are generally problematic. That's how we know they are wrong: we draw a line, later realize it caused a bunch of problems, and then we say "oh, that was the wrong way to do it". Only by drawing a bunch of wrong lines can we determine which ones are the best and use that info to predict better locations.


Another alternative is

foo(T of Typelist)

where, AFAIK, `of` is not used in D or even in most other programming languages. Another could be

foo(T -> Typelist)

or even

foo(T from Typelist)

I would much rather see it as a generalization of the existing template specialization syntax [1], of which this is t.b.h. just a superset (the current syntax allows limiting to exactly one type; you propose limiting to one of n):

---
foo(T: char)      // Existing syntax: Limit T to the single type `char`
foo(T: (A, B, C)) // New syntax: Limit T to one of A, B, or C
---

Yes, if this worked, I'd be fine with it. Again, I couldn't care less. `:` == `in` for me as long as `:` has the correct meaning of "can be one of the following" or whatever.

But AFAIK, `:` does not just mean "can be one of the following" (which is "in" or "element of" in the mathematical sense); it can also mean "is a derived type of".
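(A small illustration of both meanings of the existing specialization syntax from [1]; the class names are made up for the example:)

---
class Base {}
class Child : Base {}

void foo(T : char)() {}   // limits T to the single type char, as in the example above
void foo(T : Base)() {}   // `:` here also accepts derived types such as Child

void main()
{
    foo!char();    // resolves to the `T : char` specialization
    foo!Child();   // resolves to the `T : Base` specialization
}
---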

All I'm after is the capability to do something elegantly, and when it doesn't exist, I "crave" that it does. I don't really care how it is done (but remember, it must be done elegantly). I am not "confused" (or whatever you want to call it) by symbolic notation, as long as it's clearly defined so I can learn the definition and it is not ambiguous.

There are all kinds of symbols that could be used; again, we are limited by QWERTY (for speed; no one wants to have to use alt-codes in programming; there is a way around this, but it would scare most people).

e.g.,

T ∈ X is another expression (more mathematical; I assume that ∈ will be displayed correctly, it is Alt+2208) that could work, but ∈ is not ASCII and so can't be used (not because it can't be, but because of people's lack of will to progress out of the dark ages).

Strictly speaking, this is exactly what template specialization is for; it's just that the current one only supports a single type instead of a set of types. Looking at the grammar rules, upgrading it like this is a fairly small change, so the cost there should be minimal.


If that is the case then go for it ;) It is not a concern of mine. You tell me the syntax and I will use it. (I'd have no choice, of course, but if it's short and sweet then I won't have any problem).

The main reason I suggest syntax is because none exists, and I assume, maybe wrongly, that people will get what I am saying more easily that way than from me writing up some example library solution and demonstrating that.

if I say something like

class/struct { foo(T ∈ X)(); }

defines a virtual template function for all T in X, which is equivalent to

class/struct
{
   foo(X1)();
   ...
   foo(Xn)();
}
I assume that most people will understand, more or less, the notation I used and be able to interpret what I am trying to get at. It is a mix of pseudo-programming and mathematics, but it is not complex. ∈ might be a bit confusing, but looking it up and learning about it will educate those who want to be educated and expand everyone's ability to communicate better. I could, of course, be more precise, but I try to be precise only when it suits me (which may be a fault, but, again, I only have so many hours in the day to do stuff).
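(As a rough sketch of what such a lowering could look like in current D, here is one way to emulate it with the then-new `static foreach` over a finite type list; the names `X`, `Tag`, and `fooImpl` are purely illustrative assumptions, not a proposal:)

---
import std.meta : AliasSeq;

alias X = AliasSeq!(int, double, string);   // hypothetical finite set of types

struct Tag(T) {}   // compile-time tag so each T maps to its own virtual overload

class Base
{
    // Template front end; member function templates are implicitly non-virtual,
    // so this resolves at compile time to one of the virtual overloads below.
    void foo(T)() { fooImpl(Tag!T.init); }

    // One virtual overload per type in X -- roughly what `foo(T ∈ X)()` would lower to.
    static foreach (T; X)
    {
        void fooImpl(Tag!T) { /* base behaviour for T */ }
    }
}

class Derived : Base
{
    static foreach (T; X)
    {
        override void fooImpl(Tag!T) { /* derived behaviour for T */ }
    }
}

void main()
{
    Base b = new Derived;
    b.foo!int();      // dispatches virtually through fooImpl(Tag!int)
    // b.foo!float(); // would not compile: float is not in X
}
---

Instantiating foo with a type outside X simply fails to find a matching fooImpl overload, which approximates the `T in X` restriction.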



or whatever. Doesn't really matter. They all mean the same to me once the definition has been written in stone. Could use `foo(T eifjasldj Typelist)` for all I care.

That's okay, but it does matter to me.


That's fine. I am willing to compromise. Lucky for you, symbols/tokens and context are not a big deal to me. Of course, I do like short and sweet, so I am biased too, but I have much more leeway it seems.


The important thing for me is that such a simple syntax exists, rather than the "complex syntaxes" that have already been given (which are ultimately syntaxes too, as everything is at the end of the day).

Quoting a certain person (you know who you are) from DConf 2017: "Write a DIP". I'm quite happy to discuss this idea, but at the end of the day, as it's not an insignificant change to the language, someone will have to do the work and write a proposal.


My main issue with going through the trouble is that, basically, I have more important things to do. If I were going to try to get D to make all the changes I actually wanted, I'd be better off writing my own language the way I envision it and want it... but I don't have 10+ years to invest in such a beast, and to do it right would require my full attention, which I'm not willing to give, because again, I have better things to do (things I really enjoy).

So, all I can do is hopefully stoke the fire enough to get someone else interested in the feature and have them do the work. If they don't, then they don't, that is fine. But I feel like I've done something to try to right a wrong.



W.r.t. the idea in general: I think something like that could be valuable to have in the language, but since this essentially amounts to syntactic sugar (AFAICT), I'm not (yet) convinced that, with `static foreach` being included, it's worth the cost.


Everything is syntactic sugar. So it isn't about if, but how much. We are all coding in 0's and 1's whether we realize it or not. The point of syntax (or syntactic sugar) is to reduce the amount of 0's and 1's that we have to *effectively* code by grouping common patterns into symbolic equivalents (by definition).

AFAIK the difference between syntax sugar and enabling syntax in PLs usually comes down to the former allowing you to express concepts already representable by other constructs in the PL; when encountered, the syntax sugar could be lowered by the compiler to the more verbose syntax and still be both valid in the PL and recognizable as the concept (while this is vague, a prominent example would be lambdas in Java 8).

Yes, but everything is "lowered"; it's just how you define it. It is all lowering to 0's and 1's. Syntactic sugar is colloquially used like you have defined it, but in the limit (the most general sense), it's just stuff. Why? Because what is sugar to one person is salt to another (this is hyperbole, of course, but you should be able to get my point).

E.g., you could define syntactic sugar to be an enhancement that can be directly rewritten into syntax currently expressible in the language.

That is fine. But then what if that expressible syntax was also syntactic sugar? You end up with something like L(L(L(L(x)))) where L is a "lowering" and x is something that is not "lowered". But if you actually were able to trace the evolution of the compiler, you'd surely notice that x is just L(...L(y)...) for some y.

A programming language is simply something that takes a set of bits and transforms it into another set of bits. No more and no less. Everything else is "syntactic sugar". The definition may be so general as to be useless, but it is what a programming language is (mathematically, at least).

Think about it a bit. How did programmers program before modern compilers came along? They used punch cards or levers, which were basically setting "bits" or various "functions" (behaviors) that the machine would carry out. Certain functions and combinations of functions were deemed more useful and were combined into "meta-functions" and given special bits to represent them. This process has been carried out ad nauseam, and we are where we are today because of this process (fundamentally).

But the point is, at each step, someone can claim that the current "simplifying" of complex functions into a "meta-function" is just "syntactic sugar". This process, though, is actually what creates the "power" in things. The same thing happens at the hardware level... the same thing happens with atoms and molecules (except we are not in control of the rules of how those things combine).



No one can judge the usefulness of syntax until it has been created because what determines how useful something is is its use. But you can't use something if it doesn't exist. I think many fail to get that.

Why do you think that? Fewer than ten people have participated in this thread so far.

I am not talking about just this thread; I am talking about all threads and all things in which humans attempt to determine the use of something. E.g., the use of computers (they used to be completely useless for most people because they failed to see the use in them (it wasn't useful to them)). The use of medicine... the use of a newborn baby, the use of life. The use of a turtle. People judge use in terms of what it does for them on a "personal" level, and my point is that this inability to see the use of something in an absolute sense (how useful is it to the whole, be it the whole of the D programming community, the whole of humanity, the whole of life, or whatever) is a severe shortcoming of almost all humans. It didn't creep up too much in this thread, but I have definitely seen it in other threads. Most first say "Well, hell, that won't help me, that is useless". They forget that it may be useless to them at that moment, but it might become useful to them later and might be useful to other people.

Why something is useless to someone, though, almost entirely depends on their use of it. You can't know how useful something is until you use it... and this is why so many people judge the use of something the way they do (they can't help it; it's sort of a law of the universe).

Let me explain, as it might not be clear: many people many years ago used to think X was useless. Today, those same people cannot live without X. Replace X with just about anything (computers, music, OOP, etc.). But if you had asked those people back then, they would have told you those things were useless. But through whatever means (the way life is), things change, and things that were previously useless become useful. They didn't know that at first because they didn't use those things to find out if they were useful.

The same logic SHOULD be applied to everything. We don't know how useful something is until we use it *enough* to determine if it is useful. But this is not the logic most people use, including many people in the D community. They first judge, and almost exclusively (it depends on the person), how it relates to their own personal self. This is fundamentally wrong IMO, and while I don't have mathematical proof, I do have a lot of experience that tells me so (history being a good friend).



The initial questions should be: Is there a gap in the language? (Yes in this case.) Can the gap be filled? (This is a theoretical/mathematical question that has to be answered. Most people jump the gun here and make assumptions.)

Why do you assume that? I've not seen anyone here claiming that template parameter specialization to one of n types (which is the idea I replied to) couldn't be done in theory, only that it can't be done right now (the only claim that it can't be done that I noticed was w.r.t. (unspecialized) templates and virtual functions, which is correct due to D supporting separate compilation; specialized templates, however, should work in theory).

Let me quote the first two responses:

"It can't work this way. You can try std.variant."

and

"It is not possible to have a function be both virtual and templated. A function template generates a new function definition every time that it's a called with a new set of template arguments. So, the actual functions are not known up front, and that fundamentally does not work with virtual functions, where the functions need to be known up front, and you get a different function by a look-up for occurring in the virtual function call table for the class. Templates and virtual functions simply don't mix. You're going to have to come up with a solution that does not try and mix templates and virtual functions."

Now, I realize I might not have been clear about things, and maybe there is confusion/ambiguity in what I meant, how they interpreted it, or how I interpreted their response... but there is definitely no sense of a "Yes, we can make this work in some way..." type of mentality.

e.g., "Templates and virtual functions simply don't mix."

That is an absolute statement. It isn't even qualified with "in D".
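(For concreteness, a minimal sketch of the behaviour those responses describe in current D: a member function template compiles fine, but it is implicitly non-virtual, so the call resolves against the static type rather than being dispatched to the derived class.)

---
import std.stdio : writeln;

class A
{
    void foo(T)() { writeln("A.foo!", T.stringof); }
}

class B : A
{
    void foo(T)() { writeln("B.foo!", T.stringof); }
}

void main()
{
    A a = new B;
    a.foo!int();   // prints "A.foo!int": no virtual dispatch for templates
}
---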

Does the gap need to be filled? Yes in this case, because all gaps ultimately need to be filled, but this then leads to the practical issues:

Actually, I disagree here. It only *needs* filling if enough users of the language actually care about it not being there. Otherwise, it's a *nice to have* (like generics and Go, or memory safety and C :p ).

Yes, on some level you are right... but again, who's to judge? The current users or the future users? You have to take into account the future users if you care about the future of D, because those will be its users, and so the current users actually carry only a certain percentage of the weight. Also, who will be more informed about the capabilities and useful features of D? The current users or the future users? Surely when you first started using D, you were ignorant of many of the pros and cons of D. Your future self (in regard to that time period when you first started using D) knew a lot more about it; i.e., you know more now than you did, and you will know more in the future than you do now. The great thing about knowledge is that it grows with time when watered. You stuck around with D, learned it each "day", and became more knowledgeable about it. At the time, there were people making decisions about the future of D's features, and now you get to experience them and determine their usefulness PRECISELY because of those people in the past filling in the gaps.

EVERYTHING that D currently has, it didn't have in the past. Hence, someone had to create it (DIP or no DIP)... thank god they did, or D would just be a pimple on Walter's brain. But D can't progress any further unless the same principles are applied. Sure, it is more bulky (complex), and sure, not everything has to be implemented in the compiler to make progress... but the only way we can truly know what we should do is to first do the things we think are correct (and not do the things we know are wrong).

So, when people say "this can't be done" and I know damn well it can, I will throw a little tantrum... maybe they will give me a cookie, who knows? Sure, I could be wrong... but I could also be right (just as much as they could be wrong or right). This is why we talk about things: to combine our experiences and ideas to figure out how well something will work. The main problem I see in the D community is that very little cooperation happens in those regards unless it's initiated by the core team (that isn't a bad thing in some sense, but it isn't a good thing in another sense).

I guess some people just haven't learned the old proverb "Where there's a will, there's a way".

[1] https://dlang.org/spec/template.html#parameters_specialization

As I mentioned, I'm unclear on whether `:` behaves exactly that way or not, but `:` seems to do more than be inclusive. If its current meaning can still work with virtual templated functions, then I think it would be even better. But ultimately, all this would have to be fleshed out properly before any real work could be done.




